
Lecture Notes on Lie Algebras
and Lie Groups


Luiz Agostinho Ferreira

Instituto de Física de São Carlos - IFSC/USP
Universidade de São Paulo
Caixa Postal 369, CEP 13560-970
São Carlos-SP, Brasil

August - 2011

Contents

1 Elements of Group Theory .............................. 5
  1.1 The concept of group .............................. 5
  1.2 Subgroups ......................................... 13
  1.3 Direct Products ................................... 18
  1.4 Cosets ............................................ 19
  1.5 Representations ................................... 22

2 Lie Groups and Lie Algebras ........................... 35
  2.1 Lie groups ........................................ 35
  2.2 Lie Algebras ...................................... 37
  2.3 The Lie algebra of a Lie group .................... 40
  2.4 Basic notions on Lie algebras ..................... 43
  2.5 su(2) and sl(2): Lie algebra prototypes ........... 48
  2.6 The structure of semisimple Lie algebras .......... 54
  2.7 The algebra su(3) ................................. 63
  2.8 The Properties of roots ........................... 66
  2.9 The Weyl group .................................... 69
  2.10 Weyl Chambers and simple roots ................... 73
  2.11 Cartan matrix and Dynkin diagrams ................ 77
  2.12 Root strings ..................................... 82
  2.13 Commutation relations from Dynkin diagrams ....... 84
  2.14 Finding the cocycles ε(α, β) ..................... 92
  2.15 The classification of simple Lie algebras ........ 96

3 Representation theory of Lie algebras ................. 107
  3.1 Introduction ...................................... 107
  3.2 The notion of weights ............................. 108
  3.3 The highest weight state .......................... 112
  3.4 Weight strings and multiplicities ................. 118
  3.5 The weight δ ...................................... 121
  3.6 Casimir operators ................................. 123
      3.6.1 The Quadratic Casimir operator .............. 124
  3.7 Characters ........................................ 125
  3.8 Construction of matrix representations ............ 132
      3.8.1 The irreducible representations of SU(2) .... 135
      3.8.2 The triplet representation of SU(3) ......... 136
      3.8.3 The anti-triplet representation of SU(3) .... 139
  3.9 Tensor product of representations ................. 140

Chapter 1

Elements of Group Theory

1.1 The concept of group

The idea of groups is one that has evolved from some very intuitive concepts we have acquired in our attempts to understand Nature. One of these is the concept of mathematical structure. A set of elements can have a variety of degrees of structure. The set of the letters of the alphabet has some structure in it. They are ordered as A < B < C ... < Z. Although this order is fictitious, since it is a convention, it endows the set with a structure that is very useful. Indeed, the relation between the letters can be extended to words, such that a telephone directory can be written in an ordered way. The set of natural numbers possesses a higher mathematical structure. In addition to being naturally ordered, we can perform operations on it. We can do binary operations like adding or multiplying two elements, and also unary operations like taking the square root of an element (in this case the result is not always in the set). The existence of an operation endows the set with a mathematical structure. When this operation closes within the set, i.e. the composition of two elements is again an element of the set, the endowed structure has very nice properties. Let us consider some examples.
Example 1.1 The set of integer numbers (positive and negative) is closed under the operations of addition, subtraction and multiplication, but is not closed under division. The set of natural numbers, on the other hand, is not closed under subtraction and division, but does close under addition and multiplication.
Example 1.2 Consider the set of all human beings living and dead, and define a binary operation as follows: for any two persons take the latest common forefather. For the case of two brothers this would be their father; for two cousins, their common grandfather; for a mother and her son, the mother's father; etc. This set is closed or not under such an operation depending, of course, on how we understand everything has started.
Example 1.3 Take a rectangular box and imagine three mutually orthogonal axes, x, y and z, passing through the center of the box, each of them orthogonal to two sides of the box. Consider the set of three rotations:

x: a half turn about the x-axis
y: a half turn about the y-axis
z: a half turn about the z-axis

and let the operation on this set be the composition of rotations. So if we perform y and then x we get z; z then y we get x; and x then z we get y. However, if we perform x, then y and then z, the box gets back to its original position. Therefore the set is not closed. If we add to the set the operation (identity) I, which leaves the box as it is, then we get a closed set of rotations.
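The closure just described can be checked mechanically. Here is a small sketch (not part of the original notes): since each half turn flips the signs of two of the three coordinates, the four rotations can be encoded as sign triples, and composition becomes a componentwise product.

```python
# Each half turn of the box flips the signs of two coordinates, so we can
# represent the rotations by sign triples acting on (x, y, z); composition
# of rotations becomes a componentwise product of signs.
I = (1, 1, 1)       # identity: leaves the box as it is
x = (1, -1, -1)     # half turn about the x-axis
y = (-1, 1, -1)     # half turn about the y-axis
z = (-1, -1, 1)     # half turn about the z-axis

def compose(a, b):
    """Perform rotation b first, then rotation a."""
    return tuple(ai * bi for ai, bi in zip(a, b))

assert compose(x, y) == z                    # y then x gives z
assert compose(z, compose(y, x)) == I        # x, then y, then z restores the box
group = {I, x, y, z}                         # with I added, the set is closed
assert all(compose(a, b) in group for a in group for b in group)
```

Without I the set {x, y, z} fails closure, exactly as the example states.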
For a set to be considered a group it has to have, in addition to a binary operation and closure, some other special structures. We now start discussing them by giving the formal definition of a group.

Definition 1.1 An abstract group G is a set of elements furnished with a composition law (or product) defined for every pair of elements of G and that satisfies:

a) If g1 and g2 are elements of G, then the product g1 g2 is also an element of G (closure property).

b) The composition law is associative, that is, (g1 g2)g3 = g1(g2 g3) for every g1, g2 and g3 ∈ G.

c) There exists a unique element e in G, called identity element, such that eg = ge = g for every g ∈ G.

d) For every element g of G, there exists a unique inverse element, denoted g⁻¹, such that g⁻¹g = gg⁻¹ = e.

There are some redundancies in this definition, and the axioms c) and d) could, in fact, be replaced by the weaker ones:

c′) There exists an element e in G, called left identity, such that eg = g for every g ∈ G.

d′) For every element g of G, there exists a left inverse, denoted g⁻¹, such that g⁻¹g = e.

These weaker axioms c′) and d′), together with the associativity property, imply c) and d). The proof is as follows:

Let g2 be a left inverse of g1, i.e. g2 g1 = e, and g3 be a left inverse of g2, i.e. g3 g2 = e. Then we have, since e is a left identity, that

e = ee
g2 g1 = (g2 g1)e            since g2 g1 = e
g3(g2 g1) = g3((g2 g1)e)    multiplying both sides by g3
(g3 g2)g1 = ((g3 g2)g1)e    using associativity
e g1 = (e g1)e              since g3 g2 = e
g1 = g1 e                   using the fact that e is a left identity.

Therefore e is also a right identity. We now want to show that a left inverse is also a right inverse. Since we know that e is both a left and right identity we have:

e g2 = g2 e
(g2 g1)g2 = g2 e            since g2 is a left inverse of g1
g3((g2 g1)g2) = g3(g2 e)    multiplying by g3, where g3 g2 = e
(g3 g2)(g1 g2) = (g3 g2)e   using associativity
e(g1 g2) = ee               since g3 g2 = e
g1 g2 = e                   since e is the identity.
Therefore g2 is also a right inverse of g1. Let us show the uniqueness of the identity and the inverses.

Any right and left identity is unique, independently of whether the product is associative or not. Suppose there exist two identities e and e′ such that ge = eg = e′g = ge′ = g for any g ∈ G. Then for g = e we have ee′ = e, and for g = e′ we have ee′ = e′. Therefore e = e′ and the identity is unique.

Suppose that g has two right inverses g1 and g2 such that gg1 = gg2 = e, and suppose g3 is a left inverse of g, i.e. g3 g = e. Then g3(gg1) = g3(gg2), and using associativity we get (g3 g)g1 = (g3 g)g2, so eg1 = eg2 and then g1 = g2. Therefore the right inverse is unique. A similar argument can be used to show the uniqueness of the left inverse. Now if g3 and g1 are respectively the left and right inverses of g, we have g3 g = e = gg1, and then using associativity we get (g3 g)g1 = eg1 = g1 = g3(gg1) = g3 e = g3. So the left and right inverses are the same.
We are very used to the fact that the inverse of the product of two elements (of a group, for instance) is the product of their inverses in the reversed order, i.e., the inverse of g1 g2 is g2⁻¹g1⁻¹. However, this result is true for products (or composition laws) which are associative. It may not be true for non-associative products.

Example 1.4 The subtraction of real numbers is not an associative operation, since (x − y) − z ≠ x − (y − z) for x, y and z real numbers. This operation possesses a right unity element, namely zero, but does not possess a left unity, since x − 0 = x but 0 − x ≠ x. The left and right inverses of x are equal and are x itself, since x − x = 0. Now the inverse of (x − y) is not (y⁻¹ − x⁻¹) ≡ (y − x), since (x − y) − (y − x) = 2(x − y) ≠ 0. This is an illustration of the fact that, for a non-associative operation, the inverse of x ∘ y is not necessarily y⁻¹ ∘ x⁻¹.
The definition of abstract group given above is not the only possible one. There is an alternative definition that does not require inverse and identity. We could define a group as follows:

Definition 1.2 (alternative) Take the definition of group given above (assuming it is a non-empty set) and replace axioms c) and d) by: for any given elements g1, g2 ∈ G there exists a unique g satisfying g1 g = g2, and also a unique g′ satisfying g′g1 = g2.

This definition is equivalent to the previous one, since it implies that, given any two elements g1 and g2, there must exist unique elements eL1 and eL2 in G such that eL1 g1 = g1 and eL2 g2 = g2. But it also implies that there exists a unique g such that g1 g = g2. Therefore, using associativity, we get

(eL1 g1)g = g1 g = g2 = eL1(g1 g) = eL1 g2    (1.1)

From the uniqueness of eL2 we conclude that eL1 = eL2. Thus this alternative definition implies the existence of a unique left identity element eL. On the other hand, it also implies that for every g ∈ G there exists a unique gL⁻¹ such that gL⁻¹g = eL. Consequently axioms c) and d) follow from the alternative axioms above.
Example 1.5 The set of real numbers is a group under addition, but not under multiplication, division or subtraction. The last two operations are not associative, and the element zero has no inverse under multiplication. The natural numbers under addition are not a group, since there are no inverse elements.

Example 1.6 The set of all non-singular n × n matrices is a group under matrix product. The set of p × q matrices is a group under matrix addition.


Example 1.7 The set of rotations of a box discussed in example 1.3 is a group under composition of rotations when the identity operation I is added to the set. In fact the set of all rotations of a body in 3 dimensions (or in any number of dimensions) is a group under the composition of rotations. This is called the rotation group and is denoted SO(3).

Example 1.8 The set of all human beings living and dead, with the operation defined in example 1.2, is not a group. There are no unity and inverse elements, and the operation is not associative.
Example 1.9 Consider the permutations of n elements, which we shall represent graphically. In the case of three elements, for instance, the graph shown in figure 1.1 means the element 1 replaces 3, 2 replaces 1 and 3 replaces 2. We can compose permutations as shown in fig. 1.2. The set of all permutations of n elements forms a group under the composition of permutations. This is called the symmetric group of degree n, and it is generally denoted by Sn. The number of elements of this group is n!, since this is the number of distinct permutations of n elements.
[Figure 1.1: A permutation of three objects]

[Figure 1.2: A composition of permutations]
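The composition of permutations can be made concrete in code. The sketch below (an illustration, not part of the notes) stores a permutation in one-line notation and verifies the group properties of S3.

```python
from itertools import permutations

# A permutation of {0, 1, 2} is stored in one-line notation as a tuple p,
# where p[i] is the image of i. Composition applies q first, then p.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

sigma = (1, 2, 0)   # a 3-cycle, of the kind drawn in figure 1.1

S3 = set(permutations(range(3)))
assert len(S3) == 6                 # n! = 3! = 6 elements
# closure: the composition of any two permutations is again in S3
assert all(compose(p, q) in S3 for p in S3 for q in S3)
# a 3-cycle applied three times is the identity
assert compose(sigma, compose(sigma, sigma)) == (0, 1, 2)
```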


Example 1.10 The N-th roots of unity form a group under multiplication. These roots are exp(i2πm/N) with m = 0, 1, 2, ..., N − 1. The identity element is 1 (m = 0), and the inverse of exp(i2πm/N) is exp(i2π(N − m)/N). This group is called the cyclic group of order N and is denoted by ZN.
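A quick numerical check of this example (a sketch, not from the notes): products of N-th roots add their indices mod N, and the inverse of root m is indeed root N − m.

```python
import cmath

N = 6
# the N-th roots of unity: exp(2*pi*i*m/N), m = 0, 1, ..., N-1
roots = [cmath.exp(2j * cmath.pi * m / N) for m in range(N)]

def index(w):
    """Recover the index m of a root, up to rounding error."""
    return round(cmath.phase(w) / (2 * cmath.pi / N)) % N

# closure: the product of roots m and k is the root (m + k) mod N
for m in range(N):
    for k in range(N):
        assert index(roots[m] * roots[k]) == (m + k) % N

# the inverse of root m is root (N - m) mod N
for m in range(N):
    assert abs(roots[m] * roots[(N - m) % N] - 1) < 1e-9
```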
We say two elements, g1 and g2, of a group commute with each other if their product is independent of the order, i.e., if g1 g2 = g2 g1. If all elements of a given group commute with one another, then we say that this group is abelian. The real numbers under addition or multiplication (without zero) form an abelian group. The cyclic groups Zn (see example 1.10) are abelian for any n. The symmetric group Sn (see example 1.9) is not abelian for n > 2, but it is abelian for n = 2.

Let us consider some groups of order two, i.e., with two elements. The elements 0 and 1 form a group under addition modulo 2. We have

0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0    (1.2)

The elements 1 and −1 also form a group, but under multiplication. We have

1·1 = (−1)·(−1) = 1, 1·(−1) = (−1)·1 = −1    (1.3)

The symmetric group of degree 2, S2 (see example 1.9), has two elements, as shown in fig. 1.3.

e=

a=

A 
A
A
 A

Figure 1.3: The elements of S2


They satisfy

e·e = e, e·a = a·e = a, a·a = e    (1.4)

These three examples of groups are in fact different realizations of the same abstract group. If we make the identifications shown in fig. 1.4, we see that the structure of these groups is the same. We say that these groups are isomorphic.

[Figure 1.4: Isomorphism]
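The claimed isomorphism of the three order-two groups can be exhibited by comparing Cayley tables. The following sketch (not part of the notes) renames the elements of each realization and checks that the three tables coincide.

```python
# Cayley tables of the three order-two groups, with elements renamed so
# that 0 stands for the identity and 1 for the other element.
def table(elems, op):
    # find the identity: the element that leaves every element unchanged
    e = next(g for g in elems if all(op(g, h) == h for h in elems))
    idx = {g: (0 if g == e else 1) for g in elems}
    return {(idx[g], idx[h]): idx[op(g, h)] for g in elems for h in elems}

t1 = table([0, 1], lambda g, h: (g + h) % 2)                 # {0, 1} under + mod 2
t2 = table([1, -1], lambda g, h: g * h)                      # {1, -1} under multiplication
t3 = table(['e', 'a'], lambda g, h: 'e' if g == h else 'a')  # S2

# after the renaming the three tables coincide: the groups are isomorphic
assert t1 == t2 == t3
```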


Definition 1.3 Two groups G and G′ are isomorphic if their elements can be put into a one-to-one correspondence which is preserved under the composition laws of the groups. The mapping between these two groups is called an isomorphism.

If g1, g2 and g3 are elements of a group G satisfying g1 g2 = g3, and if G is isomorphic to another group G′, then the corresponding elements g1′, g2′ and g3′ in G′ have to satisfy g1′g2′ = g3′.

There is the possibility of a group G being mapped into another group G′ but not in a one-to-one manner, i.e. two or more elements of G are mapped into just one element of G′. If such a mapping respects the product law of the groups, we say they are homomorphic. The mapping is then called a homomorphism between G and G′.
Example 1.11 Consider the cyclic groups Z6, with elements e, a, a², ..., a⁵ and a⁶ = e, and Z2, with elements e′ and b (b² = e′). The mapping σ : Z6 → Z2 defined by

σ(e) = σ(a²) = σ(a⁴) = e′
σ(a) = σ(a³) = σ(a⁵) = b    (1.5)

is a homomorphism between Z6 and Z2.
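The homomorphism property of (1.5) can be verified exhaustively. In the sketch below (an illustration, not from the notes), aᵐ is represented by its exponent m mod 6, and e′, b by 0, 1 with addition mod 2.

```python
# Z6 as exponents {0, ..., 5} with addition mod 6 (a^m <-> m),
# Z2 as {0, 1} with addition mod 2 (e' <-> 0, b <-> 1).
def sigma(m):
    # sigma(a^m) = e' for m even, b for m odd, as in (1.5)
    return m % 2

# homomorphism property: sigma(a^m a^k) = sigma(a^m) sigma(a^k)
for m in range(6):
    for k in range(6):
        assert sigma((m + k) % 6) == (sigma(m) + sigma(k)) % 2
```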


Analogously, one can define mappings of a given group G into itself, i.e., to each element g ∈ G one associates another element g′. The one-to-one mappings which respect the product law of G are called automorphisms of G. In other words, an automorphism of G is an isomorphism of G onto itself.


Definition 1.4 A mapping σ : G → G is said to be an automorphism of G if it respects the product law in G, i.e., if gg′ = g″ then σ(g)σ(g′) = σ(g″).

Example 1.12 Consider again the cyclic group Z6 and the mapping σ : Z6 → Z6 defined by

σ(e) = e,  σ(a) = a⁵,  σ(a²) = a⁴
σ(a³) = a³,  σ(a⁴) = a²,  σ(a⁵) = a    (1.6)

This is an automorphism of Z6.

In fact the above example is just a particular case of the automorphism of any abelian group in which each element is mapped into its inverse.
Notice that if σ and σ′ are two automorphisms of a group G, then the composition σ ∘ σ′ is also an automorphism of G. Such composition is an associative operation. In addition, since automorphisms are one-to-one mappings, they are invertible. Therefore, if one considers the set of all automorphisms of a group G, together with the identity mapping of G into G, one gets a group, which is called the automorphism group of G.

Any element of G gives rise to an automorphism. Indeed, define the mapping σ_ḡ : G → G by

σ_ḡ(g) ≡ ḡ g ḡ⁻¹,  for all g ∈ G and ḡ fixed    (1.7)

Then

σ_ḡ(gg′) = ḡ gg′ ḡ⁻¹ = ḡ g ḡ⁻¹ ḡ g′ ḡ⁻¹ = σ_ḡ(g) σ_ḡ(g′)    (1.8)

and so it constitutes an automorphism of G. That is called an inner automorphism. The inner automorphisms form a group, and the map ḡ ↦ σ_ḡ is a homomorphism of G onto it, since

σ_ḡ1(σ_ḡ2(g)) = ḡ1 ḡ2 g ḡ2⁻¹ ḡ1⁻¹ = σ_ḡ1ḡ2(g)    (1.9)

All automorphisms which are not of this type are called outer automorphisms.
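Inner automorphisms are easy to test on a small non-abelian group. The sketch below (an illustration, not part of the notes) builds every σ_ḡ for S3 and checks that each one is a bijection respecting the product law, i.e. an automorphism.

```python
from itertools import permutations

def compose(p, q):
    """Permutations in one-line notation; apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def inner(gbar):
    """The inner automorphism sigma_gbar(g) = gbar g gbar^{-1}."""
    return {g: compose(compose(gbar, g), inverse(gbar)) for g in S3}

for gbar in S3:
    s = inner(gbar)
    assert sorted(s.values()) == sorted(S3)       # a bijection of S3
    for g in S3:
        for h in S3:                              # respects the product law
            assert s[compose(g, h)] == compose(s[g], s[h])
```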

1.2 Subgroups

A subset H of a group G which satisfies the group postulates under the same composition law used for G is said to be a subgroup of G. The identity element and the whole group G itself are subgroups of G. They are called improper subgroups. All other subgroups of a group G are called proper subgroups. If H is a subgroup of G, and K a subgroup of H, then K is a subgroup of G.

In order to find out if a subset H of a group G is a subgroup, we have to check only two of the four group postulates. We have to check if the product of any two elements of H is in H (closure) and if the inverse of each element of H is in H. The associativity property is guaranteed, since the composition law is the same as the one used for G. As G has an identity element, it follows from the closure and inverse element properties of H that this identity element is also in H.
Example 1.13 The real numbers form a group under addition. The integer numbers are a subset of the real numbers and also form a group under addition. Therefore the integers are a subgroup of the reals under addition. However, the reals without zero also form a group under multiplication, but the integers (with or without zero) do not. Consequently the integers are not a subgroup of the reals under multiplication.
Example 1.14 Take G to be the group of all integers under addition, H1 to be all even integers under addition, H2 all multiples of 2² = 4 under addition, H3 all multiples of 2³ = 8 under addition, and so on. Then we have

G:  ... −2, −1, 0, 1, 2 ...
H1: ... −4, −2, 0, 2, 4 ...
H2: ... −8, −4, 0, 4, 8 ...
H3: ... −16, −8, 0, 8, 16 ...
Hn: ... −2·2ⁿ, −2ⁿ, 0, 2ⁿ, 2·2ⁿ ...

We see that each group is a subgroup of all groups above it, i.e.

G ⊃ H1 ⊃ H2 ⊃ ... ⊃ Hn ⊃ ...    (1.10)

Moreover, there is a one-to-one correspondence between any two groups of this list such that the composition law is preserved. Therefore all these groups are isomorphic to one another,

G ≈ H1 ≈ H2 ≈ ... ≈ Hn ≈ ...    (1.11)

This shows that a group can be isomorphic to one of its proper subgroups. The same cannot happen for finite groups.
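The correspondence behind (1.11) is simply n ↦ 2n, which maps G one-to-one onto H1 while preserving addition. A small sketch (not from the notes), checked on a finite window of integers:

```python
# The map n -> 2n sends the integers onto the even integers H1 and
# preserves addition, exhibiting the isomorphism G ~ H1 of (1.11).
phi = lambda n: 2 * n

window = range(-50, 50)
for n in window:
    for m in window:
        assert phi(n + m) == phi(n) + phi(m)   # composition law preserved

# one-to-one on the window, with image inside the even integers
assert len({phi(n) for n in window}) == len(window)
assert all(phi(n) % 2 == 0 for n in window)
```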

[Figure 1.5: The cyclic permutations of n objects]


Example 1.15 The cyclic group Zn, defined in example 1.10, is a subgroup of the symmetric group Sn, defined in example 1.9. In order to see this, consider the elements of Sn corresponding to the cyclic permutations given in figure 1.5. These elements form a subgroup of Sn which has the same structure as the group formed by the n-th roots of unity under ordinary multiplication of complex numbers, i.e., Zn.
This example is a particular case of a general theorem in the theory of finite
groups, which we now state without proof. For the proof, see [HAM 62, chap
1] or [BUD 72, chap 9].
Theorem 1.1 (Cayley) Every group G of order n is isomorphic to a subgroup of the symmetric group Sn .
Definition 1.5 The order of a finite group is the number of elements it has.
Another important theorem about finite groups is the following.
Theorem 1.2 (Lagrange) The order of a subgroup of a finite group is a
divisor of the order of the group.
Corollary 1.1 If the order of a finite group is a prime number then it has no
proper subgroups.


The proof involves the concept of cosets and is given in section 1.4. A finite group of prime order is necessarily a cyclic group and can be generated from any of its elements other than the identity element.

We say an element g of a group G is conjugate to an element g′ ∈ G if there exists ḡ ∈ G such that

g = ḡ g′ ḡ⁻¹    (1.12)

This concept of conjugate elements establishes an equivalence relation on the group. Indeed, g is conjugate to itself (just take ḡ = e); if g is conjugate to g′, then g′ is conjugate to g (since g′ = ḡ⁻¹ g ḡ); and if g is conjugate to g′ and g′ to g″, i.e. g = ḡ g′ ḡ⁻¹ and g′ = g̃ g″ g̃⁻¹, then g is conjugate to g″, since g = (ḡ g̃) g″ (ḡ g̃)⁻¹. One can use such an equivalence relation to divide the group G into classes.
Definition 1.6 The set of elements of a group G which are conjugate to each other constitutes a conjugacy class of G.

Obviously different conjugacy classes have no common elements. The identity element e constitutes a conjugacy class by itself in any group. Indeed, if g′ is conjugate to the identity e, g′ = ḡ e ḡ⁻¹, then g′ = e.
Given a subgroup H of a group G, we can form the set of elements g⁻¹Hg, where g is any fixed element of G and H stands for any element of the subgroup H. This set is also a subgroup of G and is said to be a conjugate subgroup of H in G. In fact the conjugate subgroups of H are all isomorphic to H, since if h1, h2 ∈ H and h1 h2 = h3, we have that h1′ = g⁻¹h1 g and h2′ = g⁻¹h2 g satisfy

h1′h2′ = g⁻¹h1 g g⁻¹h2 g = g⁻¹h1 h2 g = g⁻¹h3 g = h3′    (1.13)

Notice that the images of two different elements of H, under conjugation by g ∈ G, cannot be the same. Because if they were the same we would have

g⁻¹h1 g = g⁻¹h2 g  ⟹  g(g⁻¹h1 g)g⁻¹ = h2  ⟹  h1 = h2    (1.14)

and that is a contradiction.


By choosing various elements g ∈ G we can form different conjugate subgroups of H in G. However, it may happen that for all g ∈ G we have

g⁻¹Hg = H    (1.15)

This means that all conjugate subgroups of H in G are not only isomorphic to H but are identical to H. In this case we say that the subgroup H is an invariant subgroup of G. This implies that, given an element h1 ∈ H, we can find, for any element g ∈ G, an element h2 ∈ H such that

g⁻¹h1 g = h2,  i.e.  h1 g = g h2    (1.16)


We can write this as

gH = Hg    (1.17)

and say that the invariant subgroup H, taken as an entity, commutes with all elements of G. The identity element and the group G itself are trivial examples of invariant subgroups of G. Any subgroup of an abelian group is an invariant subgroup.
Definition 1.7 We say a group G is simple if its only invariant subgroups
are the identity element and the group G itself. In other words, G is simple if
it has no invariant proper subgroups. We say G is semisimple if none of its
invariant subgroups is abelian.
Example 1.16 Consider the group of the non-singular real n × n matrices, which is generally denoted by GL(n). The matrices of this group with unit determinant form a subgroup, since if det M = det N = 1 we have det(M N) = 1 and det M⁻¹ = 1/det M = 1. This subgroup of GL(n) is denoted by SL(n). If g ∈ GL(n) and M ∈ SL(n), we have that g⁻¹M g ∈ SL(n), since det(g⁻¹M g) = det M = 1. Therefore SL(n) is an invariant subgroup of GL(n), and consequently the latter is not simple. Consider now the matrices of the form R ≡ x 1l_{n×n}, with x a non-zero real number and 1l_{n×n} the n × n identity matrix. Notice that this set of matrices constitutes a subgroup of GL(n), since the identity belongs to it, the product of any two of them belongs to the set, and the inverse of R ≡ x 1l_{n×n} is R⁻¹ = (1/x) 1l_{n×n}, which is also an element of the set. In addition, such a subgroup is invariant, since any matrix R commutes with any element of GL(n), and so it is invariant under conjugation. Since that subgroup is abelian, it follows that GL(n) is not semisimple.
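The key fact used above, det(g⁻¹M g) = det M, can be checked numerically. The sketch below (an illustration, not from the notes) hand-rolls 2 × 2 matrix arithmetic to verify that conjugation preserves the unit determinant for a sample pair of matrices.

```python
# Hand-rolled 2x2 matrices as ((a, b), (c, d)): a numerical check that
# conjugation by an invertible matrix preserves the determinant, which is
# why SL(n) is an invariant subgroup of GL(n) (illustrated for n = 2).
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return ((A[1][1] / d, -A[0][1] / d), (-A[1][0] / d, A[0][0] / d))

g = ((2.0, 1.0), (0.0, 3.0))    # an element of GL(2), det g = 6
M = ((3.0, 5.0), (1.0, 2.0))    # det M = 1, so M is in SL(2)
conj = matmul(matmul(inv(g), M), g)
assert abs(det(conj) - 1.0) < 1e-9   # g^{-1} M g is still in SL(2)
```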
Definition 1.8 Given an element g of a group G, we can form the set of all elements of G which commute with g, i.e., all x ∈ G such that xg = gx. This set is called the centralizer of g, and it is a subgroup of G.

In order to see it is a subgroup of G, take two elements x1 and x2 of the centralizer of g, i.e., x1 g = gx1 and x2 g = gx2. Then it follows that (x1 x2)g = x1(x2 g) = x1(gx2) = g(x1 x2). Therefore x1 x2 is also in the centralizer. On the other hand, we have that

x1⁻¹(x1 g)x1⁻¹ = x1⁻¹(gx1)x1⁻¹  ⟹  g x1⁻¹ = x1⁻¹ g    (1.18)

So the inverse of an element of the centralizer is also in the centralizer. Therefore the centralizer of an element g ∈ G is a subgroup of G. Notice that


although all elements of the centralizer commute with a given element g, they do not have to commute among themselves, and therefore the centralizer is not necessarily an abelian subgroup of G.

Definition 1.9 The center of a group G is the set of all elements of G which commute with all elements of G.

We could say that the center of G is the intersection of the centralizers of all elements of G. The center of a group G is a subgroup of G, and it is abelian, since by definition its elements have to commute with one another. In addition, it is an (abelian) invariant subgroup.
Example 1.17 The set of all unitary n × n matrices forms a group, called U(n), under matrix multiplication. That is because if U1 and U2 are unitary (U1† = U1⁻¹ and U2† = U2⁻¹), then U3 ≡ U1 U2 is also unitary. In addition, the inverse of U is just U†, and the identity is the unit n × n matrix. The unitary matrices with unit determinant constitute a subgroup, because the product of two of them, as well as their inverses, have unit determinant. That subgroup is denoted SU(n). It is an invariant subgroup of U(n), because the conjugation of a matrix of unit determinant by any unitary matrix gives a matrix of unit determinant, i.e. det(U M U†) = det M = 1, with U ∈ U(n) and M ∈ SU(n). Therefore U(n) is not simple. However, it is not semisimple either, because it has an abelian subgroup constituted by the matrices R ≡ e^{iθ} 1l_{n×n}, with θ real. Indeed, the product of any two R's is again in the set of matrices R, and the inverse of R is R⁻¹ = e^{−iθ} 1l_{n×n}, also a matrix in the set. Notice the subgroup constituted by the matrices R is isomorphic to U(1), the group of 1 × 1 unitary matrices, i.e. phases e^{iθ}. Since the matrices R commute with any unitary matrix, it follows that they are invariant under conjugation by elements of U(n). Therefore the subgroup U(1) is an abelian invariant subgroup of U(n), and so U(n) is not semisimple. The subgroup U(1) is in fact the center of U(n), i.e. the set of matrices commuting with all unitary matrices. Notice that such a U(1) is not a subgroup of SU(n), since its elements do not have unit determinant. However, the discrete subset of matrices e^{2πim/n} 1l_{n×n}, with m = 0, 1, 2, ..., (n − 1), have unit determinant and belong to SU(n). They certainly commute with all n × n matrices, and constitute the center of SU(n). Those matrices form an abelian invariant subgroup of SU(n), which is isomorphic to Zn. Therefore SU(n) is not semisimple.
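The determinant condition singling out the center of SU(n) can be checked directly: the scalar matrix e^{2πim/n} 1l has determinant (e^{2πim/n})ⁿ = e^{2πim} = 1. A small sketch (not part of the notes), for n = 3:

```python
import cmath

n = 3
# The center of SU(n): scalar matrices exp(2*pi*i*m/n) * 1l, m = 0, ..., n-1,
# represented here just by their scalar factors.
center = [cmath.exp(2j * cmath.pi * m / n) for m in range(n)]

for w in center:
    # det(w * 1l_{nxn}) = w**n = exp(2*pi*i*m) = 1, so w * 1l is in SU(n)
    assert abs(w**n - 1) < 1e-9
    # |w| = 1, so w * 1l is unitary
    assert abs(abs(w) - 1) < 1e-12

# closed under multiplication, with indices adding mod n: isomorphic to Z_n
prods = {round(cmath.phase(a * b) / (2 * cmath.pi / n)) % n
         for a in center for b in center}
assert prods == set(range(n))
```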

1.3 Direct Products

We say a group G is the direct product of its subgroups H1, H2, ..., Hn, denoted by G = H1 ⊗ H2 ⊗ ... ⊗ Hn, if

1. the elements of different subgroups commute;

2. every element g ∈ G can be expressed in one and only one way as

g = h1 h2 ... hn    (1.19)

where hi is an element of the subgroup Hi, i = 1, 2, ..., n.

From these requirements it follows that the subgroups Hi have only the identity e in common. Because if f ≠ e is a common element of H2 and H5, say, then the element g = h1 f h3 h4 f⁻¹ h6 ... hn could also be written as g = h1 f⁻¹ h3 h4 f h6 ... hn, contradicting the uniqueness of the decomposition. Every subgroup Hi is an invariant subgroup of G, because if hi′ ∈ Hi then

g⁻¹hi′g = (h1 h2 ... hn)⁻¹ hi′ (h1 h2 ... hn) = hi⁻¹hi′hi ∈ Hi    (1.20)

Example 1.18 Consider the cyclic group Z6 with elements e, a, a², a³, a⁴ and a⁵ (and a⁶ = e). It can be written as the direct product of its subgroups H1 = {e, a², a⁴} and H2 = {e, a³}, since

e = e e,  a = a⁴a³,  a² = a²e,  a³ = e a³,  a⁴ = a⁴e,  a⁵ = a²a³    (1.21)

Therefore we write Z6 = H1 ⊗ H2 (or Z6 = Z3 ⊗ Z2).
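The uniqueness of the decomposition (1.21) can be verified exhaustively. In the sketch below (an illustration, not from the notes), aᵐ is represented by its exponent m mod 6.

```python
# Z6 as exponents mod 6; H1 = {e, a^2, a^4} ~ Z3 and H2 = {e, a^3} ~ Z2.
H1, H2 = [0, 2, 4], [0, 3]

# collect, for every element of Z6, all ways of writing it as h1 * h2
decompositions = {}
for h1 in H1:
    for h2 in H2:
        decompositions.setdefault((h1 + h2) % 6, []).append((h1, h2))

assert set(decompositions) == set(range(6))               # every element reached
assert all(len(v) == 1 for v in decompositions.values())  # in exactly one way
```

For instance, a (exponent 1) is reached only as a⁴a³, matching (1.21).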


Given two groups G and G′, we can construct another group by taking the direct product of G and G′ as follows: the elements of G″ = G ⊗ G′ are the pairs (g, g′) with g ∈ G and g′ ∈ G′. The composition law for G″ is defined by

(g1, g1′)(g2, g2′) = (g1 g2, g1′g2′)    (1.22)

where g1 g2 (g1′g2′) is the product of g1 by g2 (g1′ by g2′) according to the composition law of G (G′). If e and e′ are respectively the identity elements of G and G′, then the sets G ⊗ 1 ≡ {(g, e′) | g ∈ G} and 1 ⊗ G′ ≡ {(e, g′) | g′ ∈ G′} are subgroups of G″ = G ⊗ G′ and are isomorphic respectively to G and G′. Obviously G ⊗ 1 and 1 ⊗ G′ are invariant subgroups of G″ = G ⊗ G′.

1.4 Cosets

Given a group G and a subgroup H of G, we can divide the group G into disjoint sets such that any two elements of a given set differ by an element of H multiplied from the right. That is, we construct the sets

gH ≡ {all elements gh of G such that h is any element of H and g is a fixed element of G}

If g = e, the set eH is the subgroup H itself. All elements in a set gH are different, because if gh1 = gh2 then h1 = h2. Therefore the number of elements of a given set gH is the same as the number of elements of the subgroup H. Also, an element of a set gH is not contained in any other set g′H with g′ ≠ g.¹ Because if gh1 = g′h2, then g = g′h2 h1⁻¹, and therefore g would be contained in g′H and consequently gH ⊂ g′H. Thus we have split the group G into disjoint sets, each with the same number of elements, and a given element g ∈ G belongs to one and only one of these sets.

Proof of Lagrange's theorem (section 1.2).
From the considerations above we see that for a finite group G of order m, with a proper subgroup H of order n, we can write

m = k n    (1.23)

where k is the number of disjoint sets gH. □

The set of elements gH are called left cosets of H in G. They are certainly
not subgroups of G since they do not contain the identity element, except for
the set eH = H.
Analogously we could have split G into sets Hg which are formed by elements of G which differ by an element of H multiplied from the left. The same
results would be true for these sets. They are called right cosets of H in G.
The set of left cosets of H in G is denoted by G/H and is called the left coset
space. An element of G/H is a set of elements of G, namely gH. Analogously
the set of right cosets of H in G is denoted by H \ G and it is called the right
coset space.
If the subgroup H of G is an invariant subgroup then the left and right
cosets are the same, since g⁻¹Hg = H implies gH = Hg. (Notice that two sets
gH and g′H may coincide for g′ ≠ g; in that case g and g′ differ by an element
of H, i.e. g′ = gh.) In addition, the coset space G/H, for the case in which H
is invariant, has the structure of a group and it is called the factor group or
the quotient group. In order to show this we consider the product of two
elements of two different cosets. We get

gh1 g′h2 = gg′ (g′⁻¹h1 g′) h2 = gg′ h3 h2                (1.24)

where we have used the fact that H is invariant, and therefore there exists
h3 ∈ H such that g′⁻¹h1 g′ = h3. Thus we have obtained an element of a
third coset, namely gg′H. If we had taken any other elements of the cosets
gH and g′H, their product would produce an element of the same coset gg′H.
Consequently we can introduce, in a well defined way, the product of elements
of the coset space G/H, namely

gH g′H ≡ gg′H                (1.25)

The invariant subgroup H plays the role of the identity element since

(gH)H = H(gH) = gH                (1.26)

The inverse element is g⁻¹H since

g⁻¹H gH = g⁻¹gH = H = gH g⁻¹H                (1.27)

The associativity is guaranteed by the associativity of the composition law of
the group G. Therefore the coset space G/H ≡ H\G is a group in the case
where H is an invariant subgroup. Notice that such a group is not necessarily
a subgroup of G or of H.
Example 1.19 The real numbers without the zero, IR − {0}, form a group under
multiplication. The positive real numbers, IR+, close under multiplication and
the inverse of a positive real number x is also positive (1/x). Therefore IR+
is a subgroup of IR − {0}. In addition we have that the conjugation of a real x
by another real y is equal to x, (y⁻¹xy = x). Therefore IR+ is an invariant
subgroup of IR − {0}. The coset space (IR − {0})/IR+ has two elements, namely
IR+ and IR− (the negative real numbers). This coset space is a group and it
is isomorphic to the cyclic group of order 2, Z2 (see example 1.10), since its
elements satisfy IR+ · IR+ = IR+ , IR+ · IR− = IR− , IR− · IR− = IR+.
Example 1.20 Any subgroup of an abelian group is an invariant subgroup.
Example 1.21 Consider the cyclic group Z6 with elements e, a, a2 , ... a5
and a6 = e and the subgroup Z2 with elements e and a3 . Then the cosets are
given by
c0 = {e, a3 } , c1 = {a, a4 } , c2 = {a2 , a5 }
(1.28)


Since Z2 is an invariant subgroup of Z6 the coset space Z6 /Z2 is a group.


Following the definition of the product law on the coset given above one easily
sees it is isomorphic to Z3 since
c0 .c0 = c0 , c0 .c1 = c1 , c0 .c2 = c2
c1 .c1 = c2 , c1 .c2 = c0 , c2 .c2 = c1

(1.29)

If we now take the subgroup Z3 of Z6 with elements e, a2 and a4 we get the
cosets

d0 = {e, a2 , a4 } , d1 = {a, a3 , a5 }                (1.30)

Again the coset space Z6 /Z3 is a group and it is isomorphic to Z2 since

d0 .d0 = d0 , d0 .d1 = d1 , d1 .d1 = d0                (1.31)
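The coset multiplication (1.25) can be checked to be well defined (independent of the chosen representatives), and the table (1.29) recovered, with a short Python sketch (again encoding Z6 elements as exponents mod 6, a labelling of ours):

```python
# The quotient Z6/Z2 of eqs. (1.28)-(1.29); Z6 elements are exponents mod 6.
H = [0, 3]                                 # Z2 = {e, a^3}
coset = lambda g: frozenset((g + h) % 6 for h in H)
c = [coset(0), coset(1), coset(2)]         # c0, c1, c2

# the product (1.25) is well defined: it does not depend on the representatives
for g in range(6):
    for gp in range(6):
        assert coset(g + gp) == coset(next(iter(coset(g))) + next(iter(coset(gp))))

# multiplication table of Z6/Z2: entry [i][j] is the index of c_i . c_j
table = [[c.index(coset(i + j)) for j in range(3)] for i in range(3)]
```

The resulting table is the cyclic table of Z3, in agreement with (1.29).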

1.5 Representations

The concept of abstract groups we have been discussing plays an important
role in Physics. However, its importance only appears when some quantities
in the physical theory realize, in a concrete way, the structure of the abstract
group. Here comes the concept of representation of an abstract group.
Suppose we have a set of operators D1, D2, ... acting on a vector space V

Di |v⟩ = |v′⟩ ;  |v⟩, |v′⟩ ∈ V                (1.32)

We can define the product of these operators by the composition of their action,
i.e., an operator D3 is the product of two other operators D1 and D2 if

D1 (D2 |v⟩) = D1 |v′⟩ = D3 |v⟩                (1.33)

for all |v⟩ ∈ V. We then write

D1 · D2 = D3                (1.34)

Suppose that these operators form a group under this product law. We call it
an operator group or group of transformations.
If we can associate to each element g of an abstract group G an operator,
which we shall denote by D(g), such that the group structure of G is preserved,
i.e., if for g, g′ ∈ G we have

D(g)D(g′) = D(gg′)                (1.35)

then we say that such a set of operators is a representation of the abstract group
G in the representation space V. In fact, the mapping between the operator
group D and the abstract group G is a homomorphism. In addition to eq. (1.35)
one also has that

D(g⁻¹) = D⁻¹(g)
D(e) = 1                (1.36)

where 1 stands for the unit operator in D.


Definition 1.10 The dimension of the representation is the dimension of the
representation space.
Notice that we can associate the same operator to two or more elements of
G, but we cannot do the converse. In the case where there is a one-to-one
correspondence between the elements of the abstract group and the set of
operators, i.e., to one operator D there is only one element g associated, we
say that we have a faithful representation.


Example 1.22 The unit matrix of any order is a trivial representation of any
group. Indeed, if we associate all elements of a given group to the operator 1
we have that the relation 1 · 1 = 1 reproduces the composition law of the group
g · g′ = g″. This is an example of an extremely non-faithful representation.
When the operators D are linear operators, i.e.,

D(|v⟩ + |v′⟩) = D|v⟩ + D|v′⟩
D(a|v⟩) = a D|v⟩                (1.37)

with |v⟩, |v′⟩ ∈ V and a being a c-number, we say they form a linear
representation of G.
Given a basis |vi⟩ (i = 1, 2, ..., n) of the vector space V (of dimension n)
we can construct the matrix representatives of the operators D of a given
representation. The action of an operator D on an element |vi⟩ of the basis
produces an element of the vector space which can be written as a linear
combination of the basis

D|vi⟩ = |vj⟩ Dji                (1.38)

The coefficients Dji of this expansion constitute the matrix representative of
the operator D. Indeed, we have

D′(D|vi⟩) = D′|vj⟩ Dji = |vk⟩ D′kj Dji = |vk⟩ (D′D)ki                (1.39)

So, we can now associate to the matrix Dij the element of the abstract group
that is associated to the operator D. We have then what is called a matrix
representation of the abstract group. Notice that the matrices in each
representation have to be non-singular because of the existence of the inverse
element. In addition the unit element e is always represented by the unit
matrix, i.e., Dij(e) = δij.
Example 1.23 In example 1.9 we have defined the group Sn. We can construct
a representation for this group in terms of n × n matrices as follows:
take a vector space Vn and let |vi⟩, i = 1, 2, ..., n, be a basis of Vn. One can
define n! operators that, acting on the basis vectors, permute them, reproducing
the n! permutations of n elements. Using (1.38) one then obtains the matrices.
For instance, in the case of S3, consider the matrices

        ( 1 0 0 )              ( 0 1 0 )
D(a0) = ( 0 1 0 ) ;   D(a1) = ( 1 0 0 ) ;
        ( 0 0 1 )              ( 0 0 1 )

        ( 1 0 0 )              ( 0 0 1 )
D(a2) = ( 0 0 1 ) ;   D(a3) = ( 0 1 0 ) ;
        ( 0 1 0 )              ( 1 0 0 )

        ( 0 1 0 )              ( 0 0 1 )
D(a4) = ( 0 0 1 ) ;   D(a5) = ( 1 0 0 )                (1.40)
        ( 1 0 0 )              ( 0 1 0 )

where am, m = 0, 1, 2, 3, 4, 5, are the 6 elements of S3. One can check that
the action

D(am) |vi⟩ = |vj⟩ Dji(am)                (1.41)

gives the 6 permutations of the three basis vectors |vi⟩, i = 1, 2, 3, of V3.
In addition the product of these matrices reproduces the composition law of
permutations in S3.
By considering V3 as the space of column vectors 3 × 1, and taking the
canonical basis

        ( 1 )           ( 0 )           ( 0 )
|v1⟩ = ( 0 ) ;  |v2⟩ = ( 1 ) ;  |v3⟩ = ( 0 )                (1.42)
        ( 0 )           ( 0 )           ( 1 )

one can check that the matrices given above play the role of the operators
permuting the basis too

Dij(am) (|vk⟩)j = (|vl⟩)i Dlk(am)                (1.43)
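The six matrices of (1.40) and their closure under multiplication can be verified directly; a minimal Python sketch (helper names are ours) checks that every product of two matrices in the set lands back in the set, and that each matrix is indeed a permutation matrix:

```python
# The six matrices of eq. (1.40), entered row by row, and a check that they
# close under matrix multiplication, i.e. that they represent S3.
D = [
    [[1,0,0],[0,1,0],[0,0,1]],   # D(a0)
    [[0,1,0],[1,0,0],[0,0,1]],   # D(a1)
    [[1,0,0],[0,0,1],[0,1,0]],   # D(a2)
    [[0,0,1],[0,1,0],[1,0,0]],   # D(a3)
    [[0,1,0],[0,0,1],[1,0,0]],   # D(a4)
    [[0,0,1],[1,0,0],[0,1,0]],   # D(a5)
]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# closure: every product of two matrices in the set is again in the set
for A in D:
    for B in D:
        assert matmul(A, B) in D

# each matrix permutes the canonical basis vectors of (1.42):
# its columns are the three basis vectors in some order
for A in D:
    cols = [tuple(row[j] for row in A) for j in range(3)]
    assert sorted(cols) == [(0,0,1),(0,1,0),(1,0,0)]
```

For instance, D(a4)D(a4) comes out equal to D(a5), consistent with a4 and a5 being the two cyclic permutations.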

In a non-faithful representation of a group G, the set of elements which are
mapped on the unit operator constitutes an invariant subgroup of G. Indeed,
if the representatives of the elements h and h′ of G are the unit operator, i.e.,
D(h) = D(h′) = 1, then D(hh′) = D(h)D(h′) = 1. In addition one has that
D(h⁻¹) = 1 since D(h)D(h⁻¹) = D(e) = 1 = 1 D(h⁻¹). So, such a subset of G
is indeed a subgroup. To see it is invariant one uses eq. (1.36) to get

D(g⁻¹hg) = D⁻¹(g) D(h) D(g) = D⁻¹(g) 1 D(g) = 1                (1.44)

Denoting by H this invariant subgroup, we see that all elements in a given
coset gH of the coset space G/H are mapped on the same matrix D(g) since

D(gh) = D(g)D(h) = D(g) 1 = D(g) ;  h ∈ H                (1.45)

Therefore the representation D of G constitutes a faithful representation of
the factor group G/H.


Two representations D and D′ of an abstract group G are said to be
equivalent representations if there exists an operator C such that

D′(g) = C D(g) C⁻¹                (1.46)

with C being the same for every g ∈ G. Such a thing happens, for instance,
when one changes the basis of the representation

|vi′⟩ = |vj⟩ ∆ji                (1.47)

Then

D(g)|vi′⟩ ≡ |vj′⟩ D′ji(g)
         = |vk⟩ Dkl(g) ∆li
         = |vn⟩ ∆nj ∆⁻¹jk Dkl(g) ∆li
         = |vj′⟩ ∆⁻¹jk Dkl(g) ∆li                (1.48)

Therefore the new matrix representatives are

D′ji(g) = ∆⁻¹jk Dkl(g) ∆li                (1.49)

So, the matrix representatives change as in (1.46) with C = ∆⁻¹. Although
the structure of the representation does not change, the matrices look different.
As we have said before, the operators of a given representation act on the
representation space V as a group of transformations. In the case where a
subspace of V is left invariant by all transformations, we say the representation
is reducible. This implies that if a matrix representation is reducible then there
exists a basis where the matrices can be written in the form

D(g) = ( A  C )
       ( 0  B )                (1.50)

where A, B and C are respectively m × m, n × n and m × n matrices. The
dimension of the representation is m + n. The subspace V1 of V generated by
the first m elements of the basis is left invariant, since

( A  C ) ( v1 )   ( A v1 )
( 0  B ) ( 0  ) = (  0   )                (1.51)

i.e., V1 does not mix with the rest of V. The subspace V2 of V generated by
the last n elements of the basis is not invariant since

( A  C ) ( 0  )   ( C v2 )
( 0  B ) ( v2 ) = ( B v2 )                (1.52)

When both subspaces V1 and V2 are invariant we say the representation is
completely reducible. In this case the matrices take the form

D(g) = ( A  0 )
       ( 0  B )                (1.53)

Lemma 1.1 (Schur) Any matrix which commutes with all matrices of a given
irreducible representation of a group G must be a multiple of the unit matrix.

Proof Let A be a matrix that commutes with all matrices D(g) of a given
irreducible representation of G, i.e.

A D(g) = D(g) A                (1.54)

for any g ∈ G. Consider the eigenvalue equation

A |v⟩ = λ |v⟩                (1.55)

where |v⟩ is some vector in the representation space V. Notice that, if |v⟩ is
an eigenvector with eigenvalue λ, then D(g)|v⟩ is also an eigenvector with
eigenvalue λ, since

A D(g) |v⟩ = D(g) A |v⟩ = λ D(g) |v⟩                (1.56)

Therefore the subspace of V generated by all eigenvectors of A with eigenvalue
λ is an invariant subspace of V. But if the representation is irreducible that
means this subspace is either the zero vector or the entire V. In the first case
we get that A = 0, and in the second we get that A has only one eigenvalue
and therefore A = λ1. □
Corollary 1.2 Every irreducible representation of an abelian group is
one-dimensional.

Proof Since the group is abelian any matrix has to commute with all other
matrices of the representation. According to Schur's lemma they have to be
proportional to the identity matrix. So, any vector of the representation space
V generates an invariant subspace. Therefore V has to be one-dimensional if
the representation is irreducible. □
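Schur's lemma can be seen at work numerically. A standard consequence is that the group average of D(g) X D(g)⁻¹ commutes with every D(g), so for an irreducible representation it must be a multiple of the identity. The sketch below (our own illustration, not from the text) uses a 2-dimensional irreducible representation of S3 built from rotations by 0°, ±120° and three reflections of the plane:

```python
# Group-averaging illustration of Schur's lemma for a 2-dim irrep of S3.
import math

c, s = math.cos(2*math.pi/3), math.sin(2*math.pi/3)
rot = lambda co, si: [[co, -si], [si, co]]       # rotations
ref = lambda co, si: [[co, si], [si, -co]]       # reflections
reps = [rot(1, 0), rot(c, s), rot(c, -s), ref(1, 0), ref(c, s), ref(c, -s)]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[3.0, 1.0], [-2.0, 5.0]]      # an arbitrary matrix
avg = [[0.0, 0.0], [0.0, 0.0]]
for g in reps:                      # orthogonal matrices: g^{-1} = g^T
    ginv = [[g[0][0], g[1][0]], [g[0][1], g[1][1]]]
    P = matmul(matmul(g, X), ginv)
    for i in range(2):
        for j in range(2):
            avg[i][j] += P[i][j] / len(reps)

# the average is (tr X / 2) times the identity, as Schur's lemma demands
trace_half = (X[0][0] + X[1][1]) / 2
assert abs(avg[0][0] - trace_half) < 1e-12
assert abs(avg[1][1] - trace_half) < 1e-12
assert abs(avg[0][1]) < 1e-12 and abs(avg[1][0]) < 1e-12
```

The trace is preserved by the averaging, which fixes the multiple of the identity to be tr X / dim V.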
Definition 1.11 A representation D is said to be unitary if the matrices Dij
of the operators are unitary, i.e. D† = D⁻¹.
An important result in the theory of finite groups is the following theorem


Theorem 1.3 Any representation of a finite group is equivalent to a unitary
representation.

Proof Let G be a finite group of order N, and D be a representation of G of
dimension d. We introduce a hermitian matrix H (H† = H) by

H ≡ (1/N) Σ_{g∈G} D†(g) D(g)                (1.57)

For any g′ ∈ G

D†(g′) H D(g′) = (1/N) Σ_{g∈G} D†(gg′) D(gg′) = H                (1.58)

by redefining the sum (remember that if g1 g′ = g2 g′ then g1 = g2). Since
H is hermitian it can be diagonalized by a unitary matrix, i.e. H′ ≡ U† H U
is diagonal. For any non-zero column vector v (with complex entries), the
quantity

v† H v = (1/N) Σ_{g∈G} | D(g) v |²                (1.59)

is real and positive. But, introducing v′ ≡ U† v,

v† H v = v′† H′ v′ = Σ_{i=1}^{d} H′ii | v′i |²                (1.60)

where v′i are the components of v′. Since the v′i are arbitrary we conclude that
each entry H′ii of H′ is real and positive. We then define a diagonal real matrix
h with entries hii = √(H′ii), i.e. H′ = hh. Therefore

H = U H′ U† = U h h U† ≡ S S†                (1.61)

where we have defined S ≡ U h U†. Notice that S is hermitian, since h is real
and diagonal.
Defining the representation of G given by the matrices

D′(g) ≡ S D(g) S⁻¹                (1.62)

we then get from eq. (1.58)

(S⁻¹ D′(g) S)† (S S†) (S⁻¹ D′(g) S) = S S†                (1.63)

and so

D′†(g) D′(g) = 1l                (1.64)

Therefore the representation D(g) is equivalent to the unitary representation
D′(g). This result, as we will discuss later, is also true for compact Lie groups. □
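The averaging trick of eq. (1.57) can be illustrated concretely. The sketch below (our own example; it stops short of taking the matrix square root S) uses the non-unitary representation of Z2 = {e, a} given by D(a) = [[1, 1], [0, −1]], which satisfies D(a)² = 1, and verifies the invariance property (1.58) of the averaged matrix H, which is what makes the representation unitary with respect to the inner product ⟨u, v⟩ = u†Hv:

```python
# Averaging trick of theorem 1.3 for a non-unitary representation of Z2.
I2 = [[1, 0], [0, 1]]
A  = [[1, 1], [0, -1]]          # D(a), with A A = identity
reps = [I2, A]

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(M):                   # real matrices here, so dagger = transpose
    return [[M[j][i] for j in range(2)] for i in range(2)]

# H = (1/N) sum_g D†(g) D(g), eq. (1.57)
H = [[sum(matmul(dagger(D), D)[i][j] for D in reps) / len(reps)
      for j in range(2)] for i in range(2)]

# invariance (1.58): D†(g) H D(g) = H for every group element
for D in reps:
    assert matmul(matmul(dagger(D), H), D) == H
```

Diagonalizing H and taking its positive square root, as in eqs. (1.61)–(1.62), would then produce the explicit unitarizing similarity transformation.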
Definition 1.12 Given two representations D and D′ of a given group G, one
can construct what is called the tensor product representation of D and D′.
Denoting by |vi⟩, i = 1, 2, ..., dim D, and |vl′⟩, l = 1, 2, ..., dim D′, the bases of
D and D′ respectively, one constructs the basis of D ⊗ D′ as

|wil⟩ = |vi⟩ ⊗ |vl′⟩                (1.65)

The operators representing the group elements act as

D^⊗(g) |wil⟩ = D(g) ⊗ D′(g) |wil⟩ = D(g)|vi⟩ ⊗ D′(g)|vl′⟩                (1.66)

The dimension of the representation D ⊗ D′ is the product of the dimensions
of D and D′, i.e. dim (D ⊗ D′) = dim D · dim D′.
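In matrix form the tensor product representation is the Kronecker product of the two sets of matrices. A short Python sketch (helper names are ours) checks the dimension count and the fact that the Kronecker product preserves the homomorphism property, (A ⊗ A)(B ⊗ B) = (AB) ⊗ (AB):

```python
# Tensor product of matrix representations via the Kronecker product.
def kron(A, B):
    n, m = len(A), len(B)
    # row index runs over pairs (i, k), column index over pairs (j, l)
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# two 2x2 matrices standing in for D(g) and D(h) in one representation
A = [[0, -1], [1, 0]]
B = [[0, 1], [-1, 0]]

# the homomorphism property survives the tensor product
assert matmul(kron(A, A), kron(B, B)) == kron(matmul(A, B), matmul(A, B))
# dim(D tensor D') = dim D * dim D'
assert len(kron(A, B)) == len(A) * len(B)
```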
The matrices representing a given group element in two equivalent
representations may look quite different from one another. That means the
matrices contain a lot of redundant information. Many of the relevant
properties of a representation can be encoded in the character.
Definition 1.13 In a given representation D of a group G we define the
character χ^D(g) of a group element g ∈ G as the trace of the matrix
representing it, i.e.

χ^D(g) ≡ Tr(D(g)) = Σ_{i=1}^{dim D} Dii(g)                (1.67)

Obviously, the characters of a given group element in two equivalent
representations are the same, since from (1.46)

Tr(D′(g)) = Tr(C D(g) C⁻¹) = Tr(D(g))  ⇒  χ^{D′}(g) = χ^D(g)                (1.68)

Analogously, the elements of a given conjugacy class have the same character.
Indeed, from definition 1.6, if two elements g′ and g″ are conjugate, g′ =
g g″ g⁻¹, then in any representation D one has Tr(D(g′)) = Tr(D(g″)). Nothing
prevents, however, the elements of two different conjugacy classes from having
the same character in some particular representation. In fact, this happens in
the representation discussed in example 1.22.

We have seen that the identity element e of a group G is always represented
by the unit matrix. Therefore the character of e gives the dimension of the
representation

χ^D(e) = dim D                (1.69)
We now state, without proof, some theorems concerning characters. For
the proofs see, for instance, [COR 84].

Theorem 1.4 Let D and D′ be two irreducible representations of a finite
group G and χ^D and χ^{D′} the corresponding characters. Then

(1/N(G)) Σ_{g∈G} (χ^D(g))* χ^{D′}(g) = δ_{DD′}                (1.70)

where N(G) is the order of G, δ_{DD′} = 1 if D and D′ are equivalent
representations and δ_{DD′} = 0 otherwise.
Theorem 1.5 A sufficient condition for two representations of a finite group
G to be equivalent is the equality of their character systems.
Theorem 1.6 The number of times nD that an irreducible representation D
appears in a given reducible representation D′ of a finite group G is given by

nD = (1/N(G)) Σ_{g∈G} χ^{D′}(g) (χ^D(g))*                (1.71)

where χ^D and χ^{D′} are the characters of D and D′ respectively, and N(G) is
the order of G.
Theorem 1.7 A necessary and sufficient condition for a representation D of
a finite group G to be irreducible is

(1/N(G)) Σ_{g∈G} | χ^D(g) |² = 1                (1.72)

where χ^D are the characters of D and N(G) is the order of G.


All these four theorems are also true for compact Lie groups (see definition
in chapter 2) with the replacement of the sum (1/N(G)) Σ_{g∈G} by the
invariant integration ∫_G Dg over the group manifold.
Characters are also used to prove theorems about the number of
inequivalent irreducible representations of a finite group.


Theorem 1.8 The sum of the squares of the dimensions of the inequivalent
irreducible representations of a finite group G is equal to the order of G.
Theorem 1.9 The number of inequivalent irreducible representations of a finite group G is equal to the number of conjugacy classes of G.
For the proofs see [COR 84].
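These theorems are easy to check by hand on a small abelian group. The sketch below (our own example) uses the cyclic group Z3, whose three inequivalent irreducible representations are one-dimensional (by corollary 1.2) with characters χ_p(aᵏ) = exp(2πi p k/3), and verifies the orthogonality relation (1.70) together with the counting of theorems 1.8 and 1.9:

```python
# Character orthogonality and counting theorems, checked on Z3.
import cmath

N = 3
# chi[p][k] is the character of a^k in the p-th irreducible representation
chi = [[cmath.exp(2j * cmath.pi * p * k / N) for k in range(N)]
       for p in range(N)]

# orthogonality (1.70): (1/N) sum_g chi_p(g)* chi_q(g) = delta_pq
for p in range(N):
    for q in range(N):
        s = sum(chi[p][k].conjugate() * chi[q][k] for k in range(N)) / N
        assert abs(s - (1 if p == q else 0)) < 1e-12

# theorem 1.8: sum of squared dimensions (all equal to 1) is the order of Z3
assert sum(1**2 for _ in range(N)) == N
# theorem 1.9: Z3 is abelian, so each element is its own conjugacy class;
# 3 classes match the 3 inequivalent irreducible representations above
```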
Definition 1.14 If all the matrices of a representation are real the
representation is said to be real.

Notice that if D is a matrix representation of a group G, then the matrices
D*(g), g ∈ G, also constitute a representation of G of the same dimension as
D, since

D(g)D(g′) = D(gg′)  ⇒  D*(g)D*(g′) = D*(gg′)                (1.73)
If D is equivalent to a real representation DR, then D is equivalent to D*. The
reason is that there exists a matrix C such that

DR(g) = C D(g) C⁻¹                (1.74)

and so

D*R(g) = C* D*(g) (C*)⁻¹                (1.75)

Therefore, since DR(g) is real and so DR(g) = D*R(g),

D*(g) = (C⁻¹C*)⁻¹ D(g) (C⁻¹C*)                (1.76)

and D is equivalent to D*. However the converse is not always true, i.e., if D is
equivalent to D* it does not mean D is equivalent to a real representation. So
we classify the representations into three classes regarding the relation between
D and D*.
Definition 1.15
1. If D is equivalent to a real representation it is said to be potentially real.
2. If D is equivalent to D* but not equivalent to a real representation it is
said to be pseudo-real.
3. If D is not equivalent to D* then it is said to be essentially complex.

Notice that if D is potentially real or pseudo-real then its characters are real.


Example 1.24 The rotation group on the plane, denoted SO(2), can be
represented by the matrices

R(θ) = ( cos θ   −sin θ )
       ( sin θ    cos θ )                (1.77)

such that

R(θ) ( x )   ( x cos θ − y sin θ )
     ( y ) = ( x sin θ + y cos θ )                (1.78)

One can easily check that R(θ)R(φ) = R(θ + φ). This group is abelian and
according to corollary 1.2 such a representation is reducible. Indeed, one gets

M R(θ) M⁻¹ = ( e^{iθ}     0      )
             (   0     e^{−iθ}  )                (1.79)

where

M = (1/√2) ( 1   i )
           ( i   1 )                (1.80)

The vectors of the representation space are then transformed as

M ( x )          (  x + iy )
  ( y ) = (1/√2) ( ix + y  )                (1.81)

The characters of these equivalent representations are

χ(θ) = 2 cos θ                (1.82)
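The statements of example 1.24 can be verified numerically; the following Python sketch (our own check, with sample angle values) confirms R(θ)R(φ) = R(θ + φ) and the simultaneous diagonalization (1.79):

```python
# Numerical check of example 1.24: composition law and diagonalization.
import math, cmath

def R(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t, f = 0.7, 1.9                    # sample angles
P, Q = matmul(R(t), R(f)), R(t + f)
assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))

# M R(t) M^{-1} = diag(e^{it}, e^{-it}), with M = (1/sqrt 2) [[1, i], [i, 1]]
s = 1 / math.sqrt(2)
M    = [[s, s*1j], [s*1j, s]]
Minv = [[s, -s*1j], [-s*1j, s]]    # M is unitary, so M^{-1} = M†
D = matmul(matmul(M, R(t)), Minv)
assert abs(D[0][0] - cmath.exp(1j*t)) < 1e-12 and abs(D[0][1]) < 1e-12
assert abs(D[1][1] - cmath.exp(-1j*t)) < 1e-12 and abs(D[1][0]) < 1e-12
```

Note that the trace of the diagonal form is e^{iθ} + e^{−iθ} = 2 cos θ, reproducing (1.82).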

Example 1.25 In example 1.23 we have discussed a 3-dimensional matrix
representation of S3. From definition 1.13 one can easily evaluate the
characters in such a representation

χ^D(a0) = 3
χ^D(a1) = χ^D(a2) = χ^D(a3) = 1
χ^D(a4) = χ^D(a5) = 0                (1.83)

Therefore

(1/6) Σ_{i=0}^{5} | χ^D(ai) |² = 2                (1.84)


From theorem 1.7 one sees that such a 3-dimensional representation is not
irreducible. Indeed, the one-dimensional subspace generated by the vector

                ( 1 )
|w3⟩ = (1/√3) ( 1 )                (1.85)
                ( 1 )

is an invariant subspace. The basis of the orthogonal complement of such a
subspace can be taken as

                (  1 )                    (  1 )
|w1⟩ = (1/√2) ( −1 ) ;  |w2⟩ = (1/√6) (  1 )                (1.86)
                (  0 )                    ( −2 )

Such a basis is related to the canonical basis defined in (1.42) by

|wi⟩ = |vj⟩ ∆ji                (1.87)

where i, j = 1, 2, 3 and

    (  1/√2    1/√6    1/√3 )
∆ = ( −1/√2    1/√6    1/√3 )                (1.88)
    (   0     −2/√6    1/√3 )

According to (1.49) the matrix representatives of the elements of S3 change as

D′(am) = ∆⁻¹ D(am) ∆                (1.89)

where m = 0, 1, 2, 3, 4, 5 and ∆⁻¹ = ∆ᵀ. One can easily check that

D′(am) = ( D″(am)   0 )
         (   0      1 )                (1.90)

where D″(am) is a 2-dimensional representation of S3 given by

D″(a0) = (  1      0   ) ;   D″(a1) = ( −1      0   ) ;
         (  0      1   )              (  0      1   )

D″(a2) = (  1/2   √3/2 ) ;   D″(a3) = (  1/2  −√3/2 ) ;
         ( √3/2  −1/2  )              ( −√3/2 −1/2  )

D″(a4) = ( −1/2   √3/2 ) ;   D″(a5) = ( −1/2  −√3/2 )                (1.91)
         ( −√3/2 −1/2  )              (  √3/2 −1/2  )


The characters in the representation D″ are given by

χ^{D″}(a0) = 2
χ^{D″}(a1) = χ^{D″}(a2) = χ^{D″}(a3) = 0
χ^{D″}(a4) = χ^{D″}(a5) = −1                (1.92)

Therefore

(1/6) Σ_{i=0}^{5} | χ^{D″}(ai) |² = 1                (1.93)

According to theorem 1.7 the representation D″ is irreducible. Consequently
the 3-dimensional representation D defined in (1.40) is completely reducible.
It decomposes into the irreducible 2-dimensional representation D″ and the
1-dimensional representation given by 1.
We have seen so far that S3 has two irreducible representations, the
two-dimensional representation D″ and the scalar (one-dimensional) representation
where all elements are represented by the number 1. Since 2² + 1² = 5 and
since the order of S3 is 6, we observe from theorem 1.8 that one irreducible
representation of dimension 1 is missing. That is easy to construct, and in
fact any Sn group has it. It is the representation where the permutations made
of an even number of simple permutations are represented by 1, and those with
an odd number by −1. Since the composition of permutations adds up the
numbers of simple permutations, it follows that it is indeed a representation.
Therefore, the missing one-dimensional irreducible representation of S3 is
given by

D‴(a0) = D‴(a4) = D‴(a5) = 1
D‴(a1) = D‴(a2) = D‴(a3) = −1                (1.94)
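The whole character analysis of example 1.25 fits in a few lines of Python; the sketch below (our own numerical companion) takes the character values (1.83), (1.92) and (1.94) and verifies the reducibility tests (1.84), (1.93), the decomposition via theorem 1.6, and the dimension count of theorem 1.8:

```python
# Character bookkeeping for S3, elements ordered a0, ..., a5 as in (1.40).
D3   = [3, 1, 1, 1, 0, 0]          # chi^D  of eq. (1.83), 3-dim permutation rep
triv = [1, 1, 1, 1, 1, 1]          # trivial representation
sign = [1, -1, -1, -1, 1, 1]       # D''' of eq. (1.94)
dd   = [2, 0, 0, 0, -1, -1]        # chi^{D''} of eq. (1.92)

# (1/|G|) sum_g chi(g) chi'(g); characters here are real, so no conjugation
inner = lambda x, y: sum(a*b for a, b in zip(x, y)) / 6

assert inner(D3, D3) == 2          # eq. (1.84): the 3-dim rep is reducible
assert inner(dd, dd) == 1          # eq. (1.93): D'' is irreducible
# theorem 1.6: D = trivial + D'', with the sign representation absent
assert inner(D3, triv) == 1
assert inner(D3, dd) == 1
assert inner(D3, sign) == 0
# theorem 1.8: 1^2 + 1^2 + 2^2 equals the order of S3
assert 1 + 1 + 4 == 6
```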


Chapter 2
Lie Groups and Lie Algebras
2.1 Lie groups

So far we have been looking at groups as sets of elements satisfying certain
postulates. However we can take a more geometrical point of view and look
at the elements of a group as being points of a space. The groups Sn and Zn,
discussed in examples 1.9 and 1.10, have a finite number of elements and
therefore their corresponding spaces are discrete spaces. Groups like these are
called finite discrete groups. The group formed by the integer numbers under
addition is also discrete but has an infinite number of elements. It constitutes
a one-dimensional regular lattice. These types of groups are called infinite
discrete groups. The interesting geometrical properties of groups appear when
their elements correspond to the points of a continuous space. We have then
what is called a continuous group. The real numbers under addition constitute
a continuous group since their elements can be seen as the points of an infinite
line. The group of rotations on a two-dimensional plane is also a continuous
group. Its elements can be parametrized by an angle varying from 0 to 2π, and
therefore they define a space which is a circle. In this sense the real numbers
under addition constitute a non-compact group and the rotations on the plane
a compact group.
Given a group G we can parametrize its elements by a set of parameters x1,
x2, ..., xn. If the group is continuous these parameters are continuous and
can be taken to be real parameters. The elements of the group can then be
denoted as g = g(x1, x2, ..., xn). A set of continuous parameters x1, x2, ..., xn is
said to be essential if one cannot find a set of continuous parameters y1, y2,
..., ym, with m < n, which suffices to label the elements of the group. When

we take the product of two elements of a group

g(x) g(x′) = g(x″)                (2.1)

the parameters of the resulting element are functions of the parameters of the
other two elements,

x″ = F(x, x′)                (2.2)

Analogously the parameters of the inverse element of a given g ∈ G are
functions of the parameters of g and vice-versa. If

g(x) g(x′) = e = g(x′) g(x)                (2.3)

then

x′ = f(x)                (2.4)
then
If the elements of a group G form a topological space and if the functions
F(x, x′) and f(x) are continuous functions of their arguments then we say that
G is a topological group. Notice that in a topological group we have to have
some compatibility between the algebraic and the topological structures.
When the elements of a group G constitute a manifold and when the functions
F(x, x′) and f(x), discussed above, possess derivatives of all orders with
respect to their arguments, i.e., are analytic functions, we say the group G is
a Lie group. This definition can be given in a formal way.

Definition 2.1 A Lie group is an analytic manifold which is also a group
such that the analytic structure is compatible with the group structure, i.e. the
operation G × G → G is an analytic mapping.

For more details about the geometrical concepts involved here see [HEL 78,
CBW 82, ALD 86, FLA 63].
Example 2.1 The real numbers under addition constitute a Lie group. Indeed,
we can use a real variable x to parametrize the group elements. Therefore
for two elements with parameters x and x′ the function in (2.2) is given by

x″ = F(x, x′) = x + x′                (2.5)

The function given in (2.4) is just

f(x) = −x                (2.6)

These two functions are obviously analytic functions of the parameters.


Example 2.2 The group of rotations on the plane, discussed in example 1.24,
is a Lie group. In fact the groups of rotations on IRⁿ, denoted by SO(n), are
Lie groups. These are the groups of orthogonal n × n real matrices O with unit
determinant (Oᵀ O = 1l, det O = 1).

Example 2.3 The groups GL(n) and SL(n) discussed in example 1.16 are
Lie groups, as well as the group SU(n) discussed in example 1.17.

Example 2.4 The groups Sn and Zn discussed in examples 1.9 and 1.10 are
not Lie groups.

2.2 Lie Algebras

The fact that Lie groups are differentiable manifolds has very important consequences. Manifolds are locally Euclidean spaces. Using the differentiable
structure we can approximate the neighborhood of any point of a Lie group
G by an Euclidean space which is the tangent space to the Lie group at that
particular point. This approximation is some sort of local linearization of the
Lie group and it is the approach we are going to use in our study of the algebraic structure of Lie groups. Obviously this approach does not tell us much
about the global properties of the Lie groups.
Let us begin by making some comments about tangent planes and tangent
vectors. A convenient way of describing tangent vectors is through linear
operators acting on functions. Consider a differentiable curve on a manifold
M and let the coordinates x^i, i = 1, 2, ..., dim M, of its points be parametrized
by a continuous variable t varying, let us say, from −1 to 1. Let f be any
differentiable function defined on a neighbourhood of the point p of the curve
corresponding to t = 0. The vector Vp tangent to the curve at the point p is
defined by

Vp(f) = (dx^i(t)/dt)|_{t=0} ∂f/∂x^i                (2.7)

Since the function f is arbitrary the tangent vector is independent of it. The
vector Vp is a tangent vector to M at the point p.
The tangent vectors at p to all differentiable curves passing through p form
the tangent space Tp M of the manifold M at the point p. This space is a
vector space since the sum of tangent vectors is again a tangent vector and the
multiplication of a tangent vector by a scalar (real or complex number) is also
a tangent vector.


Given a set of local coordinates x^i, i = 1, 2, ..., dim M, in a neighbourhood
of a point p of M we have that the operators ∂/∂x^i are linearly independent
and constitute a basis for the tangent space Tp M. Then, any tangent vector
Vp on Tp M can be written as a linear combination of this basis

Vp = Vp^i ∂/∂x^i                (2.8)

Now suppose that we vary the point p along a differentiable curve. As we
do that we obtain vectors tangent to the curve at each of its points. These
tangent vectors are continuously and differentiably related. If we choose a
tangent vector on Tp M for each point p of the manifold M such that this set
of vectors is differentiably related in the manner described above we obtain
what is called a vector field. Given a set of local coordinates on M we can
write a vector field V, in that coordinate neighbourhood, in terms of the basis
∂/∂x^i, and its components V^i are differentiable functions of these coordinates,

V = V^i(x) ∂/∂x^i                (2.9)

Given two vector fields V and W in a coordinate neighbourhood we can
evaluate their composite action on a function f. We have

W(V f) = W^j (∂V^i/∂x^j) (∂f/∂x^i) + W^j V^i ∂²f/∂x^j∂x^i                (2.10)

Due to the second term on the r.h.s. of (2.10) the operator W V is not a vector
field and therefore the ordinary composition of vector fields is not a vector
field. However if we take the commutator of the linear operators V and W we
get

[V, W] = ( V^i ∂W^j/∂x^i − W^i ∂V^j/∂x^i ) ∂/∂x^j                (2.11)

and this is again a vector field. So, the set of vector fields closes under the
operation of commutation and they form what is called a Lie algebra.
Definition 2.2 A Lie algebra G is a vector space over a field k with a bilinear
composition law

(x, y) → [x, y]
[x, ay + bz] = a[x, y] + b[x, z]                (2.12)

with x, y, z ∈ G and a, b ∈ k, and such that

1. [x, x] = 0

2. [x, [y, z]] + [z, [x, y]] + [y, [z, x]] = 0  (Jacobi identity)

Notice that (2.12) together with property 1 implies that [x, y] = −[y, x], since

0 = [x + y, x + y] = [x, y] + [y, x]                (2.13)
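The prototype of such a bracket is the matrix commutator [X, Y] = XY − YX, which is bilinear, antisymmetric, and satisfies the Jacobi identity automatically (because matrix multiplication is associative). A short Python check on arbitrary sample matrices (our own choice of entries):

```python
# The matrix commutator satisfies the Lie algebra axioms of definition 2.2.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def msub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A))] for i in range(len(A))]

def bracket(X, Y):
    return msub(matmul(X, Y), matmul(Y, X))

X = [[0, 1], [2, 3]]
Y = [[1, -1], [0, 2]]
Z = [[2, 0], [1, 1]]

assert bracket(X, X) == [[0, 0], [0, 0]]             # [x, x] = 0
# Jacobi identity: [x,[y,z]] + [z,[x,y]] + [y,[z,x]] = 0
J = [[bracket(X, bracket(Y, Z))[i][j]
      + bracket(Z, bracket(X, Y))[i][j]
      + bracket(Y, bracket(Z, X))[i][j] for j in range(2)] for i in range(2)]
assert J == [[0, 0], [0, 0]]
```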

Definition 2.3 A field is a set k together with two operations

(a, b) → a + b                (2.14)

and

(a, b) → ab                (2.15)

called respectively addition and multiplication such that

1. k is an abelian group under addition

2. k without the identity element of addition is an abelian group under
multiplication

3. multiplication is distributive with respect to addition, i.e.

a (b + c) = ab + ac
(a + b) c = ac + bc

The real and complex numbers are fields.

2.3 The Lie algebra of a Lie group

We have seen that vector fields on a manifold form a Lie algebra. We now
want to show that the Lie algebra of some special vector fields on a Lie group
is related to its group structure.
If we take a fixed element g of a Lie group G and multiply it from the left
by every element of G, we obtain a transformation of G onto G which is called
a left translation on G by g. In a similar way we can define right translations
on G. Under a left translation by g, an element g′, which is parametrized by
the coordinates x′^i (i = 1, 2, ..., dim G), is mapped into the element g″ = gg′,
and the parameters x″^i of g″ are analytic functions of x′^i. This mapping of
G onto G induces a mapping between the tangent spaces of G as follows: let
V be a vector field on G which corresponds to the tangent vectors Vg′ and Vg″
on the tangent spaces to G at g′ and g″ respectively. Let f be an arbitrary
function of the parameters x″^i of g″. We define a tangent vector Wg″ on Tg″ G
(the tangent plane to G at g″) by

Wg″ f ≡ Vg′(f(x″)) = V^i_{g′} ∂f(x″)/∂x′^i = V^i_{g′} (∂x″^j/∂x′^i) (∂f/∂x″^j)                (2.16)

This defines a mapping between the tangent spaces of G since, given Vg′ in
Tg′ G, we have associated a tangent vector Wg″ in Tg″ G. The vector Wg″ does
not necessarily have to coincide with the value of the vector field V at Tg″ G,
namely Vg″. However, when that happens we say that the vector field V is a
left invariant vector field on G, since that transformation was induced by left
translations on G.
The commutator of two left invariant vector fields, V and V̂, is again a left
invariant vector field. To check this consider the commutator of these vector
fields at the group element g′. According to (2.11)

Ṽ_{g′} ≡ [V_{g′}, V̂_{g′}] = ( V^i_{g′} ∂V̂^j_{g′}/∂x′^i − V̂^i_{g′} ∂V^j_{g′}/∂x′^i ) ∂/∂x′^j                (2.17)

Since V and V̂ are left invariant, at the group element g″ = gg′ we have,
according to (2.16), that

Ṽ_{g″} ≡ [V_{g″}, V̂_{g″}]
       = ( V^i_{g″} ∂V̂^j_{g″}/∂x″^i − V̂^i_{g″} ∂V^j_{g″}/∂x″^i ) ∂/∂x″^j
       = ( V^k_{g′} (∂x″^i/∂x′^k) ∂/∂x″^i ( V̂^l_{g′} ∂x″^j/∂x′^l )
           − V̂^k_{g′} (∂x″^i/∂x′^k) ∂/∂x″^i ( V^l_{g′} ∂x″^j/∂x′^l ) ) ∂/∂x″^j
       = ( V^i_{g′} ∂V̂^j_{g′}/∂x′^i − V̂^i_{g′} ∂V^j_{g′}/∂x′^i ) (∂x″^k/∂x′^j) ∂/∂x″^k
       = Ṽ^j_{g′} (∂x″^k/∂x′^j) ∂/∂x″^k                (2.18)

where the terms involving second derivatives of x″ have cancelled between the
two contributions.

So, V̂ is also left invariant. Therefore the set of left invariant vector fields forms
a Lie algebra. They constitute in fact a Lie subalgebra of the Lie algebra of
all vector fields on G.
Definition 2.4 A vector subspace H of a Lie algebra G is said to be a Lie
subalgebra of G if it closes under the Lie bracket, i.e.

[H, H] ⊂ H        (2.19)

and if H itself is a Lie algebra.


One should notice that a left invariant vector field is completely determined
by its value at any particular point of G. In particular it is determined by its
value at the group identity e. An important consequence of this is that the
Lie algebra of the left invariant vector fields at any point of G is completely
determined by the Lie algebra of these fields at the identity element of G.
Definition 2.5 The Lie algebra of the left invariant vector fields on a Lie
group is the Lie algebra of this Lie group.
Notice that the Lie algebra of a Lie group G is a subalgebra of the Lie algebra
of all vector fields on G. The Lie algebra of right invariant vector fields is
isomorphic to the Lie algebra of left invariant vector fields. Therefore the
definition above could also be given in terms of right invariant vector fields.
For any Lie group G it is always possible to find a number of linearly
independent left-invariant vector fields which is equal to the dimension of G.
These vector fields, which we shall denote by Ta (a = 1, 2, ...dim G), constitute
a basis of the tangent plane to G at any particular point, and they satisfy
[T_a, T_b] = i f_{ab}^c T_c        (2.20)

If we move from one point of G to another, this relation remains unchanged,
and therefore the quantities f_{ab}^c are point independent. For this reason they
are called the structure constants of the Lie algebra of G. Later we will see that

these constants contain all the information about the Lie algebra of G. Since
the relation above is point independent we are going to fix the tangent plane
to G at the identity element, Te G, as the vector space of the Lie algebra of G.
We could have defined right invariant vector fields in a similar way. Their Lie
algebra is isomorphic to the Lie algebra of the left-invariant fields.
A one parameter subgroup of a Lie group G is a differentiable curve, i.e., a
differentiable mapping from the real numbers into G, t → g(t), such that

g(t) g(s) = g(t + s) ;   g(0) = e        (2.21)

If we take a fixed element g' of G, then the mapping t → g'g(t) is a
differentiable curve on G. However this curve is not a one parameter subgroup,
since g'g(t) g'g(s) ≠ g'g(t + s). If we let g' vary over G we obtain a family
of curves which completely covers G. There are several curves of this family
passing through a given point of G. However, one can show (see [AUM 77])
that all curves of the family passing through a point have the same tangent
vector at that point. Therefore the family of curves g'g(t) can be used to define
a vector field on G. One can also show that this is a left-invariant vector field.
Consequently to each one parameter subgroup of G we have associated a left
invariant vector field.
If T is the tangent vector at the identity element to a differentiable curve
g(t) which is a one parameter subgroup, then it is possible to show that
g(t) = exp(tT )

(2.22)

This means that the straight line on the tangent plane to G at the identity
element, Te G, is mapped onto the one parameter subgroup of G, g(t). This is
called the exponential mapping of the Lie algebra of G (Te G) onto G. In fact,
it is possible to prove that in general, the exponential mapping is an analytic
mapping of Te G onto G and that it maps a neighbourhood of the zero element
of Te G in a one to one manner onto a neighbourhood of the identity element
of G. In several cases this mapping can be extended globally on G.
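It may help to see the exponential mapping at work. The sketch below (not part of the notes; it assumes numpy and uses a truncated-series matrix exponential) exponentiates the generator of rotations about the z axis and checks the one parameter subgroup property (2.21):

```python
import numpy as np

# Matrix exponential by truncated power series (adequate for small matrices
# and moderate parameter values).
def expm(A, terms=60):
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out += term
    return out

# Generator of rotations about the z axis (an element of so(3); illustrative choice).
T = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

def g(t):
    return expm(t * T)

t, s = 0.7, 0.4
# One parameter subgroup property g(t) g(s) = g(t + s), and g(0) = e.
assert np.allclose(g(t) @ g(s), g(t + s))
assert np.allclose(g(0.0), np.eye(3))
# g(t) is the rotation by angle t about z: check the cos/sin entries.
assert np.allclose(g(t)[0, 0], np.cos(t)) and np.allclose(g(t)[1, 0], np.sin(t))
```

The same check works for any matrix Lie algebra element, as long as the series is truncated deeply enough for the parameter range used.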
For more details about the exponential mapping and other geometrical
concepts involved here see [HEL 78, ALD 86, CBW 82, AUM 77].

2.4 Basic notions on Lie algebras

In the last section we have seen that the Lie algebra G of a Lie group G
possesses a basis T_a, a = 1, 2, ..., dim G, satisfying

[T_a, T_b] = i f_{ab}^c T_c        (2.23)

where the quantities f_{ab}^c are called the structure constants of the algebra. We
have introduced the imaginary unit i on the r.h.s. of (2.23) because if the
generators T_a are hermitian, T_a† = T_a, then the structure constants are real.
Notice that f_{ab}^c = -f_{ba}^c. From the definition of Lie algebra given in section
2.2 we have that the generators T_a satisfy the Jacobi identity

[T_a, [T_b, T_c]] + [T_c, [T_a, T_b]] + [T_b, [T_c, T_a]] = 0        (2.24)

and consequently the structure constants have to satisfy

f_{ad}^e f_{bc}^d + f_{cd}^e f_{ab}^d + f_{bd}^e f_{ca}^d = 0        (2.25)

with sum over repeated indices. We have also seen that the elements g of G
close to the identity element can be written, using the exponential mapping,
as
g = exp(i θ^a T_a)        (2.26)
where θ^a are the parameters of the Lie group. Under certain circumstances this
relation is also true for elements quite far away from the identity element (which
corresponds to θ^a = 0).
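Since su(2) (treated in section 2.5 below) has structure constants f_{ab}^c = ε_{abc}, the Jacobi identity (2.25) can be checked directly; a minimal numerical sketch (assuming numpy):

```python
import numpy as np

# Levi-Civita symbol: the structure constants of su(2), f_ab^c = eps_abc.
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c] = 1.0
    f[b, a, c] = -1.0

# Jacobi identity (2.25): f_ad^e f_bc^d + f_cd^e f_ab^d + f_bd^e f_ca^d = 0,
# with f[a, b, c] standing for f_ab^c.
jac = (np.einsum('ade,bcd->abce', f, f)
       + np.einsum('cde,abd->abce', f, f)
       + np.einsum('bde,cad->abce', f, f))
assert np.allclose(jac, 0.0)

# Antisymmetry f_ab^c = -f_ba^c holds by construction.
assert np.allclose(f, -np.transpose(f, (1, 0, 2)))
```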
If we conjugate elements of the Lie algebra by elements of the Lie group
we obtain elements of the Lie algebra again. Indeed, if L and T are elements
of the algebra one gets
exp(L) T exp(-L) = T + [L, T] + (1/2!) [L, [L, T]] + (1/3!) [L, [L, [L, T]]] + ...        (2.27)

In order to prove that relation, consider the quantity

f(λ) ≡ exp(λL) T exp(-λL)        (2.28)

then

f'(λ)     = exp(λL) [L, T] exp(-λL)
f''(λ)    = exp(λL) [L, [L, T]] exp(-λL)
...
f^(n)(λ)  = exp(λL) [L, ..., [L, [L, T]] ...] exp(-λL)        (2.29)

Then using the Taylor expansion around λ = 0 one gets

f(λ) = Σ_{n=0}^∞ (λ^n / n!) ad_L^n T        (2.30)

where we have denoted ad_L T ≡ [L, T]. Taking λ = 1 one gets (2.27).
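Relation (2.27) is easy to test numerically. The sketch below (illustrative matrices chosen arbitrarily; assumes numpy) compares the conjugation with the truncated ad_L series of (2.30):

```python
import numpy as np

def expm(A, terms=60):
    out = np.eye(A.shape[0], dtype=complex)
    term = np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out += term
    return out

# Two sample matrices playing the roles of L and T in (2.27).
L = np.array([[0.1, 0.3], [-0.2, 0.05]], dtype=complex)
T = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

lhs = expm(L) @ T @ expm(-L)

# Right hand side: sum over n of ad_L^n T / n!, with ad_L T = [L, T].
rhs = np.zeros_like(T)
term = T.copy()
fact = 1.0
for n in range(30):
    if n > 0:
        term = L @ term - term @ L   # one more application of ad_L
        fact *= n
    rhs += term / fact
assert np.allclose(lhs, rhs)
```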


The r.h.s. of (2.27) is an element of the algebra, and therefore the conjugation
g T g^{-1} defines a transformation on the algebra. In addition, if g'' = g'g
we see that the composition of the transformations associated to g' and g gives
the transformation associated to g''. Consequently, according to the concepts
discussed in section 1.5, these transformations define a representation of the
group G on a representation space which is the Lie algebra of G. Such a representation
is called the adjoint representation of the Lie group G. The matrices
d(g) representing the elements g ∈ G in this representation are given by

g T_a g^{-1} = T_b d_{ba}(g)        (2.31)

One can easily check that the n × n matrices d_{ba}(g), n = dim G, form a
representation of G, since if we take the element g_1 g_2 we get

g_1 g_2 T_a (g_1 g_2)^{-1} = T_b d_{ba}(g_1 g_2)
                           = g_1 (g_2 T_a g_2^{-1}) g_1^{-1}
                           = g_1 T_c g_1^{-1} d_{ca}(g_2)
                           = T_b d_{bc}(g_1) d_{ca}(g_2)        (2.32)

Since the generators T_a are linearly independent we have

d(g_1 g_2) = d(g_1) d(g_2)        (2.33)

From the definition (2.31) we see that the dimension of the adjoint representation
d(g) of G is equal to the dimension of G. It is a real representation in the
sense that the entries of the matrices d(g) are real.
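As an illustration (a sketch, not from the notes): for SU(2), with T_a = σ_a/2 and Tr(T_a T_b) = δ_{ab}/2, the matrix d_{ba}(g) of (2.31) can be extracted by tracing, and the homomorphism property (2.33) checked numerically (assumes numpy):

```python
import numpy as np

# Pauli matrices; T_a = sigma_a / 2 generate su(2).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s1 / 2, s2 / 2, s3 / 2]

def expm(A, terms=60):
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out += term
    return out

def d(g):
    # d_ba(g) defined by g T_a g^{-1} = T_b d_ba(g);
    # extracted using Tr(T_a T_b) = delta_ab / 2.
    ginv = np.conj(g.T)          # g is unitary, so g^{-1} = g^dagger
    return np.array([[2 * np.trace(g @ T[a] @ ginv @ T[b]).real
                      for a in range(3)] for b in range(3)])

g1 = expm(1j * 0.6 * T[0])
g2 = expm(1j * (0.3 * T[1] + 0.8 * T[2]))

# Homomorphism property (2.33): d(g1 g2) = d(g1) d(g2).
assert np.allclose(d(g1 @ g2), d(g1) @ d(g2))
# The adjoint matrices are real orthogonal (rotations): d^T d = 1.
D = d(g1)
assert np.allclose(D.T @ D, np.eye(3))
```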
Notice that the conjugation defines a mapping of the Lie algebra G into
itself which respects the commutation relations. Defining σ : G → G by

σ(T) ≡ g T g^{-1}        (2.34)

for a fixed g ∈ G and any T ∈ G, one has

[σ(T), σ(T')] = [g T g^{-1}, g T' g^{-1}]
              = g [T, T'] g^{-1}
              = σ([T, T'])        (2.35)

Such a mapping is called an automorphism of the Lie algebra.


Definition 2.6 A mapping σ of a Lie algebra G into itself is an automorphism
if it preserves the Lie bracket of the algebra, i.e.

[σ(T), σ(T')] = σ([T, T'])        (2.36)

for any T, T' ∈ G.
The mapping (2.34), in particular, is called an inner automorphism. All other
automorphisms, which are not conjugations, are called outer automorphisms.
If g is an element of G infinitesimally close to the identity, its parameters
θ^a in (2.26) are very small and we can write

g = 1 + i θ^a T_a        (2.37)

with θ^a infinitesimal. From (2.31) we have


(1 + i θ^a T_a) T_b (1 - i θ^c T_c) = T_c d_{cb}(1 + i θ^a T_a)
T_b + i θ^a [T_a, T_b]              = T_c (δ_{cb} + i θ^a d_{cb}(T_a))
T_b - θ^a f_{ab}^c T_c              = T_b + i θ^a T_c d_{cb}(T_a)        (2.38)

Since the infinitesimal parameters θ^a are arbitrary we get

d_{cb}(T_a) = i f_{ab}^c        (2.39)

Therefore in the adjoint representation the matrices representing the generators
are given by the structure constants of the algebra. This defines a matrix
representation of the Lie algebra. In fact, whenever one has a matrix
representation of a Lie group one gets, through the exponential mapping, a matrix
representation of the corresponding Lie algebra.
The concept of representation of a Lie algebra is basically the same as the
one we discussed in section 1.5 for the case of groups. The representation
theory of Lie algebras will be discussed in more detail later, but here we give
the formal definition.
Definition 2.7 If one can associate to every element T of a Lie algebra G an
n × n matrix D(T) such that

1. D(T + T') = D(T) + D(T')

2. D(aT) = a D(T)

3. D([T, T']) = [D(T), D(T')]

for T, T' ∈ G and a being a c-number, then we say that the matrices D define
an n-dimensional matrix representation of G.
Notice that given an element T of a Lie algebra G, one can define a
transformation on G as
T : G → G' = [T, G]        (2.40)
Using the Jacobi identity one can easily verify that the commutator of the
composition of two such transformations reproduces the Lie bracket operation
on G, i.e.
[T, [T', G]] - [T', [T, G]] = [[T, T'], G]        (2.41)
Therefore such transformations define a representation of G on G, which is
called the adjoint representation of G. Obviously, it has the same dimension
as G. Introducing the coefficients d_{ba}(T) as

[T, T_a] ≡ T_b d_{ba}(T)        (2.42)

where the T_a's constitute a basis for G, one then gets from (2.41)

[T, [T', T_a]] - [T', [T, T_a]] = T_c d_{cb}(T) d_{ba}(T') - T_c d_{cb}(T') d_{ba}(T)
                                = [[T, T'], T_a]
                                = T_c d_{ca}([T, T'])        (2.43)

and so

[d(T), d(T')] = d([T, T'])        (2.44)

Therefore, the matrices defined in (2.42) constitute a matrix representation of
G, which is the adjoint representation of G. Using (2.23) and (2.42) one gets that
d_{cb}(T_a) is indeed equal to i f_{ab}^c, as obtained in (2.39).
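A sketch (assuming numpy) that builds the adjoint matrices d_{cb}(T_a) = i f_{ab}^c from the su(2) structure constants f_{ab}^c = ε_{abc} and verifies both the representation property (2.44) and the Killing form formula (2.49) introduced below:

```python
import numpy as np

# Structure constants of su(2): f_ab^c = eps_abc.
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[b, a, c] = 1.0, -1.0

# Adjoint matrices (2.39): d_cb(T_a) = i f_ab^c.
d = np.array([1j * f[a].T for a in range(3)])  # d[a][c, b] = i f[a, b, c]

# Representation property (2.44): [d(T_a), d(T_b)] = i f_ab^c d(T_c).
for a in range(3):
    for b in range(3):
        comm = d[a] @ d[b] - d[b] @ d[a]
        assert np.allclose(comm, 1j * np.einsum('c,cij->ij', f[a, b], d))

# Killing form (2.49): eta_ab = Tr(d(T_a) d(T_b)) = -f_ac^d f_bd^c;
# it equals 2*delta_ab for su(2).
eta = np.einsum('aij,bji->ab', d, d)
assert np.allclose(eta, 2 * np.eye(3))
assert np.allclose(eta, -np.einsum('acd,bdc->ab', f, f))
```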
Notice that if G has an invariant subalgebra H, i.e. [G, H] ⊂ H, then from
(2.41) one observes that the vector space of H defines a representation of G,
which is in fact an invariant subspace of the adjoint representation. Therefore,
for non-simple Lie algebras, the adjoint representation is not irreducible.
In a given finite dimensional representation D of a Lie algebra we define
the quantity

η_D(T, T') ≡ Tr(D(T) D(T'))        (2.45)

which is symmetric and bilinear:

1. η_D(T, T') = η_D(T', T)

2. η_D(T, xT' + yT'') = x η_D(T, T') + y η_D(T, T'')

It satisfies

η_D([T, T'], T'') + η_D(T, [T'', T']) = 0        (2.46)

since, using the cyclic property of the trace,

Tr([D(T), D(T')] D(T'')) = Tr(D(T) [D(T'), D(T'')])        (2.47)

Eq. (2.46) is an invariance property of η_D(T, T'). Indeed from (2.45) we see
that
η_D(T, T') = η_D(g T g^{-1}, g T' g^{-1})        (2.48)
and taking g to be of the form (2.37) we obtain (2.46) as the first order
approximation in θ of (2.48). So η_D is a symmetric rank two tensor invariant
under the adjoint representation.

The quantity η_D(T, T') is called an invariant bilinear trace form for the Lie
algebra G. In the adjoint representation it is called the Killing form. From
(2.39) and (2.45) we have that the Killing form is given by

η_{ab} ≡ η(T_a, T_b) ≡ Tr(d(T_a) d(T_b)) = -f_{ac}^d f_{bd}^c        (2.49)

Definition 2.8 A Lie algebra is said to be abelian if all its elements commute
with one another.
In this case all the structure constants vanish and consequently the Killing
form is zero. However there might exist some representation D of an abelian
algebra for which the bilinear form (2.45) is not zero.
Definition 2.9 A subalgebra H of G is said to be an invariant subalgebra (or
ideal) if
[H, G] ⊂ H        (2.50)
From (2.27) we see that the Lie algebra of an invariant subgroup of a group G is
an invariant subalgebra of the Lie algebra of G.
Definition 2.10 We say a Lie algebra G is simple if it has no invariant subalgebras,
except zero and itself, and it is semisimple if it has no invariant abelian
subalgebras.
Theorem 2.1 (Cartan) A Lie algebra G is semisimple if and only if its
Killing form is non-degenerate, i.e.

det | Tr(d(T_a) d(T_b)) | ≠ 0        (2.51)

or, in other words, there is no T ∈ G, other than zero, such that

Tr(d(T) d(T')) = 0        (2.52)

for every T' ∈ G.
For the proof see chap. III of [JAC 79] or sec. 6 of appendix E of [COR 84].
Definition 2.11 We say a semisimple Lie algebra is compact if its Killing
form is positive definite.
The Lie algebra of a compact semisimple Lie group is a compact semisimple
Lie algebra. By choosing a suitable basis T_a we can put the Killing form of a
compact semisimple Lie algebra in the form

η_{ab} = δ_{ab}        (2.53)

Let us define the quantity

f_{abc} ≡ f_{ab}^d η_{dc}        (2.54)

From (2.49) we have

f_{abc} = f_{ab}^d Tr(d(T_d) d(T_c)) = -i Tr(d([T_a, T_b]) d(T_c))        (2.55)

Using the cyclic property of the trace one sees that f_{abc} is antisymmetric with
respect to all its three indices. Notice that, in general, f_{abc} is not a structure
constant. For a compact semisimple Lie algebra we have from (2.53) that f_{ab}^c = f_{abc},
and therefore the commutation relations (2.23) can be written as

[T_a, T_b] = i f_{abc} T_c        (2.56)

Therefore the structure constants of a compact semisimple Lie algebra can be
put in a completely antisymmetric form.

2.5 su(2) and sl(2): Lie algebra prototypes

As we have seen, the group SU(2) is defined as the group of 2 × 2 complex
unitary matrices with unit determinant. If an element of such a group is written
as g = exp iT, then the matrix T has to be hermitian and traceless. Therefore

the basis of the algebra su(2) of this group can be taken to be (half of) the
Pauli matrices (T_i ≡ σ_i/2):

T_1 = (1/2) [ 0 1 ; 1 0 ] ;   T_2 = (1/2) [ 0 -i ; i 0 ] ;   T_3 = (1/2) [ 1 0 ; 0 -1 ]        (2.57)

They satisfy the following commutation relations

[T_i, T_j] = i ε_{ijk} T_k        (2.58)

The matrices (2.57) define what is called the spinor (2-dimensional)
representation of the algebra su(2).
From (2.39) we obtain the adjoint representation (3-dimensional) of su(2):

d_{ij}(T_k) = i ε_{kji} = -i ε_{kij}        (2.59)

and so

d(T_1) = -i [ 0 0 0 ; 0 0 1 ; 0 -1 0 ] ;
d(T_2) = -i [ 0 0 -1 ; 0 0 0 ; 1 0 0 ] ;
d(T_3) = -i [ 0 1 0 ; -1 0 0 ; 0 0 0 ]        (2.60)

One can easily check that they satisfy (2.58).
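Both claims can be verified numerically; a sketch (assuming numpy):

```python
import numpy as np

# Spinor representation (2.57): half the Pauli matrices.
T = [np.array([[0, 1], [1, 0]], dtype=complex) / 2,
     np.array([[0, -1j], [1j, 0]], dtype=complex) / 2,
     np.array([[1, 0], [0, -1]], dtype=complex) / 2]

# Full Levi-Civita tensor, and the adjoint matrices (2.60): d_ij(T_k) = -i eps_kij.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0
d = [-1j * eps[k] for k in range(3)]   # d[k][i, j] = -i eps_kij

# Both representations satisfy the su(2) relations (2.58): [T_i, T_j] = i eps_ijk T_k.
for rep in (T, d):
    for i in range(3):
        for j in range(3):
            comm = rep[i] @ rep[j] - rep[j] @ rep[i]
            expected = sum(1j * eps[i, j, k] * rep[k] for k in range(3))
            assert np.allclose(comm, expected)

# Trace forms: Killing form (2.61) is 2*delta, spinor form (2.62) is delta/2.
eta = np.array([[np.trace(d[i] @ d[j]) for j in range(3)] for i in range(3)])
eta_s = np.array([[np.trace(T[i] @ T[j]) for j in range(3)] for i in range(3)])
assert np.allclose(eta, 2 * np.eye(3))
assert np.allclose(eta_s, np.eye(3) / 2)   # hence eta_s = eta / 4
```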


As we have seen, the group of rotations in three dimensions, SO(3), is defined
as the group of 3 × 3 real orthogonal matrices. Its elements close to the identity
can be written as g = exp iT, and therefore the Lie algebra so(3) of this group
is given by 3 × 3 pure imaginary, antisymmetric and traceless matrices. But the
matrices (2.60) constitute a basis for such an algebra. Therefore the Lie algebras
su(2) and so(3) are isomorphic, although the Lie groups SU(2) and SO(3) are
only homomorphic (in fact SO(3) ≈ SU(2)/Z_2).

The Killing form of this algebra, according to (2.49), is given by

η_{ij} = Tr(d(T_i) d(T_j)) = 2 δ_{ij}        (2.61)

So, it is non-degenerate. This is in agreement with theorem 2.1, since this
algebra is simple. According to definition 2.11, this is a compact algebra.
The trace form (2.45) in the spinor representation is given by

η_{ij}^s = Tr(D(T_i) D(T_j)) = (1/2) δ_{ij}        (2.62)

So, it is proportional to the Killing form, η^s = (1/4) η. This is a particular example
of a general theorem we will prove later: the trace form in any representation
of a simple Lie algebra is proportional to the Killing form.

Notice that the matrices in the representations discussed above are hermitian
and therefore the matrices representing the elements of the group are
unitary (g = exp iT). In fact this is a result which constitutes a generalization
of theorem 1.3 to the case of compact Lie groups: any finite dimensional
representation of a compact Lie group is equivalent to a unitary representation.

Since the generators are hermitian we can always choose one of them to be
diagonal. Traditionally one takes T_3 to be diagonal and defines (in the spinor
rep. T_3 is already diagonal)

T_± = T_1 ± i T_2        (2.63)

Notice that, formally, these are not elements of the algebra su(2) since we have
taken complex linear combinations of the generators. They are elements of the
complex algebra denoted by A_1. Using (2.58) one finds

[T_3, T_±] = ± T_± ;   [T_+, T_-] = 2 T_3        (2.64)

Therefore the generators of A_1 are written as eigenvectors of T_3. The eigenvalues
±1 are called the roots of su(2). We will show later that all Lie algebras
can be put in a similar form. In any representation one can check that the
operator
C = T_1² + T_2² + T_3²        (2.65)
commutes with all generators of su(2). It is called the quadratic Casimir
operator. The basis of the representation space can always be chosen to be
simultaneous eigenstates of the operators T_3 and C. These states can be
labelled by the spin j and the weight m:

T_3 |j, m⟩ = m |j, m⟩        (2.66)

The operators T_± raise and lower the eigenvalue of T_3, since using (2.64)

T_3 T_± |j, m⟩ = ([T_3, T_±] + T_± T_3) |j, m⟩
             = (m ± 1) T_± |j, m⟩        (2.67)

We are interested in finite representations, and therefore there can only exist
a finite number of eigenvalues m in a given representation. Consequently there

2.5. SU(2) AND SL(2): LIE ALGEBRA PROTOTYPES

51

must exist a state which possess the highest eigenvalue of T3 which we denote
j
T+ | j, ji = 0
(2.68)
The other states of the representation are obtained from | j, ji by applying T
successively on it. Again, since the representation is finite there must exist a
positive integer l such that
(T )l+1 | j, ji = 0

(2.69)

Using (2.63) one can write the Casimir operator (2.65) as

C = T_3² + (1/2)(T_+ T_- + T_- T_+)        (2.70)

So, using (2.64), (2.66) and (2.68),

C |j, j⟩ = ( T_3² + (1/2)[T_+, T_-] + T_- T_+ ) |j, j⟩
         = j (j + 1) |j, j⟩        (2.71)

Since C commutes with all generators of the algebra, any state of the
representation is an eigenstate of C with the same eigenvalue:

C |j, m⟩ = j (j + 1) |j, m⟩        (2.72)

where |j, m⟩ = (T_-)^n |j, j⟩ for m = j - n and n ≤ l. From Schur's lemma
(see lemma 1.1), in an irreducible representation the Casimir operator has to be
proportional to the unit matrix, and so

C = j(j + 1) 1l        (2.73)

Using (2.70) one can write

T_+ T_- = C - T_3² + T_3        (2.74)

Therefore, applying T_+ on both sides of (2.69),

T_+ T_- (T_-)^l |j, j⟩ = 0
                       = ( j(j + 1) - (j - l)² + (j - l) ) (T_-)^l |j, j⟩        (2.75)

Since, by assumption, the state (T_-)^l |j, j⟩ does exist, one must have

j(j + 1) - (j - l)² + (j - l) = (2j - l)(l + 1) = 0        (2.76)

Since l is a positive integer, the only possible solution is l = 2j. Therefore we
conclude that

1. The lowest eigenvalue of T_3 is -j.

2. The eigenvalues of T_3 can only be integers or half integers, and in a given
representation they vary from -j to j in integral steps.
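These conclusions can be made concrete by building the (2j+1)-dimensional matrices directly. The sketch below (assuming numpy) uses the standard ladder-operator matrix elements T_- |j, m⟩ = √(j(j+1) - m(m-1)) |j, m-1⟩, which are not derived in the text up to this point, and checks (2.64), (2.68) and (2.73):

```python
import numpy as np

def spin_rep(j):
    """T3, T+, T- in the spin-j representation, basis |j,j>, |j,j-1>, ..., |j,-j>."""
    dim = int(round(2 * j)) + 1
    m = np.array([j - n for n in range(dim)])      # T3 eigenvalues, highest first
    T3 = np.diag(m)
    Tp = np.zeros((dim, dim))
    for n in range(1, dim):
        # <j, m[n]+1 | T+ | j, m[n]> = sqrt(j(j+1) - m(m+1))
        Tp[n - 1, n] = np.sqrt(j * (j + 1) - m[n] * (m[n] + 1))
    Tm = Tp.T                                      # T- = (T+)^dagger (entries are real)
    return T3, Tp, Tm

for j in (0.5, 1.0, 1.5, 2.0):
    T3, Tp, Tm = spin_rep(j)
    # Commutation relations (2.64).
    assert np.allclose(T3 @ Tp - Tp @ T3, Tp)
    assert np.allclose(Tp @ Tm - Tm @ Tp, 2 * T3)
    # Casimir (2.70) equals j(j+1) times the identity, as in (2.73).
    C = T3 @ T3 + (Tp @ Tm + Tm @ Tp) / 2
    assert np.allclose(C, j * (j + 1) * np.eye(T3.shape[0]))
    # Highest weight state is annihilated by T+ (2.68).
    assert np.allclose(Tp[:, 0], 0.0)
```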

The group SL(2), as defined in example 1.16, is the group of 2 × 2 real matrices
with unit determinant. If one writes the elements close to the identity
as g = exp L (without the i factor), then L is a real traceless 2 × 2 matrix. So
the basis of the algebra sl(2) can be taken as

L_1 = (1/2) [ 0 1 ; 1 0 ] ;   L_2 = (1/2) [ 0 -1 ; 1 0 ] ;   L_3 = (1/2) [ 1 0 ; 0 -1 ]        (2.77)

This defines a 2-dimensional representation of sl(2) which differs from the spinor
representation of su(2), given in (2.57), by a factor i in L_2. One can check that
they satisfy

[L_1, L_2] = L_3 ;   [L_1, L_3] = L_2 ;   [L_2, L_3] = L_1        (2.78)

From these commutation relations one can obtain the adjoint representation
of sl(2), using (2.39):

d(L_1) = [ 0 0 0 ; 0 0 1 ; 0 1 0 ] ;
d(L_2) = [ 0 0 1 ; 0 0 0 ; -1 0 0 ] ;
d(L_3) = [ 0 -1 0 ; -1 0 0 ; 0 0 0 ]        (2.79)

According to (2.49), the Killing form of sl(2) is given by

η_{ij} = Tr(d(L_i) d(L_j)) = 2 [ 1 0 0 ; 0 -1 0 ; 0 0 1 ]        (2.80)

sl(2) is a simple algebra, and we see that its Killing form is indeed
non-degenerate (see theorem 2.1). From definition 2.11 we conclude that sl(2) is a
non-compact Lie algebra.
The trace form (2.45) in the 2-dimensional representation (2.77) of sl(2) is

η_{ij}^{2dim} = Tr(L_i L_j) = (1/2) [ 1 0 0 ; 0 -1 0 ; 0 0 1 ]        (2.81)

Similarly to the case of su(2), this trace form is proportional to the Killing
form, η^{2dim} = (1/4) η.
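The indefinite signature of (2.80), and its proportionality to (2.81), can be checked directly; a sketch (assuming numpy):

```python
import numpy as np

# sl(2) basis (2.77) and its adjoint representation (2.79).
L = [np.array([[0, 1], [1, 0]]) / 2,
     np.array([[0, -1], [1, 0]]) / 2,
     np.array([[1, 0], [0, -1]]) / 2]
dL = [np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float),
      np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float),
      np.array([[0, -1, 0], [-1, 0, 0], [0, 0, 0]], dtype=float)]

# Commutation relations (2.78) hold in both representations.
for rep in (L, dL):
    assert np.allclose(rep[0] @ rep[1] - rep[1] @ rep[0], rep[2])
    assert np.allclose(rep[0] @ rep[2] - rep[2] @ rep[0], rep[1])
    assert np.allclose(rep[1] @ rep[2] - rep[2] @ rep[1], rep[0])

# Killing form (2.80) and 2-dimensional trace form (2.81): signature (+, -, +),
# so sl(2) is non-compact; the two forms are proportional, eta_2dim = eta / 4.
eta = np.array([[np.trace(dL[i] @ dL[j]) for j in range(3)] for i in range(3)])
eta2 = np.array([[np.trace(L[i] @ L[j]) for j in range(3)] for i in range(3)])
assert np.allclose(eta, 2 * np.diag([1.0, -1.0, 1.0]))
assert np.allclose(eta2, eta / 4)
```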
The operators

L_± ≡ L_1 ∓ L_2        (2.82)

according to (2.78), satisfy commutation relations identical to (2.64):

[L_3, L_±] = ± L_± ;   [L_+, L_-] = 2 L_3        (2.83)

The quadratic Casimir operator of sl(2) is

C = L_1² - L_2² + L_3² = L_3² + (1/2)(L_+ L_- + L_- L_+)        (2.84)

The analysis we did for su(2), from eqs. (2.66) to (2.76), applies also to sl(2),
and the conclusions are the same, i.e., in a finite dimensional representation of
sl(2) with highest eigenvalue j of L_3, the lowest eigenvalue is -j. In addition,
the eigenvalues of L_3 can only be integers or half integers varying from -j
to j in integral steps. The striking difference, however, is that the finite
representations of sl(2) (where these results hold) are not unitary. On the
contrary, the finite dimensional representations of su(2) are all equivalent to
unitary representations. Indeed, the exponentiation of the matrices (2.57) and
(2.60) (with the i factor) provides unitary matrices, while the exponentiation of
(2.77) and (2.79) does not. All unitary representations of sl(2) are necessarily
infinite dimensional. In fact this is true for any non-compact Lie algebra.
The structures discussed in this section for the cases of su(2) and sl(2) are
in fact the basic structures underlying all simple Lie algebras. The rest of this
course will be dedicated to this study.
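The contrast shows up immediately upon exponentiation; a sketch (assuming numpy), exponentiating one su(2) generator (with the i factor) against one sl(2) generator (without it):

```python
import numpy as np

def expm(A, terms=80):
    out, term = np.eye(A.shape[0], dtype=complex), np.eye(A.shape[0], dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out += term
    return out

# An su(2) spinor generator (2.57) and an sl(2) generator (2.77).
T2 = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
L1 = np.array([[0, 1], [1, 0]], dtype=complex) / 2

g_su2 = expm(1j * 1.3 * T2)   # group element of SU(2)
g_sl2 = expm(1.3 * L1)        # group element of SL(2)

# exp(i theta T2) is unitary ...
assert np.allclose(g_su2 @ np.conj(g_su2.T), np.eye(2))
# ... while exp(theta L1) = cosh(theta/2) 1 + sinh(theta/2) sigma_1
# is real but not unitary: its entries grow without bound in theta.
assert np.allclose(g_sl2.imag, 0.0)
assert not np.allclose(g_sl2 @ np.conj(g_sl2.T), np.eye(2))
```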

2.6 The structure of semisimple Lie algebras

We now start the study of the features which are common to all semisimple
Lie algebras. These features are in fact a generalization of the properties of
the algebra of angular momentum discussed in section 2.5. We will be mainly
interested in compact semisimple algebras although several results also apply
to the case of non-compact Lie algebras.
Theorem 2.2 Given a subalgebra H of a compact semisimple Lie algebra G,
we can write
G = H + P        (2.85)
where
[H, P] ⊂ P        (2.86)
and where P is the orthogonal complement of H in G w.r.t. a trace form in a given
representation, i.e.
Tr(P H) = 0        (2.87)

Proof P does not contain any element of H and contains all elements of G
which are not in H. Using the cyclic property of the trace,

Tr(H [H, P]) = Tr([H, H] P) = Tr(H P) = 0        (2.88)

Therefore
[H, P] ⊂ P        (2.89)
□
This theorem does not apply to non-compact algebras because the trace
form does not provide a Euclidean type metric, i.e. there can exist null vectors,
which are orthogonal to themselves. As an example consider sl(2).
Example 2.5 Consider the subalgebra H of sl(2) generated by (L_1 + L_2) (see
section 2.5). Its complement P is generated by (L_1 - L_2) and L_3. However,
this is not an orthogonal complement since, using (2.80),

Tr((L_1 + L_2)(L_1 - L_2)) = 4        (2.90)

In addition (L_1 ± L_2) are null vectors, since

Tr(L_1 + L_2)² = Tr(L_1 - L_2)² = 0        (2.91)

Using (2.78) one can check that (2.86) is not satisfied. Indeed

[L_1 + L_2, L_1 - L_2] = -2 L_3
[L_1 + L_2, L_3] = (L_1 + L_2)        (2.92)

So
[H, P] ⊂ H + P        (2.93)

Notice that P is a subalgebra too, since

[L_3, L_1 - L_2] = (L_1 - L_2)        (2.94)
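The computations of this example can be verified with the adjoint matrices (2.79); a sketch (assuming numpy):

```python
import numpy as np

# Adjoint representation (2.79) of sl(2).
d1 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)
d2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
d3 = np.array([[0, -1, 0], [-1, 0, 0], [0, 0, 0]], dtype=float)

H = d1 + d2            # generator of the subalgebra H
P1, P2 = d1 - d2, d3   # generators of the complement P

# (2.90)-(2.91): H and P are not trace-orthogonal, and L1 +- L2 are null vectors.
assert np.isclose(np.trace(H @ P1), 4.0)
assert np.isclose(np.trace(H @ H), 0.0)
assert np.isclose(np.trace(P1 @ P1), 0.0)

# (2.92)-(2.94): [H, P] spills into H + P, while P closes on itself.
comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(H, P1), -2 * d3)   # [L1+L2, L1-L2] = -2 L3
assert np.allclose(comm(H, d3), H)         # [L1+L2, L3] = (L1+L2)
assert np.allclose(comm(d3, P1), P1)       # [L3, L1-L2] = (L1-L2)
```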

Theorem 2.3 A compact semisimple Lie algebra is a direct sum of simple
algebras that commute among themselves.

Proof If G is not simple then it has an invariant subalgebra H such that

[H, G] ⊂ H        (2.95)

But from theorem 2.2 we have that

[H, P] ⊂ P        (2.96)

and therefore, since P ∩ H = 0, we must have

[H, P] = 0        (2.97)

But P, in this case, is a subalgebra, since

Tr([P, P] H) = Tr(P [P, H]) = 0        (2.98)

and from theorem 2.2 again

[P, P] ⊂ P        (2.99)

If P and H are not simple we repeat the process. □


Theorem 2.4 For a simple Lie algebra the invariant bilinear trace form defined
in eq. (2.45) is the same in all representations up to an overall constant.
Consequently they are all proportional to the Killing form.
Proof Using the definition (2.31) of the adjoint representation and the
invariance property (2.48) of η_D(T, T') we have

η_D(T_a, T_b) = Tr(D(g T_a g^{-1}) D(g T_b g^{-1}))
             = Tr(D(T_c d_{ca}(g)) D(T_d d_{db}(g)))
             = (d^T)_{ac}(g) η_D(T_c, T_d) d_{db}(g)
             = (d^T η_D d)_{ab}        (2.100)

Therefore η_D is an invariant tensor under the adjoint representation. This is
true for any representation D, in particular the adjoint itself. So, the Killing
form η defined in (2.49) also satisfies (2.100). From theorem 2.1 we have that
for a semisimple Lie algebra det η ≠ 0, and therefore η has an inverse. Then
multiplying both sides of (2.100) by η^{-1} and using the fact that η^{-1} = (d^T η d)^{-1}
we get
η^{-1} η_D = (d^T η d)^{-1} (d^T η_D d) = d^{-1} η^{-1} η_D d        (2.101)
and so
d(g) η^{-1} η_D = η^{-1} η_D d(g)        (2.102)

For a simple Lie algebra the adjoint representation is irreducible. Therefore,
using Schur's lemma (see lemma 1.1), we get

η^{-1} η_D = ξ 1l ,  i.e.  η_D = ξ η        (2.103)

So, the theorem is proven. □


The proportionality constant is representation dependent and is called the Dynkin index
of the representation D.
We will now show that it is possible to find a set of commuting generators
such that all other generators are written as eigenstates of them (under the
commutator). These commuting generators are the generalization of T3 in
su(2) and they generate what is called the Cartan subalgebra.
Definition 2.12 For a semisimple Lie algebra G, the Cartan subalgebra is
the maximal set of commuting elements of G which can be diagonalized simultaneously.
The formal definition of the Cartan subalgebra of a Lie algebra (semisimple or
not) is a little bit more sophisticated and involves two concepts which we now
discuss. The normalizer of a subalgebra K of G is defined by the set
N(K) ≡ { x ∈ G | [x, K] ⊂ K }        (2.104)

Using the Jacobi identity we have

[[x, x'], K] ⊂ K        (2.105)

with x, x' ∈ N(K). Therefore the normalizer N(K) is a subalgebra of G and K
is an invariant subalgebra of N(K). So we can say that the normalizer of K in
G is the largest subalgebra of G which contains K as an invariant subalgebra.
Consider the sequence of subspaces of G

G_0 = G ;  G_1 = [G, G] ;  G_2 = [G, G_1] ;  ...  G_i = [G, G_{i-1}]        (2.106)

We have that G_0 ⊇ G_1 ⊇ G_2 ⊇ ... ⊇ G_i, and each G_i is an invariant subalgebra
of G. We say G is a nilpotent algebra if G_n = 0 for some n. Nilpotent algebras
are not semisimple.
Similarly we can define the derived series

G^(0) = G ;  G^(1) = [G, G] ;  G^(2) = [G^(1), G^(1)] ;  ...  G^(i) = [G^(i-1), G^(i-1)]        (2.107)

If G^(n) = 0 for some n then we say G is a solvable algebra. All nilpotent
algebras are solvable, but the converse is not true.
Definition 2.13 A Cartan subalgebra of a Lie algebra G is a nilpotent subalgebra which is equal to its normalizer in G.
Lemma 2.1 If G is semisimple then a Cartan subalgebra of G is a maximal
abelian subalgebra of G such that its generators can be diagonalized simultaneously.
Definition 2.14 The dimension of the Cartan subalgebra of G is the rank of
G.
Notice that if H_1, H_2, ..., H_r are the generators of the Cartan subalgebra then
g^{-1} H_1 g, g^{-1} H_2 g, ..., g^{-1} H_r g (g ∈ G) generate an abelian subalgebra of G
with the same dimension as the one generated by the H_i, i = 1, 2, ..., r. This is
also a Cartan subalgebra. Therefore there are an infinite number of Cartan
subalgebras in G, and they are all related by conjugation by elements of the
group G whose algebra is G.
By choosing suitable linear combinations one can make the basis of the
Cartan subalgebra orthonormal with respect to the Killing form of G, i.e.¹

Tr(H_i H_j) = δ_{ij}        (2.108)

¹ As we have shown, up to an overall constant, the trace form of a simple Lie algebra
is the same in all representations. We will simplify the notation from now on, and write
Tr(T T') instead of η_D(T, T'). We shall specify the representation where the trace is being
evaluated only when that is relevant.

with i, j = 1, 2, ..., rank G. From the definition of the Cartan subalgebra we see
that these generators can be diagonalized simultaneously.
We now want to construct the generalization of the operators T_± = T_1 ± iT_2
of su(2), discussed in section 2.5, for the case of any compact semisimple Lie
algebra. They are called step operators, and their number is dim G - rank G.
According to theorem 2.2 they constitute the orthogonal complement of the
Cartan subalgebra, and therefore

Tr(H_i T_m) = 0        (2.109)

with i = 1, 2, ..., rank G and m = 1, 2, ..., (dim G - rank G). In addition, since a
compact semisimple Lie algebra is a Euclidean space, we can make the basis
T_m orthonormal, i.e.
Tr(T_m T_n) = δ_{mn}        (2.110)
Again from theorem 2.2 we have that the commutator of an element of the
Cartan subalgebra with T_m is an element of the subspace generated by the basis
T_m. Then, since the algebra is compact, we can put its structure constants in
a completely antisymmetric form and write

[H_i, T_m] = i f_{imn} T_n        (2.111)

or
[H_i, T_m] = (h_i)_{mn} T_n        (2.112)

where we have defined the matrices

(h_i)_{mn} = i f_{imn}        (2.113)

of dimension (dim G - rank G), and which are hermitian:

(h_i†)_{mn} = (h_i)*_{nm} = -i f_{inm} = i f_{imn} = (h_i)_{mn}        (2.114)

Therefore we can find a unitary transformation that diagonalizes the matrices
h_i without affecting the Cartan subalgebra generators H_i:

T_m → U_{mn} T_n ;   (h_i)_{mn} → (U h_i U†)_{mn}        (2.115)

with U† = U^{-1}. We shall denote by E_α the new basis of the subspace
orthogonal to the Cartan subalgebra. The indices α stand for the eigenvalues of the

matrices h_i (or of the generators H_i). The commutation relations (2.112) can
now be written as
[H_i, E_α] = α_i E_α        (2.116)
The eigenvalues α_i are the components of a vector α of dimension rank G, and
they are called the roots of the algebra G. The generators E_α are called step
operators, and they are complex linear combinations of the hermitian generators
T_m. Notice that the roots α_i are real since they are the eigenvalues of the
hermitian matrices h_i.
From (2.113) we see that the matrices h_i are antisymmetric, and their off
diagonal elements are purely imaginary. So

h_i^T = -h_i ;   h_i* = -h_i        (2.117)

Therefore if v_α is an eigenvector of the matrix h_i, then since the eigenvalue α_i is
real we have
h_i v_α = α_i v_α        (2.118)
and then
h_i v_α* = -h_i* v_α* = -(h_i v_α)* = -α_i v_α*        (2.119)

Consequently if α is a root its negative (-α) is also a root. Thus the roots
always occur in pairs.
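For su(2) this machinery is tiny: the Cartan subalgebra is generated by T_3, and the matrix h_3 of (2.113) (with f_{imn} = ε_{imn}) acts on the span of T_1, T_2. A sketch (assuming numpy) showing its eigenvalues are the paired roots ±1, and that the eigenvector for +1 reproduces the step operator T_+ = T_1 + iT_2 of (2.63):

```python
import numpy as np

# h_3 from (2.113) with f_3mn = eps_3mn, acting on the span of T_1, T_2.
h3 = np.array([[0, 1j], [-1j, 0]])
assert np.allclose(h3, np.conj(h3.T))          # hermitian, as in (2.114)

# Its eigenvalues are the roots of su(2): +1 and -1, occurring as a pair.
vals = np.sort(np.linalg.eigvalsh(h3))
assert np.allclose(vals, [-1.0, 1.0])

# A coefficient vector c with [H_3, c_m T_m] = alpha c_m T_m solves c h3 = alpha c,
# i.e. c is an eigenvector of h3^T; for alpha = +1 this gives c = (1, i),
# reproducing the step operator T_+ = T_1 + i T_2.
w, v = np.linalg.eig(h3.T)
c = v[:, np.argmax(w.real)]
c = c / c[0]                                   # normalize the first component to 1
assert np.allclose(c, [1.0, 1j])
```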
We have shown that we can decompose a compact semisimple algebra G as

G = H + Σ_α G_α        (2.120)

where H is generated by the commuting generators H_i and constitutes the
Cartan subalgebra of G. The subspace G_α is generated by the step operators
E_α. This is called the root space decomposition of G. In addition one can show
that for a semisimple Lie algebra

dim G_α = 1 ,  for any root α        (2.121)

and consequently the roots are non-degenerate. So, there are no two step
operators E_α and E'_α corresponding to the same root α. Therefore for a
semisimple Lie algebra one has

dim G - rank G = Σ_α dim G_α = number of roots = even number

Using the Jacobi identity and the commutation relations (2.116) we have that
if α and β are roots then

[H_i, [E_α, E_β]] = -[E_α, [E_β, H_i]] - [E_β, [H_i, E_α]]
                  = (α_i + β_i) [E_α, E_β]        (2.122)

Since the algebra is closed under the commutator, [E_α, E_β] must
be an element of the algebra. We then have three possibilities:

1. α + β is a root of the algebra, and then [E_α, E_β] ∝ E_{α+β}

2. α + β is not a root, and then [E_α, E_β] = 0

3. α + β = 0, and consequently [E_α, E_{-α}] must be an element of the Cartan
subalgebra, since it commutes with all H_i.

Since in a semisimple Lie algebra the roots are non-degenerate (see (2.121)),
we conclude from (2.122) that 2α is never a root.
We then see that the knowledge of the roots of the algebra provides all
the information about the commutation relations and consequently about the
structure of the algebra. From what we have learned so far, we can write the
commutation relations of a semisimple Lie algebra G as

[H_i, H_j] = 0        (2.123)

[H_i, E_α] = α_i E_α        (2.124)

[E_α, E_β] = N_{αβ} E_{α+β}   if α + β is a root
           = H_α              if α + β = 0
           = 0                otherwise        (2.125)

where H_α ≡ 2 α·H / α², i, j = 1, 2, ..., rank G (see the discussion leading to (2.129)
and (2.130)). The structure constants N_{αβ} will be determined later. The basis
{H_i, E_α} is called the Weyl-Cartan basis of a semisimple Lie algebra.
Using the cyclic property of the trace (2.47) (or equivalently, the invariance property (2.46)) we get that, in a given representation,

    Tr([H_i, E_α] E_β) = Tr(E_α [E_β, H_i])                          (2.126)

and so

    (α_i + β_i) Tr(E_α E_β) = 0                                      (2.127)

The step operators are therefore orthogonal unless they have equal and opposite roots. In particular, E_α is orthogonal to itself. If it were orthogonal to all the others, the Killing form would have vanishing determinant and the algebra would not be semisimple. Therefore for semisimple algebras, if α is a root then −α must also be a root, and Tr(E_α E_{−α}) ≠ 0. The value of Tr(E_α E_{−α}) is connected to the structure constant of the second relation in (2.125). We know that [E_α, E_{−α}] must be an element of the Cartan subalgebra. Therefore we write

    [E_α, E_{−α}] = x_i H_i                                          (2.128)

2.6. THE STRUCTURE OF SEMISIMPLE LIE ALGEBRAS


Using (2.108) and the cyclic property of the trace we get

    x_j = Tr(x_i H_i H_j)
        = Tr([E_α, E_{−α}] H_j)
        = Tr([H_j, E_α] E_{−α})
        = α_j Tr(E_α E_{−α})                                         (2.129)

Consequently [E_α, E_{−α}] must be proportional to α·H. Normalizing the step operators such that

    Tr(E_α E_{−α}) = 2/α²                                            (2.130)

we obtain the second relation in (2.125).


Again using the invariance property (2.46) we have that

    Tr([H_i, E_α] H_j) = Tr([H_j, H_i] E_α) = 0                      (2.131)

and so

    α_i Tr(H_j E_α) = 0                                              (2.132)

Since by assumption α is a root, and therefore different from zero, we get

    Tr(H_i E_α) = 0                                                  (2.133)

From the above results and (2.108) we see that we can normalize the Cartan subalgebra generators H_i and the step operators E_α such that the Killing form becomes

    Tr(H_i H_j) = δ_ij ;   i, j = 1, 2, ... rank G
    Tr(H_i E_α) = 0                                                  (2.134)
    Tr(E_α E_β) = (2/α²) δ_{α+β,0}

This is the usual normalization of the Weyl-Cartan basis.
Notice that the linear combinations (E_α ± E_{−α}) diagonalize the Killing form (2.134). By taking real linear combinations of H_i, (E_α + E_{−α}) and i(E_α − E_{−α}) one obtains a compact algebra, since the eigenvalues of the Killing form are all of the same sign. On the other hand, if one takes real linear combinations of H_i, (E_α + E_{−α}) and (E_α − E_{−α}), one obtains a non-compact algebra.

Example 2.6 In section 2.5 we have discussed the algebra of the group SU(2). In that case the Cartan subalgebra is generated by T_3 only. The step operators are T_+ and T_−, corresponding to the roots +1 and −1 respectively. So the rank of SU(2) is one. We can represent these roots by the diagram 2.1.


Figure 2.1: The root diagram of A1 (su(2),so(3) or sl(2))

2.7 The algebra su(3)

In example 1.17 we defined the groups SU(N). We now discuss in more detail the algebra of the group SU(3). As we have seen, this is defined as the group of all 3×3 unitary matrices with unit determinant. If we write an element of this group as g = exp(iT), we see that T has to be hermitian in order for g to be unitary. In addition, using the fact that det(exp A) = exp(Tr A), we see that Tr T = 0 in order that det g = 1. So the Lie algebra of SU(3) is generated by the 3×3 hermitian and traceless matrices. Its dimension is 3² − 1 = 8. The Cartan subalgebra is generated by the diagonal matrices. Since they have to be traceless, we have only two linearly independent diagonal matrices. Therefore the rank of SU(3) is two, and consequently it has six roots. The usual basis of the algebra su(3) is given by the Gell-Mann matrices, which are a generalization of the Pauli matrices:

0 1 0

1 = 1 0 0 ;
0 0 0

1 0 0

3 = 0 1 0
;
0 0 0
0 0 i

5 = 0 0 0 ;
i 0 0
0 0 0

7 =
0 0 i ;
0 i 0

0 i 0

2 = i 0 0 ;
0 0 0

0 0 1

4 = 0 0 0
;
1 0 0
0 0 0

6 = 0 0 1 ;
0 1 0

1 0 0

8 = 13
0 1 0
0 0 2

(2.135)

The trace form in such a matrix representation is given by

    Tr(λ_i λ_j) = 2 δ_ij                                             (2.136)

with i, j = 1, 2, ... 8. The algebra su(3) is simple and therefore, according to theorem 2.4, the Killing form is proportional to (2.136). Therefore, according to definition 2.11, we see that su(3) is a compact algebra.

The matrices (2.135) satisfy the commutation relations

    [λ_i, λ_j] = i f_ijk λ_k                                         (2.137)

where the structure constants f_ijk are completely antisymmetric (see (2.56)) and are given in table 2.1. The diagonal matrices λ_3 and λ_8 are the generators



    i  j  k    f_ijk
    1  2  3      2
    1  4  7      1
    1  5  6     −1
    2  4  6      1
    2  5  7      1
    3  4  5      1
    3  6  7     −1
    4  5  8     √3
    6  7  8     √3

Table 2.1: Structure constants of su(3)


of the Cartan subalgebra. One can easily check that they satisfy the conditions of definition 2.13. We see that the remaining matrices play the role of the T_m in (2.112). Therefore we can construct the step operators as linear combinations of them. However, as in the su(2) case, these are complex linear combinations, and so the step operators are not really generators of su(3). Doing that, and normalizing the generators conveniently, we obtain the Weyl-Cartan basis for such an algebra:

    H_1 = λ_3/√2 ;   H_2 = λ_8/√2
    E_{±1} = (λ_1 ± iλ_2)/2 ;   E_{±2} = (λ_6 ± iλ_7)/2 ;   E_{±3} = (λ_4 ± iλ_5)/2
                                                                     (2.138)

So they satisfy

    Tr(H_i H_j) = δ_ij ;   Tr(E_m E_{−n}) = δ_mn                     (2.139)

with i, j = 1, 2 and m, n = 1, 2, 3. One can check that in such a basis the commutation relations read

    [H_1, E_1] = √2 E_1 ;           [H_2, E_1] = 0
    [H_1, E_2] = −(1/√2) E_2 ;      [H_2, E_2] = √(3/2) E_2
    [H_1, E_3] = (1/√2) E_3 ;       [H_2, E_3] = √(3/2) E_3
                                                                     (2.140)
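Since the generators (2.138) are explicit 3×3 matrices, relations such as (2.139) and (2.140) can be verified numerically. The following sketch uses only the Python standard library; the helper functions (mat_mul, comm, etc.) are ours and not part of the notes:

```python
# Numerical check of the su(3) Weyl-Cartan basis (2.138)-(2.140).
import math

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(a, b, s=1):
    return [[a[i][j] + s * b[i][j] for j in range(len(a))] for i in range(len(a))]

def scal(c, a):
    return [[c * x for x in row] for row in a]

def comm(a, b):
    return mat_add(mat_mul(a, b), mat_mul(b, a), -1)

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

l1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
l2 = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
l3 = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
l8 = scal(1 / math.sqrt(3), [[1, 0, 0], [0, 1, 0], [0, 0, -2]])

H1 = scal(1 / math.sqrt(2), l3)
H2 = scal(1 / math.sqrt(2), l8)
E1 = scal(0.5, mat_add(l1, scal(1j, l2)))   # E_{+1} = (λ1 + iλ2)/2

# [H1, E1] = √2 E1 and [H2, E1] = 0, i.e. the root α1 = (√2, 0)
d1 = mat_add(comm(H1, E1), scal(-math.sqrt(2), E1))
assert all(abs(x) < 1e-12 for row in d1 for x in row)
assert all(abs(x) < 1e-12 for row in comm(H2, E1) for x in row)

# normalization (2.139): Tr(H_i H_j) = δ_ij
assert abs(trace(mat_mul(H1, H1)) - 1) < 1e-12
assert abs(trace(mat_mul(H2, H2)) - 1) < 1e-12
assert abs(trace(mat_mul(H1, H2))) < 1e-12
```

The same machinery checks the remaining relations in (2.140) by substituting E_2 and E_3.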



Figure 2.2: The root diagram of A2 (SU (3) or SL(3))


Therefore the roots of su(3) are

    α_1 = (√2, 0) ;   α_2 = (−1/√2, √(3/2)) ;   α_3 = (1/√2, √(3/2))   (2.141)

and the corresponding negative ones.


Notice that all roots have the same length (α² = 2) and the angle between any two of them is a multiple of π/3. The six roots of su(3) form the regular diagram shown in figure 2.2. This is called the root diagram of su(3). The root diagram of a Lie algebra lives in a Euclidean space of the same dimension as the Cartan subalgebra, i.e., the rank of the algebra. The root diagram is very useful in understanding the structure of the algebra. For instance, from (2.125) and the diagram 2.2 one sees that

    [E_1, E_3] = [E_3, E_2] = [E_2, E_{−1}] = 0
    [E_{−1}, E_{−3}] = [E_{−3}, E_{−2}] = [E_{−2}, E_1] = 0          (2.142)

and also

    [E_1, E_{−1}] = √2 H_1
    [E_2, E_{−2}] = −(1/√2) H_1 + √(3/2) H_2
    [E_3, E_{−3}] = (1/√2) H_1 + √(3/2) H_2                          (2.143)

Whenever the sum of two roots is a root of the diagram we know, from (2.125), that the corresponding step operators do not commute. One can check that


the non-vanishing commutators between step operators are

    [E_1, E_2] = E_3 ;         [E_{−1}, E_{−2}] = −E_{−3}
    [E_{−1}, E_3] = E_2 ;      [E_1, E_{−3}] = −E_{−2}
    [E_3, E_{−2}] = E_1 ;      [E_{−3}, E_2] = −E_{−1}
                                                                     (2.144)

We have seen that the algebra su(3) is generated by real linear combinations of the Gell-Mann matrices (2.135), or equivalently of the matrices H_i, i = 1, 2, (E_m + E_{−m}) and i(E_m − E_{−m}), m = 1, 2, 3. These are hermitian matrices. If one takes real linear combinations of H_i, (E_m + E_{−m}) and (E_m − E_{−m}) instead, one obtains the algebra sl(3), which is not compact. This is very similar to the relation between su(2) and sl(2) which we saw in section 2.5. This generalizes, in fact, to all su(N) and sl(N).

2.8 The properties of roots

We have seen that for a semisimple Lie algebra G, if α is a root then −α is also a root. This means that for each step operator E_α there exists a corresponding step operator E_{−α}. Together with H_α ≡ 2α·H/α² they constitute an sl(2) subalgebra of G, since from (2.124) and (2.125) one gets

    [H_α, E_{±α}] = ±2 E_{±α}
    [E_α, E_{−α}] = H_α                                              (2.145)

This subalgebra is isomorphic to sl(2), since H_α plays the role of 2T_3, while E_α and E_{−α} play the roles of T_+ and T_− respectively (see section 2.5). Therefore to each pair of roots α and −α we can associate an sl(2) subalgebra. These subalgebras, however, do not have to commute among themselves.

We have learned in section 2.5 that T_3, the third component of the angular momentum, has half-integer eigenvalues, and consequently H_α (∼ 2T_3) must have integer eigenvalues. From (2.124) we have

    [H_α, E_β] = (2α·β/α²) E_β                                       (2.146)

Therefore if |m⟩ is an eigenstate of H_α with an integer eigenvalue m, then the state E_β |m⟩ has eigenvalue m + 2α·β/α², since

    H_α E_β |m⟩ = (E_β H_α + [H_α, E_β]) |m⟩
                = (m + 2α·β/α²) E_β |m⟩                              (2.147)


    2α·β/α²    2α·β/β²      θ        α²/β²
       0          0        π/2    undetermined
       1          1        π/3         1
      −1         −1       2π/3         1
       1          2        π/4         2
      −1         −2       3π/4         2
       1          3        π/6         3
      −1         −3       5π/6         3

Table 2.2: The possible scalar products, angles and ratios of squared lengths for the roots
This implies that

    2α·β/β² = integer                                                (2.148)

for any roots α and β. This result is crucial in the study of the structure of semisimple Lie algebras. In order to satisfy this condition the roots must have some very special properties. From the Schwarz inequality we get (the roots live in a Euclidean space, since they inherit the scalar product from the Killing form of G restricted to the Cartan subalgebra: α·β ≡ Tr(α·H β·H) = Σ_{i=1}^{rank G} α_i β_i)

    α·β = |α| |β| cos θ ≤ |α| |β|                                    (2.149)

where θ is the angle between α and β. Consequently

    (2α·β/α²)(2α·β/β²) = mn = 4 (cos θ)² ≤ 4                         (2.150)

where m and n are integers according to (2.148), and so

    0 ≤ mn ≤ 4                                                       (2.151)

This condition is very restrictive, and from it we get that the possible values of the scalar products, angles and ratios of squared lengths between any two roots are those given in table 2.2. For the case of α being parallel or anti-parallel to β we have cos θ = ±1 and consequently mn = 4. In this case the possible values of m and n are

1. 2α·β/α² = ±2 and 2α·β/β² = ±2

2. 2α·β/α² = ±1 and 2α·β/β² = ±4

3. 2α·β/α² = ±4 and 2α·β/β² = ±1

In case 1 we have that β = α, which is trivial, or β = −α, which is the fact discussed earlier, i.e., that to every root α there corresponds a root −α in a semisimple Lie algebra. In case 2 we have β = ±2α, which is impossible in a semisimple Lie algebra: in (2.121) we have seen that dim G_α = 1, and therefore there exists only one step operator corresponding to a root α; and from (2.122) we see that 2α and −2α cannot be roots, since [E_α, E_α] = [E_{−α}, E_{−α}] = 0. Case 3 is similar to case 2. Therefore in a semisimple Lie algebra the only roots which are multiples of α are ±α.

Notice that there are only three possible values for the ratio of squared lengths of roots, namely 1, 2 and 3 (there are five if one considers the reciprocals 1/2 and 1/3). However, for a given simple Lie algebra, where there is no disjoint, mutually orthogonal set of roots, only two different lengths of roots can occur. The reason is that if α, β and γ are roots of a simple Lie algebra with α²/β² = 2 and α²/γ² = 3, then it follows that γ²/β² = 2/3, and this is not an allowed value for the ratio of the squared lengths of two roots (see table 2.2).
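The integrality condition (2.148) and the bounds of table 2.2 are easy to test on a concrete root system. The sketch below checks every pair of su(3) roots (2.141); the helper code is ours:

```python
# Check (2.148) on the su(3) roots: 2α·β/β² is an integer for every pair.
import math
import itertools

a1 = (math.sqrt(2), 0.0)
a2 = (-1 / math.sqrt(2), math.sqrt(1.5))
a3 = (1 / math.sqrt(2), math.sqrt(1.5))
roots = [a1, a2, a3] + [(-x, -y) for (x, y) in (a1, a2, a3)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for a, b in itertools.product(roots, repeat=2):
    m = 2 * dot(a, b) / dot(b, b)
    assert abs(m - round(m)) < 1e-12     # (2.148): the ratio is an integer
    assert abs(round(m)) <= 2            # for su(3): only 0, ±1, ±2 occur
```

Since all su(3) roots have the same length, only the rows of table 2.2 with length ratio 1 (plus the parallel case m = ±2) appear here.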

2.9 The Weyl group

In section 2.8 we have shown that to each pair of roots α and −α of a semisimple Lie algebra we can associate an sl(2) (or su(2)) subalgebra generated by the operators H_α, E_α and E_{−α} (see eq. (2.145)). We now define the hermitian operators

    T_1(α) = (E_α + E_{−α})/2
    T_2(α) = (E_α − E_{−α})/2i                                       (2.152)

which satisfy the commutation relations

    [H_i, T_1(α)] = i α_i T_2(α)
    [H_i, T_2(α)] = −i α_i T_1(α)
    [T_1(α), T_2(α)] = (i/2) H_α                                     (2.153)

The operator T_2(α) is the generator of rotations about the 2-axis, and a rotation by π is generated by the element

    S_α = exp(iπ T_2(α))                                             (2.154)

Using (2.27) and (2.153) one can check that

    S_α (x·H) S_α^{−1} = x·H + (x·α) T_1(α) sin π + (x·α/α²) (α·H)(cos π − 1)
                       = (x_i − 2 (x·α/α²) α_i) H_i
                       = σ_α(x)·H                                    (2.155)

where we have defined the operator σ_α, acting on the root space, by

    σ_α(x) ≡ x − 2 (x·α/α²) α                                        (2.156)

This operator corresponds to a reflection with respect to the plane perpendicular to α. Indeed, if θ is the angle between x and α, then x·α/|α| = |x| cos θ. Therefore σ_α(x) is obtained from x by subtracting a vector parallel (or anti-parallel) to α with length twice the projection of x in the direction of α. These reflections are called Weyl reflections on the root space.
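The reflection (2.156) is a one-line function, and its basic properties (it is an involution and, as shown next, it maps roots to roots) can be checked directly on the su(3) root system. The helper names below are ours:

```python
# The Weyl reflection (2.156): σ_α(x) = x − 2(x·α/α²)α.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def weyl_reflect(alpha, x):
    c = 2 * dot(x, alpha) / dot(alpha, alpha)
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

a1 = (math.sqrt(2), 0.0)
a2 = (-1 / math.sqrt(2), math.sqrt(1.5))
a3 = (1 / math.sqrt(2), math.sqrt(1.5))
roots = [a1, a2, a3] + [tuple(-x for x in a) for a in (a1, a2, a3)]

def close(u, v):
    return all(abs(x - y) < 1e-12 for x, y in zip(u, v))

for a in roots:
    for b in roots:
        image = weyl_reflect(a, b)
        assert close(weyl_reflect(a, image), b)        # σ_α² = 1
        assert any(close(image, r) for r in roots)     # σ_α(β) is a root

# σ_1 exchanges α_2 and α_3 (see example 2.7 below)
assert close(weyl_reflect(a1, a2), a3)
```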


We now want to show that if α and β are roots of a given Lie algebra G, then σ_α(β) is also a root. Let us introduce the operator

    E'_β ≡ S_α E_β S_α^{−1}                                          (2.157)

where E_β is a step operator of the algebra and S_α is defined in (2.154). From the fact that (see (2.124))

    [x·H, E_β] = x·β E_β                                             (2.158)

we get, using (2.155), that

    S_α [x·H, E_β] S_α^{−1} = [S_α (x·H) S_α^{−1}, S_α E_β S_α^{−1}]   (2.159)
                            = [σ_α(x)·H, E'_β]                         (2.160)
                            = x·β S_α E_β S_α^{−1} = x·β E'_β          (2.161)

and so

    [σ_α(x)·H, E'_β] = x·β E'_β                                      (2.162)

However, if we perform a reflection twice we get back to where we started, i.e., σ_α² = 1. Therefore, denoting σ_α(x) by y, we get that σ_α(y) = x, and then from (2.162)

    [y·H, E'_β] = σ_α(y)·β E'_β                                      (2.163)

Since σ_α is an orthogonal transformation, σ_α(y)·β = y·σ_α(β), and so

    [H_i, E'_β] = (σ_α(β))_i E'_β                                    (2.164)

Therefore E'_β, defined in (2.157), is a step operator corresponding to the root σ_α(β). Consequently if α and β are roots, σ_α(β) is necessarily a root (and similarly σ_β(α)).
Example 2.7 In section 2.7 we have discussed the algebra of the group SU(3). The root diagram with the planes perpendicular to the roots is given in figure 2.3. One sees that the root diagram is invariant under Weyl reflections. We have

    σ_1 :  α_1 → −α_1 ;   α_2 → α_3 ;    α_3 → α_2
    σ_2 :  α_1 → α_3 ;    α_2 → −α_2 ;   α_3 → α_1
    σ_3 :  α_1 → −α_2 ;   α_2 → −α_1 ;   α_3 → −α_3


Figure 2.3: The planes orthogonal to the roots of A2 (SU (3) or SL(3))
For the compositions one finds

    σ_1 σ_2 :  α_1 → α_2 ;    α_2 → −α_3 ;   α_3 → −α_1
    σ_2 σ_1 :  α_1 → −α_3 ;   α_2 → α_1 ;    α_3 → −α_2
                                                                     (2.165)

Notice that the composition of Weyl reflections is not necessarily a reflection, and that reflections do not commute. In this particular case the operation σ_2 σ_1 is a rotation by an angle of 2π/3, and σ_1 σ_2 is its inverse. The set of Weyl reflections and the compositions of two or more of them form a group, called the Weyl group. It leaves the root diagram of su(3) invariant. This group is isomorphic to S_3, and in fact the Weyl group of su(N) is S_N, the group of permutations of N elements.
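The whole Weyl group can be generated mechanically by closing the set of generating reflections under composition. A minimal sketch for su(3) (our own helper code; matrix entries are rounded so matrices can be stored in a Python set):

```python
# Generate the Weyl group of su(3) from the reflections σ_1, σ_2.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflection_matrix(alpha):
    # matrix of σ_α(x) = x − 2(x·α/α²)α in the standard basis
    n = dot(alpha, alpha)
    return tuple(tuple(round((1 if i == j else 0)
                             - 2 * alpha[i] * alpha[j] / n, 9)
                       for j in range(2)) for i in range(2))

def mul(a, b):
    return tuple(tuple(round(sum(a[i][k] * b[k][j] for k in range(2)), 9)
                       for j in range(2)) for i in range(2))

a1 = (math.sqrt(2), 0.0)
a2 = (-1 / math.sqrt(2), math.sqrt(1.5))

group = {reflection_matrix(a1), reflection_matrix(a2)}
while True:
    new = {mul(g, h) for g in group for h in group} - group
    if not new:
        break
    group |= new

assert len(group) == 6   # three reflections + identity + two rotations = S_3
```

The closure contains the identity, the three reflections σ_1, σ_2, σ_3 and the two rotations by ±2π/3, in agreement with the example above.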

Definition 2.15 The Weyl group of a Lie algebra, or of its root system, is the finite discrete group generated by the Weyl reflections.

From the considerations above we see that the Weyl group leaves the root system invariant. However, it does not contain all the symmetries of the root system. The inversion α → −α is certainly a symmetry of the root system of any semisimple Lie algebra but, in general, it is not an element of the Weyl group. In the case of su(3), discussed in example 2.7, the inversion cannot be written in terms of reflections. In addition, the root diagram of su(3) is invariant under rotations of π/3, and this operation is not an element of the Weyl group of su(3).


As we have seen, the conjugation by the group element S_α defined in (2.154) maps x·H into σ_α(x)·H and E_β into E_{σ_α(β)}. Therefore such a mapping imitates, in the algebra, the Weyl reflections of the roots. According to (2.34) this is an inner automorphism of the algebra. Consequently any transformation of the Weyl group can be elevated to an inner automorphism of the corresponding algebra. In fact, any symmetry of the root diagram can be used to construct an automorphism of the algebra. However, those symmetries which do not belong to the Weyl group give rise to outer automorphisms. We will see later that the mapping H_i → −H_i, E_α → −E_{−α}, E_{−α} → −E_α is an automorphism of any semisimple Lie algebra. It is a consequence of the invariance of the root diagram under the inversion α → −α. It will be an inner (outer) automorphism if the inversion is (is not) an element of the Weyl group.

We can summarize all the results about roots we have obtained so far in the form of four postulates.
Definition 2.16 A set Δ of vectors in a Euclidean space is the root system or root diagram of a semisimple Lie algebra G if

1. Δ does not contain zero, spans a Euclidean space of the same dimension as the rank of the Lie algebra G, and the number of elements of Δ is equal to dim G − rank G.

2. If α ∈ Δ, then the only multiples of α in Δ are ±α.

3. If α, β ∈ Δ, then 2α·β/α² is an integer.

4. If α, β ∈ Δ, then σ_α(β) ∈ Δ, i.e., the Weyl group leaves Δ invariant.


Notice that if the root diagram decomposes into two or more disjoint, mutually orthogonal subdiagrams, then the corresponding Lie algebra is not simple. Suppose the rank of the algebra is r and that the diagram decomposes into two orthogonal subdiagrams of dimensions m and n such that m + n = r. By taking bases v_i (i = 1, 2 ... m) and u_k (k = 1, 2 ... n) in each subdiagram we can split the generators of the Cartan subalgebra into two subsets of the form H_v ≡ v·H and H_u ≡ u·H. From (2.158) we see that the generators H_v commute with all step operators corresponding to roots in the subdiagram generated by the u_k, and vice versa. In addition, since the sum of a root of one subdiagram with a root of the other is not a root, we conclude that the corresponding step operators commute. Therefore each subdiagram corresponds to an invariant subalgebra of the Lie algebra whose root diagram is their union.




Figure 2.4: The root diagram of su(2) ⊕ su(2)

Figure 2.5: The Weyl chambers of A1 (su(2),so(3) or sl(2))
Example 2.8 The root diagram shown in figure 2.4 is made of two orthogonal diagrams. Since each one is the diagram of an su(2) algebra, we conclude from the discussion above that it corresponds to the algebra su(2) ⊕ su(2). Remember that the ratio of the squared lengths of the orthogonal roots is undetermined in this case (see table 2.2).

2.10 Weyl Chambers and simple roots

The hyperplanes perpendicular to the roots, defined in section 2.9, partition the root space into finitely many regions. These connected regions (without the hyperplanes) are called Weyl Chambers. Due to the regularity of the root systems, all the Weyl chambers have the same form and are equivalent.

Example 2.9 In the case of su(2) (or so(3) and sl(2)) there are only two Weyl chambers, each one corresponding to a half line. These are shown in figure 2.5. In the case of su(3) there are 6 Weyl chambers. They are shown in figure 2.6.

Notice that under a Weyl reflection all points of a Weyl chamber are mapped into the same Weyl chamber, and therefore the Weyl group takes one Weyl Chamber into another. In fact the Weyl group acts transitively on Weyl Chambers, and its order is the number of Weyl Chambers. In general the number of roots is bigger than the number of Weyl Chambers.

Since the Weyl Chambers are equivalent to one another, we will choose one of them and call it the Fundamental Weyl Chamber. Consider now a vector x inside this particular chamber. The scalar product of x with any root is always different from zero, since if it were zero x would be on the hyperplane




Figure 2.6: The Weyl chambers of A2 (SU (3) or SL(3))


perpendicular to α and therefore not inside a Weyl chamber. As we move x within the chamber, the sign of α·x does not change, since in order to change sign α·x would have to vanish, and therefore x would have to cross a hyperplane. Therefore the scalar product of a root with any vector inside a Weyl Chamber has a definite sign.

Definition 2.17 Let x be any vector inside the Fundamental Weyl chamber. We say α is a positive root if α·x > 0 and a negative root if α·x < 0.

Definition 2.18 We say a positive root is a simple root if it cannot be written as the sum of two positive roots.

Example 2.10 In the case of su(3), if we choose the Fundamental Weyl chamber to be the one shown in figure 2.6, then the positive roots are α_1, α_2 and α_3. We see that α_1 and α_2 are simple, but α_3 is not, since α_3 = α_1 + α_2.
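Definitions 2.17 and 2.18 are directly computable. The sketch below classifies the su(3) roots with respect to a vector x chosen inside the Fundamental Weyl Chamber; the particular x and the helper functions are our own choices, not from the notes:

```python
# Positive and simple roots of su(3) (definitions 2.17 and 2.18).
import math

a1 = (math.sqrt(2), 0.0)
a2 = (-1 / math.sqrt(2), math.sqrt(1.5))
a3 = (a1[0] + a2[0], a1[1] + a2[1])       # α_3 = α_1 + α_2
roots = [a1, a2, a3] + [(-u, -v) for (u, v) in (a1, a2, a3)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = (1.0, 2.0)   # a vector with x·α > 0 for α_1, α_2, α_3 (assumed chamber)
positive = [r for r in roots if dot(r, x) > 0]
assert len(positive) == 3

def close(u, v):
    return abs(u[0] - v[0]) < 1e-9 and abs(u[1] - v[1]) < 1e-9

def is_simple(r):
    # a simple root is not a sum of two positive roots
    return not any(close((p[0] + q[0], p[1] + q[1]), r)
                   for p in positive for q in positive)

simple = [r for r in positive if is_simple(r)]
assert len(simple) == 2                   # α_1 and α_2; α_3 = α_1 + α_2
```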
Theorem 2.5 Let α and β be non-proportional roots. Then

1. if α·β > 0, α − β is a root
2. if α·β < 0, α + β is a root

Proof If α·β > 0 we see from table 2.2 that either 2α·β/α² or 2α·β/β² is equal to 1. Without loss of generality we can take 2α·β/β² = 1. Therefore

    σ_β(α) = α − (2α·β/β²) β = α − β                                 (2.166)


So, from the invariance of the root system under the Weyl group, α − β is also a root, as well as β − α. The proof for the case α·β < 0 is similar. □

Theorem 2.6 Let α and β be distinct simple roots. Then α − β is not a root and α·β ≤ 0.

Proof Suppose α − β is a root. If α − β is positive we write α = (α − β) + β, and if it is negative we write β = (β − α) + α. In both cases we get a contradiction with the fact that α and β are simple. Therefore α − β cannot be a root. From theorem 2.5 we conclude that α·β cannot be positive. □
Theorem 2.7 Let α_1, α_2, ... α_r be the set of all simple roots of a semisimple Lie algebra G. Then r = rank G and each root α of G can be written as

    α = Σ_{a=1}^{r} n_a α_a                                          (2.167)

where the n_a are integers, and they are positive or zero if α is a positive root, and negative or zero if α is a negative root.
Proof Suppose the simple roots were linearly dependent. Denote by x_a and −y_b the positive and negative coefficients, respectively, of a vanishing linear combination of the simple roots. Then write

    Σ_{a=1}^{s} x_a α_a = Σ_{b=s+1}^{r} y_b α_b ≡ v                  (2.168)

with each α_a being different from each α_b. Therefore, since by theorem 2.6 the scalar products α_a·α_b are non-positive,

    v² = Σ_{a,b} x_a y_b α_a·α_b ≤ 0                                 (2.169)

Since v is a vector in a Euclidean space, it follows that the only possibility is v² = 0, and so v = 0. But this implies x_a = y_b = 0, and consequently the simple roots must be linearly independent.

Now let γ be a positive root. If it is not simple, then γ = α + β with α and β both positive. If α and/or β are not simple, we can write them as the sum of two positive roots. Notice that γ cannot appear in the expansion of α and/or β in terms of two positive roots, since if x is a vector of the Fundamental Weyl Chamber we have x·γ = x·α + x·β. Since they are all positive roots, we have x·γ > x·α and x·γ > x·β. Therefore α or β cannot be written as γ + δ with δ a positive root. For the same reason, α and β will not appear in the expansion of any further root appearing in this process. Thus we can continue such a process until γ is written as a sum of simple roots, i.e. γ = Σ_{a=1}^{r} n_a α_a with each n_a being zero or a positive integer. Since, for semisimple Lie algebras, the roots come in pairs (α and −α), it follows that the negative roots are written in terms of the simple roots in the same way, with the n_a being zero or negative integers. We then see that the set of simple roots spans the root space. Since the simple roots are linearly independent, they form a basis, and consequently r = rank G. □
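Theorem 2.7 can be illustrated concretely: expanding each su(3) root in the simple-root basis {α_1, α_2} gives integer coefficients of a single sign. The expansion routine below (Cramer's rule; our own helper code) verifies this:

```python
# Expand the su(3) roots in the basis of simple roots α_1, α_2 (theorem 2.7).
import math

a1 = (math.sqrt(2), 0.0)
a2 = (-1 / math.sqrt(2), math.sqrt(1.5))
a3 = (a1[0] + a2[0], a1[1] + a2[1])
roots = [a1, a2, a3] + [(-x, -y) for (x, y) in (a1, a2, a3)]

def expand(v):
    # solve v = n1*a1 + n2*a2 by Cramer's rule
    det = a1[0] * a2[1] - a1[1] * a2[0]
    n1 = (v[0] * a2[1] - v[1] * a2[0]) / det
    n2 = (a1[0] * v[1] - a1[1] * v[0]) / det
    return round(n1), round(n2)

for r in roots:
    n1, n2 = expand(r)
    assert n1 * n2 >= 0        # integer coefficients, all of one sign
    assert (n1, n2) in {(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)}
```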

2.11 Cartan matrix and Dynkin diagrams

In order to define positive and negative roots, and then simple roots, we have chosen one particular Weyl Chamber to play a special role. This was called the Fundamental Weyl Chamber. However, any Weyl Chamber can play such a role, since they are all equivalent. As we have seen, the Weyl group transforms one Weyl Chamber into another. In fact, one can show (see page 51 of [HUM 72]) that there exists one and only one element of the Weyl group which takes one Weyl Chamber into any other.

By changing the choice of the Fundamental Weyl Chamber one changes the set of simple roots. This implies that the various choices of simple roots are related by Weyl reflections. From figure 2.6 we see that in the case of SU(3) any of the pairs of roots (α_1, α_2), (α_3, −α_1), (−α_2, α_3), (−α_1, −α_2), (−α_3, α_1), (α_2, −α_3) could be taken as the simple roots. The common features in these pairs are the angle between the roots and the ratio of their lengths (in the case of SU(3) this is trivial, since all roots have the same length, but in other cases it is not).

Therefore the important information about the simple roots can be encoded in their scalar products. For this reason we introduce an r × r matrix (r = rank G) as

    K_ab ≡ 2 α_a·α_b / α_b²                                          (2.170)

(a, b = 1, 2, ... rank G), which is called the Cartan matrix of the Lie algebra. As we will see, it contains all the relevant information about the structure of the algebra G. Let us see some of its properties:

1. It provides the angle between any two simple roots, since

    K_ab K_ba = 4 (α_a·α_b)² / (α_a² α_b²)                           (2.171)

with no summation on a or b, and so

    cos θ = −(1/2) √(K_ab K_ba)                                      (2.172)

where θ is the angle between α_a and α_b. We take the minus sign because, according to theorem 2.6, the simple roots always form obtuse angles.

2. The Cartan matrix gives the ratio of the lengths of any two simple roots, since

    K_ab / K_ba = α_a² / α_b²                                        (2.173)



3. K_aa = 2. The diagonal elements do not give any information.

4. From the properties of the roots discussed in section 2.8 we see that

    K_ab K_ba = 4 (cos θ)² = 0, 1, 2, 3                              (2.174)

We do not get 4 because we are taking a ≠ b. From theorem 2.6 we have α_a·α_b ≤ 0, and so the off-diagonal elements of the Cartan matrix can take the values

    K_ab = 0, −1, −2, −3                                             (2.175)

with a ≠ b. From table 2.2 we see that if K_ab = −2 or −3, then we necessarily have K_ba = −1.

5. If α_a and α_b are orthogonal, obviously K_ab = K_ba = 0. At the end of section 2.9 we have shown that if the root diagram decomposes into two or more mutually orthogonal subdiagrams, then the corresponding algebra is not simple. As a consequence, it follows that the Cartan matrix of a Lie algebra which is not simple necessarily has a block-diagonal form.

6. The Cartan matrix is symmetric only when all roots have the same length.
Example 2.11 The algebra of SO(3) or SU(2) has only one simple root and therefore its Cartan matrix is trivial, i.e., K = 2.

Example 2.12 The algebra of SO(4) is not simple. It is isomorphic to su(2) ⊕ su(2). Its root diagram is given in figure 2.4. The simple roots are α and β (for instance), and the ratio of their lengths is not determined. The Cartan matrix is

    K = |  2  0 |
        |  0  2 |                                                    (2.176)

Example 2.13 From figure 2.6 we see that the Cartan matrix of A2 (su(3) or sl(3)) is

    K = |  2 −1 |
        | −1  2 |                                                    (2.177)
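The Cartan matrix (2.170) is a one-line computation once simple roots are chosen. In the sketch below the su(3) simple roots are those of (2.141); the so(5) simple roots are one conventional choice, an assumption of ours rather than something taken from the notes:

```python
# Cartan matrix (2.170) computed from a choice of simple roots.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cartan_matrix(simple_roots):
    return [[round(2 * dot(a, b) / dot(b, b)) for b in simple_roots]
            for a in simple_roots]

# su(3): simple roots α_1, α_2 of (2.141)
su3 = [(math.sqrt(2), 0.0), (-1 / math.sqrt(2), math.sqrt(1.5))]
assert cartan_matrix(su3) == [[2, -1], [-1, 2]]            # eq. (2.177)

# so(5): a short root and a long root at 135 degrees (assumed convention)
so5 = [(0.0, 1.0), (1.0, -1.0)]
assert cartan_matrix(so5) == [[2, -1], [-2, 2]]            # eq. (2.178)
```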


Figure 2.7: The root diagram and Fundamental Weyl chamber of so(5) (or
sp(2))
Example 2.14 The algebra of SO(5) has dimension 10 and rank 2, so it has 8 roots. Its root diagram is shown in figure 2.7. The Fundamental Weyl Chamber is the shaded region. Notice that all roots lie on the hyperplanes perpendicular to other roots. The positive roots are α_1, α_2, α_3 and α_4, as shown on the diagram. All the others are negative. The simple roots are α_1 and α_2, and the ratio of their squared lengths is 2. The angle between them is 3π/4. The Cartan matrix of so(5) is

    K = |  2 −1 |
        | −2  2 |                                                    (2.178)

Example 2.15 The last simple Lie algebra of rank 2 is the exceptional algebra G_2. Its root diagram is shown in figure 2.8. It has 12 roots and therefore dimension 14. The Fundamental Weyl Chamber is the shaded region. The positive roots are the ones labelled from α_1 to α_6 on the diagram. The simple roots are α_1 and α_2. The Cartan matrix is given by

    K = |  2 −1 |
        | −3  2 |                                                    (2.179)

We have seen that the relevant information contained in the Cartan matrix is given by its off-diagonal elements. We have also seen that if K_ab ≠ 0 then one of K_ab or K_ba is necessarily equal to −1. Therefore the information of the off-diagonal elements can be given by the positive integers K_ab K_ba (no sum in



Figure 2.8: The root diagram and Fundamental Weyl Chamber of G2


a and b). These integers can be encoded in a diagram called Dynkin diagram
which is constructed in the following way:
1. Draw r points, each corresponding to one of the r simple roots of the
algebra (r is the rank of the algebra).
2. Join the point a to the point b by Kab Kba lines. Remember that the
number of lines can be 0, 1, 2 or 3.
3. If the number of lines joining the points a and b exceeds 1 put an arrow
on the lines directed towards the one whose corresponding simple root
has a shorter lenght than the other.
When Kab Kba = 2 or 3 the corresponding simple roots, a and b , have
different lenghts. In order to see this, remember that Kab or Kba is equal to
1. Taking Kab = 1, we have Kba = Kab Kba = 2 or 3. But
Kab
1
a2
=
=
2
b
Kba
Kab Kba

(2.180)

and consenquently b2 a2 . So the number of lines joining two points in a


Dynkin diagram gives the ratio of the squared lenghts of the corresponding
simple roots.
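The rules above can be phrased as a short routine that reads the link data of the Dynkin diagram off a Cartan matrix. This is a sketch with helper names of our own:

```python
# Dynkin-diagram data from a Cartan matrix: the number of lines between
# each pair of points is K_ab * K_ba, and (2.180) gives the length ratio.
def dynkin_links(K):
    r = len(K)
    links = {}
    for a in range(r):
        for b in range(a + 1, r):
            links[(a, b)] = K[a][b] * K[b][a]   # 0, 1, 2 or 3 lines
    return links

# so(5), eq. (2.178): one double link, with the arrow toward the shorter root
K_so5 = [[2, -1], [-2, 2]]
assert dynkin_links(K_so5) == {(0, 1): 2}
# ratio of squared lengths from (2.173): α_1²/α_2² = K_12/K_21 = 1/2
assert K_so5[0][1] / K_so5[1][0] == 0.5
```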
Example 2.16 The Cartan matrix of the algebra of SO(3) or SU(2) is simply K = 2. It has only one simple root and therefore its Dynkin diagram is just a point. The algebra of SU(3), on the other hand, has two simple roots. From its Cartan matrix, given in example 2.13, and the rules above, we see that its Dynkin diagram is formed by two points linked by just one line. Using the rules above one can easily construct the Dynkin diagrams for the algebras discussed in examples 2.11 - 2.15. They are given in figure 2.9.

Figure 2.9: The Dynkin diagrams of rank 1 and 2 algebras.
The root system postulates, given in definition 2.16, impose severe restrictions on the possible Dynkin diagrams. In section 2.15 we will classify the admissible diagrams, and we will see that there exist only nine types of simple Lie algebras.

We have said that for non-simple algebras the Cartan matrix has a block-diagonal form. This implies that the corresponding Dynkin diagram is not connected. Therefore a Lie algebra is simple if and only if its Dynkin diagram is connected.

We say a Lie algebra is simply laced if the points of its Dynkin diagram are joined by at most one link. This means all the roots of the algebra have the same length.
2.12 Root strings

We have shown in theorem 2.5 that if α and β are non-proportional roots, then α + β is a root whenever α·β < 0, and α − β is a root whenever α·β > 0. We can use this result further to see if β + mα or β − nα (for m, n positive integers) are roots. In this way we can obtain a set of roots forming a string. We then come to the concept of the α-root string through β. Let p be the largest positive integer for which β + pα is a root, and let q be the largest positive integer for which β − qα is a root. We will show that the set of vectors

    β + pα ; β + (p−1)α ; ... β + α ; β ; β − α ; ... β − qα         (2.181)

are all roots. They constitute the α-root string through β.


Suppose that β + pα and β − qα are roots and that the string is broken, let us say, on the positive side. That is, there exist positive integers r and s with p > r > s such that

1. β + (r+1)α is a root, but β + rα is not a root
2. β + (s+1)α is not a root, but β + sα is a root

According to theorem 2.5, since β + rα = (β + (r+1)α) − α is not a root, we must have

    α·(β + (r+1)α) ≤ 0                                               (2.182)

For the same reason, since β + (s+1)α is not a root, we have

    α·(β + sα) ≥ 0                                                   (2.183)

Subtracting (2.183) from (2.182), we get that

    ((r+1) − s) α² ≤ 0                                               (2.184)

and since α² > 0,

    s − r ≥ 1                                                        (2.185)

But this is in contradiction with our assumption that r > s > 0. So this proves that the string cannot be broken on the positive side. The proof that the string is not broken on the negative side is similar.
Notice that the action of the Weyl reflection σ_α on a given root is to add or subtract a multiple of the root α. Since all roots of the form β + nα are contained in the α-root string through β, we conclude that this root string is invariant under σ_α. In fact σ_α reverses the α-root string. Clearly the image of β + pα under σ_α has to be β − qα, and vice versa, since they are the roots that are most distant from the hyperplane perpendicular to α. We then have

    σ_α(β − qα) = β − qα − 2 (α·(β − qα)/α²) α = β + pα

and therefore

    q − p = 2α·β/α²                                                  (2.186)

and since the only possible values of 2α·β/α² are 0, ±1, ±2 and ±3, we get that

    q − p = 0, ±1, ±2, ±3                                            (2.187)

Denoting β − qα by β', we see that for the α-root string through β' we have q' = 0, and therefore the possible values of p' are 0, 1, 2 and 3. Consequently the number of roots in any string cannot exceed 4.
the number of roots in any string can not exceed 4.
are 0 and
For a simply laced Lie algebra the only possible values of 2.
2
1. Therefore the root strings, in this case, can have at most two roots.
Notice that if and are distinct simple roots, we necessarily have q = 0,
since is never a root in this case. So
[E , E ] = [E , E ] = 0

(2.188)

If, in addition, . = 0 we get from (2.187) that p = 0 and consequently +


is not a root either. For a semisimple Lie algebra, since if is a root then
is also a root, it follows that
[E , E ] = [E , E ] = 0

(2.189)

for and simple roots and . = 0. We can read this result from the Dynkin
diagram since, if two points are not linked then the corresponding simple roots
are orthogonal.
Example 2.17 For the algebra of SU(3) we see from the diagram shown in
figure 2.6 that the α₁-root string through α₂ contains only two roots, namely
α₂ and α₃ = α₂ + α₁.

Example 2.18 From the root diagram shown in figure 2.7 we see that, for
the algebra of SO(5), the α₁-root string through α₂ contains three roots: α₂,
α₃ = α₁ + α₂, and α₄ = α₂ + 2α₁.

Example 2.19 The algebra G₂ is the only simple Lie algebra which can have
root strings with four roots. From the diagram shown in figure 2.8 we see that
the α₁-root string through α₂ contains the roots α₂, α₃ = α₂ + α₁, α₅ = α₂ + 2α₁
and α₆ = α₂ + 3α₁.

2.13    Commutation relations from Dynkin diagrams

We now explain how one can obtain, from the Dynkin diagram of a Lie algebra,
the corresponding root system and then the commutation relations. The
fact that this is possible to be done is a demonstration of how powerful the
information encoded in the Dynkin diagram is.

We start by introducing the concept of height of a root. In theorem 2.7 we
have shown that any root can be written as a linear combination of the simple
roots with integer coefficients all of the same sign (see eq. (2.167)). The height
of a root α is the sum of these integer coefficients, i.e.

h(α) ≡ Σ_{a=1}^{rank G} n_a        (2.190)

where the n_a are given by (2.167). The only roots of height one are the simple
roots. This definition classifies the roots according to a hierarchy. We can
reconstruct the root system of a Lie algebra from its Dynkin diagram starting
from the roots of lowest height, as we now explain.
Given the Dynkin diagram we can easily construct the Cartan matrix. We
know that the diagonal elements are always 2. The off diagonal elements are
zero whenever the corresponding points (simple roots) of the diagram are not
linked. When they are linked we have K_ab (or K_ba) equal to −1 and K_ba (or
K_ab) equal to minus the number of links between those points.
Example 2.20 The Dynkin diagram of SO(7) is given in figure 2.10.
We see that the simple root α₃ (according to the rules of section 2.11) has a
length smaller than that of the other two. So we have K₂₃ = −2 and K₃₂ = −1.
Since the roots α₁ and α₂ have the same length we have K₁₂ = K₂₁ = −1. K₁₃
and K₃₁ are zero because there are no links between the roots α₁ and α₃. Therefore

        ⎛  2  −1   0 ⎞
    K = ⎜ −1   2  −2 ⎟        (2.191)
        ⎝  0  −1   2 ⎠
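As a cross-check of (2.191), one can compute this Cartan matrix from an explicit realization of the B₃ simple roots in an orthonormal basis. The particular choice of vectors below is a standard one and is our own assumption for illustration, not taken from these notes:

```python
# Simple roots of so(7) = B3 in an orthonormal basis (a standard choice,
# assumed here for illustration; the notes work abstractly).
alphas = [(1, -1, 0), (0, 1, -1), (0, 0, 1)]

def dot(x, y):
    return sum(u * v for u, v in zip(x, y))

# Cartan matrix K_ab = 2 alpha_a . alpha_b / alpha_b^2
K = [[2 * dot(a, b) // dot(b, b) for b in alphas] for a in alphas]
print(K)  # [[2, -1, 0], [-1, 2, -2], [0, -1, 2]]
```

The asymmetry K₂₃ ≠ K₃₂ reflects the different lengths of α₂ and α₃.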

Once the Cartan matrix has been determined from the Dynkin diagram, one
can obtain all the roots of the algebra from the Cartan matrix. We are interested
in semisimple Lie algebras. Therefore, since in such a case the roots come in pairs
α and −α, we have to find just the positive roots. We now give an algorithm
for determining the roots of a given height n from those of height n − 1. The
steps are


Figure 2.10: The Dynkin diagram of so(7).


1. The roots of height 1 are just the simple roots.

2. We have seen in (2.189) that if two simple roots are orthogonal then
their sum is not a root. On the other hand if they are not orthogonal
then their sum is necessarily a root. From theorem 2.6 one has α·β ≤ 0
for α and β simple, and therefore from theorem 2.5 one gets that their sum
is a root (if they are not orthogonal). Consequently, to obtain the roots
of height 2 one just looks at the Dynkin diagram. The sums of pairs of
simple roots whose corresponding points are linked, by one or more lines,
are roots. These are the only roots of height 2.
3. The procedure to obtain the roots of height 3 or greater is the following:
suppose α(l) = Σ_{a=1}^{rank G} n_a α_a is a root of height l, i.e. Σ_{a=1}^{rank G} n_a = l. Using
the Cartan matrix one evaluates

2α(l)·α_b/α_b² = Σ_{a=1}^{rank G} n_a K_ab        (2.192)

where α_b is a simple root. If this quantity is negative one gets from
theorem 2.5 that α(l) + α_b is a root of height l + 1. If it is zero or positive
one uses (2.187) to write

p = q − Σ_{a=1}^{rank G} n_a K_ab        (2.193)

where p and q are the highest positive integers such that α(l) + p α_b and
α(l) − q α_b are roots. The integer q can be determined by looking at the set
of roots of height smaller than l (which have already been determined)
and checking what is the root of smallest height of the form α(l) − m α_b.
One then finds p from (2.193). If p does not vanish, α(l) + α_b is a root.
Notice that if p ≥ 2 one also determines roots of height greater than
l + 1. By applying this procedure using all simple roots and all roots of
height l one determines all roots of height l + 1.

4. The process finishes when no roots of a given height l + 1 are found. That
is because there can not exist roots of height l + 2 if there are no roots
of height l + 1.
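The steps above can be sketched as a short program. The function below stores each positive root as its integer coefficient vector (n₁, ..., n_r) over the simple roots and builds the system height by height; all variable names are our own, chosen for illustration:

```python
def positive_roots(K):
    """All positive roots of the algebra with Cartan matrix K, found
    height by height as integer coefficient vectors over the simple roots."""
    r = len(K)
    simple = [tuple(1 if i == a else 0 for i in range(r)) for a in range(r)]
    roots = set(simple)
    layer = list(simple)                      # roots of the current height
    while layer:
        new = []
        for beta in layer:
            for b in range(r):
                # eq. (2.192): 2 beta . alpha_b / alpha_b^2
                k = sum(beta[a] * K[a][b] for a in range(r))
                # q = largest m such that beta - m alpha_b is still a root
                q = 0
                while tuple(beta[a] - (q + 1) * (a == b) for a in range(r)) in roots:
                    q += 1
                if q - k > 0:                 # p = q - k > 0, eq. (2.193)
                    cand = tuple(beta[a] + (a == b) for a in range(r))
                    if cand not in roots:
                        roots.add(cand)
                        new.append(cand)
        layer = new
    return roots

K_so7 = [[2, -1, 0], [-1, 2, -2], [0, -1, 2]]   # eq. (2.191)
found = sorted(positive_roots(K_so7), key=sum)
print(len(found), found[-1])  # 9 (1, 2, 2)
```

For SO(7) this reproduces the nine positive roots worked out in example 2.21, with the highest root α₁ + 2α₂ + 2α₃.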


Therefore we have shown that the root system of a Lie algebra can be
determined from its Dynkin diagram. In some cases it is more practical to
determine the root system using the Weyl reflections through hyperplanes
perpendicular to the simple roots.

The root which has the highest height is said to be the highest root of the algebra
and it is generally denoted ψ. For simple Lie algebras the highest root is unique.
The integer h(ψ) + 1 = Σ_{a=1}^{rank G} m_a + 1, where ψ = Σ_{a=1}^{rank G} m_a α_a, is said to be
the Coxeter number of the algebra.
Example 2.21 In example 2.20 we have determined the Cartan matrix of
SO(7) from its Dynkin diagram. We now determine its root system following
the procedure described above. The dimension of SO(7) is 21 and its rank is 3.
So, the number of positive roots is 9. The first three are the simple roots α₁,
α₂ and α₃. Looking at the Dynkin diagram in figure 2.10 we see that α₁ + α₂
and α₂ + α₃ are the only roots of height 2, since α₁ and α₃ are orthogonal. We
have 2(α₁ + α₂)·α_a/α_a² = K₁ₐ + K₂ₐ which, from (2.191), is equal to 1 for a = 1, 2 and
−2 for a = 3. Therefore, from (2.193), we get that 2α₁ + α₂ and α₁ + 2α₂ are
not roots but α₁ + α₂ + α₃ and α₁ + α₂ + 2α₃ are roots. Analogously we have
2(α₂ + α₃)·α_a/α_a² = K₂ₐ + K₃ₐ which is equal to −1 for a = 1, 1 for a = 2 and 0 for
a = 3. Therefore the only new root we obtain is α₂ + 2α₃. This exhausts the
roots of height 3. One can check that the only root of height 4 is α₁ + α₂ + 2α₃,
which we have obtained before. Now 2(α₁ + α₂ + 2α₃)·α_a/α_a² = K₁ₐ + K₂ₐ + 2K₃ₐ, which
is equal to 1, −1 and 2 for a = 1, 2, 3 respectively. Since it is negative for
a = 2 we get that α₁ + 2α₂ + 2α₃ is a root. This is the only root of height 5,
and it is in fact the highest root of SO(7). So the Coxeter number of SO(7) is
6. Summarizing we have that the positive roots of SO(7) are

roots of height 1:  α₁ ; α₂ ; α₃
roots of height 2:  (α₁ + α₂) ; (α₂ + α₃)
roots of height 3:  (α₁ + α₂ + α₃) ; (α₂ + 2α₃)
roots of height 4:  (α₁ + α₂ + 2α₃)
roots of height 5:  (α₁ + 2α₂ + 2α₃)

These could also be determined starting from the simple roots and using Weyl
reflections.
We now show how to determine the commutation relations from the root
system of the algebra. We have been using the Cartan-Weyl basis introduced
in (2.134). However the commutation relations take a simpler form in the so
called Chevalley basis. In this basis the Cartan subalgebra generators are


given by

H_a ≡ 2α_a·H/α_a²        (2.194)

where α_a (a = 1, 2, ..., rank G) are the simple roots and α_a·H ≡ α_a^i H_i, where
the H_i are the Cartan subalgebra generators in the Cartan-Weyl basis and α_a^i are
the components of the simple root α_a in that basis, i.e. [H_i , E_{α_a}] = α_a^i E_{α_a}.
The generators H_a are not orthonormal like the H_i. From (2.134) and (2.170)
we have that

Tr(H_a H_b) = 4α_a·α_b/(α_a² α_b²) = (2/α_a²) K_ab        (2.195)
The generators H_a obviously commute among themselves

[H_a , H_b ] = 0        (2.196)

The commutation relations between H_a and the step operators are given by (see
(2.124))

[H_a , E_α ] = (2α·α_a/α_a²) E_α = K_{αa} E_α        (2.197)

where we have defined K_{αa} ≡ 2α·α_a/α_a². Since α can be written as in (2.167) we
see that K_{αa} is a linear combination, with integer coefficients all of the same
sign, of the a-column of the Cartan matrix

K_{αa} = 2α·α_a/α_a² = Σ_{b=1}^{rank G} n_b K_ba        (2.198)

where α = Σ_{b=1}^{rank G} n_b α_b. Notice that the factor multiplying E_α on the r.h.s
of (2.197) is an integer. In fact this is a property of the Chevalley basis: all
the structure constants of the algebra in this basis are integer numbers. The
commutation relations (2.197) are determined once one knows the root system
of the algebra.
We now consider the commutation relations between step operators. From
(2.125)

             ⎧ N_{αβ} E_{α+β}                 if α + β is a root
[E_α , E_β ] = ⎨ 2α·H/α² = Σ_a m_a H_a         if α + β = 0        (2.199)
             ⎩ 0                             otherwise

where the m_a are the integers in the expansion 2α/α² = Σ_{a=1}^{rank G} m_a 2α_a/α_a². The structure
constants N_{αβ}, in the Chevalley basis, are integers and can be determined


from the root system of the algebra and also from the Jacobi identity. Let us
now explain how to do that.
Notice that from the antisymmetry of the Lie bracket

N_{αβ} = −N_{βα}        (2.200)

for any pair of roots α and β. The structure constants N_{αβ} are defined up to
rescalings of the step operators. If we make the transformation

E_α → ρ_α E_α        (2.201)

keeping the Cartan subalgebra generators unchanged, then from (2.199) the
structure constants N_{αβ} must transform as

N_{αβ} → (ρ_α ρ_β / ρ_{α+β}) N_{αβ}        (2.202)

and

ρ_α ρ_{−α} = 1        (2.203)

As we have said in section 2.9, any symmetry of the root diagram can be
elevated to an automorphism of the corresponding Lie algebra. In any semisimple
Lie algebra the transformation α → −α is a symmetry of the root diagram,
since if α is a root so is −α. We then define the transformation σ: G → G as

σ(H_a) = −H_a ;  σ(E_α) = η_α E_{−α}        (2.204)

with σ² = 1. From the commutation relations (2.196), (2.197) and (2.199) one
sees that such a transformation is an automorphism if

η_α η_{−α} = 1 ;  N_{αβ} = (η_α η_β / η_{α+β}) N_{−α,−β}        (2.205)

Using the freedom to rescale the step operators as in (2.202) one sees that it is
possible to satisfy (2.205) and make (2.204) an automorphism. In particular
it is possible to choose all η_α equal to −1 and therefore

N_{αβ} = −N_{−α,−β}        (2.206)

Consider the α-root string through β given by (2.181). Using the Jacobi
identity for the step operators E_α, E_{−α} and E_{β+nα}, where p ≥ n ≥ 1 and p is
the highest integer such that β + pα is a root, we obtain from (2.199) that

N_{β+nα,−α} N_{β+(n−1)α,α} − N_{β+nα,α} N_{β+(n+1)α,−α} = 2α·(β + nα)/α²        (2.207)

Notice that the second term on the l.h.s of this equation vanishes when n = p,
since β + (p + 1)α is not a root. Adding up the equations (2.207) for n taking
the values 1, 2, ..., p, we obtain that

N_{β+α,−α} N_{β,α} = p (2α·β/α²) + 2 (p + (p − 1) + (p − 2) + ... + 1)
                 = p(q + 1)        (2.208)

where we have used (2.187).


From the fact that the Killing form is invariant under the adjoint representation
(see (2.48)) it follows that it is invariant under inner automorphisms, i.e.
Tr(σ(T)σ(T′)) = Tr(T T′) with σ(T) = g T g⁻¹. However one can show that
the Killing form is invariant under any automorphism (inner or outer). Using this
fact for the automorphism (2.204) (with η_α = −1), the invariance property
(2.46) and the normalization (2.134) one gets

N_{β+α,−α} 2/β² = Tr([E_{β+α} , E_{−α} ]E_{−β} )
              = Tr(E_{β+α} [E_{−α} , E_{−β} ])
              = N_{−α,−β} Tr(E_{β+α} E_{−α−β} ) = N_{−α,−β} 2/(α + β)²        (2.209)

Consequently, using (2.206),

N_{β+α,−α} = −(β²/(α + β)²) N_{αβ}        (2.210)

Substituting this into (2.208) we get

N_{αβ}² = ((α + β)²/β²) p(q + 1)        (2.211)

Therefore, up to a sign, the structure constants N_{αβ} defined in (2.199) can be
determined from the root system of the algebra.

Using the Jacobi identity for the step operators E_α, E_{−α} and E_{β−nα}, with n
varying from 1 to q, where q is the highest integer such that β − qα is a root,
and doing similar calculations, we obtain that

N_{α,−β}² = ((α − β)²/β²) q(p + 1)        (2.212)


The relation (2.211) can be put in a simpler form. From (2.187) we have
that (see section 25.1 of [HUM 72])

(q + 1) − p (α + β)²/β² = q − p + 1 − p α²/β² − p (2α·β/β²)
                      = 2α·β/α² + 1 − p α²/β² − p (2α·β/α²)(α²/β²)
                      = (2α·β/α² + 1)(1 − p α²/β²)        (2.213)

We want to show the r.h.s of this relation is zero. We distinguish two cases:
1. In the case where α² ≥ β² we have |2α·β/α²| ≤ |2α·β/β²|. From table 2.2 we
see that the possible values of 2α·β/α² are −1, 0 or 1. In the first case we
get that the first factor on the r.h.s of (2.213) vanishes. In the other
two cases we have that α·β ≥ 0 and then (α + β)² is strictly larger than
both α² and β². Since we are assuming α + β is a root and since, as
we have said at the end of section 2.8, there can be no more than two
different root lengths in each component of a root system, we conclude
that α² = β². For the same reason β + 2α can not be a root, since
(β + 2α)² > (α + β)², and therefore p = 1. But this implies that the
second factor on the r.h.s of (2.213) vanishes.

2. For the case α² < β² we have that (α + β)² = α² or β², since otherwise
we would have three different root lengths. This forces α·β to be strictly
negative. Therefore we have (α − β)² > β² > α² and consequently α − β is
not a root and so q = 0. But |2α·β/β²| < |2α·β/α²| and therefore 2α·β/β² = −1, 0
or 1. Since α·β < 0 we have 2α·β/β² = −1. Then from (2.187) we have
p = −2α·β/α² = −(2α·β/β²)(β²/α²) = β²/α². Therefore the second factor on
the r.h.s of (2.213) vanishes.

Then, we have shown that

q + 1 = p (α + β)²/β²        (2.214)

and from (2.211)

N_{αβ}² = (q + 1)²        (2.215)

This shows that the structure constants N_{αβ} are integer numbers. From
(2.196), (2.197) and (2.199) we see that all structure constants in the Chevalley


basis are integers. Summarizing we have


[H_a , H_b ] = 0        (2.216)

[H_a , E_α ] = (2α·α_a/α_a²) E_α = K_{αa} E_α        (2.217)

             ⎧ (q + 1) ε(α, β) E_{α+β}        if α + β is a root
[E_α , E_β ] = ⎨ H_α = 2α·H/α² = Σ_a m_a H_a    if α + β = 0        (2.218)
             ⎩ 0                            otherwise

where we have denoted by ε(α, β) the sign of the structure constant N_{αβ}, i.e.


N_{αβ} = (q + 1) ε(α, β). These signs, also called cocycles, are determined through
the Jacobi identity as explained in section 2.14. As we have said before, q is
the highest positive integer such that β − qα is a root. However when α + β
is a root, which is the case we are interested in in (2.218), it is true that q is
also the highest positive integer such that α − qβ is a root. The reason is the
following: in a semisimple Lie algebra the roots always appear in pairs (δ and
−δ). Therefore if α − β is a root so is β − α. In addition we have seen in
section 2.12 that the root strings are unbroken and they can have at most four
roots. Therefore, since we are assuming that α + β is a root, the only possible
way of not satisfying what we said before is to have, let us say, the α-root
string through β as β − 2α, β − α, β, β + α; and the β-root string through α
as α − β, α, α + β or α − β, α, α + β, α + 2β. But from (2.187) we have

2α·β/α² = 1        (2.219)

and

2α·β/β² = 0 or −1        (2.220)

which are clearly incompatible.

We have said in section 2.12 that for a simply laced Lie algebra there can
be at most two roots in a root string. Therefore if α + β is a root, β − α is not,
and therefore q = 0. Consequently the structure constants N_{αβ} are always ±1
for a simply laced algebra.
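A minimal numerical illustration of this last remark, using the A₂ = su(3) root system encoded (our own choice) as coefficient vectors over the simple roots:

```python
# Roots of A2 = su(3) as coefficient vectors over the simple roots.
pos = [(1, 0), (0, 1), (1, 1)]
roots = set(pos) | {(-m, -n) for m, n in pos}

def q_int(alpha, beta):
    """Highest q such that beta - q*alpha is a root (cf. (2.181))."""
    q = 0
    while (beta[0] - (q + 1) * alpha[0], beta[1] - (q + 1) * alpha[1]) in roots:
        q += 1
    return q

# Simply laced: whenever alpha + beta is a root, q = 0, hence N = +-1.
for a in roots:
    for b in roots:
        if (a[0] + b[0], a[1] + b[1]) in roots:
            assert q_int(a, b) == 0
print("all q vanish: N = +-1 in A2")
```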

2.14    Finding the cocycles ε(α, β)

As we have seen, the Dynkin diagram of an algebra contains all the necessary
information to construct the commutation relations (2.216)-(2.218). However
that information is not enough to determine the cocycles ε(α, β) defined in
(2.218). For that we need the Jacobi identity. We now explain how to use
such identities to determine the cocycles. We will show that the consistency
conditions imposed on the cocycles are such that they can be split into a
number of sets equal to the number of positive non simple roots. The sign of
a cocycle in a given set completely determines the signs of all other cocycles of
that set, but has no influence on the determination of the cocycles in the other
sets. Therefore the cocycles ε(α, β) are determined by the Jacobi identities up
to such gauge freedom in fixing independently the signs of the cocycles of
the different sets.

From the antisymmetry of the Lie bracket the cocycles have to satisfy

ε(α, β) = −ε(β, α)        (2.221)

In addition, from the choice made in (2.206) one has

ε(−α, −β) = −ε(α, β)        (2.222)

Consider three roots α, β and γ such that their sum vanishes. The Jacobi
identity for their corresponding step operators yields, using (2.216)-(2.218),

0 = [[E_α , E_β ], E_γ ] + [[E_γ , E_α ], E_β ] + [[E_β , E_γ ], E_α ]
  = −((q_{αβ} + 1) ε(α, β) 2γ·H/γ² + (q_{γα} + 1) ε(γ, α) 2β·H/β²
     + (q_{βγ} + 1) ε(β, γ) 2α·H/α²)
  = (((q_{αβ} + 1) ε(α, β) α²/γ² − (q_{βγ} + 1) ε(β, γ)) 2α·H/α²
     + ((q_{αβ} + 1) ε(α, β) β²/γ² − (q_{γα} + 1) ε(γ, α)) 2β·H/β²)        (2.223)

Since the integers q's are non negative we get

ε(α, β) = ε(β, γ) = ε(γ, α)        (2.224)

and also

(1/γ²)(q_{αβ} + 1) = (1/α²)(q_{βγ} + 1) = (1/β²)(q_{γα} + 1)        (2.225)


Further relations are found by considering Jacobi identities for three step
operators corresponding to roots adding up to a fourth root. Now such identities
yield relations involving products of two cocycles. However, in many situations
there are only two non vanishing terms in the Jacobi identity. Consider three
roots α, β and γ such that α + β, β + γ and α + β + γ are roots but α + γ
is not a root. Then the Jacobi identity for the corresponding step operators
yields

0 = [[E_α , E_β ], E_γ ] + [[E_γ , E_α ], E_β ] + [[E_β , E_γ ], E_α ]
  = (q_{αβ} + 1)(q_{α+β,γ} + 1) ε(α, β) ε(α + β, γ)
  + (q_{βγ} + 1)(q_{β+γ,α} + 1) ε(β, γ) ε(β + γ, α)        (2.226)

Therefore one gets

ε(α, β) ε(α + β, γ) = ε(β, γ) ε(α, β + γ)        (2.227)

and

(q_{αβ} + 1)(q_{α+β,γ} + 1) = (q_{βγ} + 1)(q_{β+γ,α} + 1)        (2.228)
There remains to consider the cases where the three terms in the Jacobi identity
for three step operators do not vanish. Such a thing happens when we have three
roots α, β and γ such that α + β, β + γ, α + γ and α + β + γ are roots as
well. We now classify all cases where that happens. We shall denote long roots
by α, β, γ, ... and short roots by e, f, g, ... From the properties of roots
discussed in section 2.8 one gets that 2α·β/α², 2α·e/α² and 2e·f/e² take the
values 0, ±1. Let us consider the possible cases:
1. All three roots are long. If α + β is a root then (α + β)²/α² = 2 + 2α·β/α². Since
α + β can not be longer than α one gets 2α·β/α² = −1. So α + β is a long
root and if α + β + γ is also a root one gets by the same argument that
2(α + β)·γ/γ² = −1. Therefore α + γ and β + γ can not be roots simultaneously,
since that would imply, by the same arguments, 2α·γ/γ² = 2β·γ/γ² = −1, which
is in contradiction with the result above.

2. Two roots are long and one short. If α + e is a root then (α + e)²/α² =
1 + e²/α² + 2α·e/α². Since α + e can not be longer than α it follows that 2α·e/α² = −1.
Therefore α + e is a short root, since (α + e)² = e². So, if α + e + β is
a root then (α + e + β)²/β² = 1 + (α + e)²/β² + 2(α + e)·β/β², and therefore
2(α + e)·β/β² = −1. Consequently α + β and β + e can not be roots simultaneously,
since that would imply, by the same arguments, 2α·β/β² = 2e·β/β² = −1.



3. Two roots are short and one long. Analogously, if e + f and α + e + f are
roots one gets 2(e + f)·α/α² = −1 independently of e + f being short or long.
So, it is impossible for α + e and α + f to be both roots, since one would
get 2α·e/α² = 2α·f/α² = −1.

4. All three roots are short. If e + f is a root then (e + f)²/e² = 2 + 2e·f/e² and
there exist three possibilities:

(a) 2e·f/e² = −1 and e + f is a short root.

(b) 2e·f/e² = 1 and (e + f)²/e² = 3 (can only happen in G₂).

(c) 2e·f/e² = 0 and (e + f)²/e² = 2 (can only happen in B_n, C_n and F₄).

In section 2.8 we have seen that the possible ratios of squared lengths of the
roots are 1, 2 and 3. Therefore there can not exist roots with three different
lengths in the same irreducible root system, since if α²/β² = 2 and α²/γ² = 3
then β²/γ² = 3/2.
Consider the case 4.b and let g be the third short root. Then if e + g is a
root we have (e + g)²/(e + f)² = 2/3 + 2e·g/(e + f)² = 1 or 1/3. But this is
impossible, since 2e·g/(e + f)² would not be an integer. So the second case is
ruled out, since we would not have e + f, e + g, f + g and e + f + g all roots.

Consider the case 4.c. If e + g is a root then (e + g)²/(e + f)² = 1 + (1/2)(2e·g/g²) = 1
or 1/2. Therefore 2e·g/g² = 0 or −1. Similarly, if f + g is a root we get 2f·g/g² = 0
or −1. But if e + f + g is also a root then it has to be a short root, since
(e + f + g)²/(e + f)² = 3/2 + 2(e + f)·g/(e + f)². Consequently 2(e + f)·g/(e + f)² = −1
and (e + f + g)²/(e + f)² = 1/2. It then follows that
2e·g/g² + 2f·g/g² = 2(e + f)·g/g² = (2(e + f)·g/(e + f)²)((e + f)²/g²) = −2. Therefore in
the case 4.c we can have e + f, e + g, f + g and e + f + g all roots if e·f = 0
and 2e·g/g² = 2f·g/g² = −1.
Consider the case 4.a. Again, if e + g is a root then (e + g)²/g² = 2 + 2e·g/g² = 1 or
2. So 2e·g/g² = 0 or −1. Similarly, if f + g is a root, 2f·g/g² = 0 or −1. If e + f + g is
also a root then (e + f + g)²/g² = 2 + 2(e + f)·g/g² = 1 or 2. Therefore 2(e + f)·g/g² = 0 or −1.
Consequently 2e·g/g² and 2f·g/g² can not both be −1. Suppose then 2e·g/g² = 0 and
consequently e + g is a long root, i.e. (e + g)²/g² = 2. According to the arguments
used in case 4.c we get that e + f + g is a short root and then 2f·g/g² = −1.
We then conclude that the only possibility for the occurrence of three short
roots e, f and g, such that the sums of any two of them and e + f + g are all roots,
is that two of them are orthogonal, let us say e·f = 0 and 2e·g/g² = 2f·g/g² = −1.
This can only happen in the algebras C_n or F₄. Therefore none of the three


terms in the Jacobi identity for the corresponding step operators will vanish.
We have

0 = [[E_e , E_f ], E_g ] + [[E_g , E_e ], E_f ] + [[E_f , E_g ], E_e ]
  = (q_{ef} + 1)(q_{e+f,g} + 1) ε(e, f) ε(e + f, g)
  + (q_{ge} + 1)(q_{g+e,f} + 1) ε(g, e) ε(g + e, f)
  + (q_{fg} + 1)(q_{f+g,e} + 1) ε(f, g) ε(f + g, e)        (2.229)

According to the discussion in section 2.12, any root string in an algebra where
the ratio of the squared lengths of roots is 1 or 2 can have at most 3 roots.
From (2.187) we see that q_{ef} = 1 and q_{ge} = q_{fg} = q_{e+f,g} = q_{g+e,f} = q_{f+g,e} = 0.
Therefore

ε(e, f) ε(e + f, g) = ε(g, e) ε(f, g + e) = ε(f, g) ε(e, f + g)        (2.230)
We can then determine the cocycles using the following algorithm:

1. The cocycles involving two negative roots, ε(−α, −β) with α and β both
positive, are determined from those involving two positive roots through
the relation (2.222).

2. The cocycles involving one positive and one negative root, ε(α, −β) with
both α and β positive, are also determined from those involving two
positive roots through the relations (2.224) and (2.222). Indeed, if
α − β is a positive root we write α − β = γ, and if it is negative we
write α − β = −γ, with γ positive in both cases. Therefore from (2.224)
and (2.222) it follows that ε(α, −β) = ε(−β, −γ) = −ε(β, γ) in the first case,
and ε(α, −β) = ε(γ, α) in the second case.

3. Let γ be a positive non simple root which can be written as γ = α + β =
α′ + β′ with α, β, α′ and β′ all positive roots. Then the cocycles ε(α, β)
and ε(α′, β′) can be related to each other by using combinations of the
relations (2.227).

Using such an algorithm one can then verify that there will be one cocycle to
be chosen freely for each positive non simple root of the algebra. Once those
cocycles are chosen, all the others are uniquely determined.

2.15    The classification of simple Lie algebras

The simple Lie algebras are, as we have seen, the building blocks for constructing
all Lie algebras, and therefore their classification is very important.
We have also seen that there exists, up to isomorphism, only one Lie algebra
associated to a given Dynkin diagram. Since the Dynkin diagram for a simple
Lie algebra is necessarily connected, we see that the classification of the
simple algebras is equivalent to the classification of the possible connected Dynkin
diagrams. We now give such a classification.

We will firstly look for the possible Dynkin diagrams ignoring the arrows
on them. We then define unit vectors in the direction of the simple roots as

u_a = α_a/√(α_a²)        (2.231)

Therefore each point of the diagram will be associated to a unit vector u_a, and
these are all linearly independent. They satisfy

2u_a·u_b = 2α_a·α_b/√(α_a² α_b²) = −√(K_ab K_ba)        (2.232)

Now, from theorem 2.6 we have that u_a·u_b ≤ 0, and therefore from (2.174)

2u_a·u_b = 0, −1, −√2, −√3        (2.233)

which corresponds to minus the square root of the number of lines joining
the points a and b. We shall call a set of unit vectors satisfying (2.233) an
admissible set.
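Relation (2.232) can be checked numerically, for instance against the so(7) Cartan matrix of eq. (2.191); a small sketch with our own variable names:

```python
import math

# 2 u_a . u_b = -sqrt(K_ab K_ba), eq. (2.232), for the so(7) Cartan matrix.
K = [[2, -1, 0], [-1, 2, -2], [0, -1, 2]]
allowed = [0.0, -1.0, -math.sqrt(2), -math.sqrt(3)]   # the list (2.233)
for a in range(3):
    for b in range(3):
        if a != b:
            two_uab = -math.sqrt(K[a][b] * K[b][a])
            # each off-diagonal inner product must be in the admissible list
            assert any(abs(two_uab - v) < 1e-12 for v in allowed)
print("so(7) simple roots give an admissible set")
```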
One notices that by omitting some of the u_a's, the remaining ones form an
admissible set, whose diagram is obtained from the original one by omitting the
corresponding points and all lines attached to them. So we have the obvious
lemma.

Lemma 2.2 Any subdiagram of an admissible diagram is an admissible diagram.

Lemma 2.3 The number of pairs of vertices in a Dynkin diagram linked by
at least one line is strictly less than r, the rank of the algebra (or number of
vertices).


Proof: Consider the vector

v = Σ_{a=1}^r u_a        (2.234)

Since the vectors u_a are linearly independent we have v ≠ 0 and then

0 < v² = r + 2 Σ_{pairs} u_a·u_b        (2.235)

And from (2.233) we see that if u_a and u_b are linked, then 2u_a·u_b ≤ −1. In order
to keep the inequality we see that the number of linked pairs of points must
be smaller than or equal to r − 1. □
Corollary 2.1 There are no loops in a Dynkin diagram.

Proof: If a diagram has a loop we see from lemma 2.2 that the loop itself
would be an admissible diagram. But that would violate lemma 2.3, since the
number of linked pairs of vertices is equal to the number of vertices. □
Lemma 2.4 The number of lines attached to a given vertex can not exceed
three.

Proof: Let u be the unit vector corresponding to a vertex and let u₁, u₂,
..., u_k be the set of unit vectors which correspond to the vertices linked to it.
Since the diagram has no loops we must have

u_a·u_b = 0 ,  a ≠ b ,  a, b = 1, 2, 3, ..., k        (2.236)

So we can write

u = Σ_{a=1}^k (u·u_a) u_a + (u·u₀) u₀        (2.237)

where u₀ is a unit vector in the subspace perpendicular to the set u₁, u₂, ..., u_k.
Then

u² = 1 = Σ_{a=1}^k (u·u_a)² + (u·u₀)²        (2.238)

But the number of lines linked to u is (see (2.232) and (2.233))

4 Σ_{a=1}^k (u·u_a)² = 4 − 4 (u·u₀)² ≤ 4        (2.239)


Figure 2.11: Possible links a vertex can have.

Figure 2.12: The only connected diagram with triple link.


The equality is only possible if 0 = 0. But that is impossible since it means
is a linear combination of 1 , 2 , . . . k . Therefore, the number of lines linked
to is strictly less than 4 and the lemma is proved. 2
Consequently we see that the possible links a vertex can have are shown in
figure 2.11 and then it follows the corollary 2.2.
Corollary 2.2 The only connected diagram which has a triple link is the one
shown in figure 2.12 and it corresponds to the exceptional Lie algebra G2 .

Corollary 2.3 If an admissible diagram D has a subdiagram Γ given in figure
2.13, then the diagram D′ obtained from D by the contraction of Γ is also an
admissible diagram. By contraction we mean the reduction of Γ to the point

u = Σ_{a=l}^{l+k−1} u_a        (2.240)

which corresponds to a new simple root α = Σ_{a=l}^{l+k−1} α_a. Therefore, the simple
roots of D′ are α together with the simple roots of D which do not correspond
to u_l, u_{l+1}, ..., u_{l+k−1}.

Figure 2.13: The diagram Γ.

Proof: We have to show that D′ is an admissible diagram. The vector u,
defined in (2.240), together with the remaining u_a's in D, are linearly
independent. The vector u has unit length since

u² = k + 2 Σ_{pairs} u_a·u_b        (2.241)

But since 2u_a·u_b = −1 for u_a and u_b nearest neighbours, we have

u² = k + (k − 1)(−1) = 1        (2.242)

Any u_γ belonging to D − Γ can be linked to at most one of the points of Γ.
Otherwise we would have a loop. Therefore, either

u_γ·u = u_γ·u_a   for a given u_a in Γ        (2.243)

or

u_γ·u = 0        (2.244)

But since u_γ and u_a belong to an admissible diagram we have that they satisfy
(2.233). Therefore, u_γ and u also satisfy (2.233) and consequently D′ is an
admissible diagram. □
Corollary 2.4 An admissible diagram can not have subdiagrams of the form
shown in figure 2.14.

The reason is that by corollary 2.3 we would obtain that the diagrams shown
in figure 2.15 are subdiagrams of admissible diagrams. From lemmas 2.2 and
2.4 we see that this is impossible.

So, from the results obtained so far we see that an admissible diagram has
to have one of the forms shown in figure 2.16.
Consider the diagram B) of figure 2.16, and define the vectors

ε = Σ_{a=1}^p a u_a ;  η = Σ_{a=1}^q a v_a        (2.245)

where u_a and v_a denote the unit vectors on the two sides of the double link.


Figure 2.14: Non-admissible subdiagrams.

Figure 2.15:
Figure 2.16:


Therefore

ε² = Σ_{a=1}^p a² + 2 Σ_{pairs} a b u_a·u_b = Σ_{a=1}^p a² − Σ_{a=1}^{p−1} a(a + 1)
   = p² − Σ_{a=1}^{p−1} a = p² − p(p − 1)/2
   = p(p + 1)/2        (2.246)

where we have used the fact that 2u_a·u_b = −1 for u_a and u_b nearest
neighbours and 2u_a·u_b = 0 otherwise. In a similar way we obtain that

η² = q(q + 1)/2        (2.247)

Since the points u_p and v_q are linked by a double line we have

2u_p·v_q = −√2        (2.248)

and so

ε·η = p q u_p·v_q = −p q/√2        (2.249)

Using the Schwarz inequality

(ε·η)² ≤ ε² η²        (2.250)

we have from (2.246), (2.247) and (2.249) that

p² q²/2 < p(p + 1) q(q + 1)/4        (2.251)

Since the equality can not hold, because ε and η are linearly independent, eq.
(2.251) can be written as

(p − 1)(q − 1) < 2        (2.252)

There are three possibilities for p, q ≥ 1, namely

1. p = q = 2

2. p = 1 and q any positive integer

3. q = 1 and p any positive integer


Figure 2.17:

Figure 2.18:

In the first case we have the diagram of figure 2.17, which corresponds to the
exceptional Lie algebra of rank 4 denoted F₄. In the other two cases we obtain
the diagram of figure 2.18, which corresponds to the classical Lie algebras
so(2r + 1) or sp(r), depending on the direction of the arrow.
Consider now the diagram D) of figure 2.16 and define the vectors

ε = Σ_{a=1}^{p−1} a u_a ;  η = Σ_{a=1}^{q−1} a v_a ;  ζ = Σ_{a=1}^{s−1} a w_a        (2.253)

Doing similar calculations to those leading to (2.246) we obtain

ε² = p(p − 1)/2 ;  η² = q(q − 1)/2 ;  ζ² = s(s − 1)/2        (2.254)

The vectors ε, η, ζ and ψ (see diagram D) in figure 2.16) are linearly
independent. Since ψ² = 1, where ψ is the unit vector at the central vertex,
we have from (2.254)

cos²(ε, ψ) = (ε·ψ)²/(ε² ψ²) = (p − 1)² (u_{p−1}·ψ)²/ε²
           = (1 − 1/p)/2        (2.255)

where we have used that 2u_{p−1}·ψ = −1. Analogously we have

cos²(η, ψ) = (1 − 1/q)/2        (2.256)

and

cos²(ζ, ψ) = (1 − 1/s)/2        (2.257)

We can write ψ as

ψ = (ψ·ε) ε/|ε|² + (ψ·η) η/|η|² + (ψ·ζ) ζ/|ζ|² + (ψ·ψ₀) ψ₀        (2.258)


Figure 2.19:
where ψ₀ is a unit vector in the subspace perpendicular to ε, η and ζ. Then

ψ² = 1 = (ψ·ε)²/ε² + (ψ·η)²/η² + (ψ·ζ)²/ζ² + (ψ·ψ₀)²        (2.259)

Notice that (ψ·ψ₀) has to be different from zero, since ε, η, ζ and ψ are linearly
independent, so we get the inequality

cos²(ε, ψ) + cos²(η, ψ) + cos²(ζ, ψ) < 1        (2.260)

and so from (2.255)-(2.257)

1/p + 1/q + 1/s > 1        (2.261)

Without any loss of generality we can assume p ≥ q ≥ s. Then the possibilities
are
1. (p, q, s) = (p, 2, 2) with p any positive integer. The diagram we obtain is
given in figure 2.19, which corresponds to the classical Lie algebra so(2r).

2. (p, q, s) = (p, 3, 2) with p taking the values 3, 4 or 5. The diagrams
we obtain correspond to the exceptional Lie algebras E₆, E₇ and E₈
respectively, given in figure 2.20.
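The list above exhausts the solutions of (2.261); a brute-force scan (with an arbitrary cutoff on p, since the family (p, 2, 2) is infinite) confirms that the only solutions with (q, s) ≠ (2, 2) are the three exceptional ones:

```python
from fractions import Fraction

def branch_solutions(p_max):
    """All (p, q, s) with p >= q >= s >= 2 and 1/p + 1/q + 1/s > 1, eq. (2.261)."""
    sols = []
    for s in range(2, p_max + 1):
        for q in range(s, p_max + 1):
            for p in range(q, p_max + 1):
                if Fraction(1, p) + Fraction(1, q) + Fraction(1, s) > 1:
                    sols.append((p, q, s))
    return sols

sols = branch_solutions(50)
# the D-series family (p, 2, 2) plus the three exceptional cases E6, E7, E8
exceptional = [t for t in sols if (t[1], t[2]) != (2, 2)]
print(exceptional)  # [(3, 3, 2), (4, 3, 2), (5, 3, 2)]
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ambiguity at the boundary 1/p + 1/q + 1/s = 1.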
This ends the search for connected admissible diagrams. We have only to
consider the arrows on the diagrams with double and triple links. When that
is done we obtain all possible connected Dynkin diagrams corresponding to
the simple Lie algebras. We list in figure 2.21 the diagrams we have obtained,
giving the name of the corresponding algebra in both the physicists' and the
mathematicians' notations.

Figure 2.20:


Figure 2.21: The Dynkin diagrams of the simple Lie algebras.

Chapter 3

Representation theory of Lie algebras

3.1    Introduction

In this chapter we shall develop further the concepts introduced in section 1.5 for group representations. The concept of a representation of a Lie algebra is analogous to that of a group. A set of operators D1, D2, ... acting on a vector space V is a representation of a Lie algebra in the representation space V if we can define an operation between any two of these operators that reproduces the commutation relations of the Lie algebra. We will be interested mainly in matrix representations, and the operation will be the usual commutator of matrices. In addition we shall consider the representations of compact Lie algebras and Lie groups only, since the representation theory of non-compact Lie groups is beyond the scope of these lecture notes.
Some results on the representation theory of finite groups can be extended to the case of compact Lie groups. In some sense this is true because the volume of the group space is finite for compact Lie groups, and therefore the integration over the group elements converges. We state without proof two important results on the representation theory of compact Lie groups which are also true for finite groups:
Theorem 3.1 A finite dimensional representation of a compact Lie group is
equivalent to a unitary one.
Theorem 3.2 A unitary representation can be decomposed into unitary irreducible representations.


We then see that the irreducible representations (irreps.) constitute the
building blocks for constructing finite dimensional representations of compact
Lie groups. The aim of this chapter is to show how to classify and construct
the irreducible representations of compact Lie groups and Lie algebras.

3.2 The notion of weights

We have defined in section 2.6 (see definition 2.12) the Cartan subalgebra of a semisimple Lie algebra as the maximal abelian subalgebra which can be diagonalized simultaneously. Therefore we can take as the basis of the representation space V the eigenstates of the Cartan subalgebra generators. Then we have

H_i | μ⟩ = μ_i | μ⟩ ,   i = 1, 2, ... r (rank)   (3.1)

The eigenvalues of the Cartan subalgebra generators constitute r-component vectors μ, and they are called weights. Like the roots, the weights live in an r-dimensional Euclidean space. There can be more than one base state associated to a single weight, so the base states can be degenerate.
In section 2.8 we have seen that the operator H_α = 2 α·H / α² has integer eigenvalues. Therefore from (3.1) we have

H_α | μ⟩ = (2 α·μ / α²) | μ⟩   (3.2)

and consequently we have that

2 α·μ / α²  is an integer for any root α   (3.3)

Any vector μ satisfying this condition is a weight, and in fact this is the only condition a weight has to satisfy. From (2.148) we see that any root is a weight, but the converse is not true. Notice that 2 μ·μ′ / μ′², for two weights μ and μ′, does not have to be an integer, and therefore table 2.2 does not apply to the weights.
A weight is called dominant if it lies in the Fundamental Weyl Chamber or on its borders. Obviously a dominant weight has a non-negative scalar product with any positive root. It is possible to find among the dominant weights r weights λ_a, a = 1, 2, ... r, satisfying

2 λ_a · α_b / α_b² = δ_ab   for any simple root α_b   (3.4)


In other words, we can find r dominant weights, each of which is orthogonal to all simple roots except one. These weights are called fundamental weights. They play an important role in representation theory, as we will see below.

Consider now a simple root α_a and any weight μ. From (3.3) we have that

2 μ·α_a / α_a² = m_a = integer   (3.5)

Using (3.4) we have

( μ − Σ_{a=1}^{r} m_a λ_a ) · (2 α_b / α_b²) = 0   (3.6)

Since the simple roots constitute a basis of an r-dimensional Euclidean space, we conclude that

μ = Σ_{a=1}^{r} m_a λ_a   (3.7)

Therefore any weight can be written as a linear combination of the fundamental weights with integer coefficients. We now want to show that any vector formed by an integer linear combination of the fundamental weights is also a weight, i.e., it satisfies the condition (3.3). In order to do that we introduce the concept of co-root: the co-root α∨ associated to a root α is the root divided by its squared length,

α∨ ≡ α / α²   (3.8)

Since

(α∨)² = 1 / α²   (3.9)

and

2 α∨·β∨ / (β∨)² = 2 α·β / α²   (3.10)

one sees that the co-roots satisfy all the properties of roots and consequently are also roots. However, the co-roots of a given algebra G are the roots of another algebra G∨, called the dual algebra to G. The simply laced algebras su(N) (A_{N−1}), so(2N) (D_N), E6, E7 and E8, together with the exceptional algebras G2 and F4, are self-dual, in the sense that G = G∨. However, so(2N+1) (B_N) is the dual algebra to sp(N) (C_N) and vice versa. The Cartan matrix of the dual algebra G∨ is the transpose of the Cartan matrix of G, since
(K_ab)∨ = 2 α_a∨·α_b∨ / (α_b∨)² = 2 α_a·α_b / α_a² = K_ba   (3.11)
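As a quick numerical check of (3.11), the sketch below (Python; the concrete root vectors are my choice, a consistent realization of the so(5) root system with α₁² = 1 and α₂² = 2 as used later in example 3.5) builds the Cartan matrix from the simple roots, forms the co-roots α∨ = α/α², and verifies that the Cartan matrix of the co-roots is the transpose of the original:

```python
# Simple roots of so(5) (B2): a short root a1 and a long root a2,
# chosen so that a1^2 = 1, a2^2 = 2 and a1.a2 = -1.
a1 = (1.0, 0.0)
a2 = (-1.0, 1.0)

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1]

def cartan(roots):
    # K_ab = 2 a_a . a_b / a_b^2
    return [[round(2*dot(a, b)/dot(b, b)) for b in roots] for a in roots]

def coroot(a):
    n = dot(a, a)
    return (a[0]/n, a[1]/n)

K  = cartan([a1, a2])                    # Cartan matrix of so(5)
Kv = cartan([coroot(a1), coroot(a2)])    # Cartan matrix of the dual algebra
# Kv is the transpose of K: so(5) is dual to sp(2)
```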



where we have used the fact that the simple co-roots are given by

α_a∨ = α_a / α_a²   (3.12)

Any co-root can be written as a linear combination of the simple co-roots with integer coefficients, all of the same sign. To show that, we observe from theorem 2.7 that

α∨ = α / α² = Σ_{a=1}^{r} n_a (α_a² / α²) α_a∨   (3.13)

and from (3.4) we get

n_a α_a² / α² = 2 λ_a·α / α²   (3.14)

Therefore

α∨ = Σ_{a=1}^{r} m_a α_a∨ ,   with m_a = 2 λ_a·α / α²   (3.15)

since from (3.3) we have that 2 λ_a·α / α² is an integer. In addition these integers are all of the same sign, since all λ_a's lie in the Fundamental Weyl Chamber or on its border.
Let μ be a vector defined by

μ = Σ_{a=1}^{r} k_a λ_a   (3.16)

where λ_a are the fundamental weights and k_a are arbitrary integers. Using (3.15) and (3.4) we get

2 μ·α / α² = 2 μ·α∨ = Σ_{a,b} m_a k_b (2 λ_b·α_a / α_a²) = Σ_{a=1}^{r} m_a k_a   (3.17)

Therefore μ is a weight. So we have shown that any integer linear combination of the fundamental weights is a weight, and that all weights are of this form. Consequently the weights constitute a lattice Λ, called the weight lattice. This quantized spectrum of weights is a consequence of the fact that H_α has integer eigenvalues, and is an important feature of the representation theory of compact Lie algebras.

As we have said, any root is a weight and consequently belongs to Λ. We can also form a lattice by taking all vectors which are integer linear combinations of the simple roots. This lattice is called the root lattice and is denoted by Λ_r. All points of Λ_r are weights, and therefore Λ_r is a sublattice of Λ. The weight


lattice forms an abelian group under the addition of vectors. The root lattice is an invariant subgroup, and consequently the coset space Λ/Λ_r has the structure of a group (see section 1.4). One can show that Λ/Λ_r corresponds to the center of the covering group corresponding to the algebra whose weight lattice is Λ. We will show that all the weights of a given irreducible representation of a compact Lie algebra lie in the same coset.
Before giving some examples we would like to discuss the relation between the simple roots and the fundamental weights, which constitute two bases for the root (or weight) space. Since any root is a weight, the simple roots can be written as integer linear combinations of the fundamental weights. Using (3.4), one gets that the integer coefficients are the entries of the Cartan matrix, i.e.

α_a = Σ_b K_ab λ_b   (3.18)

and then

λ_a = Σ_b (K⁻¹)_ab α_b   (3.19)

So the fundamental weights are not, in general, written as integer linear combinations of the simple roots.
Example 3.1 SU(2) has only one simple root α and consequently only one fundamental weight λ. Choosing a normalization such that α = 1, we have that

2 λ α / α² = 1   and so   λ = 1/2   (3.20)

Therefore the weight lattice of SU(2) is formed by the integers and half-integers, and the root lattice only by the integers. Then

Λ / Λ_r = ℤ₂   (3.21)

which is the center of SU(2).


Example 3.2 SU(3) has two fundamental weights since it has rank two. They can be constructed by solving (3.4) or equivalently (3.19). The Cartan matrix of SU(3) and its inverse are given by (see example 2.13)

K = | 2  −1 |        K⁻¹ = (1/3) | 2  1 |
    | −1  2 |                    | 1  2 |

(3.22)

So, from (3.19), we get that the fundamental weights are

λ₁ = (1/3)(2α₁ + α₂) ,   λ₂ = (1/3)(α₁ + 2α₂)   (3.23)



Figure 3.1: The fundamental weights of A₂ (SU(3) or SL(3))


In example 2.10 we have seen that the simple roots of SU(3) are given by α₁ = (1, 0) and α₂ = (−1/2, √3/2). Therefore

λ₁ = (1/2, √3/6) ,   λ₂ = (0, √3/3)   (3.24)
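The numbers in (3.24) can be reproduced directly from (3.19): build the Cartan matrix from the simple root vectors, invert it, and apply it to the simple roots. A small Python sketch (not part of the notes; it uses the α₁, α₂ of example 2.10):

```python
import numpy as np

a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3)/2])
alphas = np.array([a1, a2])

# Cartan matrix K_ab = 2 a_a . a_b / a_b^2, as in (3.18)
K = np.array([[2*a.dot(b)/b.dot(b) for b in alphas] for a in alphas])

# fundamental weights, eq. (3.19): lambda_a = sum_b (K^-1)_ab alpha_b
lambdas = np.linalg.inv(K) @ alphas

# lambda_1 = (1/2, sqrt(3)/6), lambda_2 = (0, sqrt(3)/3), as in (3.24)
```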

The vectors representing the fundamental weights are given in figure 3.1. The root lattice Λ_r, generated by the simple roots α₁ and α₂, corresponds to the points on the intersections of the lines shown in figure 3.2. The weight lattice Λ, generated by the fundamental weights λ₁ and λ₂, consists of all points of Λ_r plus the centroids of the triangles, shown by circles and plus signs in figure 3.2.

The points of the weight lattice can be obtained from the origin, λ₁ and λ₂ by adding to them all points of the root lattice. Therefore the coset space Λ/Λ_r has three points, which can be represented by 0, λ₁ and λ₂. Since λ₁ + λ₂ = α₁ + α₂ and 3λ₁ = 2α₁ + α₂ lie in the same coset as 0, we see that Λ/Λ_r has the structure of the cyclic group ℤ₃, which is the center of SU(3).

3.3 The highest weight state

In an irreducible representation one can obtain all states of the representation by starting with a given state and applying sequences of step operators to it. If that were not possible, the representation would have an invariant subspace and therefore would not be irreducible.


Figure 3.2: The weight lattice of SU (3).



Consider a state | μ⟩ with weight μ satisfying (3.1). The state defined by

| μ′⟩ ≡ E_α | μ⟩   (3.25)

satisfies

H_i | μ′⟩ = H_i E_α | μ⟩ = (E_α H_i + [H_i, E_α]) | μ⟩ = (μ_i + α_i) E_α | μ⟩   (3.26)

and therefore it has weight μ + α. Therefore the state

E_{α₁} E_{α₂} ... E_{αₙ} | μ⟩   (3.27)

has weight μ + α₁ + α₂ + ... + αₙ.
For this reason the weights in an irreducible representation differ by a sum of roots, and consequently they all lie in the same coset of Λ/Λ_r. Since that coset space is the center of the covering group, we see that the weights of an irreducible representation are associated to only one element of the center.

In a finite dimensional representation the number of weights is finite, since it is at most the number of base states (remember the weights can be degenerate). Therefore, by applying sequences of step operators corresponding to positive roots on a given state, we will eventually get zero. So an irreducible finite dimensional representation possesses a state | λ⟩ such that

E_α | λ⟩ = 0   for any α > 0   (3.28)

This state is called the highest weight state of the representation, and λ is the highest weight. It is possible to show that there is only one highest weight in an irrep. and only one highest weight state associated to it. That is, the highest weight is unique and non-degenerate.
All other states of the representation are obtained from the highest weight state by the application of a sequence of step operators corresponding to negative roots. The state defined by

| μ⟩ ≡ E_{−α₁} E_{−α₂} ... E_{−αₙ} | λ⟩   (3.29)

according to (3.26) has weight μ = λ − α₁ − α₂ − ... − αₙ. All the basis states are of


the form (3.29). If one applies a positive step operator to the state (3.29), the resulting state of the representation can be written as a linear combination of states of the form (3.29). To see this, let β be a positive root and −α₁ any of the negative roots appearing in (3.29). Then we have

E_β | μ⟩ = (E_{−α₁} E_β + [E_β, E_{−α₁}]) E_{−α₂} ... E_{−αₙ} | λ⟩   (3.30)

In the cases where β − α₁ is a negative root, or it is not a root, or even β − α₁ = 0, we obtain that the second term on the r.h.s. of (3.30) is a state of the form (3.29). In the case where β − α₁ is a positive root, we continue the process until all positive step operators act directly on the highest weight state | λ⟩, and consequently annihilate it. Therefore the state (3.30) is a linear combination of the states (3.29).
The weight lattice is invariant under the Weyl group. If μ is a weight, and therefore satisfies (3.3), it follows that σ_α(μ) also satisfies (3.3) for any root α, and so is a weight. To show this we use the fact that σ_α(x)·σ_α(y) = x·y and σ_α² = 1. Then (denoting β′ = σ_α(β))

2 σ_α(μ)·β / β² = 2 μ·σ_α(β) / β² = 2 μ·β′ / β′² = integer   (3.31)

Moreover, we can show that the set of weights of a given representation, which is a finite subset of Λ, is invariant under the Weyl group. The state defined by

| μ̃⟩ ≡ S_α | μ⟩   (3.32)

where | μ⟩ is a state of the representation and S_α is defined in (2.154), is also a state of the representation, since it is obtained from | μ⟩ by the action of an operator of the representation. Using (2.155) we get

x·H | μ̃⟩ = S_α S_α⁻¹ x·H S_α | μ⟩ = S_α σ_α(x)·H | μ⟩ = μ·σ_α(x) | μ̃⟩ = σ_α(μ)·x | μ̃⟩   (3.33)

Since the vector x is arbitrary, we obtain that the state | μ̃⟩ has weight σ_α(μ):

H_i | μ̃⟩ = H_i S_α | μ⟩ = σ_α(μ)_i S_α | μ⟩ = σ_α(μ)_i | μ̃⟩   (3.34)

Therefore if μ is a weight of the representation, so is σ_α(μ) for any root α.


One can easily check that the root lattice Λ_r is also invariant under the Weyl reflections.



A consequence of the above result is that the highest weight λ of an irrep. is a dominant weight. By taking its Weyl reflection

σ_α(λ) = λ − (2 λ·α / α²) α   (3.35)

one obtains that 2 λ·α / α² has to be non-negative if α is a positive root, since σ_α(λ) is also a weight of the representation and consequently cannot exceed λ by a multiple of a positive root. Therefore

λ·α ≥ 0   for any positive root α   (3.36)

and the highest weight is a dominant weight.


The highest weight can be used to label the representation. This is one of the consequences of the following theorem, which we state without proof.

Theorem 3.3 There exists a unique irreducible representation of a compact Lie algebra (up to equivalence) with highest weight state | λ⟩ for each λ of the weight lattice lying in the Fundamental Weyl Chamber or on its border.

The importance of this theorem is that it provides some sort of classification of all irreps. of a compact Lie algebra. All other (reducible) representations are constructed from these ones. The irreps. can be labelled by their highest weight λ as D_λ or D^(n₁,n₂,...n_r), where the n_a's are the non-negative integers appearing in the expansion of λ in terms of the fundamental weights λ_a, i.e. λ = Σ_{a=1}^{r} n_a λ_a, with n_a = 2 λ·α_a / α_a².

An irrep. is called a fundamental representation when its highest weight is a fundamental weight. Therefore the number of fundamental representations of a semisimple compact Lie algebra is equal to its rank.

The highest weight of the adjoint representation is the highest positive root (see section 2.13). It follows that the weights of the adjoint representation are all the roots of the algebra together with zero, which is a weight r-fold degenerate (r = rank).
We say a weight μ is a minimal weight if it satisfies

2 μ·α / α² = 0 or ±1   for any root α   (3.37)

The representation for which the highest weight is minimal is said to be a minimal representation. These representations play an important role in grand unified theories (GUTs), in the sense that the constituent fermions prefer, in general, to form multiplets in such minimal representations.


Example 3.3 In example 3.1 we have seen that the only fundamental weight of SU(2) is λ = 1/2. Therefore the dominant weights of SU(2) are the non-negative integers and half-integers. Each one of these dominant weights corresponds to an irreducible representation of SU(2). Then we have that λ = 0 corresponds to the scalar representation, λ = 1/2 to the spinorial rep., which is the fundamental rep. of SU(2) (dim = 2), λ = 1 to the vectorial rep., which is the adjoint of SU(2) (dim = 3), and so on.
Example 3.4 In the case of SU(3) we have two fundamental representations, with highest weights λ₁ and λ₂ (see example 3.2). They are respectively the triplet and antitriplet representations of SU(3). The rep. with highest weight λ₁ + λ₂ = α₃ is the adjoint. All representations with highest weight of the form λ = n₁λ₁ + n₂λ₂, with n₁ and n₂ non-negative integers, are irreducible representations of SU(3).


3.4 Weight strings and multiplicities

If we apply the step operator E_α or E_{−α}, for a fixed root α, successively on a state of weight μ of a finite dimensional representation, we will eventually get zero. That means that there exist positive integer numbers p and q such that

E_α | μ + pα⟩ = 0   and   E_{−α} | μ − qα⟩ = 0   (3.38)

and p and q are the greatest positive integers for which μ + pα and μ − qα are weights of the representation. One can show that all vectors of the form μ + nα, with n integer and −q ≤ n ≤ p, are weights of the representation. Therefore the weights form unbroken strings, called weight strings, of the form

μ + pα ; μ + (p−1)α ; ... μ + α ; μ ; μ − α ; ... μ − qα   (3.39)

We have shown in the last section that the set of weights of a representation is invariant under the Weyl group. The effect of the action of the Weyl reflection σ_α on a weight μ is to add or subtract a multiple of the root α, since σ_α(μ) = μ − (2 μ·α / α²) α, and from (3.3) we have that 2 μ·α / α² is an integer. Therefore the weight string (3.39) is invariant under the Weyl reflection σ_α. In fact, σ_α reverses the string (3.39), and consequently we have that

σ_α(μ + pα) = μ − qα = μ − (2 μ·α / α² + p) α   (3.40)

and so

2 μ·α / α² = q − p   (3.41)
This result is similar to (2.187), which was obtained for root strings. However, notice that the possible values of q − p in this case are not restricted to the values given in (2.187) (q − p can, in principle, have any integer value). In the case where μ is the highest weight of the representation, we have that p is zero if α is a positive root, and q is zero if α is negative. The relation (3.41) provides a practical way of finding the weights of the representation. In some cases it is easier to find some weights of a given representation by taking successive Weyl reflections of the highest weight. However, this method does not provide, in general, all the weights of the representation.
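The bookkeeping behind (3.41) can be organized into a simple algorithm: work with Dynkin labels m_a = 2 μ·α_a / α_a², so that subtracting the simple root α_a changes the labels by minus the a-th row of the Cartan matrix (by (3.18)); for each weight read off q = p + m_a and walk down the α_a-string. Below is a Python sketch of this standard procedure (the function name and data layout are mine, not the notes'):

```python
def weight_system(highest, K):
    """All weights (as tuples of Dynkin labels) of the irrep with the given
    highest weight, generated level by level using the string rule
    q - p = m_a of eq. (3.41).  K is the Cartan matrix; row a of K is the
    Dynkin label vector of the simple root alpha_a."""
    r = len(K)
    found = {tuple(highest): 0}            # weight -> height below lambda
    level, current = 0, [tuple(highest)]
    while current:
        for mu in current:
            for a in range(r):
                # p = number of times alpha_a can be ADDED staying in the
                # system; those weights have smaller height, so are known.
                p, up = 0, mu
                while True:
                    up = tuple(up[b] + K[a][b] for b in range(r))
                    if up not in found:
                        break
                    p += 1
                q = p + mu[a]              # eq. (3.41)
                down = mu
                for k in range(1, q + 1):  # walk down the alpha_a-string
                    down = tuple(down[b] - K[a][b] for b in range(r))
                    found.setdefault(down, found[mu] + k)
        level += 1
        current = [w for w, h in found.items() if h == level]
    return set(found)

su3 = [[2, -1], [-1, 2]]
triplet = weight_system((1, 0), su3)       # the 3 weights of the SU(3) triplet
```

For the adjoint of su(3), `weight_system((1, 1), su3)` returns 7 distinct weights (the six roots plus zero); the algorithm gives the support of the representation, while the multiplicities still require Freudenthal's formula discussed next.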
Once the weights are known, one has to calculate their multiplicities. There exists a formula, due to Kostant, which expresses the multiplicities directly as a sum over the elements of the Weyl group. However, it is not easy to use this formula in practice. There exists a recursive formula, called Freudenthal's


formula, which is much easier to use. According to it, the multiplicity m_λ(μ) of a weight μ in an irreducible representation of highest weight λ is given recursively as (see sections 22.3 and 24.2 of [HUM 72])

( (λ + δ)² − (μ + δ)² ) m_λ(μ) = 2 Σ_{α>0} Σ_{n=1}^{p(μ)} α·(μ + nα) m_λ(μ + nα)   (3.42)

where

δ = (1/2) Σ_{α>0} α   (3.43)

The first summation on the r.h.s. is over the positive roots and the second one over all positive integers n such that μ + nα is a weight of the representation, and we have denoted by p(μ) the highest such value of n. By starting with m_λ(λ) = 1 one can use (3.42) to calculate the multiplicities of the weights, from the higher ones to the lower ones.
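Freudenthal's formula (3.42) is easy to implement once the inner products are expressed through the Gram matrix of the simple roots. The sketch below (Python; the data layout and names are my own) recomputes the multiplicity of the zero weight in the adjoint of su(3), anticipating example 3.6:

```python
from fractions import Fraction

# su(3) data in the simple-root basis, normalized so that alpha^2 = 2.
G = [[2, -1], [-1, 2]]                     # Gram matrix alpha_a . alpha_b
pos_roots = [(1, 0), (0, 1), (1, 1)]       # alpha1, alpha2, alpha1+alpha2
delta = (1, 1)                             # half sum of positive roots, eq. (3.43)
lam = (1, 1)                               # adjoint: highest weight = highest root
weights = {(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1), (0, 0)}

def dot(u, v):
    return sum(G[a][b]*u[a]*v[b] for a in range(2) for b in range(2))

def add(u, v, n=1):
    return (u[0] + n*v[0], u[1] + n*v[1])

mult = {lam: 1}                            # start the recursion with m(lambda) = 1
def m(mu):
    if mu not in mult:
        num = 0
        for a in pos_roots:                # sum over positive roots and n >= 1
            n = 1
            while add(mu, a, n) in weights:
                num += dot(add(mu, a, n), a) * m(add(mu, a, n))
                n += 1
        lp, mp = add(lam, delta), add(mu, delta)
        mult[mu] = Fraction(2*num, dot(lp, lp) - dot(mp, mp))
    return mult[mu]

# m((0, 0)) evaluates to 2: the zero weight of the adjoint is doubly degenerate
```

Exact rational arithmetic (`Fraction`) avoids any rounding in the recursion; the roots themselves all come out with multiplicity one, as Weyl conjugates of λ.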
If the states | μ⟩₁ and | μ⟩₂ have the same weight μ, i.e., μ is degenerate, then the weight σ_α(μ) is also degenerate and has the same multiplicity as μ. Using (3.32), we obtain that the states

| σ_α(μ)⟩₁ = S_α | μ⟩₁   and   | σ_α(μ)⟩₂ = S_α | μ⟩₂   (3.44)

have weight σ_α(μ), and their linear independence follows from the linear independence of | μ⟩₁ and | μ⟩₂. Indeed, from

0 = x₁ | σ_α(μ)⟩₁ + x₂ | σ_α(μ)⟩₂ = S_α (x₁ | μ⟩₁ + x₂ | μ⟩₂)   (3.45)

one gets that if | μ⟩₁ and | μ⟩₂ are linearly independent then one must have x₁ = x₂ = 0, and so | σ_α(μ)⟩₁ and | σ_α(μ)⟩₂ are also linearly independent. Therefore all the weights of a representation which are conjugate under the Weyl group have the same multiplicity. This fact can be used to make Freudenthal's formula more efficient in the calculation of the multiplicities.
Example 3.5 Using the results of example 2.14, we have that the Cartan matrix of so(5) and its inverse are

K = | 2  −1 |        K⁻¹ = (1/2) | 2  1 |
    | −2  2 |                    | 2  2 |

(3.46)

Then, using (3.19), we get that the fundamental weights of so(5) are

λ₁ = (1/2)(2α₁ + α₂) ,   λ₂ = α₁ + α₂   (3.47)


Figure 3.3: The weights of the spinor representation of so(5).


where α₁ and α₂ are the simple roots of so(5). Let us consider the fundamental representation with highest weight λ₁. The scalar products of λ₁ with the positive roots of so(5) are

2 λ₁·α₁ / α₁² = 1 ;   2 λ₁·α₂ / α₂² = 0 ;
2 λ₁·(α₁ + α₂) / (α₁ + α₂)² = 1 ;   2 λ₁·(2α₁ + α₂) / (2α₁ + α₂)² = 1   (3.48)

Therefore, using (3.41) (with p = 0, since λ₁ is the highest weight), we get that

λ₁ ;   λ₁ − α₁ ;   λ₁ − α₁ − α₂ ;   λ₁ − 2α₁ − α₂   (3.49)

are weights of the representation. By taking Weyl reflections of these weights, or using (3.41) further, one can check that these are the only weights of the fundamental rep. with highest weight λ₁.

Since all weights are conjugate under the Weyl group, they all have the same multiplicity as λ₁, which is one. Therefore they are not degenerate and the representation has dimension 4. This is the spinor representation of so(5).
One can check that the weights of the fundamental representation of so(5) with highest weight λ₂ are

λ₂ ;   λ₂ − α₂ = α₁ ;   λ₂ − α₁ − α₂ = 0 ;
λ₂ − 2α₁ − α₂ = −α₁ ;   λ₂ − 2α₁ − 2α₂ = −(α₁ + α₂)   (3.50)


Again these weights are not degenerate and the representation has dimension
5. This is the vector representation of so(5).
Example 3.6 Consider the irrep. of su(3) with highest weight λ = α₃ = λ₁ + λ₂, i.e., the highest positive root. Using (3.41) and performing Weyl reflections, one can check that the weights of such a rep. are all the roots plus the zero weight. Since the roots are conjugate to α₃ = λ under the Weyl group, we conclude that they are non-degenerate weights. The multiplicity of the zero weight can be calculated from Freudenthal's formula. From (3.43) we have that, in this case, δ = α₃, and so from (3.42) we get

( 4α₃² − α₃² ) m_λ(0) = 2 ( m_λ(α₁) α₁² + m_λ(α₂) α₂² + m_λ(α₃) α₃² )   (3.51)

Since m_λ(α₁) = m_λ(α₂) = m_λ(α₃) = 1 and α₁² = α₂² = α₃², we obtain that m_λ(0) = 2. So there are two states with zero weight, and consequently the representation has dimension 8. This is the adjoint of su(3).

3.5 The weight δ

A vector which plays an important role in the representation theory of Lie algebras is the vector δ defined in (3.43). It is half of the sum of all positive roots. In some cases δ is a root, but in general that is not so. However, δ is always a dominant weight of the algebra. In order to show that, we need some results which we now prove.

Let α_a be a simple root and let α be a positive root not proportional to α_a. If we write α = Σ_{b=1}^{r} n_b α_b, we have that n_b ≠ 0 for some b ≠ a. Now, the coefficient of α_b in σ_{α_a}(α) is still n_b, and consequently σ_{α_a}(α) has at least one positive coefficient. So σ_{α_a}(α) is a positive root, and it is different from α_a, since −α_a is the image of α_a under σ_{α_a}. Therefore we have proved the following lemma.
From this lemma it follows that
a () = a

(3.52)

and consequently
2 a
=1
a2

for any simple root a

(3.53)



From the definition (3.43) it follows that δ is a vector in the root (or weight) space, and therefore can be written in terms of the simple roots or the fundamental weights. Writing

δ = Σ_{b=1}^{r} x_b λ_b   (3.54)

we get from (3.4) and (3.53) that

2 δ·α_a / α_a² = 1 = Σ_{b=1}^{r} x_b (2 λ_b·α_a / α_a²) = x_a   (3.55)

So we have shown that

δ = Σ_{b=1}^{r} λ_b   (3.56)

and consequently δ is a dominant weight.

3.6 Casimir operators

Let κ^{s₁ s₂ ... sₙ} be a tensor invariant under the adjoint representation of a Lie group G. By that we mean

κ^{s₁ s₂ ... sₙ} = d^{s₁}_{s₁′}(g) d^{s₂}_{s₂′}(g) ... d^{sₙ}_{sₙ′}(g) κ^{s₁′ s₂′ ... sₙ′}   (3.57)

for any g ∈ G, and where d^{s}_{s′}(g) is the matrix representing g in the adjoint representation, i.e. g T_s g⁻¹ = T_{s′} d^{s′}_{s}(g) (see (2.31)).

Consider now a representation D of G and construct the operator

C_n^(D) ≡ κ^{s₁ s₂ ... sₙ} D(T_{s₁}) D(T_{s₂}) ... D(T_{sₙ})   (3.58)

Notice that such an operator can only be defined on a given representation, since it involves products of operators and not Lie brackets of the generators. We then have

D(g) C_n^(D) = κ^{s₁...sₙ} D(g T_{s₁} g⁻¹) D(g T_{s₂} g⁻¹) ... D(g T_{sₙ} g⁻¹) D(g)
            = d^{s₁′}_{s₁}(g) ... d^{sₙ′}_{sₙ}(g) κ^{s₁...sₙ} D(T_{s₁′}) ... D(T_{sₙ′}) D(g)
            = κ^{s₁′...sₙ′} D(T_{s₁′}) ... D(T_{sₙ′}) D(g)
            = C_n^(D) D(g)   (3.59)

So, we have shown that C_n^(D) commutes with any matrix of the representation:

[ C_n^(D) , D(g) ] = 0   (3.60)

We are interested in operators that cannot be reduced to lower orders. That implies that the tensor κ^{s₁ s₂ ... sₙ} has to be totally symmetric. Indeed, suppose that κ^{s₁ s₂ ... sₙ} has an antisymmetric part in the indices s_j and s_{j+1}. Then we write

D(T_{s_j}) D(T_{s_{j+1}}) = (1/2) { D(T_{s_j}) , D(T_{s_{j+1}}) } + (1/2) [ D(T_{s_j}) , D(T_{s_{j+1}}) ]
                        = (1/2) { D(T_{s_j}) , D(T_{s_{j+1}}) } + (1/2) f^{t}_{s_j s_{j+1}} D(T_t)   (3.61)

and so C_n^(D) will have terms involving the product of (n−1) operators. Therefore, by totally symmetrizing the tensor κ^{s₁ s₂ ... sₙ}, we get operators C_n^(D) which are monomials of order n in the D(T_s)'s. Such operators are called Casimir operators, and n is called their order. They play an important role in representation

theory. From Schur's lemma 1.1 it follows that in an irreducible representation the Casimir operators have to be proportional to the identity.

    A_r   SU(r+1)     2, 3, 4, ... r+1
    B_r   SO(2r+1)    2, 4, 6, ... 2r
    C_r   Sp(r)       2, 4, 6, ... 2r
    D_r   SO(2r)      2, 4, 6, ... 2r−2, r
    E6                2, 5, 6, 8, 9, 12
    E7                2, 6, 8, 10, 12, 14, 18
    E8                2, 8, 12, 14, 18, 20, 24, 30
    F4                2, 6, 8, 12
    G2                2, 6

Table 3.1: The orders of the Casimir operators for the simple Lie groups
One way of constructing tensors which are invariant under the adjoint representation is by considering traces of products of generators in a given representation D′, since

Tr( D′(T_{s₁} T_{s₂} ... T_{sₙ}) ) = Tr( D′( g T_{s₁} g⁻¹ g T_{s₂} g⁻¹ ... g T_{sₙ} g⁻¹ ) )   (3.62)

Then taking

κ^{s₁ s₂ ... sₙ} ≡ (1/n!) Σ_{permutations} Tr( D′(T_{s₁} T_{s₂} ... T_{sₙ}) )   (3.63)

we get Casimir operators. However, one finds that after the symmetrization procedure very few tensors of the form above survive. It turns out that a semisimple Lie algebra of rank r possesses r functionally independent Casimir operators. Their orders, for the simple Lie algebras, are given in table 3.1.

3.6.1 The Quadratic Casimir operator

Notice from table 3.1 that all simple Lie groups have a quadratic Casimir operator. That is because all such groups have an invariant symmetric tensor of order two, which is the Killing form (see section 2.4)

η_{st} = Tr( d(T_s) d(T_t) )   (3.64)

and

C₂^(D) ≡ η^{st} D(T_s) D(T_t)   (3.65)


where η^{st} is the inverse of η_{st}.


Using the normalization (2.134) of the Killing form, we have that the Casimir operator in the Cartan–Weyl basis is given by

C₂^(D) = Σ_{i=1}^{r} D(H_i) D(H_i) + Σ_{α>0} (α²/2) ( D(E_α) D(E_{−α}) + D(E_{−α}) D(E_α) )   (3.66)

Since the Casimir operator commutes with all the generators, we have from Schur's lemma 1.1 that in an irreducible representation it must be proportional to the unit matrix. Denoting by λ the highest weight of the irreducible representation D_λ, we have

C₂^(D) | λ⟩ = ( Σ_{i=1}^{r} λ_i² + Σ_{α>0} (α²/2) [ D(E_α) , D(E_{−α}) ] ) | λ⟩
           = ( λ² + Σ_{α>0} (α²/2) (2 α·λ / α²) ) | λ⟩
           = ( λ² + Σ_{α>0} α·λ ) | λ⟩   (3.67)

where we have used (3.28) and (2.125). So, if D_λ, with highest weight λ, is irreducible, we can write, using (3.43), that

C₂^(D) = λ·(λ + 2δ) 1l = ( (λ + δ)² − δ² ) 1l   (3.68)

where 1l is the unit matrix in the representation D_λ under consideration.


Example 3.7 In the case of SU(2) the quadratic Casimir operator is J², i.e., the square of the angular momentum. Indeed, from example 3.1 we have that α = 1, and then δ = 1/2, and therefore C₂^(D) = λ(λ + 1). Since λ is a non-negative integer or half-integer, we see that these are really the eigenvalues of J².
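A minimal numerical check of (3.68) for SU(2) (Python; the standard spin-1/2 realization J_i = σ_i/2 used below is an assumption, not spelled out in the notes): the operator J² is already proportional to the identity, with eigenvalue λ(λ + 1) for λ = 1/2.

```python
import numpy as np

# spin-1/2 generators J_i = sigma_i / 2 (Pauli matrices)
Jx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Jy = np.array([[0, -1j], [1j, 0]]) / 2
Jz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

C2 = Jx @ Jx + Jy @ Jy + Jz @ Jz           # quadratic Casimir J^2

lam = 0.5                                   # highest weight of the rep
# eq. (3.68) with delta = 1/2: eigenvalue lambda(lambda + 1)
expected = lam * (lam + 1)
```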

3.7 Characters

In definition 1.13 we defined the character of an element g of a group G, in a given finite dimensional representation of G with highest weight λ, as the trace of the matrix that represents that element, i.e.

χ_λ(g) ≡ Tr( D_λ(g) )   (3.69)

Obviously, equivalent representations (see section 1.5) have the same characters. Analogously, two conjugate elements, g₁ = g₃ g₂ g₃⁻¹, have the same character in all representations. Therefore the conjugacy classes can be labelled by the characters.



Example 3.8 Using (2.27) and the commutation relations (2.58) for the algebra of so(3) (or su(2)), one gets that

e^{iπT₂/2} T₃ e^{−iπT₂/2} = T₁   (3.70)

and consequently

e^{iπT₂/2} e^{iθT₃} e^{−iπT₂/2} = e^{iθT₁}   (3.71)

An analogous result is obtained if we interchange the roles of the generators T₁, T₂ and T₃. Therefore the rotations by a given angle θ, no matter the axis, are conjugate. The conjugacy classes of SO(3) are defined by the angle of rotation, and the characters in a representation of spin j are given by

χ_j(θ) = Tr( e^{iθT₃} ) = Σ_{m=−j}^{j} e^{imθ}   (3.72)

where m are the eigenvalues of T₃ (see section 2.5). We have a geometric progression, and therefore

χ_j(θ) = ( e^{i(j+1/2)θ} − e^{−i(j+1/2)θ} ) / ( e^{iθ/2} − e^{−iθ/2} )   (3.73)

Notice that rotations by θ and −θ have the same character.
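One can check (3.72) and (3.73) against each other numerically. The Python sketch below sums the geometric series directly and compares it with the closed form, for a few spins and a generic angle:

```python
import cmath

def chi_sum(j, theta):
    """Character as the trace (3.72): sum of exp(i m theta), m = -j ... j."""
    return sum(cmath.exp(1j*(-j + k)*theta) for k in range(int(2*j) + 1))

def chi_weyl(j, theta):
    """Closed form (3.73) of the geometric progression."""
    num = cmath.exp(1j*(j + 0.5)*theta) - cmath.exp(-1j*(j + 0.5)*theta)
    den = cmath.exp(1j*theta/2) - cmath.exp(-1j*theta/2)
    return num / den

# the two expressions agree for integer and half-integer spin,
# and chi_j(theta) = chi_j(-theta), as noted above
```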


The relation (3.71) can be generalized to any compact Lie group. Any element of a compact group is conjugate to an element of the abelian subgroup which is the exponentiation of the Cartan subalgebra, i.e.

g = g′ e^{iθ·H} g′⁻¹   (3.74)

Therefore the conjugacy classes, and consequently the characters, can be labelled by r parameters or angles θ (r = rank).

However, the elements of the abelian group parametrized by θ and σ_α(θ) have the same character, since from (2.155) we have

S_α e^{iθ·H} S_α⁻¹ = e^{iσ_α(θ)·H}   (3.75)

Thus the parameter θ and its Weyl reflections parametrize the same conjugacy class.
The generalization of (3.73) to any compact group was done by H. Weyl in 1935. In a representation with highest weight λ, the elements of the conjugacy class labelled by θ have a character given by

χ_λ(θ) = [ Σ_{σ∈W} (sign σ) e^{iσ(λ+δ)·θ} ] / [ e^{iδ·θ} Π_{α>0} ( 1 − e^{−iα·θ} ) ]   (3.76)


where the summation is over the elements σ of the Weyl group W, and where sign σ is 1 (−1) if the element of the Weyl group is formed by an even (odd) number of reflections. δ is the same vector as defined in (3.43). This relation is called the Weyl character formula.
The character can also be calculated once one knows the multiplicities of the weights of the representation. From (3.69) and (3.74) we have that

χ_λ(θ) = Tr( D_λ( e^{iθ·H} ) ) = Σ_μ m_λ(μ) e^{iμ·θ}   (3.77)

where the summation is over the weights μ of the representation and m_λ(μ) are their multiplicities. These can be obtained from Freudenthal's formula (3.42).
In the scalar representation the elements of the group are represented by unity and the highest weight is zero. So, setting λ = 0 in (3.76), we obtain what is called the Weyl denominator formula

Σ_{σ∈W} (sign σ) e^{iσ(δ)·θ} = e^{iδ·θ} Π_{α>0} ( 1 − e^{−iα·θ} )   (3.78)
In general, this formula provides a nontrivial relation between a product and a sum. Substituting (3.78) in (3.76), we can write the Weyl character formula as the ratio of two sums:

χ_λ(θ) = [ Σ_{σ∈W} (sign σ) e^{iσ(λ+δ)·θ} ] / [ Σ_{σ∈W} (sign σ) e^{iσ(δ)·θ} ]   (3.79)

The dimension of the representation can be obtained from the Weyl character formula (3.76) by noticing that

dim D_λ = Tr(1l) = χ_λ(0)   (3.80)

We then obtain the so-called Weyl dimensionality formula

dim D_λ = Π_{α>0} (λ + δ)·α / Π_{α>0} δ·α   (3.81)

Example 3.9 In the case of SO(3) (or SU(2)) we have that α = 1 and δ = 1/2, and consequently we have from (3.81) that

dim D_j = 2j + 1   (3.82)

This result can also be obtained from (3.73) by taking the limit θ → 0 and using L'Hospital's rule.


    (m₁, m₂)    dimension
    (1, 0)      3  (triplet)
    (0, 1)      3  (anti-triplet)
    (2, 0)      6
    (0, 2)      6
    (1, 1)      8  (adjoint)
    (3, 0)      10
    (0, 3)      10
    (2, 1)      15
    (1, 2)      15

Table 3.2: The dimensions of the smallest irreps. of SU(3)


Example 3.10 Consider an irrep. of SU(3) with highest weight λ. We can write λ = m₁λ₁ + m₂λ₂, where λ₁ and λ₂ are the fundamental weights and m₁ and m₂ are non-negative integers. From (3.56) we have that λ + δ = (m₁ + 1)λ₁ + (m₂ + 1)λ₂. Normalizing the roots of SU(3) as α² = 2, we have (from (3.4)) that λ_a·α_b = δ_ab (a, b = 1, 2), where α₁ and α₂ are the simple roots, and therefore (with α₃ = α₁ + α₂)

(λ + δ)·α₁ = m₁ + 1 ;   (λ + δ)·α₂ = m₂ + 1 ;   (λ + δ)·α₃ = m₁ + m₂ + 2
δ·α₁ = δ·α₂ = 1 ;   δ·α₃ = 2   (3.83)

So, from (3.81), the dimension of the irrep. of SU(3) with highest weight λ is

dim D_λ = dim D^(m₁,m₂) = (1/2) (m₁ + 1)(m₂ + 1)(m₁ + m₂ + 2)   (3.84)

In table 3.2 we give the dimensions of the smallest irreps. of SU(3).
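The dimension formula (3.81) can be evaluated exactly with rational arithmetic in the simple-root basis. The Python sketch below (the data layout is mine) reproduces (3.84) and the entries of table 3.2:

```python
from fractions import Fraction

G = [[2, -1], [-1, 2]]               # alpha_a . alpha_b for su(3), alpha^2 = 2
pos_roots = [(1, 0), (0, 1), (1, 1)]
# fundamental weights in the simple-root basis, lambda_a = sum_b (K^-1)_ab alpha_b
fw = [(Fraction(2, 3), Fraction(1, 3)), (Fraction(1, 3), Fraction(2, 3))]
delta = tuple(fw[0][i] + fw[1][i] for i in range(2))

def dot(u, v):
    return sum(G[a][b]*u[a]*v[b] for a in range(2) for b in range(2))

def dim(m1, m2):
    lam = tuple(m1*fw[0][i] + m2*fw[1][i] for i in range(2))
    num = den = 1
    for a in pos_roots:              # eq. (3.81)
        num *= dot(tuple(lam[i] + delta[i] for i in range(2)), a)
        den *= dot(delta, a)
    return int(num/den)

# dim(1,0) = 3, dim(1,1) = 8, dim(2,1) = 15, matching table 3.2
```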

Example 3.11 Similarly, let us consider the irreps. of SO(5) (or Sp(2)) with highest weight λ = m₁λ₁ + m₂λ₂. From example 2.14 we have that the positive roots of SO(5) are α₁, α₂, α₃ ≡ α₁ + α₂ and α₄ ≡ 2α₁ + α₂, and so using (3.4) and (3.56) we get (setting α₁² = 1, α₂² = 2)

2 δ·α₁ / α₁² = 2 δ·α₂ / α₂² = 1 ;   2 δ·α₃ / α₃² = 3 ;   2 δ·α₄ / α₄² = 2
2 (λ + δ)·α₁ / α₁² = m₁ + 1 ;   2 (λ + δ)·α₂ / α₂² = m₂ + 1   (3.85)
2 (λ + δ)·α₃ / α₃² = m₁ + 2m₂ + 3 ;   2 (λ + δ)·α₄ / α₄² = m₁ + m₂ + 2

(m1, m2)    dimension
(1, 0)      4   (spinor)
(0, 1)      5   (vector)
(2, 0)      10  (adjoint)
(0, 2)      14
(1, 1)      16
(3, 0)      20
(0, 3)      30
(2, 1)      35
(1, 2)      40

Table 3.3: The dimensions of the smallest irreps. of SO(5) (or Sp(2))
Therefore from (3.81)

dim D^(m1,m2) = (1/6)(m1 + 1)(m2 + 1)(m1 + m2 + 2)(m1 + 2m2 + 3)    (3.86)

The smallest irreps. of SO(5) (or Sp(2)) are shown in table 3.3.
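The same dimensions can be computed directly from the ratio of products in (3.81), using only the root data quoted in example 3.11; the sketch below (our own illustration, with hypothetical helper names) also confirms the closed form (3.86):

```python
from fractions import Fraction as F

# Gram matrix of the B2 simple roots in the normalization alpha1^2 = 1,
# alpha2^2 = 2, alpha1.alpha2 = -1 (from the Cartan matrix of SO(5))
G = [[F(1), F(-1)], [F(-1), F(2)]]
dot = lambda u, v: sum(u[i]*G[i][j]*v[j] for i in range(2) for j in range(2))

pos_roots = [(1, 0), (0, 1), (1, 1), (2, 1)]   # alpha1, alpha2, alpha3, alpha4
# fundamental weights in the simple-root basis, solving 2 lam_a.alpha_b/alpha_b^2 = delta_ab
lam1, lam2 = (F(1), F(1, 2)), (F(1), F(1))
delta = tuple(lam1[i] + lam2[i] for i in range(2))

def weyl_dim(m1, m2):
    # ratio of products over the positive roots, formula (3.81)
    hw = tuple(m1*lam1[i] + m2*lam2[i] + delta[i] for i in range(2))
    num = den = F(1)
    for a in pos_roots:
        num *= dot(hw, a)
        den *= dot(delta, a)
    return num / den

for m1 in range(4):
    for m2 in range(4):
        assert weyl_dim(m1, m2) == F((m1+1)*(m2+1)*(m1+m2+2)*(m1+2*m2+3), 6)
assert weyl_dim(1, 0) == 4 and weyl_dim(0, 1) == 5 and weyl_dim(2, 1) == 35
```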
We give in figures 3.4 and 3.5 the dimensions of the fundamental representations of the simple Lie algebras (extracted from [DYN 57]).

Figure 3.4: The dimensions of the fundamental representations of the classical Lie groups.

Figure 3.5: The dimensions of the fundamental representations of the exceptional Lie groups.

3.8 Construction of matrix representations

We have seen that finite dimensional representations of compact Lie groups are equivalent to unitary ones (see theorem 3.1). In such representations the Cartan subalgebra generators and step operators can be chosen to satisfy¹

Hi† = Hi ;   Eα† = E−α    (3.87)

We have chosen the basis of the representation to be formed by the eigenstates of the Cartan subalgebra generators. Using (3.1) and (3.87) we have

⟨μ′| Hi |μ⟩ = μi ⟨μ′|μ⟩ = μ′i ⟨μ′|μ⟩    (3.88)

and so

(μ − μ′) ⟨μ′|μ⟩ = 0    (3.89)

and consequently states with different weights are orthogonal. In the case a weight is degenerate, it is possible to find an orthogonal basis for the subspace generated by the states corresponding to that degenerate weight. We shall then denote the basis states of the representation by |μ, k⟩, where μ is the corresponding weight and k is an integer that runs from 1 to m(μ), the multiplicity of μ. We can always normalize these states such that

⟨μ′, k′|μ, k⟩ = δ_{μ,μ′} δ_{k,k′}    (3.90)

If T denotes an operator of the representation of the algebra, then the matrices

D(T)_{(μ′,k′)(μ,k)} ≡ ⟨μ′, k′| T |μ, k⟩    (3.91)

form a matrix representation, since they reproduce the commutation relations of the algebra. Indeed

[D(T), D(T′)]_{(μ′,k′)(μ,k)} = Σ_{μ′′,k′′} ⟨μ′, k′| T |μ′′, k′′⟩⟨μ′′, k′′| T′ |μ, k⟩ − Σ_{μ′′,k′′} ⟨μ′, k′| T′ |μ′′, k′′⟩⟨μ′′, k′′| T |μ, k⟩
= ⟨μ′, k′| [T, T′] |μ, k⟩
= D([T, T′])_{(μ′,k′)(μ,k)}    (3.92)

¹ In order to simplify the notation we will denote the operators D(Hi) and D(Eα) by Hi and Eα respectively.


where we have used the fact that

1l = Σ_{μ,k} |μ, k⟩⟨μ, k|    (3.93)

is the identity operator.


When a step operator Eα acts on a state of weight μ, it either annihilates it or produces a state of weight μ + α. Therefore, using (3.93) and (3.90) one gets

Eα |μ, k⟩ = Σ_{μ′,k′} |μ′, k′⟩⟨μ′, k′| Eα |μ, k⟩ = Σ_{l=1}^{m(μ+α)} |μ+α, l⟩⟨μ+α, l| Eα |μ, k⟩    (3.94)

where the sum is over the states of weight μ + α. Therefore, from (3.91) one has

D(Eα)_{(μ′,k′)(μ,k)} = ⟨μ+α, k′| Eα |μ, k⟩ δ_{μ′,μ+α}    (3.95)

The matrix elements of Hi are known once we have the weights of the representation, since from (3.1) and (3.90)

D(Hi)_{(μ′,k′)(μ,k)} = ⟨μ′, k′| Hi |μ, k⟩ = μi δ_{μ′,μ} δ_{k′,k}    (3.96)

Therefore, in order to construct the matrix representation of the algebra we have to calculate the transition amplitudes ⟨μ+α, l| Eα |μ, k⟩. Notice that from (3.87)

⟨μ+α, l| Eα |μ, k⟩* = ⟨μ, k| E−α |μ+α, l⟩    (3.97)

Now, using the commutation relation (see (2.218))

[Eα, E−α] = 2 α·H / α²    (3.98)

one gets

⟨μ, k| [Eα, E−α] |μ, k⟩ = ⟨μ, k| (2 α·H / α²) |μ, k⟩ = 2 α·μ / α²    (3.99)
= ⟨μ, k| Eα E−α |μ, k⟩ − ⟨μ, k| E−α Eα |μ, k⟩
= Σ_{l=1}^{m(μ−α)} ⟨μ, k| Eα |μ−α, l⟩⟨μ−α, l| E−α |μ, k⟩ − Σ_{l=1}^{m(μ+α)} ⟨μ, k| E−α |μ+α, l⟩⟨μ+α, l| Eα |μ, k⟩



and so, using (3.97),

Σ_{l=1}^{m(μ−α)} |⟨μ, k| Eα |μ−α, l⟩|² − Σ_{l=1}^{m(μ+α)} |⟨μ+α, l| Eα |μ, k⟩|² = 2 α·μ / α²    (3.100)

where m(μ+α) and m(μ−α) are the multiplicities of the weights μ+α and μ−α respectively.

The relation (3.100) can be used to calculate the moduli of the transition amplitudes recursively. By taking α to be a positive root and μ the highest weight λ of the representation, we have that the second term on the l.h.s. of (3.100) vanishes. Since, in an irrep., λ is not degenerate, we can neglect the index k and write

Σ_{l=1}^{m(λ−α)} |⟨λ| Eα |λ−α, l⟩|² = 2 λ·α / α² = q    (3.101)

where, according to (3.41), q is the highest positive integer such that λ − qα is a weight of the representation. Taking now the second highest weight we repeat the process, and so on.
The other relations that the transition amplitudes have to satisfy come from the commutation relations between step operators. If α + β is a root, we have from (2.218)

⟨μ+α+β, l| [Eα, Eβ] |μ, k⟩ = (q + 1) ε(α, β) ⟨μ+α+β, l| Eα+β |μ, k⟩    (3.102)

Then using (3.90) and (3.94) one gets

Σ_{k′=1}^{m(μ+β)} ⟨μ+α+β, l| Eα |μ+β, k′⟩⟨μ+β, k′| Eβ |μ, k⟩ − Σ_{k′=1}^{m(μ+α)} ⟨μ+α+β, l| Eβ |μ+α, k′⟩⟨μ+α, k′| Eα |μ, k⟩ = (q + 1) ε(α, β) ⟨μ+α+β, l| Eα+β |μ, k⟩    (3.103)

where q is the highest positive integer such that β − qα (or equivalently α − qβ, since we are assuming α + β is a root) is a root, and ε(α, β) are signs determined from the Jacobi identities (see section 2.14).
We now give some examples to illustrate how to use (3.100) and (3.103) to construct matrix representations. This method is very general, and consequently difficult to use when the representation (or the algebra) is big. There are other methods which work better in specific cases.

3.8.1 The irreducible representations of SU(2)

In section 2.5 we have studied the representations of SU(2). We have seen that the weights of SU(2), denoted by m, are integers or half-integers, and on a given irreducible representation with highest weight j they run from −j to j in integer steps. The weights are non-degenerate, and so the representations have dimension 2j + 1. As we did in section 2.5 we shall denote the basis of the representation space as

|j, m⟩    m = −j, −j+1, . . . , j−1, j    (3.104)

and they are orthonormal

⟨j, m′|j, m⟩ = δ_{m,m′}    (3.105)

The Chevalley basis for SU(2) satisfies the commutation relations

[H, E±] = ±2 E± ;   [E+, E−] = H    (3.106)

where H = 2 α·H/α², with α being the only positive root of SU(2). In section 2.5 we have used the basis

[T3, T±] = ±T± ;   [T+, T−] = 2 T3    (3.107)

and so we have E± ≡ T± and H ≡ 2 T3. Since the m are eigenvalues of T3

T3 |j, m⟩ = m |j, m⟩    (3.108)

we get from (3.91) the matrix representing T3 as

D^{(j)}_{m′,m}(T3) = ⟨j, m′| T3 |j, m⟩ = m δ_{m,m′}    (3.109)

Using the relation (3.100), which is the same as taking the expectation value on the state |j, m⟩ of both sides of the second relation in (3.107), we get

|⟨j, m| T+ |j, m−1⟩|² − |⟨j, m+1| T+ |j, m⟩|² = 2m    (3.110)

where we have used the fact that T+† = T− (see (3.87)). Notice that T+ |j, j⟩ = 0, since j is the highest weight, and so

|⟨j, j| T+ |j, j−1⟩|² = 2j    (3.111)
Clearly, such a result could also be obtained directly from (3.101). The other matrix elements of T+ can then be obtained recursively from (3.110). Indeed, denoting c_m ≡ |⟨j, m+1| T+ |j, m⟩|², we get c_{j−1} = 2j, c_{j−2} = 2j + 2(j−1), c_{j−3} = 2j + 2(j−1) + 2(j−2), and so

c_m = Σ_{l=0}^{j−m−1} 2(j−l) = (j−m)(j+m+1) = j(j+1) − m(m+1)

Therefore

|⟨j, m+1| T+ |j, m⟩|² = j(j+1) − m(m+1)    (3.112)

and since

⟨j, m+1| T+ |j, m⟩* = ⟨j, m| T− |j, m+1⟩    (3.113)

we get

|⟨j, m−1| T− |j, m⟩|² = j(j+1) − m(m−1)    (3.114)
The phases of such matrix elements can be chosen to vanish, since in SU(2) we do not have a relation like (3.103) to relate them. Therefore, we get

T± |j, m⟩ = √(j(j+1) − m(m±1)) |j, m±1⟩    (3.115)

and so

D^{(j)}_{m′,m}(T+) = ⟨j, m′| T+ |j, m⟩ = √(j(j+1) − m(m+1)) δ_{m′,m+1}
D^{(j)}_{m′,m}(T−) = ⟨j, m′| T− |j, m⟩ = √(j(j+1) − m(m−1)) δ_{m′,m−1}    (3.116)
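The resulting SU(2) matrices can be generated and checked programmatically (our own sketch; the function name is hypothetical):

```python
import numpy as np

def su2_matrices(j):
    # Build T3, T+, T- in the basis |j,m>, m = j, j-1, ..., -j,
    # using (3.109) and (3.116); since the matrix elements are real,
    # T- is just the transpose of T+.
    dim = int(round(2 * j)) + 1
    ms = [j - k for k in range(dim)]
    T3 = np.diag(ms)
    Tp = np.zeros((dim, dim))
    for a, mp in enumerate(ms):
        for b, m in enumerate(ms):
            if abs(mp - (m + 1)) < 1e-12:
                Tp[a, b] = np.sqrt(j * (j + 1) - m * (m + 1))
    return T3, Tp, Tp.T

T3, Tp, Tm = su2_matrices(1.5)
comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(T3, Tp), Tp)       # [T3, T+] = T+, cf. (3.107)
assert np.allclose(comm(Tp, Tm), 2 * T3)   # [T+, T-] = 2 T3
```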

3.8.2 The triplet representation of SU(3)

Consider the fundamental representation of SU(3) with highest weight λ1. In example 3.10 we have seen it has dimension 3, and in fact it is the so-called triplet representation of SU(3). From (3.4) we have

2 λ1·α1/α1² = 2 λ1·α3/α3² = 1    (3.117)

where α3 = α1 + α2, and α1 and α2 are the simple roots of SU(3). So, from (3.41) we get that λ1, (λ1 − α1) and (λ1 − α3) are weights of the representation. Since the representation has dimension 3, it follows that they are the only weights and they are non-degenerate. Those weights are shown in figure 3.6.

Figure 3.6: The weights of the triplet representation of SU(3)


Taking the Cartan subalgebra generators in the Chevalley basis we have

⟨μ′| Ha |μ⟩ = (2 αa·μ / αa²) δ_{μ′,μ} ,   a = 1, 2    (3.118)

where we have used (3.90), and where we have neglected the degeneracy index. From (3.4) and the Cartan matrix of SU(3) (see example 2.13) we have

2 α1·(λ1−α1)/α1² = −1 ;   2 α2·(λ1−α1)/α2² = 1
2 α1·(λ1−α3)/α1² = 0 ;   2 α2·(λ1−α3)/α2² = −1    (3.119)

Denoting the states as (as a matter of ordering the rows and columns of the matrices)

|1⟩ ≡ |λ1⟩ ;   |2⟩ ≡ |λ1 − α1⟩ ;   |3⟩ ≡ |λ1 − α3⟩    (3.120)

we obtain from (3.117), (3.118) and (3.119) that the matrices representing the Cartan subalgebra generators are

D^{λ1}(H1) = diag(1, −1, 0) ;   D^{λ1}(H2) = diag(0, 1, −1)    (3.121)

Using (3.101) and (3.117) we have that

|⟨λ1| Eα1 |λ1−α1⟩|² = |⟨λ1| Eα3 |λ1−α3⟩|² = 1    (3.122)

Making μ = λ1 − α1 and α = α2 in (3.100), and using the fact that

⟨λ1 − α1 + α2| Eα2 |λ1 − α1⟩ = 0    (3.123)

since λ1 − α1 + α2 is not a weight, we get

|⟨λ1 − α1| Eα2 |λ1 − α1 − α2⟩|² = 1    (3.124)

These are the only non-vanishing transition amplitudes. From (3.95) and (3.120) we see that the only non-vanishing elements of the matrices representing the step operators are

D^{λ1}(Eα1)_{12} = ⟨λ1| Eα1 |λ1−α1⟩ ≡ e^{iθ}
D^{λ1}(Eα2)_{23} = ⟨λ1−α1| Eα2 |λ1−α3⟩ ≡ e^{iφ}
D^{λ1}(Eα3)_{13} = ⟨λ1| Eα3 |λ1−α3⟩ ≡ e^{iψ}    (3.125)

where, according to (3.122) and (3.124), we have introduced the angles θ, φ and ψ. The negative step operators are obtained from these ones using (3.87). Choosing the cocycle ε(α1, α2) = 1, and since α2 − α1 is not a root, we have from (3.103) that the phases have to satisfy (set μ = λ1 − α3, α = α1 and β = α2 in (3.103))

θ + φ = ψ    (3.126)

There are no further restrictions on these phases.
Therefore we get that the matrices which represent the step operators in the triplet representation are

D^{λ1}(Eα1) = [[0, e^{iθ}, 0], [0, 0, 0], [0, 0, 0]]
D^{λ1}(Eα2) = [[0, 0, 0], [0, 0, e^{iφ}], [0, 0, 0]]
D^{λ1}(Eα3) = [[0, 0, e^{i(θ+φ)}], [0, 0, 0], [0, 0, 0]]
D^{λ1}(E−α1) = [[0, 0, 0], [e^{−iθ}, 0, 0], [0, 0, 0]]
D^{λ1}(E−α2) = [[0, 0, 0], [0, 0, 0], [0, e^{−iφ}, 0]]
D^{λ1}(E−α3) = [[0, 0, 0], [0, 0, 0], [e^{−i(θ+φ)}, 0, 0]]
(3.127)
In general, the phases θ and φ are chosen to vanish. The algebra of SU(3) is generated by taking real linear combinations of the matrices Ha (a = 1, 2), (Eα + E−α) and i(Eα − E−α). On the other hand, the algebra of SL(3) is generated by the same matrices, but the third one does not have the factor i. Notice that in this way the triplet representation of the group SU(3) is unitary, whilst the triplet of SL(3) is not.
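With the phases θ and φ set to zero, the triplet matrices above can be verified to close the su(3) commutation relations (a numerical sketch of ours, not part of the original notes):

```python
import numpy as np

e = lambda i, j: np.eye(3)[:, [i]] @ np.eye(3)[[j], :]   # elementary matrix e_{ij}
H1, H2 = np.diag([1., -1., 0.]), np.diag([0., 1., -1.])  # (3.121)
Ea1, Ea2, Ea3 = e(0, 1), e(1, 2), e(0, 2)                # (3.127) with theta = phi = 0

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(H1, Ea1), 2 * Ea1)   # [H1, E_a1] = 2 E_a1 (Chevalley basis)
assert np.allclose(comm(Ea1, Ea1.T), H1)     # [E_a1, E_-a1] = H1
assert np.allclose(comm(Ea1, Ea2), Ea3)      # [E_a1, E_a2] = E_a3, cf. (3.103)
```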

Figure 3.7: The weights of the anti-triplet representation of SU(3)

3.8.3 The anti-triplet representation of SU(3)

We now consider the other fundamental representation of SU(3), which has highest weight λ2. In example 3.10 we saw that it also has dimension 3, and it is the anti-triplet of SU(3). Using (3.4) we get that the weights are λ2, λ2 − α2 and λ2 − α3, and consequently they are non-degenerate. They are shown in figure 3.7.
We shall denote the states as

|1⟩ ≡ |λ2⟩ ;   |2⟩ ≡ |λ2 − α2⟩ ;   |3⟩ ≡ |λ2 − α3⟩    (3.128)

Using the Cartan matrix of SU(3) (see example 2.13), (3.4) and (3.118), we get that the matrices which represent the Cartan subalgebra generators in the Chevalley basis are

D^{λ2}(H1) = diag(0, 1, −1) ;   D^{λ2}(H2) = diag(1, −1, 0)    (3.129)

Using (3.101) we have that

|⟨λ2| Eα2 |λ2−α2⟩|² = |⟨λ2| Eα3 |λ2−α3⟩|² = 1    (3.130)

and from (3.100)

|⟨λ2−α2| Eα1 |λ2−α1−α2⟩|² = 1    (3.131)



Using (3.95) we get that the only non-vanishing matrix elements of the step operators are

D^{λ2}(Eα1)_{23} = ⟨λ2−α2| Eα1 |λ2−α3⟩ ≡ e^{iθ}
D^{λ2}(Eα2)_{12} = ⟨λ2| Eα2 |λ2−α2⟩ ≡ e^{iφ}
D^{λ2}(Eα3)_{13} = ⟨λ2| Eα3 |λ2−α3⟩ ≡ e^{iψ}    (3.132)

where, according to (3.130) and (3.131), we have introduced the phases θ, φ and ψ. From (3.87) we obtain the matrices for the negative step operators. Using the fact that (q + 1) ε(α1, α2) = 1, we get from (3.103) that these phases have to satisfy

θ + φ = ψ + π    (3.133)
Therefore the matrices which represent the step operators in the anti-triplet representation are

D^{λ2}(Eα1) = [[0, 0, 0], [0, 0, e^{iθ}], [0, 0, 0]]
D^{λ2}(Eα2) = [[0, e^{iφ}, 0], [0, 0, 0], [0, 0, 0]]
D^{λ2}(Eα3) = [[0, 0, −e^{i(θ+φ)}], [0, 0, 0], [0, 0, 0]]
D^{λ2}(E−α1) = [[0, 0, 0], [0, 0, 0], [0, e^{−iθ}, 0]]
D^{λ2}(E−α2) = [[0, 0, 0], [e^{−iφ}, 0, 0], [0, 0, 0]]
D^{λ2}(E−α3) = [[0, 0, 0], [0, 0, 0], [−e^{−i(θ+φ)}, 0, 0]]
(3.134)

So, these matrices are obtained from those of the triplet by making the change E±α1 ↔ E±α2 and E±α3 → −E±α3. From (3.121) and (3.129) we see that the Cartan subalgebra generators are also interchanged.
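Again with θ = φ = 0, the anti-triplet matrices satisfy the same commutation relations; the minus sign in E_{α3} is what makes the cocycle come out with the same sign as in the triplet (a sketch of ours):

```python
import numpy as np

e = lambda i, j: np.eye(3)[:, [i]] @ np.eye(3)[[j], :]    # elementary matrix e_{ij}
H1, H2 = np.diag([0., 1., -1.]), np.diag([1., -1., 0.])   # (3.129)
Ea1, Ea2, Ea3 = e(1, 2), e(0, 1), -e(0, 2)                # (3.134) with theta = phi = 0

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(H1, Ea1), 2 * Ea1)   # [H1, E_a1] = 2 E_a1
assert np.allclose(comm(Ea1, Ea1.T), H1)     # [E_a1, E_-a1] = H1
assert np.allclose(comm(Ea1, Ea2), Ea3)      # same cocycle sign as in the triplet
```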

3.9 Tensor product of representations

We have seen in definition 1.12 of section 1.5 the concept of the tensor product of representations. The idea is quite simple. Consider two irreducible representations D^λ and D^{λ′} of a Lie group G, with highest weights λ and λ′ and representation spaces V^λ and V^{λ′} respectively. We can construct a third representation by considering the tensor product space V^{λ⊗λ′} ≡ V^λ ⊗ V^{λ′}. The operators representing the group elements in the tensor product representation are

D^{λ⊗λ′}(g) ≡ D^λ(g) ⊗ D^{λ′}(g)    (3.135)


and they act as

D^{λ⊗λ′}(g) V^{λ⊗λ′} = D^λ(g) V^λ ⊗ D^{λ′}(g) V^{λ′}    (3.136)

They form a representation since

D^{λ⊗λ′}(g1) D^{λ⊗λ′}(g2) = D^λ(g1) D^λ(g2) ⊗ D^{λ′}(g1) D^{λ′}(g2)
= D^λ(g1 g2) ⊗ D^{λ′}(g1 g2)
= D^{λ⊗λ′}(g1 g2)    (3.137)
The operators representing the elements T of the Lie algebra G of G are given by

D^{λ⊗λ′}(T) ≡ D^λ(T) ⊗ 1l + 1l ⊗ D^{λ′}(T)    (3.138)

Indeed

[D^{λ⊗λ′}(T1), D^{λ⊗λ′}(T2)] = [D^λ(T1), D^λ(T2)] ⊗ 1l + 1l ⊗ [D^{λ′}(T1), D^{λ′}(T2)]
= D^λ([T1, T2]) ⊗ 1l + 1l ⊗ D^{λ′}([T1, T2])
= D^{λ⊗λ′}([T1, T2])    (3.139)
Notice that if |μ, l⟩ and |μ′, l′⟩ are states of the representations V^λ and V^{λ′} with weights μ and μ′ respectively, one gets

D^{λ⊗λ′}(Hi) |μ, l⟩ ⊗ |μ′, l′⟩ = D^λ(Hi)|μ, l⟩ ⊗ |μ′, l′⟩ + |μ, l⟩ ⊗ D^{λ′}(Hi)|μ′, l′⟩
= (μi + μ′i) |μ, l⟩ ⊗ |μ′, l′⟩    (3.140)

It then follows that the weights of the representation V^{λ⊗λ′} are the sums of all weights of V^λ with all weights of V^{λ′}. If λ and λ′ are the highest weights of V^λ and V^{λ′} respectively, then the highest weight of V^{λ⊗λ′} is λ + λ′, and the corresponding state is

|λ + λ′⟩ = |λ⟩ ⊗ |λ′⟩    (3.141)

which is clearly non-degenerate.
In general, the representation V^{λ⊗λ′} is reducible, and one can split it as the sum of irreducible representations of G

V^{λ⊗λ′} = ⊕_{λ′′} V^{λ′′}    (3.142)

where the V^{λ′′} are irreducible representations with highest weight λ′′. The decomposition (3.142) is called the branching of the representation V^{λ⊗λ′}.



Taking orthonormal bases |μ, l⟩ and |μ′, l′⟩ for V^λ and V^{λ′} respectively, we can construct an orthonormal basis for V^{λ⊗λ′} as

|μ + μ′, k⟩ = Σ_{l=1}^{m(μ)} Σ_{l′=1}^{m(μ′)} C^k_{l,l′} |μ, l⟩ ⊗ |μ′, l′⟩    (3.143)

where m(μ) and m(μ′) are the multiplicities of μ and μ′ in V^λ and V^{λ′} respectively, and k = 1, 2, . . . , m(μ + μ′), with m(μ + μ′) being the multiplicity of μ + μ′ in V^{λ⊗λ′}. Clearly, m(μ + μ′) = m(μ) m(μ′). The constants C^k_{l,l′} are the so-called Clebsch-Gordan coefficients.
Example 3.12 Let us consider the tensor product of two spinorial representations of SU(2). As discussed in section 3.8.1, this is a two-dimensional representation with states |1/2, 1/2⟩ and |1/2, −1/2⟩, satisfying

T3 |1/2, ±1/2⟩ = ±(1/2) |1/2, ±1/2⟩    (3.144)

and (see (3.115))

T+ |1/2, 1/2⟩ = 0 ;   T+ |1/2, −1/2⟩ = |1/2, 1/2⟩
T− |1/2, 1/2⟩ = |1/2, −1/2⟩ ;   T− |1/2, −1/2⟩ = 0    (3.145)

One can easily construct the irreducible components by taking the highest weight state |1/2, 1/2⟩ ⊗ |1/2, 1/2⟩ and acting with the lowering operator. One gets

D^{1/2⊗1/2}(T−) |1/2, 1/2⟩ ⊗ |1/2, 1/2⟩ = (T− ⊗ 1l + 1l ⊗ T−) |1/2, 1/2⟩ ⊗ |1/2, 1/2⟩
= |1/2, −1/2⟩ ⊗ |1/2, 1/2⟩ + |1/2, 1/2⟩ ⊗ |1/2, −1/2⟩    (3.146)

and

D^{1/2⊗1/2}(T−) (|1/2, −1/2⟩ ⊗ |1/2, 1/2⟩ + |1/2, 1/2⟩ ⊗ |1/2, −1/2⟩) = 2 |1/2, −1/2⟩ ⊗ |1/2, −1/2⟩

and

D^{1/2⊗1/2}(T−) |1/2, −1/2⟩ ⊗ |1/2, −1/2⟩ = 0    (3.147)

On the other hand notice that

D^{1/2⊗1/2}(T−) (|1/2, 1/2⟩ ⊗ |1/2, −1/2⟩ − |1/2, −1/2⟩ ⊗ |1/2, 1/2⟩) = 0    (3.148)

Therefore, one gets that the states

|1, 1⟩ ≡ |1/2, 1/2⟩ ⊗ |1/2, 1/2⟩
|1, 0⟩ ≡ (|1/2, 1/2⟩ ⊗ |1/2, −1/2⟩ + |1/2, −1/2⟩ ⊗ |1/2, 1/2⟩)/√2
|1, −1⟩ ≡ |1/2, −1/2⟩ ⊗ |1/2, −1/2⟩    (3.149)

constitute a triplet representation (spin 1) of SU(2). The state

|0, 0⟩ ≡ (|1/2, 1/2⟩ ⊗ |1/2, −1/2⟩ − |1/2, −1/2⟩ ⊗ |1/2, 1/2⟩)/√2    (3.150)

constitutes a scalar representation (spin 0) of SU(2).

The branching of the tensor product representation is usually denoted in terms of the dimensions of the irreducible representations, and in this case we have

2 ⊗ 2 = 3 ⊕ 1    (3.151)
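The decomposition 2 ⊗ 2 = 3 ⊕ 1 can be reproduced with Kronecker products implementing (3.138) (our own sketch, not part of the original notes):

```python
import numpy as np

T3 = np.diag([0.5, -0.5])
Tm = np.array([[0., 0.], [1., 0.]])    # T-|1/2,1/2> = |1/2,-1/2>, cf. (3.145)
I2 = np.eye(2)
# tensor-product operators, formula (3.138)
T3tot = np.kron(T3, I2) + np.kron(I2, T3)
Tmtot = np.kron(Tm, I2) + np.kron(I2, Tm)

up_up = np.array([1., 0., 0., 0.])                  # |1,1> = |1/2,1/2> x |1/2,1/2>
one_zero = Tmtot @ up_up / np.sqrt(2)               # |1,0>, cf. (3.146) and (3.149)
singlet = np.array([0., 1., -1., 0.]) / np.sqrt(2)  # |0,0>, cf. (3.150)

assert np.allclose(T3tot @ up_up, up_up)     # |1,1> has T3 eigenvalue 1
assert np.allclose(T3tot @ one_zero, 0)      # |1,0> has T3 eigenvalue 0
assert np.allclose(Tmtot @ singlet, 0)       # the singlet is annihilated, (3.148)
```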
Given an irreducible representation D of a group G, one observes that it is also a representation of any subgroup H of G. However, it will in general be a reducible representation of the subgroup. The decomposition of D in terms of irreducible representations of H is called the branching of D. In order to illustrate it let us discuss some examples.

Example 3.13 The operator T3 generates a subgroup U(1) of SU(2) (see (3.107)). From the considerations in 3.8.1 one observes that each state |j, m⟩ constitutes a scalar representation of such a U(1) subgroup. Therefore, each spin j representation of SU(2) decomposes into 2j + 1 scalar representations of U(1).
Example 3.14 In example 3.6 we have seen that the weights of the adjoint representation of SU(3) are its roots plus the null weight, which is two-fold degenerate. So, let us denote the states as

|±α1⟩ ;   |±α2⟩ ;   |±α3⟩ ;   |0⟩ ;   |0′⟩    (3.152)

Consider the SU(2) ⊗ U(1) subgroup of SU(3) generated by

SU(2) ≡ { E±α1 , 2 α1·H/α1² } ;   U(1) ≡ { 2 λ2·H/α2² }    (3.153)

One can define the state |0⟩ as

|0⟩ ≡ E−α1 |α1⟩    (3.154)

and consequently the states

|α1⟩ ;   |0⟩ ;   |−α1⟩    (3.155)

constitute a triplet representation of the SU(2) defined above. In addition, the states

|α2⟩ ;   |α3⟩    (3.156)

and

|−α3⟩ ;   |−α2⟩    (3.157)

constitute two doublet representations of the same SU(2). By taking |0′⟩ to be orthogonal to |0⟩ one gets that it is a singlet representation of SU(2).

Clearly, each state |μ⟩ in (3.152) constitutes a scalar representation of the U(1) subgroup with eigenvalue 2 λ2·μ/α2². Since U(1) commutes with the SU(2), it follows that the states of a given irreducible representation of SU(2) have to have the same eigenvalue for the U(1). Therefore, we have the following branching of the adjoint of SU(3) in terms of irreps. of SU(2) ⊗ U(1)

8 = 3(0) + 2(1) + 2(−1) + 1(0)    (3.158)

where the numbers inside the parentheses are the U(1) eigenvalues.
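The branching (3.158) can be checked from the root data: each SU(2) multiplet carries a single U(1) charge 2λ2·μ/α2² (a sketch of ours, with hypothetical helper names):

```python
from fractions import Fraction as F

G = [[F(2), F(-1)], [F(-1), F(2)]]   # Gram matrix of the su(3) simple roots, alpha^2 = 2
dot = lambda u, v: sum(u[i]*G[i][j]*v[j] for i in range(2) for j in range(2))
lam2 = (F(1, 3), F(2, 3))            # lambda2 in the simple-root basis
charge = lambda mu: dot(lam2, mu)    # = 2 lambda2.mu / alpha2^2 since alpha2^2 = 2

a1, a2, a3 = (1, 0), (0, 1), (1, 1)
neg = lambda a: (-a[0], -a[1])
multiplets = {                        # the SU(2) multiplets of example 3.14
    'triplet':  [a1, (0, 0), neg(a1)],
    'doublet+': [a2, a3],
    'doublet-': [neg(a3), neg(a2)],
    'singlet':  [(0, 0)]}

charges = {name: {charge(mu) for mu in states} for name, states in multiplets.items()}
assert all(len(q) == 1 for q in charges.values())    # one U(1) charge per multiplet
assert sum(len(s) for s in multiplets.values()) == 8 # 8 = 3 + 2 + 2 + 1
assert charges['doublet+'] == {1} and charges['doublet-'] == {-1}
```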

Bibliography
[ALD 86] R. Aldrovandi and J.G. Pereira, An introduction to geometrical
Physics, World Scientific (1995).
[AUM 77] L. Auslander and R.E. Mackenzie, Introduction to Differential Manifolds, Dover Publ., Inc., New York (1977).
[BAR 77] A. O. Barut and R. Raczka, Theory of group representations and
applications, Polish Scientific Publishers (1977).
[BUD 72] F. J. Budden; The Fascination of Groups; Cambridge University
Press (1972).
[CBW 82] Y. Choquet-Bruhat, C. De Witt-Morette and M. Dillard-Bleick,
Analysis, Manifolds and Physics, North-Holland Publ. Co. (1982).
[COR 84] J.F. Cornwell; Group theory in Physics; vols. I, II and III; Techniques in Physics 7; Academic Press (1984).
[DYN 57] E.B. Dynkin, Transl. Amer. Math. Soc. (2) 6 (1957) 111,
and (1) 9 (1962) 328.
[FLA 63] H. Flanders, Differential Forms with Applications to the Physical Sciences, Academic Press (1963).
[HAM 62] M. Hamermesh; Group Theory and its Applications to Physical
Problems ; Addison-Wesley Publ. Comp. (1962).
[HEL 78] S. Helgason, Differential Geometry, Lie Groups and Symmetric
Spaces, Academic Press (1978).
[HUM 72] J.E. Humphreys, Introduction to Lie Algebra and Representation
Theory, Graduate Texts in Mathematics, Vol. 9, Springer-Verlag
(1972).


[JAC 79] Nathan Jacobson; Lie Algebras, Dover Publ., Inc. (1979).
[LEZ 92] A. N. Leznov and M. V. Saveliev, Group-Theoretical Methods for Integration of Nonlinear Dynamical Systems, Progress in Physics Series, v. 15, Birkhäuser Verlag, Basel, 1992.
[OLI 82] D. I. Olive, Lectures on gauge theories and Lie algebras: with some
applications to spontaneous symmetry breaking and integrable dynamical systems, University of Virginia preprint (1982).

Index
abelian group, 10
abelian Lie algebra, 45
adjoint representation, 42
algebra
abelian, 45
automorphism, 42
compact, 46
Lie, 36
nilpotent, 55
simple, 45
solvable, 55
structure constants, 39, 41
semisimple, 45
associativity, 6
automorphism
automorphism group, 12
definition, 11
inner, 12, 43, 70
outer, 12, 43, 70
automorphism of a Lie algebra, 42
branching, 139
Cartan matrix, 75
Cartan subalgebra, 54, 55
Casimir operator
definition, 121
Casimir operators, 121
Casimir operators
quadratic, 48
center of a group, 16
centralizer, 16

character, 123
character
definition, 27
of Lie group, 123
Weyl formula, 125
character of a representation, 27
Chevalley basis, 84
Clebsch-Gordan coefficients, 140
closure, 6
co-root, 107
compact group, 33
compact semisimple Lie algebra, 46
completely reducible rep., 25
conjugacy class, 15
conjugate element, 15
conjugate subgroup, 15
continuous group, 33
coset
left coset space, 19
left cosets, 19
right coset space, 19
right cosets, 19
Coxeter number, 84
cyclic group, 10
dimension of a representation, 21
direct product, 17
dominant weight, 106
Dynkin diagram, 78
Dynkin index, 54
equivalent representations, 24
essential parameters, 33
exponential mapping, 40
factor group, 19
faithful representation, 21
finite discrete groups, 33
Freudenthal's formula, 117
fundamental representation, 114
fundamental weights, 107
Fundamental Weyl Chamber, 71
group
abelian group, 10
adjoint representation, 42
center of a, 16
compact, 33
continuous, 33
cyclic group, 10
definition, 6
direct product of, 17
essential parameters, 33
finite discrete, 33
homomorphic groups, 11
infinite discrete, 33
isomorphic groups, 11
Lie, 34
non compact, 33
operator group, 21
order of a group, 14
quotient group, 19
representation of, 21
semisimple group, 16
simple group, 16
symmetric group, 9
topological, 34
Weyl, 69
group of transformations, 21
height of a root, 82
highest root, 84

highest weight, 112
highest weight state, 112
homomorphism
definition, 11
homomorphic groups, 11
ideal, 45
identity, 6
identity
left identity, 6
improper subgroups, 13
infinite discrete groups, 33
inner automorphism, 12, 70
inner automorphism of algebras, 43
invariant bilinear trace form, 45
invariant subalgebra, 45
invariant subgroup, 15
inverse
left inverse, 7
inverse element, 6
isomorphism
definition, 11
isomorphic groups, 11
Killing form, 45
left coset space, 19
left cosets, 19
left invariant vector field, 38
left translations, 38
Lie algebra, 36
Lie group, 34
Lie subalgebra, 39
linear representation, 22
matrix representation, 22
minimal representation, 114
minimal weight, 114
negative root, 72
nilpotent algebra, 55

non compact group, 33
normalizer, 54
one parameter subgroup, 40
operator group, 21
order of a group, 14
outer automorphism, 12, 70
outer automorphism of algebras, 43
positive root, 72
potentially real representation, 29
proper subgroups, 13
pseudo real representation, 29
quadratic Casimir operator, 48
quotient group, 19
real representation, 29
reducible representation, 24
representation
adjoint, 42
branching, 139
Clebsch-Gordan, 140
completely reducible, 25
dimension, 21, 125
equivalent, 24
essentially complex, 29
faithful, 21
fundamental, 114
linear, 22
matrix, 22
minimal, 114
of algebras, 105
potentially real, 29
pseudo real, 29
real, 29
reducible, 24
representation of a group, 21
space, 105
space of, 21

tensor product, 27
unitary, 25
character, 27
representation of a Lie algebra, 105
representation space, 21, 105
right coset space, 19
right cosets, 19
right translations, 38
root
co-root, 107
definition, 57
diagram, 70
height of, 82
highest, 84
lattice, 108
negative, 72
of su(3), 63
positive, 72
simple, 72
space decomposition, 57
string, 80
system of, 70
root diagram, 70
root diagram of su(3), 63
root lattice, 108
root space decomposition, 57
root string, 80
root system, 70
semisimple group, 16
semisimple Lie algebra, 45
simple group, 16
simple Lie algebra, 45
simple root, 72
solvable algebra, 55
step operators, 56, 57
structure constants, 39, 41
su(3)
roots, 63

SU(n)
center of, 17
subalgebra
Cartan, 54, 55
invariant, 45
subgroup
conjugate subgroup, 15
definition, 13
improper subgroups, 13
invariant subgroup, 15
one parameter, 40
proper subgroups, 13
symmetric group, 9
tangent space, 35
tangent vector, 35
tensor product representation, 27
topological group, 34
trace form, 45
transformations, group of, 21
unitary representation, 25
vector field
definition, 36
left invariant, 38
tangent vector, 35
weight
definition, 106
dominant, 106
fundamental, 107
highest, 112
lattice, 108
minimal, 114
strings, 116
weight lattice, 108
weight strings, 116
Weyl chambers, 71
Weyl character formula, 125

Weyl denominator formula, 125
Weyl dimensionality formula, 125
Weyl group, 69
Weyl reflection, 67
Weyl-Cartan basis, 58, 59
