Lecture Notes On Lie Algebras
Contents

1 Elements of Group Theory 5
1.1 The concept of group 5
1.2 Subgroups 13
1.3 Direct Products 18
1.4 Cosets 19
1.5 Representations 22

2 Lie Groups and Lie Algebras 35
2.1 Lie groups 35
2.2 Lie Algebras 37

3 Representation theory of Lie algebras 107
3.1 Introduction 107
3.2 The notion of weights 108
Chapter 1
Elements of Group Theory
1.1 The concept of group
The idea of groups is one that has evolved from some very intuitive concepts
we have acquired in our attempts of understanding Nature. One of these is
the concept of mathematical structure. A set of elements can have a variety of
degrees of structure. The set of the letters of the alphabet has some structure
in it. They are ordered as A < B < C ... < Z. Although this order is
fictitious, since it is a convention, it endows the set with a structure that
is very useful. Indeed, the relation between the letters can be extended to
words such that a telephone directory can be written in an ordered way.
The set of natural numbers possesses a higher mathematical structure. In
addition to being naturally ordered, we can perform operations on it. We
can do binary operations like adding or multiplying two elements and also
unary operations like taking the square root of an element (in this case the
result is not always in the set). The existence of an operation endows the set
with a mathematical structure. In the case when this operation closes within
the set, i.e. the composition of two elements is again an element of the set, the
endowed structure has very nice properties. Let us consider some examples.
Example 1.1 The set of integer numbers (positive and negative) is closed
under the operations of addition, subtraction and multiplication, but is not
closed under division. The set of natural numbers, on the other hand, is not
closed under subtraction and division but does close under addition and
multiplication.
Example 1.2 Consider the set of all human beings living and dead and define
a binary operation as follows: for any two persons take the latest common
5
forefather. For the case of two brothers this would be their father; for two
cousins their common grandfather; for a mother and her son, the mother's
father, etc. This set is closed or not under such operation depending, of
course, on how we understand everything to have started.
Example 1.3 Take a rectangular box and imagine three mutually orthogonal
axes, x, y and z, passing through the center of the box, each of them
orthogonal to two sides of the box. Consider the set of three rotations:
x a half turn about the x-axis
y a half turn about the y-axis
z a half turn about the z-axis
and let the operation on this set be the composition of rotations. So if we
perform y and then x we get z, z then y we get x, and x then z we get y.
However if we perform x then y and then z, the box gets back to its original
position. Therefore the set is not closed. If we add to the set the identity
operation I, which leaves the box as it is, then we get a closed set of
rotations.
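The composition table of these four rotations can be checked by brute force. In the following sketch (not part of the notes) each half turn is represented as a diagonal sign matrix acting on the coordinates (x, y, z):

```python
# Sketch (not part of the notes): the three half turns of example 1.3 as
# 3x3 diagonal sign matrices acting on (x, y, z), plus the identity I.
import numpy as np

I = np.diag([1, 1, 1])
X = np.diag([1, -1, -1])   # half turn about the x-axis
Y = np.diag([-1, 1, -1])   # half turn about the y-axis
Z = np.diag([-1, -1, 1])   # half turn about the z-axis

elements = [I, X, Y, Z]

# Closure: composing any two elements gives another element of the set
for A in elements:
    for B in elements:
        assert any(np.array_equal(A @ B, C) for C in elements)

# y then x gives z, and x then y then z restores the box
assert np.array_equal(X @ Y, Z)
assert np.array_equal(Z @ Y @ X, I)
```

Each element is its own inverse here, so the four rotations indeed satisfy all the group postulates introduced below.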
For a set to be considered a group it has to have, in addition to a binary
operation and closure, some other special structures. We now start discussing
them by giving the formal definition of a group.
Definition 1.1 An abstract group G is a set of elements furnished with a
composition law (or product) defined for every pair of elements of G and that
satisfies:
a) If g₁ and g₂ are elements of G, then the product g₁g₂ is also an element
of G (closure property).
b) The composition law is associative, that is (g₁g₂)g₃ = g₁(g₂g₃) for every
g₁, g₂ and g₃ ∈ G.
c) There exists a unique element e in G, called the identity element, such
that eg = ge = g for every g ∈ G.
d) For every element g of G, there exists a unique inverse element, denoted
g⁻¹, such that g⁻¹g = gg⁻¹ = e.
There are some redundancies in this definition, and the axioms c) and d)
could, in fact, be replaced by the weaker ones:
c′) There exists an element e in G, called a left identity, such that eg = g
for every g ∈ G.
d′) For every element g of G, there exists a left inverse g_L⁻¹ in G such
that g_L⁻¹ g = e.
One can then show that a left identity is also a right identity, and a left
inverse also a right inverse, by suitably manipulating products.
Example 1.4 The subtraction of real numbers is not an associative operation,
since (x − y) − z ≠ x − (y − z) for x, y and z real numbers. This operation
possesses a right unit element, namely zero, but does not possess a left
unit, since x − 0 = x but 0 − x ≠ x. The left and right inverses of x are
equal and are x itself, since x − x = 0. Now the inverse of (x − y) is not
(y⁻¹ x⁻¹) = (y − x), since (x − y) − (y − x) = 2(x − y) ≠ 0. This is an
illustration of the fact that for a non-associative operation the inverse of
x y is not necessarily y⁻¹ x⁻¹.
The definition of abstract group given above is not the only possible one.
There is an alternative definition that does not require inverse and identity.
We could define a group as follows:

Definition 1.2 (alternative) Take the definition of group given above
(assuming it is a non-empty set) and replace axioms c) and d) by: for any
given elements g₁, g₂ ∈ G there exists a unique g satisfying g₁g = g₂, and
also a unique g′ satisfying g′g₁ = g₂.

This definition is equivalent to the previous one, since it implies that,
given any two elements g₁ and g₂, there must exist unique elements e_L1 and
e_L2 in G such that e_L1 g₁ = g₁ and e_L2 g₂ = g₂. But it also implies that
there exists a unique g such that g₁g = g₂. Therefore, using associativity,
we get

(e_L1 g₁)g = g₁g = g₂ = e_L1(g₁g) = e_L1 g₂   (1.1)

From the uniqueness of e_L2 we conclude that e_L1 = e_L2. Thus this
alternative definition implies the existence of a unique left identity
element e_L. On the other hand it also implies that for every g ∈ G there
exists a unique g_L⁻¹ such that g_L⁻¹ g = e_L. Consequently axioms c) and d)
follow from the alternative axiom above.
Example 1.5 The set of real numbers is a group under addition, but it is not
a group under multiplication, division, or subtraction. The last two operations are
not associative and the element zero has no inverse under multiplication. The
natural numbers under addition are not a group since there are no inverse
elements.
Example 1.6 The set of all non-singular n×n matrices is a group under
matrix product. The set of p×q matrices is a group under matrix addition.
Example 1.7 The set of rotations of a box discussed in example 1.3 is a group
under composition of rotations when the identity operation I is added to the
set. In fact the set of all rotations of a body in 3 dimensions (or in any number
of dimensions) is a group under the composition of rotations. This is called
the rotation group and is denoted SO(3).
Example 1.8 The set of all human beings living and dead with the operation
defined in example 1.2 is not a group. There are no identity and inverse
elements, and the operation is not associative.
Example 1.9 Consider the permutations of n elements which we shall represent graphically. In the case of three elements, for instance, the graph shown
in figure 1.1 means the element 1 replaces 3, 2 replaces 1 and 3 replaces 2. We
can compose permutations as shown in fig. 1.2. The set of all permutations
of n elements forms a group under the composition of permutations. This is
called the symmetric group of degree n, and it is generally denoted by Sn .
The number of elements of this group is n!, since this is the number of distinct
permutations of n elements.
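The graphical composition of permutations can be mirrored directly in code. This sketch (not part of the notes) writes a permutation as a tuple whose i-th entry is the image of i, and checks the group postulates for S₃:

```python
# Sketch (not part of the notes): permutations as tuples, composed as in fig. 1.2.
from itertools import permutations

def compose(p, q):
    """Apply q first, then p; permutations are tuples with p[i] = image of i."""
    return tuple(p[q[i]] for i in range(len(p)))

# All n! permutations of 3 elements form the symmetric group S3
S3 = list(permutations(range(3)))
assert len(S3) == 6          # n! elements

# Closure and existence of inverses
identity = (0, 1, 2)
for p in S3:
    assert all(compose(p, q) in S3 for q in S3)
    assert any(compose(p, q) == identity for q in S3)
```

The same `compose` works for any degree n, with `permutations(range(n))` supplying the n! elements.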
[Figures 1.1 and 1.2 (not reproduced): diagrams of a permutation of three elements and of the composition of two permutations.]
Example 1.10 The N-th roots of unity form a group under multiplication.
These roots are exp(i2πm/N) with m = 0, 1, 2, ..., N−1. The identity element
is 1 (m = 0) and the inverse of exp(i2πm/N) is exp(i2π(N−m)/N). This group
is called the cyclic group of order N and is denoted by Z_N.
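These statements are easy to verify numerically. A small sketch (not part of the notes), using N = 6 for concreteness:

```python
# Sketch (not part of the notes): the N-th roots of unity under multiplication (Z_N).
import cmath

N = 6
roots = [cmath.exp(2j * cmath.pi * m / N) for m in range(N)]

def index(z):
    """Recover m from exp(2*pi*i*m/N), up to floating point error."""
    return round(N * cmath.phase(z) / (2 * cmath.pi)) % N

# Closure: multiplying the m-th and k-th roots gives the (m+k mod N)-th root
for m in range(N):
    for k in range(N):
        assert index(roots[m] * roots[k]) == (m + k) % N

# The inverse of exp(2*pi*i*m/N) is exp(2*pi*i*(N-m)/N)
for m in range(N):
    assert abs(roots[m] * roots[(N - m) % N] - 1) < 1e-12
```

The map m → exp(i2πm/N) thus carries addition modulo N into multiplication of phases, which is exactly the abstract structure of Z_N.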
We say two elements, g1 and g2 , of a group commute with each other if their
product is independent of the order, i.e., if g1 g2 = g2 g1 . If all elements of a
given group commute with one another then we say that this group is abelian.
The real numbers under addition or multiplication (without zero) form an
abelian group. The cyclic groups Zn (see example 1.10 ) are abelian for any
n. The symmetric group Sn (see example 1.9 ) is not abelian for n > 2, but it
is abelian for n = 2 .
Let us consider some groups of order two, i.e., with two elements. The elements
0 and 1 form a group under addition modulo 2. We have
0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0
(1.2)
The elements 1 and −1 also form a group, but under multiplication. We have

1·1 = (−1)·(−1) = 1 ,  1·(−1) = (−1)·1 = −1   (1.3)
The symmetric group of degree 2, S₂ (see example 1.9), has two elements, e
and a, as shown in fig. 1.3.   (1.4)
These three examples of groups are in fact different realizations of the same
abstract group. If we make the identifications shown in fig. 1.4 we see that
the structure of these groups is the same. We say that these groups are
isomorphic.
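The identification can be checked explicitly. In this sketch (not part of the notes) the map 0 → 1, 1 → −1 is verified to carry addition modulo 2 into multiplication:

```python
# Sketch (not part of the notes): three realizations of the same abstract
# two-element group, related by the identification 0 <-> 1 <-> e, 1 <-> -1 <-> a.
add_mod2 = lambda a, b: (a + b) % 2   # {0, 1} under addition mod 2
mult     = lambda a, b: a * b         # {1, -1} under multiplication

# The map 0 -> 1, 1 -> -1 is an isomorphism: it preserves the product
phi = {0: 1, 1: -1}
for a in (0, 1):
    for b in (0, 1):
        assert phi[add_mod2(a, b)] == mult(phi[a], phi[b])
```

Since phi is one-to-one and preserves the composition law, the two realizations are isomorphic; S₂ is identified with them in the same way.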
The mapping of each element of Z₆ into its inverse, m → (6 − m) mod 6,
preserves the composition law and is one-to-one. This is an automorphism of Z₆.
In fact the above example is just a particular case of the automorphism of
any abelian group in which a given element is mapped into its inverse.
Notice that if σ and σ′ are two automorphisms of a group G, then the
composition σσ′ is also an automorphism of G. Such composition is an
associative operation. In addition, since automorphisms are one-to-one
mappings, they are invertible. Therefore, if one considers the set of all
automorphisms of a group G together with the identity mapping of G into G,
one gets a group which is called the automorphism group of G.
Any element of G gives rise to an automorphism. Indeed, for a fixed g̃ ∈ G
define the mapping σ_g̃ : G → G by

σ_g̃(g) ≡ g̃ g g̃⁻¹ ,  g ∈ G and g̃ fixed   (1.7)

Then

σ_g̃(gg′) = g̃ gg′ g̃⁻¹ = g̃ g g̃⁻¹ g̃ g′ g̃⁻¹ = σ_g̃(g) σ_g̃(g′)   (1.8)

and so σ_g̃ constitutes an automorphism of G. It is called an inner
automorphism. The composition of two such automorphisms reproduces the
composition law of G, since

σ_g̃₁(σ_g̃₂(g)) = g̃₁ g̃₂ g g̃₂⁻¹ g̃₁⁻¹ = σ_g̃₁g̃₂(g)   (1.9)

so the map g̃ → σ_g̃ is a homomorphism of G onto the group of inner
automorphisms. All automorphisms which are not of this type are called outer
automorphisms.
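The homomorphism properties (1.8) and (1.9) can be tested concretely on S₃. A sketch (not part of the notes), with permutations written as tuples:

```python
# Sketch (not part of the notes): inner automorphisms of S3, eq. (1.7),
# realized on permutations written as tuples (p[i] = image of i).
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def sigma(h, g):
    """Inner automorphism sigma_h(g) = h g h^{-1}."""
    return compose(compose(h, g), inverse(h))

# sigma_h is an automorphism: sigma_h(g g') = sigma_h(g) sigma_h(g'), eq. (1.8)
h = (1, 2, 0)
for g in S3:
    for gp in S3:
        assert sigma(h, compose(g, gp)) == compose(sigma(h, g), sigma(h, gp))
```

The composition rule (1.9), σ_h₁(σ_h₂(g)) = σ_{h₁h₂}(g), can be verified with the same functions.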
1.2 Subgroups
A subset H of a group G which satisfies the group postulates under the same
composition law used for G, is said to be a subgroup of G. The identity element
and the whole group G itself are subgroups of G. They are called improper
subgroups. All other subgroups of a group G are called proper subgroups. If H
is a subgroup of G, and K a subgroup of H then K is a subgroup of G.
In order to find out if a subset H of a group G is a subgroup, we have to
check only two of the four group postulates. We have to check if the product
of any two elements of H is in H (closure) and if the inverse of each
element of H is in
H. The associativity property is guaranteed since the composition law is the
same as the one used for G. As G has an identity element it follows from the
closure and inverse element properties of H that this identity element is also
in H.
Example 1.13 The real numbers form a group under addition. The integer
numbers are a subset of the real numbers and also form a group under
addition. Therefore the integers are a subgroup of the reals under addition.
However the reals without zero also form a group under multiplication, but the
integers (with or without zero) do not. Consequently the integers are not a
subgroup of the reals under multiplication.
Example 1.14 Take G to be the group of all integers under addition, H₁ to
be all even integers under addition, H₂ all multiples of 2² = 4 under
addition, H₃ all multiples of 2³ = 8 under addition, and so on. Then we have

G :  ... −2, −1, 0, 1, 2 ...
H₁ : ... −4, −2, 0, 2, 4 ...
H₂ : ... −8, −4, 0, 4, 8 ...
H₃ : ... −16, −8, 0, 8, 16 ...
Hₙ : ... −2·2ⁿ, −2ⁿ, 0, 2ⁿ, 2·2ⁿ ...
We see that each group is a subgroup of all groups above it, i.e.

G ⊃ H₁ ⊃ H₂ ⊃ ... ⊃ Hₙ ⊃ ...   (1.10)

Moreover there is a one-to-one correspondence between any two groups of this
list such that the composition law is preserved. Therefore all these groups
are isomorphic to one another,

G ≅ H₁ ≅ H₂ ≅ ... ≅ Hₙ ≅ ...   (1.11)
This shows that a group can be isomorphic to one of its proper subgroups. The
same can not happen for finite groups.
[Diagrams (not reproduced): graphical representation of the powers a, a², ..., aⁿ⁻¹ of a cyclic permutation of Sₙ.]
Theorem (Lagrange) The order of a subgroup of a finite group divides the
order of the group.
The proof involves the concept of cosets and it is given in section 1.4. A
finite group of prime order is necessarily a cyclic group and can be
generated from any of its elements other than the identity element.
We say an element g of a group G is conjugate to an element g′ ∈ G if there
exists g̃ ∈ G such that

g = g̃ g′ g̃⁻¹   (1.12)

This concept of conjugate elements establishes an equivalence relation on
the group. Indeed, g is conjugate to itself (just take g̃ = e), and if g is
conjugate to g′, so is g′ conjugate to g (since g′ = g̃⁻¹ g g̃). In addition,
if g is conjugate to g′ and g′ to g″, i.e. g = g̃ g′ g̃⁻¹ and g′ = ĝ g″ ĝ⁻¹,
then g is conjugate to g″, since g = g̃ ĝ g″ ĝ⁻¹ g̃⁻¹.
One can use such equivalence relation to divide the group G into classes.
Definition 1.6 The set of elements of a group G which are conjugate to each
other constitute a conjugacy class of G.
Obviously different conjugacy classes have no common elements. The identity
element e constitutes a conjugacy class by itself in any group. Indeed, if
g′ is conjugate to the identity, e = g̃ g′ g̃⁻¹, then g′ = e.
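Definition 1.6 can be applied mechanically to a small group. The following sketch (not part of the notes) computes the conjugacy classes of S₃ directly from the definition:

```python
# Sketch (not part of the notes): the conjugacy classes of S3 computed directly
# from definition 1.6, with permutations as tuples (p[i] = image of i).
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def conjugacy_class(g):
    """All elements h g h^{-1}, for h running over the group."""
    return frozenset(compose(compose(h, g), inverse(h)) for h in S3)

classes = {conjugacy_class(g) for g in S3}
# S3 splits into 3 classes: {e}, the three transpositions, the two 3-cycles
assert sorted(len(c) for c in classes) == [1, 2, 3]
```

The identity forms a class by itself, as argued above, and the classes partition the group without overlap.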
Given a subgroup H of a group G we can form the set of elements g⁻¹Hg,
where g is any fixed element of G and H stands for any element of the
subgroup H. This set is also a subgroup of G and is said to be a conjugate
subgroup of H in G. In fact the conjugate subgroups of H are all isomorphic
to H, since if h₁, h₂ ∈ H and h₁h₂ = h₃, we have that h′₁ = g⁻¹h₁g and
h′₂ = g⁻¹h₂g satisfy

h′₁h′₂ = g⁻¹h₁gg⁻¹h₂g = g⁻¹h₁h₂g = g⁻¹h₃g = h′₃   (1.13)

It may happen that, for every g ∈ G,

g⁻¹Hg = H   (1.14)

as a set, i.e.

Hg = gH   (1.15)

This means that all conjugate subgroups of H in G are not only isomorphic
to H but are identical to H. In this case we say that the subgroup H is an
invariant subgroup of G. This implies that, given an element h₁ ∈ H we can
find, for any element g ∈ G, an element h₂ ∈ H such that

g⁻¹h₁g = h₂ ,  i.e.  h₁g = gh₂   (1.16)
Therefore we can write

gH = Hg   (1.17)

and say that the invariant subgroup H, taken as an entity, commutes with all
elements of G. The identity element and the group G itself are trivial
examples of invariant subgroups of G. Any subgroup of an abelian group is an
invariant subgroup.
Definition 1.7 We say a group G is simple if its only invariant subgroups
are the identity element and the group G itself. In other words, G is simple if
it has no invariant proper subgroups. We say G is semisimple if none of its
invariant subgroups is abelian.
Example 1.16 Consider the group of non-singular real n×n matrices, which is
generally denoted by GL(n). The matrices of this group with unit determinant
form a subgroup, since if det M = det N = 1 we have det(MN) = 1 and
det M⁻¹ = (det M)⁻¹ = 1. This subgroup of GL(n) is denoted by SL(n). If
g ∈ GL(n) and M ∈ SL(n) we have that g⁻¹Mg ∈ SL(n), since det(g⁻¹Mg) =
det M = 1. Therefore SL(n) is an invariant subgroup of GL(n) and
consequently the latter is not simple. Consider now the matrices of the
form R ≡ x 1l, with x a non-zero real number and 1l the n×n identity
matrix. Notice that such a set of matrices constitutes a subgroup of GL(n),
since the identity belongs to it, the product of any two of them belongs to
the set, and the inverse of R ≡ x 1l is R⁻¹ = (1/x) 1l, which is also an
element of the set. In addition, such subgroup is invariant since any matrix
R commutes with any element of GL(n) and so it is invariant under
conjugation. Since that subgroup is abelian, it follows that GL(n) is not
semisimple.
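The determinant argument can be checked numerically. A sketch (not part of the notes), using a generic 3×3 matrix rescaled to unit determinant:

```python
# Sketch (not part of the notes): numerical check that det(g^{-1} M g) = det M,
# so conjugation keeps unit-determinant matrices inside SL(n).
import numpy as np

rng = np.random.default_rng(0)
n = 3

g = rng.normal(size=(n, n))            # generic element of GL(n)
M = rng.normal(size=(n, n))
d = np.linalg.det(M)
M = M / (np.sign(d) * abs(d) ** (1.0 / n))   # now det M = 1 (n odd)

conj = np.linalg.inv(g) @ M @ g
assert abs(np.linalg.det(M) - 1) < 1e-9
assert abs(np.linalg.det(conj) - 1) < 1e-9   # conjugate stays in SL(n)
```

The invariance of SL(n) rests only on the multiplicativity of the determinant, which is what the assertions exercise.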
Definition 1.8 Given an element g of a group G we can form the set of all
elements of G which commute with g, i.e., all x G such that xg = gx. This
set is called the centralizer of g and it is a subgroup of G.
In order to see it is a subgroup of G, take two elements x1 and x2 of the
centralizer of g, i.e., x1 g = gx1 and x2 g = gx2 . Then it follows that (x1 x2 )g =
x1 (x2 g) = x1 (gx2 ) = g(x1 x2 ). Therefore x1 x2 is also in the centralizer. On the
other hand, if x₁g = gx₁ then

x₁⁻¹(x₁g)x₁⁻¹ = x₁⁻¹(gx₁)x₁⁻¹ ,  i.e.  gx₁⁻¹ = x₁⁻¹g   (1.18)
So the inverse of an element of the centralizer is also in the centralizer. Therefore the centralizer of an element g G is a subgroup of G. Notice that
although all elements of the centralizer commute with a given element g they
do not have to commute among themselves and therefore it is not necessarily
an abelian subgroup of G.
Definition 1.9 The center of a group G is the set of all elements of G which
commute with all elements of G.
We could say that the center of G is the intersection of the centralizers of all
elements of G. The center of a group G is a subgroup of G and it is abelian,
since by definition its elements have to commute with one another. In addition,
it is an (abelian) invariant subgroup.
Example 1.17 The set of all unitary n×n matrices forms a group, called
U(n), under matrix multiplication. That is because if U₁ and U₂ are unitary
(U₁† = U₁⁻¹ and U₂† = U₂⁻¹) then U₃ ≡ U₁U₂ is also unitary. In addition the
inverse of U is just U†, and the identity is the unit n×n matrix. The
unitary matrices with unit determinant constitute a subgroup, because the
product of two of them, as well as their inverses, have unit determinant.
That subgroup is denoted SU(n). It is an invariant subgroup of U(n) because
the conjugation of a matrix of unit determinant by any unitary matrix gives
a matrix of unit determinant, i.e. det(U†MU) = det M = 1, with U ∈ U(n) and
M ∈ SU(n). Therefore, U(n) is not simple. However, it is not semisimple
either, because it has an abelian invariant subgroup constituted by the
matrices R ≡ e^{iθ} 1l, with θ real and 1l the n×n identity matrix. Indeed,
the product of any two R's is again in the set of matrices R, and the
inverse of R is R⁻¹ = e^{−iθ} 1l, also a matrix in the set. Notice the
subgroup constituted by the matrices R is isomorphic to U(1), the group of
1×1 unitary matrices, i.e. phases e^{iθ}. Since the matrices R commute with
any unitary matrix, they are invariant under conjugation by elements of
U(n). Therefore, the subgroup U(1) is an abelian invariant subgroup of
U(n), and so U(n) is not semisimple. The subgroup U(1) is in fact the
center of U(n), i.e. the set of matrices commuting with all unitary
matrices. Notice that such U(1) is not a subgroup of SU(n), since its
elements do not have unit determinant. However, the discrete subset of
matrices e^{2πim/n} 1l with m = 0, 1, 2, ..., (n−1) have unit determinant
and belong to SU(n). They certainly commute with all n×n matrices, and
constitute the center of SU(n). Those matrices form an abelian invariant
subgroup of SU(n), which is isomorphic to Zₙ. Therefore, SU(n) is not
semisimple.
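The discrete center just described can be exhibited numerically. This sketch (not part of the notes) builds the n matrices e^{2πim/n} 1l for n = 3 and checks the claimed properties:

```python
# Sketch (not part of the notes): the matrices exp(2*pi*i*m/n) * 1l belong to
# SU(n) and commute with everything, realizing the Z_n center.
import numpy as np

n = 3
center = [np.exp(2j * np.pi * m / n) * np.eye(n) for m in range(n)]

rng = np.random.default_rng(1)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # arbitrary matrix

for R in center:
    assert abs(np.linalg.det(R) - 1) < 1e-9   # unit determinant
    assert np.allclose(R @ A, A @ R)          # commutes with any matrix

# and they close under multiplication into Z_n
assert np.allclose(center[1] @ center[2], center[0])
```

The phases multiply as exp(2πi(m+k)/n), i.e. the labels m add modulo n, which is exactly the Zₙ structure of example 1.10.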
1.3 Direct Products
1.4 Cosets

Given a subgroup H of a group G and an element g ∈ G, the set
gH ≡ {gh, h ∈ H} is called a (left) coset of H in G. When H is an invariant
subgroup, the set of cosets forms a group, called the factor group or the
quotient group, and denoted G/H. In order to show this we consider the
product of two elements of two different cosets. We get
gh₁ g′h₂ = gg′ g′⁻¹h₁g′ h₂ = gg′ h₃h₂   (1.24)

where we have used the fact that H is invariant, and therefore there exists
h₃ ∈ H such that g′⁻¹h₁g′ = h₃. Thus we have obtained an element of a third
coset, namely gg′H. If we had taken any other elements of the cosets gH and
g′H, their product would produce an element of the same coset gg′H.
Consequently we can introduce, in a well defined way, the product of
elements of the coset space G/H, namely

gH g′H ≡ gg′H   (1.25)

The invariant subgroup H plays the role of the identity element since

(gH)H = H(gH) = gH   (1.26)

and the inverse of the coset gH is the coset g⁻¹H, since

(g⁻¹H)(gH) = (gH)(g⁻¹H) = H   (1.27)
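The well-definedness of the coset product (1.25) can be tested exhaustively on a small example. A sketch (not part of the notes), using the invariant subgroup {0, 3} of Z₆:

```python
# Sketch (not part of the notes): the factor group Z6 / {0, 3}, with the coset
# product (1.25) checked to be well defined.
H = {0, 3}                      # invariant subgroup of Z6 (Z6 is abelian)
cosets = [frozenset((g + h) % 6 for h in H) for g in range(6)]
distinct = set(cosets)          # three distinct cosets: {0,3}, {1,4}, {2,5}
assert len(distinct) == 3

def coset_product(c1, c2):
    """Product of cosets: the coset containing all pairwise sums."""
    products = {(a + b) % 6 for a in c1 for b in c2}
    matches = [c for c in distinct if products == c]
    assert len(matches) == 1    # well defined: all products land in one coset
    return matches[0]

e = frozenset(H)
for c in distinct:
    assert coset_product(e, c) == c   # H is the identity of G/H, eq. (1.26)
```

Here G/H has three elements and is itself isomorphic to Z₃, with H playing the role of the identity coset.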
1.5 Representations
Consider a vector space V and a set of operators D mapping V into itself,

| v⟩ → | v′⟩ = D | v⟩   (1.32)

We can define the product of these operators by the composition of their
action, i.e., an operator D₃ is the product of two other operators D₁ and D₂
if

D₁(D₂ | v⟩) = D₁ | v′⟩ = D₃ | v⟩   (1.33)

for every | v⟩ ∈ V. Suppose that these operators form a group under this
product law. We call it an operator group or group of transformations.
If we can associate to each element g of an abstract group G an operator,
which we shall denote D(g), such that the group structure of G is preserved,
i.e., if for g, g′ ∈ G we have

D(g)D(g′) = D(gg′)   (1.35)

then we say that such set of operators is a representation of the abstract
group G in the representation space V. In fact, the mapping between the
operator group D and the abstract group G is a homomorphism. In addition to
eq.(1.35) one also has that

D(g⁻¹) = D⁻¹(g) ;  D(e) = 1   (1.36)
Example 1.22 The unit matrix of any order is a trivial representation of any
group. Indeed, if we associate all elements of a given group to the operator
1, the relation 1·1 = 1 reproduces the composition law of the group,
gg′ = g″. This is an example of an extremely non-faithful representation.
When the operators D are linear operators, i.e.,

D(| v⟩ + | v′⟩) = D | v⟩ + D | v′⟩
D(a | v⟩) = a D | v⟩   (1.37)

with | v⟩, | v′⟩ ∈ V and a being a c-number, we say they form a linear
representation of G.
Given a basis | vᵢ⟩ (i = 1, 2, ..., n) of the vector space V (of dimension
n) we can construct the matrix representatives of the operators D of a given
representation. The action of an operator D on an element | vᵢ⟩ of the basis
produces an element of the vector space which can be written as a linear
combination of the basis,

D | vᵢ⟩ = | vⱼ⟩ D_{ji}   (1.38)

The coefficients D_{ji} of this expansion constitute the matrix
representative of the operator D. Indeed, we have

D′(D | vᵢ⟩) = D′ | vⱼ⟩ D_{ji} = | v_k⟩ D′_{kj} D_{ji} = | v_k⟩ (D′D)_{ki}   (1.39)

So, we can now associate to the matrix D_{ij} the element of the abstract
group that is associated to the operator D. We have then what is called a
matrix representation of the abstract group. Notice that the matrices in
each representation have to be non-singular because of the existence of the
inverse element. In addition the unit element e is always represented by the
unit matrix, i.e., D_{ij}(e) = δ_{ij}.
Example 1.23 In example 1.9 we have defined the group Sₙ. We can construct
a representation for this group in terms of n×n matrices as follows: take a
vector space Vₙ and let | vᵢ⟩, i = 1, 2, ..., n, be a basis of Vₙ. One can
define n! operators which, acting on the basis, permute its elements,
reproducing the n! permutations of n elements. Using (1.38) one then obtains
the matrices. For instance, in the case of S₃, consider the matrices (rows
separated by semicolons)

D(a₀) = ( 1 0 0 ; 0 1 0 ; 0 0 1 )    D(a₁) = ( 0 1 0 ; 1 0 0 ; 0 0 1 )
D(a₂) = ( 1 0 0 ; 0 0 1 ; 0 1 0 )    D(a₃) = ( 0 0 1 ; 0 1 0 ; 1 0 0 )
D(a₄) = ( 0 1 0 ; 0 0 1 ; 1 0 0 )    D(a₅) = ( 0 0 1 ; 1 0 0 ; 0 1 0 )   (1.40)
Taking the basis

| v₁⟩ = ( 1, 0, 0 )ᵀ ;  | v₂⟩ = ( 0, 1, 0 )ᵀ ;  | v₃⟩ = ( 0, 0, 1 )ᵀ   (1.42)
one can check that the matrices given above indeed permute the basis
elements,

D(aₘ) | v_k⟩ = | v_l⟩ D_{lk}(aₘ)   (1.43)
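The homomorphism property (1.35) for these permutation matrices can be verified exhaustively. A sketch (not part of the notes), with each permutation p giving the matrix D[p[i], i] = 1:

```python
# Sketch (not part of the notes): the six matrices of eq. (1.40) as permutation
# matrices, checked to multiply like a representation, D(a)D(b) = D(ab).
import numpy as np
from itertools import permutations

def D(p):
    """Matrix of the operator sending |v_i> to |v_{p(i)}>: D[p[i], i] = 1."""
    m = np.zeros((3, 3))
    for i, pi in enumerate(p):
        m[pi, i] = 1
    return m

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))
for p in S3:
    for q in S3:
        # homomorphism property, eq. (1.35)
        assert np.array_equal(D(p) @ D(q), D(compose(p, q)))
```

The same construction works for any n, producing the n-dimensional permutation representation of Sₙ.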
Two representations D and D′ of an abstract group G are said to be
equivalent representations if there exists an operator C such that

D′(g) = C D(g) C⁻¹   (1.46)

with C being the same for every g ∈ G. Such a thing happens, for instance,
when one changes the basis of the representation,

| v′ᵢ⟩ = | vⱼ⟩ Δ_{ji}   (1.47)

Then

D(g) | v′ᵢ⟩ = | v′ⱼ⟩ D′_{ji}(g)
           = | v_k⟩ D_{kl}(g) Δ_{li}
           = | v_n⟩ Δ_{nj} Δ⁻¹_{jk} D_{kl}(g) Δ_{li}
           = | v′ⱼ⟩ Δ⁻¹_{jk} D_{kl}(g) Δ_{li}   (1.48)

so that D′(g) = Δ⁻¹ D(g) Δ.
A representation D is said to be reducible if it leaves a proper subspace V₁
of V invariant. Choosing a basis in which the first elements generate V₁,
the matrices of a reducible representation take the block form (rows
separated by semicolons)

D(g) = ( A C ; 0 B )   (1.50)

Indeed,

( A C ; 0 B ) ( v₁ ; 0 ) = ( A v₁ ; 0 )   (1.51)

i.e., V₁ does not mix with the rest of V. The subspace V₂ of V generated by
the last n elements of the basis is not invariant since

( A C ; 0 B ) ( 0 ; v₂ ) = ( C v₂ ; B v₂ )   (1.52)

When C = 0 the representation splits into two independent representations A
and B acting on V₁ and V₂,

D(g) = ( A 0 ; 0 B )   (1.53)

and the representation is then said to be completely reducible.
Lemma 1.1 (Schur) Any matrix which commutes with all matrices of a given
irreducible representation of a group G must be a multiple of the unit
matrix.

Proof Let A be a matrix that commutes with all matrices D(g) of a given
irreducible representation of G, i.e.

A D(g) = D(g) A   (1.54)

Let λ be an eigenvalue of A. Then A − λ1l also satisfies

(A − λ1l) D(g) = D(g) (A − λ1l)   (1.55)

and so the null space of A − λ1l is an invariant subspace of the
representation. Since λ is an eigenvalue, that null space is non-trivial,
and irreducibility then forces it to be the whole representation space.
Therefore

A = λ 1l   (1.56)
Consider now a representation D of a finite group G of order N, and define
the matrix

H ≡ (1/N) Σ_{g∈G} D†(g) D(g)   (1.57)

For any g′ ∈ G

D†(g′) H D(g′) = (1/N) Σ_{g∈G} D†(gg′) D(gg′) = H   (1.58)

since gg′ runs over the whole group as g does. The matrix H is hermitian,
and can be diagonalized by a unitary matrix U,

H′ ≡ U† H U   (1.59)

For an arbitrary vector v′ one has

v′† H′ v′ = Σ_{i=1}^{d} H′ᵢᵢ | v′ᵢ |²   (1.60)

where v′ᵢ are the components of v′. Since the v′ᵢ are arbitrary we conclude
that each entry H′ᵢᵢ of H′ is real and positive. We then define a diagonal
real matrix h with entries hᵢᵢ = √(H′ᵢᵢ), i.e. H′ = hh. Therefore

H = U H′ U† = U h h U† ≡ S S†   (1.61)

with S ≡ U h. Defining now the equivalent representation

D′(g) ≡ S⁻¹ D(g) S   (1.62)

and using (1.58), applied to g⁻¹, in the form

D(g) (S S†) D†(g) = S S†   (1.63)

one gets

D′(g) D′†(g) = S⁻¹ D(g) S S† D†(g) (S†)⁻¹ = 1l   (1.64)

i.e. D′ is unitary. Therefore every representation of a finite group is
equivalent to a unitary representation.
An important concept in representation theory is that of the character of a
group element. The character of g ∈ G in a representation D is defined as

χ(g) ≡ Tr(D(g)) = Σ_{i=1}^{dim D} Dᵢᵢ(g)   (1.67)
Obviously, the characters of a given group element in two equivalent
representations are the same, since from (1.46)

χ′(g) = Tr(C D(g) C⁻¹) = Tr(D(g)) = χ(g)   (1.68)

Analogously, the elements of a given conjugacy class have the same
character. Indeed, from definition 1.6, if two elements g′ and g″ are
conjugate, g′ = g̃ g″ g̃⁻¹, then in any representation D one has
Tr(D(g′)) = Tr(D(g″)). Nothing prevents, however, the elements of two
different conjugacy classes from having the same character in some
particular representation. In fact, this happens in the representation
discussed in example 1.22.
The characters of the inequivalent irreducible representations of a finite
group G satisfy the orthogonality relation

(1/N(G)) Σ_{g∈G} χ^D(g) (χ^{D′}(g))* = δ_{DD′}   (1.70)

where N(G) is the order of G, δ_{DD′} = 1 if D and D′ are equivalent
representations and δ_{DD′} = 0 otherwise.
Theorem 1.5 A sufficient condition for two representations of a finite group
G to be equivalent is the equality of their character systems.

Theorem 1.6 The number of times n_D that an irreducible representation D
appears in a given reducible representation D′ of a finite group G is given
by

n_D = (1/N(G)) Σ_{g∈G} χ^{D′}(g) (χ^D(g))*   (1.71)

Theorem 1.7 A representation D of a finite group G is irreducible if and
only if

(1/N(G)) Σ_{g∈G} | χ^D(g) |² = 1   (1.72)
Theorem 1.8 The sum of the squares of the dimensions of the inequivalent
irreducible representations of a finite group G is equal to the order of G.
Theorem 1.9 The number of inequivalent irreducible representations of a finite group G is equal to the number of conjugacy classes of G.
For the proofs see [COR 84].
Definition 1.14 If all the matrices of a representation are real the representation is said to be real.
Notice that if D is a matrix representation of a group G, then the
complex-conjugate matrices D*(g), g ∈ G, also constitute a representation
of G of the same dimension as D, since

D(g)D(g′) = D(gg′)  implies  D*(g)D*(g′) = D*(gg′)   (1.73)
If D is equivalent to a real representation D_R, then D is equivalent to D*.
The reason is that there exists a matrix C such that

D_R(g) = C D(g) C⁻¹   (1.74)

and so

D_R(g) = D_R*(g) = C* D*(g) (C*)⁻¹   (1.75)

Therefore

D*(g) = (C⁻¹C*)⁻¹ D(g) (C⁻¹C*)   (1.76)

and D is equivalent to D*. However the converse is not always true, i.e., if
D is equivalent to D* it does not mean D is equivalent to a real
representation. So we classify the representations into three classes
regarding the relation between D and D*.
Definition 1.15
1. If D is equivalent to a real representation it is said to be potentially
real.
2. If D is equivalent to D* but not equivalent to a real representation it
is said to be pseudo real.
3. If D is not equivalent to D* then it is said to be essentially complex.

Notice that if D is potentially real or pseudo real then its characters are
real.
Example 1.24 The rotation group on the plane, denoted SO(2), can be
represented by the matrices (rows separated by semicolons)

R(θ) = ( cos θ  sin θ ; −sin θ  cos θ )   (1.77)

such that

R(θ) ( x ; y ) = ( x cos θ + y sin θ ; −x sin θ + y cos θ )   (1.78)

One can easily check that R(θ)R(φ) = R(θ + φ). This group is abelian and
according to corollary 1.2 such representation is reducible. Indeed, one
gets

M R(θ) M⁻¹ = ( e^{−iθ}  0 ; 0  e^{iθ} )   (1.79)

where

M = (1/√2) ( 1  i ; i  1 )   (1.80)

which maps the basis as

M ( x ; y ) = (1/√2) ( x + iy ; ix + y )   (1.81)
Consider now the three-dimensional representation of S₃ given in example
1.23. Its characters are

χ(a₀) = 3 ;  χ(a₁) = χ(a₂) = χ(a₃) = 1 ;  χ(a₄) = χ(a₅) = 0   (1.83)

Therefore

(1/6) Σ_{i=0}^{5} | χ(aᵢ) |² = 2   (1.84)
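The character sum (1.84) is a short computation. This sketch (not part of the notes) takes the traces of the matrices in (1.40):

```python
# Sketch (not part of the notes): the characters of the three-dimensional
# representation (1.40) of S3, and the sum (1.84) used with theorem 1.7.
import numpy as np

D = {
    'a0': np.eye(3),
    'a1': np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]]),
    'a2': np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]]),
    'a3': np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]]),
    'a4': np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]),
    'a5': np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]]),
}

chars = {k: np.trace(m) for k, m in D.items()}
assert [chars[k] for k in sorted(chars)] == [3, 1, 1, 1, 0, 0]

# (1/6) sum |chi|^2 = 2, so the representation contains two irreducible pieces
assert sum(abs(c) ** 2 for c in chars.values()) / 6 == 2
```

Since the sum equals 2 rather than 1, theorem 1.7 says the representation is reducible, as discussed next.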
From theorem 1.7 one sees that such 3-dimensional representation is not
irreducible. Indeed, the one-dimensional subspace generated by the vector

| w₃⟩ = (1/√3) ( 1, 1, 1 )ᵀ   (1.85)

is invariant under all the matrices D(aₘ), since these just permute the
components. The orthogonal complement of this subspace is generated by

| w₁⟩ = (1/√2) ( 1, −1, 0 )ᵀ ;  | w₂⟩ = (1/√6) ( 1, 1, −2 )ᵀ   (1.86)
These vectors define a change of basis

| wᵢ⟩ = | vⱼ⟩ M_{ji}   (1.87)

where i, j = 1, 2, 3 and (rows separated by semicolons)

M = ( 1/√2  1/√6  1/√3 ; −1/√2  1/√6  1/√3 ; 0  −2/√6  1/√3 )   (1.88)

In the new basis the matrices of the representation,

D′(aₘ) = M⁻¹ D(aₘ) M   (1.89)

take the block-diagonal form

D′(aₘ) = ( D″(aₘ)  0 ; 0  1 )   (1.90)

where D″(aₘ) are the 2×2 matrices

D″(a₀) = ( 1  0 ; 0  1 )             D″(a₁) = ( −1  0 ; 0  1 )
D″(a₂) = ( 1/2  √3/2 ; √3/2  −1/2 )    D″(a₃) = ( 1/2  −√3/2 ; −√3/2  −1/2 )
D″(a₄) = ( −1/2  √3/2 ; −√3/2  −1/2 )   D″(a₅) = ( −1/2  −√3/2 ; √3/2  −1/2 )   (1.91)
The characters of this two-dimensional representation are

χ″(a₀) = 2 ;  χ″(a₁) = χ″(a₂) = χ″(a₃) = 0 ;  χ″(a₄) = χ″(a₅) = −1   (1.92)
Therefore

(1/6) Σ_{i=0}^{5} | χ″(aᵢ) |² = 1   (1.93)

and, by theorem 1.7, the two-dimensional representation D″ is irreducible.
Chapter 2
Lie Groups and Lie Algebras
2.1 Lie groups
Let the elements of a group G be parametrized by a set of coordinates x, so
that an element is written g(x). The product of two elements is an element
g(x″),

g(x) g(x′) = g(x″)   (2.1)

with

x″ = F(x, x′)   (2.2)

The inverse of g(x) is some element g(x′),

g⁻¹(x) = g(x′)   (2.3)

with

x′ = f(x)   (2.4)
If the elements of a group G form a topological space, and if the functions
F(x, x′) and f(x) are continuous functions of their arguments, then we say
that G is a topological group. Notice that in a topological group we have to
have some compatibility between the algebraic and the topological
structures.
When the elements of a group G constitute a manifold, and when the functions
F(x, x′) and f(x) discussed above possess derivatives of all orders with
respect to their arguments, i.e., are analytic functions, we say the group G
is a Lie group. This definition can be given in a formal way.
Definition 2.1 A Lie group is an analytic manifold which is also a group,
such that the analytic structure is compatible with the group structure,
i.e. the group operation G × G → G is an analytic mapping.
For more details about the geometrical concepts involved here see [HEL 78,
CBW 82, ALD 86, FLA 63].
Example 2.1 The real numbers under addition constitute a Lie group. Indeed,
we can use a real variable x to parametrize the group elements. Therefore
for two elements with parameters x and x′ the function in (2.2) is given by

x″ = F(x, x′) = x + x′   (2.5)

and the inverse function in (2.4) is

f(x) = −x   (2.6)

Both are clearly analytic.
Example 2.2 The group of rotations on the plane, discussed in example 1.24,
is a Lie group. In fact the groups of rotations in ℝⁿ, denoted by SO(n),
are Lie groups. These are the groups of orthogonal n×n real matrices O with
unit determinant (OᵀO = 1l, det O = 1).

Example 2.3 The groups GL(n) and SL(n) discussed in example 1.16 are Lie
groups, as well as the group SU(n) discussed in example 1.17.

Example 2.4 The groups Sₙ and Zₙ, discussed in examples 1.9 and 1.10, are
not Lie groups.
2.2 Lie Algebras
The fact that Lie groups are differentiable manifolds has very important
consequences. Manifolds are locally Euclidean spaces. Using the
differentiable structure we can approximate the neighborhood of any point of
a Lie group G by a Euclidean space, namely the tangent space to the Lie
group at that particular point. This approximation is a sort of local
linearization of the Lie group, and it is the approach we are going to use
in our study of the algebraic structure of Lie groups. Obviously this
approach does not tell us much about the global properties of Lie groups.
Let us begin by making some comments about tangent planes and tangent
vectors. A convenient way of describing tangent vectors is through linear
operators acting on functions. Consider a differentiable curve on a
manifold M and let the coordinates xⁱ, i = 1, 2, ..., dim M, of its points
be parametrized by a continuous variable t varying, let us say, from −1 to
1. Let f be any differentiable function defined on a neighbourhood of the
point p of the curve corresponding to t = 0. The vector V_p tangent to the
curve at the point p is defined by

V_p(f) = (dxⁱ(t)/dt)|_{t=0} ∂f/∂xⁱ   (2.7)

Since the function f is arbitrary, the tangent vector itself is independent
of f. The vector V_p is a tangent vector to M at the point p.
The tangent vectors at p to all differentiable curves passing through p form
the tangent space Tp M of the manifold M at the point p. This space is a
vector space since the sum of tangent vectors is again a tangent vector and the
multiplication of a tangent vector by a scalar (real or complex number) is also
a tangent vector.
Consider now two vector fields, written in a coordinate basis as
V = V^i ∂/∂x^i   (2.8)
W = W^i ∂/∂x^i   (2.9)
The composition of W and V, acting on a differentiable function f, is
W V f = W^j (∂V^i/∂x^j) ∂f/∂x^i + W^j V^i ∂²f/∂x^j ∂x^i   (2.10)
Due to the second term on the r.h.s of (2.10) the operator W V is not a vector
field and therefore the ordinary composition of vector fields is not a vector
field. However if we take the commutator of the linear operators V and W we
get
[V, W] = (V^i ∂W^j/∂x^i − W^i ∂V^j/∂x^i) ∂/∂x^j   (2.11)
and this is again a vector field. So, the set of vector fields closes under the operation of commutation, and forms what is called a Lie algebra.
Definition 2.2 A Lie algebra G is a vector space over a field k with a bilinear composition law
(x, y) → [x, y]
[x, ay + bz] = a[x, y] + b[x, z]   (2.12)
with x, y, z ∈ G and a, b ∈ k, and such that
1. [x, x] = 0
2. [x, [y, z]] + [z, [x, y]] + [y, [z, x]] = 0; (Jacobi identity)
Notice that, together with property 1., (2.12) implies [x, y] = −[y, x], since
0 = [x + y, x + y] = [x, y] + [y, x]   (2.13)
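Both axioms are satisfied by the commutator of matrices, which is the example to keep in mind. A minimal sketch in plain Python (not from the text), checking the two axioms of Definition 2.2 on concrete integer matrices:

```python
# Commutator [x, y] = x y - y x on square matrices (nested lists): it
# satisfies [x, x] = 0 and obeys the Jacobi identity.

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def comm(a, b):
    ab, ba = mul(a, b), mul(b, a)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(ab, ba)]

x = [[1, 2], [3, 4]]
y = [[0, 1], [1, 0]]
z = [[2, -1], [0, 1]]
zero = [[0, 0], [0, 0]]

assert comm(x, x) == zero                      # property 1: [x, x] = 0
jacobi = add(comm(x, comm(y, z)),
             add(comm(z, comm(x, y)), comm(y, comm(z, x))))
assert jacobi == zero                          # property 2: Jacobi identity
```

The same check works for matrices of any size, since the Jacobi identity is an algebraic consequence of associativity of the matrix product.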
Definition 2.3 A field k is a set of elements with two composition laws,
(a, b) → a + b   (2.14)
and
(a, b) → ab   (2.15)
called respectively addition and multiplication, such that
1. k is an abelian group under addition
2. k without the identity element of addition is an abelian group under multiplication
3. multiplication is distributive with respect to addition, i.e.
a (b + c) = ab + ac
(a + b) c = ac + bc
The real and complex numbers are fields.
2.3
We have seen that vector fields on a manifold form a Lie algebra. We now
want to show that the Lie algebra of some special vector fields on a Lie group
is related to its group structure.
If we take a fixed element g of a Lie group G and multiply it from the left
by every element of G, we obtain a transformation of G onto G which is called
left translation on G by g. In a similar way we can define right translations
on G. Under a left translation by g, an element g′, which is parametrized by the coordinates x′^i (i = 1, 2, ..., dim G), is mapped into the element g′′ = g g′, and the parameters x′′^i of g′′ are analytic functions of x′^i. This mapping of G onto G induces a mapping between the tangent spaces of G as follows: let V be a vector field on G which corresponds to the tangent vectors V_{g′} and V_{g′′} on the tangent spaces to G at g′ and g′′ respectively. Let f be an arbitrary function of the parameters x′′^i of g′′. We define a tangent vector W_{g′′} on T_{g′′}G (the tangent plane to G at g′′) by
W_{g′′} f ≡ V_{g′}(f ∘ x′′) = V^i_{g′} ∂f(x′′(x′))/∂x′^i = V^i_{g′} (∂x′′^j/∂x′^i) ∂f/∂x′′^j   (2.16)
This defines a mapping between the tangent spaces of G since, given V_{g′} in T_{g′}G, we have associated a tangent vector W_{g′′} in T_{g′′}G. The vector W_{g′′} does not necessarily coincide with the value of the vector field V at g′′, namely V_{g′′}. However, when that happens we say that the vector field V is a left invariant vector field on G, since the transformation was induced by left translations on G.
The commutator of two left invariant vector fields, V and W, is again a left invariant vector field. To check this, consider the commutator of these vector fields at the group element g′′. Left invariance means that V^j_{g′′} = V^k_{g′} ∂x′′^j/∂x′^k, and similarly for W. According to (2.11),
[V, W]^j_{g′′} = V^i_{g′′} ∂W^j_{g′′}/∂x′′^i − W^i_{g′′} ∂V^j_{g′′}/∂x′′^i
 = V^k_{g′} (∂/∂x′^k)( W^l_{g′} ∂x′′^j/∂x′^l ) − W^k_{g′} (∂/∂x′^k)( V^l_{g′} ∂x′′^j/∂x′^l )   (2.17)
 = ( V^k_{g′} ∂W^l_{g′}/∂x′^k − W^k_{g′} ∂V^l_{g′}/∂x′^k ) ∂x′′^j/∂x′^l
 = [V, W]^l_{g′} ∂x′′^j/∂x′^l   (2.18)
where the terms containing the second derivatives ∂²x′′^j/∂x′^k ∂x′^l cancelled by symmetry in k, l. So, [V, W] is also left invariant. Therefore the set of left invariant vector fields form
a Lie algebra. They constitute in fact a Lie subalgebra of the Lie algebra of
all vector fields on G.
Definition 2.4 A vector subspace H of a Lie algebra G is said to be a Lie
subalgebra of G if it closes under the Lie bracket, i.e.
[H, H] ⊂ H   (2.19)
The left invariant vector fields form a finite-dimensional vector space, of dimension dim G, and we can choose a basis Ta, a = 1, 2, ..., dim G, satisfying
[Ta, Tb] = i f_ab^c Tc   (2.20)
The quantities f_ab^c are called the structure constants; these constants contain all the information about the Lie algebra of G. Since
the relation above is point independent we are going to fix the tangent plane
to G at the identity element, Te G, as the vector space of the Lie algebra of G.
We could have defined right invariant vector fields in a similar way. Their Lie
algebra is isomorphic to the Lie algebra of the left-invariant fields.
A one parameter subgroup of a Lie group G is a differentiable curve, i.e., a
differentiable mapping from the real numbers onto G, t → g(t), such that
g(t)g(s) = g(t + s)
g(0) = e
(2.21)
(2.22)
One can show that every one parameter subgroup is of the form g(t) = exp(tT) for some T ∈ Te G. This means that the straight line tT on the tangent plane to G at the identity element, Te G, is mapped onto the one parameter subgroup g(t) of G. This map is called the exponential mapping of the Lie algebra of G (Te G) onto G. In fact,
it is possible to prove that in general, the exponential mapping is an analytic
mapping of Te G onto G and that it maps a neighbourhood of the zero element
of Te G in a one to one manner onto a neighbourhood of the identity element
of G. In several cases this mapping can be extended globally on G.
For more details about the exponential mapping and other geometrical
concepts involved here see [HEL 78, ALD 86, CBW 82, AUM 77].
2.4
In the last section we have seen that the Lie algebra G of a Lie group G possesses a basis Ta, a = 1, 2, ..., dim G, satisfying
[Ta, Tb] = i f_ab^c Tc   (2.23)
where the quantities f_ab^c are called the structure constants of the algebra. We have introduced the imaginary unit i on the r.h.s of (2.23) because if the generators Ta are hermitian, Ta† = Ta, then the structure constants are real. Notice that f_ab^c = −f_ba^c. From the definition of Lie algebra given in section 2.2 we have that the generators Ta satisfy the Jacobi identity
[Ta, [Tb, Tc]] + [Tc, [Ta, Tb]] + [Tb, [Tc, Ta]] = 0   (2.24)
which, in terms of the structure constants, reads
f_bc^d f_ad^e + f_ab^d f_cd^e + f_ca^d f_bd^e = 0   (2.25)
with sum over repeated indices. We have also seen that the elements g of G
close to the identity element can be written, using the exponential mapping,
as
g = exp(i θ^a Ta)   (2.26)
where θ^a are the parameters of the Lie group. Under certain circumstances this relation is also true for elements quite far away from the identity element (which corresponds to θ^a = 0).
If we conjugate elements of the Lie algebra by elements of the Lie group
we obtain elements of the Lie algebra again. Indeed, if L and T are elements
of the algebra one gets
exp(L) T exp(−L) = T + [L, T] + (1/2!) [L, [L, T]] + (1/3!) [L, [L, [L, T]]] + ...   (2.27)
Indeed, defining f(t) ≡ exp(tL) T exp(−tL) and the operator
ad_L T ≡ [L, T]   (2.28)
one has
f′ = [L, f]
f′′ = [L, [L, f]]
...
f^(n) = ad_L^n f   (2.29)
and Taylor expanding f(t) around t = 0 gives
exp(L) T exp(−L) = Σ_{n=0}^∞ (1/n!) ad_L^n T   (2.30)
In particular, the conjugation of a basis element Ta by a group element g gives again an element of the algebra, which we write as
g Ta g^{−1} = Tb d^b_a(g)   (2.31)
One can easily check that the n × n matrices d^b_a(g), n = dim G, form a representation of G, since if we take the element g1 g2 we get
(g1 g2) Ta (g1 g2)^{−1} = Tb d^b_a(g1 g2)
 = g1 (g2 Ta g2^{−1}) g1^{−1}
 = g1 Tc g1^{−1} d^c_a(g2)
 = Tb d^b_c(g1) d^c_a(g2)   (2.32)
and therefore
d^b_a(g1 g2) = d^b_c(g1) d^c_a(g2)   (2.33)
From the definition (2.31) we see that the dimension of the adjoint representation d(g) of G is equal to the dimension of G. It is a real representation in the sense that the entries of the matrices d(g) are real.
Notice that the conjugation defines a mapping of the Lie algebra G into itself which respects the commutation relations. Defining σ_g : G → G by
σ_g(T) ≡ g T g^{−1}   (2.34)
we have
σ_g([T, T′]) = g [T, T′] g^{−1} = [g T g^{−1}, g T′ g^{−1}]   (2.35)
 = [σ_g(T), σ_g(T′)]   (2.36)
for any T, T′ ∈ G. Such a mapping is called an automorphism of G. The mapping (2.34), given by conjugation, is called an inner automorphism. All other automorphisms, which are not conjugations, are called outer automorphisms.
If g is an element of G infinitesimally close to the identity, its parameters in (2.26) are very small, θ^a = ε^a, and we can write
g = 1 + i ε^a Ta   (2.37)
Then, to first order in ε^a,
g Tb g^{−1} = Tc d^c_b(1 + i ε^a Ta) = Tc (δ^c_b + i ε^a d^c_b(Ta))   (2.38)
and also, expanding the l.h.s. directly,
g Tb g^{−1} = Tb + i ε^a [Ta, Tb] = Tb − ε^a f_ab^c Tc   (2.39)
Comparing (2.38) with (2.39) we get d^c_b(Ta) = i f_ab^c.
Therefore in the adjoint representation the matrices representing the generators are given by the structure constants of the algebra. This defines a matrix
representation of the Lie algebra. In fact, whenever one has a matrix representation of a Lie group one gets, through the exponential mapping, a matrix
representation of the corresponding Lie algebra.
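As a concrete check (a sketch, not from the text), for su(2) the structure constants are f_ab^c = ε_abc, and one can build the adjoint matrices d^c_b(Ta) = i f_ab^c and verify that they themselves satisfy (2.23):

```python
# Build the su(2) adjoint matrices d(T_a)^c_b = i f_ab^c, with f_abc = eps_abc,
# and check [d(T_a), d(T_b)] = i f_ab^c d(T_c).

def eps(a, b, c):
    # Levi-Civita symbol for indices 0, 1, 2
    return (a - b) * (b - c) * (c - a) // 2

def mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def comm(m, n):
    mn, nm = mul(m, n), mul(n, m)
    return [[x - y for x, y in zip(r1, r2)] for r1, r2 in zip(mn, nm)]

# d[a] is the matrix d(T_a); row index c, column index b
d = [[[1j * eps(a, b, c) for b in range(3)] for c in range(3)] for a in range(3)]

for a in range(3):
    for b in range(3):
        rhs = [[sum(1j * eps(a, b, c) * d[c][i][j] for c in range(3))
                for j in range(3)] for i in range(3)]
        assert comm(d[a], d[b]) == rhs   # (2.23) in the adjoint representation
```

All entries are exact Gaussian integers, so the equality check needs no numerical tolerance.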
The concept of representation of a Lie algebra is basically the same as the
one we discussed in section 1.5 for the case of groups. The representation
theory of Lie algebras will be discussed in more details later, but here we give
the formal definition.
Definition 2.7 If one can associate to every element T of a Lie algebra G an n × n matrix D(T) such that
1. D(T + T′) = D(T) + D(T′)
2. D(aT) = aD(T)
3. D([T, T′]) = [D(T), D(T′)]
for T, T′ ∈ G and a being a c-number, then we say that the matrices D define an n-dimensional matrix representation of G.
Notice that given an element T of a Lie algebra G, one can define a transformation of G as
T : G → G′ = [T, G]   (2.40)
Using the Jacobi identity one can easily verify that the commutator of the composition of two such transformations reproduces the Lie bracket operation on G, i.e.
[T, [T′, G]] − [T′, [T, G]] = [[T, T′], G]   (2.41)
Therefore such transformations define a representation of G on G, which is called the adjoint representation of G. Obviously, it has the same dimension as G. Introducing the coefficients d^b_a(T) as
[T, Ta] ≡ Tb d^b_a(T)   (2.42)
we have, in particular, d^b_a(Tc) = i f_ca^b, in accordance with (2.39).   (2.44)
Given a representation D of G we define the symmetric bilinear trace form
η_D(T, T′) ≡ Tr(D(T) D(T′))   (2.45)
which is invariant under conjugation by group elements,
η_D(g T g^{−1}, g T′ g^{−1}) = η_D(T, T′)   (2.48)
The trace form evaluated in the adjoint representation, η_ab ≡ Tr(d(Ta) d(Tb)), is called the Killing form of G.
Definition 2.8 A Lie algebra is said to be abelian if all its elements commute
with one another.
In this case all the structure constants vanish and consequently the Killing
form is zero. However there might exist some representation D of an abelian
algebra for which the bilinear form (2.45) is not zero.
Definition 2.9 A subalgebra H of G is said to be an invariant subalgebra (or ideal) if
[H, G] ⊂ H   (2.50)
From (2.27) we see the Lie algebra of an invariant subgroup of a group G is
an invariant subalgebra of the Lie algebra of G.
Definition 2.10 We say a Lie algebra G is simple if it has no invariant subalgebras, except zero and itself, and it is semisimple if it has no invariant abelian
subalgebras.
Theorem 2.1 (Cartan) A Lie algebra G is semisimple if and only if its Killing form is nondegenerate, i.e.
det | Tr(d(Ta) d(Tb)) | ≠ 0.   (2.51)
Equivalently, the Killing form is nondegenerate if
Tr(d(T) d(T′)) = 0   (2.52)
for every T′ ∈ G implies T = 0.
For the proof see chap. III of [JAC 79] or sec. 6 of appendix E of [COR 84].
Definition 2.11 We say a semisimple Lie algebra is compact if its Killing
form is positive definite.
The Lie algebra of a compact semisimple Lie group is a compact semisimple
Lie algebra. By choosing a suitable basis Ta we can put the Killing form of a
compact semisimple Lie algebra in the form
η_ab = δ_ab   (2.53)
Lowering the upper index of the structure constants with the Killing form, we define
f_abc ≡ f_ab^d η_dc   (2.54)
and so
f_abc = f_ab^d Tr(d(Td) d(Tc)) = −i Tr(d([Ta, Tb]) d(Tc))   (2.55)
Using the cyclic property of the trace one sees that fabc is antisymmetric with
respect to all its three indices. Notice that, in general, fabc is not a structure
constant.
For a compact semisimple Lie algebra we have from (2.53) that f_ab^c = f_abc, and therefore the commutation relations (2.23) can be written as
[Ta , Tb ] = ifabc Tc
(2.56)
2.5
the basis of the algebra su(2) of this group can be taken to be (half of) the Pauli matrices (Ti ≡ σi/2)
T1 = (1/2) ( 0 1 ; 1 0 ) ;  T2 = (1/2) ( 0 −i ; i 0 ) ;  T3 = (1/2) ( 1 0 ; 0 −1 )   (2.57)
They satisfy
[Ti, Tj] = i ε_ijk Tk   (2.58)
The matrices (2.57) define what is called the spinor (2-dimensional) representation of the algebra su(2).
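A quick numerical check of (2.57)-(2.58), together with the trace form of the spinor representation (a sketch; the helper functions are ad hoc, not a library API):

```python
# Half Pauli matrices: check [T_i, T_j] = i eps_ijk T_k and Tr(T_i T_j) = delta_ij / 2.

T = [
    [[0, 0.5], [0.5, 0]],        # T1 = sigma1 / 2
    [[0, -0.5j], [0.5j, 0]],     # T2 = sigma2 / 2
    [[0.5, 0], [0, -0.5]],       # T3 = sigma3 / 2
]

def eps(i, j, k):
    return (i - j) * (j - k) * (k - i) // 2

def mul(a, b):
    return [[sum(a[r][m] * b[m][c] for m in range(2)) for c in range(2)] for r in range(2)]

def close(a, b, tol=1e-12):
    return all(abs(a[r][c] - b[r][c]) < tol for r in range(2) for c in range(2))

for i in range(3):
    for j in range(3):
        mij, mji = mul(T[i], T[j]), mul(T[j], T[i])
        lhs = [[mij[r][c] - mji[r][c] for c in range(2)] for r in range(2)]
        rhs = [[sum(1j * eps(i, j, k) * T[k][r][c] for k in range(3))
                for c in range(2)] for r in range(2)]
        assert close(lhs, rhs)                              # commutation relations (2.58)
        trace = mij[0][0] + mij[1][1]
        assert abs(trace - (0.5 if i == j else 0)) < 1e-12  # trace form (2.62)
```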
From (2.39) we obtain the adjoint representation (3-dimensional) of su(2)
d_ij(Tk) = i ε_kji = −i ε_kij   (2.59)
and so
d(T1) = i ( 0 0 0 ; 0 0 −1 ; 0 1 0 ) ;
d(T2) = i ( 0 0 1 ; 0 0 0 ; −1 0 0 ) ;
d(T3) = i ( 0 −1 0 ; 1 0 0 ; 0 0 0 )   (2.60)
The Killing form of su(2) is therefore
η_ij = Tr(d(Ti) d(Tj)) = 2 δ_ij   (2.61)
So, it is nondegenerate. This is in agreement with theorem 2.1, since this algebra is simple. According to definition 2.11 this is a compact algebra.
The trace form (2.45) in the spinor representation is given by
η^s_ij = Tr(D(Ti) D(Tj)) = (1/2) δ_ij   (2.62)
and so it is proportional to the Killing form, η^s = η/4.
We now introduce the step operators
T± ≡ T1 ± i T2   (2.63)
Notice that, formally, these are not elements of the algebra su(2) since we have taken complex linear combinations of the generators. They are elements of the complex algebra denoted by A1. Using (2.58) one finds
[T3, T±] = ±T± ;  [T+, T−] = 2 T3   (2.64)
Therefore the generators of A1 are written as eigenvectors of T3. The eigenvalues ±1 are called the roots of su(2). We will show later that all Lie algebras
can be put in a similar form. In any representation one can check that the
operator
C = T1² + T2² + T3²
(2.65)
commutes with all generators of su(2). It is called the quadratic Casimir
operator. The basis of the representation space can always be chosen to be
eigenstates of the operators T3 and C simultaneously. These states can be
labelled by the spin j and the weight m
T3 | j, m⟩ = m | j, m⟩   (2.66)
The operators T± raise and lower the eigenvalue of T3 since, using (2.64),
T3 T± | j, m⟩ = ([T3, T±] + T± T3) | j, m⟩ = (m ± 1) T± | j, m⟩   (2.67)
We are interested in finite representations and therefore there can only exist a finite number of eigenvalues m in a given representation. Consequently there must exist a state which possesses the highest eigenvalue of T3, which we denote j
T+ | j, j⟩ = 0   (2.68)
The other states of the representation are obtained from | j, j⟩ by applying T− successively on it. Again, since the representation is finite there must exist a positive integer l such that
(T−)^{l+1} | j, j⟩ = 0   (2.69)
In terms of the step operators the Casimir operator (2.65) reads
C = (1/2)(T+ T− + T− T+) + T3²   (2.70)
 = T− T+ + T3² + T3   (2.71)
Since C commutes with all generators of the algebra, any state of the representation is an eigenstate of C with the same eigenvalue. Applying (2.71) to the highest state | j, j⟩, and using (2.68), we get
C | j, m⟩ = j (j + 1) | j, m⟩   (2.72)
From (2.70) and (2.71) we also have
T+ T− = C − T3² + T3   (2.74)
The state (T−)^l | j, j⟩ has T3 eigenvalue j − l and, because of (2.69), is annihilated by T−. Applying (2.74) to it gives
0 = ( j(j + 1) − (j − l)² + (j − l) ) (T−)^l | j, j⟩   (2.75)
and therefore l = 2j. Consequently 2j must be a non negative integer, and the eigenvalues of T3 vary from j to −j in integral steps.   (2.76)
The group SL(2), as defined in example 1.16, is the group of 2 × 2 real matrices with unit determinant. If one writes the elements close to the identity as g = exp L (without the i factor), then L is a real traceless 2 × 2 matrix. So the basis of the algebra sl(2) can be taken as
L1 = (1/2) ( 0 1 ; 1 0 ) ;  L2 = (1/2) ( 0 −1 ; 1 0 ) ;  L3 = (1/2) ( 1 0 ; 0 −1 )   (2.77)
This defines a 2-dimensional representation of sl(2) which differs from the spinor representation of su(2), given in (2.57), by a factor i in L2. One can check that they satisfy
[L1, L2] = L3 ;  [L1, L3] = L2 ;  [L2, L3] = L1   (2.78)
From these commutation relations one can obtain the adjoint representation of sl(2), using (2.39)
d(L1) = ( 0 0 0 ; 0 0 1 ; 0 1 0 ) ;
d(L2) = ( 0 0 1 ; 0 0 0 ; −1 0 0 ) ;
d(L3) = ( 0 −1 0 ; −1 0 0 ; 0 0 0 )   (2.79)
The Killing form is then
η_ij = Tr(d(Li) d(Lj)) = 2 diag(1, −1, 1)   (2.80)
sl(2) is a simple algebra and we see that its Killing form is indeed nondegenerate (see theorem 2.1). From definition 2.11 we conclude sl(2) is a
non-compact Lie algebra.
The trace form (2.45) in the 2-dimensional representation (2.77) of sl(2) is
η^{2dim}_ij = Tr(Li Lj) = (1/2) diag(1, −1, 1)   (2.81)
Similarly to the case of su(2), this trace form is proportional to the Killing form, η^{2dim} = (1/4) η.
The operators
L± ≡ L1 ∓ L2   (2.82)
according to (2.78), satisfy commutation relations identical to (2.64)
[L3, L±] = ±L± ;  [L+, L−] = 2 L3   (2.83)
and the Casimir operator can again be written as
C = (1/2)(L+ L− + L− L+) + L3²   (2.84)
The analysis we did for su(2), from eqs. (2.66) to (2.76), applies also to sl(2) and the conclusions are the same, i.e., in a finite dimensional representation of sl(2) with highest eigenvalue j of L3 the lowest eigenvalue is −j. In addition the eigenvalues of L3 can only be integers or half integers varying from −j to j in integral steps. The striking difference, however, is that the finite representations of sl(2) (where these results hold) are not unitary. On the contrary, the finite dimensional representations of su(2) are all equivalent to unitary representations. Indeed, the exponentiation of the matrices (2.57) and (2.60) (with the i factor) provides unitary matrices, while the exponentiation of (2.77) and (2.79) does not. All unitary representations of sl(2) are necessarily infinite dimensional. In fact this is true for any non compact Lie algebra.
The structures discussed in this section for the cases of su(2) and sl(2) are in fact the basic structures underlying all simple Lie algebras. The rest of this course will be dedicated to this study.
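The contrast between the two exponentiations can be illustrated with a truncated series for the matrix exponential (a sketch, assuming 30 series terms suffice at this matrix norm, which is ample here):

```python
# exp(i T1) is unitary (su(2) direction); exp(L1) is not (sl(2) direction).

def mul(a, b):
    return [[sum(a[r][m] * b[m][c] for m in range(2)) for c in range(2)] for r in range(2)]

def expm(a, terms=30):
    # truncated series exp(a) = sum_n a^n / n!
    res = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        ta = mul(term, a)
        term = [[ta[r][c] / n for c in range(2)] for r in range(2)]
        res = [[res[r][c] + term[r][c] for c in range(2)] for r in range(2)]
    return res

def is_unitary(g, tol=1e-9):
    gd = [[g[c][r].conjugate() for c in range(2)] for r in range(2)]  # g-dagger
    p = mul(gd, g)
    return all(abs(p[r][c] - (1 if r == c else 0)) < tol for r in range(2) for c in range(2))

g_su = expm([[0, 0.5j], [0.5j, 0]])    # exp(i T1), T1 = sigma1/2
g_sl = expm([[0, 0.5], [0.5, 0]])      # exp(L1),   L1 = sigma1/2 without the i

assert is_unitary(g_su)
assert not is_unitary(g_sl)            # exp(L1)†exp(L1) = exp(sigma1) != 1
```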
2.6
We now start the study of the features which are common to all semisimple
Lie algebras. These features are in fact a generalization of the properties of
the algebra of angular momentum discussed in section 2.5. We will be mainly
interested in compact semisimple algebras although several results also apply
to the case of non-compact Lie algebras.
Theorem 2.2 Given a subalgebra H of a compact semisimple Lie algebra G we can write
G = H + P   (2.85)
where P is the orthogonal complement of H with respect to the Killing form, and
[H, P] ⊂ P   (2.86)
Proof By definition Tr(P H) = 0. Using the cyclic property of the trace we have, for h1, h2 ∈ H and p ∈ P,
Tr([h1, p] h2) = −Tr(p [h1, h2]) = 0   (2.88)
since [h1, h2] ∈ H. So [H, P] is orthogonal to H, and therefore
[H, P] ⊂ P.   (2.89)
□
This theorem does not apply to non compact algebras because the trace form does not provide a Euclidean-type metric, i.e. there can exist null vectors which are orthogonal to themselves. As an example consider sl(2).
Example 2.5 Consider the subalgebra H of sl(2) generated by (L1 + L2) (see section 2.5). Its complement P is generated by (L1 − L2) and L3. However this is not an orthogonal complement since, using (2.80),
Tr(d(L1 + L2) d(L1 − L2)) = 4   (2.90)
Using (2.78) one finds
[L1 + L2, L3] = L1 + L2 ;  [L1 + L2, L1 − L2] = −2 L3   (2.92)
So
[H, P] ⊂ H + P   (2.93)
and not [H, P] ⊂ P, as in theorem 2.2. Notice that P is a subalgebra too, since
[L3, L1 − L2] = L1 − L2   (2.94)
Proof Using the definition (2.31) of the adjoint representation and the invariance property (2.48) of η_D(T, T′) we have
η_D(Ta, Tb) = η_D(g Ta g^{−1}, g Tb g^{−1})
 = Tr(D(Tc d^c_a(g)) D(Td d^d_b(g)))
 = (dᵀ)_a^c(g) η_D(Tc, Td) d^d_b(g)
 = (dᵀ η_D d)_{ab}   (2.100)
so the matrix η_D of the trace form satisfies η_D = dᵀ(g) η_D d(g) for every g ∈ G. □
As we have shown, up to an overall constant, the trace form of a simple Lie algebra
is the same in all representations. We will simplify the notation from now on, and write
T r(T T 0 ) instead of D (T, T 0 ). We shall specify the representation where the trace is being
evaluated only when that is relevant.
In the adjoint representation the generators Hi of the Cartan subalgebra are represented by the matrices hi,
where we have defined
(h_i)_{mn} = i f_{imn}   (2.113)
Since these matrices commute among themselves, they can be simultaneously diagonalized by a transformation U,
with U† = U^{−1}. We shall denote by E_α the new basis of the subspace orthogonal to the Cartan subalgebra. The indices α stand for the eigenvalues of the
matrices h_i, i.e.
[Hi, E_α] = α_i E_α   (2.116)
The matrices h_i are hermitian,
h_i† = h_i   (2.117)
and one can show that for a semisimple Lie algebra the subspace associated to each root α is one dimensional,
dim G_α = 1   (2.121)
and consequently the roots are nondegenerate. So, there are no two step operators E_α and E′_α corresponding to the same root α. Therefore for a semisimple Lie algebra one has
dim G − rank G = number of roots
Using the Jacobi identity and the commutation relations (2.116) we have that if α and β are roots then
[Hi, [E_α, E_β]] = −[E_α, [E_β, Hi]] − [E_β, [Hi, E_α]] = (α_i + β_i) [E_α, E_β]   (2.122)
Since the algebra is closed under the commutator we have that [E_α, E_β] must be an element of the algebra. We have then three possibilities:
1. α + β is a root of the algebra, and then [E_α, E_β] ∝ E_{α+β}
2. α + β is not a root, and then [E_α, E_β] = 0
3. α + β = 0, and consequently [E_α, E_β] must be an element of the Cartan subalgebra since it commutes with all Hi.
Since in a semisimple Lie algebra the roots are nondegenerate (see (2.121)), we conclude from (2.122) that 2α is never a root.
We then see that the knowledge of the roots of the algebra provides all the information about the commutation relations and consequently about the structure of the algebra. From what we have learned so far, we can write the commutation relations of a semisimple Lie algebra G as
[Hi, Hj] = 0   (2.123)
[Hi, E_α] = α_i E_α   (2.124)
[E_α, E_β] = N_{αβ} E_{α+β}  if α + β is a root
           = α·H             if α + β = 0
           = 0               otherwise   (2.125)
Using the invariance of the Killing form, Tr([Hi, E_α] E_β) = −Tr(E_α [Hi, E_β]), one gets
(α_i + β_i) Tr(E_α E_β) = 0   (2.127)
and so the step operators are orthogonal unless they have equal and opposite roots.
In particular E_α is orthogonal to itself. If it were orthogonal to all other generators, the Killing form would have vanishing determinant and the algebra would not be semisimple. Therefore for semisimple algebras if α is a root then −α must also be a root, and Tr(E_α E_{−α}) ≠ 0. The value of Tr(E_α E_{−α}) is connected to the structure constant of the second relation in (2.125). We know that [E_α, E_{−α}] must be an element of the Cartan subalgebra. Therefore we write
[E_α, E_{−α}] = x_i Hi   (2.128)
Taking the trace with Hj and using the invariance of the Killing form,
x_j = Tr([E_α, E_{−α}] Hj) = Tr([Hj, E_α] E_{−α}) = α_j Tr(E_α E_{−α})   (2.129)
In addition, taking the trace of (2.124) with Hj and using the invariance of the Killing form, one gets
α_i Tr(Hj E_α) = 0   (2.132)
and so, since by assumption α is a root and therefore different from zero, we get
Tr(Hi E_α) = 0   (2.133)
From the above results and (2.108) we see that we can normalize the Cartan subalgebra generators Hi and the step operators E_α such that the Killing form becomes
Tr(Hi Hj) = δ_ij ;  i, j = 1, 2, ..., rank G
Tr(Hi E_α) = 0
Tr(E_α E_β) = (2/α²) δ_{α+β,0}   (2.134)
This is the usual normalization of the Weyl-Cartan basis.
Notice that the linear combinations (E_α ± E_{−α}) diagonalize the Killing form (2.134). By taking real linear combinations of Hi, (E_α + E_{−α}) and i(E_α − E_{−α}) one obtains a compact algebra, since the eigenvalues of the Killing form are all of the same sign. On the other hand, if one takes real linear combinations of Hi, (E_α + E_{−α}) and (E_α − E_{−α}) one obtains a non compact algebra.
Example 2.6 In section 2.5 we have discussed the algebra of the group SU (2).
In that case the Cartan subalgebra is generated by T3 only. The step operators
are T+ and T− corresponding to the roots +1 and −1 respectively. So the rank
of SU (2) is one. We can represent these roots by the diagram 2.1
2.7
The algebra su(3) is generated by the Gell-Mann matrices
λ1 = ( 0 1 0 ; 1 0 0 ; 0 0 0 ) ;  λ2 = ( 0 −i 0 ; i 0 0 ; 0 0 0 ) ;
λ3 = ( 1 0 0 ; 0 −1 0 ; 0 0 0 ) ;  λ4 = ( 0 0 1 ; 0 0 0 ; 1 0 0 ) ;
λ5 = ( 0 0 −i ; 0 0 0 ; i 0 0 ) ;  λ6 = ( 0 0 0 ; 0 0 1 ; 0 1 0 ) ;
λ7 = ( 0 0 0 ; 0 0 −i ; 0 i 0 ) ;  λ8 = (1/√3) ( 1 0 0 ; 0 1 0 ; 0 0 −2 )   (2.135)
The generators are taken as
Ti = λi/2   (2.136)
and satisfy
[Ti, Tj] = i f_ijk Tk   (2.137)
where the structure constants f_ijk are completely antisymmetric (see (2.56)) and are given in table 2.1. The diagonal matrices λ3 and λ8 are the generators
i j k   f_ijk
1 2 3   1
1 4 7   1/2
1 5 6   −1/2
2 4 6   1/2
2 5 7   1/2
3 4 5   1/2
3 6 7   −1/2
4 5 8   √3/2
6 7 8   √3/2
Table 2.1: The non-vanishing structure constants f_ijk of su(3)
of the Cartan subalgebra. We then define the Cartan-Weyl basis
H1 = (1/√2) λ3 ;  H2 = (1/√2) λ8 ;
E1 = (1/2)(λ1 + iλ2) ;  E2 = (1/2)(λ6 + iλ7) ;  E3 = (1/2)(λ4 + iλ5)   (2.138)
together with E_{−m} = E_m†, m = 1, 2, 3. So they satisfy
Tr(Hi Hj) = δ_ij ;  Tr(E_m E_{−n}) = δ_mn   (2.139)
In addition,
[H1, E1] = √2 E1 ;  [H2, E1] = 0 ;
[H1, E2] = −(√2/2) E2 ;  [H2, E2] = (√6/2) E2 ;
[H1, E3] = (√2/2) E3 ;  [H2, E3] = (√6/2) E3   (2.140)
Therefore the roots of su(3) are
α1 = (√2, 0) ;  α2 = (−√2/2, √6/2) ;  α3 = (√2/2, √6/2)   (2.141)
together with −α1, −α2 and −α3. These six roots form a regular hexagon on the plane.
and also
[E1, E−1] = √2 H1
[E2, E−2] = −(√2/2) H1 + (√6/2) H2
[E3, E−3] = (√2/2) H1 + (√6/2) H2   (2.143)
Whenever the sum of two roots is a root of the diagram we know, from (2.125),
that the corresponding step operators do not commute. One can check that
[E1, E2] = E3 ;  [E−1, E3] = E2 ;  [E3, E−2] = E1   (2.144)
We have seen that the algebra su(3) is generated by real linear combinations of the Gell-Mann matrices (2.135), or equivalently of the matrices Hi, i = 1, 2, (E_m + E_{−m}) and i(E_m − E_{−m}), m = 1, 2, 3. These are hermitian matrices. If one takes real linear combinations of Hi, (E_m + E_{−m}) and (E_m − E_{−m}) instead, one obtains the algebra sl(3), which is not compact. This is very similar to the relation between su(2) and sl(2) which we saw in section 2.5. This generalizes, in fact, to all su(N) and sl(N).
2.8
To each root α of a semisimple Lie algebra G we can associate an sl(2) (or su(2)) subalgebra generated by
H_α ≡ (2/α²) α·H ;  E_α ;  E_{−α}   (2.145)
Indeed, using (2.123)-(2.125) and (2.134),
[H_α, E_{±α}] = ±2 E_{±α} ;  [E_α, E_{−α}] = H_α   (2.146)
In addition, for any root β,
[H_α, E_β] = (2 α·β/α²) E_β   (2.147)
Since H_α plays the role of 2T3 in (2.64), its eigenvalues in any finite dimensional representation must be integers.
2α·β/β²   2α·β/α²   θ      α²/β²
0          0         π/2    undetermined
1          1         π/3    1
−1         −1        2π/3   1
2          1         π/4    2
−2         −1        3π/4   2
3          1         π/6    3
−3         −1        5π/6   3
Table 2.2: The possible scalar products, angles and ratios of squared lengths for the roots
This implies that
2 α·β / α² = integer   (2.148)
for any roots α and β. This result is crucial in the study of the structure of semisimple Lie algebras. In order to satisfy this condition the roots must have some very special properties. From the Schwarz inequality we get (the roots live in a Euclidean space, since they inherit the scalar product from the Killing form of G restricted to the Cartan subalgebra, α·β ≡ Tr(α·H β·H) = Σ_{i=1}^{rank G} α_i β_i)
α·β = |α| |β| cos θ ≤ |α| |β|   (2.149)
Therefore, writing m = 2α·β/β² and n = 2α·β/α²,
m n = 4 cos² θ ≤ 4   (2.150)
and so for non-proportional roots the integer m n can only take the values 0, 1, 2, 3.   (2.151)
This condition is very restrictive and from it we get that the possible values of the scalar products, angles and ratios of squared lengths between any two roots are those given in table 2.2. For the case of α being parallel or anti-parallel to β we have cos θ = ±1 and consequently m n = 4. In this case the possible values of m and n are
1. 2α·β/β² = ±2 and 2α·β/α² = ±2
2. 2α·β/β² = ±1 and 2α·β/α² = ±4
3. 2α·β/β² = ±4 and 2α·β/α² = ±1
In case 1 we have that β = α, which is trivial, or β = −α which is a fact discussed earlier, i.e., to every root α there corresponds a root −α in a semisimple Lie algebra. In case 2 we have β = ±2α, which is impossible to occur in a semisimple Lie algebra. In (2.121) we have seen that dim G_α = 1 and therefore there exists only one step operator corresponding to a root α. From (2.122) we see that 2α or −2α can not be roots since [E_α, E_α] = [E_{−α}, E_{−α}] = 0. Case 3 is similar to 2. Therefore in a semisimple Lie algebra the only roots which are multiples of α are ±α.
Notice that there are only three possible values for the ratio of squared lengths of roots, namely 1, 2 and 3 (there are five if one considers the reciprocals 1/2 and 1/3). However for a given simple Lie algebra, where there are no disjoint, mutually orthogonal sets of roots, there can occur only two different lengths of roots. The reason is that if α, β and γ are roots of a simple Lie algebra with β²/α² = 2 and γ²/α² = 3, then it follows that γ²/β² = 3/2, and this is not an allowed value for the ratio of two roots (see table 2.2).
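The angle column of table 2.2 can be reproduced by enumerating the integer pairs (m, n) = (2α·β/β², 2α·β/α²) with m n = 4 cos²θ ≤ 3; a brute-force sketch:

```python
# Enumerate the allowed angles between non-proportional roots from the
# condition that m = 2 a.b/b^2 and n = 2 a.b/a^2 are integers with
# m*n = 4 cos^2(theta) in {0, 1, 2, 3}.
import math

angles = set()
for m in range(-3, 4):
    for n in range(-3, 4):
        # m and n have the same sign as a.b, so they vanish together
        if m * n in (0, 1, 2, 3) and (m == 0) == (n == 0):
            cos = (1 if m > 0 else -1 if m < 0 else 0) * math.sqrt(m * n) / 2
            angles.add(round(math.degrees(math.acos(cos))))

print(sorted(angles))   # -> [30, 45, 60, 90, 120, 135, 150]
```

These are exactly the angles π/6, π/4, π/3, π/2, 2π/3, 3π/4 and 5π/6 of table 2.2.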
2.9
In section 2.8 we have shown that to each pair of roots α and −α of a semisimple Lie algebra we can construct an sl(2) (or su(2)) subalgebra generated by the operators H_α, E_α and E_{−α} (see eq. (2.145)). We now define the hermitian operators:
T1(α) = (1/2)(E_α + E_{−α})   (2.152)
T2(α) = (1/2i)(E_α − E_{−α})   (2.153)
The operator T2(α) is the generator of rotations about the 2-axis of this subalgebra, and a rotation by π is generated by the element
S_α = exp(iπ T2(α))   (2.154)
Acting with S_α on the Cartan subalgebra one finds
S_α x·H S_α^{−1} = x·H + (x·α/α²) α·H (cos π − 1)
 = x_i (δ_ij − 2 α_i α_j/α²) H_j
 = σ_α(x)·H   (2.155)
where we have defined
σ_α(x) ≡ x − (2 x·α/α²) α   (2.156)
which is the reflection of x in the hyperplane perpendicular to α. The transformations σ_α are called Weyl reflections.
We now want to show that if α and β are roots of a given Lie algebra G, then σ_α(β) is also a root. Let us introduce the operator
E′_β ≡ S_α E_β S_α^{−1}   (2.157)
Then, using (2.155),
[σ_α(x)·H, E′_β] = [S_α x·H S_α^{−1}, S_α E_β S_α^{−1}]
 = S_α [x·H, E_β] S_α^{−1}
 = x·β S_α E_β S_α^{−1}   (2.159-2.161)
and so
[σ_α(x)·H, E′_β] = x·β E′_β   (2.162)
Replacing x by σ_α(x), and using σ_α² = 1 together with the invariance of the scalar product under σ_α, we get [x·H, E′_β] = (σ_α(β)·x) E′_β. Therefore E′_β is the step operator corresponding to the root σ_α(β), and so σ_α(β) is indeed a root.
Figure 2.3: The planes orthogonal to the roots of A2 (SU (3) or SL(3))
For the algebra su(3) the Weyl reflections act on the roots as
σ_{α1} :  α1 → −α1 ;  α2 → α3 ;  α3 → α2
σ_{α2} :  α2 → −α2 ;  α1 → α3 ;  α3 → α1   (2.165)
Definition 2.15 The Weyl group of a Lie algebra, or of its root system, is
the finite discrete group generated by the Weyl reflections.
From the considerations above we see that the Weyl group leaves invariant
the root system. However it does not contain all the symmetries of the root
system. The inversion x → −x is certainly a symmetry of the root system of any semisimple Lie algebra but, in general, it is not an element of the Weyl group. In the case of su(3) discussed in example 2.7 the inversion can not be written in terms of reflections. In addition, the root diagram of su(3) is invariant under rotations of π/3, and this operation is not an element of the Weyl group of su(3).
2 α·β / α² is an integer
Figure 2.5: The Weyl chambers of A1 (su(2), so(3) or sl(2))
Example 2.8 The root diagram shown in figure 2.4 is made of two orthogonal diagrams. Since each one is the diagram of an su(2) algebra we conclude, from the discussion above, that it corresponds to the algebra su(2) ⊕ su(2). Remember that the ratio of the squared lengths of the orthogonal roots is undetermined in this case (see table 2.2).
2.10
Figure 2.6: The root diagram and Fundamental Weyl Chamber of su(3)

Theorem 2.5 Let α and β be two non proportional roots. If α·β > 0 then α − β is a root, and if α·β < 0 then α + β is a root.
Proof Suppose α·β > 0. From table 2.2 we see that at least one of the numbers 2α·β/α² or 2α·β/β² is equal to 1. Suppose 2α·β/β² = 1. Then
σ_β(α) = α − (2α·β/β²) β = α − β   (2.166)
So, from the invariance of the root system under the Weyl group, α − β is also a root, as well as β − α. The proof for the case α·β < 0 is similar. □
Theorem 2.6 Let α and β be distinct simple roots. Then α − β is not a root and α·β ≤ 0.
Proof Suppose α − β is a root. If it is positive we write α = (α − β) + β, and if it is negative we write β = α + (β − α). In both cases we get a contradiction to the fact that α and β are simple. Therefore α − β can not be a root. From theorem 2.5 we conclude α·β can not be positive. □
Theorem 2.7 Let α1, α2, ..., αr be the set of all simple roots of a semisimple Lie algebra G. Then r = rank G and each root α of G can be written as
α = Σ_{a=1}^{r} n_a α_a   (2.167)
where n_a are integers, and they are positive or zero if α is a positive root and negative or zero if α is a negative root.
Proof Suppose the simple roots were linearly dependent. Denote by x_a and y_b the positive and negative coefficients, respectively, of a vanishing linear combination of the simple roots. Then write
Σ_{a=1}^{s} x_a α_a = Σ_{b=s+1}^{r} y_b α_b ≡ v   (2.168)
with x_a, y_b ≥ 0. From theorem 2.6,
v² = Σ_{a,b} x_a y_b α_a·α_b ≤ 0   (2.169)
Since v is a vector in a Euclidean space it follows that the only possibility is v² = 0, and so v = 0. But this implies x_a = y_b = 0, and consequently the simple roots must be linearly independent. Now let α be a positive root. If it is not simple then α = β + γ with β and γ both positive. If β and/or γ are not simple we can write them as the sum of two positive roots. Notice that α can not appear in the expansion of β and/or γ in terms of two positive roots, since if x is a vector of the Fundamental Weyl Chamber we have x·α = x·β + x·γ. Since they are all positive roots we have x·α > x·β and x·α > x·γ. Therefore β or γ can not be written as α + δ with δ a positive root. For the same reason β and γ will not appear in the expansion of any further root appearing in this process. Thus, we can continue such process until α is written as a sum of simple roots, i.e. α = Σ_{a=1}^{r} n_a α_a with each n_a being zero or a positive integer. Since, for semisimple Lie algebras, the roots come in pairs (α and −α) it follows that the negative roots are written in terms of the simple roots in the same way, with n_a being zero or negative integers. We then see that the set of simple roots spans the root space. Since they are linearly independent, they form a basis and consequently r = rank G. □
2.11
In order to define positive and negative roots and then simple roots we have
chosen one particular Weyl Chamber to play a special role. This was called the
Fundamental Weyl Chamber. However any Weyl Chamber can play such role
since they are all equivalent. As we have seen the Weyl group transforms one
Weyl Chamber into another. In fact, one can show (see pag. 51 of [HUM 72])
that there exists one and only one element of the Weyl group which takes one
Weyl Chamber into any other.
By changing the choice of the fundamental Weyl Chamber one changes the set of simple roots. This implies that the choices of simple roots are related by Weyl reflections. From figure 2.6 we see that in the case of SU(3) any of the pairs of roots (α1, α2), (α3, −α1), (−α2, α3), (−α1, −α2), (−α3, α1), (α2, −α3) could be taken as the simple roots. The common features in these pairs are the angle between the roots and the ratio of their lengths (in the case of SU(3) this is trivial since all roots have the same length, but in other cases it is not).
Therefore the important information about the simple roots can be encoded in their scalar products. For this reason we introduce an r × r matrix (r = rank G) as
K_ab ≡ 2 α_a·α_b / α_b²   (2.170)
(a, b = 1, 2, ..., rank G) which is called the Cartan matrix of the Lie algebra. As we will see it contains all the relevant information about the structure of the algebra G. Let us see some of its properties:
1. It provides the angle between any two simple roots, since
K_ab K_ba = 4 (α_a·α_b)² / (α_a² α_b²) = 4 cos² θ_ab   (2.171)
2. The Cartan matrix gives the ratio of the lengths of any two simple roots, since
K_ab / K_ba = α_a² / α_b²   (2.173)
3. The diagonal elements are always K_aa = 2 (no sum).   (2.174)
Example 2.12 The Cartan matrix of su(2) ⊕ su(2), whose simple roots are orthogonal, is
K = ( 2 0 ; 0 2 )   (2.176)
Example 2.13 From figure 2.6 we see that the Cartan matrix of A2 (su(3) or sl(3)) is
K = ( 2 −1 ; −1 2 )   (2.177)
Figure 2.7: The root diagram and Fundamental Weyl chamber of so(5) (or sp(2))
Example 2.14 The algebra of SO(5) has dimension 10 and rank 2. So it has 8 roots. Its root diagram is shown in figure 2.7. The Fundamental Weyl Chamber is the shaded region. Notice that all roots lie on the hyperplanes perpendicular to the roots. The positive roots are α1, α2, α3 and α4 as shown on the diagram. All the others are negative. The simple roots are α1 and α2, the ratio of their squared lengths is 2, and the angle between them is 3π/4. The Cartan matrix of so(5) is
K = ( 2 −1 ; −2 2 )   (2.178)
Example 2.15 The last simple Lie algebra of rank 2 is the exceptional algebra G2. Its root diagram is shown in figure 2.8. It has 12 roots and therefore dimension 14. The Fundamental Weyl Chamber is the shaded region. The positive roots are the ones labelled from α1 to α6 on the diagram. The simple roots are α1 and α2. The Cartan matrix is given by
K = ( 2 −1 ; −3 2 )   (2.179)
We have seen that the relevant information contained in the Cartan matrix is given by its off-diagonal elements. We have also seen that if K_ab ≠ 0 (a ≠ b) then one of K_ab or K_ba is necessarily equal to −1. Therefore the information of the off-diagonal elements can be given by the positive integers K_ab K_ba (no sum in
Figure 2.8: The root diagram and Fundamental Weyl chamber of G2
2.12
Root strings
We have shown in theorem 2.5 that if α and β are non-proportional roots then α + β is a root whenever α·β < 0, and α − β is a root whenever α·β > 0. We can use this result further to see if β + mα or β − nα (for m, n positive integers) are roots. In this way we can obtain a set of roots forming a string. We then come to the concept of the α-root string through β. Let p be the largest positive integer for which β + pα is a root, and let q be the largest positive integer for which β − qα is a root. We will show that the set of vectors

β + pα ; β + (p−1)α ; ... β + α ; β ; β − α ; ... β − qα    (2.181)

is unbroken, i.e., that all of its members are roots. Suppose the string had a gap: then there would exist integers r and s, with −q ≤ r, s ≤ p and s ≥ r + 2, such that β + rα and β + sα are roots while β + (r+1)α and β + (s−1)α are not. From theorem 2.5 this requires

α·(β + rα) ≥ 0    (2.182)

α·(β + sα) ≤ 0    (2.183)

Subtracting (2.183) from (2.182) we get

(r − s) α² ≥ 0    (2.184)

and therefore

s ≤ r    (2.185)

contradicting s ≥ r + 2. Therefore all vectors in (2.181) are indeed roots. Moreover, the image
of β + pα under the Weyl reflection σ_α has to be β − qα, and vice versa, since they are the roots that are most distant from the hyperplane perpendicular to α. We then have

σ_α(β + pα) = β + pα − (2 α·(β + pα)/α²) α = β − qα    (2.186)

and therefore

q − p = 2 α·β / α²    (2.187)

Since the only possible values of 2α·β/α² are 0, ±1, ±2 and ±3 (see table 2.2), we have

q − p = 0, ±1, ±2, ±3    (2.188)

and so no root string can contain more than four roots. In particular, for α = α_a and β = α_b distinct simple roots, β − α is not a root, so q = 0 and

p = −2 α_b·α_a / α_a² = −K_ba    (2.189)

Therefore the string contains only β itself (p = q = 0) for α and β simple roots with α·β = 0. We can read this result from the Dynkin diagram since, if two points are not linked, the corresponding simple roots are orthogonal.
Example 2.17 For the algebra of SU(3) we see from the diagram shown in figure 2.6 that the α1-root string through α2 contains only two roots, namely α2 and α3 = α2 + α1.
Example 2.18 From the root diagram shown in figure 2.7 we see that, for the algebra of SO(5), the α1-root string through α2 contains three roots: α2, α3 = α1 + α2, and α4 = α2 + 2α1.
Example 2.19 The algebra G2 is the only simple Lie algebra which can have root strings with four roots. From the diagram shown in figure 2.8 we see that the α1-root string through α2 contains the roots α2, α3 = α2 + α1, α5 = α2 + 2α1 and α6 = α2 + 3α1.
2.13
We now explain how one can obtain, from the Dynkin diagram of a Lie algebra, the corresponding root system and then the commutation relations. The fact that this is possible at all demonstrates how much information is encoded in the Dynkin diagram.
We start by introducing the concept of the height of a root. In theorem 2.7 we have shown that any root can be written as a linear combination of the simple roots with integer coefficients all of the same sign (see eq. (2.167)). The height of a root is the sum of these integer coefficients, i.e.

h(α) ≡ Σ_{a=1}^{rank G} n_a    (2.190)

where the n_a are given by (2.167). The only roots of height one are the simple roots. This definition classifies the roots according to a hierarchy. We can reconstruct the root system of a Lie algebra from its Dynkin diagram starting from the roots of lowest height, as we now explain.
Given the Dynkin diagram we can easily construct the Cartan matrix. We know that the diagonal elements are always 2. The off-diagonal elements are zero whenever the corresponding points (simple roots) of the diagram are not linked. When they are linked, K_ab (or K_ba) equals -1 and K_ba (or K_ab) equals minus the number of links between those points.
Example 2.20 The Dynkin diagram of SO(7) is given in figure 2.10. We see that the simple root α3 (according to the rules of section 2.11) has a length smaller than that of the other two. So we have K_23 = -2 and K_32 = -1. Since the roots α1 and α2 have the same length, we have K_12 = K_21 = -1. K_13 and K_31 are zero because there is no link between the roots α1 and α3. Therefore

K = [  2  -1   0 ]
    [ -1   2  -2 ]
    [  0  -1   2 ]    (2.191)
Once the Cartan matrix has been determined from the Dynkin diagram, one can obtain all the roots of the algebra from it. We are interested in semisimple Lie algebras. Therefore, since in such a case the roots come in pairs α and −α, we have to find just the positive roots. We now give an algorithm for determining the roots of a given height from those of smaller heights. The steps are:

1. The roots of height one are just the simple roots α_a.

2. Any root α(l) of height l can be written as

α(l) = Σ_{a=1}^{rank G} n_a α_a    (2.192)

3. For each simple root α_b, the α_b-root string through α(l) satisfies (see (2.187))

q − p = 2 α(l)·α_b / α_b² = Σ_{a=1}^{rank G} n_a K_ab    (2.193)

where p and q are the highest positive integers such that α(l) + pα_b and α(l) − qα_b are roots. The integer q can be determined by looking at the set of roots of height smaller than l (which have already been determined) and checking what is the root of smallest height of the form α(l) − mα_b. One then finds p from (2.193). If p does not vanish, α(l) + α_b is a root. Notice that if p ≥ 2 one also determines roots of height greater than l + 1. By applying this procedure using all simple roots and all roots of height l, one determines all roots of height l + 1.

4. The process finishes when no root of a given height l + 1 is found. That is because there cannot exist roots of height l + 2 if there are no roots of height l + 1.
Therefore we have shown that the root system of a Lie algebra can be determined from its Dynkin diagram. In some cases it is more practical to determine the root system using the Weyl reflections through hyperplanes perpendicular to the simple roots.

The root of highest height is called the highest root of the algebra, and it is generally denoted ψ. For simple Lie algebras the highest root is unique. The integer h(ψ) + 1 = Σ_{a=1}^{rank G} m_a + 1, where ψ = Σ_{a=1}^{rank G} m_a α_a, is called the Coxeter number of the algebra.
Example 2.21 In example 2.20 we have determined the Cartan matrix of SO(7) from its Dynkin diagram. We now determine its root system following the procedure described above. The dimension of SO(7) is 21 and its rank is 3, so the number of positive roots is 9. The first three are the simple roots α1, α2 and α3. Looking at the Dynkin diagram in figure 2.10, we see that α1 + α2 and α2 + α3 are the only roots of height 2, since α1 and α3 are orthogonal. We have 2(α1 + α2)·α_a/α_a² = K_1a + K_2a which, from (2.191), is equal to 1 for a = 1, 2 and -2 for a = 3. Therefore, from (2.193), we get that 2α1 + α2 and α1 + 2α2 are not roots, but α1 + α2 + α3 and α1 + α2 + 2α3 are roots. Analogously we have 2(α2 + α3)·α_a/α_a² = K_2a + K_3a, which is equal to -1 for a = 1, 1 for a = 2 and 0 for a = 3. Therefore the only new root we obtain is α2 + 2α3. This exhausts the roots of height 3. One can check that the only root of height 4 is α1 + α2 + 2α3, which we have obtained before. Now 2(α1 + α2 + 2α3)·α_a/α_a² = K_1a + K_2a + 2K_3a, which is equal to 1, -1 and 2 for a = 1, 2, 3 respectively. Since it is negative for a = 2, we get that α1 + 2α2 + 2α3 is a root. This is the only root of height 5, and it is in fact the highest root of SO(7). So the Coxeter number of SO(7) is 6. Summarizing, the positive roots of SO(7) are

roots of height 1: α1 ; α2 ; α3
roots of height 2: (α1 + α2) ; (α2 + α3)
roots of height 3: (α1 + α2 + α3) ; (α2 + 2α3)
roots of height 4: (α1 + α2 + 2α3)
roots of height 5: (α1 + 2α2 + 2α3)

These could also be determined starting from the simple roots and using Weyl reflections.
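The algorithm above is easy to automate. The sketch below (Python; the function name is a hypothetical helper) builds the positive roots of SO(7) from the Cartan matrix (2.191), representing each root by its integer coefficients n_a in the simple-root basis and using (2.193) at each height:

```python
def positive_roots(K):
    """Positive roots, as tuples of coefficients in the simple-root basis,
    built height by height via q - p = sum_a n_a K_ab, eq. (2.193)."""
    r = len(K)
    roots = {tuple(1 if i == a else 0 for i in range(r)) for a in range(r)}
    frontier = set(roots)
    while frontier:
        new = set()
        for n in frontier:
            for b in range(r):
                # q: how far the root string extends below n in direction b
                q, m = 0, list(n)
                while True:
                    m[b] -= 1
                    if min(m) < 0 or tuple(m) not in roots:
                        break
                    q += 1
                p = q - sum(n[a] * K[a][b] for a in range(r))   # eq. (2.193)
                if p > 0:
                    t = tuple(c + (1 if i == b else 0) for i, c in enumerate(n))
                    if t not in roots:
                        new.add(t)
        roots |= new
        frontier = new
    return roots

K_so7 = [[2, -1, 0], [-1, 2, -2], [0, -1, 2]]
pos = positive_roots(K_so7)
print(len(pos))                         # 9 positive roots
print(max(sum(n) for n in pos) + 1)     # 6: the Coxeter number of SO(7)
```

The nine tuples returned are exactly the coefficient vectors listed in example 2.21.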
We now show how to determine the commutation relations from the root system of the algebra. We have been using the Cartan–Weyl basis introduced in (2.134). However, the commutation relations take a simpler form in the so-called Chevalley basis. In this basis the Cartan subalgebra generators are given by

H_a ≡ 2 α_a·H / α_a²    (2.194)
and they satisfy

[H_a, H_b] = 0    (2.196)
The commutation relations between the H_a and the step operators are given by (see (2.124))

[H_a, E_α] = (2 α·α_a / α_a²) E_α = K_αa E_α    (2.197)

where we have defined K_αa ≡ 2 α·α_a / α_a². Since α can be written as in (2.167), we see that K_αa is a linear combination, with integer coefficients all of the same sign, of the entries of the a-th column of the Cartan matrix:

K_αa = 2 α·α_a / α_a² = Σ_{b=1}^{rank G} n_b K_ba    (2.198)

where α = Σ_{b=1}^{rank G} n_b α_b. Notice that the factor multiplying E_α on the r.h.s. of (2.197) is an integer. In fact this is a property of the Chevalley basis: all the structure constants of the algebra in this basis are integer numbers. The commutation relations (2.197) are determined once one knows the root system of the algebra.
We now consider the commutation relations between step operators. From (2.125)

[E_α, E_β] = N_αβ E_{α+β}          if α + β is a root
           = H_α = Σ_a m_a H_a     if α + β = 0
           = 0                     otherwise    (2.199)

where the m_a are the integers in the expansion 2α/α² = Σ_{a=1}^{rank G} m_a 2α_a/α_a². The structure constants N_αβ, in the Chevalley basis, are integers and can be determined
from the root system of the algebra together with the Jacobi identity. Let us now explain how to do that.

Notice that from the antisymmetry of the Lie bracket

N_αβ = −N_βα    (2.200)

for any pair of roots α and β. The structure constants N_αβ are defined up to rescalings of the step operators. If we make the transformation

E_α → ρ_α E_α    (2.201)

keeping the Cartan subalgebra generators unchanged, then from (2.199) the structure constants N_αβ must transform as

N_αβ → (ρ_α ρ_β / ρ_{α+β}) N_αβ    (2.202)

and

ρ_α ρ_{−α} = 1    (2.203)
As we have said in section 2.9, any symmetry of the root diagram can be elevated to an automorphism of the corresponding Lie algebra. In any semisimple Lie algebra the transformation α → −α is a symmetry of the root diagram, since if α is a root so is −α. We then define the transformation σ: G → G as

σ(H_a) = −H_a ;  σ(E_α) = −E_{−α}    (2.204)

and σ² = 1. From the commutation relations (2.196), (2.197) and (2.199) one sees that such a transformation is an automorphism if

N_αβ = −(ρ_α ρ_β / ρ_{α+β}) N_{−α,−β}    (2.205)

Using the freedom to rescale the step operators as in (2.202), one sees that it is possible to satisfy (2.205) and make (2.204) an automorphism. In particular it is possible to choose all ρ_α equal to 1 and therefore

N_αβ = −N_{−α,−β}    (2.206)
Consider the α-root string through β given by (2.181). Using the Jacobi identity for the step operators E_α, E_{−α} and E_{β+nα}, where p ≥ n ≥ 1 and p is the highest integer such that β + pα is a root, we obtain from (2.199) that

N_{−α,β+nα} N_{α,β+(n−1)α} − N_{α,β+nα} N_{−α,β+(n+1)α} = 2 α·(β + nα) / α²    (2.207)

Notice that the second term on the l.h.s. of this equation vanishes when n = p, since β + (p+1)α is not a root. Adding up the equations (2.207) for n taking the values 1, 2, ..., p, the l.h.s. telescopes and, using (2.187), we obtain

N_{−α,β+α} N_{αβ} = (2 α·β/α²) p + 2 (p + (p−1) + (p−2) + ... + 1) = p(q + 1)    (2.208)

On the other hand, with the trace form normalized as Tr(E_α E_{−α}) = 2/α², its invariance gives

Tr([E_α, E_β] E_{−α−β}) = 2 N_αβ / (α+β)² = Tr(E_α [E_β, E_{−α−β}]) = 2 N_{β,−α−β} / α²    (2.209)

Using (2.200) and the automorphism property (2.206), this yields

N_{−α,β+α} = (β² / (α+β)²) N_αβ    (2.210)

Combining (2.208) and (2.210),

N_αβ² = ((α+β)² / β²) p (q + 1)    (2.211)

Repeating the argument for the lower part of the string one finds

N_{−α,β}² = ((α−β)² / β²) q (p + 1)    (2.212)
The relation (2.211) can be put in a simpler form. From (2.187) we have that (see section 25.1 of [HUM 72])

p(q + 1) (α+β)²/β² = (q + 1) [ p + p α²/β² + 2p α·β/β² ]
                   = (q + 1) [ (q + 1) − (2α·β/α² + 1)(1 − p α²/β²) ]    (2.213)
We want to show that the term (q + 1)(2α·β/α² + 1)(1 − p α²/β²) on the r.h.s. of (2.213) vanishes. We distinguish two cases:

1. In the case where α² ≥ β², we have |2α·β/α²| ≤ |2α·β/β²|. From table 2.2 we see that the possible values of 2α·β/α² are -1, 0 or 1. In the first case the factor (2α·β/α² + 1) vanishes. In the other two cases we have α·β ≥ 0, and then (α+β)² is strictly larger than both α² and β². Since we are assuming that α + β is a root, and since, as we have said at the end of section 2.8, there can be no more than two different root lengths in each component of a root system, we conclude that α² = β². For the same reason β + 2α cannot be a root, since (β + 2α)² > (β + α)², and therefore p = 1. But this implies that the factor (1 − p α²/β²) vanishes.
2. For the case α² < β², we have that (α+β)² = α² or β², since otherwise we would have three different root lengths. This forces α·β to be strictly negative. Therefore (α−β)² > β² > α², and consequently β − α is not a root, so q = 0. But |2α·β/β²| < |2α·β/α²|, and therefore 2α·β/β² = -1, 0 or 1. Since α·β < 0, we have 2α·β/β² = -1. Then from (2.187) we get p = −2α·β/α² = β²/α², so that p α²/β² = 1 and the factor (1 − p α²/β²) again vanishes.
Therefore in both cases

N_αβ² = ((α+β)² / β²) p (q + 1) = (q + 1)²    (2.214)

and so

N_αβ = ±(q + 1)    (2.215)
This shows that the structure constants N_αβ are integer numbers. From (2.196), (2.197) and (2.199) we see that all structure constants in the Chevalley basis are then integers. Writing N_αβ = ε(α,β)(q + 1), the commutation relations become

[H_a, H_b] = 0    (2.216)

[H_a, E_α] = K_αa E_α    (2.217)

[E_α, E_β] = ε(α,β)(q + 1) E_{α+β}   if α + β is a root
           = 2 α·H/α²                if α + β = 0
           = 0                       otherwise    (2.218)

where the signs ε(α,β) = ±1 are called cocycles, and q is the largest integer such that β − qα is a root. In addition, the analysis above shows that, when α + β is a root and α² ≥ β², either

2 α·β / α² = -1    (2.219)

or

α² = β²  and  2 α·β / α² = 0 or 1    (2.220)
2.14
As we have seen, the Dynkin diagram of an algebra contains all the necessary information to construct the commutation relations (2.216)-(2.218). However, that information is not enough to determine the cocycles ε(α,β) defined in (2.218). For that we need the Jacobi identity. We now explain how to use such identities to determine the cocycles. We will show that the consistency conditions imposed on the cocycles are such that they can be split into a number of sets equal to the number of positive non-simple roots. The sign of a cocycle in a given set completely determines the signs of all other cocycles of that set, but has no influence on the determination of the cocycles in the other sets. Therefore the cocycles ε(α,β) are determined by the Jacobi identities up to such "gauge" freedom in fixing independently the signs of the cocycles of different sets.
From the antisymmetry of the Lie bracket the cocycles have to satisfy

ε(α,β) = −ε(β,α)    (2.221)

and from (2.206)

ε(α,β) = −ε(−α,−β)    (2.222)
Consider three roots α, β and γ such that their sum vanishes. The Jacobi identity for their corresponding step operators yields, using (2.216)-(2.218),

0 = [[E_α, E_β], E_γ] + [[E_γ, E_α], E_β] + [[E_β, E_γ], E_α]
  = −( (q_αβ + 1) ε(α,β) 2γ·H/γ² + (q_γα + 1) ε(γ,α) 2β·H/β² + (q_βγ + 1) ε(β,γ) 2α·H/α² )    (2.223)

Since γ = −(α + β), only two of the three Cartan directions appearing here are independent, and (2.223) implies

ε(α,β) = ε(β,γ) = ε(γ,α)    (2.224)

and

(1/γ²)(q_αβ + 1) = (1/α²)(q_βγ + 1) = (1/β²)(q_γα + 1)    (2.225)
Further relations are found by considering Jacobi identities for three step operators corresponding to roots adding up to a fourth root. Such identities yield relations involving products of two cocycles. However, in many situations there are only two non-vanishing terms in the Jacobi identity. Consider three roots α, β and γ such that α + β, γ + α and α + β + γ are roots but β + γ is not a root. Then the Jacobi identity for the corresponding step operators yields

0 = [[E_α, E_β], E_γ] + [[E_γ, E_α], E_β] + [[E_β, E_γ], E_α]
  = ( (q_αβ + 1)(q_{α+β,γ} + 1) ε(α,β) ε(α+β,γ)
    + (q_γα + 1)(q_{γ+α,β} + 1) ε(γ,α) ε(γ+α,β) ) E_{α+β+γ}    (2.226)

and therefore

ε(α,β) ε(α+β,γ) = −ε(γ,α) ε(γ+α,β)    (2.227)

and

(q_αβ + 1)(q_{α+β,γ} + 1) = (q_γα + 1)(q_{γ+α,β} + 1)    (2.228)
There remains to consider the cases where the three terms in the Jacobi identity for three step operators do not vanish. This happens when we have three roots α, β and γ such that α + β, β + γ, γ + α and α + β + γ are all roots. We now classify all cases where that happens. We shall denote long roots by α, β, γ, ... and short roots by e, f, g, ... From the properties of roots discussed in section 2.8 one gets that 2α·β/α², 2α·e/α² and 2e·f/e² can only take the values 0, ±1. Let us consider the possible cases:
Consider first a long root α and a short root e such that α + e is a root. We have

(α+e)²/α² = 1 + e²/α² + 2α·e/α²

Since α + e cannot be longer than α, it follows that 2α·e/α² = -1. Therefore α + e is a short root, since (α+e)² = e². So, if α + e + β is a root, with β long, then

(α+e+β)²/β² = 1 + (α+e)²/β² + 2(α+e)·β/β²

and therefore 2(α+e)·β/β² = -1. Consequently α + β and e + β cannot be roots simultaneously, since that would imply, by the same arguments, 2α·β/β² = 2e·β/β² = -1, and hence 2(α+e)·β/β² = -2, a contradiction. So a mixture of long and short roots cannot produce three non-vanishing terms.
Consider finally the case of three short roots e, f and g (case 4). For any pair we have

(e+f)²/e² = 2 + 2e·f/e²

and therefore the possible subcases are:

(a) 2e·f/e² = -1 and (e+f)²/e² = 1

(b) 2e·f/e² = 1 and (e+f)²/e² = 3

(c) 2e·f/e² = 0 and (e+f)²/e² = 2
In section 2.8 we have seen that the possible ratios of the squared lengths of roots are 1, 2 and 3. Therefore there cannot exist roots with three different lengths in the same irreducible root system, since if β²/e² = 2 and γ²/e² = 3 then γ²/β² = 3/2, which is not an allowed ratio.
Consider the case 4.b and let g be the third short root. Then if e + g is a root we have

(e+g)²/(e+f)² = 2/3 + 2e·g/(e+f)² = 1 or 1/3

But this is impossible, since then 2e·g/(e+f)² would not be an integer. So the second case is ruled out: we cannot have e + f, e + g, f + g and e + f + g all roots.
Consider the case 4.c. If e + g is a root then

(e+g)²/(e+f)² = 1 + (1/2)(2e·g/g²) = 1 or 1/2

Therefore 2e·g/g² = 0 or -1. Similarly, if f + g is a root we get 2f·g/g² = 0 or -1. But if e + f + g is also a root then it has to be a short root, since

(e+f+g)²/(e+f)² = 3/2 + 2(e+f)·g/(e+f)²

Consequently 2(e+f)·g/(e+f)² = -1 and (e+f+g)²/(e+f)² = 1/2. It then follows that

2e·g/g² + 2f·g/g² = 2(e+f)·g/g² = −(e+f)²/g² = -2

Therefore in case 4.c we can have e + f, e + g, f + g and e + f + g all roots if e·f = 0 and 2e·g/g² = 2f·g/g² = -1.
Consider finally the case 4.a, where e + f is again a short root. If e + f + g is also a root, then

(e+f+g)²/g² = 2 + 2(e+f)·g/g² = 1 or 2

Therefore 2(e+f)·g/g² = -1 or 0, and consequently 2e·g/g² and 2f·g/g² cannot both be equal to -1. Suppose then that 2e·g/g² = 0 and 2f·g/g² = -1. Then (e+g)² = 2g², and in the algebras where case 4.a can occur (squared length ratios 1 or 3) there is no root of this length, so e + g is not a root. The remaining possibilities are treated in the same way, and one concludes that in case 4.a at least one of the
terms in the Jacobi identity for the corresponding step operators will vanish. Three non-vanishing terms are therefore possible only in case 4.c. We have

0 = [[E_e, E_f], E_g] + [[E_g, E_e], E_f] + [[E_f, E_g], E_e]
  = ( (q_ef + 1)(q_{e+f,g} + 1) ε(e,f) ε(e+f,g)
    + (q_ge + 1)(q_{g+e,f} + 1) ε(g,e) ε(g+e,f)
    + (q_fg + 1)(q_{f+g,e} + 1) ε(f,g) ε(f+g,e) ) E_{e+f+g}    (2.229)

According to the discussion in section 2.12, any root string in an algebra where the ratio of the squared lengths of the roots is 1 or 2 can have at most 3 roots. From (2.187) we see that q_ef = 1 and q_ge = q_fg = q_{e+f,g} = q_{g+e,f} = q_{f+g,e} = 0. Therefore

ε(e,f) ε(e+f,g) = ε(g,e) ε(f,g+e) = ε(f,g) ε(e,f+g)    (2.230)
2.15
The simple Lie algebras are, as we have seen, the building blocks for constructing all Lie algebras, and therefore their classification is very important. We have also seen that there exists, up to isomorphism, only one Lie algebra associated to a given Dynkin diagram. Since the Dynkin diagram of a simple Lie algebra is necessarily connected, we see that the classification of the simple algebras is equivalent to the classification of the possible connected Dynkin diagrams. We now give such a classification.
We will first look for the possible Dynkin diagrams, ignoring the arrows on them. We then define unit vectors in the direction of the simple roots as

u_a = α_a / √(α_a²)    (2.231)

Therefore each point of the diagram will be associated to a unit vector u_a, and these are all linearly independent. They satisfy

2 u_a·u_b = 2 α_a·α_b / √(α_a² α_b²) = −√(K_ab K_ba)    (2.232)

Now, from theorem 2.6 we have that u_a·u_b ≤ 0, and therefore from (2.174)

2 u_a·u_b = 0, -1, -√2, -√3    (2.233)

which corresponds to minus the square root of the number of lines joining the points a and b. We shall call a set of unit vectors satisfying (2.233) an admissible set.
admissible set.
One notices that by ommiting some a s, the remaining ones form an admissible set, which diagram is obtained from the original one by ommiting the
corresponding points and all lines attached to them. So we have the obvious
lemma.
Lemma 2.2 Any subdiagram of an admissible diagram is an admissible diagram.
Lemma 2.3 The number of pairs of vertices in a Dynkin diagram linked by
at least one line is strictly less than r, the rank of the algebra (or number of
vertices).
Proof: Consider the vector

u = Σ_{a=1}^{r} u_a    (2.234)

Since the u_a are linearly independent, u ≠ 0, and therefore

0 < u² = r + 2 Σ_{pairs} u_a·u_b    (2.235)

And from (2.233) we see that if u_a and u_b are linked, then 2u_a·u_b ≤ -1. In order to keep the inequality, we see that the number of linked pairs of points must be smaller than or equal to r − 1. □
Corollary 2.1 There are no loops in a Dynkin diagram.

Proof: If a diagram had a loop, we see from lemma 2.2 that the loop itself would be an admissible diagram. But that would violate lemma 2.3, since the number of linked pairs of vertices would be equal to the number of vertices. □

Lemma 2.4 The number of lines attached to a given vertex cannot exceed three.
Proof: Let u be the unit vector corresponding to a given vertex and let u_1, u_2, ..., u_k be the unit vectors corresponding to the vertices linked to it. Since the diagram has no loops, we must have

u_a·u_b = 0 ,  a ≠ b = 1, 2, ..., k    (2.236)

Let u_0 be a unit vector, in the subspace generated by u, u_1, ..., u_k, orthogonal to all the u_a. So we can write

u = Σ_{a=1}^{k} (u·u_a) u_a + (u·u_0) u_0    (2.237)

and then

u² = 1 = Σ_{a=1}^{k} (u·u_a)² + (u·u_0)²    (2.238)

Since the vectors are linearly independent, u·u_0 ≠ 0, and so Σ_a (u·u_a)² < 1. From (2.233), (2u·u_a)² is the number of lines joining u to u_a, so the total number of lines attached to the vertex is

Σ_{a=1}^{k} (2u·u_a)² = 4 − 4(u·u_0)² < 4    (2.239)

□
Lemma 2.5 Any chain of vertices linked to each other by single lines can be shrunk to a single point: if the vertices u_l, u_{l+1}, ..., u_{l+k} of an admissible diagram D form such a chain, then the diagram D′ obtained by replacing the chain by the single vertex

u = Σ_{a=l}^{l+k} u_a    (2.240)

is also admissible.

Proof: u is a unit vector, since

u² = (k + 1) + 2 Σ_{pairs} u_a·u_b = (k + 1) − k = 1    (2.241)

where we used 2u_a·u_b = -1 for nearest neighbours in the chain. Any other vertex v of the diagram can be linked to at most one vertex of the chain, otherwise there would be a loop. Therefore

v·u = Σ_{a=l}^{l+k} v·u_a    (2.242)

which equals

v·u_a    (2.243)

for a given a in the chain, or

v·u = 0    (2.244)

But since v and u_a belong to an admissible diagram, they satisfy (2.233). Therefore v and u also satisfy (2.233), and consequently D′ is an admissible diagram. □
Corollary 2.4 An admissible diagram cannot have subdiagrams of the form shown in figure 2.14.

The reason is that, by the preceding lemma (shrinking the chains of single links to a point), we would obtain that the diagrams shown in figure 2.15 are admissible. From lemmas 2.2 and 2.4 we see that this is impossible.
So, from the results obtained so far, we see that an admissible diagram has to have one of the forms shown in figure 2.16.

Consider the diagram B) of figure 2.16, and define the vectors

u = Σ_{a=1}^{p} a u_a ;   v = Σ_{a=1}^{q} a v_a    (2.245)

where u_1, ..., u_p and v_1, ..., v_q are the unit vectors of the two chains on either side of the double link.
Figure 2.15:
Figure 2.16:
Therefore

u² = Σ_{a=1}^{p} a² + 2 Σ_{pairs} a b u_a·u_b
   = Σ_{a=1}^{p} a² − Σ_{a=1}^{p−1} a(a+1)
   = p² − p(p−1)/2
   = p(p+1)/2    (2.246)

where we have used the fact that 2u_a·u_b = -1 for a and b nearest neighbours and 2u_a·u_b = 0 otherwise. In a similar way we obtain that

v² = q(q+1)/2    (2.247)

Since the points u_p and v_q are linked by a double line, we have

2 u_p·v_q = −√2    (2.248)

and so

u·v = p q u_p·v_q = −p q/√2    (2.249)

By the Schwarz inequality,

(u·v)² ≤ u² v²    (2.250)

which gives

p² q²/2 ≤ p(p+1) q(q+1)/4 ,  i.e.  2 p q ≤ (p+1)(q+1)    (2.251)
Since the equality cannot hold, because u and v are linearly independent, eq. (2.251) can be written as

(p − 1)(q − 1) < 2    (2.252)

There are three possibilities for p, q ≥ 1, namely:

1. p = q = 2

2. p = 1 and q any positive integer

3. q = 1 and p any positive integer
Figure 2.17:
Figure 2.18:
In the first case we have the diagram of figure 2.17, which corresponds to the exceptional Lie algebra of rank 4, denoted F4. In the other two cases we obtain the diagram of figure 2.18, which corresponds to the classical Lie algebras so(2r+1) or sp(r), depending on the direction of the arrow.
Consider now the diagram D) of figure 2.16 and define the vectors

u = Σ_{a=1}^{p−1} a u_a ;   v = Σ_{a=1}^{q−1} a v_a ;   w = Σ_{a=1}^{s−1} a w_a    (2.253)

where the three chains attached to the central vertex have p−1, q−1 and s−1 vertices respectively. In the same way as in (2.246) we obtain

u² = p(p−1)/2 ;   v² = q(q−1)/2 ;   w² = s(s−1)/2    (2.254)
The vectors u, v, w and the unit vector η of the central vertex (see diagram D) in figure 2.16) are linearly independent. Since η² = 1, we have from (2.254)

cos²(η, u) = (η·u)²/u² = ((p−1) η·u_{p−1})²/u² = (1 − 1/p)/2    (2.255)

and similarly

cos²(η, v) = (1 − 1/q)/2    (2.256)

cos²(η, w) = (1 − 1/s)/2    (2.257)
We can write η as

η = (η·u) u/u² + (η·v) v/v² + (η·w) w/w² + (η·η_0) η_0    (2.258)
Figure 2.19:
where η_0 is a unit vector in the subspace perpendicular to u, v and w. Then

η² = 1 = (η·u)²/u² + (η·v)²/v² + (η·w)²/w² + (η·η_0)²    (2.259)

Notice that (η·η_0) has to be different from zero, since u, v, w and η are linearly independent. We then get the inequality

cos²(η, u) + cos²(η, v) + cos²(η, w) < 1    (2.260)

and so, using (2.255)-(2.257),

1/p + 1/q + 1/s > 1    (2.261)
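The inequality obtained from (2.260) and (2.255)-(2.257), namely 1/p + 1/q + 1/s > 1, can be solved by a brute-force scan. A minimal sketch (Python; the cutoff is an arbitrary assumption, large enough to show the pattern):

```python
# scan 1/p + 1/q + 1/s > 1 with p >= q >= s >= 2
cutoff = 40
sols = [(p, q, s) for p in range(2, cutoff) for q in range(2, p + 1)
        for s in range(2, q + 1) if 1 / p + 1 / q + 1 / s > 1]

# one infinite family: (p, 2, 2) for any p  -> the diagrams of so(2r) (D series)
family = [t for t in sols if t[1:] == (2, 2)]
# plus finitely many exceptional solutions -> the diagrams of E6, E7 and E8
exceptional = sorted(t for t in sols if t[1:] != (2, 2))
print(exceptional)   # [(3, 3, 2), (4, 3, 2), (5, 3, 2)]
```

The three exceptional triples have p + q + s − 2 = 6, 7 and 8 vertices, matching the ranks of E6, E7 and E8.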
Figure 2.20:
Chapter 3
Representation theory
of Lie algebras
3.1
Introduction
In this chapter we shall develop further the concepts introduced in section 1.5
for group representations. The concept of a representation of a Lie algebra
is analogous to that of a group. A set of operators D1 , D2 , . . . acting on
a vector space V is a representation of a Lie algebra in the representation
space V if we can define an operation between any two of these operators such
that it reproduces the commutation relations of the Lie algebra. We will be
interested mainly in matrix representations, and the operation will be the usual commutator of matrices. In addition, we shall consider only the representations of compact Lie algebras and Lie groups, since the representation theory of non-compact Lie groups is beyond the scope of these lecture notes.
Some results on the representation theory of finite groups can be extended to the case of compact Lie groups. In some sense this is true because the volume of the group space is finite for compact Lie groups, and therefore the integration over the group elements converges. We state without proof two important results on the representation theory of compact Lie groups which are also true for finite groups:
Theorem 3.1 A finite dimensional representation of a compact Lie group is
equivalent to a unitary one.
Theorem 3.2 A unitary representation can be decomposed into unitary irreducible representations.
3.2
The notion of weights
We have defined in section 2.6 (see definition 2.12) the Cartan subalgebra of a semisimple Lie algebra as the maximal abelian subalgebra which can be diagonalized simultaneously. Therefore we can take as the basis of the representation space V the eigenstates of the Cartan subalgebra generators:

H_i |μ⟩ = μ_i |μ⟩ ,  i = 1, 2, ..., r = rank G    (3.1)

The eigenvalues μ_i are the components of a vector μ, called a weight. For any root α we then have

(2 α·H / α²) |μ⟩ = (2 α·μ / α²) |μ⟩    (3.2)

and consequently, from the representation theory of the su(2) subalgebra associated with α, we have that

2 α·μ / α² = integer    (3.3)
Any vector μ satisfying this condition is a weight, and in fact this is the only condition a weight has to satisfy. From (2.148) we see that any root is a weight, but the converse is not true. Notice that 2α·μ/μ² does not have to be an integer, and therefore table 2.2 does not apply to the weights.
A weight is called dominant if it lies in the Fundamental Weyl Chamber or on its borders. Obviously a dominant weight has a non-negative scalar product with any positive root. It is possible to find, among the dominant weights, r weights λ_a, a = 1, 2, ..., r, satisfying

2 λ_a·α_b / α_b² = δ_ab    (3.4)
In other words, we can find r dominant weights each of which is orthogonal to all simple roots except one. These weights are called fundamental weights. They play an important role in representation theory, as we will see below.
Consider now a simple root α_a and any weight μ. From (3.3) we have that

2 μ·α_a / α_a² = m_a = integer    (3.5)

Using (3.4) this can be written as

(μ − Σ_{a=1}^{r} m_a λ_a) · (2 α_b / α_b²) = 0   for any b    (3.6)

and since the simple roots span the weight space,

μ = Σ_{a=1}^{r} m_a λ_a    (3.7)
Therefore any weight can be written as a linear combination of the fundamental weights with integer coefficients. We now want to show that any vector formed by an integer linear combination of the fundamental weights is also a weight, i.e., that it satisfies condition (3.3). In order to do that we introduce the concept of the co-root α^v, which is a root divided by its squared length:

α^v ≡ α / α²    (3.8)

Since

(α^v)² = 1/α²    (3.9)

and

2 α^v·β^v / (β^v)² = 2 α·β / α²    (3.10)

one sees that the co-roots satisfy all the properties of roots and consequently are also roots. However, the co-roots of a given algebra G are the roots of another algebra G^v, called the dual algebra of G. The simply laced algebras su(N) (A_{N−1}), so(2N) (D_N), E6, E7 and E8, together with the exceptional algebras G2 and F4, are self-dual, in the sense that G = G^v. However, so(2N+1) (B_N) is the dual algebra of sp(N) (C_N) and vice versa. The Cartan
matrix of the dual algebra G^v is the transpose of the Cartan matrix of G, since

(K_ab)^v = 2 α_a^v·α_b^v / (α_b^v)² = 2 α_b·α_a / α_a² = K_ba    (3.11)
The simple co-roots are

α_a^v = α_a / α_a²    (3.12)
Any co-root can be written as a linear combination of the simple co-roots with integer coefficients all of the same sign. To show that, we observe from theorem 2.7 that

α^v = α/α² = Σ_{a=1}^{r} n_a (α_a²/α²) α_a^v    (3.13)

and from (3.4) we get

n_a = 2 λ_a·α / α_a²    (3.14)

Therefore

α^v = Σ_{a=1}^{r} (2 λ_a·α / α²) α_a^v    (3.15)

since from (3.3) we have that 2λ_a·α/α² is an integer. In addition, these integers are all of the same sign, since all the λ_a lie in the Fundamental Weyl Chamber or on its border.
Let λ be a vector defined by

λ = Σ_{a=1}^{r} k_a λ_a    (3.16)

where the λ_a are the fundamental weights and the k_a are arbitrary integers. Using (3.15) and (3.4) we get

2 λ·α/α² = 2 λ·α^v = Σ_{a,b} k_b m_a (2 λ_b·α_a / α_a²) = Σ_a m_a k_a    (3.17)

where m_a ≡ 2λ_a·α/α² is an integer. Therefore λ satisfies (3.3) for any root α, and so it is indeed a weight. The set of all weights is therefore the lattice Λ of integer linear combinations of the fundamental weights, called the weight lattice; the integer linear combinations of the simple roots form the root lattice Λ_r. The weight
lattice forms an abelian group under the addition of vectors. The root lattice is an invariant subgroup, and consequently the coset space Λ/Λ_r has the structure of a group (see section 1.4). One can show that Λ/Λ_r corresponds to the center of the covering group of the algebra whose weight lattice is Λ. We will show that all the weights of a given irreducible representation of a compact Lie algebra lie in the same coset.
Before giving some examples, we would like to discuss the relation between the simple roots and the fundamental weights, which constitute two bases for the root (or weight) space. Since any root is a weight, the simple roots can be written as integer linear combinations of the fundamental weights. Using (3.4) one gets that the integer coefficients are the entries of the Cartan matrix, i.e.

α_a = Σ_b K_ab λ_b    (3.18)

and then

λ_a = Σ_b (K^{-1})_ab α_b    (3.19)

So the fundamental weights are not, in general, integer linear combinations of the simple roots.
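Relation (3.19) makes the fundamental weights easy to compute: the rows of K^{-1} give their coefficients in the simple-root basis. A sketch using exact rational arithmetic (Python, hypothetical helper name):

```python
from fractions import Fraction

def inverse(K):
    """Gauss-Jordan inverse of an integer matrix over the rationals."""
    n = len(K)
    M = [[Fraction(K[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

# rows of K^-1 = fundamental weights in the simple-root basis, eq. (3.19)
print(inverse([[2, -1], [-1, 2]]))  # su(3): lambda_1 = (2/3, 1/3), lambda_2 = (1/3, 2/3)
print(inverse([[2, -1], [-2, 2]]))  # so(5): lambda_1 = (1, 1/2),   lambda_2 = (1, 1)
```

The first output reproduces (3.23) below, and the second the fundamental weights of so(5) used in example 3.5.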
Example 3.1 SU(2) has only one simple root α and consequently only one fundamental weight λ. Choosing a normalization such that α = 1, we have

2 λ α / α² = 1   and so   λ = 1/2    (3.20)

Therefore the weight lattice of SU(2) is formed by the integers and half-integers, and the root lattice by the integers only. Then

Λ/Λ_r = Z_2    (3.21)
Example 3.2 For SU(3) the Cartan matrix and its inverse are

K = [  2  -1 ]        K^{-1} = (1/3) [ 2  1 ]
    [ -1   2 ]                       [ 1  2 ]    (3.22)

so, from (3.19), the fundamental weights are

λ1 = (1/3)(2 α1 + α2)    λ2 = (1/3)(α1 + 2 α2)    (3.23)
In the planar realization with α1 = (1, 0) and α2 = (-1/2, √3/2), this gives

λ1 = (1/2, √3/6)    λ2 = (0, √3/3)    (3.24)
The vectors representing the fundamental weights are given in figure 3.1.
The root lattice Λ_r, generated by the simple roots α1 and α2, corresponds to the points on the intersections of the lines shown in figure 3.2. The weight lattice Λ, generated by the fundamental weights λ1 and λ2, consists of all points of Λ_r plus the centroids of the triangles, shown by circles and plus signs in figure 3.2.

The points of the weight lattice can be obtained from the origin, λ1 and λ2 by adding to them all points of the root lattice. Therefore the coset space Λ/Λ_r has three elements, which can be represented by 0, λ1 and λ2. Since λ1 + λ2 = α1 + α2 and 3λ1 = 2α1 + α2 lie in the same coset as 0, we see that Λ/Λ_r has the structure of the cyclic group Z_3, which is the center of SU(3).
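The coset structure can be checked directly in the simple-root basis, where a vector lies in the root lattice precisely when its coefficients are integers. A minimal sketch (Python):

```python
from fractions import Fraction as F

# su(3) fundamental weights in the simple-root basis, from eq. (3.23)
l1 = (F(2, 3), F(1, 3))
l2 = (F(1, 3), F(2, 3))

in_root_lattice = lambda v: all(c.denominator == 1 for c in v)
add = lambda u, v: tuple(a + b for a, b in zip(u, v))
mul = lambda k, v: tuple(k * c for c in v)

print(in_root_lattice(add(l1, l2)))   # True:  lambda_1 + lambda_2 = alpha_1 + alpha_2
print(in_root_lattice(mul(3, l1)))    # True:  3 lambda_1 = 2 alpha_1 + alpha_2
print(in_root_lattice(l1))            # False: lambda_1 represents a nontrivial coset
```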
3.3
Consider a state |μ⟩ of weight μ and the state

|μ′⟩ ≡ E_α |μ⟩    (3.25)

We have

H_i |μ′⟩ = H_i E_α |μ⟩ = (E_α H_i + [H_i, E_α]) |μ⟩ = (μ_i + α_i) E_α |μ⟩    (3.26)

Therefore |μ′⟩, when it does not vanish, is a state of weight μ + α. More generally, the state

E_{α_1} E_{α_2} ... E_{α_n} |μ⟩    (3.27)

has weight μ + α_1 + ... + α_n.
For this reason the weights of an irreducible representation differ by sums of roots, and consequently they all lie in the same coset in Λ/Λ_r. Since Λ/Λ_r is the center of the covering group, we see that the weights of an irreducible representation are associated to only one element of the center.

In a finite dimensional representation, the number of weights is finite, since it is at most the number of basis states (remember that weights can be degenerate). Therefore, by applying sequences of step operators corresponding to positive roots to a given state, we will eventually get zero. So an irreducible finite dimensional representation possesses a state |λ⟩ such that

E_α |λ⟩ = 0   for any positive root α    (3.28)

This state is called the highest weight state of the representation, and λ is the highest weight. It is possible to show that there is only one highest weight in an irrep, and only one highest weight state associated to it. That is, the highest weight is unique and non-degenerate.

All other states of the representation are obtained from the highest weight state by applying sequences of step operators corresponding to negative roots. The state defined by

|μ⟩ ≡ E_{−γ_1} E_{−γ_2} ... E_{−γ_n} |λ⟩    (3.29)

has weight λ − γ_1 − ... − γ_n, and one can check that the application of a positive step operator does not lead out of the space spanned by
states of the form (3.29). To see this, let α be a positive root and −γ_1 any of the negative roots appearing in (3.29). Then we have

E_α |μ⟩ = (E_{−γ_1} E_α + [E_α, E_{−γ_1}]) E_{−γ_2} ... E_{−γ_n} |λ⟩    (3.30)

where the commutator [E_α, E_{−γ_1}] is either zero, a step operator E_{α−γ_1}, or, when α = γ_1, proportional to

2 α·H / α²    (3.31)

Repeating this procedure, E_α is pushed through until it annihilates |λ⟩, and so E_α|μ⟩ is indeed a linear combination of states of the form (3.29).
However, we can show that the set of weights of a given representation, which is a finite subset of Λ, is invariant under the Weyl group. Consider the state defined by

|σ_α(μ)⟩ ≡ S_α |μ⟩    (3.32)

where S_α is the operator representing the Weyl reflection σ_α. For any vector x we have

x·H |σ_α(μ)⟩ = S_α (S_α^{-1} x·H S_α) |μ⟩    (3.33)
             = S_α σ_α(x)·H |μ⟩    (3.34)
             = (σ_α(x)·μ) S_α |μ⟩    (3.35)
             = (σ_α(x)·μ) |σ_α(μ)⟩    (3.36)
             = (σ_α(μ)·x) |σ_α(μ)⟩    (3.37)

since σ_α(x)·μ = x·σ_α(μ). Therefore |σ_α(μ)⟩ is a state of weight σ_α(μ).
Example 3.3 In example 3.1 we have seen that the only fundamental weight of SU(2) is λ = 1/2. Therefore the dominant weights of SU(2) are the non-negative integers and half-integers. Each one of these dominant weights corresponds to an irreducible representation of SU(2). Then we have that λ = 0 corresponds to the scalar representation, λ = 1/2 to the spinorial rep., which is the fundamental rep. of SU(2) (dim = 2), λ = 1 to the vectorial rep., which is the adjoint of SU(2) (dim = 3), and so on.
Example 3.4 In the case of SU(3) we have two fundamental representations, with highest weights λ1 and λ2 (see example 3.2). They are respectively the triplet and antitriplet representations of SU(3). The rep. with highest weight λ1 + λ2 = α3, the highest root, is the adjoint. All representations with highest weight of the form λ = n1 λ1 + n2 λ2, with n1 and n2 non-negative integers, are irreducible representations of SU(3).
3.4
In a finite dimensional representation we must have, for any weight μ and any root α, positive integers p and q such that

E_α |μ + pα⟩ = 0   and   E_{−α} |μ − qα⟩ = 0    (3.38)

where
p and q are the greatest positive integers for which μ + pα and μ − qα are weights of the representation. One can show that all vectors of the form μ + nα, with n integer and −q ≤ n ≤ p, are weights of the representation. Therefore the weights form unbroken strings, called weight strings, of the form

μ + pα ; μ + (p−1)α ; ... μ + α ; μ ; μ − α ; ... μ − qα    (3.39)
We have shown in the last section that the set of weights of a representation is invariant under the Weyl group. The effect of the action of the Weyl reflection σ_α on a weight μ is to add or subtract a multiple of the root α, since σ_α(μ) = μ − (2μ·α/α²)α and, from (3.3), 2μ·α/α² is an integer. Therefore the weight string (3.39) is invariant under the Weyl reflection σ_α. In fact, σ_α reverses the string (3.39), and consequently we have that

σ_α(μ + pα) = μ − qα = μ + pα − (2μ·α/α² + 2p) α    (3.40)

and so

2 μ·α / α² = q − p    (3.41)
This result is similar to (2.187), which was obtained for root strings. Notice, however, that the possible values of q − p are, in this case, not restricted to the values given in (2.187): q − p can, in principle, be any integer. In the case where μ is the highest weight λ of the representation, we have that p is zero if α is a positive root, and q is zero if α is negative. The relation (3.41) provides a practical way of finding the weights of the representation. In some cases it is easier to find some weights of a given representation by taking successive Weyl reflections of the highest weight. However, this method does not provide, in general, all the weights of the representation.
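Successive Weyl reflections are easy to automate. The sketch below (Python; the planar realization of the so(5) simple roots is an assumption) computes the Weyl orbit of the highest weight λ2 = α1 + α2 of the vector representation of so(5). It produces only four of the five weights, the zero weight lying in a separate orbit, which illustrates the limitation just mentioned:

```python
def weyl_orbit(weight, simple_roots):
    """Orbit of a weight under the Weyl group, generated by the simple
    reflections sigma_a(mu) = mu - (2 mu.alpha_a / alpha_a^2) alpha_a."""
    dot = lambda x, y: sum(u * v for u, v in zip(x, y))
    def reflect(x, a):
        c = 2 * dot(x, a) / dot(a, a)
        return tuple(round(xi - c * ai, 6) for xi, ai in zip(x, a))
    orbit, todo = {tuple(weight)}, [tuple(weight)]
    while todo:
        x = todo.pop()
        for a in simple_roots:
            y = reflect(x, a)
            if y not in orbit:
                orbit.add(y)
                todo.append(y)
    return orbit

so5 = [(1.0, 0.0), (-1.0, 1.0)]   # alpha_1 (short), alpha_2 (long)
lam2 = (0.0, 1.0)                 # highest weight of the 5-dimensional representation
print(sorted(weyl_orbit(lam2, so5)))   # four weights; (0, 0) is missing
```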
Once the weights are known, one has to calculate their multiplicities. There exists a formula, due to Kostant, which expresses the multiplicities directly as a sum over the elements of the Weyl group. However, it is not easy to use this formula in practice. There exists a recursive formula, called Freudenthal's formula, which is more practical.
( + ) ( + )
m () = 2
X p()
X
( + n) m ( + n)
(3.42)
>0 n=1
where
1X
2 >0
(3.43)
The first summation on the l.h.s. is over the positive roots and the second one
over all positive integers n such that + n is a weight of the representation,
and we have denoted by p () the highest value of n. By starting with m () = 1
one can use (3.43) to calculate the multiplicities of the weights from the higher
ones to the lower ones.
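As a concrete illustration, Freudenthal's formula (3.42) can be evaluated numerically. The sketch below (our own helper names; vectors expanded in the simple-root basis, with su(3) inner products α₁² = α₂² = 2, α₁·α₂ = −1) computes the multiplicity of the zero weight in the adjoint representation, for which λ = δ = α₁ + α₂ and the non-zero weights are the six roots, each with multiplicity one.

```python
from fractions import Fraction

# su(3) inner products in the simple-root basis
G = [[2, -1], [-1, 2]]

def ip(u, v):
    return sum(G[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def vadd(u, v, n=1):
    return (u[0] + n * v[0], u[1] + n * v[1])

pos_roots = [(1, 0), (0, 1), (1, 1)]
lam = (1, 1)                   # adjoint highest weight (= highest root)
delta = (1, 1)                 # half the sum of the positive roots
roots = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)]
mult = {w: 1 for w in roots}   # the roots are non-degenerate weights

mu = (0, 0)
lhs = ip(vadd(lam, delta), vadd(lam, delta)) - ip(vadd(mu, delta), vadd(mu, delta))
rhs = 0
for alpha in pos_roots:
    n = 1
    while vadd(mu, alpha, n) in mult:   # sum over n with mu + n*alpha a weight
        rhs += 2 * mult[vadd(mu, alpha, n)] * ip(vadd(mu, alpha, n), alpha)
        n += 1
m_zero = Fraction(rhs, lhs)             # multiplicity of the zero weight
```

The result, m(0) = 2, reproduces the two Cartan directions of the adjoint of su(3).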
If the states |μ⟩₁ and |μ⟩₂ have the same weight μ, i.e., μ is degenerate, then the weight σ(μ) is also degenerate and has the same multiplicity as μ. Using (3.32) we obtain that the states

|σ(μ)⟩₁ = S_σ |μ⟩₁  and  |σ(μ)⟩₂ = S_σ |μ⟩₂    (3.44)

have weight σ(μ), and their linear independence follows from the linear independence of |μ⟩₁ and |μ⟩₂. Indeed,

0 = x₁ |σ(μ)⟩₁ + x₂ |σ(μ)⟩₂ = S_σ ( x₁ |μ⟩₁ + x₂ |μ⟩₂ )    (3.45)

So, if |μ⟩₁ and |μ⟩₂ are linearly independent, one must have x₁ = x₂ = 0, and so |σ(μ)⟩₁ and |σ(μ)⟩₂ are also linearly independent. Therefore all the weights of a representation which are conjugate under the Weyl group have the same multiplicity. This fact can be used to make Freudenthal's formula more efficient in the calculation of the multiplicities.
Example 3.5 Using the results of example 2.14 we have that the Cartan matrix of so(5) and its inverse are

K = ( 2 −1 ; −2 2 ) ,  K⁻¹ = ½ ( 2 1 ; 2 2 )    (3.46)

Then, using (3.19), we get that the fundamental weights of so(5) are

λ₁ = ½ (2α₁ + α₂) ,  λ₂ = α₁ + α₂    (3.47)
We then have

2λ₁·α₂ / α₂² = 0 ,  2λ₁·(2α₁ + α₂) / (2α₁ + α₂)² = 1    (3.48)

Therefore, using (3.41) (with p = 0 since λ₁ is the highest weight), we get that the weights of the representation with highest weight λ₁ are

λ₁ ; (λ₁ − α₁) ; (λ₁ − α₁ − α₂) ; (λ₁ − 2α₁ − α₂)    (3.49)

These weights are not degenerate, and the representation has dimension 4: this is the spinor representation of so(5). In the same way one finds that the weights of the representation with highest weight λ₂ are

λ₂ ; (λ₂ − α₂) ; (λ₂ − α₁ − α₂) ; (λ₂ − 2α₁ − α₂) ; (λ₂ − 2α₁ − 2α₂)    (3.50)

Again these weights are not degenerate, and the representation has dimension 5. This is the vector representation of so(5).
Example 3.6 Consider the irrep of su(3) with highest weight μ = α₃ = α₁ + α₂, i.e., the highest positive root. Using (3.41) and performing Weyl reflections, one can check that the weights of such a rep are all the roots plus the zero weight. Since the roots are conjugate to α₃ under the Weyl group, we conclude that they are non-degenerate weights. The multiplicity of the zero weight can be calculated from Freudenthal's formula. From (3.43) we have that, in this case, δ = α₃, and so from (3.42) we get

( 4α₃² − α₃² ) m(0) = 2 ( m(α₁) α₁² + m(α₂) α₂² + m(α₃) α₃² )    (3.51)

Since all roots of su(3) have the same length and multiplicity one, this gives m(0) = 2, so the representation has dimension 8: it is the adjoint of su(3).
3.5 The weight δ

The weight δ, defined in (3.43) as half the sum of the positive roots, plays a special role. Since a simple Weyl reflection σ_a permutes the positive roots other than α_a and takes α_a into −α_a, we have

σ_a(δ) = δ − α_a    (3.52)

and consequently

2δ·α_a / α_a² = 1    (3.53)

for every simple root α_a. Expanding δ in the basis of fundamental weights,

δ = Σ_{b=1}^{r} x_b λ_b    (3.54)

and using (3.19), the coefficients are x_b = 2δ·α_b/α_b², so from (3.53)

x_b = 1 ,  b = 1, 2, … r    (3.55)

and therefore

δ = Σ_{b=1}^{r} λ_b    (3.56)
3.6 Casimir operators

Consider a tensor κ^{s₁ s₂ … s_n} which is invariant under the adjoint representation of the group, i.e.,

κ^{s₁ s₂ … s_n} = d(g)^{s₁}_{t₁} d(g)^{s₂}_{t₂} … d(g)^{s_n}_{t_n} κ^{t₁ t₂ … t_n}    (3.57)

for any g ∈ G, and where d(g)^{s}_{t} is the matrix representing g in the adjoint representation. Given a representation D of the corresponding Lie algebra, we define the operator

C_n^{(D)} ≡ κ^{s₁ s₂ … s_n} D(T_{s₁}) D(T_{s₂}) … D(T_{s_n})    (3.58)

Notice that such an operator can only be defined on a given representation, since it involves the product of operators and not Lie brackets of the generators.
We then have, using D(g) D(T_s) D(g)⁻¹ = d(g)^{t}_{s} D(T_t) and the invariance (3.57),

D(g) C_n^{(D)} D(g)⁻¹ = κ^{s₁ … s_n} d(g)^{t₁}_{s₁} … d(g)^{t_n}_{s_n} D(T_{t₁}) … D(T_{t_n}) = C_n^{(D)}    (3.59)

So, we have shown that C_n^{(D)} commutes with any matrix of the representation,

[ C_n^{(D)} , D(g) ] = 0    (3.60)
Any product of two operators appearing in (3.58) can be split into symmetric and antisymmetric parts,

D(T_{s_j}) D(T_{s_{j+1}}) = ½ {D(T_{s_j}), D(T_{s_{j+1}})} + ½ [D(T_{s_j}), D(T_{s_{j+1}})]
 = ½ {D(T_{s_j}), D(T_{s_{j+1}})} + ½ f_{s_j s_{j+1}}{}^{t} D(T_t)    (3.61)

and so C_n^{(D)} will also have terms involving the product of (n−1) operators. Therefore, by totally symmetrizing the tensor κ^{s₁ s₂ … s_n}, we get operators C_n^{(D)} which are monomials of order n in the D(T_s)'s. Such operators are called Casimir operators, and n is called their order. They play an important role in representation theory. From Schur's lemma 1.1 it follows that in an irreducible representation the Casimir operators have to be proportional to the identity.

    SU(r+1)    2, 3, 4, …, r+1
    SO(2r+1)   2, 4, 6, …, 2r
    Sp(r)      2, 4, 6, …, 2r
    SO(2r)     2, 4, 6, …, 2r−2, r
    E₆         2, 5, 6, 8, 9, 12
    E₇         2, 6, 8, 10, 12, 14, 18
    E₈         2, 8, 12, 14, 18, 20, 24, 30
    F₄         2, 6, 8, 12
    G₂         2, 6

Table 3.1: The orders of the Casimir operators for the simple Lie groups
One way of constructing tensors which are invariant under the adjoint representation is by considering traces of products of generators in a given representation D₀. Since D₀(g) D₀(T_s) D₀(g)⁻¹ = D₀(T_t) d(g)^{t}_{s}, the cyclic property of the trace implies

d(g)^{t₁}_{s₁} … d(g)^{t_n}_{s_n} Tr( D₀(T_{t₁}) … D₀(T_{t_n}) ) = Tr( D₀(T_{s₁}) … D₀(T_{s_n}) )    (3.62)

Then taking

κ^{s₁ s₂ … s_n} ≡ (1/n!) Σ_{permutations σ} Tr( D₀(T_{s_{σ(1)}}) … D₀(T_{s_{σ(n)}}) )    (3.63)

we get Casimir operators. However, one finds that after the symmetrization procedure very few tensors of the form above survive. It follows that a semisimple Lie algebra of rank r possesses r functionally independent invariant Casimir operators. Their orders, for the simple Lie algebras, are given in table 3.1.
3.6.1 The quadratic Casimir operator

Notice from table 3.1 that all simple Lie groups have a quadratic Casimir operator. That is because all such groups have an invariant symmetric tensor of order two, namely the Killing form (see section 2.4)

κ_{st} = Tr( d(T_s) d(T_t) )    (3.64)

and the corresponding Casimir operator is

C₂^{(D)} ≡ κ^{st} D(T_s) D(T_t)    (3.65)

where κ^{st} is the inverse of κ_{st}.
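The Killing form (3.64) can be computed directly from the structure constants. The sketch below does it for su(2), with the illustrative normalization [T_a, T_b] = ε_{abc} T_c (our choice here, not the one fixed in the text); the matrix of ad(T_a) is (ad_a)_{cb} = f_{abc}.

```python
# Sketch: Killing form kappa_st = Tr(ad(T_s) ad(T_t)) of su(2)
# from the structure constants f_abc = eps_abc (illustrative normalization).
EPS = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}

def eps(a, b, c):
    return EPS.get((a, b, c), 0)

def ad(a):
    # matrix of ad(T_a): row index c, column index b, entry f_abc
    return [[eps(a, b, c) for b in range(3)] for c in range(3)]

def killing(a, b):
    A, B = ad(a), ad(b)
    return sum(A[i][j] * B[j][i] for i in range(3) for j in range(3))

kappa = [[killing(a, b) for b in range(3)] for a in range(3)]
# kappa comes out proportional to the identity, kappa_ab = -2 delta_ab,
# so the quadratic Casimir (3.65) is proportional to sum_a D(T_a) D(T_a).
```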
In the Weyl–Cartan basis the quadratic Casimir operator takes the form

C₂^{(D)} = Σ_{i=1}^{r} D(H_i) D(H_i) + Σ_{α>0} (α²/2) ( D(E_α) D(E_{−α}) + D(E_{−α}) D(E_α) )    (3.66)
Since the Casimir operator commutes with all generators, we have from Schur's lemma 1.1 that in an irreducible representation it must be proportional to the unit matrix. Denoting by λ the highest weight of the irreducible representation D, and using E_α |λ⟩ = 0 for α > 0,

C₂^{(D)} |λ⟩ = ( Σ_{i=1}^{r} D(H_i) D(H_i) + Σ_{α>0} (α²/2) [ D(E_α) , D(E_{−α}) ] ) |λ⟩
 = ( λ² + Σ_{α>0} (α²/2) (2λ·α/α²) ) |λ⟩
 = ( λ² + 2 λ·δ ) |λ⟩    (3.67)

where we have used (3.28) and (2.125). So, if D, with highest weight λ, is irreducible, we can write using (3.43) that

C₂^{(D)} = λ·(λ + 2δ) 1l = ( (λ + δ)² − δ² ) 1l    (3.68)
3.7 Characters

The character of an element g ∈ G in a representation D is defined as

χ_D(g) ≡ Tr D(g)    (3.69)

Obviously equivalent representations (see section 1.5) have the same characters. Analogously, two conjugate elements, g₁ = g₃ g₂ g₃⁻¹, have the same character in all representations. Therefore the conjugacy classes can be labelled by the characters. In the case of SU(2), for instance,

e^{i(π/2)T₂} T₃ e^{−i(π/2)T₂} = T₁    (3.70)

and consequently

e^{i(π/2)T₂} e^{iθT₃} e^{−i(π/2)T₂} = e^{iθT₁}    (3.71)

so the elements e^{iθT₃} and e^{iθT₁} are conjugate and have the same character. In the spin-j representation

χ_j(θ) = Tr e^{iθT₃} = Σ_{m=−j}^{j} e^{imθ}    (3.72)

and, summing the geometric series,

χ_j(θ) = sin((2j+1)θ/2) / sin(θ/2)    (3.73)

In a compact Lie group every element is conjugate to an element of the abelian subgroup obtained by exponentiating the Cartan subalgebra,

g ∼ e^{iθ·H}    (3.74)

Therefore the conjugacy classes, and consequently the characters, can be labelled by r parameters or angles θ (r = rank).
However, the elements of the abelian group parametrized by θ and by σ(θ) have the same character, since from (2.155) we have

S_σ e^{iθ·H} S_σ⁻¹ = e^{iσ(θ)·H}    (3.75)

Thus the parameter θ and its Weyl reflections parametrize the same conjugacy class.
The generalization of (3.73) to any compact group was obtained by H. Weyl in 1926. In a representation with highest weight λ, the elements of the conjugacy class labelled by θ have a character given by

χ_λ(θ) = Σ_{σ∈W} (sign σ) e^{iσ(λ+δ)·θ} / ( e^{iδ·θ} Π_{α>0} (1 − e^{−iα·θ}) )    (3.76)

where the summation is over the elements of the Weyl group W, and where sign σ is 1 (−1) if the element σ of the Weyl group is formed by an even (odd) number of reflections. δ is the same as the one defined in (3.43). This relation is called the Weyl character formula.
The character can also be calculated once one knows the multiplicities of the weights of the representation. From (3.69) and (3.74) we have that

χ_λ(θ) = Tr D(e^{iθ·H}) = Σ_μ m(μ) e^{iμ·θ}    (3.77)

where the summation is over the weights μ of the representation and m(μ) are their multiplicities. These can be obtained from Freudenthal's formula (3.42).

In the scalar representation the elements of the group are represented by the unity and the highest weight is zero. So, setting λ = 0 in (3.76), we obtain what is called the Weyl denominator formula

Σ_{σ∈W} (sign σ) e^{iσ(δ)·θ} = e^{iδ·θ} Π_{α>0} (1 − e^{−iα·θ})    (3.78)

Substituting (3.78) into (3.76), we can also write

χ_λ(θ) = Σ_{σ∈W} (sign σ) e^{iσ(λ+δ)·θ} / Σ_{σ∈W} (sign σ) e^{iσ(δ)·θ}    (3.79)
The dimension of the representation can be obtained from the Weyl character formula (3.76) by noticing that

dim D_λ = Tr(1l) = χ_λ(0)    (3.80)

Taking the limit θ → 0 one obtains the Weyl dimensionality formula

dim D_λ = Π_{α>0} (λ+δ)·α / Π_{α>0} δ·α    (3.81)

Example 3.9 In the case of SO(3) (or SU(2)) we have λ = jα and δ = α/2, and consequently we have from (3.81) that

dim D_j = 2j + 1    (3.82)

This result can also be obtained from (3.73) by taking the limit θ → 0 and using L'Hospital's rule.
Example 3.10 Consider the irreps of SU(3) with highest weight λ = m₁λ₁ + m₂λ₂. The positive roots of SU(3) are α₁, α₂ and α₃ = α₁ + α₂, and using (3.4) and (3.56) we get

2(λ+δ)·α₁/α₁² = m₁ + 1 ;  2(λ+δ)·α₂/α₂² = m₂ + 1 ;  2(λ+δ)·α₃/α₃² = m₁ + m₂ + 2    (3.83)

So, from (3.81), the dimension of the irrep of SU(3) with highest weight λ is

dim D_{(m₁,m₂)} = dim D_{(m₂,m₁)} = ½ (m₁+1)(m₂+1)(m₁+m₂+2)    (3.84)

In table 3.2 we give the dimensions of the smallest irreps of SU(3).

    (m₁, m₂)   dimension
    (1, 0)     3   (triplet)
    (0, 1)     3̄   (anti-triplet)
    (2, 0)     6
    (0, 2)     6̄
    (1, 1)     8   (adjoint)
    (3, 0)     10
    (0, 3)     1̄0
    (2, 1)     15
    (1, 2)     1̄5

Table 3.2: The dimensions of the smallest irreps of SU(3)
Example 3.11 Similarly, let us consider the irreps of SO(5) (or Sp(2)) with highest weight λ = m₁λ₁ + m₂λ₂. From example 2.14 we have that the positive roots of SO(5) are α₁, α₂, α₃ ≡ α₁ + α₂ and α₄ ≡ 2α₁ + α₂, and so, using (3.4) and (3.56), we get (setting α₁² = 1, α₂² = 2)

2δ·α₁/α₁² = 1 ;  2δ·α₂/α₂² = 1 ;  2δ·α₃/α₃² = 3 ;  2δ·α₄/α₄² = 2

and therefore

2(λ+δ)·α₁/α₁² = m₁ + 1 ;  2(λ+δ)·α₂/α₂² = m₂ + 1    (3.85)
2(λ+δ)·α₃/α₃² = m₁ + 2m₂ + 3 ;  2(λ+δ)·α₄/α₄² = m₁ + m₂ + 2
    (m₁, m₂)   dimension
    (1, 0)     4   (spinor)
    (0, 1)     5   (vector)
    (2, 0)     10  (adjoint)
    (0, 2)     14
    (1, 1)     16
    (3, 0)     20
    (0, 3)     30
    (2, 1)     35
    (1, 2)     40

Table 3.3: The dimensions of the smallest irreps of SO(5) (or Sp(2))
Therefore, from (3.81),

dim D_{(m₁,m₂)} = (1/6) (m₁+1)(m₂+1)(m₁+m₂+2)(m₁+2m₂+3)    (3.86)
The smallest irreps. of SO(5) (or Sp(2)) are shown in table 3.3.
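Table 3.3 can likewise be checked against (3.86) (the helper name `dim_so5` is ours):

```python
def dim_so5(m1, m2):
    # Weyl dimensionality formula (3.86) for so(5)
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) * (m1 + 2 * m2 + 3) // 6

table_3_3 = {(1, 0): 4, (0, 1): 5, (2, 0): 10, (0, 2): 14, (1, 1): 16,
             (3, 0): 20, (0, 3): 30, (2, 1): 35, (1, 2): 40}
```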
We give in figures 3.4 and 3.5 the dimensions of the fundamental representations of the simple Lie algebras (extracted from [DYN 57]).
3.8 Construction of the matrix representations

In a unitary representation the generators of the Cartan subalgebra are Hermitian, H_i† = H_i, and the step operators satisfy

E_α† = E_{−α}    (3.87)

Consider two states |μ⟩ and |μ′⟩ with weights μ and μ′. From (3.1) we have

⟨μ′| H_i |μ⟩ = μ_i ⟨μ′|μ⟩ = μ′_i ⟨μ′|μ⟩    (3.88)

and so

(μ′ − μ) ⟨μ′|μ⟩ = 0    (3.89)

and consequently states with different weights are orthogonal. In the case a weight is degenerate, it is possible to find an orthogonal basis for the subspace generated by the states corresponding to that degenerate weight. We shall then denote the basis states of the representation by |μ, k⟩, where μ is the corresponding weight and k is an integer that runs from 1 to m(μ), the multiplicity of μ. We can always normalize these states such that

⟨μ′, k′ | μ, k⟩ = δ_{μ,μ′} δ_{k,k′}    (3.90)
The matrix representing an operator T is then defined by¹

D(T)_{(μ′,k′)(μ,k)} ≡ ⟨μ′, k′| T |μ, k⟩    (3.91)

These matrices indeed constitute a representation since, inserting the identity written as a sum over the states,

Σ_{μ″,k″} ⟨μ′,k′|T|μ″,k″⟩⟨μ″,k″|T′|μ,k⟩ − Σ_{μ″,k″} ⟨μ′,k′|T′|μ″,k″⟩⟨μ″,k″|T|μ,k⟩
 = ⟨μ′,k′| [T, T′] |μ,k⟩
 = D([T, T′])_{(μ′,k′)(μ,k)}    (3.92)

¹ In order to simplify the notation we will denote the operators D(H_i) and D(E_α) by H_i and E_α respectively.
Using the completeness relation

1l = Σ_{μ,k} |μ, k⟩⟨μ, k|    (3.93)

we have

E_α |μ, k⟩ = Σ_{μ′,k′} |μ′, k′⟩⟨μ′, k′| E_α |μ, k⟩ = Σ_{l=1}^{m(μ+α)} |μ+α, l⟩⟨μ+α, l| E_α |μ, k⟩    (3.94)

where the sum is over the states of weight μ + α. Therefore, from (3.91), one has

D(E_α)_{(μ′,k′)(μ,k)} = ⟨μ+α, k′| E_α |μ, k⟩ δ_{μ′,μ+α}    (3.95)

The matrix elements of H_i are known once we have the weights of the representation, since from (3.1) and (3.90)

D(H_i)_{(μ′,k′)(μ,k)} = ⟨μ′, k′| H_i |μ, k⟩ = μ_i δ_{μ′,μ} δ_{k′,k}    (3.96)
So we are left with the task of computing the transition amplitudes ⟨μ+α, l| E_α |μ, k⟩. From (3.87) we have

⟨μ, k| E_{−α} |μ+α, l⟩ = ⟨μ+α, l| E_α |μ, k⟩*    (3.97)

Using

[E_α, E_{−α}] = 2 α·H / α²    (3.98)

one gets

⟨μ, k| [E_α, E_{−α}] |μ, k⟩ = ⟨μ, k| 2 α·H / α² |μ, k⟩ = 2 α·μ / α²    (3.99)

On the other hand,

⟨μ, k| [E_α, E_{−α}] |μ, k⟩ = ⟨μ, k| E_α E_{−α} |μ, k⟩ − ⟨μ, k| E_{−α} E_α |μ, k⟩
 = Σ_{l=1}^{m(μ−α)} ⟨μ, k| E_α |μ−α, l⟩⟨μ−α, l| E_{−α} |μ, k⟩ − Σ_{l=1}^{m(μ+α)} ⟨μ, k| E_{−α} |μ+α, l⟩⟨μ+α, l| E_α |μ, k⟩

and so, using (3.97),

Σ_{l=1}^{m(μ−α)} |⟨μ, k| E_α |μ−α, l⟩|² − Σ_{l=1}^{m(μ+α)} |⟨μ+α, l| E_α |μ, k⟩|² = 2 α·μ / α²    (3.100)

where m(μ+α) and m(μ−α) are the multiplicities of the weights μ+α and μ−α respectively.
The relation (3.100) can be used to calculate the moduli of the transition amplitudes recursively. By taking α to be a positive root and μ = λ the highest weight of the representation, we have that the second term on the l.h.s. of (3.100) vanishes. Since, in an irrep, λ is not degenerate, we can drop the index k and write

Σ_{l=1}^{m(λ−α)} |⟨λ| E_α |λ−α, l⟩|² = 2 λ·α / α² = q    (3.101)

where we have used (3.41) with p = 0.
The relative phases of the transition amplitudes are fixed by the commutation relations of the step operators. If α, β and α+β are roots, we have from section 2.5

[E_α, E_β] = (q+1) ε(α, β) E_{α+β}    (3.102)

Taking the matrix element of (3.102) between ⟨μ+α+β, l| and |μ, k⟩, and inserting the completeness relation (3.93), we get

Σ_{k′=1}^{m(μ+α)} ⟨μ+α+β, l| E_β |μ+α, k′⟩⟨μ+α, k′| E_α |μ, k⟩
 − Σ_{k′=1}^{m(μ+β)} ⟨μ+α+β, l| E_α |μ+β, k′⟩⟨μ+β, k′| E_β |μ, k⟩
 = (q+1) ε(α, β) ⟨μ+α+β, l| E_{α+β} |μ, k⟩    (3.103)
3.8.1 The irreducible representations of SU(2)

As an example, let us construct the irreducible representations of SU(2). The states of the spin-j representation are |j, m⟩ with

m = −j, −j+1, …, j−1, j    (3.104)

and

T₃ |j, m⟩ = m |j, m⟩    (3.105)

In the Chevalley basis we have

[E₊, E₋] = H    (3.106)

where H = 2α·H/α², with α being the only positive root of SU(2). In section 2.5 we have used the basis

[T₃, T±] = ±T± ;  [T₊, T₋] = 2T₃    (3.107)

which is related to the Chevalley basis by

T₃ = H/2    (3.108)

and

T± = E_{±α}    (3.109)
Using the relation (3.100), which is the same as taking the expectation value on the state |j, m⟩ of both sides of the second relation in (3.107), we get

|⟨j, m| T₊ |j, m−1⟩|² − |⟨j, m+1| T₊ |j, m⟩|² = 2m    (3.110)

where we have used the fact that T₊† = T₋ (see (3.87)). Notice that T₊ |j, j⟩ = 0, since j is the highest weight, and so

|⟨j, j| T₊ |j, j−1⟩|² = 2j    (3.111)

Clearly, such a result could also be obtained directly from (3.101). The other matrix elements of T₊ can then be obtained recursively from (3.110). Indeed,

|⟨j, m+1| T₊ |j, m⟩|² = 2 Σ_{l=0}^{j−m−1} (j − l)

Therefore

|⟨j, m+1| T₊ |j, m⟩|² = j(j+1) − m(m+1)    (3.112)

and since

⟨j, m+1| T₊ |j, m⟩ = ⟨j, m| T₋ |j, m+1⟩*    (3.113)

we get

T₊ |j, m⟩ = √( j(j+1) − m(m+1) ) |j, m+1⟩    (3.114)

The phases of such matrix elements can be chosen to vanish, since in SU(2) we do not have a relation like (3.103) to relate them. Therefore, we get

T₋ |j, m⟩ = √( j(j+1) − m(m−1) ) |j, m−1⟩    (3.115)

and so the matrices of the spin-j representation are

D^{(j)}_{m′,m}(T₃) = ⟨j, m′| T₃ |j, m⟩ = m δ_{m′,m}    (3.116)

D^{(j)}_{m′,m}(T±) = ⟨j, m′| T± |j, m⟩ = √( j(j+1) − m(m±1) ) δ_{m′,m±1}    (3.117)
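The matrices built from (3.116)–(3.117) can be checked against the algebra (3.107). A pure-Python sketch (helper names are ours):

```python
import math

def su2_matrices(j):
    # rows/columns ordered m = j, j-1, ..., -j; entries from (3.116)-(3.117)
    dim = int(round(2 * j)) + 1
    ms = [j - k for k in range(dim)]
    T3 = [[ms[i] if i == k else 0.0 for k in range(dim)] for i in range(dim)]
    Tp = [[0.0] * dim for _ in range(dim)]
    for k in range(1, dim):
        m = ms[k]                                # <j, m+1 | T+ | j, m>
        Tp[k - 1][k] = math.sqrt(j * (j + 1) - m * (m + 1))
    Tm = [list(row) for row in zip(*Tp)]         # T- = (T+)^T for real entries
    return T3, Tp, Tm

def comm(A, B):
    n = len(A)
    mul = lambda X, Y: [[sum(X[i][l] * Y[l][k] for l in range(n))
                         for k in range(n)] for i in range(n)]
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][k] - BA[i][k] for k in range(n)] for i in range(n)]

T3, Tp, Tm = su2_matrices(1.5)
C = comm(Tp, Tm)   # should equal 2*T3, the second relation in (3.107)
```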
3.8.2 The triplet representation of SU(3)

[Figure: weight diagram of the triplet representation of SU(3); the weights are λ₁, λ₁ − α₁ and λ₁ − α₁ − α₂]

Let us construct the triplet representation of SU(3), which has highest weight λ₁. Its weights are λ₁, λ₁ − α₁ and λ₁ − α₃, with α₃ ≡ α₁ + α₂. From (3.96), the matrix elements of the Cartan subalgebra generators in the Chevalley basis are

D(H_a)_{(μ′)(μ)} = ( 2 α_a·μ / α_a² ) δ_{μ′,μ} ,  a = 1, 2    (3.118)

where we have used (3.90), and where we have neglected the degeneracy index.
From (3.4) and the Cartan matrix of SU(3) (see example 2.13) we have

2α₁·(λ₁−α₁)/α₁² = −1 ;  2α₁·(λ₁−α₃)/α₁² = 0
2α₂·(λ₁−α₁)/α₂² = 1 ;  2α₂·(λ₁−α₃)/α₂² = −1    (3.119)

Denoting the states by (so as to fix the ordering of the rows and columns of the matrices)

|1⟩ ≡ |λ₁⟩ ;  |2⟩ ≡ |λ₁ − α₁⟩ ;  |3⟩ ≡ |λ₁ − α₃⟩    (3.120)
we obtain from (3.117), (3.118) and (3.119) that the matrices representing the Cartan subalgebra generators are

D¹(H₁) = ( 1 0 0 ; 0 −1 0 ; 0 0 0 )    (3.121)

D¹(H₂) = ( 0 0 0 ; 0 1 0 ; 0 0 −1 )    (3.122)

Using (3.100) and (3.101), one finds that the moduli of the non-vanishing transition amplitudes are all unity,

|⟨λ₁| E_{α₁} |λ₁−α₁⟩|² = |⟨λ₁−α₁| E_{α₂} |λ₁−α₃⟩|² = 1    (3.123)

|⟨λ₁| E_{α₃} |λ₁−α₃⟩|² = 1    (3.124)
These are the only non-vanishing transition amplitudes. From (3.95) and (3.120) we see that the only non-vanishing elements of the matrices representing the step operators are

D¹(E_{α₁})₁₂ = ⟨λ₁| E_{α₁} |λ₁−α₁⟩ = e^{iθ}
D¹(E_{α₂})₂₃ = ⟨λ₁−α₁| E_{α₂} |λ₁−α₃⟩ = e^{iφ}
D¹(E_{α₃})₁₃ = ⟨λ₁| E_{α₃} |λ₁−α₃⟩ = e^{i(θ+φ)}    (3.125)

where the phase of the last element follows from (3.103). Therefore

D¹(E_{α₁}) = ( 0 e^{iθ} 0 ; 0 0 0 ; 0 0 0 ) ;  D¹(E_{α₂}) = ( 0 0 0 ; 0 0 e^{iφ} ; 0 0 0 ) ;  D¹(E_{α₃}) = ( 0 0 e^{i(θ+φ)} ; 0 0 0 ; 0 0 0 )    (3.126)

and, using (3.87),

D¹(E_{−α₁}) = ( 0 0 0 ; e^{−iθ} 0 0 ; 0 0 0 ) ;  D¹(E_{−α₂}) = ( 0 0 0 ; 0 0 0 ; 0 e^{−iφ} 0 ) ;  D¹(E_{−α₃}) = ( 0 0 0 ; 0 0 0 ; e^{−i(θ+φ)} 0 0 )    (3.127)
In general, the phases θ and φ are chosen to vanish. The algebra of SU(3) is generated by taking real linear combinations of the matrices H_a (a = 1, 2), (E_α + E_{−α}) and i(E_α − E_{−α}). On the other hand, the algebra of SL(3) is generated by the same matrices, but with the third one not carrying the factor i. Notice that in this way the triplet representation of the group SU(3) is unitary, whilst the triplet of SL(3) is not.
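With the phases set to zero, the triplet matrices above are just matrix units, and the Chevalley relations can be verified directly (a quick sketch with our own helpers):

```python
def unit(i, j):
    # 3x3 matrix with a single 1 in row i, column j (indices from 0)
    return [[1 if (r, c) == (i, j) else 0 for c in range(3)] for r in range(3)]

def comm(A, B):
    mul = lambda X, Y: [[sum(X[i][l] * Y[l][k] for l in range(3))
                         for k in range(3)] for i in range(3)]
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][k] - BA[i][k] for k in range(3)] for i in range(3)]

H1 = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
H2 = [[0, 0, 0], [0, 1, 0], [0, 0, -1]]
E1, E2, E3 = unit(0, 1), unit(1, 2), unit(0, 2)   # E_{alpha_1,2,3}, phases = 0
F1, F2, F3 = unit(1, 0), unit(2, 1), unit(2, 0)   # E_{-alpha_1,2,3}

ok = (comm(E1, F1) == H1 and comm(E2, F2) == H2 and comm(E1, E2) == E3)
```

Note also that [E_{α₃}, E_{−α₃}] comes out as H₁ + H₂, as expected for the root α₃ = α₁ + α₂.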
3.8.3 The anti-triplet representation of SU(3)

[Figure: weight diagram of the anti-triplet representation of SU(3); the weights are λ₂, λ₂ − α₂ and λ₂ − α₂ − α₁]

The anti-triplet representation of SU(3) has highest weight λ₂, and its weights are λ₂, λ₂ − α₂ and λ₂ − α₃. Ordering the states as

|1⟩ ≡ |λ₂⟩ ;  |2⟩ ≡ |λ₂ − α₂⟩ ;  |3⟩ ≡ |λ₂ − α₃⟩    (3.128)
and using the Cartan matrix of SU(3) (see example 2.13), (3.4) and (3.118), we get that the matrices which represent the Cartan subalgebra generators in the Chevalley basis are

D²(H₁) = ( 0 0 0 ; 0 1 0 ; 0 0 −1 )    (3.129)

D²(H₂) = ( 1 0 0 ; 0 −1 0 ; 0 0 0 )    (3.130)

From (3.100) and (3.101), the moduli of the non-vanishing transition amplitudes are again unity, and so we can write

⟨λ₂| E_{α₂} |λ₂−α₂⟩ = e^{iφ̄} ;  ⟨λ₂−α₂| E_{α₁} |λ₂−α₃⟩ = e^{iθ̄}    (3.131)

⟨λ₂| E_{α₃} |λ₂−α₃⟩ = e^{iξ̄}    (3.132)

where, according to (3.131) and (3.132), we have introduced the phases θ̄, φ̄ and ξ̄. From (3.87) we obtain the matrices for the negative step operators. Using the fact that (q+1) ε(α₁, α₂) = 1, we get from (3.103) that these phases have to satisfy

ξ̄ = θ̄ + φ̄ + π    (3.133)
Therefore the matrices which represent the step operators in the anti-triplet representation are

D²(E_{α₁}) = ( 0 0 0 ; 0 0 e^{iθ̄} ; 0 0 0 ) ;  D²(E_{−α₁}) = ( 0 0 0 ; 0 0 0 ; 0 e^{−iθ̄} 0 )
D²(E_{α₂}) = ( 0 e^{iφ̄} 0 ; 0 0 0 ; 0 0 0 ) ;  D²(E_{−α₂}) = ( 0 0 0 ; e^{−iφ̄} 0 0 ; 0 0 0 )
D²(E_{α₃}) = ( 0 0 −e^{i(θ̄+φ̄)} ; 0 0 0 ; 0 0 0 ) ;  D²(E_{−α₃}) = ( 0 0 0 ; 0 0 0 ; −e^{−i(θ̄+φ̄)} 0 0 )    (3.134)

So these matrices are obtained from those of the triplet by making the interchange E_{±α₁} ↔ E_{±α₂} together with E_{±α₃} → −E_{±α₃}. From (3.121) and (3.129) we see that the Cartan subalgebra generators are also interchanged.
3.9 Tensor product of representations

We have seen in definition 1.12 of section 1.5 the concept of tensor product of representations. The idea is quite simple. Consider two irreducible representations D^λ and D^{λ′} of a Lie group G, with highest weights λ and λ′ and representation spaces V^λ and V^{λ′} respectively. We can construct a third representation by considering the tensor product space V^{λ⊗λ′} ≡ V^λ ⊗ V^{λ′}. The operators representing the group elements in the tensor product representation are

D^{λ⊗λ′}(g) ≡ D^λ(g) ⊗ D^{λ′}(g)    (3.135)
For the generators of the corresponding Lie algebra this implies

D^{λ⊗λ′}(T) = D^λ(T) ⊗ 1l + 1l ⊗ D^{λ′}(T)    (3.136)

which is indeed a representation of the algebra, since

[ D^{λ⊗λ′}(T₁) , D^{λ⊗λ′}(T₂) ] = [ D^λ(T₁) , D^λ(T₂) ] ⊗ 1l + 1l ⊗ [ D^{λ′}(T₁) , D^{λ′}(T₂) ]
 = D^λ([T₁, T₂]) ⊗ 1l + 1l ⊗ D^{λ′}([T₁, T₂])
 = D^{λ⊗λ′}([T₁, T₂])    (3.139)
Notice that if |μ, l⟩ and |μ′, l′⟩ are states of the representations V^λ and V^{λ′} with weights μ and μ′ respectively, one gets

D^{λ⊗λ′}(H_i) |μ, l⟩ ⊗ |μ′, l′⟩ = D^λ(H_i)|μ, l⟩ ⊗ |μ′, l′⟩ + |μ, l⟩ ⊗ D^{λ′}(H_i)|μ′, l′⟩
 = (μ_i + μ′_i) |μ, l⟩ ⊗ |μ′, l′⟩    (3.140)

It then follows that the weights of the representation V^{λ⊗λ′} are the sums of all weights of V^λ with all weights of V^{λ′}. If λ and λ′ are the highest weights of V^λ and V^{λ′} respectively, then the highest weight of V^{λ⊗λ′} is λ + λ′, and the corresponding state is

|λ + λ′⟩ = |λ⟩ ⊗ |λ′⟩    (3.141)

which is clearly non-degenerate.
In general, the representation V^{λ⊗λ′} is reducible, and one can split it as the sum of irreducible representations of G,

V^{λ⊗λ′} = ⊕_{λ″} V^{λ″}    (3.142)

The states of weight μ + μ′ of V^{λ⊗λ′} can be written as

|μ + μ′, k⟩ = Σ_{l=1}^{m(μ)} Σ_{l′=1}^{m(μ′)} C^k_{l,l′} |μ, l⟩ ⊗ |μ′, l′⟩    (3.143)

where m(μ) and m(μ′) are the multiplicities of μ and μ′ in V^λ and V^{λ′} respectively, and k = 1, 2, … m(μ + μ′), with m(μ + μ′) being the multiplicity of μ + μ′ in V^{λ⊗λ′}. Clearly, m(μ + μ′) = m(μ) m(μ′). The constants C^k_{l,l′} are the so-called Clebsch–Gordan coefficients.
Example 3.12 Let us consider the tensor product of two spinor representations of SU(2). As discussed in section 3.8.1, the spinor representation is two-dimensional, with states |½, ½⟩ and |½, −½⟩ satisfying

T₃ |½, ±½⟩ = ±½ |½, ±½⟩    (3.144)

and

T₊ |½, −½⟩ = |½, ½⟩ ;  T₋ |½, −½⟩ = 0    (3.145)

One can easily construct the irreducible components by taking the highest weight state |½, ½⟩ ⊗ |½, ½⟩ and acting with the lowering operator. One gets

D^{½⊗½}(T₋) |½, ½⟩ ⊗ |½, ½⟩ = (T₋ ⊗ 1l + 1l ⊗ T₋) |½, ½⟩ ⊗ |½, ½⟩
 = |½, −½⟩ ⊗ |½, ½⟩ + |½, ½⟩ ⊗ |½, −½⟩    (3.146)

and

D^{½⊗½}(T₋) ( |½, −½⟩ ⊗ |½, ½⟩ + |½, ½⟩ ⊗ |½, −½⟩ ) = 2 |½, −½⟩ ⊗ |½, −½⟩ ;  D^{½⊗½}(T₋) |½, −½⟩ ⊗ |½, −½⟩ = 0    (3.147)

and

D^{½⊗½}(T₋) ( |½, ½⟩ ⊗ |½, −½⟩ − |½, −½⟩ ⊗ |½, ½⟩ ) = 0    (3.148)

Therefore the normalized states

|1, 1⟩ ≡ |½, ½⟩ ⊗ |½, ½⟩
|1, 0⟩ ≡ ( |½, −½⟩ ⊗ |½, ½⟩ + |½, ½⟩ ⊗ |½, −½⟩ ) / √2
|1, −1⟩ ≡ |½, −½⟩ ⊗ |½, −½⟩    (3.149)

constitute the triplet (spin 1) representation, whilst

|0, 0⟩ ≡ ( |½, ½⟩ ⊗ |½, −½⟩ − |½, −½⟩ ⊗ |½, ½⟩ ) / √2    (3.150)

constitutes the singlet (spin 0).
Bibliography
[ALD 86] R. Aldrovandi and J.G. Pereira, An Introduction to Geometrical Physics, World Scientific (1995).
[AUM 77] L. Auslander and R.E. Mackenzie, Introduction to Differential Manifolds, Dover Publ., Inc., New York (1977).
[BAR 77] A. O. Barut and R. Raczka, Theory of group representations and
applications, Polish Scientific Publishers (1977).
[BUD 72] F. J. Budden; The Fascination of Groups; Cambridge University
Press (1972).
[CBW 82] Y. Choquet-Bruhat, C. De Witt-Morette and M. Dillard-Bleick,
Analysis, Manifolds and Physics, North-Holland Publ. Co. (1982).
[COR 84] J.F. Cornwell; Group theory in Physics; vols. I, II and III; Techniques in Physics 7; Academic Press (1984).
[DYN 57] E.B. Dynkin, Transl. Amer. Math. Soc. (2) 6 (1957) 111,
and (1) 9 (1962) 328.
[FLA 63] H. Flanders, Differential Forms with Applications to the Physical Sciences, Academic Press (1963).
[HAM 62] M. Hamermesh; Group Theory and its Applications to Physical
Problems ; Addison-Wesley Publ. Comp. (1962).
[HEL 78] S. Helgason, Differential Geometry, Lie Groups and Symmetric
Spaces, Academic Press (1978).
[HUM 72] J.E. Humphreys, Introduction to Lie Algebra and Representation
Theory, Graduate Texts in Mathematics, Vol. 9, Springer-Verlag
(1972).
[JAC 79] Nathan Jacobson; Lie Algebras, Dover Publ., Inc. (1979).
[LEZ 92] A. N. Leznov and M. V. Saveliev, Group-Theoretical Methods for Integration of Nonlinear Dynamical Systems, Progress in Physics Series, v. 15, Birkhäuser Verlag, Basel (1992).
[OLI 82] D. I. Olive, Lectures on gauge theories and Lie algebras: with some
applications to spontaneous symmetry breaking and integrable dynamical systems, University of Virginia preprint (1982).
Index
abelian group, 10
abelian Lie algebra, 45
adjoint representation, 42
algebra
abelian, 45
automorphism, 42
compact, 46
Lie, 36
nilpotent, 55
simple, 45
solvable, 55
structure constants, 39, 41
semisimple, 45
associativity, 6
automorphism
automorphism group, 12
definition, 11
inner, 12, 43, 70
outer, 12, 43, 70
automorphism of a Lie algebra, 42
branching, 139
Cartan matrix, 75
Cartan subalgebra, 54, 55
Casimir operator
definition, 121
Casimir operators, 121
Casimir operators
quadratic, 48
center of a group, 16
centralizer, 16
character, 123
character
definition, 27
of Lie group, 123
Weyl formula, 125
character of a representation, 27
Chevalley basis, 84
Clebsch-Gordan coefficients, 140
closure, 6
co-root, 107
compact group, 33
compact semisimple Lie algebra, 46
completely reducible rep., 25
conjugacy class, 15
conjugate element, 15
conjugate subgroup, 15
continuous group, 33
coset
left coset space, 19
left cosets, 19
right coset space, 19
right cosets, 19
Coxeter number, 84
cyclic group, 10
dimension of a representation, 21
direct product, 17
dominant weight, 106
Dynkin diagram, 78
Dynkin index, 54
equivalent representations, 24
essential parameters, 33
exponential mapping, 40
factor group, 19
faithful representation, 21
finite discrete groups, 33
Freudenthal's formula, 117
fundamental representation, 114
fundamental weights, 107
Fundamental Weyl Chamber, 71
group
abelian group, 10
adjoint representation, 42
center of a, 16
compact, 33
continuous, 33
cyclic group, 10
definition, 6
direct product of, 17
essential parameters, 33
finite discrete, 33
homomorphic groups, 11
infinite discrete, 33
isomorphic groups, 11
Lie, 34
non compact, 33
operator group, 21
order of a group, 14
quotient group, 19
representation of, 21
semisimple group, 16
simple group, 16
symmetric group, 9
topological, 34
Weyl, 69
group of transformations, 21
height of a root, 82
highest root, 84
highest weight, 112
highest weight state, 112
homomorphism
definition, 11
homomorphic groups, 11
ideal, 45
identity, 6
identity
left identity, 6
improper subgroups, 13
infinite discrete groups, 33
inner automorphism, 12, 70
inner automorphism of algebras, 43
invariant bilinear trace form, 45
invariant subalgebra, 45
invariant subgroup, 15
inverse
left inverse, 7
inverse element, 6
isomorphism
definition, 11
isomorphic groups, 11
Killing form, 45
left coset space, 19
left cosets, 19
left invariant vector field, 38
left translations, 38
Lie algebra, 36
Lie group, 34
Lie subalgebra, 39
linear representation, 22
matrix representation, 22
minimal representation, 114
minimal weight, 114
negative root, 72
nilpotent algebra, 55
non compact group, 33
normalizer, 54
one parameter subgroup, 40
operator group, 21
order of a group, 14
outer automorphism, 12, 70
outer automorphism of algebras, 43
positive root, 72
potentially real representation, 29
proper subgroups, 13
pseudo real representation, 29
quadratic Casimir operator, 48
quotient group, 19
real representation, 29
reducible representation, 24
representation
adjoint, 42
branching, 139
Clebsch-Gordan, 140
completely reducible, 25
dimension, 21, 125
equivalent, 24
essentially complex, 29
faithful, 21
fundamental, 114
linear, 22
matrix, 22
minimal, 114
of algebras, 105
potentially real, 29
pseudo real, 29
real, 29
reducible, 24
representation of a group, 21
space, 105
space of, 21
tensor product, 27
unitary, 25
character, 27
representation of a Lie algebra, 105
representation space, 21, 105
right coset space, 19
right cosets, 19
right translations, 38
root
co-root, 107
definition, 57
diagram, 70
height of, 82
highest, 84
lattice, 108
negative, 72
of su(3), 63
positive, 72
simple, 72
space decomposition, 57
string, 80
system of, 70
root diagram, 70
root diagram of su(3), 63
root lattice, 108
root space decomposition, 57
root string, 80
root system, 70
semisimple group, 16
semisimple Lie algebra, 45
simple group, 16
simple Lie algebra, 45
simple root, 72
solvable algebra, 55
step operators, 56, 57
structure constants, 39, 41
su(3)
roots, 63
SU(n)
center of, 17
subalgebra
Cartan, 54, 55
invariant, 45
subgroup
conjugate subgroup, 15
definition, 13
improper subgroups, 13
invariant subgroup, 15
one parameter, 40
proper subgroups, 13
symmetric group, 9
tangent space, 35
tangent vector, 35
tensor product representation, 27
topological group, 34
trace form, 45
transformations, group of, 21
unitary representation, 25
vector field
definition, 36
left invariant, 38
tangent vector, 35
weight
definition, 106
dominant, 106
fundamental, 107
highest, 112
lattice, 108
minimal, 114
strings, 116
weight lattice, 108
weight strings, 116
Weyl chambers, 71
Weyl character formula, 125
Weyl denominator formula, 125
Weyl dimensionality formula, 125
Weyl group, 69
Weyl reflection, 67
Weyl-Cartan basis, 58, 59