Lecture 2 - Some Sources of Lie Algebras
We saw in the previous lecture that we can form a Lie algebra from an associative algebra $A$,
with binary operation the commutator bracket $[a, b] = ab - ba$. We also saw that this construction
works for algebras satisfying any one of a variety of other conditions.
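As a quick sanity check (not part of the original notes), the following Python sketch verifies skew-symmetry and the Jacobi identity numerically for the commutator bracket on $4 \times 4$ real matrices; the matrix size and the random entries are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

def bracket(X, Y):
    """Commutator bracket on the associative algebra of 4x4 matrices."""
    return X @ Y - Y @ X

# Skew-symmetry and the Jacobi identity for the commutator bracket.
assert np.allclose(bracket(A, B), -bracket(B, A))
jacobi = (bracket(A, bracket(B, C))
          + bracket(B, bracket(C, A))
          + bracket(C, bracket(A, B)))
assert np.allclose(jacobi, 0)
print("commutator bracket on Mat_4(R) satisfies skew-symmetry and Jacobi")
\end{verbatim}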
As Algebras of Derivations
Lie algebras are often constructed as the algebra of derivations of a given algebra. This corresponds
to the use of vector fields in geometry.
Definition 2.1. For any algebra $A$ over a field $F$, a derivation of $A$ is an $F$-vector space endomorphism $D$ of $A$ satisfying $D(ab) = D(a)b + aD(b)$. Let $\mathrm{Der}(A) \subseteq \mathfrak{gl}_A$ denote the space of derivations of $A$.
For an element $a$ of a Lie algebra $\mathfrak{g}$, define a map $\mathrm{ad}(a) : \mathfrak{g} \to \mathfrak{g}$ by $b \mapsto [a, b]$. This map is referred to as the adjoint operator. Rewriting the Jacobi identity as
$$\mathrm{ad}(a)([b, c]) = [\mathrm{ad}(a)(b), c] + [b, \mathrm{ad}(a)(c)],$$
we see that $\mathrm{ad}(a)$ is a derivation of $\mathfrak{g}$. Derivations of this form are referred to as inner derivations of $\mathfrak{g}$.
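For instance, the following Python check (an illustration, not from the notes) verifies numerically that $\mathrm{ad}(a)$ satisfies the Leibniz rule with respect to the commutator bracket on $3 \times 3$ real matrices; the names and random entries are our own choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))

def br(x, y):
    return x @ y - y @ x

def ad(x):
    """The adjoint operator ad(x): y -> [x, y]."""
    return lambda y: br(x, y)

# ad(a) is a derivation of the bracket: ad(a)[b,c] = [ad(a)b, c] + [b, ad(a)c].
lhs = ad(a)(br(b, c))
rhs = br(ad(a)(b), c) + br(b, ad(a)(c))
assert np.allclose(lhs, rhs)
print("ad(a) satisfies the Leibniz rule with respect to the bracket")
\end{verbatim}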
Proposition 2.1.
(a) For any algebra $A$, $\mathrm{Der}(A)$ is a Lie subalgebra of $\mathfrak{gl}_A$.
(b) The inner derivations of a Lie algebra $\mathfrak{g}$ form an ideal of $\mathrm{Der}(\mathfrak{g})$. More precisely, for any $D \in \mathrm{Der}(\mathfrak{g})$ and $a \in \mathfrak{g}$ we have $[D, \mathrm{ad}(a)] = \mathrm{ad}(D(a))$.
Solution: The derivations of $A$ are those maps $D \in \mathfrak{gl}_A$ which satisfy $D(ab) - D(a)b - aD(b) = 0$ for all $a$ and $b$ in $A$. For fixed $a$ and $b$, the left-hand side of this equation is linear in $D$, so that the set of endomorphisms satisfying that single equation is a subspace. The set of derivations is the intersection over all $a$ and $b$ in $A$ of these subspaces, which is a subspace.
We are only left to check that the bracket of two derivations is a derivation. For any $a, b \in A$ and $D_1, D_2 \in \mathrm{Der}(A)$ we calculate:
\begin{align*}
[D_1, D_2](ab) &= D_1\bigl(D_2(a)b + aD_2(b)\bigr) - D_2\bigl(D_1(a)b + aD_1(b)\bigr) \\
&= D_1D_2(a)b + D_2(a)D_1(b) + D_1(a)D_2(b) + aD_1D_2(b) \\
&\quad - \bigl\{D_2D_1(a)b + D_1(a)D_2(b) + D_2(a)D_1(b) + aD_2D_1(b)\bigr\} \\
&= D_1D_2(a)b - D_2D_1(a)b + aD_1D_2(b) - aD_2D_1(b) \\
&= [D_1, D_2](a)b + a[D_1, D_2](b).
\end{align*}
Thus the derivations are closed under the bracket, and so form a Lie subalgebra of $\mathfrak{gl}_A$.
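The calculation above can be checked symbolically in a small example. The following SymPy sketch (ours, not part of the notes) takes the illustrative derivations $D_1 = x\,\partial/\partial x$ and $D_2 = y\,\partial/\partial x$ of the polynomial algebra $F[x, y]$ and verifies the Leibniz rule for $[D_1, D_2]$ on two sample polynomials.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

# Two derivations of the polynomial algebra F[x, y] (illustrative choices):
D1 = lambda p: x * sp.diff(p, x)   # D1 = x d/dx
D2 = lambda p: y * sp.diff(p, x)   # D2 = y d/dx

def bracket(p):
    """The commutator [D1, D2] applied to p."""
    return sp.expand(D1(D2(p)) - D2(D1(p)))

f = x**2 * y + 3 * x
g = y**3 + x * y

# Leibniz rule for the commutator: [D1,D2](fg) = [D1,D2](f) g + f [D1,D2](g).
lhs = bracket(f * g)
rhs = sp.expand(bracket(f) * g + f * bracket(g))
assert sp.expand(lhs - rhs) == 0
print("[D1, D2] satisfies the Leibniz rule on the sample polynomials")
\end{verbatim}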
Exercise. For a choice of elements $\{x_i, x_j\}$ in the polynomial algebra $F[x_1, \dots, x_n]$, define a bracket on $F[x_1, \dots, x_n]$ by
$$\{f, g\} = \sum_{i,j} \frac{\partial f}{\partial x_i}\,\frac{\partial g}{\partial x_j}\,\{x_i, x_j\}.$$
Show that this bracket satisfies the axioms of a Lie algebra if and only if $\{x_i, x_i\} = 0$, $\{x_i, x_j\} = -\{x_j, x_i\}$, and any triple $x_i, x_j, x_k$ satisfies the Jacobi identity.
Solution: If the Poisson bracket defines a Lie algebra structure for some choice of values $\{x_i, x_j\}$, then in particular the axioms of a Lie algebra must be satisfied for brackets of the terms $x_i$. The interesting question is whether the converse holds. We suppose then that the $\{x_i, x_j\}$ are chosen so that $\{x_i, x_j\} = -\{x_j, x_i\}$, and so that the Jacobi identity is satisfied for triples $x_i, x_j, x_k$.
The bilinearity of the bracket follows from the linearity of differentiation, and the skew-symmetry follows from the assumed skew-symmetry on the $x_i$.
At this point we introduce some shorthands to simplify what follows. If f is any function, we write
fi for the derivative of f with respect to xi . When we are discussing an expression e in terms of
three functions f, g, h, we will write CS(e) for the cyclic summation of e, the expression formed
by summing those obtained from e by permuting the f, g, h cyclically. In particular, the Jacobi
identity will be CS({f, {g, h}}) = 0.
First we calculate the iterated bracket of monomials $x_i$:
$$\{x_i, \{x_j, x_k\}\} = \sum_l \{x_j, x_k\}_l \{x_i, x_l\}$$
(an example of the shorthands described).
Now the iterated bracket of any three polynomials (or functions) $f$, $g$ and $h$ is:
$$\{h, \{f, g\}\} = \sum_{i,j,k,l} \bigl[f_{il} g_j h_k + g_{jl} f_i h_k\bigr] \{x_i, x_j\}\{x_k, x_l\} + \sum_{i,j,k,l} f_i g_j h_k \{x_i, x_j\}_l \{x_k, x_l\}.$$
By the assumption that the Jacobi identity holds on the $x_i$, we have (for any $i, j, k$):
$$\sum_l \mathrm{CS}(f_i g_j h_k)\, \{x_i, x_j\}_l \{x_k, x_l\} = 0,$$
for cyclically permuting the $f, g, h$ corresponds to cyclically permuting the $i, j, k$ (in the opposite order).
Thus we have:
$$\mathrm{CS}\bigl(\{h, \{f, g\}\}\bigr) = \sum_{i,j,k,l} \mathrm{CS}\bigl(f_{il} g_j h_k + g_{jl} f_i h_k\bigr) \{x_i, x_j\}\{x_k, x_l\}.$$
The remaining task can be viewed as finding the $\{x_\alpha, x_\beta\}\{x_\gamma, x_\delta\}$ coefficient in this expression, where every appearance of $\{x_\beta, x_\alpha\}$ is rewritten as $-\{x_\alpha, x_\beta\}$, and so on. To do so, we tabulate all the appearances of terms which are multiples of $\{x_\alpha, x_\beta\}\{x_\gamma, x_\delta\}$. We may as well assume here that $\alpha < \beta$ and $\gamma < \delta$.
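As a concrete check of the exercise, the following SymPy sketch (ours, not part of the notes) takes $n = 3$ with the illustrative choice $\{x_1, x_2\} = x_3$, $\{x_2, x_3\} = x_1$, $\{x_3, x_1\} = x_2$, which is skew-symmetric and satisfies the Jacobi identity on the $x_i$, and verifies the Jacobi identity of the induced bracket on sample polynomials.
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

# The chosen values {x_i, x_j}: skew-symmetric, zero on the diagonal,
# with {x1,x2} = x3, {x2,x3} = x1, {x3,x1} = x2.
P = [[0, x3, -x2],
     [-x3, 0, x1],
     [x2, -x1, 0]]

def pb(f, g):
    """Poisson bracket {f,g} = sum_{i,j} (df/dx_i)(dg/dx_j){x_i,x_j}."""
    return sp.expand(sum(sp.diff(f, X[i]) * sp.diff(g, X[j]) * P[i][j]
                         for i in range(3) for j in range(3)))

f, g, h = x1**2 + x2, x2 * x3, x1 * x3 + x2**2
jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
assert sp.expand(jacobi) == 0
print("Jacobi identity holds for the sample polynomials")
\end{verbatim}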
Given a basis $e_1, \dots, e_n$ of a Lie algebra $\mathfrak{g}$ over $F$, the bracket is determined by the structure constants $c_{ij}^k \in F$, defined by:
$$[e_i, e_j] = \sum_k c_{ij}^k e_k.$$
The structure constants must satisfy the obvious skew-symmetry condition ($c_{ii}^k = 0$ and $c_{ij}^k = -c_{ji}^k$), and a more complicated (quadratic) condition corresponding to the Jacobi identity.
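As an illustration (not from the notes), the following Python sketch computes the structure constants of $\mathfrak{sl}_2$ in the basis $e, h, f$ and checks both skew-symmetry and the quadratic Jacobi condition $\sum_m \bigl(c_{jk}^m c_{im}^l + c_{ki}^m c_{jm}^l + c_{ij}^m c_{km}^l\bigr) = 0$; the basis matrices and NumPy encoding are our own choices.
\begin{verbatim}
import numpy as np

# Basis of sl2: e, h, f as 2x2 matrices (an illustrative example).
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]
n = len(basis)

def bracket(a, b):
    return a @ b - b @ a

def coords(m):
    """Coefficients of a traceless 2x2 matrix in the basis (e, h, f)."""
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

# Structure constants c[i, j, k] with [e_i, e_j] = sum_k c[i, j, k] e_k.
c = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        c[i, j] = coords(bracket(basis[i], basis[j]))

# Skew-symmetry: c[i, j, k] = -c[j, i, k].
assert np.allclose(c, -c.transpose(1, 0, 2))

# Quadratic Jacobi condition:
# sum_m (c[j,k,m]c[i,m,l] + c[k,i,m]c[j,m,l] + c[i,j,m]c[k,m,l]) = 0.
jac = (np.einsum('jkm,iml->ijkl', c, c)
       + np.einsum('kim,jml->ijkl', c, c)
       + np.einsum('ijm,kml->ijkl', c, c))
assert np.allclose(jac, 0)
print("structure constants of sl2:", c[0, 1], c[0, 2], c[1, 2])
\end{verbatim}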
Definition 2.2. Let $\mathfrak{g}_1, \mathfrak{g}_2$ be two Lie algebras over $F$ and $\varphi : \mathfrak{g}_1 \to \mathfrak{g}_2$ a linear map. We say that $\varphi$ is a homomorphism if it preserves the bracket: $\varphi([a, b]) = [\varphi(a), \varphi(b)]$, and an isomorphism if it is bijective. If there exists an isomorphism $\varphi$, we say that $\mathfrak{g}_1$ and $\mathfrak{g}_2$ are isomorphic, written $\mathfrak{g}_1 \cong \mathfrak{g}_2$.
Exercise 2.3. Let $\varphi : \mathfrak{g}_1 \to \mathfrak{g}_2$ be a homomorphism. Then:
(a) $\ker \varphi$ is an ideal of $\mathfrak{g}_1$.
(b) $\operatorname{im} \varphi$ is a subalgebra of $\mathfrak{g}_2$.
(c) $\operatorname{im} \varphi \cong \mathfrak{g}_1 / \ker \varphi$.
(b) The image $\operatorname{im} \varphi$ is a subspace, again as $\varphi$ is $F$-linear. Now for any $u, v \in \operatorname{im} \varphi$, we may write $u = \varphi(x)$ and $v = \varphi(y)$ for elements $x, y \in \mathfrak{g}_1$. Then $[u, v] = [\varphi(x), \varphi(y)] = \varphi([x, y]) \in \operatorname{im} \varphi$. Thus the image is a subalgebra.
(c) Consider the map $\overline{\varphi} : \mathfrak{g}_1 / \ker \varphi \to \operatorname{im} \varphi$ given by $x + \ker \varphi \mapsto \varphi(x)$. We must first see that $\overline{\varphi}$ is well defined. If $x + \ker \varphi = x' + \ker \varphi$, then $x' - x \in \ker \varphi$, so that:
$$\varphi(x') = \varphi\bigl(x + (x' - x)\bigr) = \varphi(x) + \varphi(x' - x) = \varphi(x).$$
Thus our definition of $\overline{\varphi}$ does not depend on the choice of representative, and $\overline{\varphi}$ is well defined.
It is trivial to see that $\overline{\varphi}$ is a homomorphism. Now suppose that $x + \ker \varphi \in \ker \overline{\varphi}$. Then $\varphi(x) = 0$, so that $x \in \ker \varphi$, and $x + \ker \varphi = 0 + \ker \varphi$. Thus $\overline{\varphi}$ is injective, and that $\overline{\varphi}$ is surjective is obvious. Thus $\overline{\varphi}$ is an isomorphism $\mathfrak{g}_1 / \ker \varphi \to \operatorname{im} \varphi$.
Example 2.2. The general linear group $GL_n$. Let $\{P_\alpha\} = \emptyset$, so that $GL_n(A)$ is the set of invertible matrices with entries in $A$. This is a group for any $A$, so that $GL_n$ is an algebraic group.
Example 2.3. The special linear group $SL_n$. Let $\{P_\alpha\} = \{\det(x_{ij}) - 1\}$, so that $SL_n(A)$ is the set of invertible matrices with entries in $A$ and determinant $1$. This is a group for any $A$, so that $SL_n$ is an algebraic group.
Exercise 2.4. Given $B \in \mathrm{Mat}_n(F)$, let $O_{n,B}(A) = \{g \in GL_n(A) : g^T B g = B\}$. Show that this family of groups is given by an algebraic group.
Solution: For any unital commutative associative algebra $A$ over $F$, the set $O_{n,B}(A)$ is a subgroup of $GL_n(A)$: it contains $I_n$, and if $g^T B g = B$ and $h^T B h = B$, then $(gh)^T B (gh) = h^T (g^T B g) h = B$ and $(g^{-1})^T B g^{-1} = (g^{-1})^T (g^T B g) g^{-1} = B$.
We then only have to show that the condition $g^T B g = B$ can be written as a collection of polynomial equations in the entries $g_{ij}$ of the matrix $g$, with coefficients in $F$. This is obvious: we have one polynomial equation for each of the $n^2$ entries of the matrix $g^T B g - B$, and the coefficients depend only on the entries of $B$.
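For instance, the following SymPy sketch (with the illustrative choices $n = 2$ and $B = \mathrm{diag}(1, -1)$, which are ours) prints the $n^2$ polynomial equations in the entries $g_{ij}$ arising from $g^T B g = B$.
\begin{verbatim}
import sympy as sp

n = 2
B = sp.Matrix([[1, 0], [0, -1]])                          # illustrative choice of B
g = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'g{i}{j}'))   # generic matrix entries

# The condition g^T B g = B, written entrywise, is a family of n^2
# polynomial equations in the g_ij with coefficients from B.
equations = (g.T * B * g - B).applyfunc(sp.expand)
for i in range(n):
    for j in range(n):
        print(f'entry ({i},{j}):', sp.Eq(equations[i, j], 0))
\end{verbatim}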
Definition 2.4. Over a given field $F$, define the algebra of dual numbers $D$ to be
$$D := F[\varepsilon]/(\varepsilon^2) = \{a + b\varepsilon \mid a, b \in F,\ \varepsilon^2 = 0\}.$$
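As a quick computational aside (not part of the notes), dual-number arithmetic can be implemented directly; the class name Dual and the float coefficients below are our own illustrative choices.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Dual:
    """A dual number a + b*eps with eps^2 = 0 (coefficients as floats here)."""
    a: float  # constant part
    b: float  # coefficient of eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

x = Dual(2.0, 3.0)   # 2 + 3*eps
y = Dual(5.0, -1.0)  # 5 - eps
print(x * y)         # Dual(a=10.0, b=13.0), i.e. 10 + 13*eps
print(Dual(0.0, 1.0) * Dual(0.0, 1.0))  # eps * eps = 0
\end{verbatim}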
For an algebraic group $G$, define $\mathrm{Lie}\, G := \{X \in \mathfrak{gl}_n(F) : I_n + \varepsilon X \in G(D)\}$.
Example 2.4. (1) $\mathrm{Lie}\, GL_n = \mathfrak{gl}_n(F)$, since $(I_n + \varepsilon X)^{-1} = I_n - \varepsilon X$. ($I_n - \varepsilon X$ approximates the inverse to order two, but over dual numbers, order two is ignored.)
(2) $\mathrm{Lie}\, SL_n = \mathfrak{sl}_n(F)$, the traceless matrices.
(3) $\mathrm{Lie}\, O_{n,B} = \{X \in \mathfrak{gl}_n(F) : X^T B + B X = 0\}$.
Solution: For (2), we need only prove the formula $\det(I_n + \varepsilon X) = 1 + \varepsilon\,\mathrm{tr}(X)$. It is trivial when $n = 1$, and we proceed by induction on $n$. Consider the matrix $I_n + \varepsilon X$, and the cofactor expansion of the determinant along the final column. If $i < n$, the $(i, n)$ entry is a multiple of $\varepsilon$, and so is every entry of the $i$th column of the matrix obtained by removing the row and column containing $(i, n)$. Thus the corresponding cofactor is also a multiple of $\varepsilon$, and the term it contributes is a multiple of $\varepsilon^2 = 0$, so it makes no contribution to the overall determinant. The determinant is therefore $1 + \varepsilon X_{nn}$ multiplied by the minor corresponding to $(n, n)$. By induction, the determinant is $(1 + \varepsilon X_{nn})\bigl(1 + \varepsilon(\mathrm{tr}(X) - X_{nn})\bigr)$. The result follows.
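Both identities used so far can be checked symbolically. The following SymPy sketch (ours, with the illustrative choice $n = 3$ and a generic matrix $X$) truncates modulo $\varepsilon^2$ and verifies $(I_n + \varepsilon X)(I_n - \varepsilon X) = I_n$ and $\det(I_n + \varepsilon X) = 1 + \varepsilon\,\mathrm{tr}(X)$.
\begin{verbatim}
import sympy as sp

eps = sp.Symbol('eps')
n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'x{i}{j}'))
I = sp.eye(n)

def trunc(t):
    """Reduce a scalar modulo eps^2: keep only the constant and eps terms."""
    t = sp.expand(t)
    return t.coeff(eps, 0) + t.coeff(eps, 1) * eps

# (I + eps X)(I - eps X) = I over the dual numbers, so I - eps X is the inverse.
prod = ((I + eps * X) * (I - eps * X)).applyfunc(trunc)
assert prod == I

# det(I + eps X) = 1 + eps tr(X) over the dual numbers.
assert sp.expand(trunc((I + eps * X).det()) - (1 + eps * X.trace())) == 0
print("dual-number identities verified for n =", n)
\end{verbatim}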
For (3), the following calculation gives the result:
$$(I_n + \varepsilon X)^T B (I_n + \varepsilon X) = B + \varepsilon\,(X^T B + B X),$$
which equals $B$ precisely when $X^T B + B X = 0$.
Proposition. For an algebraic group $G$, $\mathrm{Lie}\, G$ is a Lie subalgebra of $\mathfrak{gl}_n(F)$.
Proof. We first show that $\mathrm{Lie}\, G$ is a subspace. Indeed, $X \in \mathrm{Lie}\, G$ iff $P_\alpha(I_n + \varepsilon X) = 0$ for all $\alpha$. Using the Taylor expansion:
$$P_\alpha(I_n + \varepsilon X) = P_\alpha(I_n) + \varepsilon \sum_{i,j} \frac{\partial P_\alpha}{\partial x_{ij}}(I_n)\, x_{ij},$$
as $\varepsilon^2 = 0$. Now as $P_\alpha(I_n) = 0$ (every group contains the identity), this condition is linear in the $x_{ij}$, so that $\mathrm{Lie}\, G$ is a subspace.
Now suppose that $X, Y \in \mathrm{Lie}\, G$. We wish to prove that $XY - YX \in \mathrm{Lie}\, G$. We have:
$$(I_n + \varepsilon X)(I_n + \varepsilon' Y)(I_n + \varepsilon X)^{-1}(I_n + \varepsilon' Y)^{-1} = I_n + \varepsilon\varepsilon'(XY - YX) \in G\bigl(F[\varepsilon, \varepsilon']/(\varepsilon^2, (\varepsilon')^2)\bigr).$$
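This identity can also be verified symbolically. The following SymPy sketch (ours, with the illustrative choice $n = 2$ and generic matrices $X, Y$) computes the group commutator over $F[\varepsilon, \varepsilon']/(\varepsilon^2, (\varepsilon')^2)$ and checks that it equals $I_n + \varepsilon\varepsilon'(XY - YX)$.
\begin{verbatim}
import sympy as sp

e1, e2 = sp.symbols('eps1 eps2')
n = 2
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'x{i}{j}'))
Y = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'y{i}{j}'))
I = sp.eye(n)

def trunc(m):
    """Reduce each entry modulo eps1^2 and eps2^2."""
    return m.applyfunc(lambda t: sp.expand(t).subs({e1**2: 0, e2**2: 0}))

# Group commutator of I + eps1 X and I + eps2 Y, using the dual-number inverses.
comm = trunc((I + e1 * X) * (I + e2 * Y) * (I - e1 * X) * (I - e2 * Y))
expected = I + e1 * e2 * (X * Y - Y * X)
assert (comm - expected).applyfunc(sp.expand) == sp.zeros(n)
print("group commutator = I + eps1*eps2*[X, Y] modulo eps1^2, eps2^2")
\end{verbatim}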