Abstract and Linear Algebra
E. H. Connell
E.H. Connell
Department of Mathematics
University of Miami
P.O. Box 249085
Coral Gables, Florida 33124 USA
[email protected]
© 1999 E.H. Connell
Introduction
In 1965 I first taught an undergraduate course in abstract algebra. It was fun to
teach because the material was interesting and the class was outstanding. Five of
those students later earned a Ph.D. in mathematics. Since then I have taught the
course about a dozen times from various texts. Over the years I developed a set of
lecture notes and in 1985 I had them typed so they could be used as a text. They
now appear (in modified form) as the first five chapters of this book. Here were some
of my motives at the time.
3) To order the material linearly. To the extent possible, each section should use
the previous sections and be used in the following sections.
Over the years I used the five chapters that were typed as a base for my algebra
courses, supplementing them as I saw fit. In 1996 I wrote a sixth chapter, giving
enough material for a full first year graduate course. This chapter was written in the
same “style” as the previous chapters, i.e., everything was right down to the nub. It
hung together pretty well except for the last two sections on determinants and dual
spaces. These were independent topics stuck on at the end. In the academic year
1997-98 I revised all six chapters and had them typed in LaTeX. This is the personal
background of how this book came about.
The presentation is compact and tightly organized, but still somewhat informal.
The proofs of many of the elementary theorems are omitted. These proofs are to
be provided by the professor in class or assigned as homework exercises. There is a
non-trivial theorem stated without proof in Chapter 4, namely the determinant of the
product is the product of the determinants. For the proper flow of the course, this
theorem should be assumed there without proof. The proof is contained in Chapter 6.
The Jordan form should not be considered part of Chapter 5. It is stated there only
as a reference for undergraduate courses. Finally, Chapter 6 is not written primarily
for reference, but as an additional chapter for more advanced courses.
This text is written with the conviction that it is more effective to teach abstract
and linear algebra as one coherent discipline rather than as two separate ones. Teach-
ing abstract algebra and linear algebra as distinct courses results in a loss of synergy
and a loss of momentum. Also I am convinced it is easier to build a course from a
base than to extract it from a big book. Because after you extract it, you still have to
build it. Basic algebra is a subject of incredible elegance and utility, but it requires
a lot of organization. This book is my attempt at that organization. Every effort
has been extended to make the subject move rapidly and to make the flow from one
topic to the next as seamless as possible. The goal is to stay focused and go forward,
because mathematics is learned in hindsight. I would have made the book shorter,
but I did not have any more time.
E. H. Connell
Department of Mathematics
University of Miami
Coral Gables, FL 33124
[email protected]
Outline
Chapter 1 Background and Fundamentals of Mathematics
Sets, Cartesian products
Relations, partial orderings, Hausdorff maximality principle, equivalence relations
Functions, bijections, strips, solutions of equations, right and left inverses, projections
Notation for the logic of mathematics
Integers, subgroups, unique factorization
Chapter 2 Groups
Groups, scalar multiplication for additive groups
Subgroups, order, cosets
Normal subgroups, quotient groups, the integers mod n
Homomorphisms
Permutations, the symmetric groups
Product of groups
Chapter 3 Rings
Rings
Units, domains, fields
The integers mod n
Ideals and quotient rings
Homomorphisms
Polynomial rings
Product of rings
The Chinese remainder theorem
Characteristic
Boolean rings
Chapter 6 Appendix
The Chinese remainder theorem
Prime and maximal ideals and UFDs
Splitting short exact sequences
Euclidean domains
Jordan blocks
Jordan canonical form
Determinants
Dual spaces
Chapter 1
Background and Fundamentals of
Mathematics
This chapter is fundamental, not just for algebra, but for all fields related to mathe-
matics. The basic concepts are products of sets, partial orderings, equivalence rela-
tions, functions, and the integers. An equivalence relation on a set A is shown to be
simply a partition of A into disjoint subsets. There is an emphasis on the concept
of function, and the properties of surjective, injective, and bijective. The notion of a
solution of an equation is central in mathematics, and most properties of functions
can be stated in terms of solutions of equations. In elementary courses the section
on the Hausdorff Maximality Principle should be ignored. The final section gives a
proof of the unique factorization theorem for the integers.
Notation Mathematics has its own universally accepted shorthand. The symbol
∃ means “there exists” and ∃! means “there exists a unique”. The symbol ∀ means
“for each” and ⇒ means “implies”. Some sets (or collections) are so basic they have
their own proprietary symbols. Five of these are listed below: N = Z+ the set of positive integers, Z the integers, Q the rational numbers, R the real numbers, and C the complex numbers.
Sets Suppose A, B, C,... are sets. We use the standard notation for intersection
and union.
A ∩ B = {x : x ∈ A and x ∈ B} is the intersection of A and B, and
A ∪ B = {x : x ∈ A or x ∈ B} is the union of A and B.
Any set called an index set is assumed to be non-void. Suppose T is an index set and
for each t ∈ T , At is a set.
∪_{t∈T} At = {x : ∃ t ∈ T with x ∈ At}

∩_{t∈T} At = {x : if t ∈ T, x ∈ At} = {x : ∀ t ∈ T, x ∈ At}
Exercise Suppose each of A and B is a set. The statement that A is not a subset
of B means ______________.
(A ∩ B)′ = A′ ∪ B′ and
(A ∪ B)′ = A′ ∩ B′
Question Is (R × R2 ) = (R2 × R) = R3 ?
Relations
1) If a ∈ A, then a ∼ a. (reflexive)
2) If a ∼ b, then b ∼ a. (symmetric)
2′) If a ∼ b and b ∼ a, then a = b. (anti-symmetric)
3) If a ∼ b and b ∼ c, then a ∼ c. (transitive)
1) If a ∈ A, then a ≤ a.
2′) If a ≤ b and b ≤ a, then a = b.
3) If a ≤ b and b ≤ c, then a ≤ c.
on S. However the ordering may be linear on S but not linear on A. The HMP is
that any linearly ordered subset of a partially ordered set is contained in a maximal
linearly ordered subset.
In this book, the only applications of the HMP are to obtain maximal monotonic
collections of subsets.
Definition A collection of sets is said to be monotonic if, given any two sets of
the collection, one is contained in the other.
The HMP is used twice in this book. First, to show that infinitely generated
vector spaces have free bases, and second, in the Appendix, to show that rings have
maximal ideals (see pages 87 and 109). In each of these applications, the maximal
monotonic subcollection will have a maximal element. In elementary courses, these
results may be assumed, and thus the HMP may be ignored.
Theorem
Exercise Is there a relation on R satisfying 1), 2), 2′) and 3)? That is, is there
an equivalence relation on R which is also a partial ordering?
Exercise Let H ⊂ R2 be the line H = {(a, 2a) : a ∈ R}. Consider the collection
of all translates of H, i.e., all lines in the plane with slope 2. Find the equivalence
relation on R2 defined by this partition of R2 .
Functions
Just as there are two ways of viewing an equivalence relation, there are two ways
of defining a function. One is the “intuitive” definition, and the other is the “graph”
or “ordered pairs” definition. In either case, domain and range are inherent parts of
the definition. We use the “intuitive” definition because everyone thinks that way.
Composition Given functions f : W → X and g : X → Y, define g ◦ f : W → Y by
(g ◦ f)(x) = g(f(x)).
Theorem (The associative law of composition) If f : V → W, g : W → X, and h : X → Y, then h ◦ (g ◦ f) = (h ◦ g) ◦ f. This may be written as h ◦ g ◦ f.
Definitions Suppose f : X → Y .
Examples
Note There is no such thing as “the function sin(x).” A function is not defined
unless the domain and range are specified.
Exercise Suppose X is a set with 6 elements and Y is a finite set with n elements.
If you are placing 6 pigeons in 6 holes, and you run out of pigeons before you fill
the holes, then you have placed 2 pigeons in one hole. In other words, in part 1) for
n = m = 6, if f is not surjective then f is not injective. Of course, the pigeonhole
principle does not hold for infinite sets, as can be seen by the following exercise.
Theorem Suppose f : X → Y .
1) The equation f(x) = y0 has at least one solution for each y0 ∈ Y iff
f is ______________.
2) The equation f(x) = y0 has at most one solution for each y0 ∈ Y iff
f is ______________.
3) The equation f(x) = y0 has a unique solution for each y0 ∈ Y iff
f is ______________.
Right and Left Inverses One way to understand functions is to study right and
left inverses, which are defined after the next theorem.
Theorem Suppose f : X → Y and g : Y → W are functions.
1) f has a right inverse iff f is surjective. Any such right inverse must be
injective.
2) f has a left inverse iff f is injective. Any such left inverse must be
surjective.
Note The Axiom of Choice is not discussed in this book. However, if you worked
1) of the theorem above, you unknowingly used one version of it. For completeness,
we state this part of 1) again.
Note It is a classical theorem in set theory that the Axiom of Choice and the
Hausdorff Maximality Principle are equivalent. However in this text we do not go
that deeply into set theory. For our purposes it is assumed that the Axiom of Choice
and the HMP are true.
[Commutative diagram: given f1 : Y → X1 and f2 : Y → X2, the function f : Y → X1 × X2 makes both triangles commute, where π1 : X1 × X2 → X1 and π2 : X1 × X2 → X2 are the projections.]
One nice thing about this concept is that it works fine for infinite Cartesian
products.
Exercise This exercise is not used elsewhere in this text and may be omitted. It
is included here for students who wish to do a little more set theory. Suppose T is a
non-void set.
3) Define P(T), the power set of T, to be the collection of all subsets of T (including
the null set). Show that if T is a finite set with n elements, P(T) has 2^n elements.
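Readers who want to experiment can check this count in a few lines of Python (the code and the name power_set are ours, not the text's):

```python
from itertools import combinations

def power_set(T):
    """All subsets of T (including the null set), as tuples."""
    elements = list(T)
    return [s for k in range(len(elements) + 1)
              for s in combinations(elements, k)]

T = {1, 2, 3}
P = power_set(T)
print(len(P))                  # 8
assert len(P) == 2 ** len(T)   # P(T) has 2^n elements
```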
Each of the words “Lemma”, “Theorem”, and “Corollary” means “true state-
ment”. Suppose A and B are statements. A theorem may be stated in any of the
following ways:
Theorem A ⇒ B (A implies B ).
There are two ways to prove the theorem — to suppose A is true and show B is
true, or to suppose B is false and show A is false. The expressions “A ⇔ B”, “A is
equivalent to B”, and “A is true iff B is true ” have the same meaning (namely, that
A ⇒ B and B ⇒ A).
The important thing to remember is that thoughts and expressions flow through
the language. Mathematical symbols are shorthand for phrases and sentences in the
English language. For example, “x ∈ B ” means “x is an element of the set B.” If A
is the statement “x ∈ Z+” and B is the statement “x^2 ∈ Z+”, then “A ⇒ B” means
“If x is a positive integer, then x^2 is a positive integer”.
Theorem Suppose P (n) is a statement for each n = 1, 2, ... . Suppose P (1) is true
and for each n ≥ 1, P (n) ⇒ P (n + 1). Then for each n ≥ 1, P (n) is true.
Proof If the theorem is false, then ∃ a smallest positive integer m such that
P(m) is false. Since P(1) is true, m > 1. But P(m − 1) is true, and so P(m) is true, which is impossible.
The Integers
In this section, lower case letters a, b, c, ... will represent integers, i.e., elements
of Z. Here we will establish the following three basic properties of the integers.
All of this will follow from long division, which we now state formally.

Euclidean Algorithm Given integers m and n with n > 0, ∃! integers q and r with m = nq + r and 0 ≤ r < n.
1) 0 ∈ G.
2) If g1 and g2 ∈ G, then (m1 g1 + m2 g2 ) ∈ G for all integers m1 , m2 .
3) ∃! non-negative integer n such that G = nZ. In fact, if G ≠ {0}
and n is the smallest positive integer in G, then G = nZ.
1) G contains a and b.
2) G is a subgroup. In fact, it is the smallest subgroup containing a and b.
It is called the subgroup generated by a and b.
3) Denote by (a, b) the smallest positive integer in G. By the previous
theorem, G = (a, b)Z, and thus (a, b) | a and (a, b) | b. Also note that
∃ m, n such that ma + nb = (a, b). The integer (a, b) is called
the greatest common divisor of a and b.
4) If n is an integer which divides a and b, then n also divides (a, b).
Definition If any one of these three conditions is satisfied, we say that a and b
are relatively prime.
We are now ready for our first theorem with any guts.
Proof Suppose a and b are relatively prime, c ∈ Z and a | bc. Then there exist
m, n with ma + nb = 1, and thus mac + nbc = c. Now a | mac and a | nbc. Thus
a | (mac + nbc) and so a | c.
Definition A prime is an integer p > 1 which does not factor, i.e., if p = ab then
a = ±1 or a = ±p. The first few primes are 2, 3, 5, 7, 11, 13, 17,... .
Proof Part 1) follows immediately from the definition of prime. Now suppose
p | ab. If p does not divide a, then by 1), (p, a) = 1 and by the previous theorem, p
must divide b. Thus 2) is true. Part 3) follows from 2) and induction on n.
Proof Factorization into primes is obvious, and uniqueness follows from 3) in the
theorem above. The power of this theorem is uniqueness, not existence.
Now that we have unique factorization and part 3) above, the picture becomes
transparent. Here are some of the basic properties of the integers in this light.
Theorem (Summary)
1) Suppose |a| > 1 has prime factorization a = ±p_1^{s_1} ··· p_k^{s_k}. Then the only
divisors of a are of the form ±p_1^{t_1} ··· p_k^{t_k} where 0 ≤ t_i ≤ s_i for i = 1, ..., k.
3) Suppose |a| > 1 and |b| > 1. Let {p_1, . . . , p_k} be the union of the distinct
primes of their factorizations. Thus a = ±p_1^{s_1} ··· p_k^{s_k} where 0 ≤ s_i and
b = ±p_1^{t_1} ··· p_k^{t_k} where 0 ≤ t_i. Let u_i be the minimum of s_i and t_i. Then
(a, b) = p_1^{u_1} ··· p_k^{u_k}. For example (2^3 · 5 · 11, 2^2 · 5^4 · 7) = 2^2 · 5.
Exercise Find (180,28), i.e., find the greatest common divisor of 180 and 28,
i.e., find the positive generator of the subgroup generated by {180,28}. Find integers
m and n such that 180m + 28n = (180, 28). Find the least common multiple of 180
and 28, and show that it is equal to (180 · 28)/(180, 28).
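A possible computational companion to this exercise is the classical extended Euclidean algorithm, sketched below in Python (the function name extended_gcd is ours):

```python
def extended_gcd(a, b):
    """Return (g, m, n) with m*a + n*b == g = the gcd of a and b."""
    if b == 0:
        return (a, 1, 0)
    g, m, n = extended_gcd(b, a % b)   # g == m*b + n*(a mod b)
    return (g, n, m - (a // b) * n)

g, m, n = extended_gcd(180, 28)
print(g, m, n)                 # 4 -2 13  (one valid choice of m and n)
assert 180 * m + 28 * n == g
print((180 * 28) // g)         # 1260, the least common multiple
```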
Exercise We have defined the greatest common divisor (gcd) and the least com-
mon multiple (lcm) of a pair of integers. Now suppose n ≥ 2 and S = {a1 , a2 , .., an }
is a finite collection of integers with |ai | > 1 for 1 ≤ i ≤ n. Define the gcd and
the lcm of the elements of S and develop their properties. Express the gcd and the
lcm in terms of the prime factorizations of the ai . Show that the set of all linear
combinations of the elements of S is a subgroup of Z, and its positive generator is
the gcd of the elements of S.
Exercise Show that the gcd of S = {90, 70, 42} is 2, and find integers n1 , n2 , n3
such that 90n1 + 70n2 + 42n3 = 2. Also find the lcm of the elements of S.
Exercise Show that if the nth root of an integer is a rational number, then it
is itself an integer. That is, suppose c and n are integers greater than 1. There is a
unique positive real number x with x^n = c. Show that if x is rational, then it is an
integer. Thus if p is a prime, its nth root is an irrational number.
Exercise Show that a positive integer is divisible by 3 iff the sum of its digits is
divisible by 3. More generally, let a = a_n a_{n−1} . . . a_0 = a_n 10^n + a_{n−1} 10^{n−1} + ··· + a_0
where 0 ≤ a_i ≤ 9. Now let b = a_n + a_{n−1} + ··· + a_0, and show that a and b have
the same remainder when divided by 3. Although this is a straightforward exercise in long division,
it will be more transparent later on. In the language of the next chapter, it says that
[a] = [b] in Z3.
Card Trick Ask friends to pick out seven cards from a deck and then to select one
to look at without showing it to you. Take the six cards face down in your left hand
and the selected card in your right hand, and announce you will place the selected
card in with the other six, but they are not to know where. Put your hands behind
your back and place the selected card on top, and bring the seven cards in front in
your left hand. Ask your friends to give you a number between one and seven (not
allowing one). Suppose they say three. You move the top card to the bottom, then
the second card to the bottom, and then you turn over the third card, leaving it face
up on top. Then repeat the process, moving the top two cards to the bottom and
turning the third card face up on top. Continue until there is only one card face
down, and this will be the selected card. Magic? Stay tuned for Chapter 2, where it
is shown that any non-zero element of Z7 has order 7.
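For the curious, here is a small Python simulation of the trick, under our reading of the procedure: each round, k − 1 cards are moved one at a time from top to bottom, and the next card is turned face up but stays in the packet. Round j then turns up the card in position j(k − 1) mod 7, so the selected card (position 0) is turned up last precisely because k − 1 is a non-zero element of Z7:

```python
from collections import deque

def card_trick(k, n=7):
    """Simulate one run: the selected card starts on top (position 0).
    Each round, move k-1 cards one at a time from top to bottom, then
    turn the new top card face up; face-up cards stay in the packet."""
    packet = deque(range(n))            # card 0 is the selected card
    face_down = set(range(n))
    while len(face_down) > 1:
        packet.rotate(-(k - 1))         # k-1 cards go to the bottom
        face_down.discard(packet[0])    # the k-th card is turned face up
    return face_down.pop()

# any non-zero element of Z7 generates Z7, so every k from 2 to 7 works
for k in range(2, 8):
    assert card_trick(k) == 0
print("the last face-down card is always the selected one")
```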
Chapter 2
Groups
Groups are the central objects of algebra. In later chapters we will define rings and
modules and see that they are special cases of groups. Also ring homomorphisms and
module homomorphisms are special cases of group homomorphisms. Even though
the definition of group is simple, it leads to a rich and amazing theory. Everything
presented here is standard, except that the product of groups is given in the additive
notation. This is the notation used in later chapters for the products of rings and
modules. This chapter and the next two chapters are restricted to the most basic
topics. The approach is to do quickly the fundamentals of groups, rings, and matrices,
and to push forward to the chapter on linear algebra. This chapter is, by far and
above, the most difficult chapter in the book, because all the concepts are new.
1) If a, b, c ∈ G then a · (b · c) = (a · b) · c. (In additive notation: a + (b + c) = (a + b) + c.)
4) If a, b ∈ G, then a · b = b · a. (In additive notation: a + b = b + a.)
Exercise. Write out the above theorem where G is an additive group. Note that
part (vii) states that G has a scalar multiplication over Z. This means that if a is in
G and n is an integer, there is defined an element an in G. This is so basic, that we
state it explicitly.
which we write as (−a − a − ··· − a). Then the following properties hold in general,
except the first requires that G be abelian.
(a + b)n = an + bn
a(n + m) = an + am
a(nm) = (an)m
a1 = a
Note that the plus sign is used ambiguously — sometimes for addition in G
and sometimes for addition in Z. In the language used in Chapter 5, this theorem
states that any additive abelian group is a Z-module. (See page 71.)
Exercise Suppose G is a non-void set with a binary operation φ(a, b) = a·b which
satisfies 1), 2) and [3′) If a ∈ G, ∃ b ∈ G with a · b = e]. Show (G, φ) is a group,
i.e., show b · a = e. In other words, the group axioms are stronger than necessary. If
every element has a right inverse, then every element has a two sided inverse.
Subgroups
1) if a, b ∈ H then a · b ∈ H
and 2) if a ∈ H then a^{−1} ∈ H.
If G has an infinite number of elements, we say that o(G), the order of G, is infinite. If G has n elements, then
o(G) = n. Suppose a ∈ G and H = {a^i : i ∈ Z}. H is an abelian subgroup of G
called the subgroup generated by a. We define the order of the element a to be the
order of H, i.e., the order of the subgroup generated by a. Let f : Z → H be the
surjective function defined by f(m) = a^m. Note that f(k + l) = f(k) · f(l) where
the addition is in Z and the multiplication is in the group H. We come now to the
first real theorem in group theory. It says that the element a has finite order iff f
is not injective, and in this case, the order of a is the smallest positive integer n
with a^n = e.

Proof Suppose j < i and a^i = a^j. Then a^{i−j} = e and thus ∃ a smallest positive
integer n with a^n = e. This implies that the elements of {a^0, a^1, ..., a^{n−1}} are distinct,
and we must show they are all of H. If m ∈ Z, the Euclidean algorithm states that
∃ integers q and r with 0 ≤ r < n and m = nq + r. Thus a^m = a^{nq} · a^r = a^r, and
so H = {a^0, a^1, ..., a^{n−1}}, and a^m = e iff n|m. Later in this chapter we will see that
f is a homomorphism from an additive group to a multiplicative group and that,
in additive notation, H is isomorphic to Z or Zn.
Exercise Write out this theorem for G an additive group. To begin, suppose a is
an element of an additive group G, and H = {ai : i ∈ Z}.
Exercise Show that if G is a finite group of even order, then G has an odd number
of elements of order 2. Note that e is the only element of order 1.
Definition These equivalence classes are called right cosets. If the relation is
defined by a ∼ b iff b^{−1} · a ∈ H, then the equivalence classes are cl(a) = aH and
they are called left cosets. H is a left and right coset. If G is abelian, there is no
distinction between right and left cosets. Note that b^{−1} · a ∈ H iff a^{−1} · b ∈ H.
1) Ha = H iff a ∈ H.
2) If b ∈ Ha, then Hb = Ha, i.e., if h ∈ H, then H(h · a) = (Hh)a = Ha.
3) If Hc ∩ Ha ≠ ∅, then Hc = Ha.
4) The right cosets form a partition of G, i.e., each a in G belongs to one and
only one right coset.
5) Elements a and b belong to the same right coset iff a · b^{−1} ∈ H iff b · a^{−1} ∈ H.
Proof There is no better way to develop facility with cosets than to prove this
theorem. Also write this theorem for G an additive group.
1) Any two right cosets have the same number of elements. That is, if a, b ∈ G,
f : Ha → Hb defined by f (h · a) = h · b is a bijection. Also any two left cosets
have the same number of elements. Since H is a right and left coset, any
two cosets have the same number of elements.
2) G has the same number of right cosets as left cosets. The bijection is given by
F(Ha) = a^{−1}H. The number of right (or left) cosets is called the index of
H in G.
4) If G is finite, and a ∈ G, then o(a) | o(G). (Proof: The order of a is the order
of the subgroup generated by a, and by 3) this divides the order of G.)
5) If G has prime order, then G is cyclic, and any element (except e) is a generator.
(Proof: Suppose o(G) = p and a ∈ G, a 6= e. Then o(a) | p and thus o(a) = p.)
Exercises
ii) Suppose G is the additive group Z and H = 3Z. Find the cosets of H.
iii) Think of a circle as the interval [0, 1] with end points identified. Suppose G = R
under addition and H = Z. Show that the collection of all the cosets of H
can be thought of as a circle.
Normal Subgroups
1) If a ∈ G, then aHa^{−1} = H
2) If a ∈ G, then aHa^{−1} ⊂ H
3) If a ∈ G, then aH = Ha
4) Every right coset is a left coset, i.e., if a ∈ G, ∃ b ∈ G with Ha = bH.
Note For any group G, G and e are normal subgroups. If G is an abelian group,
then every subgroup of G is normal.
Exercise Let A ⊂ R2 be the square with vertices (−1, 1), (1, 1), (1, −1), and
(−1, −1), and G be the collection of all “isometries” of A onto itself. These are
bijections of A onto itself which preserve distance and angles, i.e., which preserve dot
product. Show that with multiplication defined as composition, G is a multiplicative
group. Show that G has four rotations, two reflections about the axes, and two
reflections about the diagonals, for a total of eight elements. Show the collection of
rotations is a cyclic subgroup of order four which is a normal subgroup of G. Show
that the reflection about the x-axis together with the identity form a cyclic subgroup
of order two which is not a normal subgroup of G. Find the four right cosets of this
subgroup. Finally, find the four left cosets of this subgroup.
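A sketch of this exercise in Python, assuming the eight isometries are represented as orthogonal 2 × 2 integer matrices (rotations generated by the 90-degree rotation R, reflections obtained from the x-axis reflection F); the helper names are ours:

```python
import numpy as np

R = np.array([[0, -1], [1, 0]])      # rotation by 90 degrees
F = np.array([[1, 0], [0, -1]])      # reflection about the x-axis
rotations = [np.linalg.matrix_power(R, i) for i in range(4)]
G = rotations + [r @ F for r in rotations]   # all eight isometries

def is_normal(H):
    """Check a H a^{-1} = H for every a in G.  These matrices are
    orthogonal, so the inverse of a is its transpose."""
    def idx(M):
        return next(i for i, N in enumerate(G) if np.array_equal(M, N))
    Hset = {idx(h) for h in H}
    return all({idx(a @ h @ a.T) for h in H} == Hset for a in G)

print(is_normal(rotations))            # True:  the rotation subgroup
print(is_normal([rotations[0], F]))    # False: {identity, x-axis reflection}
```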
Exercise Write out the above theorem for G an additive abelian group.
Theorem If n > 1 and a is any integer, then [a] is a generator of Zn iff (a, n) = 1.
Proof The element [a] is a generator iff the subgroup generated by [a] contains
[1] iff ∃ an integer k such that [a]k = [1] iff ∃ integers k and l such that ak + nl = 1.
Exercise Show that a positive integer is divisible by 3 iff the sum of its digits is
divisible by 3. Note that [10] = [1] in Z3 . (See the fifth exercise on page 18.)
Homomorphisms
Homomorphisms are functions between groups that commute with the group op-
erations. It follows that they honor identities and inverses. In this section we list
the basic properties. Properties 11), 12), and 13) show the connections between coset
groups and homomorphisms, and should be considered as the cornerstones of abstract
algebra.
1) f (e) = ē.
2) f (a−1 ) = f (a)−1 .
3) f is injective ⇔ ker(f ) = e.
If the equation f(x) = ḡ has a solution, then the set of all solutions is a coset of N = ker(f). This is a key fact
which is used routinely in topics such as systems of equations and linear
differential equations.
8) The composition of homomorphisms is a homomorphism, i.e., if h : Ḡ → G̿ is a
homomorphism, then h ◦ f : G → G̿ is a homomorphism.
[Commutative diagram: f : G → Ḡ factors as f = f̄ ◦ π, where π : G → G/H is the natural projection and f̄ : G/H → Ḡ.]
Thus defining a homomorphism on a quotient group is the same as defining a
homomorphism on the numerator which sends the denominator to ē. The
image of f̄ is the image of f and the kernel of f̄ is ker(f)/H. Thus if H = ker(f),
f̄ is injective, and thus G/H ≈ image(f).
Exercise We know every finite group of prime order is cyclic and thus abelian.
Show that every group of order four is abelian.
Exercise Suppose f : G → Ḡ is a homomorphism and g ∈ Ḡ. Show that the equation f(x) =
g has a solution iff g lies in the image of f. Now suppose this equation has a solution
and S ⊂ G is the set of all solutions. For which subgroup H of G is S an H-coset?
Permutations
Exercise Show that o(Sn ) = n!. Let X = {1, 2, ..., n}, Sn = S(X), and H =
{f ∈ Sn : (n)f = n}. Show H is a subgroup of Sn which is isomorphic to Sn−1 . Let
g be any permutation on X with (n)g = 1. Find g^{−1}Hg.
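A quick experimental check of the first two claims, representing a permutation f as a tuple with (i)f = f[i−1] (our encoding, not the text's):

```python
from itertools import permutations
from math import factorial

n = 4
Sn = list(permutations(range(1, n + 1)))   # all permutations of {1,..,n}
print(len(Sn), factorial(n))               # 24 24, i.e., o(Sn) = n!
assert len(Sn) == factorial(n)

# H = {f in Sn : (n)f = n} is a copy of S_{n-1} sitting inside S_n
H = [f for f in Sn if f[n - 1] == n]
print(len(H), factorial(n - 1))            # 6 6
```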
The next theorem shows that the symmetric groups are incredibly rich and com-
plex.
The Symmetric Groups Now let n ≥ 2 and let Sn be the group of all permu-
tations on {1, 2, ..., n}. The following definition shows that each element of Sn may
be represented by a matrix.
Listed here are seven basic properties of permutations. They are all easy except
4), which is rather delicate. Properties 8), 9), and 10) are listed solely for reference.
Theorem
2) Every permutation can be written uniquely (except for order) as the product of
disjoint cycles. (This is easy.)
4) The parity of the number of these transpositions is unique. This means that if
f is the product of p transpositions and also of q transpositions, then p is
even iff q is even. In this case, f is said to be an even permutation. In the other
case, f is an odd permutation.
5) A k-cycle is even (odd) iff k is odd (even). For example (1, 2, 3) = (1, 2)(1, 3) is
an even permutation.
If one of f and g is even and the other is odd, then g ◦ f is odd. If f and g are both even or both odd, then g ◦ f is even. (Obvious.)
The following parts are not included in this course. They are presented here merely
for reference.
10) Sn can be generated by two elements. In fact, {(1, 2), (1, 2, ..., n)} generates Sn .
(Of course there are subgroups of Sn which cannot be generated by two
elements).
Proof of 4) The proof presented here uses polynomials in n variables with real
coefficients. Since polynomials will not be introduced until Chapter 3, the student
may skip the proof until after that chapter. Suppose S = {1, ..., n}. If σ is a
permutation on S and p = p(x_1, ..., x_n) is a polynomial in n variables, define σ(p)
to be the polynomial p(x_{(1)σ}, ..., x_{(n)σ}). Thus if p = x_1 x_2^2 + x_1 x_3, and σ is the trans-
position (1, 2), then σ(p) = x_2 x_1^2 + x_2 x_3. Note that if σ_1 and σ_2 are permutations,
σ_2(σ_1(p)) = (σ_1 · σ_2)(p). Now let p be the product of all (x_i − x_j) where 1 ≤ i < j ≤ n.
(For example, if n = 3, p = (x_1 − x_2)(x_1 − x_3)(x_2 − x_3).) If σ is a permutation on S,
then for each 1 ≤ i, j ≤ n with i ≠ j, σ(p) has (x_i − x_j) or (x_j − x_i) as a factor. Thus
σ(p) = ±p. A careful examination shows that if σ_i is a transposition, σ_i(p) = −p.
Any permutation σ is the product of transpositions, σ = σ_1 · σ_2 ··· σ_t. Thus if σ(p) = p,
t must be even, and if σ(p) = −p, t must be odd.
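The proof suggests a simple computation: counting how many factors (x_i − x_j) get flipped gives the sign. A Python sketch (the function name is ours):

```python
def sign(sigma):
    """Sign of sigma via the polynomial p = prod_{i<j} (x_i - x_j):
    each inverted pair flips one factor, so sigma(p) = (-1)**inv * p.
    Here sigma is a tuple with (i)sigma = sigma[i-1]."""
    s = 1
    n = len(sigma)
    for i in range(n):
        for j in range(i + 1, n):
            if sigma[i] > sigma[j]:   # an inverted pair flips a factor
                s = -s
    return s

print(sign((2, 1, 3)))   # -1: a transposition is odd
print(sign((2, 3, 1)))   # +1: the 3-cycle (1,2,3) is even
```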
Exercise
1) Write the permutation
      ( 1 2 3 4 5 6 7 )
      ( 6 5 4 3 1 7 2 )
   as the product of disjoint cycles.
Write (1,5,6,7)(2,3,4)(3,7,1) as the product of disjoint cycles.
Write (3,7,1)(1,5,6,7)(2,3,4) as the product of disjoint cycles.
Which of these permutations are odd and which are even?
2) Suppose (a_1, . . . , a_k) and (c_1, . . . , c_ℓ) are disjoint cycles. What is the order of
their product?
3) Suppose σ ∈ Sn. Show that σ^{−1}(1, 2, 3)σ = ((1)σ, (2)σ, (3)σ). This shows
that conjugation by σ is just a type of relabeling. Also let τ = (4, 5, 6) and
find τ^{−1}(1, 2, 3, 4, 5)τ.
5) Let A ⊂ R2 be the square with vertices (−1, 1), (1, 1), (1, −1), and (−1, −1),
and G be the collection of all isometries of A onto itself. We know from a
previous exercise that G is a group with eight elements. It follows from Cayley’s
theorem that G is isomorphic to a subgroup of S8 . Show that G is isomorphic
to a subgroup of S4 .
Product of Groups
Exercise Let R be the reals under addition. Show that the addition in the
product R × R is just the usual addition in analytic geometry.
One nice thing about the product of groups is that it works fine for any finite
number, or even any infinite number. The next theorem is stated in full generality.
Chapter 3

Rings
Rings are additive abelian groups with a second operation called multiplication. The
connection between the two operations is provided by the distributive law. Assuming
the results of Chapter 2, this chapter flows smoothly. This is because ideals are also
normal subgroups and ring homomorphisms are also group homomorphisms. We do
not show that the polynomial ring F [x] is a unique factorization domain, although
with the material at hand, it would be easy to do. Also there is no mention of prime
or maximal ideals, because these concepts are unnecessary for our development of
linear algebra. These concepts are developed in the Appendix. A section on Boolean
rings is included because of their importance in logic and computer science.
Examples The basic commutative rings in mathematics are the integers Z, the
rational numbers Q, the real numbers R, and the complex numbers C. It will be shown
later that Zn , the integers mod n, has a natural multiplication under which it is a
commutative ring. Also if R is any commutative ring, we will define R[x1 , x2 , . . . , xn ],
a polynomial ring in n variables. Now suppose R is any ring, n ≥ 1, and Rn is the
collection of all n×n matrices over R. In the next chapter, operations of addition and
multiplication of matrices will be defined. Under these operations, Rn is a ring. This
is a basic example of a non-commutative ring. If n > 1, Rn is never commutative,
even if R is commutative.
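A two-line check of non-commutativity for 2 × 2 integer matrices (using numpy, which is our choice of tool):

```python
import numpy as np

# two 2x2 integer matrices that do not commute, so the matrix ring R_2
# is not commutative even though the ground ring Z is
A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])
print(A @ B)   # [[2 1] [1 1]]
print(B @ A)   # [[1 1] [1 2]]
assert not np.array_equal(A @ B, B @ A)
```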
The next two theorems show that ring multiplication behaves as you would wish
it to. They should be worked as exercises.
1) a · 0̄ = 0̄ · a = 0̄. Therefore 1̄ ≠ 0̄.
2) (−a) · b = a · (−b) = −(a · b).
Units
generally, if a_1, a_2, ..., a_n are units, then their product is a unit with
(a_1 · a_2 ··· a_n)^{−1} = a_n^{−1} · a_{n−1}^{−1} ··· a_1^{−1}. The set of all units of R forms a multiplicative group denoted by
R*. Finally if a is a unit, (−a) is a unit and (−a)^{−1} = −(a^{−1}).
Proof b = b · 1̄ = b · (a · c) = (b · a) · c = 1̄ · c = c.
Domains and Fields In order to define these two types of rings, we first consider
the concept of zero divisor.
Theorem Suppose n > 1. Then the following are equivalent.
1) Zn is a domain.
2) Zn is a field.
3) n is a prime.
Exercise List the units and their inverses for Z7 and Z12. Show that (Z7)* is
a cyclic group but (Z12)* is not. Show that in Z12 the equation x^2 = 1̄ has four
solutions. Finally show that if R is a domain, x^2 = 1̄ can have at most two solutions
in R.
Ideals in ring theory play a role analogous to normal subgroups in group theory.
Definition A subset I of a ring R is a left (right, 2-sided) ideal provided it is a subgroup
of the additive group R and if a ∈ R and b ∈ I, then a · b ∈ I (b · a ∈ I, both a · b and b · a ∈ I). The
word “ideal” means “2-sided ideal”. Of course, if R is commutative, every right or
left ideal is an ideal.
The following theorem is just an observation, but it is in some sense the beginning
of ring theory.
Homomorphisms
1) f is a group homomorphism,
2) f(1_R) = 1_{R̄}, and
3) if a, b ∈ R then f(a · b) = f(a) · f(b). (On the left, multiplication is in R, while on the right, multiplication is in R̄.)
[Commutative diagram: f : R → R̄ factors as f = f̄ ◦ π, where π : R → R/I is the natural projection and f̄ : R/I → R̄.]
Proof We know all this on the group level, and it is only necessary
to check that f¯ is a ring homomorphism, which is obvious.
Exercise Find a ring R with an ideal I and an element b such that b is not a unit
in R but (b + I) is a unit in R/I.
Exercise Now consider the case T = [0, 1] and R = R. Let A ⊂ R^{[0,1]} be the
collection of all C∞ functions, i.e., A = {f : [0, 1] → R : f has an infinite number of
derivatives}. Show A is a ring. Notice that much of the work has been done in the
previous exercise. It is only necessary to show that A is a subring of the ring R^{[0,1]}.
Polynomial Rings
Proof Suppose f and g are non-zero polynomials. Then deg(f) + deg(g) = deg(fg)
and thus fg is not 0̄. Another way to prove this theorem is to look at the bottom
terms instead of the top terms. Let a_i x^i and b_j x^j be the first non-zero terms of f and
g. Then a_i b_j x^{i+j} is the first non-zero term of fg.
Definition A domain T is a principal ideal domain (PID) if, given any ideal I,
∃ t ∈ T such that I = tT. Note that Z is a PID and any field is a PID.
This map h is called an evaluation map. The theorem says that adding two
polynomials in R[x ] and evaluating is the same as evaluating and then adding in C.
Also multiplying two polynomials in R[x ] and evaluating is the same as evaluating
and then multiplying in C. In street language the theorem says you are free to send
x wherever you wish and extend to a ring homomorphism on R[x].
Exercise Show that, if R is a domain, the units of R[x ] are just the units of R.
Thus if F is a field, the units of F [x ] are the non-zero constants. Show that [1] + [2]x
is a unit in Z4 [x ].
are just ±1. Thus the associates of f are all cf with c ≠ 0̄, while the associates of an
integer n are just ±n. Here is the basic theorem. (This theory is developed in full in
the Appendix under the topic of Euclidean domains.)
Theorem Suppose f ∈ F[x] is a non-constant polynomial. The following are equivalent.
1) F[x]/(f) is a domain.
2) F[x]/(f) is a field.
3) f is irreducible.
Theorem If R is a domain, R[x1 , x2 , ..., xn ] is a domain and its units are just the
units of R.
Product of Rings
The product of rings works fine, just as does the product of groups.
Suppose n and m are relatively prime integers with n, m > 1. There is an exercise
in Chapter 2 to show that Znm and Zn × Zm are isomorphic as groups. It will now be
shown that they are also isomorphic as rings. (For a useful and elegant generalization
of this theorem, see the Appendix.)
Theorem Suppose n_1, ..., n_t are integers, each n_i > 1, and (n_i, n_j) = 1 for all
i ≠ j. Let f_i : Z → Z_{n_i} be defined by f_i(a) = [a]. (Note that the bracket symbol is
used ambiguously.) Then the ring homomorphism f = (f_1, .., f_t) : Z → Z_{n_1} × ··· × Z_{n_t}
is surjective. Furthermore, the kernel of f is nZ, where n = n_1 n_2 ··· n_t. Thus Z_n and
Z_{n_1} × ··· × Z_{n_t} are isomorphic rings.
Proof We wish to show that the order of f (1) is n, and thus f (1) is a group
generator, and thus f is surjective. The element f (1)m = ([1], .., [1])m = ([m], .., [m])
is zero iff m is a multiple of each of n1 , .., nt . Since their least common multiple is n,
the order of f (1) is n. (See the fourth exercise on page 36.)
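A brute-force Python check of this theorem for n_1, n_2, n_3 = 3, 5, 7 (math.prod requires Python 3.8+; the function crt_map is our name for f):

```python
from math import gcd, prod

def crt_map(a, moduli):
    """The ring homomorphism f : Z -> Z_{n1} x ... x Z_{nt}."""
    return tuple(a % m for m in moduli)

moduli = [3, 5, 7]
n = prod(moduli)                   # 105
assert all(gcd(x, y) == 1 for x in moduli for y in moduli if x != y)

# f is injective on {0, ..., n-1}, hence surjective by counting
images = {crt_map(a, moduli) for a in range(n)}
print(len(images) == n)            # True: Z_105 ~ Z_3 x Z_5 x Z_7
```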
Characteristic
The following theorem is just an observation, but it shows that in ring theory, the
ring of integers is a “cornerstone”.
Boolean Rings
This section is not used elsewhere in this book. However it fits easily here, and is
included for reference.
3) If R is a domain, R ≈ Z2 .
Proof Suppose a ≠ 0̄. Then a · (1̄ − a) = a − a^2 = 0̄ and so a = 1̄.
4) The image of a Boolean ring is a Boolean ring. That is, if I is an ideal
of R with I 6= R, then every element of R/I is idempotent and thus R/I
is a Boolean ring. It follows from 3) that R/I is a domain iff R/I is a
field iff R/I ≈ Z2 . (In the language of Chapter 6, I is a prime ideal
iff I is a maximal ideal iff R/I ≈ Z2 ).
Suppose X is a non-void set and R is a non-void collection of subsets of X satisfying
1) a ∈ R ⇒ a′ ∈ R.
2) a, b ∈ R ⇒ (a ∩ b) ∈ R.
3) a, b ∈ R ⇒ (a ∪ b) ∈ R.
4) ∅ ∈ R and X ∈ R.
Theorem If 1) and 2) are satisfied, then 3) and 4) are satisfied. In this case, R
is called a Boolean algebra of sets.
Proof Suppose 1) and 2) are true, and a, b ∈ R. Then a ∪ b = (a′ ∩ b′)′ belongs to
R and so 3) is true. Since R is non-void, it contains some element a. Then ∅ = a ∩ a′
and X = a ∪ a′ belong to R, and so 4) is true.
Exercise Let X = {1, 2, ..., n} and let R be the Boolean ring of all subsets of
X. Note that o(R) = 2^n. Define f_i : R → Z2 by f_i(a) = [1] iff i ∈ a. Show each
f_i is a homomorphism and thus f = (f_1, ..., f_n) : R → Z2 × Z2 × ··· × Z2 is a ring
homomorphism. Show f is an isomorphism.
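A Python sketch of this exercise for n = 3. Here we take the ring operations on subsets to be symmetric difference (addition) and intersection (multiplication), the standard Boolean-ring structure; the extracted text does not show the book's definition, so this is our assumption:

```python
from itertools import chain, combinations

n = 3
X = frozenset(range(1, n + 1))
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, k) for k in range(n + 1))]

def f(a):
    """The map a -> (f1(a), ..., fn(a)), with fi(a) = 1 iff i is in a."""
    return tuple(1 if i in a else 0 for i in sorted(X))

# f carries symmetric difference (^) and intersection (&) to
# coordinatewise addition and multiplication mod 2
for a in subsets:
    for b in subsets:
        assert f(a ^ b) == tuple((x + y) % 2 for x, y in zip(f(a), f(b)))
        assert f(a & b) == tuple(x * y for x, y in zip(f(a), f(b)))
print(len(subsets))    # 8 = 2^3, and f is a bijection onto (Z2)^3
```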
Chapter 4

Matrices

We first consider matrices in full generality, i.e., over an arbitrary ring R. However,
after the first few pages, it will be assumed that R is commutative. The topics,
such as invertible matrices, transpose, elementary matrices, systems of equations,
and determinant, are all classical. The highlight of the chapter is the theorem that a
square matrix is a unit in the matrix ring iff its determinant is a unit in the ring.
This chapter concludes with the theorem that similar matrices have the same deter-
minant, trace, and characteristic polynomial. This will be used in the next chapter
to show that an endomorphism on a finitely generated vector space has a well defined
determinant, trace, and characteristic polynomial.
Definition Suppose m and n are positive integers. Let Rm,n be the collection of
all m × n matrices

A = (a_{i,j}) =  ( a_{1,1} ... a_{1,n} )
                 (  ...          ...  )
                 ( a_{m,1} ... a_{m,n} )

where each entry a_{i,j} ∈ R.
Addition of matrices To “add” two matrices, they must have the same number
of rows and the same number of columns, i.e., addition is a binary operation Rm,n ×
Rm,n → Rm,n . The addition is defined by (ai,j ) + (bi,j ) = (ai,j + bi,j ), i.e., the i, j term
of the sum is the sum of the i, j terms. The following theorem is just an observation.
Theorem Rm,n is an additive abelian group. Its “zero” is the matrix 0 = 0_{m,n}
all of whose terms are zero. Also −(a_{i,j}) = (−a_{i,j}). Furthermore, as additive groups,
Rm,n ≈ R^{mn}.
This theorem is entirely transparent. In the language of the next chapter, it merely
states that Rm,n is a right module over the ring R.
Definition The identity matrix In ∈ Rn is the square matrix whose diagonal terms
are 1 and whose off-diagonal terms are 0.
Transpose
Theorem
1) (A^t)^t = A
2) (A + B)^t = A^t + B^t
3) If c ∈ R, (Ac)^t = A^t c
4) (AB)^t = B^t A^t
5) If A ∈ Rn, then A is invertible iff A^t is invertible.
In this case (A^{−1})^t = (A^t)^{−1}.
Triangular Matrices
If A ∈ Rn , then A is upper (lower) triangular provided ai,j = 0 for all i > j (all
j > i). A is strictly upper (lower) triangular provided ai,j = 0 for all i ≥ j (all j ≥ i).
A is diagonal if it is upper and lower triangular, i.e., a_{i,j} = 0 for all i ≠ j. Note
that if A is upper (lower) triangular, then At is lower (upper) triangular.
Exercise Suppose A is the diagonal matrix with diagonal terms a_1, a_2, ..., a_n, B ∈ Rm,n,
and C ∈ Rn,p. Show that BA is obtained from B by multiplying column i of B by
a_i. Show AC is obtained from C by multiplying row i of C by a_i. Show A is a unit
in Rn iff each a_i is a unit in R.
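A numerical illustration of this exercise over R = Z, using numpy (our choice of tool; diag, arange, etc. are numpy functions):

```python
import numpy as np

a = [2, 3, 5]
A = np.diag(a)                     # the diagonal matrix diag(2, 3, 5)
B = np.arange(6).reshape(2, 3)     # B is 2x3, C is 3x2
C = np.arange(6).reshape(3, 2)

# BA multiplies column i of B by a_i; AC multiplies row i of C by a_i
print(B @ A)
print(A @ C)
assert np.array_equal(B @ A, B * np.array(a))            # columnwise
assert np.array_equal(A @ C, C * np.array(a)[:, None])   # rowwise
```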
Scalar matrices A scalar matrix is a diagonal matrix for which all the diagonal
terms are equal, i.e., a matrix of the form cIn . The map R → Rn which sends c to
cIn is an injective ring homomorphism, and thus we may consider R to be a subring
of Rn . Multiplying by a scalar is the same as multiplying by a scalar matrix, and
thus scalar matrices commute with everything, i.e., if B ∈ Rn , (cIn )B = cB = Bc =
B(cIn ). Recall we are assuming R is a commutative ring.
Type 2   B is the identity matrix with two rows interchanged, i.e., for some i ≠ j,
b_{i,j} = b_{j,i} = 1 and b_{i,i} = b_{j,j} = 0, with all other entries equal to those of the identity.

Type 3   B is the identity matrix with one additional entry, i.e., for some i ≠ j,
b_{i,j} = a_{i,j} where a_{i,j} is any element of R.
In type 1, all the off-diagonal elements are zero. In type 2, there are two non-zero
off-diagonal elements. In type 3, there is at most one non-zero off-diagonal element,
and it may be above or below the diagonal.
For 1), perform row and column operations on A to reach the desired form. This
shows the matrices B and C may be selected as products of elementary matrices.
Part 2) also follows from this procedure. For part 3), use only row operations. Notice
that if T is in row-echelon form, the number of non-zero rows is the rank of T .
Systems of Equations

Suppose A = (a_{i,j}) ∈ Rm,n and C = (c_1, ..., c_m)^t ∈ R^m = Rm,1. The system

a_{1,1}x_1 + ··· + a_{1,n}x_n = c_1
    ...
a_{m,1}x_1 + ··· + a_{m,n}x_n = c_m

of m equations in n unknowns can be written as one matrix equation in one unknown,
namely as (a_{i,j})(x_1, ..., x_n)^t = (c_1, ..., c_m)^t, or AX = C.
Theorem
The geometry of systems of equations over a field will not become really trans-
parent until the development of linear algebra in Chapter 5.
Determinants
For each σ, a_{1,σ(1)} · a_{2,σ(2)} ··· a_{n,σ(n)} contains exactly one factor from each row and
one factor from each column. Since R is commutative, we may rearrange the factors
so that the first comes from the first column, the second from the second column, etc.
This means that there is a permutation τ on (1, 2, . . . , n) such that a_{1,σ(1)} ··· a_{n,σ(n)} =
a_{τ(1),1} ··· a_{τ(n),n}. We wish to show that τ = σ^{−1} and thus sign(σ) = sign(τ). To
reduce the abstraction, suppose σ(2) = 5. Then the first expression will contain
the factor a_{2,5}. In the second expression, it will appear as a_{τ(5),5}, and so τ(5) = 2.
Anyway, τ is the inverse of σ and thus there are two ways to define determinant. It
follows that the determinant of a matrix is equal to the determinant of its transpose.

Theorem   |A| = Σ_{all σ} sign(σ) a_{1,σ(1)} · a_{2,σ(2)} ··· a_{n,σ(n)} = Σ_{all τ} sign(τ) a_{τ(1),1} · a_{τ(2),2} ··· a_{τ(n),n}.
Proof This is immediate from the definition of determinant and the distributive
law of multiplication in the ring R.
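A direct Python implementation of this definition of determinant, summing over all permutations; it is hopelessly slow for large n but makes the formula and the equality |A| = |A^t| concrete (the function name is ours):

```python
import numpy as np
from itertools import permutations

def det_by_permutations(A):
    """|A| = sum over sigma of sign(sigma) a_{1,sigma(1)} ... a_{n,sigma(n)}."""
    n = A.shape[0]
    total = 0
    for sigma in permutations(range(n)):
        sign, term = 1, 1
        for i in range(n):
            term *= A[i, sigma[i]]
            for j in range(i + 1, n):
                if sigma[i] > sigma[j]:   # count inversions for the sign
                    sign = -sign
        total += sign * term
    return total

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
print(det_by_permutations(A))      # -3
print(det_by_permutations(A.T))    # -3, illustrating |A| = |A^t|
```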
Exercise Rewrite the four preceding theorems using rows instead of columns.
The following theorem is just a summary of some of the work done so far.
Theorem For any 1 ≤ i ≤ n, |A| = a_{i,1}C_{i,1} + a_{i,2}C_{i,2} + ··· + a_{i,n}C_{i,n}. For any
1 ≤ j ≤ n, |A| = a_{1,j}C_{1,j} + a_{2,j}C_{2,j} + ··· + a_{n,j}C_{n,j}. Thus if any row or any column is
zero, the determinant is zero.
Exercise Let A be the 3 × 3 matrix with rows (a_1, a_2, a_3), (b_1, b_2, b_3), and
(c_1, c_2, c_3). The determinant of A is the sum of six terms. Write out the determinant
of A expanding by the first column and also expanding by the second row.
The following remarkable theorem takes some work to prove. We assume it here
without proof. (For the proof, see page 130 of the Appendix.)
One of the major goals of this chapter is to prove the converse of the preceding
corollary.
We are now ready for one of the most beautiful and useful theorems in all of
mathematics.
Exercise Show that any right inverse of A is also a left inverse. That is, suppose
A, B ∈ Rn and AB = I. Show A is invertible with A−1 = B, and thus BA = I.
Similarity
Theorem B is similar to B.
Chapter 4 Matrices 65
Proof The first part of the theorem is immediate, and the second part is a special
case of the previous theorem.
Theorem If A and B are similar, then they have the same characteristic polyno-
mials.
Note This exercise is a special case of a more general theorem. A square matrix
over a field is nilpotent iff all its characteristic roots are 0̄ iff it is similar to a strictly
upper triangular matrix. This remarkable result cannot be proved by matrix theory
alone, but depends on linear algebra (see pages 93 and 98).
Chapter 5
Linear Algebra
The exalted position held by linear algebra is based upon the subject’s ubiquitous
utility and ease of application. The basic theory is developed here in full generality,
i.e., modules are defined over an arbitrary ring R and not just over a field. The
elementary facts about cosets, quotients, and homomorphisms follow the same pat-
tern as in the chapters on groups and rings. We give a simple proof that if R is a
commutative ring and f : Rn → Rn is a surjective R-module homomorphism, then
f is an isomorphism. This shows that finitely generated free R-modules have a well
defined dimension, and simplifies much of the development of linear algebra. It is in
this chapter that the concepts about functions, solutions of equations, matrices, and
generating sets come together in one unified theory.
After the general theory, we restrict our attention to vector spaces, i.e., modules
over a field. The key theorem is that any vector space V has a free basis, and thus
if V is finitely generated, it has a well defined dimension, and incredible as it may
seem, this single integer determines V up to isomorphism. Also any endomorphism
f : V → V may be represented by a matrix, and any change of basis corresponds to
conjugation of that matrix. One of the goals in linear algebra is to select a basis so
that the matrix representing f has a simple form. For example, if f is not injective,
then f may be represented by a matrix whose first column is zero. As another
example, if f is nilpotent, then f may be represented by a strictly upper triangular
matrix. The theorem on Jordan canonical form is not proved in this chapter, and
should not be considered part of this chapter. It is stated here in full generality only
for reference and completeness. The proof is given in the Appendix. This chapter
concludes with the study of real inner product spaces, and with the beautiful theory
relating orthogonal matrices and symmetric matrices.
Definition Suppose M is an additive abelian group. M is a right R-module provided there is a scalar multiplication M × R → M, written (a, r) → ar, satisfying

(a_1 + a_2)r = a_1r + a_2r
a(r_1 + r_2) = ar_1 + ar_2
a(r_1 · r_2) = (ar_1)r_2
a · 1̄ = a

for all a, a_1, a_2 ∈ M and r, r_1, r_2 ∈ R.
Convention Unless otherwise stated, the word “R-module” (or sometimes just
“module”) will mean “right R-module”.
Proof We know from page 22 that versions of 1) and 2) hold for subgroups, and
in particular for subgroups of additive abelian groups. To finish the proofs it is only
necessary to check scalar multiplication, which is immediate. Also the proof of 3) is
immediate. Note that if N1 and N2 are submodules of M , N1 + N2 is the smallest
submodule of M containing N1 ∪ N2 .
Homomorphisms
of this work has already been done in the chapter on groups (see page 28).
Theorem
Abelian groups are Z-modules On page 21, it is shown that any additive
group M admits a scalar multiplication by integers, and if M is abelian, the properties
are satisfied to make M a Z-module. Note that this is the only way M can be a Z-
module, because a1 = a, a2 = a + a, etc. Furthermore, if f : M → N is a group
homomorphism of abelian groups, then f is also a Z-module homomorphism.
Summary Additive abelian groups are “the same things” as Z-modules. While
group theory in general is quite separate from linear algebra, the study of additive
abelian groups is a special case of the study of R-modules.
Homomorphisms on R
evaluation at 1̄ gives a bijection from HomR(R, M) to M, and this bijection is clearly
a group isomorphism. If R is commutative, it is an isomorphism of R-modules.
In the case M = R, the above theorem states that multiplication on left by some
m ∈ R defines a right R-module homomorphism from R to R, and every module
homomorphism is of this form. The element m should be thought of as a 1 × 1
matrix. We now consider the case where the domain is Rn .
Homomorphisms on Rn   Define ei ∈ Rn to be the column vector with 1̄ in coordinate i
and 0̄ in the other coordinates. Note that any column vector (r_1, . . . , r_n)^t
can be written uniquely as e_1r_1 + ··· + e_nr_n. The sequence {e_1, .., e_n} is called the
canonical free basis or standard basis for Rn.
Proof The proof is straightforward. Note this theorem gives a bijection from
HomR(Rn, M) to M^n = M × M × ··· × M and this bijection is a group isomorphism. We
will see later that the product M^n is an R-module with scalar multiplication defined
by (m_1, m_2, .., m_n)r = (m_1r, m_2r, .., m_nr). If R is commutative so that HomR(Rn, M)
is an R-module, this theorem gives an R-module isomorphism from HomR(Rn, M) to
M^n.
This theorem reveals some of the great simplicity of linear algebra. It does not
matter how complicated the ring R is, or which R-module M is selected. Any R-
module homomorphism from Rn to M is determined by its values on the basis, and
any function from that basis to M extends uniquely to a homomorphism from Rn to
M.
Now let’s examine the special case M = Rm and show HomR (Rn , Rm ) ≈ Rm,n .
Even though this follows easily from the previous theorem and properties of ma-
trices, it is one of the great classical facts of linear algebra. Matrices over R give
R-module homomorphisms! Furthermore, addition of matrices corresponds to addi-
tion of homomorphisms, and multiplication of matrices corresponds to composition
of homomorphisms. These properties are made explicit in the next two theorems.
Proof This is just the associative law of matrix multiplication, C(AB) = (CA)B.
The previous theorem reveals where matrix multiplication comes from. It is the
matrix which represents the composition of the functions. In the case where the
domain and range are the same, we have the following elegant corollary.
This corollary shows one way non-commutative rings arise, namely as endomor-
phism rings. Even if R is commutative, Rn is never commutative unless n = 1.
We now return to the general theory of modules (over some given ring R).
After seeing quotient groups and quotient rings, quotient modules go through
without a hitch. As before, R is a ring and module means R-module.
The relationship between quotients and homomorphisms for modules is the same
as for groups and rings, as shown by the next theorem.
[Commutative diagram: f : M → M̄ factors as f = f̄ ◦ π, where π : M → M/N is the natural projection and f̄ : M/N → M̄.]
Proof On the group level this is all known from Chapter 2 (see page 29). It is
only necessary to check that f¯ is a module homomorphism, and this is immediate.
1) M =Z K = 3Z L = 5Z K ∩ L = 15Z K +L=Z
K/K ∩ L = 3Z/15Z ≈ Z/5Z = (K + L)/L
2) M =Z K = 6Z L = 3Z (K ⊂ L)
(M/K)/(L/K) = (Z/6Z)/(3Z/6Z) ≈ Z/3Z = M/L
Infinite products work fine for modules, just as they do for groups and rings.
This is stated below in full generality, although the student should think of the finite
case. In the finite case something important holds for modules that does not hold
for non-abelian groups or rings – namely, the finite product is also a coproduct. This
makes the structure of module homomorphisms much more simple. For the finite
case we may use either the product or sum notation, i.e., M1 × M2 × · · ×Mn =
M1 ⊕ M2 ⊕ · · ⊕Mn .
For T = {1, 2} the product and sum properties are displayed in the following
commutative diagrams.
[Commutative diagrams: on the left, the product property — given f_1 : M → M_1 and f_2 : M → M_2, the map f : M → M_1 ⊕ M_2 commutes with the projections π_1 and π_2; on the right, the coproduct (sum) property — given g_1 : M_1 → M and g_2 : M_2 → M, the map g : M_1 ⊕ M_2 → M commutes with the injections i_1 and i_2.]
Theorem For finite T , the 1-1 correspondences in the above theorems actually
produce group isomorphisms. If R is commutative, they give isomorphisms of R-
modules.
Proof Let’s look at this theorem for products with n = 2. All it says is that if f =
(f1 , f2 ) and h = (h1 , h2 ), then f + h = (f1 + h1 , f2 + h2 ). If R is commutative, so that
the objects are R-modules and not merely additive groups, then the isomorphisms
are module isomorphisms. This says merely that f r = (f1 , f2 )r = (f1 r, f2 r).
Summands
One basic question in algebra is “When does a module split as the sum of two
modules?”. Before defining summand, here are two theorems for background.
This is exactly what you would expect, and the next theorem is almost as intuitive.
1) Is 2Z a summand of Z?
2) Is 4Z8 a summand of Z8 ?
3) Is 3Z12 a summand of Z12 ?
4) Suppose n, m > 1. When is nZmn a summand of Zmn ?
Theorem For each t ∈ T, let R_t = R_R and for each c ∈ T, let e_c ∈ ⊕_{t∈T} R_t
be e_c = {r_t} where r_c = 1̄ and r_t = 0̄ if t ≠ c. Then {e_c}_{c∈T} is a basis for ⊕R_t called
the canonical basis or standard basis.
Recall that we have already had the preceding theorem in the case S is the canon-
ical basis for M = Rn . The next theorem is so basic in linear algebra that it is used
without comment. Although the proof is easy, it should be worked carefully.
It will now be shown that any free R-module is isomorphic to one of the
canonical free R-modules.
M
Theorem An R-module N is free iff ∃ an index set T such that N ≈ Rt . In
t∈T
particular, N has a finite free basis of n elements iff N ≈ Rn .
Note that 2) here is essentially the first exercise for the case n = 1. That is, if
f : R → R is a surjective R-module homomorphism, then f is an isomorphism.
The theorem stated below gives a summary of results we have already had. It
shows that certain concepts about matrices, linear independence, injective homo-
morphisms, and solutions of equations, are all the same — they are merely stated in
different language. Suppose A ∈ Rm,n and f : Rn → Rm is the homomorphism associated
with A, i.e., f(B) = AB. Let v_1, .., v_n ∈ Rm be the columns of A, i.e., f(e_i) = v_i
= column i of A. Let λ = (λ_1, .., λ_n)^t represent an element of Rn and C = (c_1, .., c_m)^t
represent an element of Rm.
Theorem
We now look at the preceding theorem in the special case where n = m and R
is a commutative ring. So far in this chapter we have just been cataloging. Now we
prove something more substantial, namely that if f : Rn → Rn is surjective, then f
is injective. Later on we will prove that if R is a field, injective implies surjective.
1) f is an automorphism.
5) f is surjective.
Proof Suppose 5) is true and show 2). Since f is onto, ∃ u1 , ..., un ∈ Rn with
f (ui ) = ei . Let g : Rn → Rn be the homomorphism satisfying g(ei ) = ui . Then f ◦ g
is the identity. Now g comes from some matrix D and thus AD = I. This shows that
A has a right inverse and is thus invertible. Recall that the proof of this fact uses
determinant, which requires that R be commutative.
We already know the first three properties are equivalent, 4) and 5) are equivalent,
and 3) implies 4). Thus the first five are equivalent. Furthermore, applying this
result to A^t shows that the last three properties are equivalent to each other. Since
|A| = |A^t|, 2) and 2^t) are equivalent.
Uniqueness of Dimension
Proof By the previous lemma, any basis for M must be finite. M has a basis of
n elements iff M ≈ Rn . The result follows because Rn ≈ Rm iff n = m.
Change of Basis
Before changing basis, we recall what a basis is. Previously we defined generat-
ing, independence, and basis for sequences, not for collections. For the concept of
generating it matters not whether you use sequences or collections, but for indepen-
denceÃand basis,!you must use sequences. Consider the columns of the real matrix
2 3 2
A= . If we consider the column vectors of A as a collection, there are
1 4 1
only two of them, yet we certainly don’t wish to say the columns of A form a basis
for R2 . In a set or collection, there is no concept of repetition. In order to make
sense, we must consider the columns of A as an ordered triple of vectors. When we
originally defined basis, we could have called it “indexed free basis” or even “ordered
free basis”.
Two sequences cannot begin to be equal unless they have the same index set.
We will follow the classical convention that an index set with n elements must be
{1, 2, .., n}, and thus a basis for M with n elements is a sequence S = {u1 , .., un }
or if you wish, S = (u1 , .., un ) ∈ M n . Suppose M is an R-module with a basis of
n elements. Recall there is a bijection α : HomR (Rn , M ) → M n defined by α(h) =
(h(e1 ), .., h(en )). Now h : Rn → M is an isomorphism iff α(h) is a basis for M .
The point of all this is that selecting a basis of n elements for M is the same as
selecting an isomorphism from Rn to M , and from this viewpoint, change of basis
can be displayed by the diagram below.
Proof The proof follows by seeing that the following diagram is commutative.
[Commutative diagram: the matrix B acting on Rn across the top, the matrix A acting on Rn across the bottom, f : M → M in the middle, and vertical isomorphisms Rn ≈ M determined by e_i → v_i and e_i → u_i, with every square commuting.]
The diagram also explains what it means for A to be the matrix of f w.r.t. the
basis S. Let h : Rn → M be the isomorphism with h(e_i) = u_i for 1 ≤ i ≤ n. Then
the matrix A ∈ Rn is the one determined by the endomorphism h^{−1} ◦ f ◦ h : Rn → Rn.
In other words, column i of A is h^{−1}(f(h(e_i))).
An important special case is where M = Rn and f : Rn → Rn is given by some
matrix W. Then h is given by the matrix U whose ith column is u_i, and A =
U^{−1}WU. In other words, W represents f w.r.t. the standard basis, and U^{−1}WU
represents f w.r.t. the basis {u_1, .., u_n}.
is the matrix of f w.r.t. some basis. By the previous theorem, all three are well
defined, i.e., do not depend upon the choice of basis.
Exercise Let R = Z and f : Z^2 → Z^2 be defined by

f(D) = ( 3  3 ) D
       ( 0 −1 )

Find the matrix of f w.r.t. the basis {(2, 1)^t, (3, 1)^t}.
Exercise Let L ⊂ R² be the line L = {(r, 2r) : r ∈ R}. Show there is one and only one homomorphism f : R² → R² which is the identity on L and has f(−1, 1) = (1, −1). Find the matrix A ∈ R₂ which represents f with respect to the basis {(1, 2), (−1, 1)}. Find the determinant, trace, and characteristic polynomial of f. Also find the matrix B ∈ R₂ which represents f with respect to the standard basis. Finally, find an invertible matrix C ∈ R₂ with B = C⁻¹AC.
Vector Spaces
So far in this chapter we have been developing the theory of linear algebra in
general. The previous theorem, for example, holds for any commutative ring R, but
it must be assumed that the module M is free. Endomorphisms in general will not
have a determinant or trace. We now focus on the case where R is a field, and
show that in this case, every R-module is free. Thus any finitely generated R-module
will have a well defined dimension, and endomorphisms on it will have well defined
determinant, trace, and characteristic polynomial.
In this section, F is a field. F -modules may also be called vector spaces and
F -module homomorphisms may also be called linear transformations.
After so many routine theorems, it is nice to have one with real power. It not
only says any finite independent sequence can be extended to a basis, but it can be
extended to a basis inside any finite generating set containing it. This is one of the
theorems that makes linear algebra tick. The key hypothesis here is that the ring
is a field. If R = Z, then Z is a free module over itself, and the element 2 of Z is
independent. However it certainly cannot be extended to a basis. Also the finiteness
hypothesis in this theorem is only for convenience, as will be seen momentarily.
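For example, in R² the sequence {(1, 1)} is independent and is contained in the finite generating set {(1, 1), (1, 0), (0, 1)}. The theorem extends it to a basis inside that set, e.g., {(1, 1), (1, 0)}.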
Since F is a commutative ring, any two bases of M must have the same number
of elements, and thus the dimension of M is well defined.
1) M ≈ F n iff dim(M ) = n.
2) M ≈ N iff dim(M ) = dim(N ).
3) F m ≈ F n iff n = m.
4) dim(M ⊕ N ) = dim(M ) + dim(N ).
Exercise Suppose R is a domain with the property that, for R-modules, every
submodule is a summand. Show R is a field.
Exercise Find a free Z-module which has a generating set containing no basis.
This theorem is just a summary of what we have for square matrices over fields.
Proof Except for 1) and 1ᵗ), this theorem holds for any commutative ring R.
(See the section Relating these concepts to square matrices.) Parts 1) and 1t )
follow from the preceding section.
f(C) = AC. Show the following are equivalent. (See the exercise on page 79.)
1) f : Zⁿ → Zⁿ is injective.
3) |A| ≠ 0.
4) f̄ : Rⁿ → Rⁿ is injective.
Theorem If A ∈ Fm,n , the row rank and the column rank of A are equal. This
number is called the rank of A and is ≤ min{m, n}.
Proof By the theorem above, elementary row and column operations change neither the row rank nor the column rank. By row and column operations, A may be changed to a matrix H where h1,1 = ··· = ht,t = 1 and all other entries are 0 (see the first exercise on page 59). Thus row rank = t = column rank.
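For example, the real matrix \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} may be changed by row and column operations to \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, and so its rank is 1.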
Exercise Suppose A has rank t. Show that it is possible to select t rows and t
columns of A such that the determined t × t matrix is invertible. Show that the rank
of A is the largest integer t such that this is possible.
Exercise Suppose A ∈ Fm,n has rank t. What is the dimension of the solution set of AX = 0?
Proof If |A| = 0, image(f) has dimension < n and thus f(V) has n-dimensional volume 0. If |A| ≠ 0 then A is the product of elementary matrices (see page 59) and for elementary matrices, the theorem is obvious. The result follows because the determinant of the composition is the product of the determinants.
We first wish to show that these 4 statements are equivalent. We know that 1) and 2) are equivalent, and also that 3) and 4) are equivalent, because change of basis corresponds to conjugation of the matrix. Now suppose 2) is true and show 4) is true. Suppose |A| = 0. Then |Aᵗ| = 0 and by 2) ∃ C such that C⁻¹AᵗC has first row zero. Thus (C⁻¹AᵗC)ᵗ = CᵗA(Cᵗ)⁻¹ has first column zero. The result follows by defining D = (Cᵗ)⁻¹. In the same way, 4) implies 2).
Proof of the theorem We are free to select any of the 4 parts, and we select part 3). Since |f| = 0, f is not injective and ∃ a non-zero v1 ∈ V with f(v1) = 0. Extend v1 to a basis {v1, .., vn}. Then the matrix of f w.r.t. this basis has first column zero.
Exercise Let A = \begin{pmatrix} 3\pi & 6 \\ 2\pi & 4 \end{pmatrix}. Find an invertible matrix C ∈ R₂ so that C⁻¹AC has first row zero. Also let A = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 3 & 4 \\ 2 & 1 & 4 \end{pmatrix} and find an invertible matrix D ∈ R₃ so that D⁻¹AD has first column zero.
Nilpotent Homomorphisms
In this section it is shown that an endomorphism f is nilpotent iff all of its characteristic roots are 0 iff it may be represented by a strictly upper triangular matrix.

Note To obtain a matrix which is strictly lower triangular, reverse the order of the basis.
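For example, N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} is strictly upper triangular, N² = 0, and CPN(x) = x², so both characteristic roots are 0.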
Exercise Use the transpose principle to write 3 other versions of this theorem.
Eigenvalues
The nicest thing you can say about a matrix is that it is similar to a diagonal
matrix. Here is one case where that happens.
Proof Suppose {v1, .., vk} is dependent. Suppose t is the smallest positive integer such that {v1, .., vt} is dependent, and v1r1 + ··· + vtrt = 0 is a non-trivial linear combination. Note that at least two of the coefficients must be non-zero. Now (f − λt)(v1r1 + ··· + vtrt) = v1(λ1 − λt)r1 + ··· + vt−1(λt−1 − λt)rt−1 + 0 = 0 is a shorter non-trivial linear combination. This is a contradiction and proves 1). Part 2) follows from 1) because dim(V) = n.
Exercise Let A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} ∈ R₂. Find an invertible C ∈ C₂ such that C⁻¹AC is diagonal. Show that C cannot be selected in R₂. Find the characteristic polynomial of A.
We could continue and finally give an ad hoc proof of the Jordan canonical form,
but in this chapter we prefer to press on to inner product spaces. The Jordan form
will be developed in Chapter 6 as part of the general theory of finitely generated
modules over Euclidean domains. The next section is included only as a convenient
reference.
This section should be just skimmed or omitted entirely. It is unnecessary for the
rest of this chapter, and is not properly part of the flow of the chapter. The basic
facts of Jordan form are summarized here simply for reference.
The statement that a square matrix B over a field F is a Jordan block means that ∃ λ ∈ F such that B is a lower triangular matrix of the form

B = \begin{pmatrix} \lambda & & & 0 \\ 1 & \lambda & & \\ & \ddots & \ddots & \\ 0 & & 1 & \lambda \end{pmatrix}

B gives a homomorphism g : Fᵐ → Fᵐ with g(em) = λem and g(ei) = ei+1 + λei for 1 ≤ i < m. Note that CPB(x) = (x − λ)ᵐ and so λ is the only eigenvalue of B, and B satisfies its characteristic polynomial, i.e., CPB(B) = 0.
eigenvalue λi. Then n1 + ··· + nt = n and CPD(x) = (x − λ1)^{n1} ··· (x − λt)^{nt}. Note that a diagonal matrix is a special case of Jordan form. D is a diagonal matrix iff each ni = 1, i.e., iff each Jordan block is a 1 × 1 matrix.
Theorem Jordan form (when it exists) is unique. This means that if A and D are
similar matrices in Jordan form, they have the same Jordan blocks, except possibly
in different order.
The reader should use the transpose principle to write three other versions of the
first theorem. Also note that we know one special case of this theorem, namely that
if A has n distinct eigenvalues in F , then A is similar to a diagonal matrix. Later on
it will be shown that if A is a symmetric real matrix, then A is similar to a diagonal
matrix.
Let’s look at the classical case A ∈ Rn . The complex numbers are algebraically
closed. This means that CPA (x) will factor completely in C[x], and thus ∃ C ∈ Cn
with C −1 AC in Jordan form. C may be selected to be in Rn iff all the eigenvalues of
A are real.
Exercise Find all real matrices in Jordan form that have the following charac-
teristic polynomials: x(x − 2), (x − 2)2 , (x − 2)(x − 3)(x − 4), (x − 2)(x − 3)2 ,
(x − 2)2 (x − 3)2 , (x − 2)(x − 3)3 .
The two most important fields for mathematics and science in general are the
real numbers and the complex numbers. Finitely generated vector spaces over R or
C support inner products and are thus geometric as well as algebraic objects. The
theories for the real and complex cases are quite similar, and both could have been
treated here. However, for simplicity, attention is restricted to the case F = R.
In the remainder of this chapter, the power and elegance of linear algebra become
transparent for all to see.
Definition Suppose V is a real vector space. An inner product (or dot product)
on V is a function V × V → R which sends (u, v) to u · v and satisfies
Proof of 2) Let a = √(v · v) and b = √(u · u). If a or b is 0, the result is obvious. Suppose neither a nor b is 0. Now 0 ≤ (ua ± vb) · (ua ± vb) = (u · u)a² ± 2ab(u · v) + (v · v)b² = b²a² ± 2ab(u · v) + a²b². Dividing by 2ab yields 0 ≤ ab ± (u · v), or |u · v| ≤ ab.
Theorem Suppose V has an inner product. Define the norm or length of a vector v by ‖v‖ = √(v · v). The following properties hold.
1) ‖v‖ = 0 iff v = 0.
2) ‖vr‖ = ‖v‖|r|.
Theorem Suppose V is a real vector space with a basis S = {v1, .., vn}. Then there is a unique inner product on V which makes S an orthonormal basis. It is given by the formula (v1r1 + ··· + vnrn) · (v1s1 + ··· + vnsn) = r1s1 + ··· + rnsn.

Theorem Suppose u and v are non-zero vectors in Rⁿ. Then u · v = ‖u‖‖v‖ cos Θ, where Θ is the angle between u and v.

Proof Let u = (r1, .., rn) and v = (s1, .., sn). By the law of cosines, ‖u − v‖² = ‖u‖² + ‖v‖² − 2‖u‖‖v‖ cos Θ. So (r1 − s1)² + ··· + (rn − sn)² = r1² + ··· + rn² + s1² + ··· + sn² − 2‖u‖‖v‖ cos Θ, and cancelling the squared terms gives u · v = r1s1 + ··· + rnsn = ‖u‖‖v‖ cos Θ.
Gram-Schmidt orthonormalization
wk+1 = w/‖w‖. Then by the previous theorem, {w1, .., wk+1} is an orthonormal basis
for the subspace generated by {w1 , .., wk , vk+1 }. In this manner an orthonormal basis
for W is constructed.
Now suppose W has dimension n and {w1 , .., wk } is an orthonormal sequence in
W . Since this sequence is independent, it extends to a basis {w1 , .., wk , vk+1 , .., vn }.
The process above may be used to modify this to an orthonormal basis {w1 , .., wn }.
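For example, apply the process to the basis v1 = (1, 1), v2 = (0, 1) of R². Then w1 = v1/‖v1‖ = (1/√2)(1, 1), w = v2 − (v2 · w1)w1 = (0, 1) − (1/2)(1, 1) = (−1/2, 1/2), and w2 = w/‖w‖ = (1/√2)(−1, 1), giving the orthonormal basis {w1, w2}.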
Orthogonal Matrices
As noted earlier, linear algebra is not so much the study of vector spaces as it is
the study of endomorphisms. We now wish to study isometries from Rn to Rn .
We know from a theorem on page 90 that an endomorphism preserves volume iff
its determinant is ±1. Isometries preserve inner product, and thus preserve angle and
distance, and so certainly preserve volume.
Proof A left inverse of a matrix is also a right inverse (see the exercise on
page 64). Thus 1) and 2) are equivalent because each of them says A is invert-
ible with A−1 = At . Now {e1 , .., en } is the canonical orthonormal basis for Rn , and
f (ei ) is column i of A. Thus by the previous section, 1) and 3) are equivalent.
Theorem
1) If A is orthogonal, |A| = ±1.

Proof Part 1) follows from |A|² = |A||Aᵗ| = |I| = 1. Part 2) is immediate, because isometries clearly form a subgroup of the multiplicative group of all automorphisms. For part 3) assume f : Rⁿ → Rⁿ is an isometry. Then ‖u − v‖² = (u − v) · (u − v) = f(u − v) · f(u − v) = ‖f(u − v)‖² = ‖f(u) − f(v)‖². The proof that f preserves angles follows from u · v = ‖u‖‖v‖ cos Θ.
Exercise Show that if A ∈ O(2) has |A| = 1, then A = \begin{pmatrix} \cos Θ & -\sin Θ \\ \sin Θ & \cos Θ \end{pmatrix} for some number Θ. (See the exercise on page 56.)
Exercise (topology) Let Rn ≈ R^{n²} have its usual metric topology. This means a sequence of matrices {Ai} converges to A iff it converges coordinatewise. Show Gln(R) is an open subset and O(n) is closed and compact. Let h : Gln(R) → O(n) be defined by Gram-Schmidt. Show H : Gln(R) × [0, 1] → Gln(R) defined by H(A, t) = (1 − t)A + th(A) is a deformation retract of Gln(R) to O(n).
We continue with the case F = R. Our goals are to prove that, if A is a symmetric
matrix, all of its eigenvalues are real and that ∃ an orthogonal matrix C such that
C −1 AC is diagonal. As background, we first note that symmetric is the same as
self-adjoint.
The next theorem has geometric and physical implications, but for us, just the
incredibility of it all will suffice.
Now suppose by induction that the theorem is true for symmetric matrices in Rₜ for t < n, and suppose A is a symmetric n × n matrix. Denote by λ1, .., λk the distinct eigenvalues of A, k ≤ n. If k = n, the proof is immediate, because then there is a basis of eigenvectors of length 1, and they must form an orthonormal basis. So suppose k < n. Let v1, .., vk be eigenvectors for λ1, .., λk with each ‖vi‖ = 1. They may be extended to an orthonormal basis v1, .., vn. With respect to this basis, the transformation A is represented by

\begin{pmatrix} \lambda_1 & & & \\ & \ddots & & B \\ & & \lambda_k & \\ & 0 & & D \end{pmatrix}

Since this is a symmetric matrix, B = 0 and D is a symmetric matrix of smaller size. By induction, ∃ an orthogonal C such that C⁻¹DC is diagonal. Thus conjugating by \begin{pmatrix} I & 0 \\ 0 & C \end{pmatrix} makes the entire matrix diagonal.
Exercise Let A = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix}. Find an orthogonal C such that C⁻¹AC is diagonal. Do the same for A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.
The five previous chapters were designed for a year undergraduate course in algebra.
In this appendix, enough material is added to form a basic first year graduate course.
Two of the main goals are to characterize finitely generated abelian groups and to
prove the Jordan canonical form. The style is the same as before, i.e., everything is
right down to the nub. The organization is mostly a linearly ordered sequence except
for the last two sections on determinants and dual spaces. These are independent
sections added on at the end.
Suppose R is a commutative ring. An R-module M is said to be cyclic if it can
be generated by one element, i.e., M ≈ R/I where I is an ideal of R. The basic
theorem of this chapter is that if R is a Euclidean domain and M is a finitely generated
R-module, then M is the sum of cyclic modules. Thus if M is torsion free, it is a
free R-module. Since Z is a Euclidean domain, finitely generated abelian groups
are the sums of cyclic groups.
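For example, the cyclic group Z₁₂ ≈ Z/12Z splits further: by the Chinese Remainder Theorem, Z₁₂ ≈ Z₄ ⊕ Z₃ as Z-modules.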
Now suppose F is a field and V is a finitely generated F -module. If T : V → V is
a linear transformation, then V becomes an F [x]-module by defining vx = T (v). Now
F [x] is a Euclidean domain and so VF [x] is the sum of cyclic modules. This classical
and very powerful technique allows an easy proof of the canonical forms. There is a
basis for V so that the matrix representing T is in Rational canonical form. If the
characteristic polynomial of T factors into the product of linear polynomials, then
there is a basis for V so that the matrix representing T is in Jordan canonical form.
This always holds if F = C. A matrix in Jordan form is a lower triangular matrix
with the eigenvalues of T displayed on the diagonal, so this is a powerful concept.
In the chapter on matrices, it is stated without proof that the determinant of the
product is the product of the determinants. A proof of this, which depends upon the
classification of certain types of alternating multilinear forms, is given in this chapter.
The final section gives the fundamentals of dual spaces.
On page 50 in the chapter on rings, the Chinese Remainder Theorem was proved
for the integers. Here it is presented in full generality. Surprisingly, the theorem holds
even for non-commutative rings.
Definition Suppose R is a ring and A1 , A2 , ..., Am are ideals of R. Then the sum
A1 + A2 + · · · + Am is the set of all a1 + a2 + · · · + am with ai ∈ Ai . The product
A1 A2 · · · Am is the set of all finite sums of elements a1 a2 · · · am with ai ∈ Ai . Note
that the sum and product of ideals are ideals and A1 A2 · · · Am ⊂ (A1 ∩ A2 ∩ · · · ∩ Am ).
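For example, in R = Z with A = 2Z and B = 3Z, A + B = Z, AB = 6Z, and A ∩ B = 6Z. If instead A = B = 2Z, then AB = 4Z is properly contained in A ∩ B = 2Z.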
Theorem If A and B are ideals of a ring R, then the following are equivalent.
Note To properly appreciate this proof, the student should work the exercise on
group theory at the end of this section.
Examples If R is a domain, the associates of 1 are the units of R, while the only associate of 0 is 0 itself. If n ∈ Z is not zero, then its associates are n and −n. If F is a field and g ∈ F[x] is a non-zero polynomial, then the associates of g are all cg where c is a non-zero constant.
The following theorem is elementary, but it shows how associates fit into the
scheme of things. An element a divides b (a|b) if ∃! c ∈ R with ac = b.
1) a ∼ b.
2) a|b and b|a.
3) aR = bR.
Parts 1) and 3) above show there is a bijection from the associate classes of R to
the principal ideals of R. Thus if R is a PID, there is a bijection from the associate
classes of R to the ideals of R. If an element generates a non-zero prime ideal, it is
called a prime element.
Note If a is a prime and a|c1 c2 · · · cn , then a|ci for some i. This follows from the
definition and induction on n. If each cj is irreducible, then a ∼ ci for some i.
1) R is a UFD.
2) Every irreducible element is prime, i.e., a irreducible ⇔ a is prime.
This is a revealing and useful theorem. If R is a FD, then R is a UFD iff each
irreducible element generates a prime ideal. Fortunately, principal ideal domains
have this property, as seen in the next theorem.
1) aR is a maximal ideal.
2) aR is a prime ideal, i.e., a is a prime element.
3) a is irreducible.
Proof Every maximal ideal is a prime ideal, so 1) ⇒ 2). Every prime element is
an irreducible element, so 2) ⇒ 3). Now suppose a is irreducible and show aR is a
maximal ideal. If I is an ideal containing aR, ∃ b ∈ R with I = bR. Since b divides
a, b is a unit or an associate of a. This means I = R or I = aR.
Our goal is to prove that a PID is a UFD. Using the two theorems above, it
only remains to show that a PID is a FD. The proof will not require that ideals be
principally generated, but only that they be finitely generated. This turns out to
be equivalent to the property that any collection of ideals has a “maximal” element.
We shall see below that this is a useful concept which fits naturally into the study of
unique factorization domains.
Proof Suppose there is a non-zero non-unit element that does not factor as the
finite product of irreducibles. Consider all ideals dR where d does not factor. Then ∃
a maximal one cR. The element c must be reducible, i.e., c = ab where neither a nor
b is a unit. Each of aR and bR properly contains cR, and so each of a and b factors as the finite product of irreducibles. Thus c = ab also factors as the finite product of irreducibles, which is a contradiction.
You see the basic structure of UFDs is quite easy. It takes more work to prove
the following theorems, which are stated here only for reference.
Note The combination of the last two theorems shows that Noetherian is a ubiq-
uitous property which is satisfied by many of the basic rings in commutative algebra.
Next are presented two of the standard examples of Noetherian domains that are
not unique factorization domains.
Exercise Let R = Z(√5) = {n + m√5 : n, m ∈ Z}. Show that R is a subring of the real numbers which is not a UFD. In particular, 2 · 2 = (1 − √5) · (−1 − √5) gives two distinct irreducible factorizations of 4.
1) K is a summand of B.
2) g has a right inverse, i.e., ∃ a homomorphism h : C → B with g ◦ h = I : C → C.
(h is called a splitting map.)
[Diagram: A →f B →g C across the top, with i1 : A → A ⊕ C, an isomorphism B ≈ A ⊕ C, and π2 : A ⊕ C → C below, all commuting.]
1) R is a PID.
2) Every submodule of RR is a free R-module of dimension ≤ 1.
This theorem restates the ring property of PID as a module property. Although
this theorem is transparent, it is a precursor to the following classical result.
Consider the following short exact sequences, where f : Rⁿ⁻¹ → Rⁿ⁻¹ ⊕ R is inclusion and g = π : Rⁿ⁻¹ ⊕ R → R is the projection.

0 \to R^{n-1} \xrightarrow{f} R^{n-1} \oplus R \xrightarrow{\pi} R \to 0

0 \to A \cap R^{n-1} \to A \to \pi(A) \to 0
Exercise Let A ⊂ Z2 be the subgroup generated by {(6, 24), (16, 64)}. Show A
is a free Z-module of dimension 1.
Euclidean Domains
The ring Z possesses the Euclidean algorithm and the polynomial ring F [x] has
the division algorithm. The concept of Euclidean domain is an abstraction of these
properties. The axioms are so minuscule that it is surprising you get this much juice
out of them. However they are exactly what you need, and it is possible to just play
around with matrices and get some deep results. If R is a Euclidean domain and M
is a finitely generated R-module, then M is the sum of cyclic modules. This is one of
the great classical theorems of abstract algebra, and you don’t have to worry about
it becoming obsolete. Here N will denote the set of all non-negative integers, not
just the set of positive integers.
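For example, Z is a Euclidean domain with φ(n) = |n|, and F[x] is a Euclidean domain with φ(f) = deg(f); in each case this φ realizes the division algorithm mentioned above.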
The following remarkable theorem is the foundation for the results of this section.
\begin{pmatrix} d_1 & 0 & \cdots & & 0 \\ 0 & d_2 & & & \\ \vdots & & \ddots & & \vdots \\ & & & d_m & \\ 0 & & \cdots & & 0 \end{pmatrix}

where each di ≠ 0, and di|di+1 for 1 ≤ i < m. Also d1 generates the ideal of R generated by the entries of (ai,j).
Proof Let I ⊂ R be the ideal generated by the elements of the matrix A = (ai,j ).
If E ∈ Rn , then the ideal J generated by the elements of EA has J ⊂ I. If E is
invertible, then J = I. In the same manner, if E ∈ Rt is invertible and J is the ideal
generated by the elements of AE, then J = I. This means that row and column
operations on A do not change the ideal I. Since R is a PID, there is an element d1
with I = d1 R, and this will turn out to be the d1 displayed in the theorem.
The matrix (ai,j) has at least one non-zero element d with φ(d) a minimum. However, row and column operations on (ai,j) may produce elements with smaller φ values, so we may suppose d = d1 has the smallest φ value appearing in any matrix obtainable from (ai,j) by row and column operations, and that d1 is in the upper left corner. Then d1 divides the other entries of the first row and first column (otherwise division would leave a remainder with smaller φ value), so those entries may be cleared to obtain
\begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & (c_{i,j}) & \\ 0 & & & \end{pmatrix}
Note that d1 divides each ci,j , and thus I = d1 R. The proof now follows by induction
on the size of the matrix.
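For example, over R = Z let A = \begin{pmatrix} 2 & 4 \\ 6 & 8 \end{pmatrix}. Subtracting 3 times row 1 from row 2 gives \begin{pmatrix} 2 & 4 \\ 0 & -4 \end{pmatrix}, subtracting 2 times column 1 from column 2 gives \begin{pmatrix} 2 & 0 \\ 0 & -4 \end{pmatrix}, and multiplying row 2 by −1 gives \begin{pmatrix} 2 & 0 \\ 0 & 4 \end{pmatrix}. Here d1 = 2 divides d2 = 4, and d1 = 2 generates the ideal generated by the entries 2, 4, 6, 8.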
R^t \xrightarrow{\approx} A \xrightarrow{\subset} B \xrightarrow{\approx} R^n, with ei → hi and gi → ei,

with di|di+1. Since changing the isomorphisms Rᵗ ≈ A and B ≈ Rⁿ corresponds to changing the bases {h1, h2, ..., ht} and {g1, g2, ..., gn}, the theorem follows.
The way Theorem 5 is stated, some or all of the elements di may be units, and for such di, R/di = 0. If we assume that no di is a unit, then the elements d1, d2, ..., dt are called invariant factors. They are unique up to associates, but we do not bother with that here. If R = Z and we select the di to be positive, they are unique. If R = F[x] and we select the di to be monic, then they are unique. The splitting in Theorem 5 is not the ultimate because the modules R/di may be split into the sum of other cyclic modules. To prove this we need the following lemma.
comaximal and thus by the Chinese Remainder Theorem, the natural map is a ring
isomorphism. Since the natural map is also an R-module homomorphism, it is an
R-module isomorphism.
This theorem carries the splitting as far as it can go, as seen by the next exercise.
0 −→ T (M ) −→ M −→ M/T (M ) −→ 0
To complete this section, here are two more theorems that follow from the work
we have done.
Exercise For which primes p and q is the group of units (Zp ×Zq )∗ a cyclic group?
We know from Exercise 2) on page 59 that an invertible matrix over a field is the
product of elementary matrices. This result also holds for any invertible matrix over
a Euclidean domain.
\begin{pmatrix} d_1 & & & 0 \\ & d_2 & & \\ & & \ddots & \\ 0 & & & d_n \end{pmatrix}

where each di ≠ 0 and di|di+1 for 1 ≤ i < n. Also d1 generates the ideal generated by the entries of A. Furthermore A is invertible iff each di is a unit. Thus if A is invertible, A is the product of elementary matrices.
Exercise Let R = Z, A = \begin{pmatrix} 3 & 11 \\ 0 & 4 \end{pmatrix} and D = \begin{pmatrix} 3 & 11 \\ 1 & 4 \end{pmatrix}. Perform elementary operations on A and D to obtain diagonal matrices where the first diagonal element divides the second diagonal element. Write D as the product of elementary matrices. Find the characteristic polynomials of A and D. Find an elementary matrix B over Z such that B⁻¹AB is diagonal. Find an invertible matrix C in R₂ such that C⁻¹DC is diagonal. Show C cannot be selected in Q₂.
Jordan Blocks
In this section, we define the two special types of square matrices used in the
Rational and Jordan canonical forms. Note that the Jordan block B(q) is the sum
of a scalar matrix and a nilpotent matrix. A Jordan block displays its eigenvalue
on the diagonal, and is more interesting than the companion matrix C(q). But as
we shall see later, the Rational canonical form will always exist, while the Jordan
canonical form will exist iff the characteristic polynomial factors as the product of
linear polynomials.
Theorem Let V have the free basis {1, x, x², ..., xⁿ⁻¹}. The companion matrix representing T is

C(q) = \begin{pmatrix} 0 & \cdots & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & & 0 & -a_2 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & \cdots & \cdots & 1 & -a_{n-1} \end{pmatrix}
Theorem Suppose λ ∈ R and q(x) = (x − λ)ⁿ. Let V have the free basis {1, (x − λ), (x − λ)², ..., (x − λ)ⁿ⁻¹}. Then the matrix representing T is

B(q) = \begin{pmatrix} \lambda & 0 & \cdots & \cdots & 0 \\ 1 & \lambda & 0 & \cdots & 0 \\ 0 & 1 & \lambda & & \vdots \\ \vdots & & \ddots & \ddots & \\ 0 & \cdots & \cdots & 1 & \lambda \end{pmatrix}
Note For n = 1, C(a0 + x) = B(a0 + x) = (−a0 ). This is the only case where a
block matrix may be the zero matrix.
Note In B(q), if you wish to have the 1s above the diagonal, reverse the order of
the basis for V .
We are finally ready to prove the Rational and Jordan forms. Using the previous
sections, all that’s left to do is to put the pieces together. (For an overview of Jordan
form, see the section in Chapter 5.)
124 Appendix Chapter 6
\begin{pmatrix} C(d_1) & & & 0 \\ & C(d_2) & & \\ & & \ddots & \\ 0 & & & C(d_t) \end{pmatrix}
\begin{pmatrix} C(p_1^{s_1}) & & & 0 \\ & C(p_2^{s_2}) & & \\ & & \ddots & \\ 0 & & & C(p_r^{s_r}) \end{pmatrix}
The characteristic polynomial of T is p = p_1^{s_1} ··· p_r^{s_r} and p(T) = 0. This is called the Rational canonical form for T.
\begin{pmatrix} B((x - \lambda_1)^{s_1}) & & & 0 \\ & B((x - \lambda_2)^{s_2}) & & \\ & & \ddots & \\ 0 & & & B((x - \lambda_r)^{s_r}) \end{pmatrix}
To conclude this section here are a few comments on the minimal polynomial of a
linear transformation. This part should be studied only if you need it. Suppose V is
an n-dimensional vector space over a field F and T : V → V is a linear transformation.
As before we make V a module over F [x] with T (v) = vx.
Definition Ann(VF[x]) is the set of all h ∈ F[x] which annihilate V, i.e., which satisfy Vh = 0. This is a non-zero ideal of F[x] and is thus generated by a unique monic polynomial u(x) ∈ F[x], Ann(VF[x]) = uF[x]. The polynomial u is called the minimal polynomial of T. Note that u(T) = 0 and if h(x) ∈ F[x], h(T) = 0 iff h is a multiple of u in F[x]. If p(x) ∈ F[x] is the characteristic polynomial of T, p(T) = 0 and thus p is a multiple of u.
Now we state this again in terms of matrices. Suppose A ∈ Fn is a matrix representing T. Then u(A) = 0 and if h(x) ∈ F[x], h(A) = 0 iff h is a multiple of u in F[x]. If p(x) ∈ F[x] is the characteristic polynomial of A, then p(A) = 0 and thus p is a multiple of u. The polynomial u is also called the minimal polynomial of A. Note that these properties hold for any matrix representing T, and thus similar matrices have the same minimal polynomial. If A is given to start with, use the linear transformation T : Fⁿ → Fⁿ determined by A to define the polynomial u.
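For example, the identity matrix I ∈ F₂ has characteristic polynomial p(x) = (x − 1)² but minimal polynomial u(x) = x − 1, while A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} has (x − 1)² as both its characteristic and its minimal polynomial.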
Exercise Suppose Ai ∈ Fni has qi as its characteristic polynomial and its minimal polynomial, and A = \begin{pmatrix} A_1 & & & 0 \\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_n \end{pmatrix}. Find the characteristic polynomial and the minimal polynomial of A.
Exercise Suppose A ∈ Fn .
Determinants
In the chapter on matrices, it is stated without proof that the determinant of the
product is the product of the determinants (see page 63). The purpose of this section
is to give a proof of this. We suppose R is a commutative ring, C is an R-module,
n ≥ 2, and B1 , B2 , . . . , Bn is a sequence of R-modules.
Definition
1) f is symmetric means f (b1 , . . . , bn ) = f (bτ (1) , . . . , bτ (n) ) for all
permutations τ on {1, 2, . . . , n}.
2) f is skew-symmetric if f (b1 , . . . , bn ) = sign(τ )f (bτ (1) , . . . , bτ (n) ) for all τ .
Theorem
i) Each of these three types defines a submodule of the set of all
R-multilinear maps.
ii) Alternating ⇒ skew-symmetric.
iii) If no element of C has order 2, then alternating ⇐⇒ skew-symmetric.
Proof For n = 2, you can simply write it out. f(a1,1e1 + a2,1e2, a1,2e1 + a2,2e2) = a1,1a1,2f(e1, e1) + a1,1a2,2f(e1, e2) + a2,1a1,2f(e2, e1) + a2,1a2,2f(e2, e2) = (a1,1a2,2 − a1,2a2,1)f(e1, e2) = |A|f(e1, e2). For the general case, f(a1,1e1 + a2,1e2 + ··· + an,1en, ..., a1,ne1 + a2,ne2 + ··· + an,nen) = Σ ai1,1ai2,2 ··· ain,n f(ei1, ei2, ..., ein), where the sum is over all 1 ≤ i1 ≤ n, 1 ≤ i2 ≤ n, ..., 1 ≤ in ≤ n. However, if any is = it for s ≠ t, that term is 0 because f is alternating. Therefore the sum is just Σ_τ aτ(1),1aτ(2),2 ··· aτ(n),n f(eτ(1), eτ(2), ..., eτ(n)) = Σ_τ sign(τ)aτ(1),1aτ(2),2 ··· aτ(n),n f(e1, e2, ..., en) = |A|f(e1, e2, ..., en).
This incredible classification of these alternating forms makes the proof of the
following theorem easy. (See the third theorem on page 63.)
Dual Spaces
The concept of dual module is basic, not only in algebra, but also in other areas
such as differential geometry and topology. If V is a finitely generated vector space
over a field F , its dual V ∗ is defined as V ∗ = HomF (V, F ). V ∗ is isomorphic to V , but
in general there is no natural isomorphism from V to V ∗ . However there is a natural
isomorphism from V to V ∗∗ , and so V ∗ is the dual of V and V may be considered
to be the dual of V ∗ . This remarkable fact has many expressions in mathematics.
For example, a tangent plane to a differentiable manifold is a real vector space. The
union of these spaces is the tangent bundle, while the union of the dual spaces is the
cotangent bundle. Thus the tangent (cotangent) bundle may be considered to be the
dual of the cotangent (tangent) bundle. The sections of the tangent bundle are called
vector fields while the sections of the cotangent bundle are called 1-forms.
In algebraic topology, homology groups are derived from chain complexes, while
cohomology groups are derived from the dual chain complexes. The sum of the
cohomology groups forms a ring, while the sum of the homology groups does not.
Thus the concept of dual module has considerable power. We develop here the basic
theory of dual modules.
Theorem
[Diagram: M1 →g M2 →h M3 across the top, with f : M3 → W, f ◦ h : M2 → W, and f ◦ h ◦ g : M1 → W all mapping down to W.]
Theorem
i) If g : M → N is a surjective homomorphism, then H(g) : H(N ) → H(M )
is injective.
ii) If g : M → N is an injective homomorphism and g(M ) is a summand
of N , then H(g) : H(N ) → H(M ) is surjective.
iii) If R is a field, then g is surjective (injective) iff H(g) is injective
(surjective).
Proof This is a good exercise.
Theorem Suppose M has a finite free basis {v1, ..., vn}. Define vi∗ ∈ M∗ by vi∗(v1r1 + ··· + vnrn) = ri. Thus vi∗(vj) = δi,j. Then v1∗, ..., vn∗ is a free basis for M∗, called the dual basis.
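For example, let M = R² with basis v1 = (1, 0) and v2 = (1, 1). Since (r, s) = v1(r − s) + v2s, the dual basis is given by v1∗(r, s) = r − s and v2∗(r, s) = s.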
[Diagram: the commutative square with α : M → M∗∗ across the top, α : N → N∗∗ across the bottom, g : M → N on the left, and g∗∗ : M∗∗ → N∗∗ on the right.]
Proof {α(v1 ), . . . , α(vn )} is the dual basis of {v1∗ , . . . , vn∗ }, i.e., α(vi ) = (vi∗ )∗ .
Note Suppose R is a field and C is the category of finitely generated vector spaces
over R. In the language of category theory, α is a natural equivalence between the
identity functor and the double dual.
Note For finitely generated vector spaces, α is used to identify V and V∗∗. Under this identification V∗ is the dual of V and V is the dual of V∗. Also, if {v1, ..., vn} is a basis for V and {v1∗, ..., vn∗} its dual basis, then {v1, ..., vn} is the dual basis for {v1∗, ..., vn∗}.
In general there is no natural way to identify V and V ∗ . However for real inner
product spaces there is.
Note If {v1 , . . . , vn } is any orthonormal basis for V, {β(v1 ), . . . , β(vn )} is the dual
basis of {v1 , . . . , vn }, that is β(vi ) = vi∗ . The isomorphism β : V → V ∗ defines an
inner product on V ∗ , and under this structure, β is an isometry. If {v1 , . . . , vn } is
an orthonormal basis for V, {v1∗ , . . . , vn∗ } is an orthonormal basis for V ∗ . Also, if U
is another n-dimensional IPS and f : V → U is an isometry, then f ∗ : U ∗ → V ∗
is an isometry and the following diagram commutes.
[Diagram: the commutative square with β : V → V∗ across the top, β : U → U∗ across the bottom, f : U → V going up on the left, and f∗ : V∗ → U∗ going down on the right.]