Undergraduate Texts in Mathematics
Sheldon Axler
Linear Algebra
Done Right
Fourth Edition
Undergraduate Texts in Mathematics
Series Editors
Pamela Gorkin, Mathematics Department, Bucknell University, Lewisburg, PA, USA
Jessica Sidman, Mathematics and Statistics, Amherst College, Amherst, MA, USA
Advisory Board
Colin Adams, Williams College, Williamstown, MA, USA
Jayadev S. Athreya, University of Washington, Seattle, WA, USA
Nathan Kaplan, University of California, Irvine, CA, USA
Jill Pipher, Brown University, Providence, RI, USA
Jeremy Tyson, University of Illinois at Urbana-Champaign, Urbana, IL, USA
Undergraduate Texts in Mathematics are generally aimed at third- and fourth-
year undergraduate mathematics students at North American universities. These
texts strive to provide students and teachers with new perspectives and novel
approaches. The books include motivation that guides the reader to an appreciation
of interrelations among different aspects of the subject. They feature examples that
illustrate key concepts as well as exercises that strengthen understanding.
Sheldon Axler
Linear Algebra Done Right
Fourth Edition
Sheldon Axler
San Francisco, CA, USA
ISSN 0172-6056 ISSN 2197-5604 (electronic)
Undergraduate Texts in Mathematics
ISBN 978-3-031-41025-3 ISBN 978-3-031-41026-0 (eBook)
https://doi.org/10.1007/978-3-031-41026-0
Mathematics Subject Classification (2020): 15-01, 15A03, 15A04, 15A15, 15A18, 15A21
© Sheldon Axler 1996, 1997, 2015, 2024. This book is an open access publication.
Open Access This book is licensed under the terms of the Creative Commons Attribution-NonCommercial
4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncom-
mercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you
give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons
license and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative Commons
license, unless indicated otherwise in a credit line to the material. If material is not included in the book’s
Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the
permitted use, you will need to obtain permission directly from the copyright holder.
This work is subject to copyright. All commercial rights are reserved by the author(s), whether the whole
or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Regarding these commercial rights a non-exclusive
license has been granted to the publisher.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Paper in this product is recyclable.
About the Author
Sheldon Axler received his undergraduate degree from Princeton University,
followed by a PhD in mathematics from the University of California at Berkeley.
As a postdoctoral Moore Instructor at MIT, Axler received a university-wide
teaching award. He was then an assistant professor, associate professor, and
professor at Michigan State University, where he received the first J. Sutherland
Frame Teaching Award and the Distinguished Faculty Award.
Axler received the Lester R. Ford Award for expository writing from the Math-
ematical Association of America in 1996, for a paper that eventually expanded into
this book. In addition to publishing numerous research papers, he is the author of
six mathematics textbooks, ranging from freshman to graduate level. Previous
editions of this book have been adopted as a textbook at over 375 universities and
colleges and have been translated into three languages.
Axler has served as Editor-in-Chief of the Mathematical Intelligencer and
Associate Editor of the American Mathematical Monthly. He has been a member
of the Council of the American Mathematical Society and of the Board of Trustees
of the Mathematical Sciences Research Institute. He has also served on the
editorial board of Springer’s series Undergraduate Texts in Mathematics, Graduate
Texts in Mathematics, Universitext, and Springer Monographs in Mathematics.
Axler is a Fellow of the American Mathematical Society and has been a
recipient of numerous grants from the National Science Foundation.
Axler joined San Francisco State University as chair of the Mathematics
Department in 1997. He served as dean of the College of Science & Engineering
from 2002 to 2015, when he returned to a regular faculty appointment as a
professor in the Mathematics Department.
The author and his cat Moon. (Photo: Carrie Heeter, Bishnu Sarangi)
Cover equation: Formula for the 𝑛th Fibonacci number. Exercise 21 in Section 5D
uses linear algebra to derive this formula.
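As a numerical aside (not part of the book): assuming the cover equation is the standard closed form 𝐹ₙ = (𝜑ⁿ − 𝜓ⁿ)/√5, where 𝜑 = (1 + √5)/2 and 𝜓 = (1 − √5)/2, a short Python sketch can check it against the usual Fibonacci recurrence:

```python
from math import sqrt

def fib_closed_form(n):
    # Assumed closed form: F_n = (phi**n - psi**n) / sqrt(5)
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi**n - psi**n) / sqrt(5))

def fib_recurrence(n):
    # F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The two definitions agree (double precision is exact enough for small n).
assert all(fib_closed_form(n) == fib_recurrence(n) for n in range(40))
print(fib_closed_form(10))  # 55
```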
Contents
About the Author v
Preface for Students xii
Preface for Instructors xiii
Acknowledgments xvii
Chapter 1
Vector Spaces 1
1A 𝐑𝑛 and 𝐂𝑛 2
Complex Numbers 2
Lists 5
𝐅𝑛 6
Digression on Fields 10
Exercises 1A 10
1B Definition of Vector Space 12
Exercises 1B 16
1C Subspaces 18
Sums of Subspaces 19
Direct Sums 21
Exercises 1C 24
Chapter 2
Finite-Dimensional Vector Spaces 27
2A Span and Linear Independence 28
Linear Combinations and Span 28
Linear Independence 31
Exercises 2A 37
2B Bases 39
Exercises 2B 42
2C Dimension 44
Exercises 2C 48
Chapter 3
Linear Maps 51
3A Vector Space of Linear Maps 52
Definition and Examples of Linear Maps 52
Algebraic Operations on ℒ(𝑉, 𝑊) 55
Exercises 3A 57
3B Null Spaces and Ranges 59
Null Space and Injectivity 59
Range and Surjectivity 61
Fundamental Theorem of Linear Maps 62
Exercises 3B 66
3C Matrices 69
Representing a Linear Map by a Matrix 69
Addition and Scalar Multiplication of Matrices 71
Matrix Multiplication 72
Column–Row Factorization and Rank of a Matrix 77
Exercises 3C 79
3D Invertibility and Isomorphisms 82
Invertible Linear Maps 82
Isomorphic Vector Spaces 86
Linear Maps Thought of as Matrix Multiplication 88
Change of Basis 90
Exercises 3D 93
3E Products and Quotients of Vector Spaces 96
Products of Vector Spaces 96
Quotient Spaces 98
Exercises 3E 103
3F Duality 105
Dual Space and Dual Map 105
Null Space and Range of Dual of Linear Map 109
Matrix of Dual of Linear Map 113
Exercises 3F 115
Chapter 4
Polynomials 119
Zeros of Polynomials 122
Division Algorithm for Polynomials 123
Factorization of Polynomials over 𝐂 124
Factorization of Polynomials over 𝐑 127
Exercises 4 129
Chapter 5
Eigenvalues and Eigenvectors 132
5A Invariant Subspaces 133
Eigenvalues 133
Polynomials Applied to Operators 137
Exercises 5A 139
5B The Minimal Polynomial 143
Existence of Eigenvalues on Complex Vector Spaces 143
Eigenvalues and the Minimal Polynomial 144
Eigenvalues on Odd-Dimensional Real Vector Spaces 149
Exercises 5B 150
5C Upper-Triangular Matrices 154
Exercises 5C 160
5D Diagonalizable Operators 163
Diagonal Matrices 163
Conditions for Diagonalizability 165
Gershgorin Disk Theorem 170
Exercises 5D 172
5E Commuting Operators 175
Exercises 5E 179
Chapter 6
Inner Product Spaces 181
6A Inner Products and Norms 182
Inner Products 182
Norms 186
Exercises 6A 191
6B Orthonormal Bases 197
Orthonormal Lists and the Gram–Schmidt Procedure 197
Linear Functionals on Inner Product Spaces 204
Exercises 6B 207
6C Orthogonal Complements and Minimization Problems 211
Orthogonal Complements 211
Minimization Problems 217
Pseudoinverse 220
Exercises 6C 224
Chapter 7
Operators on Inner Product Spaces 227
7A Self-Adjoint and Normal Operators 228
Adjoints 228
Self-Adjoint Operators 233
Normal Operators 235
Exercises 7A 239
7B Spectral Theorem 243
Real Spectral Theorem 243
Complex Spectral Theorem 246
Exercises 7B 247
7C Positive Operators 251
Exercises 7C 255
7D Isometries, Unitary Operators, and Matrix Factorization 258
Isometries 258
Unitary Operators 260
QR Factorization 263
Cholesky Factorization 266
Exercises 7D 268
7E Singular Value Decomposition 270
Singular Values 270
SVD for Linear Maps and for Matrices 273
Exercises 7E 278
7F Consequences of Singular Value Decomposition 280
Norms of Linear Maps 280
Approximation by Linear Maps with Lower-Dimensional Range 283
Polar Decomposition 285
Operators Applied to Ellipsoids and Parallelepipeds 287
Volume via Singular Values 291
Properties of an Operator as Determined by Its Eigenvalues 293
Exercises 7F 294
Chapter 8
Operators on Complex Vector Spaces 297
8A Generalized Eigenvectors and Nilpotent Operators 298
Null Spaces of Powers of an Operator 298
Generalized Eigenvectors 300
Nilpotent Operators 303
Exercises 8A 306
8B Generalized Eigenspace Decomposition 308
Generalized Eigenspaces 308
Multiplicity of an Eigenvalue 310
Block Diagonal Matrices 314
Exercises 8B 316
8C Consequences of Generalized Eigenspace Decomposition 319
Square Roots of Operators 319
Jordan Form 321
Exercises 8C 324
8D Trace: A Connection Between Matrices and Operators 326
Exercises 8D 330
Chapter 9
Multilinear Algebra and Determinants 332
9A Bilinear Forms and Quadratic Forms 333
Bilinear Forms 333
Symmetric Bilinear Forms 337
Quadratic Forms 341
Exercises 9A 344
9B Alternating Multilinear Forms 346
Multilinear Forms 346
Alternating Multilinear Forms and Permutations 348
Exercises 9B 352
9C Determinants 354
Defining the Determinant 354
Properties of Determinants 357
Exercises 9C 367
9D Tensor Products 370
Tensor Product of Two Vector Spaces 370
Tensor Product of Inner Product Spaces 376
Tensor Product of Multiple Vector Spaces 378
Exercises 9D 380
Photo Credits 383
Symbol Index 384
Index 385
Colophon: Notes on Typesetting 390
Preface for Students
You are probably about to begin your second exposure to linear algebra. Unlike
your first brush with the subject, which probably emphasized Euclidean spaces
and matrices, this encounter will focus on abstract vector spaces and linear maps.
These terms will be defined later, so don’t worry if you do not know what they
mean. This book starts from the beginning of the subject, assuming no knowledge
of linear algebra. The key point is that you are about to immerse yourself in
serious mathematics, with an emphasis on attaining a deep understanding of the
definitions, theorems, and proofs.
You cannot read mathematics the way you read a novel. If you zip through a
page in less than an hour, you are probably going too fast. When you encounter
the phrase “as you should verify”, you should indeed do the verification, which
will usually require some writing on your part. When steps are left out, you need
to supply the missing pieces. You should ponder and internalize each definition.
For each theorem, you should seek examples to show why each hypothesis is
necessary. Discussions with other students should help.
As a visual aid, definitions are in yellow boxes and theorems are in blue boxes
(in color versions of the book). Each theorem has an informal descriptive name.
Please check the website below for additional information about the book,
including a link to videos that are freely available to accompany the book.
Your suggestions, comments, and corrections are most welcome.
Best wishes for success and enjoyment in learning linear algebra!
Sheldon Axler
San Francisco State University
website: https://linear.axler.net
e-mail: [email protected]
Preface for Instructors
You are about to teach a course that will probably give students their second
exposure to linear algebra. During their first brush with the subject, your students
probably worked with Euclidean spaces and matrices. In contrast, this course will
emphasize abstract vector spaces and linear maps.
The title of this book deserves an explanation. Most linear algebra textbooks
use determinants to prove that every linear operator on a finite-dimensional com-
plex vector space has an eigenvalue. Determinants are difficult, nonintuitive,
and often defined without motivation. To prove the theorem about existence of
eigenvalues on complex vector spaces, most books must define determinants,
prove that a linear operator is not invertible if and only if its determinant equals 0,
and then define the characteristic polynomial. This tortuous (torturous?) path
gives students little feeling for why eigenvalues exist.
In contrast, the simple determinant-free proofs presented here (for example,
see 5.19) offer more insight. Once determinants have been moved to the end of
the book, a new route opens to the main goal of linear algebra—understanding
the structure of linear operators.
This book starts at the beginning of the subject, with no prerequisites other
than the usual demand for suitable mathematical maturity. A few examples
and exercises involve calculus concepts such as continuity, differentiation, and
integration. You can easily skip those examples and exercises if your students
have not had calculus. If your students have had calculus, then those examples and
exercises can enrich their experience by showing connections between different
parts of mathematics.
Even if your students have already seen some of the material in the first few
chapters, they may be unaccustomed to working exercises of the type presented
here, most of which require an understanding of proofs.
Here is a chapter-by-chapter summary of the highlights of the book:
• Chapter 1: Vector spaces are defined in this chapter, and their basic properties
are developed.
• Chapter 2: Linear independence, span, basis, and dimension are defined in this
chapter, which presents the basic theory of finite-dimensional vector spaces.
• Chapter 3: This chapter introduces linear maps. The key result here is the
fundamental theorem of linear maps: if 𝑇 is a linear map on 𝑉, then dim 𝑉 =
dim null 𝑇 + dim range 𝑇. Quotient spaces and duality are topics in this chapter
at a higher level of abstraction than most of the book; these topics can be
skipped (except that duality is needed for tensor products in Section 9D).
• Chapter 4: The part of the theory of polynomials that will be needed to un-
derstand linear operators is presented in this chapter. This chapter contains no
linear algebra. It can be covered quickly, especially if your students are already
familiar with these results.
• Chapter 5: The idea of studying a linear operator by restricting it to small sub-
spaces leads to eigenvectors in the early part of this chapter. The highlight of this
chapter is a simple proof that on complex vector spaces, eigenvalues always ex-
ist. This result is then used to show that each linear operator on a complex vector
space has an upper-triangular matrix with respect to some basis. The minimal
polynomial plays an important role here and later in the book. For example, this
chapter gives a characterization of the diagonalizable operators in terms of the
minimal polynomial. Section 5E can be skipped if you want to save some time.
• Chapter 6: Inner product spaces are defined in this chapter, and their basic
properties are developed along with tools such as orthonormal bases and the
Gram–Schmidt procedure. This chapter also shows how orthogonal projections
can be used to solve certain minimization problems. The pseudoinverse is then
introduced as a useful tool when the inverse does not exist. The material on
the pseudoinverse can be skipped if you want to save some time.
• Chapter 7: The spectral theorem, which characterizes the linear operators for
which there exists an orthonormal basis consisting of eigenvectors, is one of
the highlights of this book. The work in earlier chapters pays off here with espe-
cially simple proofs. This chapter also deals with positive operators, isometries,
unitary operators, matrix factorizations, and especially the singular value de-
composition, which leads to the polar decomposition and norms of linear maps.
• Chapter 8: This chapter shows that for each operator on a complex vector space,
there is a basis of the vector space consisting of generalized eigenvectors of the
operator. Then the generalized eigenspace decomposition describes a linear
operator on a complex vector space. The multiplicity of an eigenvalue is defined
as the dimension of the corresponding generalized eigenspace. These tools are
used to prove that every invertible linear operator on a complex vector space
has a square root. Then the chapter gives a proof that every linear operator on
a complex vector space can be put into Jordan form. The chapter concludes
with an investigation of the trace of operators.
• Chapter 9: This chapter begins by looking at bilinear forms and showing that the
vector space of bilinear forms is the direct sum of the subspaces of symmetric
bilinear forms and alternating bilinear forms. Then quadratic forms are diag-
onalized. Moving to multilinear forms, the chapter shows that the subspace of
alternating 𝑛-linear forms on an 𝑛-dimensional vector space has dimension one.
This result leads to a clean basis-free definition of the determinant of an opera-
tor. For complex vector spaces, the determinant turns out to equal the product of
the eigenvalues, with each eigenvalue included in the product as many times as
its multiplicity. The chapter concludes with an introduction to tensor products.
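The fundamental theorem of linear maps highlighted in the Chapter 3 summary above (dim 𝑉 = dim null 𝑇 + dim range 𝑇) can be illustrated for a concrete matrix. The sketch below is not from the book; it computes rank by Gaussian elimination over the rationals and checks that rank plus nullity equals the dimension of the domain.

```python
from fractions import Fraction

def rank(A):
    """Count pivot rows after Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in A]
    r = 0  # next pivot row
    for c in range(len(M[0])):
        # find a row at or below r with a nonzero entry in column c
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A represents a linear map T : R^4 -> R^3 (second row = 2 * first row).
A = [[1, 2, 3, 4],
     [2, 4, 6, 8],
     [1, 1, 1, 1]]

n = len(A[0])            # dimension of the domain = 4
rk = rank(A)             # dim range T = 2
nullity = n - rk         # dim null T, by the fundamental theorem
print(rk, nullity)       # 2 2
assert rk + nullity == n
```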
This book usually develops linear algebra simultaneously for real and complex
vector spaces by letting 𝐅 denote either the real or the complex numbers. If you and
your students prefer to think of 𝐅 as an arbitrary field, then see the comments at the
end of Section 1A. I prefer avoiding arbitrary fields at this level because they intro-
duce extra abstraction without leading to any new linear algebra. Also, students are
more comfortable thinking of polynomials as functions instead of the more formal
objects needed for polynomials with coefficients in finite fields. Finally, even if the
beginning part of the theory were developed with arbitrary fields, inner product
spaces would push consideration back to just real and complex vector spaces.
You probably cannot cover everything in this book in one semester. Going
through all the material in the first seven or eight chapters during a one-semester
course may require a rapid pace. If you must reach Chapter 9, then consider
skipping the material on quotient spaces in Section 3E, skipping Section 3F
on duality (unless you intend to cover tensor products in Section 9D), covering
Chapter 4 on polynomials in a half hour, skipping Section 5E on commuting
operators, and skipping the subsection in Section 6C on the pseudoinverse.
A goal more important than teaching any particular theorem is to develop in
students the ability to understand and manipulate the objects of linear algebra.
Mathematics can be learned only by doing. Fortunately, linear algebra has many
good homework exercises. When teaching this course, during each class I usually
assign as homework several of the exercises, due the next class. Going over the
homework might take up significant time in a typical class.
Some of the exercises are intended to lead curious students into important
topics beyond what might usually be included in a basic second course in linear
algebra.
The author’s top ten
Listed below are the author’s ten favorite results in the book, in order of their
appearance in the book. Students who leave your course with a good understanding
of these crucial results will have an excellent foundation in linear algebra.
• any two bases of a vector space have the same length (2.34)
• fundamental theorem of linear maps (3.21)
• existence of eigenvalues if 𝐅 = 𝐂 (5.19)
• upper-triangular form always exists if 𝐅 = 𝐂 (5.47)
• Cauchy–Schwarz inequality (6.14)
• Gram–Schmidt procedure (6.32)
• spectral theorem (7.29 and 7.31)
• singular value decomposition (7.70)
• generalized eigenspace decomposition theorem when 𝐅 = 𝐂 (8.22)
• dimension of alternating 𝑛-linear forms on 𝑉 is 1 if dim 𝑉 = 𝑛 (9.37)
Major improvements and additions for the fourth edition
• Over 250 new exercises and over 70 new examples.
• Increasing use of the minimal polynomial to provide cleaner proofs of multiple
results, including necessary and sufficient conditions for an operator to have an
upper-triangular matrix with respect to some basis (see Section 5C), necessary
and sufficient conditions for diagonalizability (see Section 5D), and the real
spectral theorem (see Section 7B).
• New section on commuting operators (see Section 5E).
• New subsection on pseudoinverse (see Section 6C).
• New subsections on QR factorization/Cholesky factorization (see Section 7D).
• Singular value decomposition now done for linear maps from an inner product
space to another (possibly different) inner product space, rather than only deal-
ing with linear operators from an inner product space to itself (see Section 7E).
• Polar decomposition now proved from singular value decomposition, rather than
in the opposite order; this has led to cleaner proofs of both the singular value
decomposition (see Section 7E) and the polar decomposition (see Section 7F).
• New subsection on norms of linear maps on finite-dimensional inner prod-
uct spaces, using the singular value decomposition to avoid even mentioning
supremum in the definition of the norm of a linear map (see Section 7F).
• New subsection on approximation by linear maps with lower-dimensional range
(see Section 7F).
• New elementary proof of the important result that if 𝑇 is an operator on a finite-
dimensional complex vector space 𝑉, then there exists a basis of 𝑉 consisting
of generalized eigenvectors of 𝑇 (see 8.9).
• New Chapter 9 on multilinear algebra, including bilinear forms, quadratic
forms, multilinear forms, and tensor products. Determinants now are defined
using a basis-free approach via alternating multilinear forms.
• New formatting to improve the student-friendly appearance of the book. For
example, the definition and result boxes now have rounded corners instead of
right-angle corners, for a gentler look. The main font size has been reduced
from 11 point to 10.5 point.
Please check the website below for additional links and information about the
book. Your suggestions, comments, and corrections are most welcome.
Best wishes for teaching a successful linear algebra class!
Sheldon Axler
San Francisco State University
website: https://linear.axler.net
e-mail: [email protected]
Contact the author, or Springer if the author is not available, for permission for translations or other commercial reuse of the contents of this book.
Acknowledgments
I owe a huge intellectual debt to all the mathematicians who created linear algebra
over the past two centuries. The results in this book belong to the common heritage
of mathematics. A special case of a theorem may first have been proved long ago,
then sharpened and improved by many mathematicians in different time periods.
Bestowing proper credit on all contributors would be a difficult task that I have
not undertaken. In no case should the reader assume that any result presented
here represents my original contribution.
Many people helped make this a better book. The three previous editions of
this book were used as a textbook at over 375 universities and colleges around
the world. I received thousands of suggestions and comments from faculty and
students who used the book. Many of those suggestions led to improvements
in this edition. The manuscript for this fourth edition was class tested at 30
universities. I am extremely grateful for the useful feedback that I received from
faculty and students during this class testing.
The long list of people who should be thanked for their suggestions would
fill up many pages. Lists are boring to read. Thus to represent all contributors
to this edition, I will mention only Noel Hinton, a graduate student at Australian
National University, who sent me more suggestions and corrections for this fourth
edition than anyone else. To everyone who contributed suggestions, let me say
how truly grateful I am to all of you. Many many thanks!
I thank Springer for providing me with help when I needed it and for allowing
me the freedom to make the final decisions about the content and appearance
of this book. Special thanks to the two terrific mathematics editors at Springer
who worked with me on this project—Loretta Bartolini during the first half of
my work on the fourth edition, and Elizabeth Loew during the second half of my
work on the fourth edition. I am deeply indebted to David Kramer, who did a
magnificent job of copyediting and prevented me from making many mistakes.
Extra special thanks to my fantastic partner Carrie Heeter. Her understanding
and encouragement enabled me to work intensely on this new edition. Our won-
derful cat Moon, whose picture appears on the About the Author page, provided
sweet breaks throughout the writing process. Moon died suddenly due to a blood
clot as this book was being finished. We are grateful for five precious years with
him.
Sheldon Axler
Chapter 1
Vector Spaces
Linear algebra is the study of linear maps on finite-dimensional vector spaces.
Eventually we will learn what all these terms mean. In this chapter we will define
vector spaces and discuss their elementary properties.
In linear algebra, better theorems and more insight emerge if complex numbers
are investigated along with real numbers. Thus we will begin by introducing the
complex numbers and their basic properties.
We will generalize the examples of a plane and of ordinary space to 𝐑𝑛 and
𝐂𝑛, which we then will generalize to the notion of a vector space. As we will see,
a vector space is a set with operations of addition and scalar multiplication that
satisfy natural algebraic properties.
Then our next topic will be subspaces, which play a role for vector spaces
analogous to the role played by subsets for sets. Finally, we will look at sums
of subspaces (analogous to unions of subsets) and direct sums of subspaces
(analogous to unions of disjoint sets).

René Descartes explaining his work to Queen Christina of Sweden.
Vector spaces are a generalization of the description of a plane
using two coordinates, as published by Descartes in 1637.
(Image: Pierre Louis Dumesnil, Nils Forsberg)
1A 𝐑𝑛 and 𝐂𝑛
Complex Numbers
You should already be familiar with basic properties of the set 𝐑 of real numbers.
Complex numbers were invented so that we can take square roots of negative
numbers. The idea is to assume we have a square root of −1, denoted by 𝑖, that
obeys the usual rules of arithmetic. Here are the formal definitions.
1.1 definition: complex numbers, 𝐂
• A complex number is an ordered pair (𝑎, 𝑏), where 𝑎, 𝑏 ∈ 𝐑 , but we will
write this as 𝑎 + 𝑏𝑖.
• The set of all complex numbers is denoted by 𝐂 :
𝐂 = {𝑎 + 𝑏𝑖 ∶ 𝑎, 𝑏 ∈ 𝐑}.
• Addition and multiplication on 𝐂 are defined by
(𝑎 + 𝑏𝑖) + (𝑐 + 𝑑𝑖) = (𝑎 + 𝑐) + (𝑏 + 𝑑)𝑖,
(𝑎 + 𝑏𝑖)(𝑐 + 𝑑𝑖) = (𝑎𝑐 − 𝑏𝑑) + (𝑎𝑑 + 𝑏𝑐)𝑖;
here 𝑎, 𝑏, 𝑐, 𝑑 ∈ 𝐑 .
If 𝑎 ∈ 𝐑 , we identify 𝑎 + 0𝑖 with the real number 𝑎. Thus we think of 𝐑 as a
subset of 𝐂 . We usually write 0 + 𝑏𝑖 as just 𝑏𝑖, and we usually write 0 + 1𝑖 as just 𝑖.
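Definition 1.1 translates directly into code. The sketch below (mine, not the book's) represents a complex number as an ordered pair and implements exactly the addition and multiplication rules defined above:

```python
class Complex:
    """A complex number as an ordered pair (a, b), written a + bi (see 1.1)."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        # (a + bi) + (c + di) = (a + c) + (b + d)i
        return Complex(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return Complex(self.a * other.a - self.b * other.b,
                       self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

    def __repr__(self):
        return f"{self.a} + {self.b}i"

i = Complex(0, 1)
print(i * i)  # -1 + 0i, i.e. the pair (0, 1) squares to -1
```

Note that the multiplication rule alone forces 𝑖² = −1; nothing else about 𝑖 needs to be assumed.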
To motivate the definition of complex multiplication given above, pretend that we knew that 𝑖² = −1 and then use the usual rules of arithmetic to derive the formula above for the product of two complex numbers. Then use that formula to verify that we indeed have 𝑖² = −1.

The symbol 𝑖 was first used to denote √−1 by Leonhard Euler in 1777.
Do not memorize the formula for the product of two complex numbers—you
can always rederive it by recalling that 𝑖² = −1 and then using the usual rules of
arithmetic (as given by 1.3). The next example illustrates this procedure.
1.2 example: complex arithmetic
The product (2 + 3𝑖)(4 + 5𝑖) can be evaluated by applying the distributive and
commutative properties from 1.3:
(2 + 3𝑖)(4 + 5𝑖) = 2 ⋅ (4 + 5𝑖) + (3𝑖)(4 + 5𝑖)
= 2 ⋅ 4 + 2 ⋅ 5𝑖 + 3𝑖 ⋅ 4 + (3𝑖)(5𝑖)
= 8 + 10𝑖 + 12𝑖 − 15
= −7 + 22𝑖.
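A quick cross-check, not from the book: Python's built-in complex type (which writes 𝑖 as j) uses the same multiplication rule and reproduces this result.

```python
z = (2 + 3j) * (4 + 5j)
print(z)  # (-7+22j)
assert z == -7 + 22j
```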
Our first result states that complex addition and complex multiplication have
the familiar properties that we expect.
1.3 properties of complex arithmetic
commutativity
𝛼 + 𝛽 = 𝛽 + 𝛼 and 𝛼𝛽 = 𝛽𝛼 for all 𝛼, 𝛽 ∈ 𝐂 .
associativity
(𝛼 + 𝛽) + 𝜆 = 𝛼 + (𝛽 + 𝜆) and (𝛼𝛽)𝜆 = 𝛼(𝛽𝜆) for all 𝛼, 𝛽, 𝜆 ∈ 𝐂 .
identities
𝜆 + 0 = 𝜆 and 𝜆1 = 𝜆 for all 𝜆 ∈ 𝐂 .
additive inverse
For every 𝛼 ∈ 𝐂 , there exists a unique 𝛽 ∈ 𝐂 such that 𝛼 + 𝛽 = 0.
multiplicative inverse
For every 𝛼 ∈ 𝐂 with 𝛼 ≠ 0, there exists a unique 𝛽 ∈ 𝐂 such that 𝛼𝛽 = 1.
distributive property
𝜆(𝛼 + 𝛽) = 𝜆𝛼 + 𝜆𝛽 for all 𝜆, 𝛼, 𝛽 ∈ 𝐂 .
The properties above are proved using the familiar properties of real numbers
and the definitions of complex addition and multiplication. The next example
shows how commutativity of complex multiplication is proved. Proofs of the
other properties above are left as exercises.
1.4 example: commutativity of complex multiplication
To show that 𝛼𝛽 = 𝛽𝛼 for all 𝛼, 𝛽 ∈ 𝐂 , suppose
𝛼 = 𝑎 + 𝑏𝑖 and 𝛽 = 𝑐 + 𝑑𝑖,
where 𝑎, 𝑏, 𝑐, 𝑑 ∈ 𝐑 . Then the definition of multiplication of complex numbers
shows that
𝛼𝛽 = (𝑎 + 𝑏𝑖)(𝑐 + 𝑑𝑖)
= (𝑎𝑐 − 𝑏𝑑) + (𝑎𝑑 + 𝑏𝑐)𝑖
and
𝛽𝛼 = (𝑐 + 𝑑𝑖)(𝑎 + 𝑏𝑖)
= (𝑐𝑎 − 𝑑𝑏) + (𝑐𝑏 + 𝑑𝑎)𝑖.
The equations above and the commutativity of multiplication and addition of real
numbers show that 𝛼𝛽 = 𝛽𝛼.
Next, we define the additive and multiplicative inverses of complex numbers,
and then use those inverses to define subtraction and division operations with
complex numbers.
1.5 definition: −𝛼, subtraction, 1/𝛼, division
Suppose 𝛼, 𝛽 ∈ 𝐂 .
• Let −𝛼 denote the additive inverse of 𝛼. Thus −𝛼 is the unique complex
number such that
𝛼 + (−𝛼) = 0.
• Subtraction on 𝐂 is defined by
𝛽 − 𝛼 = 𝛽 + (−𝛼).
• For 𝛼 ≠ 0, let 1/𝛼 denote the multiplicative inverse of 𝛼. Thus 1/𝛼 is
the unique complex number such that
𝛼(1/𝛼) = 1.
• For 𝛼 ≠ 0, division by 𝛼 is defined by
𝛽/𝛼 = 𝛽(1/𝛼).
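The multiplicative inverse can be written explicitly: if 𝛼 = 𝑎 + 𝑏𝑖 ≠ 0, then multiplying numerator and denominator of 1/(𝑎 + 𝑏𝑖) by 𝑎 − 𝑏𝑖 gives 1/𝛼 = (𝑎 − 𝑏𝑖)/(𝑎2 + 𝑏2). A Python sketch of this formula (the helper name reciprocal is ours):

```python
def reciprocal(alpha: complex) -> complex:
    # 1/(a+bi) = (a - bi)/(a^2 + b^2), valid whenever alpha != 0.
    a, b = alpha.real, alpha.imag
    denom = a * a + b * b
    return complex(a / denom, -b / denom)

alpha = 3 + 4j
inv = reciprocal(alpha)
assert abs(alpha * inv - 1) < 1e-12            # alpha * (1/alpha) = 1
beta = 2 + 1j
assert abs(beta * inv - beta / alpha) < 1e-12  # beta/alpha = beta * (1/alpha)
```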
So that we can conveniently make definitions and prove theorems that apply
to both real and complex numbers, we adopt the following notation.
1.6 notation: 𝐅
Throughout this book, 𝐅 stands for either 𝐑 or 𝐂 .
Thus if we prove a theorem involving 𝐅, we will know that it holds when 𝐅 is replaced with 𝐑 and when 𝐅 is replaced with 𝐂 .

The letter 𝐅 is used because 𝐑 and 𝐂 are examples of what are called fields.
Elements of 𝐅 are called scalars. The word “scalar” (which is just a fancy
word for “number”) is often used when we want to emphasize that an object is a
number, as opposed to a vector (vectors will be defined soon).
For 𝛼 ∈ 𝐅 and 𝑚 a positive integer, we define 𝛼𝑚 to denote the product of 𝛼
with itself 𝑚 times:
𝛼𝑚 = 𝛼⋯𝛼 (𝑚 times).
This definition implies that
(𝛼𝑚)𝑛 = 𝛼𝑚𝑛 and (𝛼𝛽)𝑚 = 𝛼𝑚𝛽𝑚
for all 𝛼, 𝛽 ∈ 𝐅 and all positive integers 𝑚, 𝑛.
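These identities can be spot-checked numerically (a sketch; the tolerance allows for floating-point rounding):

```python
import random

random.seed(0)
# Spot-check (alpha^m)^n = alpha^(mn) and (alpha*beta)^m = alpha^m * beta^m
# for random complex scalars and small positive exponents.
for _ in range(100):
    alpha = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    beta = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    m, n = random.randint(1, 5), random.randint(1, 5)
    assert abs((alpha ** m) ** n - alpha ** (m * n)) < 1e-9
    assert abs((alpha * beta) ** m - alpha ** m * beta ** m) < 1e-9
```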
Lists
Before defining 𝐑𝑛 and 𝐂𝑛, we look at two important examples.
1.7 example: 𝐑2 and 𝐑3
• The set 𝐑2, which you can think of as a plane, is the set of all ordered pairs of
real numbers:
𝐑2 = {(𝑥, 𝑦) ∶ 𝑥, 𝑦 ∈ 𝐑}.
• The set 𝐑3, which you can think of as ordinary space, is the set of all ordered
triples of real numbers:
𝐑3 = {(𝑥, 𝑦, 𝑧) ∶ 𝑥, 𝑦, 𝑧 ∈ 𝐑}.
To generalize 𝐑2 and 𝐑3 to higher dimensions, we first need to discuss the
concept of lists.
1.8 definition: list, length
• Suppose 𝑛 is a nonnegative integer. A list of length 𝑛 is an ordered collec-
tion of 𝑛 elements (which might be numbers, other lists, or more abstract
objects).
• Two lists are equal if and only if they have the same length and the same
elements in the same order.
Lists are often written as elements separated by commas and surrounded by parentheses. Thus a list of length two is an ordered pair that might be written as (𝑎, 𝑏). A list of length three is an ordered triple that might be written as (𝑥, 𝑦, 𝑧). A list of length 𝑛 might look like this:
(𝑧1 , …, 𝑧𝑛 ).

Many mathematicians call a list of length 𝑛 an 𝑛-tuple.
Sometimes we will use the word list without specifying its length. Remember,
however, that by definition each list has a finite length that is a nonnegative integer.
Thus an object that looks like (𝑥1 , 𝑥2 , … ), which might be said to have infinite
length, is not a list.
A list of length 0 looks like this: ( ). We consider such an object to be a list
so that some of our theorems will not have trivial exceptions.
Lists differ from sets in two ways: in lists, order matters and repetitions have
meaning; in sets, order and repetitions are irrelevant.
1.9 example: lists versus sets
• The lists (3, 5) and (5, 3) are not equal, but the sets {3, 5} and {5, 3} are equal.
• The lists (4, 4) and (4, 4, 4) are not equal (they do not have the same length),
although the sets {4, 4} and {4, 4, 4} both equal the set {4}.
𝐅𝑛
To define the higher-dimensional analogues of 𝐑2 and 𝐑3, we will simply replace
𝐑 with 𝐅 (which equals 𝐑 or 𝐂 ) and replace the 2 or 3 with an arbitrary positive
integer.
1.10 notation: 𝑛
Fix a positive integer 𝑛 for the rest of this chapter.
1.11 definition: 𝐅𝑛 , coordinate
𝐅𝑛 is the set of all lists of length 𝑛 of elements of 𝐅 :
𝐅𝑛 = {(𝑥1 , …, 𝑥𝑛 ) ∶ 𝑥𝑘 ∈ 𝐅 for 𝑘 = 1, …, 𝑛}.
For (𝑥1 , …, 𝑥𝑛 ) ∈ 𝐅𝑛 and 𝑘 ∈ {1, …, 𝑛}, we say that 𝑥𝑘 is the 𝑘 th coordinate of
(𝑥1 , …, 𝑥𝑛 ).
If 𝐅 = 𝐑 and 𝑛 equals 2 or 3, then the definition above of 𝐅𝑛 agrees with our
previous notions of 𝐑2 and 𝐑3.
1.12 example: 𝐂4
𝐂4 is the set of all lists of four complex numbers:
𝐂4 = {(𝑧1 , 𝑧2 , 𝑧3 , 𝑧4 ) ∶ 𝑧1 , 𝑧2 , 𝑧3 , 𝑧4 ∈ 𝐂}.
Read Flatland: A Romance of Many Dimensions, by Edwin A. Abbott, for an amusing account of how 𝐑3 would be perceived by creatures living in 𝐑2. This novel, published in 1884, may help you imagine a physical space of four or more dimensions.

If 𝑛 ≥ 4, we cannot visualize 𝐑𝑛 as a physical object. Similarly, 𝐂1 can be thought of as a plane, but for 𝑛 ≥ 2, the human brain cannot provide a full image of 𝐂𝑛. However, even if 𝑛 is large, we can perform algebraic manipulations in 𝐅𝑛 as easily as in 𝐑2 or 𝐑3. For example, addition in 𝐅𝑛 is defined as follows.
1.13 definition: addition in 𝐅𝑛
Addition in 𝐅𝑛 is defined by adding corresponding coordinates:
(𝑥1 , …, 𝑥𝑛 ) + (𝑦1 , …, 𝑦𝑛 ) = (𝑥1 + 𝑦1 , …, 𝑥𝑛 + 𝑦𝑛 ).
Often the mathematics of 𝐅𝑛 becomes cleaner if we use a single letter to denote
a list of 𝑛 numbers, without explicitly writing the coordinates. For example, the
next result is stated with 𝑥 and 𝑦 in 𝐅𝑛 even though the proof requires the more
cumbersome notation of (𝑥1 , …, 𝑥𝑛 ) and (𝑦1 , …, 𝑦𝑛 ).
1.14 commutativity of addition in 𝐅𝑛
If 𝑥, 𝑦 ∈ 𝐅𝑛, then 𝑥 + 𝑦 = 𝑦 + 𝑥.
Proof Suppose 𝑥 = (𝑥1 , …, 𝑥𝑛 ) ∈ 𝐅𝑛 and 𝑦 = (𝑦1 , …, 𝑦𝑛 ) ∈ 𝐅𝑛. Then
𝑥 + 𝑦 = (𝑥1 , …, 𝑥𝑛 ) + (𝑦1 , …, 𝑦𝑛 )
= (𝑥1 + 𝑦1 , …, 𝑥𝑛 + 𝑦𝑛 )
= (𝑦1 + 𝑥1 , …, 𝑦𝑛 + 𝑥𝑛 )
= (𝑦1 , …, 𝑦𝑛 ) + (𝑥1 , …, 𝑥𝑛 )
= 𝑦 + 𝑥,
where the second and fourth equalities above hold because of the definition of
addition in 𝐅𝑛 and the third equality holds because of the usual commutativity of
addition in 𝐅.
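The definition of addition in 𝐅𝑛 and the commutativity just proved are easy to mirror numerically (a sketch; Python tuples stand for lists of length 𝑛, and the helper name vector_add is ours):

```python
def vector_add(x, y):
    # Coordinatewise addition in F^n (definition 1.13).
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    return tuple(xk + yk for xk, yk in zip(x, y))

x = (1, 2, 3)
y = (4 + 1j, -2, 0.5)
assert vector_add(x, y) == (5 + 1j, 0, 3.5)
assert vector_add(x, y) == vector_add(y, x)   # commutativity (1.14)
```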
The symbol ∎ means “end of proof”.

If a single letter is used to denote an element of 𝐅𝑛, then the same letter with appropriate subscripts is often used when coordinates must be displayed. For example, if 𝑥 ∈ 𝐅𝑛, then letting 𝑥 equal (𝑥1 , …, 𝑥𝑛 ) is good notation, as shown in the proof above. Even better, work with just 𝑥 and avoid explicit coordinates when possible.
1.15 notation: 0
Let 0 denote the list of length 𝑛 whose coordinates are all 0:
0 = (0, …, 0).
Here we are using the symbol 0 in two different ways—on the left side of the
equation above, the symbol 0 denotes a list of length 𝑛, which is an element of 𝐅𝑛,
whereas on the right side, each 0 denotes a number. This potentially confusing
practice actually causes no problems because the context should always make
clear which 0 is intended.
1.16 example: context determines which 0 is intended
Consider the statement that 0 is an additive identity for 𝐅𝑛 :
𝑥 + 0 = 𝑥 for all 𝑥 ∈ 𝐅𝑛.
Here the 0 above is the list defined in 1.15, not the number 0, because we have
not defined the sum of an element of 𝐅𝑛 (namely, 𝑥) and the number 0.
A picture can aid our intuition. We will draw pictures in 𝐑2 because we can sketch this space on two-dimensional surfaces such as paper and computer screens. A typical element of 𝐑2 is a point 𝑣 = (𝑎, 𝑏). Sometimes we think of 𝑣 not as a point but as an arrow starting at the origin and ending at (𝑎, 𝑏), as shown here. When we think of an element of 𝐑2 as an arrow, we refer to it as a vector.

[Figure: Elements of 𝐑2 can be thought of as points or as vectors.]
When we think of vectors in 𝐑2 as arrows, we can move an arrow parallel to itself (not changing its length or direction) and still think of it as the same vector. With that viewpoint, you will often gain better understanding by dispensing with the coordinate axes and the explicit coordinates and just thinking of the vector, as shown in the figure here. The two arrows shown here have the same length and same direction, so we think of them as the same vector.

[Figure: A vector.]
Whenever we use pictures in 𝐑2 or use the somewhat vague language of points and vectors, remember that these are just aids to our understanding, not substitutes for the actual mathematics that we will develop. Although we cannot draw good pictures in high-dimensional spaces, the elements of these spaces are as rigorously defined as elements of 𝐑2.

Mathematical models of the economy can have thousands of variables, say 𝑥1 , …, 𝑥5000 , which means that we must work in 𝐑5000 . Such a space cannot be dealt with geometrically. However, the algebraic approach works well. Thus our subject is called linear algebra.
For example, (2, −3, 17, 𝜋, √2) is an element of 𝐑5, and we may casually
refer to it as a point in 𝐑5 or a vector in 𝐑5 without worrying about whether the
geometry of 𝐑5 has any physical meaning.
Recall that we defined the sum of two elements of 𝐅𝑛 to be the element of 𝐅𝑛
obtained by adding corresponding coordinates; see 1.13. As we will now see,
addition has a simple geometric interpretation in the special case of 𝐑2.
Suppose we have two vectors 𝑢 and 𝑣 in 𝐑2 that we want to add. Move the vector 𝑣 parallel to itself so that its initial point coincides with the end point of the vector 𝑢, as shown here. The sum 𝑢 + 𝑣 then equals the vector whose initial point equals the initial point of 𝑢 and whose end point equals the end point of the vector 𝑣, as shown here.

[Figure: The sum of two vectors.]
In the next definition, the 0 on the right side of the displayed equation is the
list 0 ∈ 𝐅𝑛.
1.17 definition: additive inverse in 𝐅𝑛 , −𝑥
For 𝑥 ∈ 𝐅𝑛, the additive inverse of 𝑥, denoted by −𝑥, is the vector −𝑥 ∈ 𝐅𝑛
such that
𝑥 + (−𝑥) = 0.
Thus if 𝑥 = (𝑥1 , …, 𝑥𝑛 ), then −𝑥 = (−𝑥1 , …, −𝑥𝑛 ).
The additive inverse of a vector in 𝐑2 is the vector with the same length but pointing in the opposite direction. The figure here illustrates this way of thinking about the additive inverse in 𝐑2. As you can see, the vector labeled −𝑥 has the same length as the vector labeled 𝑥 but points in the opposite direction.

[Figure: A vector and its additive inverse.]
Having dealt with addition in 𝐅𝑛, we now turn to multiplication. We could
define a multiplication in 𝐅𝑛 in a similar fashion, starting with two elements of
𝐅𝑛 and getting another element of 𝐅𝑛 by multiplying corresponding coordinates.
Experience shows that this definition is not useful for our purposes. Another
type of multiplication, called scalar multiplication, will be central to our subject.
Specifically, we need to define what it means to multiply an element of 𝐅𝑛 by an
element of 𝐅.
1.18 definition: scalar multiplication in 𝐅𝑛
The product of a number 𝜆 and a vector in 𝐅𝑛 is computed by multiplying
each coordinate of the vector by 𝜆:
𝜆(𝑥1 , …, 𝑥𝑛 ) = (𝜆𝑥1 , …, 𝜆𝑥𝑛 );
here 𝜆 ∈ 𝐅 and (𝑥1 , …, 𝑥𝑛 ) ∈ 𝐅𝑛.
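Scalar multiplication on tuples is just as short to sketch (the helper name scalar_mult is ours):

```python
def scalar_mult(lam, x):
    # Multiply each coordinate of x in F^n by the scalar lam (definition 1.18).
    return tuple(lam * xk for xk in x)

x = (1, -2, 3 + 1j)
assert scalar_mult(2, x) == (2, -4, 6 + 2j)
# The additive inverse -x of definition 1.17 is the same as (-1)x:
assert scalar_mult(-1, x) == (-1, 2, -3 - 1j)
```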
Scalar multiplication has a nice geometric interpretation in 𝐑2. If 𝜆 > 0 and 𝑥 ∈ 𝐑2, then 𝜆𝑥 is the vector that points in the same direction as 𝑥 and whose length is 𝜆 times the length of 𝑥. In other words, to get 𝜆𝑥, we shrink or stretch 𝑥 by a factor of 𝜆, depending on whether 𝜆 < 1 or 𝜆 > 1.
If 𝜆 < 0 and 𝑥 ∈ 𝐑2, then 𝜆𝑥 is the vector that points in the direction opposite to that of 𝑥 and whose length is |𝜆| times the length of 𝑥, as shown here.

[Figure: Scalar multiplication.]

Scalar multiplication in 𝐅𝑛 multiplies together a scalar and a vector, getting a vector. In contrast, the dot product in 𝐑2 or 𝐑3 multiplies together two vectors and gets a scalar. Generalizations of the dot product will become important in Chapter 6.
Digression on Fields
A field is a set containing at least two distinct elements called 0 and 1, along with
operations of addition and multiplication satisfying all properties listed in 1.3.
Thus 𝐑 and 𝐂 are fields, as is the set of rational numbers along with the usual
operations of addition and multiplication. Another example of a field is the set
{0, 1} with the usual operations of addition and multiplication except that 1 + 1 is
defined to equal 0.
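That two-element field can be sketched with arithmetic modulo 2 (the helper names f2_add and f2_mul are ours):

```python
# The field {0, 1}: usual multiplication, addition mod 2 so that 1 + 1 = 0.
def f2_add(x, y):
    return (x + y) % 2

def f2_mul(x, y):
    return x * y          # products of 0 and 1 never need reduction

assert f2_add(1, 1) == 0                        # the one modified rule
assert f2_add(0, 1) == 1 and f2_mul(0, 1) == 0
assert f2_mul(1, 1) == 1                        # 1 is its own multiplicative inverse
```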
In this book we will not deal with fields other than 𝐑 and 𝐂 . However, many
of the definitions, theorems, and proofs in linear algebra that work for the fields
𝐑 and 𝐂 also work without change for arbitrary fields. If you prefer to do so,
throughout much of this book (except for Chapters 6 and 7, which deal with inner
product spaces) you can think of 𝐅 as denoting an arbitrary field instead of 𝐑
or 𝐂 . For results (except in the inner product chapters) that have as a hypothesis
that 𝐅 is 𝐂 , you can probably replace that hypothesis with the hypothesis that 𝐅
is an algebraically closed field, which means that every nonconstant polynomial
with coefficients in 𝐅 has a zero. A few results, such as Exercise 13 in Section
1C, require the hypothesis on 𝐅 that 1 + 1 ≠ 0.
Exercises 1A
1 Show that 𝛼 + 𝛽 = 𝛽 + 𝛼 for all 𝛼, 𝛽 ∈ 𝐂 .
2 Show that (𝛼 + 𝛽) + 𝜆 = 𝛼 + (𝛽 + 𝜆) for all 𝛼, 𝛽, 𝜆 ∈ 𝐂 .
3 Show that (𝛼𝛽)𝜆 = 𝛼(𝛽𝜆) for all 𝛼, 𝛽, 𝜆 ∈ 𝐂 .
4 Show that 𝜆(𝛼 + 𝛽) = 𝜆𝛼 + 𝜆𝛽 for all 𝜆, 𝛼, 𝛽 ∈ 𝐂 .
5 Show that for every 𝛼 ∈ 𝐂 , there exists a unique 𝛽 ∈ 𝐂 such that 𝛼 + 𝛽 = 0.
6 Show that for every 𝛼 ∈ 𝐂 with 𝛼 ≠ 0, there exists a unique 𝛽 ∈ 𝐂 such
that 𝛼𝛽 = 1.
7 Show that (−1 + √3𝑖)/2 is a cube root of 1 (meaning that its cube equals 1).
8 Find two distinct square roots of 𝑖.
9 Find 𝑥 ∈ 𝐑4 such that
(4, −3, 1, 7) + 2𝑥 = (5, 9, −6, 8).
10 Explain why there does not exist 𝜆 ∈ 𝐂 such that
𝜆(2 − 3𝑖, 5 + 4𝑖, −6 + 7𝑖) = (12 − 5𝑖, 7 + 22𝑖, −32 − 9𝑖).
11 Show that (𝑥 + 𝑦) + 𝑧 = 𝑥 + (𝑦 + 𝑧) for all 𝑥, 𝑦, 𝑧 ∈ 𝐅𝑛.
12 Show that (𝑎𝑏)𝑥 = 𝑎(𝑏𝑥) for all 𝑥 ∈ 𝐅𝑛 and all 𝑎, 𝑏 ∈ 𝐅.
13 Show that 1𝑥 = 𝑥 for all 𝑥 ∈ 𝐅𝑛.
14 Show that 𝜆(𝑥 + 𝑦) = 𝜆𝑥 + 𝜆𝑦 for all 𝜆 ∈ 𝐅 and all 𝑥, 𝑦 ∈ 𝐅𝑛.
15 Show that (𝑎 + 𝑏)𝑥 = 𝑎𝑥 + 𝑏𝑥 for all 𝑎, 𝑏 ∈ 𝐅 and all 𝑥 ∈ 𝐅𝑛.
“Can you do addition?” the White Queen asked. “What’s one and one and one
and one and one and one and one and one and one and one?”
“I don’t know,” said Alice. “I lost count.”
—Through the Looking Glass, Lewis Carroll
1B Definition of Vector Space
The motivation for the definition of a vector space comes from properties of
addition and scalar multiplication in 𝐅𝑛 : Addition is commutative, associative,
and has an identity. Every element has an additive inverse. Scalar multiplication
is associative. Scalar multiplication by 1 acts as expected. Addition and scalar
multiplication are connected by distributive properties.
We will define a vector space to be a set 𝑉 with an addition and a scalar
multiplication on 𝑉 that satisfy the properties in the paragraph above.
1.19 definition: addition, scalar multiplication
• An addition on a set 𝑉 is a function that assigns an element 𝑢 + 𝑣 ∈ 𝑉
to each pair of elements 𝑢, 𝑣 ∈ 𝑉.
• A scalar multiplication on a set 𝑉 is a function that assigns an element
𝜆𝑣 ∈ 𝑉 to each 𝜆 ∈ 𝐅 and each 𝑣 ∈ 𝑉.
Now we are ready to give the formal definition of a vector space.
1.20 definition: vector space
A vector space is a set 𝑉 along with an addition on 𝑉 and a scalar multiplication
on 𝑉 such that the following properties hold.
commutativity
𝑢 + 𝑣 = 𝑣 + 𝑢 for all 𝑢, 𝑣 ∈ 𝑉.
associativity
(𝑢 + 𝑣) + 𝑤 = 𝑢 + (𝑣 + 𝑤) and (𝑎𝑏)𝑣 = 𝑎(𝑏𝑣) for all 𝑢, 𝑣, 𝑤 ∈ 𝑉 and for all
𝑎, 𝑏 ∈ 𝐅.
additive identity
There exists an element 0 ∈ 𝑉 such that 𝑣 + 0 = 𝑣 for all 𝑣 ∈ 𝑉.
additive inverse
For every 𝑣 ∈ 𝑉, there exists 𝑤 ∈ 𝑉 such that 𝑣 + 𝑤 = 0.
multiplicative identity
1𝑣 = 𝑣 for all 𝑣 ∈ 𝑉.
distributive properties
𝑎(𝑢 + 𝑣) = 𝑎𝑢 + 𝑎𝑣 and (𝑎 + 𝑏)𝑣 = 𝑎𝑣 + 𝑏𝑣 for all 𝑎, 𝑏 ∈ 𝐅 and all 𝑢, 𝑣 ∈ 𝑉.
The following geometric language sometimes aids our intuition.
1.21 definition: vector, point
Elements of a vector space are called vectors or points.
The scalar multiplication in a vector space depends on 𝐅. Thus when we need
to be precise, we will say that 𝑉 is a vector space over 𝐅 instead of saying simply
that 𝑉 is a vector space. For example, 𝐑𝑛 is a vector space over 𝐑 , and 𝐂𝑛 is a
vector space over 𝐂 .
1.22 definition: real vector space, complex vector space
• A vector space over 𝐑 is called a real vector space.
• A vector space over 𝐂 is called a complex vector space.
Usually the choice of 𝐅 is either clear from the context or irrelevant. Thus we
often assume that 𝐅 is lurking in the background without specifically mentioning it.
With the usual operations of addition and scalar multiplication, 𝐅𝑛 is a vector space over 𝐅, as you should verify. The example of 𝐅𝑛 motivated our definition of vector space.

The simplest vector space is {0}, which contains only one point.
1.23 example: 𝐅∞
𝐅∞ is defined to be the set of all sequences of elements of 𝐅 :
𝐅∞ = {(𝑥1 , 𝑥2 , … ) ∶ 𝑥𝑘 ∈ 𝐅 for 𝑘 = 1, 2, …}.
Addition and scalar multiplication on 𝐅∞ are defined as expected:
(𝑥1 , 𝑥2 , … ) + (𝑦1 , 𝑦2 , … ) = (𝑥1 + 𝑦1 , 𝑥2 + 𝑦2 , … ),
𝜆(𝑥1 , 𝑥2 , … ) = (𝜆𝑥1 , 𝜆𝑥2 , … ).
With these definitions, 𝐅∞ becomes a vector space over 𝐅, as you should verify.
The additive identity in this vector space is the sequence of all 0’s.
Our next example of a vector space involves a set of functions.
1.24 notation: 𝐅𝑆
• If 𝑆 is a set, then 𝐅𝑆 denotes the set of functions from 𝑆 to 𝐅.
• For 𝑓, 𝑔 ∈ 𝐅𝑆, the sum 𝑓 + 𝑔 ∈ 𝐅𝑆 is the function defined by
( 𝑓 + 𝑔)(𝑥) = 𝑓 (𝑥) + 𝑔(𝑥)
for all 𝑥 ∈ 𝑆.
• For 𝜆 ∈ 𝐅 and 𝑓 ∈ 𝐅𝑆, the product 𝜆 𝑓 ∈ 𝐅𝑆 is the function defined by
(𝜆 𝑓 )(𝑥) = 𝜆 𝑓 (𝑥)
for all 𝑥 ∈ 𝑆.
As an example of the notation above, if 𝑆 is the interval [0, 1] and 𝐅 = 𝐑 , then 𝐑[0, 1] is the set of real-valued functions on the interval [0, 1].
You should verify all three bullet points in the next example.
1.25 example: 𝐅𝑆 is a vector space
• If 𝑆 is a nonempty set, then 𝐅𝑆 (with the operations of addition and scalar
multiplication as defined above) is a vector space over 𝐅.
• The additive identity of 𝐅𝑆 is the function 0 ∶ 𝑆 → 𝐅 defined by
0(𝑥) = 0
for all 𝑥 ∈ 𝑆.
• For 𝑓 ∈ 𝐅𝑆, the additive inverse of 𝑓 is the function − 𝑓 ∶ 𝑆 → 𝐅 defined by
(− 𝑓 )(𝑥) = − 𝑓 (𝑥)
for all 𝑥 ∈ 𝑆.
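The pointwise operations of 1.24 can be sketched directly, with Python functions standing for elements of 𝐅𝑆 (the helper names add and smul are ours):

```python
# Pointwise operations on functions from S to F, as in 1.24.
def add(f, g):
    return lambda x: f(x) + g(x)

def smul(lam, f):
    return lambda x: lam * f(x)

f = lambda x: x * x      # an element of the function space with S = [0, 1], say
g = lambda x: 3 * x
h = add(f, smul(2, g))   # the function x -> x^2 + 6x
assert h(0.5) == 0.5 ** 2 + 6 * 0.5
```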
The vector space 𝐅𝑛 is a special case of the vector space 𝐅𝑆 because each (𝑥1 , …, 𝑥𝑛 ) ∈ 𝐅𝑛 can be thought of as a function 𝑥 from the set {1, 2, …, 𝑛} to 𝐅 by writing 𝑥(𝑘) instead of 𝑥𝑘 for the 𝑘 th coordinate of (𝑥1 , …, 𝑥𝑛 ). In other words, we can think of 𝐅𝑛 as 𝐅{1, 2, …, 𝑛}. Similarly, we can think of 𝐅∞ as 𝐅{1, 2, … }.

The elements of the vector space 𝐑[0, 1] are real-valued functions on [0, 1], not lists. In general, a vector space is an abstract entity whose elements might be lists, functions, or weird objects.
Soon we will see further examples of vector spaces, but first we need to develop
some of the elementary properties of vector spaces.
The definition of a vector space requires it to have an additive identity. The
next result states that this identity is unique.
1.26 unique additive identity
A vector space has a unique additive identity.
Proof Suppose 0 and 0′ are both additive identities for some vector space 𝑉.
Then
0′ = 0′ + 0 = 0 + 0′ = 0,
where the first equality holds because 0 is an additive identity, the second equality
comes from commutativity, and the third equality holds because 0′ is an additive
identity. Thus 0′ = 0, proving that 𝑉 has only one additive identity.
Each element 𝑣 in a vector space has an additive inverse, an element 𝑤 in the
vector space such that 𝑣 + 𝑤 = 0. The next result shows that each element in a
vector space has only one additive inverse.
1.27 unique additive inverse
Every element in a vector space has a unique additive inverse.
Proof Suppose 𝑉 is a vector space. Let 𝑣 ∈ 𝑉. Suppose 𝑤 and 𝑤′ are additive
inverses of 𝑣. Then
𝑤 = 𝑤 + 0 = 𝑤 + (𝑣 + 𝑤′ ) = (𝑤 + 𝑣) + 𝑤′ = 0 + 𝑤′ = 𝑤′.
Thus 𝑤 = 𝑤′, as desired.
Because additive inverses are unique, the following notation now makes sense.
1.28 notation: −𝑣, 𝑤 − 𝑣
Let 𝑣, 𝑤 ∈ 𝑉. Then
• −𝑣 denotes the additive inverse of 𝑣;
• 𝑤 − 𝑣 is defined to be 𝑤 + (−𝑣).
Almost all results in this book involve some vector space. To avoid having to
restate frequently that 𝑉 is a vector space, we now make the necessary declaration
once and for all.
1.29 notation: 𝑉
For the rest of this book, 𝑉 denotes a vector space over 𝐅.
In the next result, 0 denotes a scalar (the number 0 ∈ 𝐅 ) on the left side of the
equation and a vector (the additive identity of 𝑉) on the right side of the equation.
1.30 the number 0 times a vector
0𝑣 = 0 for every 𝑣 ∈ 𝑉.
Proof For 𝑣 ∈ 𝑉, we have
0𝑣 = (0 + 0)𝑣 = 0𝑣 + 0𝑣.
Adding the additive inverse of 0𝑣 to both sides of the equation above gives 0 = 0𝑣, as desired.

The result in 1.30 involves the additive identity of 𝑉 and scalar multiplication. The only part of the definition of a vector space that connects vector addition and scalar multiplication is the distributive property. Thus the distributive property must be used in the proof of 1.30.

In the next result, 0 denotes the additive identity of 𝑉. Although their proofs are similar, 1.30 and 1.31 are not identical. More precisely, 1.30 states that the product of the scalar 0 and any vector equals the vector 0, whereas 1.31 states that the product of any scalar and the vector 0 equals the vector 0.
1.31 a number times the vector 0
𝑎0 = 0 for every 𝑎 ∈ 𝐅.
Proof For 𝑎 ∈ 𝐅, we have
𝑎0 = 𝑎(0 + 0) = 𝑎0 + 𝑎0.
Adding the additive inverse of 𝑎0 to both sides of the equation above gives 0 = 𝑎0,
as desired.
Now we show that if an element of 𝑉 is multiplied by the scalar −1, then the
result is the additive inverse of the element of 𝑉.
1.32 the number −1 times a vector
(−1)𝑣 = −𝑣 for every 𝑣 ∈ 𝑉.
Proof For 𝑣 ∈ 𝑉, we have
𝑣 + (−1)𝑣 = 1𝑣 + (−1)𝑣 = (1 + (−1))𝑣 = 0𝑣 = 0.
This equation says that (−1)𝑣, when added to 𝑣, gives 0. Thus (−1)𝑣 is the
additive inverse of 𝑣, as desired.
Exercises 1B
1 Prove that −(−𝑣) = 𝑣 for every 𝑣 ∈ 𝑉.
2 Suppose 𝑎 ∈ 𝐅, 𝑣 ∈ 𝑉, and 𝑎𝑣 = 0. Prove that 𝑎 = 0 or 𝑣 = 0.
3 Suppose 𝑣, 𝑤 ∈ 𝑉. Explain why there exists a unique 𝑥 ∈ 𝑉 such that
𝑣 + 3𝑥 = 𝑤.
4 The empty set is not a vector space. The empty set fails to satisfy only one
of the requirements listed in the definition of a vector space (1.20). Which
one?
5 Show that in the definition of a vector space (1.20), the additive inverse
condition can be replaced with the condition that
0𝑣 = 0 for all 𝑣 ∈ 𝑉.
Here the 0 on the left side is the number 0, and the 0 on the right side is the
additive identity of 𝑉.
The phrase “a condition can be replaced” in a definition means that the collection of objects satisfying the definition is unchanged if the original condition is replaced with the new condition.
6 Let ∞ and −∞ denote two distinct objects, neither of which is in 𝐑 . Define
an addition and scalar multiplication on 𝐑 ∪ {∞, −∞} as you could guess
from the notation. Specifically, the sum and product of two real numbers is
as usual, and for 𝑡 ∈ 𝐑 define
𝑡∞ = −∞ if 𝑡 < 0, 0 if 𝑡 = 0, ∞ if 𝑡 > 0;
𝑡(−∞) = ∞ if 𝑡 < 0, 0 if 𝑡 = 0, −∞ if 𝑡 > 0;
and
𝑡 + ∞ = ∞ + 𝑡 = ∞ + ∞ = ∞,
𝑡 + (−∞) = (−∞) + 𝑡 = (−∞) + (−∞) = −∞,
∞ + (−∞) = (−∞) + ∞ = 0.
With these operations of addition and scalar multiplication, is 𝐑 ∪ {∞, −∞}
a vector space over 𝐑 ? Explain.
7 Suppose 𝑆 is a nonempty set. Let 𝑉 𝑆 denote the set of functions from 𝑆 to 𝑉.
Define a natural addition and scalar multiplication on 𝑉 𝑆, and show that 𝑉 𝑆
is a vector space with these definitions.
8 Suppose 𝑉 is a real vector space.
• The complexification of 𝑉, denoted by 𝑉𝐂 , equals 𝑉× 𝑉. An element of
𝑉𝐂 is an ordered pair (𝑢, 𝑣), where 𝑢, 𝑣 ∈ 𝑉, but we write this as 𝑢 + 𝑖𝑣.
• Addition on 𝑉𝐂 is defined by
(𝑢1 + 𝑖𝑣1 ) + (𝑢2 + 𝑖𝑣2 ) = (𝑢1 + 𝑢2 ) + 𝑖(𝑣1 + 𝑣2 )
for all 𝑢1 , 𝑣1 , 𝑢2 , 𝑣2 ∈ 𝑉.
• Complex scalar multiplication on 𝑉𝐂 is defined by
(𝑎 + 𝑏𝑖)(𝑢 + 𝑖𝑣) = (𝑎𝑢 − 𝑏𝑣) + 𝑖(𝑎𝑣 + 𝑏𝑢)
for all 𝑎, 𝑏 ∈ 𝐑 and all 𝑢, 𝑣 ∈ 𝑉.
Prove that with the definitions of addition and scalar multiplication as above,
𝑉𝐂 is a complex vector space.
Think of 𝑉 as a subset of 𝑉𝐂 by identifying 𝑢 ∈ 𝑉 with 𝑢 + 𝑖0. The construc-
tion of 𝑉𝐂 from 𝑉 can then be thought of as generalizing the construction
of 𝐂𝑛 from 𝐑𝑛.
1C Subspaces
By considering subspaces, we can greatly expand our examples of vector spaces.
1.33 definition: subspace
A subset 𝑈 of 𝑉 is called a subspace of 𝑉 if 𝑈 is also a vector space with the
same additive identity, addition, and scalar multiplication as on 𝑉.
Some people use the terminology linear subspace, which means the same as subspace.

The next result gives the easiest way to check whether a subset of a vector space is a subspace.
1.34 conditions for a subspace
A subset 𝑈 of 𝑉 is a subspace of 𝑉 if and only if 𝑈 satisfies the following
three conditions.
additive identity
0 ∈ 𝑈.
closed under addition
𝑢, 𝑤 ∈ 𝑈 implies 𝑢 + 𝑤 ∈ 𝑈.
closed under scalar multiplication
𝑎 ∈ 𝐅 and 𝑢 ∈ 𝑈 implies 𝑎𝑢 ∈ 𝑈.
Proof If 𝑈 is a subspace of 𝑉, then 𝑈 satisfies the three conditions above by the definition of vector space.
Conversely, suppose 𝑈 satisfies the three conditions above. The first condition ensures that the additive identity of 𝑉 is in 𝑈. The second condition ensures that addition makes sense on 𝑈. The third condition ensures that scalar multiplication makes sense on 𝑈.
If 𝑢 ∈ 𝑈, then −𝑢 [which equals (−1)𝑢 by 1.32] is also in 𝑈 by the third condition above. Hence every element of 𝑈 has an additive inverse in 𝑈.
The other parts of the definition of a vector space, such as associativity and commutativity, are automatically satisfied for 𝑈 because they hold on the larger space 𝑉. Thus 𝑈 is a vector space and hence is a subspace of 𝑉.

The additive identity condition above could be replaced with the condition that 𝑈 is nonempty (because then taking 𝑢 ∈ 𝑈 and multiplying it by 0 would imply that 0 ∈ 𝑈). However, if a subset 𝑈 of 𝑉 is indeed a subspace, then usually the quickest way to show that 𝑈 is nonempty is to show that 0 ∈ 𝑈.
The three conditions in the result above usually enable us to determine quickly
whether a given subset of 𝑉 is a subspace of 𝑉. You should verify all assertions
in the next example.
1.35 example: subspaces
(a) If 𝑏 ∈ 𝐅, then
{(𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ) ∈ 𝐅4 ∶ 𝑥3 = 5𝑥4 + 𝑏}
is a subspace of 𝐅4 if and only if 𝑏 = 0.
(b) The set of continuous real-valued functions on the interval [0, 1] is a subspace
of 𝐑[0, 1].
(c) The set of differentiable real-valued functions on 𝐑 is a subspace of 𝐑𝐑.
(d) The set of differentiable real-valued functions 𝑓 on the interval (0, 3) such
that 𝑓 ′(2) = 𝑏 is a subspace of 𝐑(0, 3) if and only if 𝑏 = 0.
(e) The set of all sequences of complex numbers with limit 0 is a subspace of 𝐂∞.
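Item (a) with 𝑏 = 0 can be sanity-checked numerically against the three conditions of 1.34 (a sketch; exact integer arithmetic avoids rounding issues):

```python
import random

# U = {(x1, x2, x3, x4) in F^4 : x3 = 5*x4}, the b = 0 case of 1.35(a).
def in_U(x):
    return x[2] == 5 * x[3]

def rand_U():
    # Build a member of U by choosing x4 freely and setting x3 = 5*x4.
    t = random.randint(-9, 9)
    return (random.randint(-9, 9), random.randint(-9, 9), 5 * t, t)

random.seed(1)
assert in_U((0, 0, 0, 0))                                # additive identity
for _ in range(100):
    u, w = rand_U(), rand_U()
    a = random.randint(-9, 9)
    assert in_U(tuple(uk + wk for uk, wk in zip(u, w)))  # closed under addition
    assert in_U(tuple(a * uk for uk in u))               # closed under scalar mult
```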
Verifying some of the items above shows the linear structure underlying parts of calculus. For example, (b) above requires the result that the sum of two continuous functions is continuous. As another example, (d) above requires the result that for a constant 𝑐, the derivative of 𝑐 𝑓 equals 𝑐 times the derivative of 𝑓.

The set {0} is the smallest subspace of 𝑉, and 𝑉 itself is the largest subspace of 𝑉. The empty set is not a subspace of 𝑉 because a subspace must be a vector space and hence must contain at least one element, namely, an additive identity.
The subspaces of 𝐑2 are precisely {0}, all lines in 𝐑2 containing the origin,
and 𝐑2. The subspaces of 𝐑3 are precisely {0}, all lines in 𝐑3 containing the origin,
all planes in 𝐑3 containing the origin, and 𝐑3. To prove that all these objects are
indeed subspaces is straightforward—the hard part is to show that they are the
only subspaces of 𝐑2 and 𝐑3. That task will be easier after we introduce some
additional tools in the next chapter.
Sums of Subspaces
When dealing with vector spaces, we are usually interested only in subspaces, as opposed to arbitrary subsets. The notion of the sum of subspaces will be useful.

The union of subspaces is rarely a subspace (see Exercise 12), which is why we usually work with sums rather than unions.
1.36 definition: sum of subspaces
Suppose 𝑉1 , …, 𝑉𝑚 are subspaces of 𝑉. The sum of 𝑉1 , …, 𝑉𝑚 , denoted by
𝑉1 + ⋯ + 𝑉𝑚 , is the set of all possible sums of elements of 𝑉1 , …, 𝑉𝑚 . More
precisely,
𝑉1 + ⋯ + 𝑉𝑚 = {𝑣1 + ⋯ + 𝑣𝑚 ∶ 𝑣1 ∈ 𝑉1 , …, 𝑣𝑚 ∈ 𝑉𝑚 }.
Let’s look at some examples of sums of subspaces.
1.37 example: a sum of subspaces of 𝐅3
Suppose 𝑈 is the set of all elements of 𝐅3 whose second and third coordinates
equal 0, and 𝑊 is the set of all elements of 𝐅3 whose first and third coordinates
equal 0:
𝑈 = {(𝑥, 0, 0) ∈ 𝐅3 ∶ 𝑥 ∈ 𝐅} and 𝑊 = {(0, 𝑦, 0) ∈ 𝐅3 ∶ 𝑦 ∈ 𝐅}.
Then
𝑈 + 𝑊 = {(𝑥, 𝑦, 0) ∈ 𝐅3 ∶ 𝑥, 𝑦 ∈ 𝐅},
as you should verify.
1.38 example: a sum of subspaces of 𝐅4
Suppose
𝑈 = {(𝑥, 𝑥, 𝑦, 𝑦) ∈ 𝐅4 ∶ 𝑥, 𝑦 ∈ 𝐅} and 𝑊 = {(𝑥, 𝑥, 𝑥, 𝑦) ∈ 𝐅4 ∶ 𝑥, 𝑦 ∈ 𝐅}.
Using words rather than symbols, we could say that 𝑈 is the set of elements
of 𝐅4 whose first two coordinates equal each other and whose third and fourth
coordinates equal each other. Similarly, 𝑊 is the set of elements of 𝐅4 whose first
three coordinates equal each other.
To find a description of 𝑈 + 𝑊, consider a typical element (𝑎, 𝑎, 𝑏, 𝑏) of 𝑈 and
a typical element (𝑐, 𝑐, 𝑐, 𝑑) of 𝑊, where 𝑎, 𝑏, 𝑐, 𝑑 ∈ 𝐅. We have
(𝑎, 𝑎, 𝑏, 𝑏) + (𝑐, 𝑐, 𝑐, 𝑑) = (𝑎 + 𝑐, 𝑎 + 𝑐, 𝑏 + 𝑐, 𝑏 + 𝑑),
which shows that every element of 𝑈 + 𝑊 has its first two coordinates equal to
each other. Thus
1.39 𝑈 + 𝑊 ⊆ {(𝑥, 𝑥, 𝑦, 𝑧) ∈ 𝐅4 ∶ 𝑥, 𝑦, 𝑧 ∈ 𝐅}.
To prove the inclusion in the other direction, suppose 𝑥, 𝑦, 𝑧 ∈ 𝐅. Then
(𝑥, 𝑥, 𝑦, 𝑧) = (𝑥, 𝑥, 𝑦, 𝑦) + (0, 0, 0, 𝑧 − 𝑦),
where the first vector on the right is in 𝑈 and the second vector on the right is
in 𝑊. Thus (𝑥, 𝑥, 𝑦, 𝑧) ∈ 𝑈 + 𝑊, showing that the inclusion 1.39 also holds in the
opposite direction. Hence
𝑈 + 𝑊 = {(𝑥, 𝑥, 𝑦, 𝑧) ∈ 𝐅4 ∶ 𝑥, 𝑦, 𝑧 ∈ 𝐅},
which shows that 𝑈 + 𝑊 is the set of elements of 𝐅4 whose first two coordinates
equal each other.
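The decomposition used in the second half of this example can be checked mechanically (a sketch; the helper name decompose is ours):

```python
def decompose(x, y, z):
    # Split (x, x, y, z) as an element of U plus an element of W, as in 1.38.
    u = (x, x, y, y)        # first two coordinates equal, last two equal: u in U
    w = (0, 0, 0, z - y)    # first three coordinates equal (all 0): w in W
    return u, w

x, y, z = 2, 5, 7
u, w = decompose(x, y, z)
assert tuple(uk + wk for uk, wk in zip(u, w)) == (x, x, y, z)
```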
The next result states that the sum of subspaces is a subspace, and is in fact the
smallest subspace containing all the summands (which means that every subspace
containing all the summands also contains the sum).
1.40 sum of subspaces is the smallest containing subspace
Suppose 𝑉1 , …, 𝑉𝑚 are subspaces of 𝑉. Then 𝑉1 + ⋯ + 𝑉𝑚 is the smallest
subspace of 𝑉 containing 𝑉1 , …, 𝑉𝑚 .
Proof The reader can verify that 𝑉1 + ⋯ + 𝑉𝑚 contains the additive identity 0
and is closed under addition and scalar multiplication. Thus 1.34 implies that
𝑉1 + ⋯ + 𝑉𝑚 is a subspace of 𝑉.
The subspaces 𝑉1 , …, 𝑉𝑚 are all contained in 𝑉1 + ⋯ + 𝑉𝑚 (to see this, consider sums 𝑣1 + ⋯ + 𝑣𝑚 where all except one of the 𝑣𝑘 ’s are 0). Conversely, every subspace of 𝑉 containing 𝑉1 , …, 𝑉𝑚 contains 𝑉1 + ⋯ + 𝑉𝑚 (because subspaces must contain all finite sums of their elements). Thus 𝑉1 + ⋯ + 𝑉𝑚 is the smallest subspace of 𝑉 containing 𝑉1 , …, 𝑉𝑚 .

Sums of subspaces in the theory of vector spaces are analogous to unions of subsets in set theory. Given two subspaces of a vector space, the smallest subspace containing them is their sum. Analogously, given two subsets of a set, the smallest subset containing them is their union.
Direct Sums
Suppose 𝑉1 , …, 𝑉𝑚 are subspaces of 𝑉. Every element of 𝑉1 + ⋯ + 𝑉𝑚 can be
written in the form
𝑣1 + ⋯ + 𝑣𝑚 ,
where each 𝑣𝑘 ∈ 𝑉𝑘 . Of special interest are cases in which each vector in
𝑉1 + ⋯ + 𝑉𝑚 can be represented in the form above in only one way. This situation
is so important that it gets a special name (direct sum) and a special symbol (⊕).
1.41 definition: direct sum, ⊕
Suppose 𝑉1 , …, 𝑉𝑚 are subspaces of 𝑉.
• The sum 𝑉1 + ⋯ + 𝑉𝑚 is called a direct sum if each element of 𝑉1 + ⋯ + 𝑉𝑚
can be written in only one way as a sum 𝑣1 + ⋯ + 𝑣𝑚 , where each 𝑣𝑘 ∈ 𝑉𝑘 .
• If 𝑉1 + ⋯ + 𝑉𝑚 is a direct sum, then 𝑉1 ⊕ ⋯ ⊕ 𝑉𝑚 denotes 𝑉1 + ⋯ + 𝑉𝑚 ,
with the ⊕ notation serving as an indication that this is a direct sum.
1.42 example: a direct sum of two subspaces
Suppose 𝑈 is the subspace of 𝐅3 of those vectors whose last coordinate equals 0,
and 𝑊 is the subspace of 𝐅3 of those vectors whose first two coordinates equal 0:
𝑈 = {(𝑥, 𝑦, 0) ∈ 𝐅3 ∶ 𝑥, 𝑦 ∈ 𝐅} and 𝑊 = {(0, 0, 𝑧) ∈ 𝐅3 ∶ 𝑧 ∈ 𝐅}.
Then 𝐅3 = 𝑈 ⊕ 𝑊, as you should verify.
1.43 example: a direct sum of multiple subspaces
Suppose 𝑉𝑘 is the subspace of 𝐅𝑛 of those vectors whose coordinates are all 0, except possibly in the 𝑘 th slot; for example, 𝑉2 = {(0, 𝑥, 0, …, 0) ∈ 𝐅𝑛 ∶ 𝑥 ∈ 𝐅}. Then
𝐅𝑛 = 𝑉1 ⊕ ⋯ ⊕ 𝑉𝑛 ,
as you should verify.
To produce ⊕ in TeX, type \oplus.
Sometimes nonexamples add to our understanding as much as examples.
1.44 example: a sum that is not a direct sum
Suppose
𝑉1 = {(𝑥, 𝑦, 0) ∈ 𝐅3 ∶ 𝑥, 𝑦 ∈ 𝐅},
𝑉2 = {(0, 0, 𝑧) ∈ 𝐅3 ∶ 𝑧 ∈ 𝐅},
𝑉3 = {(0, 𝑦, 𝑦) ∈ 𝐅3 ∶ 𝑦 ∈ 𝐅}.
Then 𝐅3 = 𝑉1 + 𝑉2 + 𝑉3 because every vector (𝑥, 𝑦, 𝑧) ∈ 𝐅3 can be written as
(𝑥, 𝑦, 𝑧) = (𝑥, 𝑦, 0) + (0, 0, 𝑧) + (0, 0, 0),
where the first vector on the right side is in 𝑉1 , the second vector is in 𝑉2 , and the
third vector is in 𝑉3 .
However, 𝐅3 does not equal the direct sum of 𝑉1 , 𝑉2 , 𝑉3 , because the vector
(0, 0, 0) can be written in more than one way as a sum 𝑣1 + 𝑣2 + 𝑣3 , with each
𝑣𝑘 ∈ 𝑉𝑘 . Specifically, we have
(0, 0, 0) = (0, 1, 0) + (0, 0, 1) + (0, −1, −1)
and, of course,
(0, 0, 0) = (0, 0, 0) + (0, 0, 0) + (0, 0, 0),
where the first vector on the right side of each equation above is in 𝑉1 , the second
vector is in 𝑉2 , and the third vector is in 𝑉3 . Thus the sum 𝑉1 + 𝑉2 + 𝑉3 is not a
direct sum.
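The nonuniqueness displayed above reduces to direct arithmetic; here is a minimal sketch (an added illustration, with 𝐅 = 𝐑) verifying the nontrivial representation of 0 and the membership of each vector in its subspace.

```python
# The nontrivial representation of (0, 0, 0) from the example.
v1, v2, v3 = (0, 1, 0), (0, 0, 1), (0, -1, -1)

# v1 + v2 + v3 really is (0, 0, 0) ...
assert tuple(a + b + c for a, b, c in zip(v1, v2, v3)) == (0, 0, 0)

# ... and each vector lies in the required subspace:
assert v1[2] == 0                     # v1 in V1 = {(x, y, 0)}
assert v2[0] == v2[1] == 0            # v2 in V2 = {(0, 0, z)}
assert v3[0] == 0 and v3[1] == v3[2]  # v3 in V3 = {(0, y, y)}
```

Since this representation differs from the all-zeros one, 0 has two representations, so the sum is not direct.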
The definition of direct sum requires every vector in the sum to have a unique representation as an appropriate sum. The next result shows that when deciding whether a sum of subspaces is a direct sum, we only need to consider whether 0 can be uniquely written as an appropriate sum.
The symbol ⊕, which is a plus sign inside a circle, reminds us that we are dealing with a special type of sum of subspaces—each element in the direct sum can be represented in only one way as a sum of elements from the specified subspaces.
1.45 condition for a direct sum
Suppose 𝑉1 , …, 𝑉𝑚 are subspaces of 𝑉. Then 𝑉1 + ⋯ + 𝑉𝑚 is a direct sum if
and only if the only way to write 0 as a sum 𝑣1 + ⋯ + 𝑣𝑚 , where each 𝑣𝑘 ∈ 𝑉𝑘 ,
is by taking each 𝑣𝑘 equal to 0.
Proof First suppose 𝑉1 + ⋯ + 𝑉𝑚 is a direct sum. Then the definition of direct
sum implies that the only way to write 0 as a sum 𝑣1 + ⋯ + 𝑣𝑚 , where each 𝑣𝑘 ∈ 𝑉𝑘 ,
is by taking each 𝑣𝑘 equal to 0.
Now suppose that the only way to write 0 as a sum 𝑣1 + ⋯ + 𝑣𝑚 , where each
𝑣𝑘 ∈ 𝑉𝑘 , is by taking each 𝑣𝑘 equal to 0. To show that 𝑉1 + ⋯ + 𝑉𝑚 is a direct
sum, let 𝑣 ∈ 𝑉1 + ⋯ + 𝑉𝑚 . We can write
𝑣 = 𝑣1 + ⋯ + 𝑣𝑚
for some 𝑣1 ∈ 𝑉1 , …, 𝑣𝑚 ∈ 𝑉𝑚 . To show that this representation is unique,
suppose we also have
𝑣 = 𝑢 1 + ⋯ + 𝑢𝑚 ,
where 𝑢1 ∈ 𝑉1 , …, 𝑢𝑚 ∈ 𝑉𝑚 . Subtracting these two equations, we have
0 = (𝑣1 − 𝑢1 ) + ⋯ + (𝑣𝑚 − 𝑢𝑚 ).
Because 𝑣1 − 𝑢1 ∈ 𝑉1 , …, 𝑣𝑚 − 𝑢𝑚 ∈ 𝑉𝑚 , the equation above implies that each
𝑣𝑘 − 𝑢𝑘 equals 0. Thus 𝑣1 = 𝑢1 , …, 𝑣𝑚 = 𝑢𝑚 , as desired.
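Result 1.45 also gives a computable test. If each 𝑉𝑘 is the column space of a matrix 𝐵𝑘 whose columns form a basis of 𝑉𝑘 , then writing 0 = 𝑣1 + ⋯ + 𝑣𝑚 with 𝑣𝑘 = 𝐵𝑘 𝑐𝑘 corresponds to a null vector of the stacked matrix [𝐵1 ⋯ 𝐵𝑚 ], so the sum is direct exactly when that matrix has full column rank. A minimal sketch of this test (an added illustration, assuming 𝐅 = 𝐑, basis matrices, and NumPy):

```python
import numpy as np

def is_direct(*bases):
    """Sum of the column spaces is direct iff the stacked basis matrix has
    full column rank, i.e. 0 has only the trivial representation (1.45)."""
    B = np.hstack(bases)
    return np.linalg.matrix_rank(B) == B.shape[1]

# Example 1.42: U = {(x, y, 0)} and W = {(0, 0, z)} give a direct sum.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
W = np.array([[0.0], [0.0], [1.0]])
assert is_direct(U, W)

# Example 1.44: adding V3 = {(0, y, y)} destroys directness.
V3 = np.array([[0.0], [1.0], [1.0]])
assert not is_direct(U, W, V3)
```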
The next result gives a simple condition for testing whether a sum of two subspaces is a direct sum.
The symbol ⟺ used below means “if and only if”; this symbol could also be read to mean “is equivalent to”.
1.46 direct sum of two subspaces
Suppose 𝑈 and 𝑊 are subspaces of 𝑉. Then
𝑈 + 𝑊 is a direct sum ⟺ 𝑈 ∩ 𝑊 = {0}.
Proof First suppose that 𝑈 + 𝑊 is a direct sum. If 𝑣 ∈ 𝑈 ∩ 𝑊, then 0 = 𝑣 + (−𝑣),
where 𝑣 ∈ 𝑈 and −𝑣 ∈ 𝑊. By the unique representation of 0 as the sum of a
vector in 𝑈 and a vector in 𝑊, we have 𝑣 = 0. Thus 𝑈 ∩ 𝑊 = {0}, completing
the proof in one direction.
To prove the other direction, now suppose 𝑈 ∩ 𝑊 = {0}. To prove that 𝑈 + 𝑊
is a direct sum, suppose 𝑢 ∈ 𝑈, 𝑤 ∈ 𝑊, and
0 = 𝑢 + 𝑤.
To complete the proof, we only need to show that 𝑢 = 𝑤 = 0 (by 1.45). The
equation above implies that 𝑢 = −𝑤 ∈ 𝑊. Thus 𝑢 ∈ 𝑈 ∩ 𝑊. Hence 𝑢 = 0, which
by the equation above implies that 𝑤 = 0, completing the proof.
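For two subspaces, the condition 𝑈 ∩ 𝑊 = {0} can itself be computed: if 𝑈 and 𝑊 are the column spaces of basis matrices, then a null vector (𝑎, 𝑏) of the block matrix [𝑈 −𝑊] corresponds to an intersection element 𝑈𝑎 = 𝑊𝑏, so the intersection is {0} exactly when that block matrix has full column rank. A minimal sketch (an added illustration, assuming 𝐅 = 𝐑, basis matrices, and NumPy):

```python
import numpy as np

def intersection_dim(U, W):
    """dim(U ∩ W) for the column spaces of basis matrices U and W.
    Solutions of U a = W b are exactly the null vectors of [U  -W]."""
    M = np.hstack([U, -W])
    return M.shape[1] - np.linalg.matrix_rank(M)

U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # {(x, y, 0)}
W = np.array([[0.0], [0.0], [1.0]])                 # {(0, 0, z)}
assert intersection_dim(U, W) == 0   # U ∩ W = {0}: U + W is direct by 1.46

W2 = np.array([[1.0], [1.0], [0.0]])                # the line {(y, y, 0)}, inside U
assert intersection_dim(U, W2) == 1  # nontrivial intersection: not a direct sum
```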
The result above deals only with the case of two subspaces. When asking about a possible direct sum with more than two subspaces, it is not enough to test that each pair of the subspaces intersect only at 0. To see this, consider Example 1.44. In that nonexample of a direct sum, we have 𝑉1 ∩ 𝑉2 = 𝑉1 ∩ 𝑉3 = 𝑉2 ∩ 𝑉3 = {0}.
Sums of subspaces are analogous to unions of subsets. Similarly, direct sums of subspaces are analogous to disjoint unions of subsets. No two subspaces of a vector space can be disjoint, because both contain 0. So disjointness is replaced, at least in the case of two subspaces, with the requirement that the intersection equal {0}.
Exercises 1C
1 For each of the following subsets of 𝐅3, determine whether it is a subspace
of 𝐅3 .
(a) {(𝑥1 , 𝑥2 , 𝑥3 ) ∈ 𝐅3 ∶ 𝑥1 + 2𝑥2 + 3𝑥3 = 0}
(b) {(𝑥1 , 𝑥2 , 𝑥3 ) ∈ 𝐅3 ∶ 𝑥1 + 2𝑥2 + 3𝑥3 = 4}
(c) {(𝑥1 , 𝑥2 , 𝑥3 ) ∈ 𝐅3 ∶ 𝑥1 𝑥2 𝑥3 = 0}
(d) {(𝑥1 , 𝑥2 , 𝑥3 ) ∈ 𝐅3 ∶ 𝑥1 = 5𝑥3 }
2 Verify all assertions about subspaces in Example 1.35.
3 Show that the set of differentiable real-valued functions 𝑓 on the interval
(−4, 4) such that 𝑓 ′(−1) = 3 𝑓 (2) is a subspace of 𝐑(−4, 4).
4 Suppose 𝑏 ∈ 𝐑 . Show that the set of continuous real-valued functions 𝑓 on
the interval [0, 1] such that ∫₀¹ 𝑓 = 𝑏 is a subspace of 𝐑[0, 1] if and only if
𝑏 = 0.
5 Is 𝐑2 a subspace of the complex vector space 𝐂2 ?
6 (a) Is {(𝑎, 𝑏, 𝑐) ∈ 𝐑3 ∶ 𝑎3 = 𝑏3 } a subspace of 𝐑3 ?
(b) Is {(𝑎, 𝑏, 𝑐) ∈ 𝐂3 ∶ 𝑎3 = 𝑏3 } a subspace of 𝐂3 ?
7 Prove or give a counterexample: If 𝑈 is a nonempty subset of 𝐑2 such that
𝑈 is closed under addition and under taking additive inverses (meaning
−𝑢 ∈ 𝑈 whenever 𝑢 ∈ 𝑈), then 𝑈 is a subspace of 𝐑2.
8 Give an example of a nonempty subset 𝑈 of 𝐑2 such that 𝑈 is closed under
scalar multiplication, but 𝑈 is not a subspace of 𝐑2.
9 A function 𝑓 ∶ 𝐑 → 𝐑 is called periodic if there exists a positive number 𝑝
such that 𝑓 (𝑥) = 𝑓 (𝑥 + 𝑝) for all 𝑥 ∈ 𝐑 . Is the set of periodic functions
from 𝐑 to 𝐑 a subspace of 𝐑𝐑 ? Explain.
10 Suppose 𝑉1 and 𝑉2 are subspaces of 𝑉. Prove that the intersection 𝑉1 ∩ 𝑉2
is a subspace of 𝑉.
11 Prove that the intersection of every collection of subspaces of 𝑉 is a subspace
of 𝑉.
12 Prove that the union of two subspaces of 𝑉 is a subspace of 𝑉 if and only if
one of the subspaces is contained in the other.
13 Prove that the union of three subspaces of 𝑉 is a subspace of 𝑉 if and only
if one of the subspaces contains the other two.
This exercise is surprisingly harder than Exercise 12, possibly because this
exercise is not true if we replace 𝐅 with a field containing only two elements.
14 Suppose
𝑈 = {(𝑥, −𝑥, 2𝑥) ∈ 𝐅3 ∶ 𝑥 ∈ 𝐅} and 𝑊 = {(𝑥, 𝑥, 2𝑥) ∈ 𝐅3 ∶ 𝑥 ∈ 𝐅}.
Describe 𝑈 + 𝑊 using symbols, and also give a description of 𝑈 + 𝑊 that
uses no symbols.
15 Suppose 𝑈 is a subspace of 𝑉. What is 𝑈 + 𝑈?
16 Is the operation of addition on the subspaces of 𝑉 commutative? In other
words, if 𝑈 and 𝑊 are subspaces of 𝑉, is 𝑈 + 𝑊 = 𝑊 + 𝑈?
17 Is the operation of addition on the subspaces of 𝑉 associative? In other
words, if 𝑉1 , 𝑉2 , 𝑉3 are subspaces of 𝑉, is
(𝑉1 + 𝑉2 ) + 𝑉3 = 𝑉1 + (𝑉2 + 𝑉3 )?
18 Does the operation of addition on the subspaces of 𝑉 have an additive
identity? Which subspaces have additive inverses?
19 Prove or give a counterexample: If 𝑉1 , 𝑉2 , 𝑈 are subspaces of 𝑉 such that
𝑉1 + 𝑈 = 𝑉2 + 𝑈,
then 𝑉1 = 𝑉2 .
20 Suppose
𝑈 = {(𝑥, 𝑥, 𝑦, 𝑦) ∈ 𝐅4 ∶ 𝑥, 𝑦 ∈ 𝐅}.
Find a subspace 𝑊 of 𝐅4 such that 𝐅4 = 𝑈 ⊕ 𝑊.
21 Suppose
𝑈 = {(𝑥, 𝑦, 𝑥 + 𝑦, 𝑥 − 𝑦, 2𝑥) ∈ 𝐅5 ∶ 𝑥, 𝑦 ∈ 𝐅}.
Find a subspace 𝑊 of 𝐅5 such that 𝐅5 = 𝑈 ⊕ 𝑊.
22 Suppose
𝑈 = {(𝑥, 𝑦, 𝑥 + 𝑦, 𝑥 − 𝑦, 2𝑥) ∈ 𝐅5 ∶ 𝑥, 𝑦 ∈ 𝐅}.
Find three subspaces 𝑊1 , 𝑊2 , 𝑊3 of 𝐅5, none of which equals {0}, such that
𝐅5 = 𝑈 ⊕ 𝑊1 ⊕ 𝑊2 ⊕ 𝑊3 .
23 Prove or give a counterexample: If 𝑉1 , 𝑉2 , 𝑈 are subspaces of 𝑉 such that
𝑉 = 𝑉1 ⊕ 𝑈 and 𝑉 = 𝑉2 ⊕ 𝑈,
then 𝑉1 = 𝑉2 .
Hint: When trying to discover whether a conjecture in linear algebra is true
or false, it is often useful to start by experimenting in 𝐅2.
24 A function 𝑓 ∶ 𝐑 → 𝐑 is called even if
𝑓 (−𝑥) = 𝑓 (𝑥)
for all 𝑥 ∈ 𝐑 . A function 𝑓 ∶ 𝐑 → 𝐑 is called odd if
𝑓 (−𝑥) = − 𝑓 (𝑥)
for all 𝑥 ∈ 𝐑 . Let 𝑉e denote the set of real-valued even functions on 𝐑
and let 𝑉o denote the set of real-valued odd functions on 𝐑 . Show that
𝐑𝐑 = 𝑉e ⊕ 𝑉o .
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0
International License (https://creativecommons.org/licenses/by-nc/4.0), which permits any noncommercial use,
sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit
to original author and source, provide a link to the Creative Commons license, and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s
Creative Commons license, unless indicated otherwise in a credit line to the material. If
material is not included in the chapter’s Creative Commons license and your intended use
is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder.
Chapter 2
Finite-Dimensional Vector Spaces
In the last chapter we learned about vector spaces. Linear algebra focuses not
on arbitrary vector spaces, but on finite-dimensional vector spaces, which we
introduce in this chapter.
We begin this chapter by considering linear combinations of lists of vectors.
This leads us to the crucial concept of linear independence. The linear dependence
lemma will become one of our most useful tools.
A list of vectors in a vector space that is small enough to be linearly independent
and big enough so the linear combinations of the list fill up the vector space is
called a basis of the vector space. We will see that every basis of a vector space
has the same length, which will allow us to define the dimension of a vector space.
This chapter ends with a formula for the dimension of the sum of two subspaces.
standing assumptions for this chapter
• 𝐅 denotes 𝐑 or 𝐂 .
• 𝑉 denotes a vector space over 𝐅.
The main building of the Institute for Advanced Study, in Princeton, New Jersey.
Paul Halmos (1916–2006) wrote the first modern linear algebra book in this building.
Halmos’s linear algebra book was published in 1942 (second edition published in 1958).
The title of Halmos’s book was the same as the title of this chapter.
© Sheldon Axler 2024
S. Axler, Linear Algebra Done Right, Undergraduate Texts in Mathematics,
https://doi.org/10.1007/978-3-031-41026-0_2
2A Span and Linear Independence
We have been writing lists of numbers surrounded by parentheses, and we will
continue to do so for elements of 𝐅𝑛 ; for example, (2, −7, 8) ∈ 𝐅3. However, now
we need to consider lists of vectors (which may be elements of 𝐅𝑛 or of other
vector spaces). To avoid confusion, we will usually write lists of vectors without
surrounding parentheses. For example, (4, 1, 6), (9, 5, 7) is a list of length two of
vectors in 𝐑3.
2.1 notation: list of vectors
We will usually write lists of vectors without surrounding parentheses.
Linear Combinations and Span
A sum of scalar multiples of the vectors in a list is called a linear combination of
the list. Here is the formal definition.
2.2 definition: linear combination
A linear combination of a list 𝑣1 , …, 𝑣𝑚 of vectors in 𝑉 is a vector of the form
𝑎1 𝑣1 + ⋯ + 𝑎𝑚 𝑣𝑚 ,
where 𝑎1 , …, 𝑎𝑚 ∈ 𝐅.
2.3 example: linear combinations in 𝐑3
• (17, −4, 2) is a linear combination of (2, 1, −3), (1, −2, 4), which is a list of
length two of vectors in 𝐑3, because
(17, −4, 2) = 6(2, 1, −3) + 5(1, −2, 4).
• (17, −4, 5) is not a linear combination of (2, 1, −3), (1, −2, 4), which is a list
of length two of vectors in 𝐑3, because there do not exist numbers 𝑎1 , 𝑎2 ∈ 𝐅
such that
(17, −4, 5) = 𝑎1 (2, 1, −3) + 𝑎2 (1, −2, 4).
In other words, the system of equations
17 = 2𝑎1 + 𝑎2
−4 = 𝑎1 − 2𝑎2
5 = −3𝑎1 + 4𝑎2
has no solutions (as you should verify).
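To verify the claim, eliminate 𝑎1 from the first two equations and test the result in the third. The sketch below is an added illustration of that arithmetic:

```python
# System: 17 = 2*a1 + a2,  -4 = a1 - 2*a2,  5 = -3*a1 + 4*a2.
# Subtract 2*(second equation) from the first: 5*a2 = 17 - 2*(-4) = 25.
a2 = (17 - 2 * (-4)) / 5
a1 = -4 + 2 * a2          # back-substitute into the second equation
assert (a1, a2) == (6.0, 5.0)

# The unique candidate (a1, a2) = (6, 5) fails the third equation:
assert -3 * a1 + 4 * a2 == 2.0   # consistent with the first bullet (last coordinate 2)
assert -3 * a1 + 4 * a2 != 5.0   # so (17, -4, 5) is not a linear combination
```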
2.4 definition: span
The set of all linear combinations of a list of vectors 𝑣1 , …, 𝑣𝑚 in 𝑉 is called
the span of 𝑣1 , …, 𝑣𝑚 , denoted by span(𝑣1 , …, 𝑣𝑚 ). In other words,
span(𝑣1 , …, 𝑣𝑚 ) = {𝑎1 𝑣1 + ⋯ + 𝑎𝑚 𝑣𝑚 ∶ 𝑎1 , …, 𝑎𝑚 ∈ 𝐅}.
The span of the empty list ( ) is defined to be {0}.
2.5 example: span
The previous example shows that in 𝐅3,
• (17, −4, 2) ∈ span((2, 1, −3), (1, −2, 4));
• (17, −4, 5) ∉ span((2, 1, −3), (1, −2, 4)).
Some mathematicians use the term linear span, which means the same as
span.
2.6 span is the smallest containing subspace
The span of a list of vectors in 𝑉 is the smallest subspace of 𝑉 containing all
vectors in the list.
Proof Suppose 𝑣1 , …, 𝑣𝑚 is a list of vectors in 𝑉.
First we show that span(𝑣1 , …, 𝑣𝑚 ) is a subspace of 𝑉. The additive identity is
in span(𝑣1 , …, 𝑣𝑚 ) because
0 = 0𝑣1 + ⋯ + 0𝑣𝑚 .
Also, span(𝑣1 , …, 𝑣𝑚 ) is closed under addition because
(𝑎1 𝑣1 + ⋯ + 𝑎𝑚 𝑣𝑚 ) + (𝑐1 𝑣1 + ⋯ + 𝑐𝑚 𝑣𝑚 ) = (𝑎1 + 𝑐1 )𝑣1 + ⋯ + (𝑎𝑚 + 𝑐𝑚 )𝑣𝑚 .
Furthermore, span(𝑣1 , …, 𝑣𝑚 ) is closed under scalar multiplication because
𝜆(𝑎1 𝑣1 + ⋯ + 𝑎𝑚 𝑣𝑚 ) = 𝜆𝑎1 𝑣1 + ⋯ + 𝜆𝑎𝑚 𝑣𝑚 .
Thus span(𝑣1 , …, 𝑣𝑚 ) is a subspace of 𝑉 (by 1.34).
Each 𝑣𝑘 is a linear combination of 𝑣1 , …, 𝑣𝑚 (to show this, set 𝑎𝑘 = 1 and let
the other 𝑎’s in 2.2 equal 0). Thus span(𝑣1 , …, 𝑣𝑚 ) contains each 𝑣𝑘 . Conversely,
because subspaces are closed under scalar multiplication and addition, every sub-
space of 𝑉 that contains each 𝑣𝑘 contains span(𝑣1 , …, 𝑣𝑚 ). Thus span(𝑣1 , …, 𝑣𝑚 )
is the smallest subspace of 𝑉 containing all the vectors 𝑣1 , …, 𝑣𝑚 .
2.7 definition: spans
If span(𝑣1 , …, 𝑣𝑚 ) equals 𝑉, we say that the list 𝑣1 , …, 𝑣𝑚 spans 𝑉.
2.8 example: a list that spans 𝐅𝑛
Suppose 𝑛 is a positive integer. We want to show that
(1, 0, …, 0), (0, 1, 0, …, 0), …, (0, …, 0, 1)
spans 𝐅𝑛. Here the 𝑘 th vector in the list above has 1 in the 𝑘 th slot and 0 in all other
slots.
Suppose (𝑥1 , …, 𝑥𝑛 ) ∈ 𝐅𝑛. Then
(𝑥1 , …, 𝑥𝑛 ) = 𝑥1 (1, 0, …, 0) + 𝑥2 (0, 1, 0, …, 0) + ⋯ + 𝑥𝑛 (0, …, 0, 1).
Thus (𝑥1 , …, 𝑥𝑛 ) ∈ span((1, 0, …, 0), (0, 1, 0, …, 0), …, (0, …, 0, 1)), as desired.
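The displayed identity can be checked mechanically coordinate by coordinate; here is a small added sketch (with 𝐅 = 𝐑 and 𝑛 = 5):

```python
x = (3, -1, 4, 1, 5)  # an arbitrary element of R^5
n = len(x)

def e(k):
    """The k-th vector of the spanning list: 1 in slot k, 0 elsewhere."""
    return tuple(1 if j == k else 0 for j in range(n))

# (x1, ..., xn) = x1*e(0) + ... + xn*e(n-1), checked coordinate by coordinate:
recombined = tuple(sum(x[k] * e(k)[j] for k in range(n)) for j in range(n))
assert recombined == x
```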
Now we can make one of the key definitions in linear algebra.
2.9 definition: finite-dimensional vector space
A vector space is called finite-dimensional if some list of vectors in it spans
the space.
Example 2.8 above shows that 𝐅𝑛 is a finite-dimensional vector space for every positive integer 𝑛.
Recall that by definition every list has finite length.
The definition of a polynomial is no doubt already familiar to you.
2.10 definition: polynomial, 𝒫(𝐅)
• A function 𝑝 ∶ 𝐅 → 𝐅 is called a polynomial with coefficients in 𝐅 if there
exist 𝑎0 , …, 𝑎𝑚 ∈ 𝐅 such that
𝑝(𝑧) = 𝑎0 + 𝑎1 𝑧 + 𝑎2 𝑧2 + ⋯ + 𝑎𝑚 𝑧𝑚
for all 𝑧 ∈ 𝐅.
• 𝒫(𝐅) is the set of all polynomials with coefficients in 𝐅.
With the usual operations of addition and scalar multiplication, 𝒫(𝐅) is a
vector space over 𝐅, as you should verify. Hence 𝒫(𝐅) is a subspace of 𝐅𝐅, the
vector space of functions from 𝐅 to 𝐅.
If a polynomial (thought of as a function from 𝐅 to 𝐅 ) is represented by two
sets of coefficients, then subtracting one representation of the polynomial from
the other produces a polynomial that is identically zero as a function on 𝐅 and
hence has all zero coefficients (if you are unfamiliar with this fact, just believe
it for now; we will prove it later—see 4.8). Conclusion: the coefficients of a
polynomial are uniquely determined by the polynomial. Thus the next definition
uniquely defines the degree of a polynomial.
2.11 definition: degree of a polynomial, deg 𝑝
• A polynomial 𝑝 ∈ 𝒫(𝐅) is said to have degree 𝑚 if there exist scalars
𝑎0 , 𝑎1 , …, 𝑎𝑚 ∈ 𝐅 with 𝑎𝑚 ≠ 0 such that for every 𝑧 ∈ 𝐅, we have
𝑝(𝑧) = 𝑎0 + 𝑎1 𝑧 + ⋯ + 𝑎𝑚 𝑧𝑚.
• The polynomial that is identically 0 is said to have degree −∞.
• The degree of a polynomial 𝑝 is denoted by deg 𝑝.
In the next definition, we use the convention that −∞ < 𝑚, which means that
the polynomial 0 is in 𝒫𝑚 (𝐅).
2.12 notation: 𝒫𝑚 (𝐅)
For 𝑚 a nonnegative integer, 𝒫𝑚 (𝐅) denotes the set of all polynomials with
coefficients in 𝐅 and degree at most 𝑚.
If 𝑚 is a nonnegative integer, then 𝒫𝑚 (𝐅) = span(1, 𝑧, …, 𝑧𝑚 ) [here we slightly
abuse notation by letting 𝑧𝑘 denote a function]. Thus 𝒫𝑚 (𝐅) is a finite-dimensional
vector space for each nonnegative integer 𝑚.
2.13 definition: infinite-dimensional vector space
A vector space is called infinite-dimensional if it is not finite-dimensional.
2.14 example: 𝒫(𝐅) is infinite-dimensional.
Consider any list of elements of 𝒫(𝐅). Let 𝑚 denote the highest degree of the
polynomials in this list. Then every polynomial in the span of this list has degree
at most 𝑚. Thus 𝑧^(𝑚 + 1), a polynomial of degree 𝑚 + 1, is not in the span of our list. Hence no list spans 𝒫(𝐅).
Thus 𝒫(𝐅) is infinite-dimensional.
Linear Independence
Suppose 𝑣1 , …, 𝑣𝑚 ∈ 𝑉 and 𝑣 ∈ span(𝑣1 , …, 𝑣𝑚 ). By the definition of span, there
exist 𝑎1 , …, 𝑎𝑚 ∈ 𝐅 such that
𝑣 = 𝑎1 𝑣1 + ⋯ + 𝑎𝑚 𝑣𝑚 .
Consider the question of whether the choice of scalars in the equation above is
unique. Suppose 𝑐1 , …, 𝑐𝑚 is another set of scalars such that
𝑣 = 𝑐1 𝑣1 + ⋯ + 𝑐𝑚 𝑣𝑚 .
Subtracting the last two equations, we have
0 = (𝑎1 − 𝑐1 )𝑣1 + ⋯ + (𝑎𝑚 − 𝑐𝑚 )𝑣𝑚 .
Thus we have written 0 as a linear combination of 𝑣1 , …, 𝑣𝑚 . If the only way
to do this is by using 0 for all the scalars in the linear combination, then each
𝑎𝑘 − 𝑐𝑘 equals 0, which means that each 𝑎𝑘 equals 𝑐𝑘 (and thus the choice of
scalars was indeed unique). This situation is so important that we give it a special
name—linear independence—which we now define.
2.15 definition: linearly independent
• A list 𝑣1 , …, 𝑣𝑚 of vectors in 𝑉 is called linearly independent if the only
choice of 𝑎1 , …, 𝑎𝑚 ∈ 𝐅 that makes
𝑎1 𝑣1 + ⋯ + 𝑎𝑚 𝑣𝑚 = 0
is 𝑎1 = ⋯ = 𝑎𝑚 = 0.
• The empty list ( ) is also declared to be linearly independent.
The reasoning above shows that 𝑣1 , …, 𝑣𝑚 is linearly independent if and only if
each vector in span(𝑣1 , …, 𝑣𝑚 ) has only one representation as a linear combination
of 𝑣1 , …, 𝑣𝑚 .
2.16 example: linearly independent lists
(a) To see that the list (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0) is linearly independent in
𝐅4, suppose 𝑎1 , 𝑎2 , 𝑎3 ∈ 𝐅 and
𝑎1 (1, 0, 0, 0) + 𝑎2 (0, 1, 0, 0) + 𝑎3 (0, 0, 1, 0) = (0, 0, 0, 0).
Thus
(𝑎1 , 𝑎2 , 𝑎3 , 0) = (0, 0, 0, 0).
Hence 𝑎1 = 𝑎2 = 𝑎3 = 0. Thus the list (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0) is
linearly independent in 𝐅4.
(b) Suppose 𝑚 is a nonnegative integer. To see that the list 1, 𝑧, …, 𝑧𝑚 is linearly
independent in 𝒫(𝐅), suppose 𝑎0 , 𝑎1 , …, 𝑎𝑚 ∈ 𝐅 and
𝑎0 + 𝑎1 𝑧 + ⋯ + 𝑎𝑚 𝑧𝑚 = 0,
where we think of both sides as elements of 𝒫(𝐅). Then
𝑎0 + 𝑎1 𝑧 + ⋯ + 𝑎𝑚 𝑧𝑚 = 0
for all 𝑧 ∈ 𝐅. As discussed earlier (and as follows from 4.8), this implies
that 𝑎0 = 𝑎1 = ⋯ = 𝑎𝑚 = 0. Thus 1, 𝑧, …, 𝑧𝑚 is a linearly independent list in
𝒫(𝐅).
(c) A list of length one in a vector space is linearly independent if and only if the
vector in the list is not 0.
(d) A list of length two in a vector space is linearly independent if and only if
neither of the two vectors in the list is a scalar multiple of the other.
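The definition of linear independence translates directly into a rank computation: 𝑎1 𝑣1 + ⋯ + 𝑎𝑚 𝑣𝑚 = 0 has only the trivial solution exactly when the matrix with the 𝑣𝑘 ’s as columns has full column rank. A minimal sketch checking the examples above (an added illustration, assuming 𝐅 = 𝐑 and NumPy):

```python
import numpy as np

def independent(vectors):
    """True iff a1*v1 + ... + am*vm = 0 forces a1 = ... = am = 0,
    i.e. the matrix with the vectors as columns has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

# Example 2.16(a): three vectors in F^4.
assert independent([(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)])

# Example 2.16(c): a length-one list is independent iff the vector is not 0.
assert independent([(2, -7, 8)]) and not independent([(0, 0, 0)])

# Example 2.16(d): a length-two list fails exactly when one vector is a
# scalar multiple of the other.
assert not independent([(1, 2), (2, 4)])
assert independent([(1, 2), (2, 5)])
```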
1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for
the “Right of Replacement or Refund” described in paragraph 1.F.3,
the Project Gutenberg Literary Archive Foundation, the owner of the
Project Gutenberg™ trademark, and any other party distributing a
Project Gutenberg™ electronic work under this agreement, disclaim
all liability to you for damages, costs and expenses, including legal
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR
NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR
BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK
OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL
NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF
YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving
it, you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or
entity that provided you with the defective work may elect to provide
a replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.
1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation,
the trademark owner, any agent or employee of the Foundation,
anyone providing copies of Project Gutenberg™ electronic works in
accordance with this agreement, and any volunteers associated with
the production, promotion and distribution of Project Gutenberg™
electronic works, harmless from all liability, costs and expenses,
including legal fees, that arise directly or indirectly from any of the
following which you do or cause to occur: (a) distribution of this or
any Project Gutenberg™ work, (b) alteration, modification, or
additions or deletions to any Project Gutenberg™ work, and (c) any
Defect you cause.
Section 2. Information about the Mission
of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.
Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.
The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact
Section 4. Information about Donations to
the Project Gutenberg Literary Archive
Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many
small donations ($1 to $5,000) are particularly important to
maintaining tax exempt status with the IRS.
The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.
While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.
International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.
Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.
Section 5. General Information About
Project Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.
Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.
Most people start at our website which has the main PG search
facility: www.gutenberg.org.
This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.