Linear Algebra Theorems
Theorem 1.1 Uniqueness of Reduced Echelon Form
Each matrix is row equivalent to one and only one reduced echelon matrix.
Theorem 1.2 Existence and Uniqueness Theorem
A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column – that is, if
and only if an echelon form of the augmented matrix has no row of the form
[0 ··· 0 b] with nonzero b.
If a linear system is consistent, then the solution set contains either (i) a unique solution, when there are no free variables,
or (ii) infinitely many solutions, when there is at least one free variable.
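Example (a numerical sketch, not part of the theorem; the system below is an arbitrary example and sympy is assumed available): row reduce the augmented matrix and test whether its rightmost column is a pivot column.
from sympy import Matrix

# Augmented matrix [A b] of the system x1 - 2x2 = 1, 2x1 - 4x2 = 3.
M = Matrix([[1, -2, 1],
            [2, -4, 3]])
_, pivots = M.rref()          # rref() also returns the pivot column indices
rightmost = M.cols - 1
print("consistent:", rightmost not in pivots)   # False: an echelon form has a row [0 0 1]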
Theorem 1.3
If A is an m × n matrix, with columns a1 , a2 , . . . , an , and if b is in Rm , the matrix equation
Ax = b
has the same solution set as the vector equation
x1 a1 + x2 a2 + · · · + xn an = b
which, in turn, has the same solution set as the system of linear equations whose augmented matrix is
[a1 a2 ··· an b] .
Theorem 1.4
Let A be an m × n matrix. Then the following statements are logically equivalent. That is, for a particular A, either they
are all true statements or they are all false.
a. For each b in Rm , the equation Ax = b has a solution.
b. Each b in Rm is a linear combination of the columns of A.
c. The columns of A span Rm .
d. A has a pivot position in every row.
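Condition (d) is easy to test numerically. A minimal sketch (arbitrary example matrix, sympy assumed):
from sympy import Matrix

A = Matrix([[1, 0, 2],
            [0, 1, 3]])        # 2 x 3, so here m = 2
_, pivots = A.rref()
print("pivot in every row:", len(pivots) == A.rows)   # True, so the columns span R^2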
Theorem 1.5
If A is an m × n matrix, u and v are vectors in Rn , and c is a scalar, then:
a. A(u + v) = Au + Av
b. A(cu) = c(Au)
Theorem 1.6
Suppose the equation Ax = b is consistent for some given b, and let p be a solution. Then the solution set of Ax = b is the
set of all vectors of the form w = p + vh , where vh is any solution of the homogeneous equation Ax = 0.
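A numerical illustration of this structure (the values below are an arbitrary example, not from the text): if A p = b and A vh = 0, then every vector p + t vh also solves Ax = b.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])       # rank 1, so Ax = 0 has nontrivial solutions
b = np.array([6., 12.])
p = np.array([1., 1., 1.])         # a particular solution: A @ p = b
vh = np.array([-2., 1., 0.])       # a homogeneous solution: A @ vh = 0
print(np.allclose(A @ (p + 5 * vh), b))   # True for any multiple of vh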
Theorem A
Let S = {v1 , . . . , vp } be a set of vectors in Rn . The following statements are equivalent.
• S is a linearly independent set.
• The equation x1 v1 + · · · + xp vp = 0 has only the trivial solution.
• The homogeneous linear system [v1 . . . vp ]x = 0 has a unique solution (the trivial one).
• In the matrix [v1 . . . vp ], every column is a pivot column.
• No vector vi lies in the span of the remaining vectors.
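These conditions can all be checked at once through the rank: S is linearly independent exactly when rank [v1 . . . vp ] = p. A sketch (the vectors are chosen for illustration):
import numpy as np

v1, v2, v3 = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([1., 1., 0.])
V = np.column_stack([v1, v2, v3])
print("independent:", np.linalg.matrix_rank(V) == V.shape[1])   # False: v3 = v1 + v2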
Theorem 1.7 Characterization of Linearly Dependent Sets
An indexed set S = {v1 , v2 , . . . , vp } of two or more vectors is linearly dependent if and only if at least one of the vectors in
S is a linear combination of the others. In fact, if S is linearly dependent and v1 ≠ 0, then some vj (with j > 1) is a linear
combination of the preceding vectors, v1 , v2 , . . . , vj−1 .
Theorem 1.8
If a set contains more vectors than there are entries in each vector, then the set is linearly dependent. That is, any set
{v1 , v2 , . . . , vp } in Rn is linearly dependent if p > n.
Theorem 1.9
If a set S = {v1 , v2 , . . . , vp } in Rn contains the zero vector, then the set is linearly dependent.
Theorem 1.10
Let T : Rn → Rm be a linear transformation. Then there exists a unique matrix A such that
T (x) = Ax for all x in Rn .
In fact, A is the m × n matrix whose jth column is the vector T (ej ), where ej is the jth column of the identity matrix in Rn :
A = [T (e1 ) · · · T (en )] .
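This gives a recipe for recovering A from any linear T : apply T to the columns of the identity matrix. A sketch with a made-up T :
import numpy as np

def T(x):
    # A linear map R^2 -> R^3, written without an explicit matrix.
    return np.array([x[0] + 2 * x[1], 3 * x[1], x[0] - x[1]])

E = np.eye(2)
A = np.column_stack([T(E[:, j]) for j in range(2)])   # columns are T(e_j)
x = np.array([4., -1.])
print(np.allclose(A @ x, T(x)))   # True: A is the standard matrix of T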
Theorem 1.11
Let T : Rn → Rm be a linear transformation. Then T is one-to-one if and only if the equation T (x) = 0 has only the trivial
solution.
Theorem 1.12
Let T : Rn → Rm be a linear transformation and let A be the standard matrix for T . Then
a. T maps Rn onto Rm if and only if the columns of A span Rm ;
b. T is one-to-one if and only if the columns of A are linearly independent.
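Both conditions reduce to a rank computation on the standard matrix. A minimal sketch (example matrix is mine):
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])        # standard matrix of some T : R^3 -> R^2
m, n = A.shape
r = np.linalg.matrix_rank(A)
print("onto:      ", r == m)        # True:  the columns span R^2
print("one-to-one:", r == n)        # False: three vectors in R^2 are dependent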
Theorem 2.1
Let A, B and C be matrices of the same size, and let r and s be scalars.
a. A + B = B + A
b. (A + B) + C = A + (B + C)
c. A + 0 = A
d. r(A + B) = rA + rB
e. (r + s)A = rA + sA
f. r(sA) = (rs)A
Theorem 2.2
Let A be an m × n matrix, and let B and C have sizes for which the indicated sums and products are defined. For any scalar
r,
a. A(BC) = (AB)C
b. A(B + C) = AB + AC
c. (B + C)A = BA + CA
d. r(BA) = (rA)B = A(rB)
e. Im A = A = AIn
Theorem 2.3
Let A and B denote matrices whose sizes are appropriate for the following sums and products. For any scalar r,
a. (AT )T = A
b. (A + B)T = AT + B T
c. (rA)T = rAT
d. (AB)T = B T AT
Theorem 2.4
Let A = [a b; c d]. If ad − bc ≠ 0, then A is invertible and
A−1 = (1/(ad − bc)) [d −b; −c a].
If ad − bc = 0, then A is not invertible.
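The formula translates directly into code; a sketch (checked against numpy's general-purpose inverse as a sanity test):
import numpy as np

def inv2x2(A):
    # 2 x 2 inverse via the ad - bc formula of Theorem 2.4.
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: matrix is not invertible")
    return (1.0 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[3., 4.], [5., 6.]])
print(np.allclose(inv2x2(A), np.linalg.inv(A)))   # True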
Theorem 2.5
If A is an invertible n × n matrix, then for each b in Rn , the equation Ax = b has the unique solution x = A−1 b.
Theorem 2.6
a. If A is an invertible matrix, then A−1 is invertible and
(A−1 )−1 = A.
b. If A and B are n × n invertible matrices, then so is AB, and the inverse of AB is the product of the inverses of A and B
in the reverse order. That is,
(AB)−1 = B −1 A−1 .
c. If A is an invertible matrix, then so is AT , and the inverse of AT is the transpose of A−1 . That is,
(AT )−1 = (A−1 )T .
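Parts (b) and (c) are easy to confirm numerically; a sketch with random matrices (which are invertible with probability 1):
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
inv = np.linalg.inv
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # part (b)
print(np.allclose(inv(A.T), inv(A).T))            # part (c)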
Theorem 2.7
An n × n matrix A is invertible if and only if A is row equivalent to In , and in this case, any sequence of elementary row
operations that reduces A to In also transforms In into A−1 .
Theorem 2.8 The Invertible Matrix Theorem
Let A be a square n × n matrix. Then the following statements are equivalent. That is, for a given A, the statements are
either all true or all false.
a. A is an invertible matrix.
b. A is row equivalent to the n × n identity matrix.
c. A has n pivot positions.
d. The equation Ax = 0 has only the trivial solution.
e. The columns of A form a linearly independent set.
f. The linear transformation x ↦ Ax is one-to-one.
g. The equation Ax = b has at least one solution for each b in Rn .
h. The columns of A span Rn .
i. The linear transformation x ↦ Ax maps Rn onto Rn .
j. There is an n × n matrix C such that CA = I.
k. There is an n × n matrix D such that AD = I.
l. AT is an invertible matrix.
m. The columns of A form a basis of Rn .
n. Col A = Rn .
o. dim Col A = n.
p. rank A = n.
q. Nul A = {0}.
r. dim Nul A = 0.
s. The number 0 is not an eigenvalue of A.
t. The determinant of A is not zero.
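A spot check of a few of these equivalences on one matrix (an arbitrary invertible example); per the theorem, the tests must all agree:
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
n = A.shape[0]
print(np.linalg.matrix_rank(A) == n)                       # (c)/(p): n pivot positions
print(not np.isclose(np.linalg.det(A), 0.0))               # (t): det A is not zero
print(not np.any(np.isclose(np.linalg.eigvals(A), 0.0)))   # (s): 0 is not an eigenvalue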
Theorem 2.9
Let T : Rn → Rn be a linear transformation and let A be the standard matrix for T . Then T is invertible if
and only if A is an invertible matrix. In that case, the linear transformation S given by S(x) = A−1 x is the unique function
satisfying
S(T (x)) = x, for all x ∈ Rn and
T (S(x)) = x for all x ∈ Rn .
Theorem 3.1
The determinant of an n × n matrix A can be computed by a cofactor expansion across any row or column. The expansion
across the ith row is
det A = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin .
The cofactor expansion down the jth column is
det A = a1j C1j + a2j C2j + · · · + anj Cnj .
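The expansion can be coded recursively; a sketch (exponentially slow, so useful only for small matrices, and checked against numpy):
import numpy as np

def det_cofactor(A, i=0):
    # Cofactor expansion across row i; C_ij = (-1)^(i+j) det(A with row i, col j deleted).
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        total += (-1) ** (i + j) * A[i, j] * det_cofactor(minor)
    return total

A = np.array([[1., 2., 0.],
              [3., 1., 4.],
              [0., 2., 5.]])
print(np.isclose(det_cofactor(A), np.linalg.det(A)))   # True, for any choice of row i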
Theorem 3.2
If A is a triangular matrix, then det A is the product of the entries on the main diagonal of A.
Theorem 3.3
Let A be a square matrix.
a. If a multiple of one row of A is added to another row to produce matrix B, then det B = det A.
b. If two rows of A are interchanged to produce B, then det B = − det A.
c. If one row of A is multiplied by k to produce B, then det B = k·det A.
Theorem 3.4
A square matrix A is invertible if and only if det A ≠ 0.
Theorem 3.5
If A is an n × n matrix, then det AT = det A.
Theorem 3.6 Multiplicative Property
If A and B are n × n matrices, then det AB = (det A)(det B).
Theorem 3.9
If A is a 2 × 2 matrix, the area of the parallelogram determined by the columns of A is |det A|.
Theorem 3.10
Let S be a region in the plane with a finite area. If T : R2 → R2 is a linear transformation with a standard matrix A, then
{area of T (S)} = |det A| · {area of S}.
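A concrete check using the unit square (area 1) and the shoelace formula for polygon area; the matrix is an arbitrary example:
import numpy as np

def shoelace(pts):
    # Polygon area from vertices listed in order (shoelace formula).
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

A = np.array([[2., 1.],
              [0., 3.]])
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])   # unit square, area 1
image = square @ A.T                                          # apply x |-> Ax to each vertex
print(np.isclose(shoelace(image), abs(np.linalg.det(A))))     # True: area scaled by |det A|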
Theorem 4.1
If v1 , . . . , vp are in a vector space V , then Span{v1 , . . . , vp } is a subspace of V .
Theorem 4.2
The null space of an m × n matrix A is a subspace of Rn . Equivalently, the set of all solutions to a system Ax = 0 of m
homogeneous linear equations in n unknowns is a subspace of Rn .
Theorem 4.3
The column space of an m × n matrix A is a subspace of Rm .
Theorem 4.4
An indexed set {v1 , . . . , vp } of two or more vectors, with v1 ≠ 0, is linearly dependent if and only if some vj (with j > 1) is
a linear combination of the preceding vectors, v1 , . . . , vj−1 .
Theorem 4.5 The Spanning Set Theorem
Let S = {v1 , . . . , vp } be a set in V and let H = Span{v1 , . . . , vp }.
a. If one of the vectors in S, say vk , is a linear combination of the remaining vectors in S, then the set formed from S by
removing vk still spans H.
b. If H ≠ {0}, some subset of S is a basis for H.
Theorem 4.6
The pivot columns of a matrix A form a basis for Col A.
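Note that the basis consists of the pivot columns of A itself, not those of its echelon form. A sketch with sympy (the matrix is an arbitrary example):
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 7],
            [3, 6, 10]])
_, pivots = A.rref()                  # here pivots == (0, 2)
basis = [A.col(j) for j in pivots]    # columns of A, not of rref(A), form the basis
print(pivots)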
Theorem 4.7 The Unique Representation Theorem
Let B = {b1 , . . . , bn } be a basis for a vector space V . Then for each x in V , there exist unique scalars c1 , . . . , cn such that
x = c1 b1 + · · · + cn bn .
Theorem 4.8
Let B = {b1 , . . . , bn } be a basis for a vector space V . Then the coordinate mapping x 7→ [x]B is a one-to-one linear
transformation from V onto Rn .
Theorem 4.9
If a vector space V has a basis B = {b1 , . . . , bn }, then any set in V containing more than n vectors must be linearly
dependent.
Theorem 4.10
If a vector space V has a basis of n vectors, then every basis of V must consist of exactly n vectors.
Theorem 4.11
Let H be a subspace of a finite-dimensional vector space V . Any linearly independent set in H can be expanded, if necessary,
to a basis for H. Also, H is finite dimensional and
dim H ≤ dim V.
Theorem 4.12 The Basis Theorem
Let V be a p-dimensional vector space, p ≥ 1. Any linearly independent set of exactly p elements in V is automatically a
basis for V . Any set of exactly p elements that spans V is automatically a basis for V .
Theorem 4.13
If two matrices A and B are row equivalent, then their row spaces are the same. If B is in echelon form, the nonzero rows
of B form a basis for the row space of A as well as that of B.
Theorem 4.14 The Rank Theorem
The dimensions of the column space and the row space of an m × n matrix A are equal. This common dimension, the rank
of A, also equals the number of pivot positions in A and satisfies the equation
rank A + dim Nul A = n.
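A quick verification of the identity (arbitrary 3 × 4 example, sympy assumed):
from sympy import Matrix

A = Matrix([[1, 2, 1, 0],
            [2, 4, 0, 2],
            [3, 6, 1, 2]])
rank = A.rank()
nullity = len(A.nullspace())     # dim Nul A = number of basis vectors of the null space
print(rank + nullity == A.cols)  # True: rank A + dim Nul A = n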
Theorem 5.1
The eigenvalues of a triangular matrix are the entries on its main diagonal.
Theorem 5.2
If v1 , . . . , vr are eigenvectors that correspond to distinct eigenvalues λ1 , . . . , λr of an n×n matrix A, then the set {v1 , . . . , vr }
is linearly independent.
Theorem 5.4
if n × n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues
(with the same multiplicities).
Theorem 5.5 The Diagonalization Theorem
An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
In fact, A = P DP −1 , with D a diagonal matrix, if and only if the columns of P are linearly independent eigenvectors of
A. In this case, the diagonal entries of D are eigenvalues of A that correspond, respectively, to the eigenvectors in P .
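Numerically, np.linalg.eig produces exactly the P and D of the theorem whenever A is diagonalizable; a sketch with a 2 × 2 example of mine:
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
eigvals, P = np.linalg.eig(A)    # columns of P are eigenvectors of A
D = np.diag(eigvals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}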
Theorem 5.6
An n × n matrix with n distinct eigenvalues is diagonalizable.
Theorem 5.7
Let A be an n × n matrix whose distinct eigenvalues are λ1 , . . . , λp .
a. For 1 ≤ k ≤ p, the dimension of the eigenspace for λk is less than or equal to the multiplicity of the eigenvalue λk .
b. The matrix A is diagonalizable if and only if the sum of the dimensions of the eigenspaces equals n, and this happens if and
only if (i) the characteristic polynomial factors completely into linear factors, and (ii) the dimension of the eigenspace for
each λk equals the multiplicity of λk .
c. If A is diagonalizable and Bk is a basis for the eigenspace corresponding to λk for each k, then the total collection of vectors
in the sets B1 , . . . , Bp forms an eigenvector basis for Rn .
Theorem 5.8 Diagonal Matrix Theorem
Suppose A = P DP −1 , where D is a diagonal n × n matrix. If B is the basis for Rn formed from the columns of P , then D
is the B-matrix for the transformation x ↦ Ax.
Theorem 5.9
Let A be a real 2 × 2 matrix with a complex eigenvalue λ = a − bi (b ≠ 0) and an associated eigenvector v in C2 . Then
A = P CP −1 , where P = [Re v Im v] and C = [a −b; b a].
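A sketch of the construction on a 90° rotation matrix (the example is mine); any eigenvector for λ = a − bi works, since Re v and Im v are linearly independent whenever b ≠ 0:
import numpy as np

A = np.array([[0., -1.],
              [1.,  0.]])                  # rotation by 90 degrees; eigenvalues +/- i
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmin(eigvals.imag)                # pick the eigenvalue lambda = a - bi with b > 0
lam, v = eigvals[k], eigvecs[:, k]
a, b = lam.real, -lam.imag
P = np.column_stack([v.real, v.imag])      # P = [Re v  Im v]
C = np.array([[a, -b], [b, a]])
print(np.allclose(A, P @ C @ np.linalg.inv(P)))   # True: A = P C P^{-1}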