
Linear Algebra

Department of Mathematics
Indian Institute of Technology Guwahati

January – May 2019

MA 102 (RA, RKS, MGPP, KVK)

1 / 24
Basis and dimension

Topics:
Linear span
Subspaces
Linear independence
Basis, Dimension & Rank

2 / 24
Linear combination

Definition: A vector v in Rⁿ is a linear combination of the vectors
v1, v2, . . . , vk in Rⁿ if there exist real numbers c1, c2, . . . , ck such
that
    v = c1 v1 + c2 v2 + . . . + ck vk .

The numbers c1, c2, . . . , ck are called the coefficients of the
linear combination.

Question: Is the vector [1, 2, 3]ᵀ a linear combination of [1, 0, 3]ᵀ
and [−1, 1, −3]ᵀ?

Theorem: A system of linear equations with augmented matrix
[A | b] is consistent if and only if b is a linear combination of the
columns of A.

3 / 24
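
To make the question and the theorem above concrete, here is a small SymPy
sketch (illustrative code, not part of the original slides): we look for
coefficients c1, c2 with c1[1, 0, 3]ᵀ + c2[−1, 1, −3]ᵀ = [1, 2, 3]ᵀ by solving
the system with augmented matrix [A | b].

# Illustrative check with SymPy (assumed tooling, not from the slides).
from sympy import Matrix, linsolve, symbols

A = Matrix([[1, -1],
            [0,  1],
            [3, -3]])        # columns [1, 0, 3]^T and [-1, 1, -3]^T
b = Matrix([1, 2, 3])

c1, c2 = symbols('c1 c2')
print(linsolve((A, b), c1, c2))   # {(3, 2)}: [A | b] is consistent,
                                  # so b = 3*[1,0,3]^T + 2*[-1,1,-3]^T
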
Span of vectors

Definition: Let S = {v1, . . . , vk} ⊆ Rⁿ. Then the collection of all
linear combinations of the vectors v1, . . . , vk is called the span of S
(or span of the vectors v1, . . . , vk), and is denoted by span(S) (or
span(v1, . . . , vk)).

Thus
    span(S) = {v ∈ Rⁿ | v = c1 v1 + . . . + ck vk for some c1, . . . , ck ∈ R}.

Convention: span(∅) = {0}.

If span(S) = Rⁿ, then S is called a spanning set for Rⁿ.

R² = span(e1, e2), where e1 = [1, 0]ᵀ and e2 = [0, 1]ᵀ.

Exercise: Let u = [1, 2, 3]ᵀ and v = [−1, 1, −3]ᵀ. Describe
span(u, v) geometrically.

4 / 24
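
For the exercise, the key quantity is the dimension of span(u, v): if the
matrix [u v] has rank 2, the span is a plane through the origin. A SymPy
sketch (illustrative, not from the slides):

from sympy import Matrix

u = Matrix([1, 2, 3])
v = Matrix([-1, 1, -3])
A = Matrix.hstack(u, v)

print(A.rank())          # 2, so span(u, v) is a plane through the origin in R^3
print(A.T.nullspace())   # a normal vector to that plane, a multiple of [-3, 0, 1]^T
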
Subspaces of Rⁿ

Definition: A set U (≠ ∅) ⊆ Rⁿ is called a subspace of Rⁿ if
au + bv ∈ U for every u, v ∈ U and for every a, b ∈ R.

U = {0} and U = Rⁿ are subspaces of Rⁿ, called the trivial
subspaces of Rⁿ.
Any subspace contains 0.
U is a subspace iff U is closed under addition and scalar
multiplication.

For any finite subset S of Rⁿ, span(S) is a subspace of Rⁿ.

Exercise: Examine whether the sets
S = {[x, y, z]ᵀ ∈ R³ : x = y + 1}, V = {[x, y, z]ᵀ ∈ R³ : x = 5y}
and U = {[x, y, z]ᵀ ∈ R³ : x = z²} are subspaces of R³.

5 / 24
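
As a quick sanity check on the exercise (an illustrative SymPy sketch, not
part of the slides): S fails because it does not contain 0, U fails because
it is not closed under addition, and the closure of V can be verified
symbolically.

from sympy import symbols, simplify

a, b, y1, z1, y2, z2 = symbols('a b y1 z1 y2 z2')

u = (5*y1, y1, z1)                 # a generic element of V = {x = 5y}
v = (5*y2, y2, z2)                 # another generic element of V
w = tuple(a*ui + b*vi for ui, vi in zip(u, v))

# w lies in V iff its first coordinate equals 5 times its second:
print(simplify(w[0] - 5*w[1]))     # 0, so V is closed under linear combinations
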
Direct sum of subspaces

Fact: Let A be an m × n matrix. Then U := {x ∈ Rⁿ : Ax = 0} is
a subspace of Rⁿ, called the nullspace of A.

Definition: Let U and V be two subspaces of Rⁿ. Then
    U + V := {u + v : u ∈ U, v ∈ V}
is called the sum of the subspaces U and V.

Definition: Let U and V be two subspaces of Rⁿ. If U ∩ V = {0},
then the sum U + V is called the direct sum of U and V and is
denoted by U ⊕ V. Thus
    U ⊕ V = U + V and U ∩ V = {0}.

Fact: Let U and V be subspaces of Rⁿ. Then U + V and U ⊕ V
are subspaces of Rⁿ. If z ∈ U ⊕ V then there exist unique u ∈ U
and v ∈ V such that z = u + v.

6 / 24
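
A small illustration of the uniqueness statement (SymPy sketch, not from the
slides): with U = span([1, 0]ᵀ) and V = span([1, 1]ᵀ) we have R² = U ⊕ V, and
writing z = u + v amounts to solving an invertible 2 × 2 system.

from sympy import Matrix

B = Matrix([[1, 1],
            [0, 1]])             # columns: a basis vector of U and one of V
z = Matrix([3, 5])

a, b = B.solve(z)                # unique coefficients, since B is invertible
print(a, b)                      # -2 and 5
print(a*B.col(0) + b*B.col(1))   # recovers z, i.e. z = (-2)[1,0]^T + 5[1,1]^T
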
Linear dependence

Definition: A set {v1, v2, . . . , vk} of vectors in Rⁿ is said to be
linearly dependent if one of the vectors vi is a linear combination
of the rest, i.e., if there are real numbers c1, c2, . . . , ck, of which at
least one is nonzero, such that
    c1 v1 + c2 v2 + . . . + ck vk = 0.

We say that the vectors v1, v2, . . . , vk are linearly dependent
to mean that the set {v1, v2, . . . , vk} is linearly dependent.

Any set of vectors containing the zero vector 0 is linearly dependent.

Exercise: Examine whether the sets
U := {[1, 2, 0]ᵀ, [1, 1, −1]ᵀ, [1, 4, 2]ᵀ} and S := {[1, 4]ᵀ, [−1, 2]ᵀ}
are linearly dependent.

7 / 24
Linear independence

Definition: A set S = {v1, v2, . . . , vk} of vectors in Rⁿ is said to be
linearly independent if S is NOT linearly dependent.

S is linearly independent iff
    c1 v1 + c2 v2 + . . . + ck vk = 0 ⇒ c1 = c2 = . . . = ck = 0.

We say that the vectors v1, v2, . . . , vk are linearly independent
to mean that the set {v1, v2, . . . , vk} is linearly independent.

Question: Let ei ∈ Rⁿ be the i-th column of the identity matrix In.
Is {e1, e2, . . . , en} linearly independent?

Fact: Let S := {v1, v2, . . . , vm} ⊆ Rⁿ. Consider the n × m matrix
A := [v1 v2 . . . vm]. Then S is linearly dependent iff the system
Ax = 0 has a non-trivial solution.

8 / 24
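
The fact above turns the exercise from the previous slide into a computation:
put the vectors in the columns of a matrix and look for nontrivial solutions
of Ax = 0. A SymPy sketch (illustrative, not from the slides):

from sympy import Matrix

U = Matrix([[1, 1, 1],
            [2, 1, 4],
            [0, -1, 2]])       # columns [1,2,0]^T, [1,1,-1]^T, [1,4,2]^T
S = Matrix([[1, -1],
            [4,  2]])          # columns [1,4]^T and [-1,2]^T

print(U.nullspace())           # nonempty (spanned by [-3, 2, 1]^T): U is linearly dependent
print(S.nullspace())           # []: S is linearly independent
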
Linear combinations of rows

Let A be an m × n matrix with rows A1, . . . , Am. Then
For ci ∈ R, a := c1 A1 + . . . + cm Am is a linear combination of
the rows of A. Note that a is a 1 × n matrix and aᵀ ∈ Rⁿ.
Note: c1 A1 + . . . + cm Am = [c1, . . . , cm]A. Thus, for any
c ∈ Rᵐ, cᵀA is a linear combination of the rows of A.
The rows of A are linearly dependent iff
cᵀA = c1 A1 + . . . + cm Am = 0 (the zero row) for some nonzero
c ∈ Rᵐ.
The rows of A are linearly dependent iff A1ᵀ, . . . , Amᵀ are
linearly dependent in Rⁿ, i.e., iff the columns of Aᵀ are linearly
dependent.

9 / 24
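
A two-line numerical check of the identity c1 A1 + . . . + cm Am = cᵀA
(SymPy sketch, not from the slides):

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6]])
c = Matrix([2, -1])

print(c.T * A)                        # Matrix([[-2, -1, 0]])
print(2*A.row(0) + (-1)*A.row(1))     # the same 1 x 3 row
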
Linearly dependent rows

Theorem: Let S := {v1, v2, . . . , vm} ⊆ Rⁿ and A := [v1 · · · vm].
Then the following are equivalent.
(1) S is linearly dependent.
(2) The columns of A are linearly dependent.
(3) Ax = 0 has a nontrivial solution.
(4) The rows of Aᵀ are linearly dependent.
(5) rank(Aᵀ) < m.
(6) rref(Aᵀ) has a zero row.
Proof: (1) ⇒ (2) ⇒ (3) is trivial. Suppose (3) holds. Then
xᵀAᵀ = 0 ⇒ x1 A1 + · · · + xm Am = 0, where A1, . . . , Am are the
rows of Aᵀ, so (4) holds.
Suppose (4) holds. Then rref(Aᵀ) has a zero row, so rank(Aᵀ) < m
and (5) holds. Now (5) ⇒ (6) is immediate.
Suppose (6) holds. Then EAᵀ = rref(Aᵀ) for some invertible
matrix E. Now emᵀ rref(Aᵀ) = 0 ⇒ Ay = 0, where y := Eᵀem ≠ 0,
so (3) holds.

10 / 24
Basis

Corollary: If m > n then any set of m vectors in Rⁿ is linearly
dependent.
Definition: Let S be a subspace of Rⁿ and B ⊆ S. Then B is said
to be a basis for S iff B is linearly independent and span(B) = S.

The set {1} is a basis for R¹ (= R).

The standard unit vector ei ∈ Rⁿ is the i-th column of the
identity matrix In. The set {e1, e2, . . . , en} is a basis for Rⁿ
and is called the standard basis.

Exercise: Find a basis for the subspace S := {x ∈ R⁴ : Ax = 0},
where
    A = [  1  −1  −1   2
           2  −2  −1   3
          −1   1  −1   0 ] .

11 / 24
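
For the exercise, a basis of the null space can be read off from rref(A), or
computed directly; a SymPy sketch (illustrative, not from the slides):

from sympy import Matrix

A = Matrix([[ 1, -1, -1, 2],
            [ 2, -2, -1, 3],
            [-1,  1, -1, 0]])

for w in A.nullspace():
    print(w.T)     # [1, 1, 0, 0] and [-1, 0, 1, 1]: these two vectors form a basis of S
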
Basis

Theorem: Let S := {v1, . . . , vr} ⊆ Rⁿ and U ⊆ span(S) be such that
m := #(U) > r. Then U is linearly dependent.
Proof. Let U := {u1, . . . , um}. Then

    ui = ai1 v1 + ai2 v2 + · · · + air vr ,   1 ≤ i ≤ m.

Let A := [aij] be the m × r matrix with rows A1, . . . , Am, so that,
formally, ui = Ai [v1, . . . , vr]ᵀ (the vectors vj stacked as a column).
Since m > r, the rows of A are linearly dependent. Then there exist
αj ∈ R, j = 1 : m, not all zero, such that α1 A1 + · · · + αm Am = 0
(the 1 × r zero row). Hence

    α1 u1 + · · · + αm um = (α1 A1 + · · · + αm Am) [v1, . . . , vr]ᵀ = 0,

so U is linearly dependent.

12 / 24
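
A numerical illustration of the theorem (SymPy sketch, not from the slides):
three vectors chosen from the span of two vectors are always linearly
dependent.

from sympy import Matrix

v1 = Matrix([1, 0, 2])
v2 = Matrix([0, 1, -1])
u1, u2, u3 = v1 + v2, 2*v1 - v2, v1 + 3*v2    # three elements of span(v1, v2)

U = Matrix.hstack(u1, u2, u3)
print(U.nullspace())    # nonempty, so u1, u2, u3 are linearly dependent
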
Basis

Theorem: Let U be a subspace of Rⁿ. Then U has a basis and any
two bases of U have the same number of elements.
Dimension: The number of elements in a basis of a subspace U of
Rⁿ is called the dimension of U and is denoted by dim(U).

dim(Rⁿ) = n.

dim({0}) = 0, since span(∅) = {0}.

If v1, . . . , vm are linearly independent, then
dim(span(v1, . . . , vm)) = m.

A set S := {v1, . . . , vn} ⊆ Rⁿ is a basis of Rⁿ ⇐⇒ S is
linearly independent ⇐⇒ span(S) = Rⁿ.

13 / 24
Fundamental subspaces associated to a matrix

Definition: Let A be an m × n matrix.
1 The column space / range space of A, denoted col(A), is the
  subspace of Rᵐ spanned by the columns of A.
  In other words, col(A) := {Ax | x ∈ Rⁿ}.
2 The row space of A, denoted row(A), is the subspace of Rⁿ
  spanned by the rows of A. In other words,
  row(A) := {xᵀA | x ∈ Rᵐ}.
  [Here, elements of row(A) are row vectors. How can they be
  elements of Rⁿ? In the strict sense, row(A) := col(Aᵀ).]
3 The null space of A, denoted null(A), is the subspace of Rⁿ
  consisting of the solutions of the homogeneous linear system
  Ax = 0. In other words, null(A) := {x ∈ Rⁿ | Ax = 0}.
4 The null space of Aᵀ: null(Aᵀ) = {x ∈ Rᵐ | Aᵀx = 0}.

14 / 24
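
SymPy has built-in helpers for these subspaces; each call below returns a
list of basis vectors (an illustrative sketch, not part of the slides).

from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4]])

print(A.columnspace())   # basis of col(A): the single column [1, 2]^T
print(A.rowspace())      # basis of row(A): the single row [1, 2]
print(A.nullspace())     # basis of null(A): [-2, 1]^T
print(A.T.nullspace())   # basis of null(A^T): [-2, 1]^T (this A happens to be symmetric)
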
Bases of row spaces

Theorem: If two matrices A and B are row equivalent, then
row(B) = row(A).

Proof. A and B are row equivalent ⇒ B = PA for some
invertible P. Thus,
    row(B) = {xᵀB | x ∈ Rᵐ} = {(xᵀP)A | x ∈ Rᵐ} ⊆ row(A).
Similarly, row(A) ⊆ row(B), since A = P⁻¹B. □

Corollary: For any A, row(A) = row(rref(A)).

Corollary: For any matrix A, the non-zero rows of rref(A) form a
basis of row(A).

15 / 24
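
A quick check of the second corollary (SymPy sketch, not from the slides):
the nonzero rows of rref(A) are independent and there are rank(A) of them,
so they form a basis of row(A).

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

R, pivots = A.rref()
nonzero_rows = [R.row(i) for i in range(R.rows) if any(R.row(i))]
print(nonzero_rows)                    # [1, 0, -1] and [0, 1, 2]
print(len(nonzero_rows) == A.rank())   # True
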
Bases of column spaces

Question: (a) Suppose A and B are row-equivalent. Are col(A)
and col(B) equal? No. Take A = [1 1; 1 1] and B = [1 1; 0 0].
(b) Suppose A and B are row-equivalent. Do col(A) and col(B)
have the same dimension? Yes. We will see soon.

Theorem: Let P be an invertible matrix. Then a set
{v1, v2, . . . , vm} in Rⁿ is linearly independent iff the set
{Pv1, Pv2, . . . , Pvm} is linearly independent.

Corollary: Let A := [a1 a2 · · · an] and rref(A) = [b1 b2 · · · bn]. If
bj1 , bj2 , . . . , bjr are the pivot columns of rref(A), then
{aj1 , aj2 , . . . , ajr } is a basis of col(A).

16 / 24
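
The corollary gives a recipe: run rref, note the pivot columns, and take the
corresponding columns of the original A. A SymPy sketch (illustrative, not
from the slides):

from sympy import Matrix

A = Matrix([[1, 1, 2],
            [1, 1, 3],
            [2, 2, 5]])

R, pivots = A.rref()
print(pivots)                      # (0, 2): the 1st and 3rd columns are pivot columns
print([A.col(j) for j in pivots])  # [1, 1, 2]^T and [2, 3, 5]^T form a basis of col(A)
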
Algorithm for computing bases of null spaces

Input: An m × n matrix A.
Output: A matrix X whose columns form a basis of the null
space of A.

1. Compute R = rref(A).

2. Suppose that R has p nonzero rows, so it has p pivot
   columns. Interchange columns of R (i.e., choose a
   permutation matrix P) so that
       RP = [ Ip  F ; 0  0 ]  (the column-interchanged form of R),
   where Ip is the identity matrix of size p.

17 / 24
Bases of null spaces

3. Set Y := [ −F ; In−p ], where In−p is the identity matrix of size
   n − p.

4. Now interchange the rows of Y according to the permutation P,
   that is, compute
       X := PY .
   Then rank(X) = n − p and RX = RPY = 0. Thus the columns of
   X span the null space of R, and hence the null space of A.

18 / 24
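
The four steps above translate directly into code. Below is a SymPy sketch of
the algorithm (an assumed implementation, not code from the slides); the
names R, P, F, Y, X mirror the notation of steps 1–4.

from sympy import Matrix, eye, zeros

def nullspace_basis(A):
    R, pivots = A.rref()                  # step 1: R = rref(A) and its pivot column indices
    n, p = A.cols, len(pivots)            # p nonzero rows = p pivot columns
    free = [j for j in range(n) if j not in pivots]

    P = zeros(n, n)                       # step 2: permutation moving pivot columns first,
    for k, j in enumerate(list(pivots) + free):
        P[j, k] = 1                       #         so that RP = [I_p  F; 0  0]
    F = (R * P)[:p, p:]

    Y = Matrix.vstack(-F, eye(n - p))     # step 3: Y = [-F; I_{n-p}]
    return P * Y                          # step 4: X = PY, so RX = (RP)Y = 0

A = Matrix([[ 1,  3, 3, 2],
            [ 2,  6, 9, 7],
            [-1, -3, 3, 4]])
X = nullspace_basis(A)
print(X)        # columns [-3, 1, 0, 0]^T and [1, 0, -1, 1]^T, as in the example that follows
print(A * X)    # the 3 x 2 zero matrix
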
Example

Compute bases of the null space, row space and the column space
of the matrix
    A := [  1   3   3   2
            2   6   9   7
           −1  −3   3   4 ] .

We have
    R = rref(A) = [ 1  3  0  −1
                    0  0  1   1
                    0  0  0   0 ] .

Therefore {[1, 3, 0, −1], [0, 0, 1, 1]} is a basis for the row space of A.

{[1, 2, −1]ᵀ, [3, 9, 3]ᵀ} is a basis for the column space of A.

Solve Rx = 0 to find a basis of null(R), or use the previous
algorithm.

19 / 24
Example (cont.)

Interchanging the 2nd and 3rd columns of R, we have
    RP = [ 1  0  3  −1
           0  1  0   1
           0  0  0   0 ]  =  [ I2  F ; 0  0 ] .
Now define
    Y := [ −F ; In−p ]  =  [ −3   1
                              0  −1
                              1   0
                              0   1 ] ,
where p = 2 and n = 4.
Finally, interchange the 2nd and 3rd rows of Y to obtain X, that is,
    X = PY = [ −3   1
                1   0
                0  −1
                0   1 ] ,
which gives a basis of the null space of A.

20 / 24
Rank of a matrix

Theorem: The row space and the column space of a matrix A have
the same dimension, and dim(row(A)) = dim(col(A)) = rank(A).
Proof: Let R := rref(A). Then dim(row(A)) = dim(row(R)) =
number of nonzero rows of R = rank(A).
Also A = ER for some m × m invertible matrix E. Hence
dim(col(A)) = dim(col(R)) = number of pivot columns of R
= rank(A). □

Theorem: For any matrix A, we have rank(Aᵀ) = rank(A).

Definition: The nullity of a matrix A is the dimension of its null
space and is denoted by nullity(A).

21 / 24
Rank-nullity theorem

Theorem: (Rank-nullity theorem) Let A be an m × n matrix. Then
    rank(A) + nullity(A) = n.

Proof: Suppose that rank(A) = r. Claim: nullity(A) = n − r.

Let R := rref(A). Then R has r nonzero rows. Equivalently, A has
r pivot columns and n − r non-pivot columns.

Hence there are n − r free variables in the solution to the system
Ax = 0. Thus nullity(A) = n − r. (WHY?)

If x is a solution with n − r free parameters, then setting all but
one parameter to zero at a time results in n − r linearly
independent solutions. □

22 / 24
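
A quick numerical check of the theorem on the example matrix from the earlier
slides (SymPy sketch, not from the slides):

from sympy import Matrix

A = Matrix([[ 1,  3, 3, 2],
            [ 2,  6, 9, 7],
            [-1, -3, 3, 4]])

rank, nullity, n = A.rank(), len(A.nullspace()), A.cols
print(rank, nullity, n)        # 2 2 4
print(rank + nullity == n)     # True: rank(A) + nullity(A) = n
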
The fundamental theorem of invertible matrices

Theorem: Let A be an n × n matrix. Then the following
statements are equivalent.
1. A is invertible.
2. Ax = b has a unique solution for every b in Rⁿ.
3. Ax = 0 has only the trivial solution.
4. The reduced row echelon form of A is In.
5. A is a product of elementary matrices.
6. rank(A) = n.

23 / 24
The fundamental theorem of invertible matrices

7. nullity(A) = 0.
8. The column vectors of A are linearly independent.
9. The column vectors of A span Rⁿ.
10. The column vectors of A form a basis for Rⁿ.
11. The row vectors of A are linearly independent.
12. The row vectors of A span Rⁿ.
13. The row vectors of A form a basis for Rⁿ.

***

24 / 24
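
As a final illustration (SymPy sketch, not from the slides), several of the
equivalent conditions can be checked side by side for a small invertible
matrix.

from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])

print(A.det() != 0)            # (1) A is invertible
print(A.nullspace() == [])     # (3)/(7) Ax = 0 has only the trivial solution, nullity 0
print(A.rref()[0] == eye(2))   # (4) rref(A) is the identity
print(A.rank() == A.rows)      # (6) rank(A) = n
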
