Lecture6_D3

This lecture focuses on the concepts of linear dependence and independence of vectors, defining the row and column ranks of a matrix and establishing that they are equal. It introduces the notions of vector subspaces, bases, and dimensions, providing definitions and properties related to these concepts. Additionally, it discusses the null space and column space of a matrix, along with the unique representation of elements in a subspace in terms of a basis.


MA 110:

Lecture 06

Saurav Bhaumik
Department of Mathematics
IIT Bombay

Spring 2025

Saurav Bhaumik, IIT Bombay MA 110: Lecture 06


Let S be a set of vectors (in R1×n or in Rn×1 ). Recall:
S is linearly dependent if a nontrivial linear combination
of finitely many vectors in S is zero, i.e., there are
(distinct) vectors a1 , . . . , am in S and scalars α1 , . . . , αm ,
not all zero, such that α1 a1 + · · · + αm am = 0.
S is linearly independent if it is not linearly dependent.
Crucial Result: If S has s vectors, and each of them is a
linear combination of a (fixed) set of r vectors, with
s > r , then S is linearly dependent.
In particular, if S has more than n vectors, then S is
linearly dependent.
Let A ∈ Rm×n . The row rank of A is the maximum
number of linearly independent row vectors of A.
Elementary row operations do not alter the row rank.
row-rank(A) = number of nonzero rows in a REF of A
= the number of pivots in a REF of A.
Example
In a previous lecture, we had seen that the matrix

A := [ 3     2     2    −5   ]
     [ 0.6   1.5   1.5  −5.4 ]
     [ 1.2  −0.3  −0.3   2.4 ]

can be transformed to a row echelon form

A′ := [ 3    2    2    −5   ]
      [ 0    1.1  1.1  −4.4 ]
      [ 0    0    0     0   ]

by elementary row transformations of type I and type II. Since
the number of nonzero rows of A′ is 2, we see that the row
rank of A is 2. This shows that the 3 row vectors of A are
linearly dependent.
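The reduction above can be checked mechanically. Here is a minimal sketch (our own illustration, not the lecture's code) that reduces a matrix to REF with exact rational arithmetic and counts the nonzero rows; the helper name `ref_rank` is an assumption.

```python
from fractions import Fraction

def ref_rank(rows):
    """Reduce to row echelon form by type I (row swap) and type II
    (add a multiple of one row to another) operations, and return
    the number of nonzero rows, i.e. the row rank."""
    A = [[Fraction(str(x)) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    r = 0  # index of the next pivot row
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# the matrix A of the example above
A = [[3, 2, 2, -5], [0.6, 1.5, 1.5, -5.4], [1.2, -0.3, -0.3, 2.4]]
print(ref_rank(A))  # 2
```

Decimal entries are passed through `str` so that 0.6 becomes the exact rational 3/5 rather than a binary float.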
Column Rank of a Matrix
Definition
Let A ∈ Rm×n . The column rank of A is the maximum
number of linearly independent column vectors of A.
Clearly, column-rank(A) = row-rank(Aᵀ).
Proposition
The column rank of a matrix is equal to its row rank.
Proof. Let A := [ajk ] ∈ Rm×n . Let r = row-rank(A) and
s :=column-rank(A). We will show that s ≤ r and r ≤ s.
Let a1 , . . . , am denote the row vectors of A, so that

aj := [ aj1 · · · ajk · · · ajn ] for j = 1, . . . , m.
Among these m row vectors of A, there are r row vectors
aj1 , . . . , ajr which are linearly independent, and each row vector
of A is a linear combination of these r row vectors.
For simplicity, let b1 := aj1 , . . . , br := ajr . Then for
j = 1, . . . , m and ℓ = 1, . . . , r , there is αjℓ ∈ R such that

aj = αj1 b1 + · · · + αjℓ bℓ + · · · + αjr br for j = 1, . . . , m.

If bℓ := [ bℓ1 · · · bℓk · · · bℓn ] for ℓ = 1, . . . , r , then the
above equations can be written componentwise as follows:

a1k = α11 b1k + · · · + α1ℓ bℓk + · · · + α1r brk
  ⋮
ajk = αj1 b1k + · · · + αjℓ bℓk + · · · + αjr brk
  ⋮
amk = αm1 b1k + · · · + αmℓ bℓk + · · · + αmr brk

for k = 1, . . . , n. These m equations show that the kth
column of A can be written as follows:
[ a1k ]         [ α11 ]         [ α12 ]                 [ α1r ]
[  ⋮  ]         [  ⋮  ]         [  ⋮  ]                 [  ⋮  ]
[ ajk ]  = b1k  [ αj1 ]  + b2k  [ αj2 ]  + · · · + brk  [ αjr ]
[  ⋮  ]         [  ⋮  ]         [  ⋮  ]                 [  ⋮  ]
[ amk ]         [ αm1 ]         [ αm2 ]                 [ αmr ]

for k = 1, . . . , n. Thus each of the n columns of A is a linear
combination of the r column vectors [ α11 · · · αj1 · · · αm1 ]ᵀ,
[ α12 · · · αj2 · · · αm2 ]ᵀ, . . . , [ α1r · · · αjr · · · αmr ]ᵀ.
By a crucial result we have proved earlier, no more than r
columns of A can be linearly independent. Hence s ≤ r , i.e.,
column-rank(A) ≤ row-rank(A).
Applying the above result to Aᵀ in place of A, we obtain
r ≤ s. Thus s = r , as desired.
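The proposition can be sanity-checked numerically: compute the row rank of a matrix and of its transpose by row reduction and compare. This is our own sketch with an example matrix of our own choosing, not part of the lecture.

```python
from fractions import Fraction

def row_rank(rows):
    # row rank = number of nonzero rows after reduction to REF
    A = [[Fraction(x) for x in row] for row in rows]
    m, n, r = len(A), len(A[0]), 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

A = [[1, 2, 3, 4, 5, 6],
     [0, 0, 0, 7, 8, 9],
     [2, 4, 6, 8, 10, 12]]          # row 3 = 2 * row 1
At = [list(col) for col in zip(*A)]  # transpose: columns become rows
print(row_rank(A), row_rank(At))     # 2 2
```

Since row-rank(Aᵀ) is the column rank of A, the two printed values agreeing is exactly the statement of the proposition for this matrix.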
In view of the above result, we define the rank of a matrix A
to be the common value of the row rank of A and the column
rank of A, and we denote it by rank A. Consequently,
rank Aᵀ = rank A.
Example
Let
A′ := [ 1  2  3  4  5   6 ]
      [ 0  0  0  7  8   9 ]
      [ 0  0  0  0  0  10 ]
      [ 0  0  0  0  0   0 ]
Since A′ is in REF, its row rank is equal to the number of
nonzero rows, namely 3. Also, we can easily check that the
columns c1 , c4 and c6 are linearly independent, and so the
column rank of A′ is at least 3. Further, each of the columns
c2 = 2c1 , c3 = 3c1 and c5 = (3/7)c1 + (8/7)c4 is a linear
combination of the columns c1 , c4 and c6 , and so by a crucial
result, the column rank of A′ is at most 3. Hence the column
rank of A′ is also 3.
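The column relations claimed in the example are simple arithmetic and can be verified directly. A small check with exact rationals (our own illustration):

```python
from fractions import Fraction as F

# columns of A' from the example, read top to bottom
c1 = [F(1), F(0), F(0), F(0)]
c2 = [F(2), F(0), F(0), F(0)]
c3 = [F(3), F(0), F(0), F(0)]
c4 = [F(4), F(7), F(0), F(0)]
c5 = [F(5), F(8), F(0), F(0)]
c6 = [F(6), F(9), F(10), F(0)]

# c2 = 2*c1, c3 = 3*c1, c5 = (3/7)*c1 + (8/7)*c4
print(c2 == [2 * x for x in c1])                                   # True
print(c3 == [3 * x for x in c1])                                   # True
print(c5 == [F(3, 7) * x + F(8, 7) * y for x, y in zip(c1, c4)])   # True
```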
Let us restate some results proved earlier in terms of the newly
introduced notion of the rank of a matrix.
Let A ∈ Rm×n .
1. If A is transformed to A′ by EROs, then rank A′ = rank A.
2. If A is transformed to A′ by EROs and A′ is in REF, then

   rank A = number of nonzero rows of A′
          = number of pivotal columns of A′.

(Note: Each nonzero row has exactly one pivot, and a zero
row has no pivot.)
3. Let m = n. Then

   A is invertible ⇐⇒ rank A = n.

(Recall: A square matrix is invertible if and only if it can be
transformed to the identity matrix by EROs.)
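Point 3 can be illustrated by computing the rank of an invertible square matrix and of a singular one. A minimal sketch with matrices of our own choosing:

```python
from fractions import Fraction

def rank(rows):
    # rank via reduction to row echelon form over exact rationals
    A = [[Fraction(x) for x in row] for row in rows]
    m, n, r = len(A), len(A[0]), 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

B = [[2, 1, 0], [0, 1, 1], [1, 0, 1]]  # invertible: rank 3 = n
S = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # singular: row1 - 2*row2 + row3 = 0
print(rank(B), rank(S))  # 3 2
```

Since rank S = 2 < 3, the criterion says S is not invertible, while B, with full rank 3, is.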
Subspace, Basis, Dimension
From now on we shall consider only column vectors.
Let n ∈ N. We have denoted the set of all column vectors
whose entries are real numbers and whose length is n by Rn×1 .
Definition
A nonempty subset V of Rn×1 is called a vector subspace, or
simply a subspace of Rn×1 if
(i) a, b ∈ V =⇒ a + b ∈ V , and
(ii) α ∈ R, a ∈ V =⇒ α a ∈ V .

Note: The zero column vector 0 is in every subspace V since
V ̸= ∅ and 0 = a + (−1)a for each a ∈ V.
Each linear combination of elements of a subspace V is in V .
{0} is the smallest and Rn×1 is the largest subspace of Rn×1 .
Two Important Examples of Subspaces Related to a Matrix
Let A ∈ Rm×n .

Definition
The null space of A is

N (A) := {x ∈ Rn×1 : Ax = 0}.

Definition
The column space of A is

C(A) := the set of all linear combinations of columns of A.

We shall later find an interesting relationship between the null


space N (A) and the column space C(A).
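Both sets are easy to probe computationally: x belongs to N (A) exactly when Ax = 0, and every product Ay is a linear combination of the columns of A, hence lies in C(A). A minimal sketch with an example matrix of our own:

```python
def matvec(A, x):
    # A x as the list of dot products of the rows of A with x
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 3],
     [2, 4, 6]]       # second row = 2 * first row

# x is in N(A) exactly when A x = 0
x = [1, 1, -1]
print(matvec(A, x))    # [0, 0], so x is in N(A)

# every product A y is y1*(col 1) + y2*(col 2) + y3*(col 3), an
# element of C(A)
y = [1, 0, 2]
print(matvec(A, y))    # [7, 14]
```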

Basis and dimension of a subspace

Let V be a subspace of Rn×1 . Since every element of Rn×1 is


a linear combination of the vectors e1 , . . . , en , we see that no
linearly independent subset of V contains more than n vectors.
Definition
A subset S of V is called a basis of V if S is linearly
independent and S has maximum possible number of elements
among linearly independent subsets of V .

Clearly, a basis of V has at most n elements.


Definition
Let S ⊂ Rn×1 . The set of all linear combinations of elements
of S is denoted by span S and called the span of S.

Proposition
Let V be a subspace of Rn×1 , and let S ⊂ V . Then S is a
basis for V ⇐⇒ S is linearly independent and span S = V .

Proof. Suppose S is a basis for V . By the definition of a


basis, S is linearly independent. Let S have r elements. If
x ∈ V , but x is not a linear combination of elements of S,
then S∪{x} would be a linearly independent subset of V
containing r + 1 elements, which would be a contradiction.
Conversely, suppose S is linearly independent and span S = V .
Let S have r elements. Since every element of V is a linear
combination of elements of S, no more than r elements of V
can be linearly independent. Hence S is a basis for V .

Corollary
Let V be a subspace of Rn×1 and let S, S ′ be two bases of V .
Then S and S ′ have the same cardinality.

Proof. It is enough to prove that |S| ≤ |S ′ |, because we can


reverse the roles of S and S ′ and prove the other inequality.
By the last proposition, every element of S is a linear
combination of vectors from S ′ . If |S| > |S ′ |, then S would be
linearly dependent. As S is a basis, it is linearly independent,
which implies |S| ≤ |S ′ |.
Definition
The dimension of V is defined as the number of elements in a
basis of V . It is denoted by dim V .

Example: Clearly, dim Rn×1 = n and dim{0} = 0.

Corollary
Let V be a subspace of Rn×1 . Every linearly independent
subset of V can be enlarged to a basis for V .
Proof. Let S be a linearly independent subset of V . If
span S = V , then by the previous result, S is a basis for V .
Suppose span S ̸= V . Then there is x1 ∈ V such that x1 is not
a linear combination of elements of S. Now S1 := S ∪ {x1 } is
a linearly independent subset of V . If span S1 = V , then as
before, S1 is a basis for V . If span S1 ̸= V , then there is
x2 ∈ V such that x2 is not a linear combination of elements of
S1 . Now S2 := S1 ∪{x2 } is a linearly independent subset of V .
This process must end after a finite number of steps since the
number of elements of any linearly independent subset of V is
less than or equal to dim V , and dim V ≤ n.
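The enlargement process in this proof can be sketched as a loop: keep adjoining any candidate vector that is not in the span of the current set, detected here by a rank increase. This is our own illustration, not the lecture's code, and `extend_to_basis` is a hypothetical helper name.

```python
from fractions import Fraction

def rank(rows):
    # rank via reduction to row echelon form over exact rationals
    A = [[Fraction(x) for x in row] for row in rows]
    m, n, r = len(A), len(A[0]), 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def extend_to_basis(S, candidates):
    # adjoin any candidate that is not already in span S
    basis = [list(v) for v in S]
    for x in candidates:
        if rank(basis + [list(x)]) > rank(basis):
            basis.append(list(x))
    return basis

S = [[1, 1, 0]]                        # a linearly independent subset
e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # candidates spanning V = R^{3x1}
print(extend_to_basis(S, e))  # [[1, 1, 0], [1, 0, 0], [0, 0, 1]]
```

The loop terminates for the reason given in the proof: a linearly independent set in V has at most dim V ≤ n elements.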
Remark: All things we have defined above for column vectors
can also be defined for row vectors.
Another immediate consequence of the earlier result is the
unique representation of every element of a subspace in terms
of the elements belonging to a basis of that subspace.
Proposition
Let S := {c1 , . . . , cr } be a basis for a subspace V of Rn×1 ,
and let x ∈ V . Then there are unique α1 , . . . , αr ∈ R such
that x = α1 c1 + · · · + αr cr .
Proof. Since S is a basis for V , we obtain V = span S, and
so the vector x is a linear combination of vectors in S, that is,
there are scalars α1 , . . . , αr such that x = α1 c1 + · · · + αr cr .
Now suppose x = β1 c1 + · · · + βr cr for some β1 , . . . , βr ∈ R.
Then
(α1 − β1 )c1 + · · · + (αr − βr )cr = 0.
Since the set S is linearly independent,
α1 − β1 = · · · = αr − βr = 0, that is, β1 = α1 , . . . , βr = αr .
This proves the uniqueness.
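For a concrete basis the coordinates α1 , . . . , αr can be computed and checked directly. A small illustration with a hypothetical basis {c1 , c2 } of a 2-dimensional subspace of R3×1 (our own example):

```python
from fractions import Fraction as F

# a hypothetical basis {c1, c2} of a 2-dimensional subspace V
c1 = [F(1), F(0), F(1)]
c2 = [F(0), F(1), F(1)]
x  = [F(2), F(3), F(5)]   # x = 2*c1 + 3*c2, an element of V

# the first two entries of c1, c2 form an identity block, so the
# coordinates can be read off; we then verify them on every entry
a1, a2 = x[0], x[1]
assert all(a1 * u + a2 * v == xi for u, v, xi in zip(c1, c2, x))
print(a1, a2)   # 2 3
```

Uniqueness is what makes "the coordinates of x with respect to the basis" well defined: no other pair (α1 , α2 ) passes the check above.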
Let us recall some definitions and results from the last lecture.
Let A ∈ Rm×n . The row rank of A is the maximum number of
linearly independent row vectors of A. The column rank of A
is the maximum number of linearly independent column vectors
of A. The two are equal, and we denote both by rank A .
Further, if A′ is any REF of A, then the number of nonzero
rows of A′ is equal to rank A′ and it is equal to rank A.
The column space of A is denoted by C(A) and the null space
of A by N (A).
Proposition
Let A ∈ Rm×n , and let rank A = r . Then dim C(A) = r and
dim N (A) = n − r .

Proof. Since the column rank of A is equal to r , there are r


linearly independent columns ck1 , . . . , ckr of A, and any other
column of A is a linear combination of these r columns.
Let x ∈ C(A). Then x is a linear combination of columns of
A, each of which in turn is a linear combination of ck1 , . . . , ckr .
Thus x is a linear combination of ck1 , . . . , ckr . This shows that
span{ck1 , . . . , ckr } = C(A). Hence {ck1 , . . . , ckr } is a basis for
C(A) and dim C(A) = r .
To find the dimension of N (A), let us transform A to a REF
A′ by EROs of type I and type II. Since the row rank of A is
equal to r , the matrix A′ has exactly r nonzero rows and
exactly r pivotal columns.
Let the n − r nonpivotal columns be denoted by cℓ1 , . . . , cℓn−r .
Then xℓ1 , . . . , xℓn−r are the free variables. For each
ℓ ∈ {ℓ1 , . . . , ℓn−r }, there is a basic solution sℓ of the
homogeneous equation Ax = 0, and every solution of this
homogeneous equation is a linear combination of these n − r
basic solutions. Let S denote the set of these n − r basic
solutions. Then span S = N (A).
We claim that the set S of the n − r basic solutions is linearly
independent. To see this, we note that each basic solution is
equal to 1 in one of the free variables and it is equal to 0 in
the other free variables. Let α1 , . . . , αn−r ∈ R be such that
x := α1 sℓ1 + · · · + αn−r sℓn−r = 0.
For j = 1, . . . , n, let xj denote the jth entry of x. Then for
each ℓ ∈ {ℓ1 , . . . , ℓn−r }, we see that αℓ · 1 = xℓ = 0. Hence S
is linearly independent. Thus S is a basis for N (A) and
dim N (A) = n − r , the number of elements in S.

The dimension of the null space N (A) of A is called the
nullity of A. Since r + (n − r ) = n, we have proved the
following Rank-Nullity Theorem.

Theorem
Let A ∈ Rm×n . Then rank A + nullity A = n.
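The theorem can be checked on the REF matrix A′ from the earlier example: its rank is 3, and the free variables x2 , x3 , x5 give three basic solutions. The basic solutions below were obtained by back substitution from the pivot equations (our own worked example):

```python
from fractions import Fraction as F

A = [[1, 2, 3, 4, 5, 6],
     [0, 0, 0, 7, 8, 9],
     [0, 0, 0, 0, 0, 10],
     [0, 0, 0, 0, 0, 0]]

# A is already in REF: rank = number of nonzero rows
rank = sum(1 for row in A if any(x != 0 for x in row))

# basic solutions for the free variables x2, x3, x5, found by
# setting one free variable to 1 and the others to 0
s2 = [F(-2), F(1), F(0), F(0), F(0), F(0)]
s3 = [F(-3), F(0), F(1), F(0), F(0), F(0)]
s5 = [F(-3, 7), F(0), F(0), F(-8, 7), F(1), F(0)]
basic = [s2, s3, s5]

# each basic solution satisfies A x = 0
for s in basic:
    assert all(sum(F(a) * x for a, x in zip(row, s)) == 0 for row in A)

print(rank + len(basic))   # 6 = n, as the Rank-Nullity Theorem asserts
```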