
≻≻≻≻—≻≻≻≻— Lecture Eleven —≺≺≺≺—≺≺≺≺

Chapter Six: Subspace and Linear Independence

Recall. A (real) vector space (over R) is a set V together with two operations, vector
addition ⊕ : V × V −→ V and scalar multiplication ⊙ : R × V −→ V , satisfying
(Vα) (closure of addition) V is closed under the operation of addition. Namely, u ⊕ v ∈ V
for all u, v ∈ V .
(Va) (commutativity of addition) u ⊕ v = v ⊕ u for all u, v ∈ V .
(Vb) (associativity of addition) (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w) for all u, v, w ∈ V .
(Vc) (identity element of addition) there is an element in V , denoted by 0, such that
u ⊕ 0 = u for all u ∈ V .
(Vd) (inverse elements of addition) for each u in V , there is an element in V , denoted by
−u, such that u ⊕ (−u) = 0.



(Vβ) (closure of scalar multiplication) V is closed under the operation of scalar multiplication.
Namely, c ⊙ u ∈ V for all c ∈ R and u ∈ V .
(Ve) (distributivity of scalar multiplication with respect to vector addition) for all u, v ∈ V
and c ∈ R, we have c ⊙ (u ⊕ v) = (c ⊙ u) ⊕ (c ⊙ v).
(Vf) (distributivity of scalar multiplication with respect to field addition) for all u ∈ V
and c, d ∈ R, we have (c + d) ⊙ u = (c ⊙ u) ⊕ (d ⊙ u).
(Vg) (compatibility of scalar multiplication) for all u ∈ V and c, d ∈ R, we have (c · d) ⊙ u = c ⊙ (d ⊙ u).
(Vh) (identity element of scalar multiplication) for all u ∈ V , we have 1 ⊙ u = u.

Remark. When there is no ambiguity, we may refer to a real vector space simply as a vector space,
and write u ⊕ v and c ⊙ u simply as u + v and cu, respectively.



Recall. Last lecture we showed that R3, together with the addition defined by
u ⊕ v = (u1 + v1, u2 + v2, u3 + v3)^T and the scalar multiplication defined by
c ⊙ u = (cu1, cu2, cu3)^T, is a real vector space. Furthermore, it was verified that the
subset W = { (x, x, x)^T : x ∈ R } of R3 is also a vector space with respect to the same
addition and scalar multiplication operations. We say W is a subspace of R3, as defined below.

Definition. Let V be a vector space and W a nonempty subset of V. If W is a vector
space with respect to the same operations as those in V, then W is called a subspace of V.

Example 6.2.6. Let Pn, n = 0, 1, 2, . . ., denote the vector space consisting of all polynomials
of degree at most n together with the zero polynomial, and let P denote the vector space
of all polynomials. We showed in the last lecture that Pn, n = 0, 1, 2, . . ., and P are all
vector spaces. Therefore, Pr is a subspace of P for all r ∈ N ∪ {0}, and Pr is a subspace of
Ps for any r ≤ s with r, s ∈ N ∪ {0}.
Example 6.2.1. Any vector space V ≠ {0} has at least two subspaces: V itself and the
zero subspace {0}.

Fact. Let V be any vector space. Then V is the largest subspace of V and {0} is the
smallest subspace of V . Namely, if W is a subspace of V , then {0} ⊆ W ⊆ V .

Corollary. If a subset W of a vector space V does not contain the zero vector 0, then
W cannot be a subspace of V .

Example 6.2.7. Let W be the set of all polynomials of degree exactly 2. Then
W is a subset of P2 but not a subspace of P2; indeed, W does not contain the zero
polynomial, and it is not closed under addition (t2 and −t2 + t are in W, but their sum t is not).



Theorem 6.2.2. Let V be a vector space with operations ⊕ and ⊙ and let W be a
nonempty subset of V . Then W is a subspace of V if and only if the following conditions
hold:
(Vα) W is closed under the operation of addition. Namely, u ⊕ v ∈ W for all u, v ∈ W .
(Vβ) W is closed under the operation of scalar multiplication. Namely, c ⊙ u ∈ W for all
c ∈ R and u ∈ W .

Proof. Skip.
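In computational terms, Theorem 6.2.2 reduces the subspace question to the two closure checks. Below is a minimal Python sketch (assuming NumPy is available) that spot-checks both conditions for the subset W = { (x, x, x)^T : x ∈ R } of R3 recalled above; sampling random elements gives numerical evidence of closure, not a proof.

import numpy as np

def in_W(w, tol=1e-12):
    # membership test for W: all three coordinates equal
    return abs(w[0] - w[1]) < tol and abs(w[1] - w[2]) < tol

rng = np.random.default_rng(0)
for _ in range(1000):
    u = np.full(3, rng.normal())   # a random element of W
    v = np.full(3, rng.normal())   # another random element of W
    c = rng.normal()               # a random scalar
    assert in_W(u + v)             # (V alpha): closure under addition
    assert in_W(c * u)             # (V beta): closure under scalar multiplication
print("closure conditions hold on all sampled pairs")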



Example 6.2.3&12. Consider W consisting of all 2 × 3 matrices of the form
[a b 0; 0 c d] (rows separated by semicolons), where a, b, c, and d are arbitrary real
numbers. Then W is a subspace of the vector space M23 defined in the last lecture.




Example 6.2.9. Consider the homogeneous system Ax = 0, where A is an m × n matrix.
Let W be the subset of Rn consisting of all solutions to the homogeneous system. Namely,
W is the solution space of Ax = 0, or we may write W = {x ∈ Rn : Ax = 0}. Then W is
a subspace of Rn. This subspace W is called the null space of A.

Remark. Note the solution space of a linear system Ax = b, where A is m × n, is not a
subspace of Rn if b ≠ 0.
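A short Python sketch (assuming NumPy and SciPy are available) illustrates both points: sums and scalar multiples of solutions of Ax = 0 are again solutions, while the solution set of Ax = b with b ≠ 0 is not closed under addition.

import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])           # rank 1, so its null space is 2-dimensional
N = null_space(A)                      # columns: an orthonormal basis of {x : Ax = 0}
u, v = N[:, 0], N[:, 1]
assert np.allclose(A @ (u + v), 0)     # a sum of solutions still solves Ax = 0
assert np.allclose(A @ (5.0 * u), 0)   # a scalar multiple still solves Ax = 0

b = np.array([1., 2.])                        # consistent right-hand side, b != 0
x = np.linalg.lstsq(A, b, rcond=None)[0]      # one particular solution of Ax = b
assert np.allclose(A @ x, b)
assert not np.allclose(A @ (x + x), b)        # x + x is not a solution: A(2x) = 2b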



Definition. Given a set S = {v1, v2, . . . , vk} of vectors in a vector space V , the (linear)
span of S is the subset of V consisting of all linear combinations of v1, v2, . . . , vk. Namely,
Span S = Span{v1, v2, . . . , vk} = {c1v1 + c2v2 + · · · + ckvk : c1, c2, . . . , ck ∈ R}. Furthermore,
if W = Span S, we say that W is spanned by S or that S spans W .
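Deciding whether a given w belongs to Span S amounts to asking whether the linear system V c = w is consistent, where the columns of V are v1, . . . , vk. A minimal Python sketch (assuming NumPy; in_span is a hypothetical helper name) decides this with least squares:

import numpy as np

def in_span(w, *vectors, tol=1e-10):
    # w is in Span{v1, ..., vk} iff the least-squares solution of Vc = w
    # reproduces w exactly (up to tolerance)
    V = np.column_stack(vectors)
    c, *_ = np.linalg.lstsq(V, w, rcond=None)
    return bool(np.allclose(V @ c, w, atol=tol))

v1 = np.array([1., 0., 0.])
v2 = np.array([0., 1., 0.])
print(in_span(np.array([3., -2., 0.]), v1, v2))  # True: lies in the x-y plane
print(in_span(np.array([3., -2., 1.]), v1, v2))  # False: third coordinate nonzero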

(" # " # " #)


1 0 0 1 0 0
Example 6.3.2. Consider S = , , . Then the span of S con-
0 0 1 0 0 1
" # " # " # " #
1 0 0 1 0 0 a b
sists all 2 × 2 matrix of the form a +b +c = , a, b, c ∈ R.
0 0 1 0 0 1 b c
Namely, S spans the subset of all symmetric matrices in M22.



Theorem 6.2.3. Let S = {v1, v2, . . . , vk } be a set of vectors in a vector space V .
Then the subset W of V consisting of all linear combinations of v1, v2, . . . , vk , namely
W = Span S = Span{v1, v2, . . . , vk} = {c1v1 + c2v2 + · · · + ckvk : c1, c2, . . . , ck ∈ R}, is
a subspace of V .
Proof. 1◦ Note S ⊆ W and hence W is not empty.
Furthermore, since V is a vector space, c1v1 + c2v2 + · · · + ck vk ∈ V .
Hence W is a subset of V .
2◦ (Vα):
Given any u, v ∈ W , we have, for some scalars c1, . . . , ck , d1, . . . , dk ,
u = c1v1 + c2v2 + · · · + ck vk and v = d1v1 + d2v2 + · · · + dk vk .
Then u + v = (c1v1 + c2v2 + · · · + ck vk ) + (d1v1 + d2v2 + · · · + dk vk )
= (c1 + d1)v1 + · · · + (ck + dk )vk
and we obtain u + v ∈ W .
3◦ (Vβ):
Given any c ∈ R and u ∈ W ,
one has u = c1v1 + c2v2 + · · · + ck vk for some scalars c1, . . . , ck .
It is easy to see cu = (cc1)v1 + · · · + (cck )vk ∈ W .
4◦ Steps 2◦ and 3◦ verify that W ⊆ V satisfies (Vα) and (Vβ); hence W is a subspace of V by Theorem 6.2.2.
    
 1
 0  
Example. Let S =  0  ,  1  in R3. Then the subspace spanned by S is the x-y
   
 
0 0
 
        

 1 0  a
 
 

plane as span S = a  0  + b  1  : a, b ∈ R =  b  : a, b ∈ R .
     
  
 
0 0 0
  

Example 6.2.3&12. Consider the set
S = { [1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 0; 0 1 0], [0 0 0; 0 0 1] }.
Then span S, consisting of all 2 × 3 matrices of the form
a[1 0 0; 0 0 0] + b[0 1 0; 0 0 0] + c[0 0 0; 0 1 0] + d[0 0 0; 0 0 1] = [a b 0; 0 c d]
for a, b, c, d ∈ R, is a subspace of M23.

Example. In P2, let S = {1, t, t2}. Then span S = {c0 + c1t + c2t2 : c0, c1, c2 ∈ R} is a
subspace of P2. Indeed, span S = P2; that is, S spans P2.



Theorem 6.2.T5. Let S = {v1, . . . , vk} be a set of vectors in a vector space V . Then
span S is the smallest subspace of V containing S. In other words, if W is any subspace
of V containing S, then span S ⊆ W .

Proof. 1◦ If W is a subspace containing S,
then every vector of S lies in W , and hence, by closure,
any linear combination of vectors in S must lie in W as well.
Consequently, span S ⊆ W .
Corollary. Let S = {v1, . . . , vk } be a set of vectors in a vector space V . Then span S
is the intersection of all subspaces containing S.
Proof. 1◦ Denote U = span S and U ∗ the intersection of all subspaces containing S.
Because U is a subspace containing S by Theorem 6.2.3,
the intersection satisfies U ∗ ⊆ U .
2◦ On the other hand, Theorem 6.2.T5 says that
any subspace of V containing S contains U .
Hence U ⊆ U ∗. The proof is complete.
Definition. Vectors v1, v2, . . . , vk in a vector space V are said to be linearly dependent if
there exist real numbers c1, c2, . . . , ck, not all zero, such that c1v1 + c2v2 + · · · + ckvk = 0.
Otherwise, v1, v2, . . . , vk are called linearly independent. That is, we say v1, v2, . . . , vk are
linearly independent provided that whenever c1v1 + c2v2 + · · · + ckvk = 0, we must have
c1 = c2 = · · · = ck = 0.
     
Example. The three vectors e1 = (1, 0, 0)^T, e2 = (0, 1, 0)^T, and e3 = (0, 0, 1)^T in R3
are linearly independent.

Example. The four vectors e1, e2, e3, and u = (1, 1, 1)^T in R3 are linearly dependent.

Example. The four vectors e1, e2, e3, and v = (1, 1, 0)^T in R3 are linearly dependent.
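These checks can be automated: vectors v1, . . . , vk in Rn are linearly independent exactly when the n × k matrix with the vi as columns has rank k. A small Python sketch (assuming NumPy) reproduces the three examples above:

import numpy as np

def independent(*vectors):
    # true iff only the trivial combination of the columns gives 0
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(V) == V.shape[1]

e1, e2, e3 = np.eye(3)                # the standard basis vectors of R^3
u = np.array([1., 1., 1.])
v = np.array([1., 1., 0.])
print(independent(e1, e2, e3))        # True
print(independent(e1, e2, e3, u))     # False
print(independent(e1, e2, e3, v))     # False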
Example 6.3.12. Every set of vectors containing the zero vector is linearly dependent.
More precisely, if v1, v2, . . . , vk are vectors in any vector space such that vj = 0 for some
1 ≤ j ≤ k, then S = {v1, v2, . . . , vk} is linearly dependent.

Example 6.3.11. Let p1(t) = t2 + t + 2, p2(t) = 2t2 + t, and p3(t) = 3t2 + 2t + 2. Then
S = {p1(t), p2(t), p3(t)} is linearly dependent, since p3(t) = p1(t) + p2(t).

Example. Let p1(t) = t2 + t + 2, p2(t) = 2t2 + t, and p3(t) = 3t2 + 2. Then S =
{p1(t), p2(t), p3(t)} is linearly independent.
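Both polynomial examples can be checked by identifying a t2 + b t + c with its coefficient vector (a, b, c) in R3 and applying the same rank test; a sketch in Python (assuming NumPy):

import numpy as np

p1 = np.array([1., 1., 2.])   # t^2 + t + 2
p2 = np.array([2., 1., 0.])   # 2t^2 + t
p3 = np.array([3., 2., 2.])   # 3t^2 + 2t + 2, which equals p1 + p2
q3 = np.array([3., 0., 2.])   # 3t^2 + 2

print(np.linalg.matrix_rank(np.column_stack([p1, p2, p3])))  # 2 < 3: dependent
print(np.linalg.matrix_rank(np.column_stack([p1, p2, q3])))  # 3: independent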



Theorem 6.3.T3. Let v1, v2, . . . , vk be vectors in a vector space V . Then v1, v2, . . . , vk
are linearly dependent if and only if there exists some subscript j such that vj is a linear
combination of the other vectors v1, . . . , vj−1, vj+1, . . . , vk.

Note. This theorem does not say that every vector vj is a linear combination of the other
vectors.

Example. Let v1 = (1, 0, 0)^T, v2 = (0, 1, 0)^T, v3 = (0, 0, 1)^T, and v4 = (1, 1, 0)^T.
Then v1, v2, v3, and v4 are linearly dependent. Note v1 = v4 − v2, a linear combination
of v2 and v4; v2 = v4 − v1, a linear combination of v1 and v4; and v4 = v1 + v2, a linear
combination of v1 and v2. However, we cannot write v3 as a linear combination of v1, v2, v4.
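The claims of this example can be verified numerically by testing, for each j, whether vj lies in the span of the remaining vectors; a Python sketch (assuming NumPy; in_span as in the earlier span sketch):

import numpy as np

def in_span(w, V, tol=1e-10):
    c, *_ = np.linalg.lstsq(V, w, rcond=None)
    return bool(np.allclose(V @ c, w, atol=tol))

v1, v2, v3 = np.eye(3)
v4 = np.array([1., 1., 0.])
vs = [v1, v2, v3, v4]
for j, vj in enumerate(vs, start=1):
    others = np.column_stack([v for i, v in enumerate(vs, start=1) if i != j])
    print(f"v{j} is a combination of the others: {in_span(vj, others)}")
# prints True, True, False, True: only v3 is not a combination of the rest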



Proof. 1◦ (⇒)
Since v1, v2, . . . , vk are linearly dependent,
there exist real numbers c1, c2, . . . , ck, not all zero,
such that c1v1 + c2v2 + · · · + ckvk = 0.
Let j be any subscript for which cj ≠ 0.
Then vj = (−c1/cj)v1 + · · · + (−cj−1/cj)vj−1 + (−cj+1/cj)vj+1 + · · · + (−ck/cj)vk.
Namely, vj is a linear combination of v1, v2, . . . , vj−1, vj+1, . . . , vk.
2◦ (⇐)
Suppose vj is a linear combination of the other vectors
v1, v2, . . . , vj−1, vj+1, . . . , vk.
Then there exist real numbers c1, . . . , cj−1, cj+1, . . . , ck
such that vj = c1v1 + · · · + cj−1vj−1 + cj+1vj+1 + · · · + ckvk.
Take cj = −1. Then the above equation becomes
c1v1 + · · · + cj−1vj−1 + cjvj + cj+1vj+1 + · · · + ckvk = 0.
Note cj = −1 ≠ 0. Hence v1, v2, . . . , vk are linearly dependent.



Corollary. If S = {v1, v2, . . . , vk} is linearly dependent, then there exists some j such
that the subspace spanned by S \ {vj}, that is, S with vj deleted, is the same as
the subspace spanned by S.
Proof. 1◦ Since S is linearly dependent, by Theorem 6.3.T3, there exists some j
such that vj is a linear combination of v1, v2, . . . , vj−1, vj+1, . . . , vk ,
say, vj = c1v1 + · · · + cj−1vj−1 + cj+1vj+1 + · · · + ck vk
for some real numbers c1, . . . , cj−1, cj+1, . . . , ck .
Denote S ∗ = S \ {vj }, W = span S and W ∗ = span S ∗.
2◦ It is obvious that W ∗ ⊆ W because S ∗ ⊆ S.
3◦ Now given any w ∈ W , we can write
w = d1v1 + · · · + dj−1vj−1 + dj vj + dj+1vj+1 + · · · + dk vk
for some scalars d1, . . . , dk . Then 1◦ gives
w = d1v1 + · · · + dj−1vj−1 + dj (c1v1 + · · · + cj−1vj−1 + cj+1vj+1 + · · · + ck vk )
+dj+1vj+1 + · · · + dk vk
= (d1 + dj c1)v1 + · · · + (dj−1 + dj cj−1)vj−1 + (dj+1 + dj cj+1)vj+1 + · · ·
+(dk + dj ck )vk , a linear combination of vectors in S ∗.
Therefore W ⊆ W ∗. This completes the proof.
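Numerically, deleting such a vj leaves the dimension of the span (the rank of the matrix of column vectors) unchanged; a minimal sketch (assuming NumPy):

import numpy as np

v1 = np.array([1., 0., 0.])
v2 = np.array([0., 1., 0.])
v3 = v1 + 2 * v2                       # deliberately a combination of v1 and v2
S = np.column_stack([v1, v2, v3])
S_star = np.column_stack([v1, v2])     # S with the dependent vector v3 deleted
# both spans are the x-y plane: same rank, hence the same subspace here
print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(S_star))   # 2 2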
Theorem 6.3.4. The nonzero vectors v1, v2, . . . , vk in a vector space V are linearly
dependent if and only if one of the vectors vj, j ≥ 2, is a linear combination of the
preceding vectors v1, v2, . . . , vj−1.

Corollary. Let v1, v2, . . . , vk be nonzero vectors in a vector space V . Then the following
statements are equivalent.
(a) v1, v2, . . . , vk are linearly dependent.
(b) There exists j ≥ 2 such that vj is a linear combination of the preceding vectors
v1, v2, . . . , vj−1.
(c) There exists j such that vj is a linear combination of the other vectors v1, v2, . . . , vj−1,
vj+1, . . . , vk.



Proof. 1◦ (⇒) Since v1, v2, . . . , vk are linearly dependent,
there exist real numbers c1, c2, . . . , ck, not all zero,
such that c1v1 + c2v2 + · · · + ckvk = 0.
Let j be the largest subscript for which cj ≠ 0.
That is, cj ≠ 0 and cj+1 = · · · = ck = 0.
Then c1v1 + c2v2 + · · · + cjvj = 0.
It follows that vj = (−c1/cj)v1 + · · · + (−cj−1/cj)vj−1.
Thus vj is a linear combination of v1, v2, . . . , vj−1.
Also note j ≥ 2, because if j = 1,
we get c1v1 = 0, along with c1 ≠ 0, which implies v1 = 0,
contradicting the assumption that the vectors are nonzero.
2◦ (⇐) Suppose, for some j ≥ 2 and real numbers c1, c2, . . . , cj−1,
vj = c1v1 + c2v2 + · · · + cj−1vj−1.
Take cj = −1, cj+1 = 0, . . . , ck = 0.
Then the above equation becomes
c1v1 + · · · + cj−1vj−1 + cjvj + cj+1vj+1 + · · · + ckvk = 0.
Note cj = −1 ≠ 0. Hence v1, v2, . . . , vk are linearly dependent.



Theorem 6.3.T2. Let S1 and S2 be finite subsets of a vector space V with S1 ⊆ S2.
(a) If S1 is linearly dependent, then S2 is linearly dependent as well.
(b) If S2 is linearly independent, then S1 is linearly independent as well.

Proof. Skip.
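Part (b) can be spot-checked with the rank test: every subset of a linearly independent set in Rn still has full column rank. A Python sketch (assuming NumPy):

import numpy as np
from itertools import combinations

S2 = [np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([1., 1., 1.])]
assert np.linalg.matrix_rank(np.column_stack(S2)) == 3   # S2 is independent
for r in (1, 2):
    for S1 in combinations(S2, r):                       # every subset S1 of S2
        W = np.column_stack(S1)
        assert np.linalg.matrix_rank(W) == r             # S1 is independent too
print("every subset of S2 is linearly independent")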
