Module-4-Vector-Spaces-1

Module 4 covers the concepts of vector spaces, subspaces, spanning sets, linear independence, basis, dimension, and the rank of a matrix. It outlines learning objectives such as determining vector spaces and subspaces, writing vectors as linear combinations, and identifying spanning sets. The module provides definitions, properties, and examples to illustrate these concepts in detail.

Module 4

Vector Spaces
Subspaces
Spanning Sets and Linear Independence
Basis and Dimension
Rank of a Matrix

Learning Objectives:
At the end of this unit, you are expected to:

✓ Determine whether a set of vectors with two operations is a vector space.

✓ Give examples of vector spaces.

✓ Determine whether a subset W of a vector space V is a subspace of V.

✓ Write a vector as a linear combination of a set of vectors in a vector space.

✓ Determine whether a set of vectors in a vector space V is a spanning set of V.

4.1 Vector Spaces
4.1.1 Definition and Examples of Vector Spaces
4.1.2 Properties of the Zero Vector in Vector Spaces

4.2 Subspaces
4.2.1 Definition of a Subspace
4.2.2 Examples of Subspaces

4.3 Spanning Sets and Linear Independence


4.3.1 Definitions of Linear Combination, Spanning Sets and Linear
Independence with Examples

4.4 Basis and Dimension


4.4.1 Definitions and Examples

4.5 Rank of a Matrix


4.5.1 Definition and Examples

4.1 Vector Spaces

In this section, we shall study structures with two operations, (1) addition and (2) scalar
multiplication, that are subject to certain conditions. We will reflect on these ten conditions and
consider how reasonable they are.

DEFINITION: VECTOR SPACE

A vector space (V, ⊕, ∙ ) is a set V together with two operations, vector addition ⊕ and
scalar multiplication ∙ , satisfying the following properties for all 𝓊, 𝜐, 𝓌 ∈ 𝑉 and 𝒸, 𝒹 ∈ ℝ:
(i) (Additive Closure) 𝓊 ⊕ 𝜐 ∈ 𝑉, where adding two vectors gives a vector in V.
(ii) (Additive Commutativity) 𝓊 ⊕ 𝜐 = 𝜐 ⊕ 𝓊, where the order of addition does not matter.
(iii) (Additive Associativity) (𝓊 ⊕ 𝜐) ⊕ 𝓌 = 𝓊 ⊕ (𝜐 ⊕ 𝓌), where the grouping when adding
several vectors does not matter.
(iv) (Existence of the Zero Vector) There is a special vector 0v ∈ V such that
𝓊 ⊕ 0v = 0v ⊕ 𝓊 = 𝓊 for all 𝓊 ∈ V.
(v) (Existence of Additive Inverse) For every 𝓊 ∈ V, there exists 𝓌 ∈ V such that
𝓊 ⊕ 𝓌 = 0v.
( ∙ i) (Multiplicative Closure) c ∙ 𝓊 ∈ V, where a scalar times a vector is a vector in V.
( ∙ ii) (Distributivity) (c+d) ∙ 𝓊 = (c ∙ 𝓊) ⊕ (d ∙ 𝓊), where scalar multiplication distributes
over addition of scalars.
( ∙ iii) (Distributivity) c ∙ (𝓊 ⊕ 𝜐) = (c ∙ 𝓊) ⊕ (c ∙ 𝜐), where scalar multiplication distributes
over addition of vectors.
( ∙ iv) (Multiplicative Associativity) (cd) ∙ 𝜐 = c ∙ (d ∙ 𝜐).
( ∙ v) (Unity) 1 ∙ 𝓊 = 𝓊 for all 𝓊 ∈ V.

REMARK: Rather than writing (V, ⊕, ∙ ), we will often say "let V be a vector space over
ℝ". If it is obvious that the numbers used are real numbers, then saying "let V be a vector space"
suffices. The elements of V are called vectors; hence, in the definition, 𝓊, 𝜐, 𝓌 are vectors while
𝒸, 𝒹 ∈ ℝ are called scalars. Also, do not confuse the usual addition + with the defined vector
addition ⊕ in the vector space, nor the usual scalar multiplication with the defined scalar
multiplication ∙ in the vector space.
Let us now use the definition to identify whether a set, together with two operations, is a vector
space or not.

Example 1: Let 𝒫3 = {𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥² + 𝑎3 𝑥³ | 𝑎0 , … , 𝑎3 ∈ ℝ} be the set of polynomials
of degree three or less. It is a vector space under the operations:

(𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥² + 𝑎13 𝑥³) ⊕ (𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥² + 𝑎23 𝑥³)
= (𝑎10 + 𝑎20 ) + (𝑎11 + 𝑎21 )𝑥 + (𝑎12 + 𝑎22 )𝑥² + (𝑎13 + 𝑎23 )𝑥³

and

𝑟 ∙ (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥² + 𝑎13 𝑥³) = (𝑟𝑎10 ) + (𝑟𝑎11 )𝑥 + (𝑟𝑎12 )𝑥² + (𝑟𝑎13 )𝑥³

So here, our vectors are polynomials of degree three or less. We can say that
3𝑥³ + 2𝑥² − 5𝑥 − 2 ∈ 𝒫3 . Similarly, the polynomial 𝑥² + 7𝑥 ∈ 𝒫3 . But 9𝑥⁴ + 8𝑥² + 2 is not an
element of 𝒫3 . Why?
Observe too that in this example, the usual addition and the usual scalar multiplication of
polynomials are the operations used.
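The two operations in Example 1 can be sketched in code by storing a polynomial as its coefficient tuple (a0, a1, a2, a3). Python, the tuple representation, and the function names `padd`/`psmul` are illustrative choices, not part of the module:

```python
# Sketch (assumed representation): an element a0 + a1*x + a2*x^2 + a3*x^3
# of P3 is stored as the coefficient tuple (a0, a1, a2, a3).

def padd(u, v):
    """Vector addition on P3: add corresponding coefficients."""
    return tuple(a + b for a, b in zip(u, v))

def psmul(r, u):
    """Scalar multiplication on P3: multiply every coefficient by r."""
    return tuple(r * a for a in u)

u = (1, -2, 3, -4)    # 1 - 2x + 3x^2 - 4x^3
v = (2, -3, 1, 0)     # 2 - 3x + x^2

print(padd(u, v))     # coefficients of the sum polynomial
print(psmul(3, u))    # coefficients of 3 . u
```

Additive closure is visible here: the sum of two length-4 coefficient tuples is again a length-4 coefficient tuple, mirroring the proof of property (i) below.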
We are now ready to show that it is really a vector space. We do this by satisfying all the
properties or conditions stated in the definition. Let us study them one by one.

First we need to show: (i) (Additive closure) For any 𝓊, 𝜐 ∈ 𝒫3 , 𝓊  𝜐 ∈ 𝒫3 , that is, for any
two polynomials of degree three or less, their sum must still be a polynomial of degree three or
less. Let us take two arbitrary elements 𝓊, 𝜐 ∈ 𝒫3 where 𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with
𝑎10 , … , 𝑎13 ∈ ℝ and 𝜐 = 𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 with 𝑎20 , … , 𝑎23 ∈ ℝ
and perform 𝓊  𝜐:

𝓊  𝜐 = (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 )  (𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 ) (substituting the
values of 𝓊 and 𝜐.)

= (𝑎10 + 𝑎20 )+ (𝑎11 + 𝑎21 )𝑥 + (𝑎12 + 𝑎22 )𝑥 2 + (𝑎13 + 𝑎23 )𝑥 3 (applying the
definition of )

Note that 𝑎10 , 𝑎11 , 𝑎12 , 𝑎13 , 𝑎20 , 𝑎21 , 𝑎22 , 𝑎23 ∈ ℝ.

Therefore, (𝑎10 + 𝑎20 ), (𝑎11 + 𝑎21 ), (𝑎12 + 𝑎22 ), 𝑎𝑛𝑑 (𝑎13 + 𝑎23 ) ∈ ℝ (by closure property
for addition). Hence, for any 𝓊, 𝜐 ∈ 𝒫3 , 𝓊  𝜐 ∈ 𝒫3 .

Next we need to show:


(ii) (Additive Commutativity) For any 𝓊, 𝜐 ∈ 𝒫3 , 𝓊  𝜐 = 𝜐  𝓊, that is, the order of addition
does not matter. Again, let us take two arbitrary elements 𝓊, 𝜐 ∈ 𝒫3 where
𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and
𝜐 = 𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 with 𝑎20 , … , 𝑎23 ∈ ℝ

and perform 𝓊  𝜐:

𝓊  𝜐 = (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 )  (𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 ) (substituting the
values of 𝓊 and 𝜐.)

= (𝑎10 + 𝑎20 )+ (𝑎11 + 𝑎21 )𝑥 + (𝑎12 + 𝑎22 )𝑥 2 + (𝑎13 + 𝑎23 )𝑥 3 (applying the
definition of )

Note that 𝑎10 , 𝑎11 , 𝑎12 , 𝑎13 , 𝑎20 , 𝑎21 , 𝑎22 , 𝑎23 ∈ ℝ.

Therefore, (𝑎10 + 𝑎20 ) = (𝑎20 + 𝑎10 ), (𝑎11 + 𝑎21 ) = (𝑎21 + 𝑎11 ), (𝑎12 + 𝑎22 ) = (𝑎22 +
𝑎12 ) and (𝑎13 + 𝑎23 ) = (𝑎23 + 𝑎13 ) (by commutative property for addition).
Hence, 𝓊  𝜐 = (𝑎10 + 𝑎20 )+ (𝑎11 + 𝑎21 )𝑥 + (𝑎12 + 𝑎22 )𝑥 2 + (𝑎13 + 𝑎23 )𝑥 3
= (𝑎20 + 𝑎10 )+ (𝑎21 + 𝑎11 )𝑥 + (𝑎22 + 𝑎12 )𝑥 2 + (𝑎23 + 𝑎13 )𝑥 3
= 𝜐  𝓊 (applying the definition of )
Hence, for any 𝓊, 𝜐 ∈ 𝒫3 , 𝓊  𝜐 = 𝜐  𝓊

We also need to show:


(iii) (Additive Associativity) For any 𝓊, 𝜐, 𝓌 ∈ 𝒫3 , (𝓊 ⊕ 𝜐) ⊕ 𝓌 = 𝓊 ⊕ (𝜐 ⊕ 𝓌), that is,
the grouping does not matter.
From the previous two arbitrary elements 𝓊, 𝜐 ∈ 𝒫3
𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and
𝜐 = 𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 with 𝑎20 , … , 𝑎23 ∈ ℝ
let us define one more polynomial 𝓌 ∈ 𝒫3 ,
𝓌 = 𝑎30 + 𝑎31 𝑥 + 𝑎32 𝑥 2 + 𝑎33 𝑥 3 with 𝑎30 , … , 𝑎33 ∈ ℝ

and perform (𝓊  𝜐)  𝓌:

(𝓊  𝜐)  𝓌 = ((𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 )  (𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 ))

 (𝑎30 + 𝑎31 𝑥 + 𝑎32 𝑥 2 + 𝑎33 𝑥 3 ) (substituting the values of 𝓊, 𝜐 and 𝓌.)

= ((𝑎10 + 𝑎20 )+ (𝑎11 + 𝑎21 )𝑥 + (𝑎12 + 𝑎22 )𝑥 2 + (𝑎13 + 𝑎23 )𝑥 3 )

 (𝑎30 + 𝑎31 𝑥 + 𝑎32 𝑥 2 + 𝑎33 𝑥 3 ) (applying the definition of  on (𝓊  𝜐))

= ((𝑎10 + 𝑎20 )+ 𝑎30 ) + ((𝑎11 + 𝑎21 ) + 𝑎31 )𝑥 + ((𝑎12 + 𝑎22 ) + 𝑎32 ) 𝑥 2 +


((𝑎13 + 𝑎23 )+ 𝑎33 )𝑥 3 ) (applying the definition of  on (𝓊  𝜐)  𝓌)

But since 𝑎10 , 𝑎11 , 𝑎12 , 𝑎13 , 𝑎20 , 𝑎21 , 𝑎22 , 𝑎23 , 𝑎30 , 𝑎31 , 𝑎32 , 𝑎33 ∈ ℝ, then we can use the
associative property for addition so that:
(𝑎10 + 𝑎20 )+ 𝑎30 = 𝑎10 + (𝑎20 + 𝑎30 ); (𝑎11 + 𝑎21 )+ 𝑎31 = 𝑎11 + (𝑎21 + 𝑎31 );
(𝑎12 + 𝑎22 )+ 𝑎32 = 𝑎12 + (𝑎22 + 𝑎32 ) and (𝑎13 + 𝑎23 )+ 𝑎33 = 𝑎13 + (𝑎23 + 𝑎33 )

So,

(𝓊  𝜐)  𝓌 = ((𝑎10 + 𝑎20 )+ 𝑎30 + ((𝑎11 + 𝑎21 ) + 𝑎31 )𝑥 + ((𝑎12 + 𝑎22 ) + 𝑎32 ) 𝑥 2 +


((𝑎13 + 𝑎23 )+ 𝑎33 )𝑥 3

= (𝑎10 + (𝑎20 + 𝑎30 )) + (𝑎11 + (𝑎21 + 𝑎31 ))𝑥 + (𝑎12 + (𝑎22 + 𝑎32 ))𝑥 2 +
(𝑎13 + (𝑎23 + 𝑎33 ))𝑥 3

= 𝓊  (𝜐  𝓌) (applying the definition of )

Therefore, for any 𝓊, 𝜐, 𝓌 ∈ 𝒫3 , (𝓊  𝜐)  𝓌 = 𝓊  (𝜐  𝓌).

We also need to show that the zero vector is in 𝒫3 , that is, (iv) (Existence of the Zero Vector)
there is a special vector 0𝑃3 ∈ 𝒫3 such that 𝓊  0𝑃3 = 0𝑃3  𝓊 = 𝓊 for all u ∈ 𝒫3 .

Again, let us take an arbitrary element 𝓊 ∈ 𝒫3 where


𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and identify what is the zero vector 0𝑃3 ,
satisfying 𝓊  0𝑃3 = 0𝑃3  𝓊 = 𝓊.

Since 𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ, and if 0𝑃3 ∈ 𝒫3 , then we can
represent 0𝑃3 as: 0𝑃3 = 𝑏10 + 𝑏11 𝑥 + 𝑏12 𝑥 2 + 𝑏13 𝑥 3 with 𝑏10 , … , 𝑏13 ∈ ℝ
So that 𝓊  0𝑃3 = (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 )  (𝑏10 + 𝑏11 𝑥 + 𝑏12 𝑥 2 + 𝑏13 𝑥 3 ) (substituting
the values of
𝓊 and 0v.)

= (𝑎10 + 𝑏10 ) + (𝑎11 + 𝑏11 )𝑥 + (𝑎12 + 𝑏12 )𝑥 2 + (𝑎13 + 𝑏13 )𝑥 3 (applying the
definition of )

and equate this to 𝓊

(𝑎10 + 𝑏10 ) + (𝑎11 + 𝑏11 )𝑥 + (𝑎12 + 𝑏12 )𝑥 2 + (𝑎13 + 𝑏13 )𝑥 3 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3
which implies:
(𝑎10 + 𝑏10 ) = 𝑎10 and since 𝑎10 ∈ ℝ, −𝑎10 ∈ ℝ then
−𝑎10 + (𝑎10 + 𝑏10 ) = −𝑎10 + 𝑎10 adding −𝑎10 to both sides of the equation
(−𝑎10 + 𝑎10 ) + 𝑏10 = 0 LHS: by associative property for addition;
RHS: simplify −𝑎10 + 𝑎10 (definition of additive inverse)
0 + 𝑏10 = 0 LHS: simplify – 𝑎10 + 𝑎10 (definition of additive inverse)
𝑏10 = 0 LHS: simplify 0 + 𝑏10(definition of additive identity)
Similarly, (𝑎11 + 𝑏11 ) = 𝑎11 → 𝑏11 = 0; (𝑎12 + 𝑏12 ) = 𝑎12 → 𝑏12 = 0 and
(𝑎13 + 𝑏13 ) = 𝑎13 → 𝑏13 = 0
Therefore, 0𝑃3 = 𝑏10 + 𝑏11 𝑥 + 𝑏12 𝑥 2 + 𝑏13 𝑥 3 with 𝑏10 = ⋯ = 𝑏13 = 0 ∈ ℝ
= 0 + 0𝑥 + 0𝑥 2 + 0𝑥 3 (substituting the values of 𝑏10 = ⋯ = 𝑏13 = 0)
and 0𝑃3 ∈ 𝒫3 . Since 𝓊, 0𝑃3 ∈ 𝒫3 , then 𝓊  0𝑃3 = 0𝑃3  𝓊 (by property (ii))

Finally, for the vector addition, we need to show that the additive inverse of an element of 𝒫3 is
also in 𝒫3 , i.e., (v) (Existence of Additive Inverse) For every 𝓊 ∈ 𝒫3 , there exists 𝓌 ∈ 𝒫3
such that 𝓊  𝓌 = 0𝑃3 .

Again, let us consider 𝓊 ∈ 𝒫3 , 𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ
and since we have found 0𝑃3 = 0 + 0𝑥 + 0𝑥 2 + 0𝑥 3 ∈ 𝒫3 , let us find 𝓌 such that u  𝓌 = 0𝑃3
and show that 𝓌 ∈ 𝒫3 .

If 𝓌 ∈ 𝒫3 , then it must be of the form

𝓌 = 𝑐10 + 𝑐11 𝑥 + 𝑐12 𝑥 2 + 𝑐13 𝑥 3 with 𝑐10 , … , 𝑐13 ∈ ℝ

Now, we let 𝓊  𝓌 = 0𝑃3 .

(𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) ⊕ (𝑐10 + 𝑐11 𝑥 + 𝑐12 𝑥 2 + 𝑐13 𝑥 3 ) = 0 + 0𝑥 + 0𝑥 2 + 0𝑥 3


(substituting the values of 𝓊, 𝓌 and 0𝑃3 .)

(𝑎10 + 𝑐10 ) + (𝑎11 + 𝑐11 )𝑥 + (𝑎12 + 𝑐12 )𝑥 2 + (𝑎13 + 𝑐13 )𝑥 3 = 0 + 0𝑥 + 0𝑥 2 + 0𝑥 3

Since 𝑎10 , … , 𝑎13 , 𝑐10 , … , 𝑐13 ∈ ℝ, then

𝑎10 + 𝑐10 = 0 → 𝑐10 = −𝑎10 , −𝑎10 = −1(𝑎10 ) = 𝑐10 ∈ ℝ;

𝑎11 + 𝑐11 = 0 → 𝑐11 = −𝑎11 , −𝑎11 = −1(𝑎11 ) = 𝑐11 ∈ ℝ;

𝑎12 + 𝑐12 = 0 → 𝑐12 = −𝑎12 , −𝑎12 = −1(𝑎12 ) = 𝑐12 ∈ ℝ; and

𝑎13 + 𝑐13 = 0 → 𝑐13 = −𝑎13 , −𝑎13 = −1(𝑎13 ) = 𝑐13 ∈ ℝ

(by closure property for multiplication)

Hence, 𝓌 = 𝑐10 + 𝑐11 𝑥 + 𝑐12 𝑥 2 + 𝑐13 𝑥 3 ∈ 𝒫3 with 𝑐10 , … , 𝑐13 ∈ ℝ.

We now prove properties involving the scalar multiplication ∙

First, we prove: ( ∙ i) (Multiplicative Closure) For any 𝓊 ∈ 𝒫3 and c ∈ ℝ, c ∙ 𝓊 ∈ 𝒫3 , that is,
a scalar times a vector is a vector.
Let 𝓊 ∈ 𝒫3 , say, 𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and c ∈ ℝ
We perform c ∙ 𝓊:

c ∙ 𝓊 = c ∙ (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) (substituting the value of 𝓊.)

= (𝑐𝑎10 ) + (𝑐𝑎11 )𝑥 + (𝑐𝑎12 )𝑥 2 + (𝑐𝑎13 )𝑥 3(applying the definition of ∙ )

Since 𝑎10 , … , 𝑎13 ∈ ℝ and c ∈ ℝ, then 𝑐𝑎10, 𝑐𝑎11 , 𝑐𝑎12 , 𝑐𝑎13 ∈ ℝ (by closure property for
multiplication)

and so c ∙ 𝓊 ∈ 𝒫3 .

Next we need to show:


( ∙ ii) (Distributivity) (c+d) ∙ 𝓊 = (c ∙ 𝓊)  (d ∙ 𝓊), where scalar multiplication distributes
over addition of scalars.

Let 𝓊 ∈ 𝒫3 , say, 𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and c, 𝑑 ∈ ℝ

We perform (c+d) ∙ 𝓊:

(c+d) ∙ 𝓊= (c+d) ∙ (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) (substituting the value of 𝓊.)

= (c+d) (𝑎10 ) + (c+d) (𝑎11 )𝑥 + (c+d) (𝑎12 )𝑥 2 + (c+d) (𝑎13 )𝑥 3 (applying the
definition of ∙ )

= (c𝑎10 + d𝑎10 ) + (𝑐𝑎11 + 𝑑𝑎11 )𝑥 + (𝑐𝑎12 + 𝑑𝑎12 )𝑥 2 + (𝑐𝑎13 + d𝑎13 )𝑥 3 (since c,


d, 𝑎10 , 𝑎11 , 𝑎12 , 𝑎13 ∈ ℝ, then distributive property for multiplication over addition
is possible)

= c𝑎10 + (d𝑎10 + 𝑐𝑎11 𝑥) + 𝑑𝑎11 𝑥 + (𝑐𝑎12 𝑥 2 + 𝑑𝑎12 𝑥 2 ) + (𝑐𝑎13 𝑥 3 + d𝑎13 𝑥 3 ) (by


associative property for addition)

= c𝑎10 + (𝑐𝑎11 𝑥 + 𝑑𝑎10 ) + 𝑑𝑎11 𝑥 + (𝑑𝑎12 𝑥 2 + 𝑐𝑎12 𝑥 2 ) + (𝑐𝑎13 𝑥 3 + d𝑎13 𝑥 3 ) (by


commutative property for addition)

= (c𝑎10 + 𝑐𝑎11 𝑥) + (𝑑𝑎10 + 𝑑𝑎11 𝑥 + 𝑑𝑎12 𝑥 2 ) + (𝑐𝑎12 𝑥 2 + 𝑐𝑎13 𝑥 3 )+ d𝑎13 𝑥 3 (by


associative property for addition)

= (c𝑎10 + 𝑐𝑎11 𝑥) + (𝑐𝑎12 𝑥 2 + 𝑐𝑎13 𝑥 3 ) + (𝑑𝑎10 + 𝑑𝑎11 𝑥 + 𝑑𝑎12 𝑥 2 ) + d𝑎13 𝑥 3 (by


commutative property for addition)

= (c𝑎10 + 𝑐𝑎11 𝑥 + 𝑐𝑎12 𝑥 2 + 𝑐𝑎13 𝑥 3 ) + (𝑑𝑎10 + 𝑑𝑎11 𝑥 + 𝑑𝑎12 𝑥 2 + d𝑎13 𝑥 3 ) (by


associative property for addition)

= c(𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) + 𝑑(𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) (factor out c
and d)

= (c ∙ 𝓊)  (d ∙ 𝓊) (by definition of ∙ and  and substituting the value of

𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 by 𝓊)

Next, we will be proving a similar property:

( ∙ iii) (Distributivity) c ∙ (𝓊  𝜐) = (c ∙ 𝓊)  (c ∙ 𝜐), where scalar multiplication distributes


over addition of vectors.

Let us take two arbitrary elements 𝓊, 𝜐 ∈ 𝒫3 where
𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and
𝜐 = 𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 with 𝑎20 , … , 𝑎23 ∈ ℝ and a scalar c ∈ ℝ
and perform c ∙ (𝓊  𝜐):

c ∙ (𝓊  𝜐) = c ∙ ((𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 )  (𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 ))


(substituting the values of 𝓊 and 𝜐.)

= c ∙ ((𝑎10 + 𝑎20 ) + (𝑎11 + 𝑎21 )𝑥 + (𝑎12 + 𝑎22 )𝑥 2 + (𝑎13 + 𝑎23 )𝑥 3 ) (applying the
definition of )

= c(𝑎10 + 𝑎20 ) + 𝑐(𝑎11 + 𝑎21 )𝑥 + 𝑐(𝑎12 + 𝑎22 )𝑥 2 + 𝑐(𝑎13 + 𝑎23 )𝑥 3 (applying the
definition of ∙ )

= (𝑐𝑎10 + 𝑐𝑎20 ) + (𝑐𝑎11 𝑥 + 𝑐𝑎21 𝑥) + (𝑐𝑎12 𝑥 2 + 𝑐𝑎22 𝑥 2 ) + (𝑐𝑎13 𝑥 3 + 𝑐𝑎23 𝑥 3 ) (by


distributive property for multiplication over addition)

= 𝑐𝑎10 + (𝑐𝑎20 + 𝑐𝑎11 𝑥) + 𝑐𝑎21 𝑥 + (𝑐𝑎12 𝑥 2 + 𝑐𝑎22 𝑥 2 ) + (𝑐𝑎13 𝑥 3 + 𝑐𝑎23 𝑥 3 ) (by


associative property for addition)

= 𝑐𝑎10 + (𝑐𝑎11 𝑥 + 𝑐𝑎20 ) + 𝑐𝑎21 𝑥 + (𝑐𝑎22 𝑥 2 + 𝑐𝑎12 𝑥 2 ) + (𝑐𝑎13 𝑥 3 + 𝑐𝑎23 𝑥 3 ) (by


commutative property for addition)

= 𝑐𝑎10 + 𝑐𝑎11 𝑥 + (𝑐𝑎20 + 𝑐𝑎21 𝑥 + 𝑐𝑎22 𝑥 2 ) + (𝑐𝑎12 𝑥 2 + 𝑐𝑎13 𝑥 3 ) + 𝑐𝑎23 𝑥 3 (by


associative property for addition)

= 𝑐𝑎10 + 𝑐𝑎11 𝑥 + (𝑐𝑎12 𝑥 2 + 𝑐𝑎13 𝑥 3 ) + (𝑐𝑎20 + 𝑐𝑎21 𝑥 + 𝑐𝑎22 𝑥 2 ) + 𝑐𝑎23 𝑥 3 (by


commutative property for addition)

= (𝑐𝑎10 + 𝑐𝑎11 𝑥 + 𝑐𝑎12 𝑥 2 + 𝑐𝑎13 𝑥 3 ) + (𝑐𝑎20 + 𝑐𝑎21 𝑥 + 𝑐𝑎22 𝑥 2 + 𝑐𝑎23 𝑥 3 ) (by


associative property for addition)

= 𝑐(𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) + 𝑐(𝑎20 + 𝑎21 𝑥 + 𝑎22 𝑥 2 + 𝑎23 𝑥 3 ) (factor out c)

= (c ∙ 𝓊)  (c ∙ 𝜐) (by substituting values of 𝓊 and 𝜐)

( ∙ iv) (Multiplicative Associativity) (cd) ∙ 𝓊 = c ∙ (d ∙ 𝓊).


Let us take an arbitrary element 𝓊 ∈ 𝒫3 where
𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and scalars c, d ∈ ℝ
and perform (cd) ∙ 𝓊:

(cd) ∙ 𝓊 = (cd) ∙ (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) (substituting the value of 𝓊.)

= (cd) 𝑎10 + (𝑐𝑑)𝑎11 𝑥 + (𝑐𝑑)𝑎12 𝑥 2 + (𝑐𝑑)𝑎13 𝑥 3 (applying the definition of ∙ )

= c(d𝑎10 ) + 𝑐(𝑑𝑎11 𝑥) + 𝑐(𝑑𝑎12 𝑥 2 ) + 𝑐(𝑑𝑎13 𝑥 3 ) (by associative property for


multiplication )

= c(d𝑎10 + 𝑑𝑎11 𝑥 + 𝑑𝑎12 𝑥 2 + 𝑑𝑎13 𝑥 3 ) (factor out c)

= c ∙ (d𝑎10 + 𝑑𝑎11 𝑥 + 𝑑𝑎12 𝑥 2 + 𝑑𝑎13 𝑥 3 ) (by definition of ∙ )

= c ∙ (d(𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 )) (factor out d)

= c ∙ (d ∙ (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 )) (by definition of ∙ )

= c ∙ (d ∙ 𝓊) (substituting 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 by 𝓊)

( ∙ v) (Unity) 1 ∙ 𝓊 = 𝓊 for all 𝓊 ∈V.


Let us take an arbitrary element 𝓊 ∈ 𝒫3 where
𝓊 = 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 with 𝑎10 , … , 𝑎13 ∈ ℝ and the scalar 1 ∈ ℝ
and perform 1 ∙ 𝓊:

1 ∙ 𝓊 = 1 (𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 ) (substituting the value of 𝓊.)

= 1(𝑎10 ) + 1(𝑎11 𝑥) + 1(𝑎12 𝑥 2 ) + 1(𝑎13 𝑥 3 ) (applying the definition of ∙ )

= 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 (multiplying 1 by each of the terms of 𝓊)

= 𝓊 (substituting 𝑎10 + 𝑎11 𝑥 + 𝑎12 𝑥 2 + 𝑎13 𝑥 3 by 𝓊)

This vector space is worthy of attention because these are the polynomial operations familiar from
high school algebra. For instance,

3 ∙ (1 − 2𝑥 + 3𝑥² − 4𝑥³) − 2 ∙ (2 − 3𝑥 + 𝑥² − (1/2)𝑥³) = −1 + 7𝑥² − 11𝑥³.

Although this space is not a subset of any ℝⁿ, there is a sense in which we can think of 𝒫3 as "the
same" as ℝ⁴. If we identify the two spaces' elements in this way:

𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥² + 𝑎3 𝑥³ corresponds to the vector (𝑎0 , 𝑎1 , 𝑎2 , 𝑎3 ) in ℝ⁴,

then the operations also correspond. Here is an example of corresponding additions:

(1 − 2𝑥 + 0𝑥² + 1𝑥³) ⊕ (2 + 3𝑥 + 7𝑥² − 4𝑥³) = 3 + 1𝑥 + 7𝑥² − 3𝑥³

corresponds to (1, −2, 0, 1) + (2, 3, 7, −4) = (3, 1, 7, −3).
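The correspondence between 𝒫3 and ℝ⁴ can be spot-checked in code: adding two polynomials coefficientwise and adding the corresponding 4-tuples give the same result. Python and the helper names are illustrative, not part of the module:

```python
# Sketch: a polynomial a0 + a1*x + a2*x^2 + a3*x^3 corresponds to the
# 4-tuple (a0, a1, a2, a3); addition then agrees on both sides.

def poly_add(p, q):   # addition in P3, acting on coefficient tuples
    return tuple(a + b for a, b in zip(p, q))

def vec_add(u, v):    # the usual addition in R^4
    return tuple(a + b for a, b in zip(u, v))

p = (1, -2, 0, 1)     # 1 - 2x + 0x^2 + 1x^3
q = (2, 3, 7, -4)     # 2 + 3x + 7x^2 - 4x^3

# The same 4-tuple results either way, matching the worked addition above.
assert poly_add(p, q) == vec_add(p, q) == (3, 1, 7, -3)
```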
Example 2: Consider the set V of all ordered triples of real numbers (x, y, z) and define the
operations  and ∙ by:

(x, y, z)  (x’, y’, z’) = (x + x’, y + y’, z + z’) where (x, y, z), (x’, y’, z’) ∈ V

c ∙ (x, y, z) = (cx, y, z) where (x, y, z) ∈ V and c ∈ ℝ

Which among the ten properties of a vector space is/are satisfied?

Solution:
In Example 2, our vectors are ordered triples in ℝ3 . We can say that (0, 0, 0) ∈ 𝑉. Similarly,
the vector (−5, 10, 1⁄2) ∈ 𝑉. But (4, 0) is not an element of 𝑉. Why?
Observe too that in this example, the usual addition of vectors is used, but the scalar
multiplication is different from the usual scalar multiplication of vectors.
Say, for example, if c = 2 and we have a vector v = (1, 2, 3), then c ∙ v = 2 ∙ (1, 2, 3) = (2, 2, 3),
which is not the same as the usual 2(1, 2, 3) = (2, 4, 6).
We are now ready to examine if this set, together with the two operations is a vector space.
We do this by satisfying all the properties or conditions stated in the definition. Again, let us study
them one by one.
(i) (Additive closure) 𝓊  𝜐 ∈ 𝑉, where adding two vectors gives a vector.
Let 𝓊 = (x, y, z), 𝜐 = (x’, y’, z’) ∈ 𝑉 where x, x’, y, y’, z, z’ ∈ ℝ
Then, 𝓊  𝜐 = (x, y, z)  (x’, y’, z’) (substituting values of 𝓊 and 𝜐)
= (x + x’, y + y’, z + z’) (by applying the definition of )
Note that x, x’, y, y’, z, z’ ∈ ℝ. Therefore, x + x’, y + y’, z + z’ ∈ ℝ (by closure property
for addition).
Hence, 𝓊  𝜐 = (x + x’, y + y’, z + z’) ∈ 𝑉.

(ii) (Additive Commutativity) 𝓊  𝜐 = 𝜐  𝓊, where order of addition does not matter.


Again, we consider 𝓊 = (x, y, z) and 𝜐 = (x’, y’, z’) ∈ 𝑉 where x, x’, y, y’, z, z’ ∈ ℝ

Then, 𝓊  𝜐 = (x, y, z)  (x’, y’, z’) (substituting values of 𝓊 and 𝜐)
= (x + x’, y + y’, z + z’) (by applying the definition of )
= (x’ + x, y’ + y, z’ + z) (by commutative property for addition)
= 𝜐  𝓊 (by definition of )
Therefore, 𝓊  𝜐 = 𝜐  𝓊.

(iii) (Additive Associativity) (𝓊 ⊕ 𝜐) ⊕ 𝓌 = 𝓊 ⊕ (𝜐 ⊕ 𝓌), where the grouping when adding
several vectors does not matter.
Again, we consider 𝓊 = (x, y, z), 𝜐 = (x’, y’, z’) and 𝓌 = (x’’, y’’, z’’) ∈ 𝑉 where x, x’,
x’’, y, y’, y’’, z, z’, z’’ ∈ ℝ
Then,
(𝓊  𝜐)  𝓌 = ((x, y, z)  (x’, y’, z’))  (x’’, y’’, z’’) (substituting values of 𝓊,
𝜐 𝑎𝑛𝑑 𝓌)
= (x + x’, y + y’, z + z’)  (x’’, y’’, z’’) (performing (𝓊  𝜐))
= ((x + x’) + x’’, (y + y’) + y’’, (z + z’) + z’’) (performing (𝓊  𝜐)  𝓌)
= (x + (x’ + x’’), y + (y’ + y’’), z + (z’ + z’’)) (by associative property for
addition)
= 𝓊  (𝜐  𝓌) (by definition of )

(iv) (Existence of the Zero Vector) There is a special vector 0v ∈ V such that
𝓊  0v = 0v  𝓊 = 𝓊 for all 𝓊 ∈ V.
Again, we consider 𝓊 = (x, y, z) and 0v = (a, b, c) ∈ 𝑉 where x, y, z, a, b, c ∈ ℝ
and observe what happens with 𝓊  0v = 𝓊:
𝓊  0v = 𝓊 → (x, y, z)  (a, b, c) = (x, y, z) (substituting values of 𝓊 and 0v)
(x + a, y + b, z + c) = (x, y, z) (performing (𝓊  0v)
→ x + a = x; y + b = y; z + c = z (by definition of equal vectors)

from x+a=x
– x + (x + a) = – x + x (add – x to both sides of the equation)

(– x + x) + a = 0 (LHS: Associative Property for Addition; RHS: definition of
additive inverse)
0 + a = 0 (definition of additive inverse)
a = 0 (definition of additive identity)
Similarly, y + b = y → b = 0 and z + c = z → c = 0. Hence, 0v = (a, b, c) = (0, 0, 0) ∈ V (why?)
Now, using property(ii), since 𝓊, 0v ∈ V, then 𝓊  0v = 0v  𝓊 = 𝓊.

(v) (Existence of Additive Inverse) For every 𝓊 ∈ V, there exists 𝓌 ∈ V such that
𝓊  𝓌 = 0v.
Again, we consider 𝓊 = (x, y, z) and use the previous result, 0v = (0, 0, 0) ∈ 𝑉 where x, y, z,
a, b, c ∈ ℝ to find the value of 𝓌 ∈ V by observing what happens with 𝓊  𝓌 = 0v :
If 𝓌 ∈ V, then we can assume 𝓌 = (d, e, f) where 𝑑, 𝑒, 𝑓 ∈ ℝ. So that
𝓊  𝓌 = 0v → (x, y, z)  (d, e, f) = (0, 0, 0) (by substituting values of 𝓊, 𝓌 and 0v)
(x + d, y + e, z + f) = (0, 0, 0) (performing (𝓊  𝓌)
→ x + d = 0; y + e = 0; z + f = 0 (by definition of equal vectors)

from x+d=0
– x + (x + d) = – x + 0 (add – x to both sides of the equation)
(– x + x) + d = – x (LHS: Associative Property for Addition; RHS: definition of
additive identity)
0 + d = – x (definition of additive inverse)
d = – x (definition of additive identity)
since x ∈ ℝ, then d = – x = (– 1)(x) ∈ ℝ (by closure property for multiplication)
Similarly, y + e = 0 → e = – y and z + f = 0 → f = – z where e, f ∈ ℝ
Hence, 𝓌 = (d, e, f) = (– x, – y, – z) ∈ V (why?)

( ∙ i) (Multiplicative Closure) c ∙ 𝓊 ∈V, where scalar times a vector is a vector.


Let 𝓊 = (x, y, z) ∈ 𝑉 where x, y, z ∈ ℝ. Also, we let c ∈ ℝ.
Then, c ∙ 𝓊 = c ∙ (x, y, z) (by substituting value of 𝓊)

= (cx, y, z) (by definition of ∙ )
Since c, x ∈ ℝ, then by closure property for multiplication, cx ∈ ℝ.
So, c ∙ 𝓊 = (cx, y, z) ∈V

( ∙ ii) (Distributivity) (c+d) ∙ 𝓊 = (c ∙ 𝓊)  (d ∙ 𝓊), where scalar multiplication distributes


over addition of scalars.
Let 𝓊= (x, y, z) ∈ V where x, y, z ∈ ℝ and scalars c, d ∈ ℝ
Then (c+d) ∙ 𝓊 = ((c+d)x, y, z) (by definition of ∙ )
= (cx+dx, y, z) (by distributive property for multiplication over addition)
Now, let us consider (c ∙ 𝓊)  (d ∙ 𝓊)
(c ∙ 𝓊)  (d ∙ 𝓊) = (cx, y, z)  (dx, y, z) (by definition of ∙ )
= (cx + dx, y + y, z + z) (by definition of )
= (cx + dx, 2y, 2z) (by simplifying y + y, z + z)
Observe that (c+d) ∙ 𝓊 ≠ (c ∙ 𝓊) ⊕ (d ∙ 𝓊) in general (since y ≠ 2y and z ≠ 2z whenever y and
z are nonzero).
Hence, V, together with the defined operations ⊕ and ∙ , is not a vector space, since property
( ∙ ii) does not hold.
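Example 2's failure of property ( ∙ ii) can be seen concretely with a quick check; Python and these function names are illustrative only:

```python
# Sketch of Example 2: triples with the usual addition but the nonstandard
# scalar multiplication c . (x, y, z) = (cx, y, z).

def vadd(u, v):
    return tuple(a + b for a, b in zip(u, v))

def odd_smul(c, u):
    x, y, z = u
    return (c * x, y, z)      # only the first coordinate is scaled

u = (1, 2, 3)
c, d = 2, 5

lhs = odd_smul(c + d, u)                       # (c+d) . u
rhs = vadd(odd_smul(c, u), odd_smul(d, u))     # (c . u) + (d . u)
print(lhs, rhs, lhs == rhs)                    # the two sides disagree
```

Here lhs = (7, 2, 3) while rhs = (7, 4, 6), so property ( ∙ ii) fails for this choice of u, c, and d.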

Let us continue identifying what other properties are not satisfied:


( ∙ iii) (Distributivity) c ∙ (𝓊  𝜐) = (c ∙ 𝓊)  (c ∙ 𝜐), where scalar multiplication distributes
over addition of vectors.
Let 𝓊= (x, y, z) and 𝜐 = (x’, y’, z’) ∈ V where x, x’, y, y’, z, z’ ∈ ℝ and scalar c∈ ℝ
Then, c ∙ (𝓊  𝜐) = c ∙ ((x, y, z)  (x’, y’, z’)) (by substituting values of 𝓊 and 𝜐)
= c ∙ (x + x’, y + y’, z + z’) (by definition of )
= (c(x + x’), y + y’, z + z’) (by definition of ∙ )
= (cx + cx’, y + y’, z + z’) (by distributive property for multiplication over
addition)
We now consider (c ∙ 𝓊)  (c ∙ 𝜐)
(c ∙ 𝓊)  (c ∙ 𝜐) = (c ∙ (x, y, z))  (c ∙ (x’, y’, z’)) (by substituting values of 𝓊 and 𝜐)
= ((cx, y, z))  ((cx’, y’, z’)) (by definition of ∙ )
= (cx + cx’, y + y’, z + z’) (by definition of ⊕)

Therefore, c ∙ (𝓊  𝜐) = (c ∙ 𝓊)  (c ∙ 𝜐)

( ∙ iv) (Multiplicative Associativity) (cd) ∙ 𝓊 = c ∙ (d ∙ 𝓊).


Let 𝓊= (x, y, z) ∈ V where x, y, z ∈ ℝ and scalars c, d ∈ ℝ
Then, (cd) ∙ 𝓊 = (cd) ∙ (x, y, z) (by substituting value of 𝓊)
= (cdx, y, z) (by definition of ∙ )
Also, c ∙ (d ∙ 𝓊) = c ∙ (d ∙ (x, y, z)) (by substituting value of 𝓊)
= c ∙ (dx, y, z) (by definition of ∙ )
= (cdx, y, z) (by definition of ∙ )
Therefore, (cd) ∙ 𝓊 = c ∙ (d ∙ 𝓊).

( ∙ v) (Unity) 1 ∙ 𝓊 = 𝓊 for all 𝓊 ∈V.


Let 𝓊= (x, y, z) ∈ V where x, y, z ∈ ℝ and scalar 1 ∈ ℝ
Then 1 ∙ 𝓊 = 1 ∙ (x, y, z) (by substituting value of 𝓊)

= (1x, y, z) (by definition of ∙ )


= (x, y, z) (simplifying 1x)
=𝓊
Therefore, 1 ∙ 𝓊 = 𝓊 for all 𝓊 ∈V. But since property ( ∙ ii) does not hold, then V,
together with the defined operations  and ∙ , is not a vector space.

Example 3: Consider the set M23 of all 2x3 matrices under the usual operations of matrix
addition and scalar multiplication. Show that M23 is a vector space.

Solution:
In this example, the usual matrix addition and scalar multiplication are the operations used.
Say, for example, we have vectors 𝓊 and 𝜐 in M23 and a scalar x, where 𝓊 = [a b c; d e f]
and 𝜐 = [g h i; j k l] (writing a 2×3 matrix row by row, with a semicolon separating the rows).
Then
𝓊 ⊕ 𝜐 = [a+g b+h c+i; d+j e+k f+l] and x ∙ 𝓊 = [xa xb xc; xd xe xf].

We are now ready to examine if this set, together with the two operations is a vector space.
We do this by satisfying all the properties or conditions stated in the definition. Again, let us study
them one by one.
(i) (Additive Closure) 𝓊 ⊕ 𝜐 ∈ M23, where adding two vectors gives a vector.
Let 𝓊 = [a b c; d e f], 𝜐 = [g h i; j k l] ∈ M23 where a, b, c, d, e, f, g, h, i, j, k, l ∈ ℝ.
Then, 𝓊 ⊕ 𝜐 = [a b c; d e f] ⊕ [g h i; j k l] (substituting values of 𝓊 and 𝜐)
= [a+g b+h c+i; d+j e+k f+l] (by applying the definition of ⊕)
Note that a, b, c, d, e, f, g, h, i, j, k, l ∈ ℝ. Therefore, a + g, b + h, c + i, d + j, e + k, and f + l
∈ ℝ (by closure property for addition).
Hence, 𝓊 ⊕ 𝜐 = [a+g b+h c+i; d+j e+k f+l] ∈ M23.

(ii) (Additive Commutativity) 𝓊 ⊕ 𝜐 = 𝜐 ⊕ 𝓊, where the order of addition does not matter.
Again, we consider 𝓊 = [a b c; d e f] and 𝜐 = [g h i; j k l] ∈ M23
where a, b, c, d, e, f, g, h, i, j, k, l ∈ ℝ.
Then, 𝓊 ⊕ 𝜐 = [a b c; d e f] ⊕ [g h i; j k l] (substituting values of 𝓊 and 𝜐)
= [a+g b+h c+i; d+j e+k f+l] (by applying the definition of ⊕)
= [g+a h+b i+c; j+d k+e l+f] (by commutative property for addition)
= 𝜐 ⊕ 𝓊 (by definition of ⊕)
Therefore, 𝓊 ⊕ 𝜐 = 𝜐 ⊕ 𝓊.

(iii) (Additive Associativity) (𝓊 ⊕ 𝜐) ⊕ 𝓌 = 𝓊 ⊕ (𝜐 ⊕ 𝓌), where the grouping when adding
several vectors does not matter.
Again, we consider 𝓊 = [a b c; d e f], 𝜐 = [g h i; j k l] and 𝓌 = [m n o; p q r] ∈ M23
where a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r ∈ ℝ.
Then,
(𝓊 ⊕ 𝜐) ⊕ 𝓌 = ([a b c; d e f] ⊕ [g h i; j k l]) ⊕ [m n o; p q r] (substituting values of 𝓊,
𝜐 and 𝓌)
= [a+g b+h c+i; d+j e+k f+l] ⊕ [m n o; p q r] (performing 𝓊 ⊕ 𝜐)
= [(a+g)+m (b+h)+n (c+i)+o; (d+j)+p (e+k)+q (f+l)+r] (performing (𝓊 ⊕ 𝜐) ⊕ 𝓌)
= [a+(g+m) b+(h+n) c+(i+o); d+(j+p) e+(k+q) f+(l+r)] (by associative property for addition)
= 𝓊 ⊕ (𝜐 ⊕ 𝓌) (by definition of ⊕)

(iv) (Existence of the Zero Vector) There is a special vector 0v ∈ M23 such that
𝓊 ⊕ 0v = 0v ⊕ 𝓊 = 𝓊 for all 𝓊 ∈ M23.
Again, we consider 𝓊 = [a b c; d e f] and 0v = [s t u; v w y] ∈ M23 where a, b, c, d, e, f, s, t,
u, v, w, y ∈ ℝ, and observe what happens with 𝓊 ⊕ 0v = 𝓊:
𝓊 ⊕ 0v = 𝓊 → [a b c; d e f] ⊕ [s t u; v w y] = [a b c; d e f] (substituting values of 𝓊 and 0v)
[a+s b+t c+u; d+v e+w f+y] = [a b c; d e f] (performing 𝓊 ⊕ 0v)
→ a + s = a; b + t = b; c + u = c; d + v = d; e + w = e; f + y = f (by definition of equal matrices)
From a + s = a:
– a + (a + s) = – a + a (add – a to both sides of the equation)
(– a + a) + s = 0 (LHS: associative property for addition; RHS: definition of additive inverse)
0 + s = 0 (definition of additive inverse)
s = 0 (definition of additive identity)
Similarly, b + t = b → t = 0; c + u = c → u = 0; d + v = d → v = 0; e + w = e → w = 0; and
f + y = f → y = 0. Hence, 0v = [s t u; v w y] = [0 0 0; 0 0 0] ∈ M23.
Now, using property (ii), since 𝓊, 0v ∈ M23, then 𝓊 ⊕ 0v = 0v ⊕ 𝓊 = 𝓊.

(v) (Existence of Additive Inverse) For every 𝓊 ∈ M23, there exists 𝓌 ∈ M23 such that
𝓊 ⊕ 𝓌 = 0v.
Again, we consider 𝓊 = [a b c; d e f] and use the previous result, 0v = [0 0 0; 0 0 0] ∈ M23,
where a, b, c, d, e, f ∈ ℝ, to find the value of 𝓌 ∈ M23 by observing what happens with
𝓊 ⊕ 𝓌 = 0v :
If 𝓌 ∈ M23, then we can assume 𝓌 = [g h i; j k l] where g, h, i, j, k, l ∈ ℝ. So that
𝓊 ⊕ 𝓌 = 0v → [a b c; d e f] ⊕ [g h i; j k l] = [0 0 0; 0 0 0] (by substituting values of 𝓊, 𝓌
and 0v)
[a+g b+h c+i; d+j e+k f+l] = [0 0 0; 0 0 0] (performing 𝓊 ⊕ 𝓌)
→ a + g = 0; b + h = 0; c + i = 0; d + j = 0; e + k = 0; f + l = 0 (by definition of equal matrices)
From a + g = 0:
– a + (a + g) = – a + 0 (add – a to both sides of the equation)
(– a + a) + g = – a (LHS: associative property for addition; RHS: definition of additive identity)
0 + g = – a (definition of additive inverse)
g = – a (definition of additive identity)
Since a ∈ ℝ, then g = – a = (– 1)(a) ∈ ℝ (by closure property for multiplication).
Similarly, b + h = 0 → h = – b; c + i = 0 → i = – c; d + j = 0 → j = – d;
e + k = 0 → k = – e; and f + l = 0 → l = – f where g, h, i, j, k, l ∈ ℝ.
Hence, 𝓌 = [g h i; j k l] = [–a –b –c; –d –e –f] ∈ M23.

( ∙ i) (Multiplicative Closure) x ∙ 𝓊 ∈ M23, where a scalar times a vector is a vector.
Let 𝓊 = [a b c; d e f] ∈ M23 where a, b, c, d, e, f ∈ ℝ. Also, we let x ∈ ℝ.
Then, x ∙ 𝓊 = x ∙ [a b c; d e f] (by substituting value of 𝓊)
= [xa xb xc; xd xe xf] (by definition of ∙ )
Since a, b, c, d, e, f, x ∈ ℝ, then by closure property for multiplication, xa, xb, xc, xd, xe,
xf ∈ ℝ.
So, x ∙ 𝓊 = [xa xb xc; xd xe xf] ∈ M23.

( ∙ ii) (Distributivity) (x+y) ∙ 𝓊 = (x ∙ 𝓊) ⊕ (y ∙ 𝓊), where scalar multiplication distributes
over addition of scalars.
Let 𝓊 = [a b c; d e f] ∈ M23 where a, b, c, d, e, f ∈ ℝ and scalars x, y ∈ ℝ.
Then (x+y) ∙ 𝓊 = (x+y) ∙ [a b c; d e f] (by substituting the value of 𝓊)
= [(x+y)a (x+y)b (x+y)c; (x+y)d (x+y)e (x+y)f] (by performing ∙ )
= [xa+ya xb+yb xc+yc; xd+yd xe+ye xf+yf] (by distributive property for multiplication over
addition)
= [xa xb xc; xd xe xf] ⊕ [ya yb yc; yd ye yf] (by definition of ⊕)
= (x ∙ [a b c; d e f]) ⊕ (y ∙ [a b c; d e f]) (by definition of ∙ )
= (x ∙ 𝓊) ⊕ (y ∙ 𝓊) (by substituting the value of 𝓊)

( ∙ iii) (Distributivity) x ∙ (𝓊 ⊕ 𝜐) = (x ∙ 𝓊) ⊕ (x ∙ 𝜐), where scalar multiplication distributes
over addition of vectors.
Let 𝓊 = [a b c; d e f] and 𝜐 = [g h i; j k l] ∈ M23 where a, b, c, d, e, f, g, h, i, j, k, l ∈ ℝ and
scalar x ∈ ℝ.
Then, x ∙ (𝓊 ⊕ 𝜐) = x ∙ ([a b c; d e f] ⊕ [g h i; j k l]) (by substituting values of 𝓊 and 𝜐)
= x ∙ [a+g b+h c+i; d+j e+k f+l] (by definition of ⊕)
= [x(a+g) x(b+h) x(c+i); x(d+j) x(e+k) x(f+l)] (by definition of ∙ )
= [xa+xg xb+xh xc+xi; xd+xj xe+xk xf+xl] (by distributive property for multiplication over
addition)
= [xa xb xc; xd xe xf] ⊕ [xg xh xi; xj xk xl] (by definition of ⊕)
= (x ∙ [a b c; d e f]) ⊕ (x ∙ [g h i; j k l]) (by definition of ∙ )
= (x ∙ 𝓊) ⊕ (x ∙ 𝜐) (by substituting the values of 𝓊 and 𝜐)

( ∙ iv) (Multiplicative Associativity) (xy) ∙ 𝓊 = x ∙ (y ∙ 𝓊).
Let 𝓊 = [a b c; d e f] ∈ M23 where a, b, c, d, e, f ∈ ℝ and scalars x, y ∈ ℝ.
Then, (xy) ∙ 𝓊 = (xy) ∙ [a b c; d e f] (by substituting value of 𝓊)
= [(xy)a (xy)b (xy)c; (xy)d (xy)e (xy)f] (by definition of ∙ )
= [x(ya) x(yb) x(yc); x(yd) x(ye) x(yf)] (by associative property for multiplication)
= x ∙ [ya yb yc; yd ye yf] (by definition of ∙ )
= x ∙ (y ∙ [a b c; d e f]) (by definition of ∙ )
Therefore, (xy) ∙ 𝓊 = x ∙ (y ∙ 𝓊).

( ∙ v) (Unity) 1 ∙ 𝓊 = 𝓊 for all 𝓊 ∈ M23.
Let 𝓊 = [a b c; d e f] ∈ M23 where a, b, c, d, e, f ∈ ℝ and scalar 1 ∈ ℝ.
Then 1 ∙ 𝓊 = 1 ∙ [a b c; d e f] (by substituting value of 𝓊)
= [1a 1b 1c; 1d 1e 1f] (by definition of ∙ )
= [a b c; d e f] (simplifying 1a, 1b, 1c, 1d, 1e, 1f)
= 𝓊
Therefore, 1 ∙ 𝓊 = 𝓊 for all 𝓊 ∈ M23. Hence, M23, together with the defined operations ⊕
and ∙ , is a vector space.
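The M23 operations from Example 3 can be sketched with nested lists; here is a spot check of distributivity ( ∙ ii) on one matrix (Python and the names `madd`/`msmul` are illustrative choices, not part of the module):

```python
# Sketch: a 2x3 matrix as a list of two rows; the usual entrywise
# addition and scalar multiplication from Example 3.

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msmul(x, A):
    return [[x * a for a in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
x, y = 2, -1

# (x + y) . A and (x . A) + (y . A) agree, as property ( . ii) requires.
assert msmul(x + y, A) == madd(msmul(x, A), msmul(y, A))
```

Unlike Example 2, here the same check passes for any choice of x, y, and A, because both operations act on every entry.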

These three examples should have presented you with a clear understanding of the ten properties
of a vector space. In addition, the following theorem presents several useful properties common
to all vector spaces:

THEOREM: In any vector space V,
(1) 0 ∙ 𝓊 = 0𝑉 for every 𝓊 ∈ 𝑉.
(2) 𝑐 ∙ 0𝑉 = 0𝑉 for every scalar c.
(3) If 𝑐 ∙ 𝓊 = 0𝑉 , then 𝑐 = 0 or 𝓊 = 0𝑉 .
(4) −1 ∙ 𝓊 = −𝓊 for every 𝓊 ∈ 𝑉.

PROOF:

For (1), note that 𝓊 = 1 ∙ 𝓊              (by property ( ∙ v))
                    = (1 + 0) ∙ 𝓊         (by rewriting 1 as 1 + 0)
                    = (1 ∙ 𝓊) ⊕ (0 ∙ 𝓊)   (by property ( ∙ ii))
                    = 𝓊 ⊕ 0 ∙ 𝓊           (by property ( ∙ v))

−𝓊 ⊕ 𝓊 = −𝓊 ⊕ (𝓊 ⊕ 0 ∙ 𝓊)   (by adding the additive inverse (property (v)) of 𝓊 to both sides of the equation)

0𝑉 = (−𝓊 ⊕ 𝓊) ⊕ 0 ∙ 𝓊       (LHS: by property (v); RHS: by property (iii))

0𝑉 = 0𝑉 ⊕ 0 ∙ 𝓊             (by property (v))

0𝑉 = 0 ∙ 𝓊 (by property (iv))

For (2), 𝑐 ∙ 0𝑉 = 𝑐 ∙ (0 ∙ 0𝑉 ) (using the result in Theorem (1))

= (𝑐0) ∙ 0𝑉 (by property ( ∙ iv))

= 0 ∙ 0𝑉 (simplifying c0)

= 0𝑉 (by Theorem (1))

For (3), suppose 𝑐 ∙ 𝓊 = 0𝑉 . If 𝑐 = 0, we are done. If 𝑐 ≠ 0, then 1/𝑐 ∈ ℝ.

So that (1/𝑐) ∙ (𝑐 ∙ 𝓊) = (1/𝑐) ∙ 0𝑉   (by multiplying 1/𝑐 to both sides of the equation)
        ((1/𝑐)𝑐) ∙ 𝓊 = 0𝑉              (LHS: by property ( ∙ iv); RHS: by Theorem (2))
        1 ∙ 𝓊 = 0𝑉                     (by definition of multiplicative inverse)
        𝓊 = 0𝑉                         (by property ( ∙ v))

Finally, for (4), observe that

−1 ∙ 𝓊 ⊕ 𝓊 = −1 ∙ 𝓊 ⊕ 1 ∙ 𝓊   (by property ( ∙ v))
           = (−1 + 1) ∙ 𝓊      (by property ( ∙ ii))
           = 0 ∙ 𝓊             (by definition of additive inverse)
           = 0𝑉                (by Theorem (1))

(−1 ∙ 𝓊 ⊕ 𝓊) ⊕ −𝓊 = 0𝑉 ⊕ −𝓊   (by adding the additive inverse (property (v)) of 𝓊 to both sides of the equation)

−1 ∙ 𝓊 ⊕ (𝓊 ⊕ −𝓊) = −𝓊        (LHS: by property (iii); RHS: by property (iv))

−1 ∙ 𝓊 ⊕ 0𝑉 = −𝓊              (by property (v))

−1 ∙ 𝓊 = −𝓊                   (by property (iv))

Remark: The sets ℝ𝑛 , 𝑃𝑛 and 𝑀𝑚𝑛 where m,n ∈ ℤ+ together with the associated usual addition
and scalar multiplication on the given set of vectors satisfy all the ten properties as
indicated in the definition of a vector space. Hence, ℝ𝑛 , 𝑃𝑛 and 𝑀𝑚𝑛 where m,n ∈ ℤ+
together with the associated binary operations are vector spaces. These are the common
examples of vector spaces.

4.2 Subspaces

In this section, we shall study subsets of a vector space V that are themselves vector spaces
under the operations of V. The theorem below shows that only two of the ten properties need to
be checked to conclude that such a subset of V is a subspace of V.

DEFINITION: SUBSPACE of a Vector Space

Let V be a vector space and W a nonempty subset of V. If W is a vector space with


respect to the operations in V, then W is called a subspace of V.

REMARK: We can think of a subspace as a vector space within a bigger vector space. Since a
subspace is itself a vector space, then a subspace must satisfy all the ten (10)
properties of a vector space. But the following theorem will provide an easier way
of identifying whether a subset of a vector space V is a subspace of V or not.

THEOREM: Let V be a vector space with operations ⊕ and ∙ and let W be a nonempty subset
of V. Then W is a subspace of V if and only if the following conditions hold:
(i) (Additive Closure) If 𝓊, 𝜐 ∈ 𝑊, then 𝓊 ⊕ 𝜐 ∈ 𝑊.
( ∙ i) (Multiplicative Closure) If c ∈ ℝ, 𝓊 ∈ 𝑊, then c ∙ 𝓊 ∈ W.

Example 1: Which of the following subsets of ℝ4 are subspaces of ℝ4 ? The set of all vectors of
the form (a) (a, b, c, d) where a – b = 2
(b) (a, b, c, d) where c = a + 2b and d = a – 3b
(c) (a, b, c, d) where a = 0 and b = – d

Observe that in this example, our vector space is ℝ4 with the usual addition and usual scalar
multiplication of vectors. Given the subsets of ℝ4 , we use the theorem to identify which among
them is/are subspaces of ℝ4 .

Solution: (a) We are given the set of all vectors of the form (a, b, c, d) where a – b = 2.

If we let W1 be this set, then (a, b, c, d) = (6, 4, –1, 0) ∈ W1 since a – b = 6 – 4 = 2.
Also, (–6, –8, 4, 5.25) ∈ W1 but (1, 2, 3, 4) ∉ W1. Why?

Now that we know the elements of W1, let us verify if W1 is a subspace of ℝ4 .


(i) (Additive closure) If 𝓊, 𝜐 ∈ W1, then 𝓊 ⊕ 𝜐 ∈ W1.
Let 𝓊 = (a1, b1, c1, d1); 𝜐 = (a2, b2, c2, d2) ∈ W1
where a1, b1, c1, d1, a2, b2, c2, d2 ∈ ℝ and a1 − b1 = 2; a2 − b2 = 2.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = (a1, b1, c1, d1) ⊕ (a2, b2, c2, d2)    (by substituting the values of 𝓊 and 𝜐)
      = (a1 + a2, b1 + b2, c1 + c2, d1 + d2)   (by performing ⊕)

To satisfy condition (i), we must show that 𝓊 ⊕ 𝜐 ∈ W1, that is,
(a1 + a2) − (b1 + b2) = 2.
Since 𝓊 ⊕ 𝜐 = (a1 + a2, b1 + b2, c1 + c2, d1 + d2) and a1 − b1 = 2; a2 − b2 = 2,
then
(a1 + a2) − (b1 + b2) = a1 + a2 + −b1 + −b2     (by definition of subtraction)
                      = a1 + (a2 + −b1) + −b2   (by associative property for addition)
                      = a1 + (−b1 + a2) + −b2   (by commutative property for addition)
                      = (a1 + −b1) + (a2 + −b2) (by associative property for addition)
                      = (a1 − b1) + (a2 − b2)   (by definition of subtraction)
                      = 2 + 2   (by substituting the values of (a1 − b1) and (a2 − b2))
                      = 4.
But since (a1 + a2) − (b1 + b2) = 4 ≠ 2, then 𝓊 ⊕ 𝜐 ∉ W1. This means
that W1 is not a subspace of ℝ4 .

For the sake of discussion, let us look into the second condition and see whether closure
property for ∙ holds.

( ∙ i) (Multiplicative Closure) If x ∈ ℝ, 𝓊 ∈ W1, then x ∙ 𝓊 ∈ W1.

Let 𝓊 = (a1, b1, c1, d1) ∈ W1 where a1, b1, c1, d1 ∈ ℝ and a1 − b1 = 2, and scalar x ∈ ℝ.
We observe x ∙ 𝓊:
x ∙ 𝓊 = x ∙ (a1, b1, c1, d1)   (by substituting the value of 𝓊)
      = (xa1, xb1, xc1, xd1)   (by performing ∙ )

To satisfy condition ( ∙ i), we must show that x ∙ 𝓊 ∈ W1, that is, xa1 − xb1 = 2.
Since x ∙ 𝓊 = (xa1, xb1, xc1, xd1) and a1 − b1 = 2, then
xa1 − xb1 = x(a1 − b1)   (by distributive property for multiplication over addition)
          = x(2)         (by substituting the value of a1 − b1)
          = 2x           (by commutative property for multiplication)
Hence, xa1 − xb1 = 2x, which equals 2 only when x = 1. So x ∙ 𝓊 ∉ W1 in general. In this example, both
conditions fail, so W1 is not a subspace of ℝ4 .

(b) This time we let W2 be a subset of ℝ4 with (a, b, c, d) ∈ W2 if c = a + 2b and d = a – 3b.
Here, the third and fourth components of the elements of W2 are dependent on the
first two components. In particular, (a, b, c, d) = (1, 2, 5, –5) ∈ W2 since
c = a + 2b or 5 = 1 + 2(2) and d = a – 3b or –5 = 1 – 3(2). But (3, 1, 0, 2) ∉ W2. Why?

Now that we know the elements of W2, let us verify if W2 is a subspace of ℝ4 .

(i) (Additive closure) If 𝓊, 𝜐 ∈ W2, then 𝓊 ⊕ 𝜐 ∈ W2.
Let 𝓊 = (a1, b1, c1, d1); 𝜐 = (a2, b2, c2, d2) ∈ W2
where a1, b1, c1, d1, a2, b2, c2, d2 ∈ ℝ and
c1 = a1 + 2b1, d1 = a1 – 3b1, c2 = a2 + 2b2 and d2 = a2 – 3b2.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = (a1, b1, c1, d1) ⊕ (a2, b2, c2, d2)   (by substituting the values of 𝓊 and 𝜐)

      = (a1 + a2, b1 + b2, c1 + c2, d1 + d2)   (by performing ⊕)
To satisfy condition (i), we must show that 𝓊 ⊕ 𝜐 ∈ W2, that is,
c1 + c2 = (a1 + a2) + 2(b1 + b2) and d1 + d2 = (a1 + a2) – 3(b1 + b2).
Since 𝓊 ⊕ 𝜐 = (a1 + a2, b1 + b2, c1 + c2, d1 + d2) and
c1 = a1 + 2b1, d1 = a1 – 3b1, c2 = a2 + 2b2 and d2 = a2 – 3b2,
then
c1 + c2 = (a1 + 2b1) + (a2 + 2b2)   (by substituting the values of c1 and c2)
        = a1 + (2b1 + a2) + 2b2     (by associative property for addition)
        = a1 + (a2 + 2b1) + 2b2     (by commutative property for addition)
        = (a1 + a2) + (2b1 + 2b2)   (by associative property for addition)
        = (a1 + a2) + 2(b1 + b2)    (by factoring out 2 (distributive property for multiplication over addition))
Also,
d1 + d2 = (a1 – 3b1) + (a2 – 3b2)       (by substituting the values of d1 and d2)
        = (a1 + −3b1) + (a2 + −3b2)     (by definition of subtraction)
        = a1 + (−3b1 + a2) + −3b2       (by associative property for addition)
        = a1 + (a2 + −3b1) + −3b2       (by commutative property for addition)
        = (a1 + a2) + (−3b1 + −3b2)     (by associative property for addition)
        = (a1 + a2) + −3(b1 + b2)       (by factoring out −3)
        = (a1 + a2) − 3(b1 + b2)        (by definition of subtraction)

Therefore, 𝓊 ⊕ 𝜐 ∈ W2.

Next, we look into the second condition and see whether closure property for ∙ holds.
( ∙ i) (Multiplicative Closure) If x ∈ ℝ, 𝓊 ∈W2, then x ∙ 𝓊 ∈W2.
Let 𝓊 = (𝑎1 , 𝑏1 , 𝑐1 , 𝑑1 ) ∈ W2
where 𝑎1 , 𝑏1 , 𝑐1 , 𝑑1 ∈ ℝ and c1 = a1 + 2b1 and d1 = a1 – 3b1
We observe x ∙ 𝓊:

x ∙ 𝓊 = x ∙ (𝑎1 , 𝑏1 , 𝑐1 , 𝑑1 ) (by substituting the value of 𝓊)
= (𝑥𝑎1 , 𝑥𝑏1 , 𝑥𝑐1 , 𝑥𝑑1 ) (by performing ∙ )

To satisfy condition ( ∙ i), we must show that x ∙ 𝓊 ∈ W2, that is,
xc1 = xa1 + 2xb1 and xd1 = xa1 – 3xb1.
Since x ∙ 𝓊 = (𝑥𝑎1 , 𝑥𝑏1 , 𝑥𝑐1 , 𝑥𝑑1 ) and c1 = a1 + 2b1 and d1 = a1 – 3b1, then
𝑥𝑐1 = 𝑥(a1 + 2b1) (by substituting the value of c1)
= 𝑥a1+ 2𝑥b1 (by distributive property for multiplication over addition)
and
xd1 = x(a1 – 3b1) (by substituting the value of d1)
= x(a1 + – 3b1) (by definition of subtraction)
= xa1 + – 3xb1 (by distributive property for multiplication over addition)
= xa1 – 3xb1 (by definition of subtraction)

Hence, 𝑥 ∙ 𝓊 ∈W2. In this example, both conditions were satisfied making W2 a subspace
of ℝ4 .

(c) We let W3 be the subset of ℝ4 with (a, b, c, d) ∈ W3 if a = 0 and b = –d.

Here, the first component (a) of the elements of W3 is the constant 0, and the second
component (b) is the additive inverse of the fourth component (d).
In particular, (a, b, c, d) = (0, –2, 5, 2) ∈ W3. Also, (a, b, c, d) = (0, ½, 0, –½) ∈ W3.
But (3, 1, 0, 2) ∉ W3. Why?
Now that we know the elements of W3, let us verify if W3 is a subspace of ℝ4 .
(i) (Additive closure) If 𝓊, 𝜐 ∈ W3, then 𝓊 ⊕ 𝜐 ∈ W3.
Let 𝓊 = (a1, b1, c1, d1); 𝜐 = (a2, b2, c2, d2) ∈ W3
where a1, b1, c1, d1, a2, b2, c2, d2 ∈ ℝ and a1 = 0; b1 = −d1; a2 = 0; b2 = −d2.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = (a1, b1, c1, d1) ⊕ (a2, b2, c2, d2)    (by substituting the values of 𝓊 and 𝜐)
      = (a1 + a2, b1 + b2, c1 + c2, d1 + d2)   (by performing ⊕)

To satisfy condition(i), we must have to show that 𝓊  𝜐 ∈ W3. To do this, we
must show: 𝑎1 + 𝑎2 = 0; and
𝑏1 + 𝑏2 = −(𝑑1 + 𝑑2 )
Since 𝓊  𝜐 = (𝑎1 + 𝑎2 , 𝑏1 + 𝑏2 , 𝑐1 + 𝑐2 , 𝑑1 + 𝑑2 ) and
𝑎1 + 𝑎2 = 0 + 0 = 0
𝑏1 + 𝑏2 = −𝑑1 + −𝑑2
= −(𝑑1 + 𝑑2 )
Therefore, 𝓊  𝜐 ∈ W3.

Next, we look into the second condition and see whether closure property for ∙ holds.
( ∙ i) (Multiplicative Closure) If x ∈ ℝ, 𝓊 ∈W3, then x ∙ 𝓊 ∈W3.
Let 𝓊 = (𝑎1 , 𝑏1 , 𝑐1 , 𝑑1 ) ∈ W3
where 𝑎1 , 𝑏1 , 𝑐1 , 𝑑1 ∈ ℝ and 𝑎1 = 0; 𝑏1 = −𝑑1
We observe x ∙ 𝓊:
x ∙ 𝓊 = x ∙ (𝑎1 , 𝑏1 , 𝑐1 , 𝑑1 ) (by substituting the value of 𝓊)
= (𝑥𝑎1 , 𝑥𝑏1 , 𝑥𝑐1 , 𝑥𝑑1 ) (by performing ∙ )
To satisfy condition ( ∙ i) , we must have to show that 𝑥 ∙ 𝓊 ∈W3. To do this, we must
show: 𝑥𝑎1 = 0; 𝑥𝑏1 = −𝑥𝑑1
𝑥𝑎1 = 𝑥(0) = 0
𝑥𝑏1 = 𝑥(−𝑑1 ) = −𝑥𝑑1
Hence, 𝑥 ∙ 𝓊 ∈W3. In this example, both conditions were satisfied making W3 a subspace
of ℝ4 .
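The three outcomes of Example 1 can be illustrated numerically. The following Python sketch (our own illustration; the membership helpers `in_W1`, `in_W2`, `in_W3` and `vadd` are hypothetical names) tests additive closure on sample vectors from each subset:

```python
# Membership tests for the three subsets of R^4 from Example 1
def in_W1(v):  # a - b = 2
    a, b, c, d = v
    return a - b == 2

def in_W2(v):  # c = a + 2b and d = a - 3b
    a, b, c, d = v
    return c == a + 2 * b and d == a - 3 * b

def in_W3(v):  # a = 0 and b = -d
    a, b, c, d = v
    return a == 0 and b == -d

def vadd(u, v):
    return tuple(x + y for x, y in zip(u, v))

u1, v1 = (6, 4, -1, 0), (3, 1, 7, 2)        # both satisfy a - b = 2
u2, v2 = (1, 2, 5, -5), (2, 0, 2, 2)        # both satisfy W2's equations
u3, v3 = (0, -2, 5, 2), (0, 0.5, 0, -0.5)   # both satisfy W3's equations

assert in_W1(u1) and in_W1(v1) and not in_W1(vadd(u1, v1))  # closure fails
assert in_W2(u2) and in_W2(v2) and in_W2(vadd(u2, v2))      # closure holds
assert in_W3(u3) and in_W3(v3) and in_W3(vadd(u3, v3))      # closure holds
```

W1 fails closure because its defining constant 2 doubles under addition, while the homogeneous conditions defining W2 and W3 are preserved by sums.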

Example 2: Which of the following subsets are subspaces of P2? The set of all polynomials of
the form (a) a2t2 + a1t + a0 where a0 = 0.
(b) a2t2 + a1t + a0 where a0 = 2.
(c) a2t2 + a1t + a0 where a0 = a2 + a1.

Solution: (a) We are given the set of all polynomials of the form a2t2 + a1t + a0 where a0 = 0.
If we let W1 be this set, then 6t2 – t ∈ W1 where a2 = 6 and a1 = –1 and there is no
constant term. Also, –4t2 + 0.5t ∈ W1 but 9t2 + t – 4 ∉ W1. Why?
Now that we know the elements of W1, let us verify if W1 is a subspace of P2.
(i) (Additive closure) If 𝓊, 𝜐 ∈ W1, then 𝓊 ⊕ 𝜐 ∈ W1.
Let 𝓊 = a12t2 + a11t and 𝜐 = a22t2 + a21t ∈ W1
where a12, a11, a22, a21 ∈ ℝ and the constant terms are both 0.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = (a12t2 + a11t) ⊕ (a22t2 + a21t)   (by substituting the values of 𝓊 and 𝜐)
      = (a12 + a22)t2 + (a11 + a21)t      (by definition of ⊕)
Since a12, a11, a22, a21 ∈ ℝ, then (a12 + a22), (a11 + a21) ∈ ℝ by the closure property for
addition. Also, observe that the polynomial 𝓊 ⊕ 𝜐 is of degree at most 2 and has no
constant term. Hence, 𝓊 ⊕ 𝜐 ∈ W1.

Next, we look into the second condition and see whether closure property for ∙ holds.
( ∙ i) (Multiplicative Closure) If x ∈ ℝ, 𝓊 ∈W1, then x ∙ 𝓊 ∈W1.
Let 𝓊 = a2t2 + a1t ∈ W1 where 𝑎2 , 𝑎1 ∈ ℝ and constant term is 0.
We observe x ∙ 𝓊, x ∈ ℝ:
x ∙ 𝓊 = x ∙ (a2t2 + a1t) (by substituting the value of 𝓊)
= xa2t2 + xa1t (by definition of ∙ )
Since 𝑎2 , 𝑎1 ∈ ℝ, then xa2, xa1 ∈ ℝ by closure property for multiplication. Also,
observe that the polynomial x ∙ 𝓊 is of degree 2 and there is no constant term.
Hence, x ∙ 𝓊 ∈ W1. Therefore, W1 is a subspace of P2.

(b) We are given the set of all polynomials of the form a2t2 + a1t + a0 where a0 = 2.
If we let W2 be this set, then 6t2 – t + 2 ∈ W2 where a2 = 6, a1 = –1 and a0 = 2. Also,
–4t2 + 0.5t + 2 ∈ W2 but 9t2 + t – 2 ∉ W2. Why?
The elements of W1 and W2 only differ by the constant term. Now that we know the
elements of W2, let us verify if W2 is a subspace of P2.
(i) (Additive closure) If 𝓊, 𝜐 ∈ W2, then 𝓊 ⊕ 𝜐 ∈ W2.
Let 𝓊 = a12t2 + a11t + 2 and 𝜐 = a22t2 + a21t + 2 ∈ W2
where a12, a11, a22, a21 ∈ ℝ.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = (a12t2 + a11t + 2) ⊕ (a22t2 + a21t + 2)   (by substituting the values of 𝓊 and 𝜐)
      = (a12 + a22)t2 + (a11 + a21)t + (2 + 2)    (by definition of ⊕)
      = (a12 + a22)t2 + (a11 + a21)t + 4          (by simplifying 2 + 2)
Since a12, a11, a22, a21 ∈ ℝ, then (a12 + a22), (a11 + a21) ∈ ℝ by the closure property for
addition. But the constant term of 𝓊 ⊕ 𝜐 is 4 and not 2. Hence, 𝓊 ⊕ 𝜐 ∉ W2. This means
W2 is not a subspace of P2.

For the sake of discussion, we look into the second condition and see whether closure
property for ∙ holds.
( ∙ i) (Multiplicative Closure) If x ∈ ℝ, 𝓊 ∈ W2, then x ∙ 𝓊 ∈ W2.
Let 𝓊 = a2t2 + a1t + 2 ∈ W2 where a2, a1 ∈ ℝ and the constant term is 2.
We observe x ∙ 𝓊, x ∈ ℝ:
x ∙ 𝓊 = x ∙ (a2t2 + a1t + 2)   (by substituting the value of 𝓊)
      = xa2t2 + xa1t + 2x      (by definition of ∙ )
Since a2, a1 ∈ ℝ, then xa2, xa1 ∈ ℝ by the closure property for multiplication. Also,
observe that the constant term of x ∙ 𝓊 is 2x, which equals 2 only when x = 1. Hence,
x ∙ 𝓊 ∉ W2 and therefore, W2 is not a subspace of P2.

(c) We are given the set of all polynomials of the form a2t2 + a1t + a0 where a0 = a2 + a1.
If we let W3 be this set, then 6t2 – t + 5 ∈ W3 where a2 = 6, a1 = –1 and
a0 = a2 + a1 = 6 + –1 = 5. Also, –4t2 + t – 3 ∈ W3 but 9t2 + t – 4 ∉ W3. Why?
Now that we know the elements of W3, let us verify if W3 is a subspace of P2.
(i) (Additive closure) If 𝓊, 𝜐 ∈ W3, then 𝓊 ⊕ 𝜐 ∈ W3.
Let 𝓊 = a12t2 + a11t + a10 and 𝜐 = a22t2 + a21t + a20 ∈ W3
where a12, a11, a22, a21 ∈ ℝ and a10 = a12 + a11 and a20 = a22 + a21.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = (a12t2 + a11t + a10) ⊕ (a22t2 + a21t + a20)   (by substituting the values of 𝓊 and 𝜐)
      = (a12 + a22)t2 + (a11 + a21)t + (a10 + a20)    (by definition of ⊕)

Since a12, a11, a22, a21 ∈ ℝ, then (a12 + a22), (a11 + a21) ∈ ℝ by the closure property for
addition. Also, observe that the polynomial 𝓊 ⊕ 𝜐 is of degree at most 2 and 𝓊 ⊕ 𝜐 ∈ W3
if (a10 + a20) = (a12 + a22) + (a11 + a21) (that is, the constant term must be equal to
the sum of the numerical coefficients of t2 and t).
So, the constant term of 𝓊 ⊕ 𝜐 is (a10 + a20). But we know the values of a10 and a20.
Then, (a10 + a20) = (a12 + a11) + (a22 + a21)   (by substituting the values of a10 and a20)
                 = a12 + (a11 + a22) + a21     (by associative property for addition)
                 = a12 + (a22 + a11) + a21     (by commutative property for addition)
                 = (a12 + a22) + (a11 + a21)   (by associative property for addition)
Hence, 𝓊 ⊕ 𝜐 ∈ W3.

Next, we look into the second condition and see whether closure property for ∙ holds.
( ∙ i) (Multiplicative Closure) If x ∈ ℝ, 𝓊 ∈W3, then x ∙ 𝓊 ∈W3.
Let 𝓊 = a2t2 + a1t + a0 ∈ W3 where 𝑎2 , 𝑎1 ∈ ℝ and a0 = a2 + a1.
We observe x ∙ 𝓊, x ∈ ℝ:
x ∙ 𝓊 = x ∙ (a2t2 + a1t + a0) (by substituting the value of 𝓊)
= xa2t2 + xa1t + xa0 (by definition of ∙ )
Since 𝑎2 , 𝑎1 ∈ ℝ, then xa2, xa1 ∈ ℝ by closure property for multiplication. Also,
observe that the polynomial x ∙ 𝓊 is of degree 2 and x ∙ 𝓊 ∈ W3 if the constant
term is equal to the sum of the numerical coefficients of t2 and t, that is, if
xa0 = xa2 + xa1. But since we know the value of a0, then
xa0 = x(a2 + a1) (by substituting the values of a0)
= xa2 + xa1 (by distributive property for multiplication over addition)
Therefore, x ∙ 𝓊 ∈ W3 and W3 is a subspace of P2.
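Example 2 admits the same kind of numerical check if each polynomial a2t2 + a1t + a0 is represented by its coefficient triple (a2, a1, a0). The sketch below (our own illustration, with hypothetical helper names) tests closure under scalar multiplication, which is exactly where W2 breaks down:

```python
# Polynomials a2*t^2 + a1*t + a0 as coefficient triples (a2, a1, a0)
def smul(x, p):
    return tuple(x * c for c in p)

in_W1 = lambda p: p[2] == 0            # a0 = 0
in_W2 = lambda p: p[2] == 2            # a0 = 2
in_W3 = lambda p: p[2] == p[0] + p[1]  # a0 = a2 + a1

p1, p2, p3 = (6, -1, 0), (6, -1, 2), (6, -1, 5)
assert in_W1(p1) and in_W1(smul(3, p1))      # W1 closed under scalar mult.
assert in_W2(p2) and not in_W2(smul(3, p2))  # W2 fails: constant becomes 6
assert in_W3(p3) and in_W3(smul(3, p3))      # W3 closed under scalar mult.
```

As in Example 1, the subsets defined by homogeneous conditions survive the operations, while the one pinned to a nonzero constant does not.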

Example 3: Which of the following subsets of the vector space M23 are subspaces? The set of all
matrices of the form (a) [a b c; d e f] where b = 3c – 1;
(b) [a 1 0; 0 b c]
(c) [a b c; d e f] where a + d = 0 and c + e + f = 0.
Solution: (a) We are given the set of all matrices of the form [a b c; d e f] where the value of b is
dependent on c.
If we let W1 be this set, then [1 2 1; 0 0 0] ∈ W1. Also, [−1 −1 0; 3 4 5] ∈ W1 but
[0 1 2; 1 4 3] ∉ W1. Why?
Now that we know the elements of W1, let us verify if W1 is a subspace of M23.
(i) (Additive closure) If 𝓊, 𝜐 ∈ W1, then 𝓊 ⊕ 𝜐 ∈ W1.
Let 𝓊 = [a1 b1 c1; d1 e1 f1] and 𝜐 = [a2 b2 c2; d2 e2 f2] ∈ W1
where a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 ∈ ℝ
and b1 = 3c1 – 1 and b2 = 3c2 – 1.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = [a1 b1 c1; d1 e1 f1] ⊕ [a2 b2 c2; d2 e2 f2]   (by substituting the values of 𝓊 and 𝜐)
      = [a1+a2  b1+b2  c1+c2; d1+d2  e1+e2  f1+f2]    (by definition of ⊕)
To show that 𝓊 ⊕ 𝜐 ∈ W1, it must be of the form [a b c; d e f] where b = 3c – 1, that
is, b1 + b2 = 3(c1 + c2) − 1 must hold.
Since b1 = 3c1 – 1 and b2 = 3c2 – 1, then
b1 + b2 = (3c1 – 1) + (3c2 – 1)         (by substituting the values of b1 and b2)
        = (3c1 + (–1)) + (3c2 + (–1))   (by definition of subtraction)
        = 3c1 + (–1 + 3c2) + –1         (by associative property for addition)
        = 3c1 + (3c2 + (–1)) + –1       (by commutative property for addition)
        = (3c1 + 3c2) + ((–1) + (–1))   (by associative property for addition)
        = 3(c1 + c2) + (–2)             (by factoring out 3 and simplifying (–1) + (–1))
We have seen that b1 + b2 = 3(c1 + c2) − 2 ≠ 3(c1 + c2) − 1. Hence, 𝓊 ⊕ 𝜐 ∉ W1 and W1 is not a
subspace of M23. Can you show whether W1 is closed under ∙ ?

(b) We are given the set of all matrices of the form [a 1 0; 0 b c], that is, there are three
constant entries and a, b, c ∈ ℝ.
If we let W2 be this set, then [1 1 0; 0 0 0] ∈ W2. Also, [−1 1 0; 0 4 5] ∈ W2 but
[0 1 2; 1 4 3] ∉ W2. Why?
Now that we know the elements of W2, let us verify if W2 is a subspace of M23.
(i) (Additive closure) If 𝓊, 𝜐 ∈ W2, then 𝓊 ⊕ 𝜐 ∈ W2.
Let 𝓊 = [a1 b1 c1; d1 e1 f1] and 𝜐 = [a2 b2 c2; d2 e2 f2] ∈ W2
where a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 ∈ ℝ.
Specifically, b1 = b2 = 1 and c1 = c2 = d1 = d2 = 0.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = [a1 b1 c1; d1 e1 f1] ⊕ [a2 b2 c2; d2 e2 f2]   (by substituting the values of 𝓊 and 𝜐)
      = [a1+a2  b1+b2  c1+c2; d1+d2  e1+e2  f1+f2]    (by definition of ⊕)
To show that 𝓊 ⊕ 𝜐 ∈ W2, it must be of the form [a b c; d e f] where b = 1 and
c = d = 0, that is, b1 + b2 = 1 and c1 + c2 = d1 + d2 = 0 must hold.
Since b1 = b2 = 1 and c1 = c2 = d1 = d2 = 0, then
b1 + b2 = 1 + 1 = 2   (by substituting the values of b1 and b2)
Hence, b1 + b2 ≠ 1. (We do have c1 + c2 = d1 + d2 = 0 since c1 = c2 = d1 = d2 = 0.)
Still, 𝓊 ⊕ 𝜐 ∉ W2 and W2 is not a subspace of M23. Can you show whether W2 is closed under ∙ ?

(c) We are given the set of all matrices of the form [a b c; d e f] where a + d = 0 and
c + e + f = 0.

If we let W3 be this set, then [1 1 0; −1 0 0] ∈ W3. Also, [0 −3 −3; 0 −1 4] ∈ W3 but
[0 1 2; 1 4 3] ∉ W3. Why?

Now that we know the elements of W3, let us verify if W3 is a subspace of M23.
(i) (Additive closure) If 𝓊, 𝜐 ∈ W3, then 𝓊 ⊕ 𝜐 ∈ W3.
Let 𝓊 = [a1 b1 c1; d1 e1 f1] and 𝜐 = [a2 b2 c2; d2 e2 f2] ∈ W3
where a1, b1, c1, d1, e1, f1, a2, b2, c2, d2, e2, f2 ∈ ℝ.
Specifically, a1 + d1 = 0, a2 + d2 = 0, c1 + e1 + f1 = 0 and c2 + e2 + f2 = 0.
We observe 𝓊 ⊕ 𝜐:
𝓊 ⊕ 𝜐 = [a1 b1 c1; d1 e1 f1] ⊕ [a2 b2 c2; d2 e2 f2]   (by substituting the values of 𝓊 and 𝜐)
      = [a1+a2  b1+b2  c1+c2; d1+d2  e1+e2  f1+f2]    (by definition of ⊕)
To show that 𝓊 ⊕ 𝜐 ∈ W3, it must be of the form [a b c; d e f] where a + d = 0 and
c + e + f = 0, that is,
(a1 + a2) + (d1 + d2) = 0 and (c1 + c2) + (e1 + e2) + (f1 + f2) = 0.
Since a1 + d1 = 0, a2 + d2 = 0, c1 + e1 + f1 = 0 and c2 + e2 + f2 = 0, then
(a1 + a2) + (d1 + d2) = a1 + (a2 + d1) + d2    (by associative property for addition)
                      = a1 + (d1 + a2) + d2    (by commutative property for addition)
                      = (a1 + d1) + (a2 + d2)  (by associative property for addition)
                      = 0 + 0   (by substituting the values of (a1 + d1) and (a2 + d2))
                      = 0       (by definition of the additive identity).
Also, (c1 + c2) + (e1 + e2) + (f1 + f2)
                      = c1 + (c2 + e1) + (e2 + f1) + f2   (by associative property for addition)
                      = c1 + (e1 + c2) + (f1 + e2) + f2   (by commutative property for addition)
                      = c1 + e1 + (c2 + f1) + e2 + f2     (by associative property for addition)
                      = c1 + e1 + (f1 + c2) + e2 + f2     (by commutative property for addition)
                      = (c1 + e1 + f1) + (c2 + e2 + f2)   (by associative property for addition)
                      = 0 + 0   (by substituting the values of (c1 + e1 + f1) and (c2 + e2 + f2))
                      = 0
Hence, (a1 + a2) + (d1 + d2) = 0 and (c1 + c2) + (e1 + e2) + (f1 + f2) = 0.
Therefore, W3 is closed under ⊕.

Next, we look into the second condition and see whether the closure property for ∙ holds.
( ∙ i) (Multiplicative Closure) If x ∈ ℝ, 𝓊 ∈ W3, then x ∙ 𝓊 ∈ W3.
Let 𝓊 = [a1 b1 c1; d1 e1 f1] ∈ W3
where a1, b1, c1, d1, e1, f1 ∈ ℝ. Specifically, a1 + d1 = 0 and c1 + e1 + f1 = 0. Also,
x ∈ ℝ. We observe x ∙ 𝓊:
x ∙ 𝓊 = x ∙ [a1 b1 c1; d1 e1 f1]     (by substituting the value of 𝓊)
      = [xa1 xb1 xc1; xd1 xe1 xf1]   (by applying scalar multiplication)
To show that x ∙ 𝓊 ∈ W3, it must be of the form [a b c; d e f] where a + d = 0 and
c + e + f = 0, that is, xa1 + xd1 = 0 and xc1 + xe1 + xf1 = 0 must hold.
Since a1 + d1 = 0 and c1 + e1 + f1 = 0, then
xa1 + xd1 = x(a1 + d1)   (by factoring out x)
          = x(0)         (by substituting the value of (a1 + d1))
          = 0
Also, xc1 + xe1 + xf1 = x(c1 + e1 + f1)   (by factoring out x)
                      = x(0)   (by substituting the value of (c1 + e1 + f1))
                      = 0
Hence, xa1 + xd1 = 0 and xc1 + xe1 + xf1 = 0. Therefore, x ∙ 𝓊 ∈ W3 and W3
is a subspace of M23.

Example 4: Consider the homogeneous system Ax = 0v where A is the coefficient matrix of size
m×n and x ∈ ℝ𝑛 . Let W be the subset of ℝ𝑛 consisting of all solutions to the given
system. Since homogeneous systems always have the trivial solution, W is not
empty. Let us verify if W is a subspace of ℝ𝑛 .

Solution: (i) (Additive closure) If 𝓊, 𝜐 ∈ W, then 𝓊  𝜐 ∈ W.


Let 𝓊, 𝜐 ∈ W. Then A𝓊 = A𝜐 = 0v

37 | M o d u l e 4
So A(𝓊  𝜐) = A(𝓊)  A(𝜐) associative property for matrix multiplication
= 0v  0v substituting the values of A(𝓊) and A(𝜐)

= 0v property iv

Hence, 𝓊  𝜐 is a solution to the homogeneous system. Therefore, 𝓊  𝜐 ∈ W.

Next, we look into the second condition and see whether closure property for ∙ holds.
( ∙ i) (Multiplicative Closure) If 𝓊 ∈ W, then x ∙ 𝓊 ∈ W, x ∈ ℝ.
Let 𝓊 ∈ W. Then A𝓊 = 0v
So A(x ∙ 𝓊) = xA(𝓊)   (by scalar multiplication of matrices)
            = x 0v    (by substituting the value of A(𝓊))
            = 0v      (by Theorem (2) on vector spaces)

Hence, x ∙ 𝓊 is a solution to the homogeneous system. Therefore, x ∙ 𝓊 ∈ W. Since
both properties are satisfied, W is a subspace of ℝ𝑛 . W is more commonly
known as the solution space of the homogeneous system, or the null space of A. We
will encounter this subspace again when we discuss bases in the next sections.
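Example 4 can be illustrated concretely. The Python sketch below (our own illustration; `matvec`, `vadd` and `smul` are hypothetical helper names) picks a sample homogeneous system and checks that a sum and a scalar multiple of solutions are again solutions:

```python
# A . x for an m x n matrix A (list of rows) and a vector x
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

def smul(c, u):
    return [c * a for a in u]

# A sample homogeneous system: x1 + x2 - 2*x3 = 0 and 2*x1 - x2 - x3 = 0
A = [[1, 1, -2], [2, -1, -1]]
u = [1, 1, 1]   # a solution
v = [2, 2, 2]   # another solution
assert matvec(A, u) == [0, 0] and matvec(A, v) == [0, 0]

# Closure: u + v and c*u are again solutions, as the proof above predicts
assert matvec(A, vadd(u, v)) == [0, 0]
assert matvec(A, smul(5, u)) == [0, 0]
```

The two assertions at the end are exactly conditions (i) and ( ∙ i) of the subspace theorem, checked on this one system.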

4.3 Spanning Sets and Linear Independence
In this section, we shall study spanning sets and linearly independent sets of vectors.
Procedures for checking whether a given set of vectors spans a vector space and whether it is
linearly independent are discussed here.

Definition: LINEAR COMBINATION

Let v1, v2, …, vk be vectors in a vector space V. A vector v ∈ V is called a linear combination
of v1, v2, …, vk if v = c1v1 + c2v2 + … + ckvk for some real numbers c1, c2, …, ck, k ∈ ℕ.

Example 1: Write the following vectors w1 = (1, 1, 1) and w2 = (–1, 2, 4) as a linear combination
of vectors in S = {v1, v2, v3} where v1 = (1, 2, 3), v2 = (–1, 0, 0) and v3 = (0, 1, 2).

Solution: (a) Let us try to write w1 = (1, 1, 1) in terms of v1, v2, and v3, that is, we need to find real
numbers c1, c2, and c3 such that w1 = c1v1 + c2v2 + c3v3

(1, 1, 1) = c1(1, 2, 3) + c2(–1, 0, 0) + c3(0, 1, 2)

𝑐1 − 𝑐2 =1
or { 2𝑐1 + 𝑐3 = 1
3𝑐1 + 2𝑐3 = 1

Let us solve for the unknowns using the Gauss – Jordan Reduction Method learned
in Module 2.

[1 −1 0 ⋮ 1; 2 0 1 ⋮ 1; 3 0 2 ⋮ 1] ≈ [1 −1 0 ⋮ 1; 0 2 1 ⋮ −1; 0 3 2 ⋮ −2]
≈ [1 −1 0 ⋮ 1; 0 1 1/2 ⋮ −1/2; 0 3 2 ⋮ −2] ≈ [1 0 1/2 ⋮ 1/2; 0 1 1/2 ⋮ −1/2; 0 0 1/2 ⋮ −1/2]
≈ [1 0 1/2 ⋮ 1/2; 0 1 1/2 ⋮ −1/2; 0 0 1 ⋮ −1] ≈ [1 0 0 ⋮ 1; 0 1 0 ⋮ 0; 0 0 1 ⋮ −1]
(rows of each augmented matrix are separated by semicolons)

Hence, c1 = 1, c2 = 0 and c3 = –1, so that (1, 1, 1) = (1, 2, 3) – (0, 1, 2). Therefore,
w1 is a linear combination of the vectors in S.

(b) This time, let us try to write w2 = (–1, 2, 4) in terms of v1, v2, and v3, by finding real
numbers c1, c2, and c3 such that w2 = c1v1 + c2v2 + c3v3

(–1, 2, 4) = c1(1, 2, 3) + c2(–1, 0, 0) + c3(0, 1, 2)

𝑐1 − 𝑐2 = −1
or { 2𝑐1 + 𝑐3 = 2
3𝑐1 + 2𝑐3 = 4

Using the Gauss – Jordan Reduction Method, we solve for c1, c2 and c3.

[1 −1 0 ⋮ −1; 2 0 1 ⋮ 2; 3 0 2 ⋮ 4] ≈ [1 −1 0 ⋮ −1; 0 2 1 ⋮ 4; 0 3 2 ⋮ 7]
≈ [1 −1 0 ⋮ −1; 0 1 1/2 ⋮ 2; 0 3 2 ⋮ 7] ≈ [1 0 1/2 ⋮ 1; 0 1 1/2 ⋮ 2; 0 0 1/2 ⋮ 1]
≈ [1 0 1/2 ⋮ 1; 0 1 1/2 ⋮ 2; 0 0 1 ⋮ 2] ≈ [1 0 0 ⋮ 0; 0 1 0 ⋮ 1; 0 0 1 ⋮ 2]

Hence, c1 = 0, c2 = 1 and c3 = 2, so that (–1, 2, 4) = (–1, 0, 0) + 2(0, 1, 2). Therefore,
w2 is also a linear combination of the vectors in S.
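The coefficient-finding step in both parts is just solving a linear system, so it can be automated. Below is a minimal Python sketch (our own illustration; `solve3` is a hypothetical helper, and exact Fraction arithmetic is used so no rounding occurs) that runs Gauss–Jordan reduction on the 3×4 augmented matrix from part (a):

```python
from fractions import Fraction

def solve3(aug):
    """Gauss-Jordan elimination on a 3x4 augmented matrix.
    Returns [c1, c2, c3]; assumes a unique solution exists."""
    M = [[Fraction(x) for x in row] for row in aug]
    n = 3
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)  # first usable pivot row
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [x / piv for x in M[i]]                     # scale pivot row to leading 1
        for r in range(n):
            if r != i and M[r][i] != 0:
                f = M[r][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]  # eliminate column i
    return [row[n] for row in M]

# w1 = c1*(1, 2, 3) + c2*(-1, 0, 0) + c3*(0, 1, 2); columns of aug are v1, v2, v3
aug = [[1, -1, 0, 1],
       [2,  0, 1, 1],
       [3,  0, 2, 1]]
assert solve3(aug) == [1, 0, -1]   # c1 = 1, c2 = 0, c3 = -1, matching part (a)
```

Applied to the augmented matrix of part (b), `[[1, -1, 0, -1], [2, 0, 1, 2], [3, 0, 2, 4]]`, the same solver returns c1 = 0, c2 = 1, c3 = 2.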

Example 2: Write the following vector w = –3x2 + 15x + 18 as a linear combination of vectors in
S = {v1, v2, v3} where v1 = 2x2 + 7, v2 = 2x2 + 4x + 5 and v3 = 2x2 – 12x + 13.

Solution: (a) Let us try to write w = –3x2 + 15x + 18 in terms of v1, v2, and v3, that is, we need to
find real numbers c1, c2, and c3 such that w = c1v1 + c2v2 + c3v3, that is,

–3x2 + 15x + 18 = c1(2x2 + 7) + c2(2x2 + 4x + 5) + c3(2x2 – 12x + 13)

2𝑐1 + 2𝑐2 + 2𝑐3 = −3


or { 4𝑐2 − 12𝑐3 = 15
7𝑐1 + 5𝑐2 + 13𝑐3 = 18

Using the Gauss–Jordan Reduction Method, we solve for c1, c2 and c3.

[2 2 2 ⋮ −3; 0 4 −12 ⋮ 15; 7 5 13 ⋮ 18] ≈ [1 1 1 ⋮ −3/2; 0 4 −12 ⋮ 15; 7 5 13 ⋮ 18]
≈ [1 1 1 ⋮ −3/2; 0 4 −12 ⋮ 15; 0 −2 6 ⋮ 57/2] ≈ [1 1 1 ⋮ −3/2; 0 −2 6 ⋮ 57/2; 0 4 −12 ⋮ 15]
≈ [1 1 1 ⋮ −3/2; 0 1 −3 ⋮ −57/4; 0 4 −12 ⋮ 15] ≈ [1 0 4 ⋮ 51/4; 0 1 −3 ⋮ −57/4; 0 0 0 ⋮ 72]

Observe that the last row of the reduced matrix reads 0 = 72, so the system is inconsistent. This
means that there are no real numbers c1, c2 and c3 such that w = c1v1 + c2v2 + c3v3. Therefore, w
is not a linear combination of the vectors in S.

Definition: SPANNING SET

If S = {v1, v2, …, vk} is a set of vectors in a vector space V, then the set of all vectors in V
that are linear combinations of the vectors in S is called the span of S, denoted by span S
or span {v1, v2, …, vk}.

Example 1: Given the matrices A = [2 −3; 4 1] and B = [0 5; 1 −2] in M22, determine which of
the matrices below belongs to span {A, B}:

(a) [6 −19; 10 7]       (b) [6 2; 9 11]

Solution: (a) Let us try to write [6 −19; 10 7] in terms of A and B, that is, we need to find real
numbers c1 and c2 such that [6 −19; 10 7] = c1A + c2B, that is,

[6 −19; 10 7] = c1[2 −3; 4 1] + c2[0 5; 1 −2]

     2c1 = 6
or { −3c1 + 5c2 = −19
     4c1 + c2 = 10
     c1 − 2c2 = 7

Using the Gauss–Jordan Reduction Method, we solve for c1 and c2.

[2 0 ⋮ 6; −3 5 ⋮ −19; 4 1 ⋮ 10; 1 −2 ⋮ 7] ≈ [1 0 ⋮ 3; −3 5 ⋮ −19; 4 1 ⋮ 10; 1 −2 ⋮ 7]
≈ [1 0 ⋮ 3; 0 5 ⋮ −10; 0 1 ⋮ −2; 0 −2 ⋮ 4] ≈ [1 0 ⋮ 3; 0 1 ⋮ −2; 0 5 ⋮ −10; 0 −2 ⋮ 4]
≈ [1 0 ⋮ 3; 0 1 ⋮ −2; 0 0 ⋮ 0; 0 0 ⋮ 0]

Hence, c1 = 3 and c2 = –2. Therefore, [6 −19; 10 7] is a linear combination of A and B,
and [6 −19; 10 7] ∈ span {A, B}.

(b) Let us try to write [6 2; 9 11] in terms of A and B, that is, we need to find real numbers c1
and c2 such that [6 2; 9 11] = c1A + c2B, that is, [6 2; 9 11] = c1[2 −3; 4 1] + c2[0 5; 1 −2]

     2c1 = 6
or { −3c1 + 5c2 = 2
     4c1 + c2 = 9
     c1 − 2c2 = 11

Using the Gauss–Jordan Reduction Method, we solve for c1 and c2.

[2 0 ⋮ 6; −3 5 ⋮ 2; 4 1 ⋮ 9; 1 −2 ⋮ 11] ≈ [1 0 ⋮ 3; −3 5 ⋮ 2; 4 1 ⋮ 9; 1 −2 ⋮ 11]
≈ [1 0 ⋮ 3; 0 5 ⋮ 11; 0 1 ⋮ −3; 0 −2 ⋮ 8] ≈ [1 0 ⋮ 3; 0 1 ⋮ −3; 0 5 ⋮ 11; 0 −2 ⋮ 8]
≈ [1 0 ⋮ 3; 0 1 ⋮ −3; 0 0 ⋮ 26; 0 0 ⋮ 2]

Observe that the reduced matrix contains the row 0 = 26, so the system is inconsistent. This
means that there are no real numbers c1 and c2 such that [6 2; 9 11] = c1[2 −3; 4 1] + c2[0 5; 1 −2].
Therefore, [6 2; 9 11] ∉ span {A, B}.

Theorem: Let S = {v1, v2, …, vk} be a set of vectors in a vector space V. Then span S is a subspace
of V.

Proof: Given: S = {v1, v2, …, vk} ⊆ V.

NTS: span S is a subspace of V.

Since we want to show that span S is a subspace of V, we need to establish two


properties: (i) (Additive closure) and ( ∙ i) (Multiplicative Closure).

(i) (Additive closure): Suppose w1 and w2 ∈ span S. Then, w1 = c1v1 + c2v2 + … + ckvk and

w2 = d1v1 + d2v2 + … + dkvk for some real numbers ci and di, 1 ≤ i ≤ k, k ∈ ℕ. So that

w1 ⊕ w2 = (c1v1 + c2v2 + … + ckvk) ⊕ (d1v1 + d2v2 + … + dkvk)   (by substituting the values
of w1 and w2)

Note that v1, v2, …, vk ∈ V. Then, using the commutative and associative properties for the
vector space V and for ℝ, we have: w1 ⊕ w2 = (c1+d1)v1 + (c2+d2)v2 + … + (ck+dk)vk. But
since the ci’s and di’s are real numbers, 1 ≤ i ≤ k, k ∈ ℕ, we let ei = (ci+di) ∈ ℝ by the closure
property for addition. Hence,

w1 ⊕ w2 = (c1+d1)v1 + (c2+d2)v2 + … + (ck+dk)vk

        = e1v1 + e2v2 + … + ekvk,  ei’s ∈ ℝ, 1 ≤ i ≤ k, k ∈ ℕ.

Therefore, w1 ⊕ w2 can be written as a linear combination of the vectors in S. This implies

that w1 ⊕ w2 ∈ span S.

( ∙ i) (Multiplicative Closure): Suppose w1 ∈ span S and x ∈ ℝ. Then,

w1 = c1v1 + c2v2 + … + ckvk for some real numbers ci, 1 ≤ i ≤ k, k ∈ ℕ and

x ∙ w1 = x ∙ (c1v1 + c2v2 + … + ckvk)

Again, since v1, v2, …, vk ∈ V, then we can use vector space property ( ∙ iv) (Multiplicative
Associativity) together with distributivity, so that

x ∙ w1 = x ∙ (c1v1 + c2v2 + … + ckvk) = x(c1v1) + x(c2v2) + … + x(ckvk)

       = (xc1)v1 + (xc2)v2 + … + (xck)vk

Now, we let yi = (xci). Since x, ci ∈ ℝ, 1 ≤ i ≤ k, k ∈ ℕ, then by the closure property for
multiplication yi ∈ ℝ, 1 ≤ i ≤ k, k ∈ ℕ. Hence,

x ∙ w1 = x ∙ (c1v1 + c2v2 + … + ckvk)

       = (xc1)v1 + (xc2)v2 + … + (xck)vk

       = y1v1 + y2v2 + … + ykvk,  yi ∈ ℝ

Therefore, x ∙ w1 can be written as a linear combination of the vectors in S. This implies

that x ∙ w1 ∈ span S. Since both properties are satisfied, span S is a subspace of V.

Example 2: Consider the set S of vectors given by S = {(1,0, 0), (0, 0, 1)}. Then span S is the set
of all vectors in ℝ3 of the form

a(1,0, 0) + b(0, 0, 1), where a, b ∈ ℝ

That is, span S is the subspace of ℝ3 consisting of all vectors of the form (a, 0, b), a, b ∈ ℝ.

REMARK: If every vector in V is a linear combination of the vectors in S = {v1, v2, …, vk} ⊆ V,
then S is said to span V. We write span S = V. We say S spans V or {v1, v2, …, vk}
spans V. Similarly, we say “V is spanned by S.”

Example 3: Consider the set S of polynomials in P3 given by S = {x3, x2, x, 1}. Then span S is the
set of all polynomials of the form ax3 + bx2 + cx + d where a, b, c, d ∈ ℝ.
So span S is the whole of P3, that is, span S = P3.

How do we know if a set of vectors spans the whole vector space? Below is the procedure to check
if the set of vectors S = {v1, v2, …, vk} ⊆ V spans the vector space V.

1. Choose an arbitrary vector v ∈ V.

2. Write v as a linear combination of v1, v2, …, vk. If there exist c1, c2, …, ck ∈ ℝ such that
v = c1v1 + c2v2 + … + ckvk for every choice of v, then S spans V. Otherwise, S only spans a
subspace of V.

Example 1: Determine whether the set S of vectors in ℝ2 given by S = {(2, 1), (–1, 2)} span ℝ2 .

Solution: Step 1: Choose an arbitrary vector v = (a, b) ∈ ℝ2 .

Step 2: Write v as a linear combination of the vectors in S.

v = (a, b) = c1(2, 1) + c2(–1, 2)

{ 2c1 − c2 = a
{ c1 + 2c2 = b

or [2 −1 ⋮ a; 1 2 ⋮ b] ≈ [1 2 ⋮ b; 2 −1 ⋮ a] ≈ [1 2 ⋮ b; 0 −5 ⋮ a − 2b]
≈ [1 2 ⋮ b; 0 1 ⋮ (2b − a)/5] ≈ [1 0 ⋮ (b + 2a)/5; 0 1 ⋮ (2b − a)/5]

Therefore c1 = (b + 2a)/5 and c2 = (2b − a)/5 exist for every (a, b). Hence, S spans ℝ2 .
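The closed-form coefficients just derived can be sanity-checked for several choices of (a, b). The sketch below (our own illustration; `coeffs` is a hypothetical helper) uses exact Fraction arithmetic to confirm that c1(2, 1) + c2(−1, 2) reconstructs (a, b):

```python
from fractions import Fraction

# Coefficients from the worked solution: c1 = (b + 2a)/5, c2 = (2b - a)/5
def coeffs(a, b):
    c1 = Fraction(b + 2 * a, 5)
    c2 = Fraction(2 * b - a, 5)
    return c1, c2

for a, b in [(1, 0), (0, 1), (3, -7), (12, 5)]:
    c1, c2 = coeffs(a, b)
    # c1*(2, 1) + c2*(-1, 2) should equal (a, b) componentwise
    assert (2 * c1 - c2, c1 + 2 * c2) == (a, b)
```

Because the coefficients exist for every (a, b), the loop passes for any sample list, which mirrors the spanning argument above.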

Example 2: Determine whether the set S of vectors in ℝ3 given by S = {(–2, 5, 0), (4, 6, 3)} span
ℝ3 .

Solution: Step 1: Choose an arbitrary vector v = (a, b, c) ∈ ℝ3 .

Step 2: Write v as a linear combination of the vectors in S.

v = (a, b, c) = c1(–2, 5, 0) + c2(4, 6, 3)

−2c1 + 4c2 = a
5c1 + 6c2 = b
3c2 = c

or

[−2 4 ⋮ a]    [1 −2 ⋮ −a/2]    [1 −2 ⋮ −a/2    ]    [1 −2 ⋮ −a/2    ]    [1 −2 ⋮ −a/2             ]
[ 5 6 ⋮ b] ≈ [5  6 ⋮  b  ] ≈ [0 16 ⋮ b + 5a/2] ≈ [0  1 ⋮ c/3     ] ≈ [0  1 ⋮ c/3              ]
[ 0 3 ⋮ c]    [0  3 ⋮  c  ]    [0  3 ⋮  c      ]    [0 16 ⋮ b + 5a/2]    [0  0 ⋮ b + 5a/2 − 16c/3]

The last row shows that the system is consistent only when b + 5a/2 − 16c/3 = 0, that is, only for
vectors (a, b, c) satisfying 15a + 6b − 32c = 0. For a general vector v = (a, b, c) we therefore
cannot find real numbers c1 and c2 such that v = c1(–2, 5, 0) + c2(4, 6, 3). Hence, S does not
span the whole of ℝ3 .
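The conclusion can also be seen from ranks: two vectors span at most a two-dimensional subspace of ℝ3. A small check using numpy (a sketch; the specific unreachable vector below was chosen for illustration):

```python
import numpy as np

# Columns are the vectors of S = {(-2, 5, 0), (4, 6, 3)}.
A = np.array([[-2.0, 4.0],
              [ 5.0, 6.0],
              [ 0.0, 3.0]])
print(np.linalg.matrix_rank(A))   # 2: S spans only a plane in R^3

# v = (1, 0, 0) is not in that plane; least squares leaves a nonzero residual,
# so the system c1*(-2,5,0) + c2*(4,6,3) = v has no exact solution.
v = np.array([1.0, 0.0, 0.0])
_, residual, *_ = np.linalg.lstsq(A, v, rcond=None)
print(residual)                   # nonzero: the system is inconsistent
```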

Example 3: Determine whether the set S of polynomials given by S = {–x + 2, 2x – 4} spans P1.

Solution: Step 1: Choose an arbitrary vector v = ax + b ∈ P1, a, b ∈ ℝ.

Step 2: Write v as a linear combination of the vectors in S.

v = ax + b = c1(–x + 2) + c2(2x – 4)

−c1 + 2c2 = a
2c1 − 4c2 = b

or

[−1  2 ⋮ a]    [1 −2 ⋮ −a]    [1 −2 ⋮ −a    ]
[ 2 −4 ⋮ b] ≈ [2 −4 ⋮  b] ≈ [0  0 ⋮ 2a + b]

Observe that the last row shows the system is inconsistent whenever 2a + b ≠ 0. For a general
polynomial ax + b we therefore cannot find real numbers c1 and c2 such that v = c1(–x + 2) + c2(2x – 4).
Hence, S does not span the whole of P1.
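Polynomials in P1 can be identified with their coefficient vectors, so the same rank test used for ℝn applies here as well (a sketch; ax + b is encoded as the pair (a, b)):

```python
import numpy as np

# -x + 2 -> (-1, 2) and 2x - 4 -> (2, -4); the second column is -2 times
# the first, so the two polynomials span only a line inside P1.
A = np.column_stack([(-1.0, 2.0), (2.0, -4.0)])
print(np.linalg.matrix_rank(A))   # 1, which is less than dim P1 = 2
```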

Example 4: Determine whether the set S of polynomials spans P3, where

S = {x2 + x, x3 – x + 1, –2x3, 8x2 – 4}.

Solution: Step 1: Choose an arbitrary vector v = ax3 + bx2 + cx + d ∈ P3, a, b, c, d ∈ ℝ.

Step 2: Write v as a linear combination of the vectors in S.

v = ax3 + bx2 + cx + d = c1(x2 + x) + c2(x3 – x + 1) + c3(–2x3)+ c4(8x2 – 4)

Matching the coefficients of x3, x2, x, and 1 gives the system

c2 − 2c3 = a
c1 + 8c4 = b
c1 − c2 = c
c2 − 4c4 = d

or

[0  1 −2  0 ⋮ a]    [1  0  0  8 ⋮ b]    [1 0  0  8 ⋮ b        ]
[1  0  0  8 ⋮ b]    [0  1 −2  0 ⋮ a]    [0 1 −2  0 ⋮ a        ]
[1 −1  0  0 ⋮ c] ≈ [1 −1  0  0 ⋮ c] ≈ [0 0 −2 −8 ⋮ a − b + c] ≈
[0  1  0 −4 ⋮ d]    [0  1  0 −4 ⋮ d]    [0 0  2 −4 ⋮ d − a    ]

[1 0  0 8 ⋮ b             ]    [1 0 0 0 ⋮ (b + 2c + 2d)/3     ]
[0 1 −2 0 ⋮ a             ]    [0 1 0 0 ⋮ (b − c + 2d)/3      ]
[0 0  1 4 ⋮ (b − a − c)/2 ] ≈ [0 0 1 0 ⋮ (−3a + b − c + 2d)/6]
[0 0  0 1 ⋮ (b − c − d)/12]    [0 0 0 1 ⋮ (b − c − d)/12      ]

Therefore, c1 = (b + 2c + 2d)/3, c2 = (b − c + 2d)/3, c3 = (−3a + b − c + 2d)/6, and
c4 = (b − c − d)/12. These values exist for every choice of a, b, c, d ∈ ℝ. Hence, S spans
P3.
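The same computation can be carried out symbolically. The sketch below, using sympy, matches coefficients and solves for c1, …, c4, confirming that a solution exists for every choice of a, b, c, d:

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')

target = a*x**3 + b*x**2 + c*x + d
combo = (c1*(x**2 + x) + c2*(x**3 - x + 1)
         + c3*(-2*x**3) + c4*(8*x**2 - 4))

# Equate the coefficients of 1, x, x^2, x^3 and solve the linear system.
diff = sp.expand(combo - target)
eqs = [diff.coeff(x, k) for k in range(4)]
sol = sp.solve(eqs, [c1, c2, c3, c4])
print(sol)  # a unique solution in terms of a, b, c, d, so S spans P3

# Substituting the solution back reproduces the target polynomial exactly.
print(sp.expand(combo.subs(sol) - target))  # 0
```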

DEFINITION: LINEAR INDEPENDENCE

The vectors v1, v2, …, vk in a vector space V are said to be linearly dependent if there exist
constants c1, c2, …, ck, not all zero, such that c1v1 + c2v2 + … + ckvk = 0 (the zero vector of V).
Otherwise, v1, v2, …, vk are said to be linearly independent. That is, v1, v2, …, vk are linearly
independent if whenever c1v1 + c2v2 + … + ckvk = 0, then c1 = c2 = … = ck = 0.

How do we know whether a set of vectors is linearly independent? Below is the procedure to check
the linear dependence or independence of the set of vectors S = {v1, v2, …, vk}.

1. Write the zero vector as a linear combination of the given set of vectors, producing a
homogeneous system.
2. Use the Gauss – Jordan Elimination method to determine whether the system has a unique
solution. If the homogeneous system has only the trivial solution, then the given vectors
are linearly independent; if it has a nontrivial solution, then the given vectors are linearly
dependent.
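For vectors in ℝn, this procedure also reduces to a rank computation: the homogeneous system Ac = 0, where the columns of A are the given vectors, has only the trivial solution exactly when rank(A) equals the number of vectors. A minimal sketch in Python using numpy (an equivalent shortcut, not part of the procedure as stated):

```python
import numpy as np

def independent(vectors):
    """True when the only solution of c1*v1 + ... + ck*vk = 0 is the
    trivial one, i.e. the matrix whose columns are the vectors has full
    column rank."""
    A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return bool(np.linalg.matrix_rank(A) == len(vectors))

print(independent([(1, -4, 1), (6, 3, 2)]))  # True
print(independent([(0, 0), (1, -1)]))        # False: contains the zero vector
```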

Example 1: Determine whether the set S is linearly independent or linearly dependent:

(a) S = {(0, 0), (1, – 1)}


(b) S = {(1, – 4, 1), (6, 3, 2)}
(c) S = { [1 −1]   [ 4 3]   [ 1 −8]
          [4  5] , [−2 3] , [22 23] }
(d) S = {–x +2, –x2 + 2x, x2 –5x + 6}
(e) S = {x2 + 3x + 1, 2x2 + x – 1, 4x}

Solution: (a) Step 1: (0, 0) = c1(0, 0) + c2(1, – 1)

Step 2: c2 = 0 and −c2 = 0, or

[0  1 ⋮ 0]    [0 1 ⋮ 0]
[0 −1 ⋮ 0] ≈ [0 0 ⋮ 0]

The resulting matrix in reduced row echelon form shows that c2 = 0 while c1 is free, so the
system has infinitely many solutions. One solution, aside from the trivial one, is c1 = 1 and
c2 = 0. Hence, S is a linearly dependent set of vectors. (Indeed, any set containing the zero
vector is linearly dependent.)

(b) Step 1: (0, 0, 0) = c1(1, – 4, 1) + c2(6, 3, 2)

Step 2: c1 + 6c2 = 0
        −4c1 + 3c2 = 0
        c1 + 2c2 = 0

or

[ 1 6 ⋮ 0]    [1  6 ⋮ 0]    [1  6 ⋮ 0]    [1 0 ⋮ 0]
[−4 3 ⋮ 0] ≈ [0 27 ⋮ 0] ≈ [0  1 ⋮ 0] ≈ [0 1 ⋮ 0]
[ 1 2 ⋮ 0]    [0 −4 ⋮ 0]    [0 −4 ⋮ 0]    [0 0 ⋮ 0]
Therefore, c1 = c2 = 0, the trivial solution, is the only solution to the given system.
Hence, the vectors in S are linearly independent.

(c) Step 1:
[0 0]        [1 −1]        [ 4 3]        [ 1 −8]
[0 0] = c1 [4  5] + c2 [−2 3] + c3 [22 23]

Step 2: c1 + 4c2 + c3 = 0
        −c1 + 3c2 − 8c3 = 0
        4c1 − 2c2 + 22c3 = 0
        5c1 + 3c2 + 23c3 = 0

or

[ 1  4  1 ⋮ 0]    [1   4  1 ⋮ 0]    [1   4  1 ⋮ 0]    [1 0  5 ⋮ 0]    [1 0 0 ⋮ 0]
[−1  3 −8 ⋮ 0]    [0   7 −7 ⋮ 0]    [0   1 −1 ⋮ 0]    [0 1 −1 ⋮ 0]    [0 1 0 ⋮ 0]
[ 4 −2 22 ⋮ 0] ≈ [0 −18 18 ⋮ 0] ≈ [0 −18 18 ⋮ 0] ≈ [0 0  0 ⋮ 0] ≈ [0 0 1 ⋮ 0]
[ 5  3 23 ⋮ 0]    [0 −17 18 ⋮ 0]    [0 −17 18 ⋮ 0]    [0 0  1 ⋮ 0]    [0 0 0 ⋮ 0]
Therefore, c1 = c2 = c3 = 0, the trivial solution, is the only solution to the given system.
Hence, the vectors in S are linearly independent.

(d) Step 1: 0 = c1(–x +2) + c2(–x2 + 2x) + c3 (x2 –5x + 6)

Step 2: −c2 + c3 = 0
        −c1 + 2c2 − 5c3 = 0
        2c1 + 6c3 = 0

or

[ 0 −1  1 ⋮ 0]    [ 2  0  6 ⋮ 0]    [ 1  0  3 ⋮ 0]    [1  0  3 ⋮ 0]    [1 0  3 ⋮ 0]
[−1  2 −5 ⋮ 0] ≈ [−1  2 −5 ⋮ 0] ≈ [−1  2 −5 ⋮ 0] ≈ [0  2 −2 ⋮ 0] ≈ [0 1 −1 ⋮ 0]
[ 2  0  6 ⋮ 0]    [ 0 −1  1 ⋮ 0]    [ 0 −1  1 ⋮ 0]    [0 −1  1 ⋮ 0]    [0 0  0 ⋮ 0]

Therefore, c1 + 3c3 = 0 and c2 − c3 = 0, that is, c1 = −3c3 and c2 = c3, where c3 ∈ ℝ.

Hence, the homogeneous system has nontrivial solutions in addition to the trivial one, so the
vectors in S are linearly dependent.
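A nontrivial solution can be confirmed directly: taking c3 = 1 gives c1 = −3 and c2 = 1, and the resulting combination really is the zero polynomial (a quick sympy check):

```python
import sympy as sp

x = sp.symbols('x')
# c1 = -3, c2 = 1, c3 = 1 is a nontrivial solution of the homogeneous system.
combo = -3*(-x + 2) + (-x**2 + 2*x) + (x**2 - 5*x + 6)
print(sp.expand(combo))   # 0: a nontrivial dependence relation
```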

(e) Step 1: 0 = c1(x2 + 3x + 1) + c2(2x2 + x – 1) + c3 (4x)

Step 2: c1 + 2c2 = 0
        3c1 + c2 + 4c3 = 0
        c1 − c2 = 0

or

[1  2 0 ⋮ 0]    [1  2 0 ⋮ 0]    [1  2 0 ⋮ 0]    [1  2 0 ⋮ 0]
[3  1 4 ⋮ 0] ≈ [0 −5 4 ⋮ 0] ≈ [0 −3 0 ⋮ 0] ≈ [0  1 0 ⋮ 0] ≈
[1 −1 0 ⋮ 0]    [0 −3 0 ⋮ 0]    [0 −5 4 ⋮ 0]    [0 −5 4 ⋮ 0]

[1 0 0 ⋮ 0]    [1 0 0 ⋮ 0]
[0 1 0 ⋮ 0] ≈ [0 1 0 ⋮ 0]
[0 0 4 ⋮ 0]    [0 0 1 ⋮ 0]
Therefore, the only solution to the homogeneous system is c1 = c2 = c3 = 0. Hence,
the vectors in S are linearly independent.

The following theorem is helpful in identifying linearly dependent vectors. See the required
textbook (page 259) for the proof of this theorem.

Theorem: The nonzero vectors v1, v2, …, vn in a vector space V are linearly dependent if and
only if one of the vectors vj, j ≥ 2, is a linear combination of the preceding vectors
v1, v2, …, vj–1.

Example: Let us use the results of the previous example. Among the five sets, two contain
linearly dependent vectors. One of them, set (a), contains the zero vector, so its
dependence is immediate. It remains to show directly that the other set contains
linearly dependent vectors.

(d) S = {–x +2, –x2 + 2x, x2 –5x + 6}


Let us write x2 –5x + 6 as a linear combination of –x +2 and –x2 + 2x:
x2 –5x + 6 = c1(–x +2) + c2(–x2 + 2x)

−c2 = 1
−c1 + 2c2 = −5        or        c1 = 3 and c2 = −1
2c1 = 6

Hence, x2 – 5x + 6 = 3(–x + 2) – (–x2 + 2x) is a linear combination of (–x + 2) and (–x2 + 2x).
We can say that x2 – 5x + 6 depends on (–x + 2) and (–x2 + 2x), so the vectors in S are linearly
dependent.

4.4 Basis and Dimension
In this section, we continue with the concepts of spanning sets and linearly
independent sets of vectors. The natural basis, or standard basis, will be discussed here. Theorems
will be introduced that provide means of forming a basis for a vector space from a given spanning
set or from a given linearly independent set of vectors. A basis is a smallest set of
vectors in a vector space V that completely describes V.
Here, we will deal only with bases consisting of a finite number of vectors.

DEFINITION: BASIS
The vectors v1, v2, …, vk in a vector space V are said to form a basis for V if
(i) v1, v2, …, vk span V; and
(ii) v1, v2, …, vk are linearly independent.

Remarks:
1. If v1, v2, …, vk form a basis for a vector space V, then they must be (a) distinct and (b)
nonzero.
2. The definition tells us that a basis S has two properties: (a) S must have enough vectors to
span the vector space (V = span S), but (b) not so many vectors that one of them can be written
as a linear combination of the other vectors in S (i.e., S is a linearly independent set
of vectors).

Example 1: Let S = {e1, e2} be vectors in ℝ2 where e1 = (1, 0) and e2 = (0, 1). Then S is a basis
for ℝ2 . To show this, we use the procedures for spanning sets (i) and linear
independence (ii).
Solution: (i) Step 1: Take arbitrary element v ∈ ℝ2 , say v = (a, b) where a, b ∈ ℝ and write it as a
linear combination of e1 and e2, i,e,
v = c1e1 + c2e2 or (a, b) = c1(1, 0) + c2(0, 1)
Step 2: Solve for c1 and c2: by addition of vectors, we have c1 = a and c2 = b. Since
a, b ∈ ℝ, then v is a linear combination of e1 and e2 and that S spans ℝ2 .

(ii) Step 1: Write the zero vector as a linear combination of e1 and e2, i.e.,
(0, 0) = c1e1 + c2e2 or (0, 0) = c1(1, 0) + c2(0, 1)
Step 2: Solve for c1 and c2 and identify if the trivial solution is obtained. By addition
of vectors, we have c1 = 0 and c2 = 0. Since we only have the trivial solution,
then e1 and e2 are linearly independent.
Since we have shown that S spans ℝ2 and S consists of linearly independent
vectors, then, S is a basis for ℝ2 . Such vectors e1 and e2 are called the standard basis or
the natural basis for ℝ2 .

For ℝ3 , the standard basis is given by S = {e1, e2, e3} where e1 = (1, 0, 0), e2 = (0, 1, 0) and
e3 = (0, 0, 1). Some books use the letters i, j, and k for e1, e2 and e3, respectively. In general,
the standard basis for ℝn is given by S = {e1, e2, …, en} where
e1 = (1, 0, 0, …, 0)
e2 = (0, 1, 0, …, 0)
⋮
en−1 = (0, 0, …, 0, 1, 0)
en = (0, 0, …, 0, 0, 1)
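Combining the two requirements of the definition, k vectors in ℝn form a basis exactly when k = n and the matrix with the vectors as columns is invertible. A minimal sketch using numpy; the determinant test is one standard way to check invertibility, not the module's stated procedure:

```python
import numpy as np

def is_basis(vectors, n):
    """True when the vectors both span R^n and are linearly independent,
    i.e. there are exactly n of them and their matrix is invertible."""
    if len(vectors) != n:
        return False
    A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return bool(abs(np.linalg.det(A)) > 1e-12)

print(is_basis([(1, 0), (0, 1)], 2))   # True: the standard basis of R^2
print(is_basis([(1, 2), (2, 4)], 2))   # False: the vectors are dependent
```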

Example 2: Recall Example 3 from the discussion of spanning sets: S = {x3, x2, x, 1} is a set of
vectors in P3. Then S is a basis for P3.
