Notes_Unit-II_Vector Space (1)
(COMP)
Vector Space
1.1 Introduction
Linear algebra is the study of a certain algebraic structure called a vector space. A good
argument could be made that linear algebra is the most useful subject in all of mathematics and
that it exceeds even courses like calculus in its significance. It is used extensively in applied
mathematics and engineering. It is also fundamental in pure mathematics areas like number
theory, functional analysis, geometric measure theory, and differential geometry. Even calculus
cannot be correctly understood without it. For example, the derivative of a function of many
variables is an example of a linear transformation, and this is the way it must be understood as
soon as you consider functions of more than one variable. It is difficult to think a mathematical
tool with more applications than vector spaces of Linear Algebra as we may sum forces, control
devices, model complex systems,denoise images etc. They underlie all these processes and it is
thank to them as we can ”nicely” operate with vectors. These are the mathematical structures that
generalizes many other useful structures.
Definition : Field: A nonempty set F with two binary operations + (addition) and .
(multiplication) is called a field if it satisfies the following axioms for any a, b, c ∈ F
1. a + b ∈ F (F is closed under +)
2. a + b = b + a ( + is commutative operation on F)
3. a + (b + c) = (a + b) + c ( + is associative operation on F)
4. Existence of zero element (additive identity): There exists 0 ∈ F such that a + 0 = a.
5. Existence of negative element: For given a ∈ F, there exists − a ∈ F such that
a + (−a) = 0.
6. a.b ∈ F (F is closed under .)
7. a.b = b.a ( . is commutative operation on F)
8. (a.b).c = a . (b . c) ( . is associative operation on F)
9. Existence of Unity: There exists 1 ∈ F such that a.1 = a, ∀ a ∈ F.
10. Existence of inverse: For any nonzero a ∈ F, there exists a⁻¹ = 1/a ∈ F such that a.(1/a) = 1.
11. Distributive Law: a .(b + c) = (a .b) + (a .c).
Example 1.1.
(i) The set of rational numbers Q, the set of real numbers R and the set of complex numbers C are all
fields with respect to their usual addition and usual multiplication.
(ii) For prime number p, (Zp,+p,×p) is a field with respect to addition modulo p and
multiplication modulo p operation.
Definition : Vector Space: A nonempty set of vectors V with a vector addition (+) and a scalar
multiplication operation (.) is called a vector space over a field F if it satisfies the following
axioms for any u, v, w ∈ V and α, β ∈ F
1. C1: u + v ∈ V (V is closed under +)
2. C2: α.u ∈ V (V is closed under scalar multiplication)
3. A1: u + v = v + u (+ is a commutative operation on V)
4. A2: u + (v + w) = (u + v) + w (+ is an associative operation on V)
5. A3: Existence of zero vector: There exists 0 ∈ V such that u + 0 = u.
6. A4: Existence of negative vector: For given u ∈ V, there exists −u ∈ V such that u + (−u) = 0.
7. M1: α.(u + v) = α.u + α.v
8. M2: (α + β).u = α.u + β.u
9. M3: α.(β.u) = (αβ).u
10. M4: 1.u = u, where 1 is the unity of F.
Remark 1.1. (i) The field of scalars is usually R or C, and the vector space is called real or
complex depending on whether the field is R or C. However, other fields are also possible. For
example, one could use the field of rational numbers or even the field of the integers mod p for a
prime p.
A vector space is also called a linear space.
(ii) A vector is not necessarily of the form (v1, v2, · · · , vn); depending on the vector space, a vector may be an n-tuple, a matrix, a polynomial or a function (see the examples below).
(iii) A vector (v1, v2, · · · , vn) (in its most general form) is an element of a vector space.
Example 1.2.
Let V = R2 = {u = (u1, u2) / u1, u2 ∈ R}.
For u = (u1, u2) , v = (v1, v2) ∈ V and α ∈ R, we define
u + v = (u1 + v1, u2 + v2) and α.u = (αu1, αu2).
Show that V is a real vector space with respect to defined operations.
Solution: Let u = (u1, u2) , v = (v1, v2) and w = (w1,w2) ∈ V and α, β ∈ R, then
C1 : u + v = (u1 + v1, u2 + v2) ∈ V (∵ ui + vi ∈ R, ∀ ui, vi ∈ R)
C2 : α.u = (αu1, αu2) ∈ V (∵ αui ∈ R, ∀ ui, α ∈ R)
A1 : Commutativity:
u + v = (u1 + v1, u2 + v2) (by definition of + on V )
= (v1 + u1, v2 + u2) (+ is commutative operation on R)
= v + u (by definition of + on V ).
Thus, A1 holds.
A2 : Associativity:
(u + v) + w = (u1 + v1, u2 + v2) + (w1,w2) (by definition of + on V )
= ((u1 + v1) + w1, (u2 + v2) + w2) (by definition of + on V )
= (u1 + (v1 + w1), u2 + (v2 + w2)) (+ is associativity on R)
= (u1, u2) + (v1 + w1, v2 + w2) (by definition of + on V )
= u + (v + w) (by definition of + on V ).
Thus, A2 holds.
A3 : Existence of zero vector: For 0 ∈ R there exists 0 = (0, 0) ∈ R2 = V such
that u + 0 = (u1, u2) + (0, 0) = (u1 + 0, u2 + 0) = (u1, u2) = u.
Therefore, 0 = (0, 0) ∈ R2 is zero vector.
Example 1.3.
Example 1.4. V = The set of all real-valued functions and F = R ⇒ vector = f(t).
Example 1.5. Consider V to be the set of all arrows (directed line segments) in 3D. Two arrows are regarded as
equal if they have the same length and direction.
Define the sum of arrows and the multiplication by a scalar as shown below:
Example 1.6. The graphical representation of commutative and associative properties of vector
addition.
u + v = v + u (u + v) + w = u + (v + w)
Example 1.7. Let Pn be the set of all polynomials of degree at most n and u(x) = u0 + u1x + u2x2 +...+ un xn.
Define the sum among two vectors and the multiplication by a scalar as
(u + v)(x) = u(x) + v(x)
= (u0 + v0) + (u1 + v1)x + (u2 + v2)x2 + ... + (un + vn)xn
and (αu)(x) = αu0 + αu1x + αu2x2 + ... + αunxn
Example 1.8. V = M2×2(R), the set of all 2 × 2 real matrices ⇒ vector = a matrix [v11  v12; v21  v22].
Example 1.9. Let V = R or C; then V is a real vector space with respect to usual addition and usual
multiplication by real scalars.
Example 1.10. Let V = Q, R or C; then V is a vector space over the field Q.
Example 1.11. Let V = C, then V is a complex vector space.
Example 1.12. Let V = {x ∈ R / x > 0} = R+. If for x, y ∈ V and α ∈ R, the vector operations are
defined as follows:
x + y = xy and α.x = x^α,
then show that V is a real vector space.
Solution: Let x, y, z ∈ V = R+ and α, β ∈ R.
C1: Closure w.r.t. addition
x + y = xy ∈ R+ (product of two positive reals is positive)
C2: Closure w.r.t. scalar multiplication
α.x = x^α ∈ R+
A1: Commutative law for addition
x + y = xy = yx = y + x
A2: Associative law for addition
x + (y + z) = x(yz) = (xy)z = (x + y) + z
A3: Existence of zero vector: 1 ∈ R+ acts as the zero vector, since x + 1 = x.1 = x.
A4: Existence of negative vector: for x ∈ R+, 1/x ∈ R+ and x + (1/x) = x.(1/x) = 1 = zero vector.
M1: α.(x + y) = (xy)^α = x^α y^α = α.x + α.y
M2: (α + β).x = x^(α+β) = x^α x^β = α.x + β.x
M3: α.(β.x) = (x^β)^α = x^(αβ) = (αβ).x
M4: 1.x = x^1 = x
Thus V = R+ satisfies all axioms of a vector space over R with respect to the defined operations.
Therefore, V = R+ is a real vector space.
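The operations of Example 1.12 are easy to experiment with numerically. Below is a minimal Python/NumPy sketch (not part of the original notes; the helper names add and smul are my own) that spot-checks the axioms for V = R+ with x + y := xy and α.x := x^α on randomly chosen positive numbers and real scalars.

```python
import numpy as np

def add(x, y):          # "vector addition" on R+ : x + y := x*y
    return x * y

def smul(alpha, x):     # "scalar multiplication" on R+ : alpha.x := x**alpha
    return x ** alpha

rng = np.random.default_rng(0)
x, y, z = rng.uniform(0.1, 10.0, size=3)   # arbitrary positive "vectors"
a, b = rng.normal(size=2)                  # arbitrary real scalars

checks = {
    "A1 commutativity": np.isclose(add(x, y), add(y, x)),
    "A2 associativity": np.isclose(add(x, add(y, z)), add(add(x, y), z)),
    "A3 zero vector is 1": np.isclose(add(x, 1.0), x),
    "A4 negative of x is 1/x": np.isclose(add(x, 1.0 / x), 1.0),
    "M1 distributivity": np.isclose(smul(a, add(x, y)), add(smul(a, x), smul(a, y))),
    "M2 distributivity": np.isclose(smul(a + b, x), add(smul(a, x), smul(b, x))),
    "M3 associativity": np.isclose(smul(a, smul(b, x)), smul(a * b, x)),
    "M4 unity": np.isclose(smul(1.0, x), x),
}
for name, ok in checks.items():
    print(name, ok)      # every check should print True
```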
Note:
(i) The set V = {x ∈ R / x ≥ 0} is not a vector space with respect to above defined
operations because 0 has no negative element.
(ii) V = R is not a vector space over R with respect to the above defined operations,
because C2 fails for x < 0, α = 1/2 (x^(1/2) is not a real number).
Example 1.13. Let V = Mm×n (R) = Set of all m × n matrices with real entries.
Then V is a real vector space with respect to usual addition of matrices and usual scalar
multiplication.
Example 1.14. Let V = Pn = P(x) = {a0 + a1x + a2x2 + · · · + anxn / ai ∈ R}
= the set of all polynomials of degree ≤ n.
Then V is a real vector space with respect to usual addition of polynomials and usual scalar
multiplication.
Example 1.15. Let X ⊆ R and V = F[X] = {f / f : X → R is a function}. Then V is a real vector
space with respect to usual addition of real valued functions and usual scalar multiplication.
Example 1.16. Let V = { 0 }. For 0 ∈ V and α ∈ F, we define 0 + 0 = 0 and
α . 0 = 0. Then V is a vector space over the field F.
Note that the vector space V is called the trivial vector space over field F.
Theorem 1.1. Let V be a vector space over field F. Then for any u ∈ V and α ∈ F
(i) 0.u = 0 , for 0 ∈ F.
(ii) α . 0 = 0
(iii) (−1).u = − u and
(iv) If α.u = 0 then α = 0 or u = 0 .
Proof :
(i) 0.u = (0 + 0).u (by property of 0 ∈ F)
= 0.u + 0.u (by property M2)
0.u + (− 0.u) = (0.u + 0.u) + (− 0.u) (adding (−0.u) on both sides)
∴ 0 = 0.u + [0.u + (− 0.u)] (by property A4 and A2)
= 0.u + 0 (by property A4)
= 0.u (by property A3)
∴ 0.u = 0 is proved.
(ii)
α . 0 = α.( 0 + 0 )
= α. 0 + α. 0 (by property M1)
α . 0 + (− α . 0 ) = (α . 0 + α . 0 ) + (− α 0 ) (adding (−α . 0 ) on both sides)
∴ 0 = α . 0 + [α . 0 + (− α 0 )] (by property A4 and A2)
=α. 0 + 0 (by property A4)
=α. 0 (by property A3)
∴ α . 0 = 0 is proved.
(iii) To show that (−1).u = −u
Consider (−1).u + u = (−1).u + 1.u (by property M4)
= (−1 + 1).u
= 0.u = 0
∴ (−1).u is a negative vector of u
Hence, (−1).u = − u (by uniqueness of negative vector)
(iv) To show that α.u = 0 ⇒ α = 0 or u = 0.
If α = 0 then α.u = 0.u = 0 (by (i)).
If α ≠ 0, then α⁻¹ = 1/α ∈ F exists, and
u = 1.u = (α⁻¹α).u = α⁻¹.(α.u) (by property M3)
= α⁻¹. 0 = 0 (by (ii))
∴ u = 0.
Hence, the proof is completed.
Illustrations
Example 1.17. Let V = R3, define operations + and . as
(x, y, z) + (x′, y′, z′) = (x + x′, y + y′, z + z′)
α.(x, y, z) = (αx, y, z)
Is V a real vector space?
Solution: Clearly, C1, C2, A1, A2, A3, and A4 are obvious.
M1: Let u = (x, y, z) and v = (x′, y′, z′) ∈ V
Then, u + v = (x + x′, y + y′, z + z′)
α.(u + v) = α.(x + x′, y + y′, z + z′)
= (α(x + x′), y + y′, z + z′)
= (αx + αx′, y + y′, z + z′)
= (αx, y, z) + (αx′, y′, z′)
= α.(x, y, z) + α.(x′, y′, z′)
= α.u + α.v
M2: Let α, β ∈ R and u = (x, y, z) ∈ R3, then
(α + β).u = (α + β).(x, y, z) = ((α + β)x, y, z)
while α.u + β.u = α.(x, y, z) + β.(x, y, z)
= (αx, y, z) + (βx, y, z)
= (αx + βx, 2y, 2z)
From this, we have
(α + β).u ≠ α.u + β.u (for y ≠ 0 or z ≠ 0)
M3: Let
α.(β.u) = α.(βx, y, z)
= (αβx, y, z)
= αβ.(x, y, z)
= (αβ).u
M4: Let
1.u = (1.x, y, z)
= (x, y, z)
=u
∴ 1.u = u
Thus V = R3 satisfies all axioms of a vector space over the field R except M2, with respect
to the defined operations.
Therefore, V = R3 is not a real vector space with respect to these operations.
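A quick numerical counterexample (a Python/NumPy sketch, assuming nothing beyond the operations defined in Example 1.17) makes the failure of M2 concrete: scaling by α + β is not the same as adding the two scaled copies, because only the first component gets multiplied.

```python
import numpy as np

def smul(alpha, u):
    # scalar multiplication of Example 1.17: only the first component is scaled
    x, y, z = u
    return np.array([alpha * x, y, z])

def add(u, v):
    return np.array(u) + np.array(v)   # usual addition on R^3

u = np.array([1.0, 2.0, 3.0])
alpha, beta = 2.0, 5.0

lhs = smul(alpha + beta, u)               # (7, 2, 3)
rhs = add(smul(alpha, u), smul(beta, u))  # (7, 4, 6)
print(lhs, rhs, np.allclose(lhs, rhs))    # False -> axiom M2 fails
```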
Example 1.18. Let V = R3, for u = (x, y, z), v = (x′, y′, z′) and α ∈ R, we define + and . as
follows
u + v = (x + x′, y + y′, z + z′)
α.u = (0, 0, 0)
Then 1.u = (0, 0, 0) ≠ u for u ≠ 0.
∴ M4 is not satisfied.
∴ V = R3 is not a vector space with respect to these operations.
Example 1.19. Let V = R3, for u = (x, y, z), v = (x′, y′, z′) and α ∈ R, we define + and . as
follows
u + v = (x + x′, y + y′, z + z′)
α.u = (2αx, 2αy, 2αz)
Then α.(β.u) = (4αβx, 4αβy, 4αβz) ≠ (2αβx, 2αβy, 2αβz) = (αβ).u for u ≠ 0,
and 1.u = (2x, 2y, 2z) ≠ u for u ≠ 0.
∴ V = R3 is not a vector space with respect to these operations.
Example 1.20. The set V = {x ∈ R / x ≥ 0} is not a vector space with respect to usual addition and
scalar multiplication, because the existence-of-negative-vector axiom (A4) fails: for x > 0, −x ∉ V.
Example 1.21. Let V = R2, and for u = (x, y), v = (x′, y′) and α ∈ R, we define + and . as follows
u + v = (x + x′+ 1, y + y′+ 1)
α.(x, y) = (αx, αy)
Is V a real vector space?
Solution: Clearly, C1, C2, A1 and A2 are hold.
A3: Let
u+v=u
⇒ v = (−1,−1) = 0
i.e. u + 0 = u, where 0 = (−1,−1)
∴ A3 hold.
A4: Let
u+v= 0
⇒ x + x′+ 1 = −1 , y + y′+ 1 = −1
∴ x′= − x − 2, y′= − y − 2
∴ v = (− x − 2, − y − 2) = − u, is negative vector of u ∈ R2
∴ u + v = 0 , where v = (− x − 2,− y − 2)
∴ A4 hold.
M1: Let α ∈ R, then
α.(u + v) = α.(x + x′+ 1, y + y′+ 1)
= (α(x + x′+ 1), α(y + y′+ 1))
= (αx + αx′+ α, αy + αy′+ α) (1.1)
Also, α.u + α.v = (αx, αy) + (αx′, αy′) = (αx + αx′+ 1, αy + αy′+ 1) (1.2)
From (1.1) and (1.2), we get
α.(u + v) ≠ α.u + α.v in general, for α ≠ 1.
∴ M1 fails.
M2: Similarly,
(α + β).u ≠ α.u + β.u in general.
∴ M2 fails.
Therefore, V = R2 is not a vector space.
Example 1.22. Let V = { A = [a  1; 1  b] / a, b ∈ R }, the set of 2 × 2 real matrices with both off-diagonal entries equal to 1. Then V
is not a vector space with respect to usual addition and usual scalar multiplication, as C1 and C2
fail (the sum of two such matrices has 2's off the diagonal).
Example 1.23. Let V = { A = [a  a+b; a+b  b] / a, b ∈ R }. Then V
is a vector space with respect to usual addition and usual scalar multiplication.
Example 1.24. Let V = {(1, x) / x ∈ R}, and for u = (1, x), v = (1, y) ∈ V and α ∈ R, we define
+ and . as follows
u + v = (1, x + y)
α.u = (1, αx)
Show that V is a real vector space.
Solution: C1: u + v = (1, x + y) ∈ V and C2: α.u = (1, αx) ∈ V, so V is closed under both operations.
A1 and A2 follow from the corresponding properties of + on R. The zero vector is (1, 0), since
(1, x) + (1, 0) = (1, x), and the negative of (1, x) is (1, −x); hence A3 and A4 hold.
M1: α.(u + v) = (1, α(x + y)) = (1, αx + αy) = (1, αx) + (1, αy) = α.u + α.v
M2: (α + β).u = (1, (α + β)x) = (1, αx + βx) = (1, αx) + (1, βx) = α.u + β.u
M3:
α.(β.u) = α(1, βx) = (1, αβx)
= (αβ)(1, x)
= (αβ).u
∴ M3 holds.
M4:
1.u = (1, 1.x) = (1, x)
=u
∴ M4 holds.
Therefore, V is a real vector space.
Example 1.25. Let V = R2, and for u = (x, y), v = (x′, y′) ∈ R2 and α ∈ R, we
define + and . as follows
u + v = (xx′, yy′)
α.u = (αx, αy)
Show that V = R2 is not a real vector space.
Solution: Clearly, C1, C2, A1 and A2 are hold.
A3: Let
u + v = u ⇒ (xx′, yy′) = (x, y)
⇒ xx′= x and yy′= y
⇒ x′= 1, y′= 1
∴ 0 = (1, 1) ∈ V = R2 is a zero element.
∴ A3 hold.
A4: Let
u + v = 0 = (1, 1)
⇒ (xx′, yy′) = (1, 1)
∴ x′ = 1/x, y′ = 1/y, provided x ≠ 0 and y ≠ 0.
∴ Any vector with x = 0 or y = 0, for example (0, 0), has no negative vector.
∴ A4 does not hold.
M1: Let
α.(u + v) = α.(xx′, yy′) = (αxx′, αyy′)
and α .u + α .v = (αx, αy) + (αx′, αy′) = (α2xx′, α2yy′)
∴ α.(u + v) ≠ α.u + α.v in general.
∴ M1 does not hold.
M2:
(α + β).u = ((α + β)x, (α + β)y)
and α.u + β.u = (αx, αy) + (βx, βy)
= (αβx2, αβy2)
∴ (α + β).u ≠ α.u + β.u in general.
∴ M2 fails.
M3:
α.(β.u) = α.(βx, βy) = (αβx, αβy)
and (αβ).u = (αβx, αβy)
∴ α.(β.u) = (αβ).u
∴ M3 holds.
M4:
1.u = (1.x, 1.y) = (x, y) = u
∴ M4 hold.
Therefore, V is not a real vector space.
Example 1.26. Let V = { A = [a  b; c  d] / A⁻¹ exists, i.e. |A| ≠ 0 }, the set of invertible 2 × 2 real matrices. Then V
is not a vector space with respect to usual addition and usual scalar multiplication, as C1, C2, A3
and A4 fail.
Example 1.27. Let V = { A = [a  1; 1  b] / a, b ∈ R },
and for A1 = [a1  1; 1  b1], A2 = [a2  1; 1  b2] ∈ V and α ∈ R,
we define A1 + A2 = [a1 + a2  1; 1  b1 + b2] and α.A1 = [αa1  1; 1  αb1].
Then show that V is a real vector space.
Example 1.28. Let V = R2, and for u = (x, y), v = (x′, y′) ∈ R2 and α ∈ R, we define
u + v = (x + x′, y + y′)
α.u = (αx, 0)
Then only M4 fails; therefore, V = R2 is not a real vector space. This is called a weak vector
space.
Example 1.29. Show that all points of R2 lying on a line is a vector space with respect to
standard operation of a vector addition and scalar multiplication, exactly when line passes
through the origin.
Solution: Let Wm = {(x, y) / y = mx} = {(x, mx) / x ∈ R}; then Wm represents the line through the origin with slope m.
For u = (x1, mx1), v = (x2, mx2) ∈ Wm and α ∈ R, u + v = (x1 + x2, m(x1 + x2)) ∈ Wm and α.u = (αx1, m(αx1)) ∈ Wm, and the remaining axioms are inherited from R2.
Then Wm is a vector space. On the other hand, a line not passing through the origin does not contain the zero vector (0, 0), so it cannot be a vector space with respect to these operations.
Exercise:1.1
(i) Determine which of following sets are vector spaces under the given operations.
For those that are not, list all axioms that fail to hold.
(a) Set R3 with the operations + and . defined as follows
(x, y, z) + (x′, y′, z′) = (x + x′, y + y′, z + z′), α.(x, y, z) = (αx, y, z).
(b) Set R3 with the operations + and . defined as follows
(x, y, z) + (x′, y′, z′) = (x + x′, y + y′, z + z′), α.(x, y, z) = (0, αy, αz)
(c) Set R3 with the operations + and . defined as follows
(x, y, z) + (x′, y′, z′) = (|x + x′|, y + y′, z + z′), α.(x, y, z) = (αx, αy, αz)
(d) Set R2 with the operations + and . defined as follows
1.3 Subspace
Definition 1.3. A nonempty subset W of a vector space V is said to be a subspace of V if W
itself is a vector space with respect to operations defined on V .
Remark 1.2. Any subset which does not contain the zero vector cannot be a subspace because it
won′t be a vector space.
Necessary and Sufficient condition for subspace:
Theorem 1.2. A non-empty subset W of a vector space V is a subspace of V if and only if W
satisfies the following:
C1 : w1 + w2 ∈ W, ∀ w1,w2 ∈ W
C2 : kw ∈ W, ∀ w ∈ W and k in F.
Proof :
(i) Necessary Condition: If W is a subspace of a vector space V, then W itself is a vector space
with respect to the operations defined on V.
∴ W satisfies C1 and C2, as required.
(ii) Sufficient Condition: Conversely, suppose W is a non-empty subset of vector space V
satisfying C1 and C2 then
(a) For 0 ∈ F and w ∈ W,
0.w = 0 ∈ W (by C2).
∴ A3 is satisfied.
(b) For w ∈ W and −1 ∈ F,
(−1).w = −w ∈ W (by C2).
∴ A4 is satisfied.
Now, the commutative, associative and distributive laws are inherited from the superset V by the subset
W.
∴ A1, A2, M1, M2, M3, M4 hold in W.
Thus, W satisfies all the axioms of a vector space.
∴ W is a subspace of V.
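Theorem 1.2 reduces the subspace question to the two closure conditions, which can be sanity-checked on random samples. The following Python/NumPy sketch (helper names are illustrative, not from the notes) tests C1 and C2 for the plane W = {(x, y, z) ∈ R3 / y = x + z}; a genuine proof of course still requires the algebraic argument.

```python
import numpy as np

def in_W(v, tol=1e-9):
    # membership test for W = {(x, y, z) : y = x + z}
    x, y, z = v
    return abs(y - (x + z)) < tol

def random_W_vector(rng):
    # build a vector of W by choosing x, z freely and setting y = x + z
    x, z = rng.normal(size=2)
    return np.array([x, x + z, z])

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    w1, w2 = random_W_vector(rng), random_W_vector(rng)
    k = rng.normal()
    ok &= in_W(w1 + w2)   # C1: closed under addition
    ok &= in_W(k * w1)    # C2: closed under scalar multiplication
print("W passes the subspace test on all samples:", ok)   # True
```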
Definition: Linear span: Let S = {v1, v2, · · · , vn} be a subset of a vector space V over a field F. The set
L(S) = {α1v1 + α2v2 + · · · + αnvn / αi ∈ F}
of all linear combinations of vectors of S is called the linear span of the set S.
Example 1.30.
In R3 if S = {e1, e2, e3} where
e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)
Then (x, y, z) = xe1 + ye2 + ze3, ∀(x, y, z) ∈ R3.
Therefore, L(S) = R3.
Theorem 1.4. If S is nonempty subset of a vector space V , then L(S) is the smallest
subspace of V containing S.
Proof : Let S = { v1, v2, · · · , vn } ⊆ V, then
L(S) = {α1v1 + α2v2 + · · · + αnvn / αi ∈ F} ⊆ V.
Taking each αi = 0, we get 0 = 0.v1 + 0.v2 + · · · + 0.vn ∈ L(S).
∴ L(S) ≠ ϕ (1.3)
Moreover, for each i ∈ {1, 2, · · · , n}, taking αi = 1 and αj = 0 for j ≠ i, we get vi ∈ L(S).
∴ S ⊆ L(S). (1.4)
Now let w1 = α1v1 + α2v2 + · · · + αnvn and w2 = β1v1 + β2v2 + · · · + βnvn be any two elements of L(S). Then
w1 + w2 = (α1 + β1)v1 + (α2 + β2)v2 + · · · + (αn + βn)vn
∴ w1 + w2 ∈ L(S), as αi + βi ∈ F, ∀ i. (1.5)
Moreover, for k ∈ F,
k.w1 = (kα1)v1 + (kα2)v2 + · · · + (kαn)vn = α1′v1 + α2′v2 + · · · + αn′vn, where αi′ = kαi ∈ F
∴ k.w1 ∈ L(S). (1.6)
From equations (1.3), (1.4), (1.5) and (1.6), L(S) is a subspace of V containing S.
Now, suppose W is any other subspace of V containing S. Since v1, v2, · · · , vn ∈ W and W is closed under
addition and scalar multiplication, every linear combination α1v1 + α2v2 + · · · + αnvn belongs to W.
∴ L(S) ⊆ W.
Therefore, L(S) is the smallest subspace of V containing S.
Hence, proof is completed.
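Membership in a linear span is a question about a linear system: v ∈ L(S) exactly when appending v as an extra column does not increase the rank of the matrix whose columns are the vectors of S. Below is a small Python/NumPy sketch with a made-up two-vector example (the rank criterion is standard linear algebra, not something introduced in the notes).

```python
import numpy as np

def in_span(vectors, v, tol=1e-9):
    """Return True if v is a linear combination of the given vectors."""
    A = np.column_stack(vectors)          # vectors of S as columns
    # v is in L(S) iff adjoining v as an extra column does not raise the rank
    return np.linalg.matrix_rank(np.column_stack([A, v]), tol) == np.linalg.matrix_rank(A, tol)

S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(in_span(S, np.array([3.0, 4.0, 7.0])))   # True:  3*(1,0,1) + 4*(0,1,1)
print(in_span(S, np.array([3.0, 4.0, 5.0])))   # False: third component would have to be 7
```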
Exercise: 1.2
(i) Determine which of following are subspaces of R3.
(a) Set of all vectors of the form (x, 0, 0)
(b) Set of all vectors of the form (x, 1, 2)
(c) Set of all vectors of the form (x, y, z), where y = x + z
(d) Set of all vectors of the form (x, y, z), where y = x + z + 2
(ii) Determine which of following are subspaces of M22(R).
(a) Set of all 2 × 2 matrices with rational entries
(b) W = { A = [a  b; c  d] ∈ M22(R) / a + 2b + c + d = 0 }
(c) W = { A = [a  b; c  d] ∈ M22(R) / det A = 0 }
(iii) Determine which of following are subspaces of P3.
(a) W = {p(x) = a0 + a1x + a2x2 + a3x3 ∈ P3/a0 = 0}
(b) W = {p(x) = a0 + a1x + a2x2 + a3x3 ∈ P3/a0 + a1 + 2a2 + 3a3 = 0}
(c) W = {p(x) = a0 + a1x + a2x2 + a3x3 ∈ P3/a0 is an integer}
(d) W = {p(x) = a0 + a1x ∈ P3/a0, a1 ∈ R}
(iv) Determine which of following are subspaces of F(−∞,∞).
(a) W = {f ∈ F(− ∞,∞)/f(x) ≤ 0}
(b) W = {f ∈ F(− ∞,∞)/f(0) = 0}
(c) W = {f ∈ F(− ∞,∞)/f(0) = 4}
(d) W = {f(x) = a0 + a1x ∈ F(− ∞,∞)/a0, a1 ∈ R}.
(v) Determine which of following are subspaces of Mnn(R).
(a) W = {A ∈ Mnn (R)/tr.(A) = 0}
(b) W = {A ∈ Mnn (R)/At = A}
(c) W = {A ∈ Mnn (R)/At = −A}
(d) W = {A ∈ Mnn (R)/AX = 0 has only trivial solution}.
Definition: Linearly independent and dependent sets: A set S = {v1, v2, · · · , vn} of vectors in a vector space V is said to be linearly independent if
α1v1 + α2v2 + · · · + αnvn = 0 ⇒ αi = 0, ∀ i = 1, 2, · · · , n.
If there exist some nonzero values among α1, α2, · · · , αn such that
α1v1 + α2v2 + · · · + αnvn = 0,
then S is said to be a linearly dependent set.
Example 1.37. Show that the set of vectors S = {(1, 2, 0), (0, 3, 1), (−1, 0, 1)} is linearly
independent in an Euclidean space R3.
Solution: For linear independence, we consider
a(1, 2, 0) + b(0, 3, 1) + c(−1, 0, 1) = 0
⇒ a − c = 0, 2a + 3b = 0, b + c = 0.
The determinant of the coefficient matrix of this homogeneous system is 1 ≠ 0, so the system has only the trivial solution a = b = c = 0.
Therefore, S is linearly independent.
Example 1.38. Show that the set of vectors S = {(1, 3, 2), (1,−7,−8), (2, 1,−1)}
is a linearly dependent set in R3.
Solution: For linear dependence, we consider
a(1, 3, 2) + b(1,−7,−8)+c(2, 1,−1) = 0
⇒ a + b + 2c = 0
3a − 7b + c = 0
2a − 8b − c = 0
This homogeneous system has a nonzero solution as follows
a = 3, b = 1, c = −2.
Therefore, set of vectors S = {(1, 3, 2), (1,−7,−8), (2, 1,−1)} is linearly dependent
set in R3.
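The dependence found in Example 1.38 can be confirmed numerically: the matrix whose columns are the three vectors has rank 2 and determinant 0, and the coefficients a = 3, b = 1, c = −2 quoted in the notes annihilate it. A Python/NumPy sketch:

```python
import numpy as np

# columns are the vectors of S = {(1, 3, 2), (1, -7, -8), (2, 1, -1)}
A = np.array([[1.0, 1.0, 2.0],
              [3.0, -7.0, 1.0],
              [2.0, -8.0, -1.0]])

print(np.linalg.matrix_rank(A))         # 2 < 3, so the columns are linearly dependent
print(np.isclose(np.linalg.det(A), 0))  # True: determinant of the coefficient matrix is 0

# the nonzero solution quoted in the notes: a = 3, b = 1, c = -2
coeffs = np.array([3.0, 1.0, -2.0])
print(A @ coeffs)                        # ~ [0, 0, 0]
```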
Example 1.39. Find t, for which u = (cost, sint), v = (− sint, cost) forms a linearly independent
set in R2.
Solution: For linear independence, we consider
αu + βv = 0
α(cost, sint) + β(− sint,cost) = 0
⇒ αcost − βsint = 0
αsint + βcost = 0
The determinant of the coefficient matrix of this homogeneous system is
|cos t  −sin t; sin t  cos t| = cos²t + sin²t = 1 ≠ 0, ∀ t ∈ R.
The homogeneous system has only trivial solution i.e. α = 0 and β = 0.
Therefore, the given vectors are linearly independent for any scalar t.
Example 1.40. Find t, for which u = (cost, sint), v = (sint, cost) form a linearly independent set
in R2.
Solution: For linear independence, we consider
αu + βv = 0
α(cost, sint) + β(sint,cost) = 0
⇒ αcost + βsint = 0
αsint + βcost = 0
The determinant of the coefficient matrix of this homogeneous system is
|cos t  sin t; sin t  cos t| = cos²t − sin²t = cos 2t,
which is nonzero for all t ∈ R except t = (2n + 1)π/4, n ∈ Z.
Therefore, the given vectors are linearly independent for every
t ∈ R except t = (2n + 1)π/4, n ∈ Z.
Wronskian: If f1(x), f2(x), · · · , fn(x) are (n − 1) times differentiable functions on the interval (−∞, ∞), then the
determinant
W(x) = |f1(x)  f2(x)  · ·  fn(x); f1′(x)  f2′(x)  · ·  fn′(x); · · · ; f1^(n−1)(x)  f2^(n−1)(x)  · ·  fn^(n−1)(x)|
is called the Wronskian of f1, f2, · · · , fn. If W(x) ≠ 0 for at least one x, then {f1, f2, · · · , fn} is a linearly independent set.
Example 1.41. Show that f1(x) = 1, f2(x) = e^x, f3(x) = e^(2x) form a linearly independent set.
Solution:
W(x) = |f1(x)  f2(x)  f3(x); f1′(x)  f2′(x)  f3′(x); f1″(x)  f2″(x)  f3″(x)| = |1  e^x  e^(2x); 0  e^x  2e^(2x); 0  e^x  4e^(2x)|
= 4e^(3x) − 2e^(3x)
= 2e^(3x) ≠ 0
∴ {f1, f2, f3} forms a linearly independent set.
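The Wronskian of Example 1.41 can be reproduced symbolically. A short SymPy sketch (building the Wronskian matrix directly from derivatives, so it relies only on standard SymPy calls; not part of the notes):

```python
import sympy as sp

x = sp.symbols('x')
f = [sp.Integer(1), sp.exp(x), sp.exp(2 * x)]

# Wronskian matrix: row k holds the k-th derivatives of f1, f2, f3
W = sp.Matrix([[sp.diff(fi, x, k) for fi in f] for k in range(3)])
print(sp.simplify(W.det()))   # 2*exp(3*x), never zero, so {1, e^x, e^(2x)} is independent
```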
Example 1.42. In the vector space of continuous functions, consider the vectors
f1(x) = sin(x)cos(x) and f2(x) = sin(2x). The set {f1(x), f2(x)} is linearly dependent because
f2(x) = 2f1(x).
Example 1.43. In the vector space C(R), of continuous functions over R, the vectors f1(x) = sinx
and f2(x) = cos x are linearly independent because f2(x) ≠ c f1(x) for any real c.
Theorem 1.5. A subset S in a vector space V containing two or more vectors is linearly
dependent if and only if atleast one vector is expressible as the linear combination of remaining.
Proof: Suppose S = { v1, v2, · · · , vr } is a linearly dependent set in a vector space V.
Therefore, there exist scalars α1, α2, · · · , αr, not all zero, such that
α1v1 + α2v2 + · · · + αrvr = 0
If αr ≠ 0 (relabelling the vectors if necessary), then we have
αr vr = −(α1v1 + α2v2 + · · · + αr−1vr−1)
∴ vr = −(α1/αr)v1 − (α2/αr)v2 − · · · − (αr−1/αr)vr−1,
i.e., vr is expressible as a linear combination of the remaining vectors of S.
Conversely, suppose some vector of S, say vr, is expressible as a linear combination of the remaining vectors,
vr = β1v1 + β2v2 + · · · + βr−1vr−1.
Then β1v1 + β2v2 + · · · + βr−1vr−1 + (−1)vr = 0, where the coefficient of vr is −1 ≠ 0.
∴ S is linearly dependent.
Definition: Basis: A subset B of a vector space V is called a basis of V if (i) B is linearly independent and (ii) L(B) = V, i.e., B spans V.
Example 1.44.
Show that B = {e1, e2, · · · , en} is a basis for Euclidean n− dimensional space Rn,
where e1 = (1, 0, · · · , 0), e2 = (0, 1, · · · , 0), · · · , en = (0, 0, · · · , 1).
Solution: (i) To show that B is linearly independent.
For this, we have
a1e1 + a2e2 + · · · + anen = 0
⇒ (a1, a2, · · · , an) = (0, 0, · · · , 0)
∴ a1 = a2 = · · · = an = 0
Therefore, B is linearly independent.
(ii) For any u = (u1, u2, · · · , un) ∈ Rn, is written as
u = u1e1 + u2e2 + · · · + unen
∴ L(B) = Rn.
From (i) and (ii), we prove B = { e1, e2, · · · , en } is a basis for Rn.
This basis is called as natural or standard basis for Rn.
Example 1.45. Let C be a real vector space. Show that B = {2 + 3i, 5 − 6i} is a
basis for C.
Solution: (i) To show that B is linearly independent.
For this, we have
a(2 + 3i) + b(5 − 6i) = 0 = 0 + i 0
⇒ 2a + 5b = 0
3a − 6b = 0
Solving this system of equations, we get
a = b = 0.
Therefore, B is linearly independent.
(ii) For any u = x + iy = (x, y) ∈ C, is written as
x + iy = a(2 + 3i) + b(5 − 6i)
⇒ 2a + 5b = x
3a − 6b = y
This system of equations can be written in matrix form as follows
[2  5; 3  −6] [a; b] = [x; y]
The coefficient matrix of this system is
A = [2  5; 3  −6]
⇒ |A| = −27 ≠ 0, hence A⁻¹ exists.
Therefore, from the above matrix form of the system, we get
[a; b] = A⁻¹ [x; y] = (1/−27) [−6  −5; −3  2] [x; y]
∴ a = (1/27)(6x + 5y), b = (1/27)(3x − 2y) ∈ R are such that
x + iy = a(2 + 3i) + b(5 − 6i), ∀ x + iy ∈ C.
∴ L(B) = C.
From (i) and (ii), we prove B is a basis for real vector space C.
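The 2 × 2 system of Example 1.45 can be solved numerically for any x + iy. The Python/NumPy sketch below (the function name coords is my own) also checks the closed-form values a = (6x + 5y)/27 and b = (3x − 2y)/27 derived above.

```python
import numpy as np

def coords(x, y):
    # columns: real and imaginary parts of the basis vectors 2+3i and 5-6i
    A = np.array([[2.0, 5.0],
                  [3.0, -6.0]])
    return np.linalg.solve(A, np.array([x, y]))   # [a, b]

x, y = 4.0, -7.0
a, b = coords(x, y)
print(np.isclose(a, (6 * x + 5 * y) / 27), np.isclose(b, (3 * x - 2 * y) / 27))  # True True
print(np.isclose(a * (2 + 3j) + b * (5 - 6j), x + 1j * y))                        # True
```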
Example 1.46. Let C be a real vector space. Show that B = {1, i} is a basis.
Example 1.47. Show that the set S = {(1, 1, 2), (1, 2, 5), (5, 3, 4)} does not form a basis for R3.
Solution: For linear independence, we consider
a(1, 1, 2) + b(1, 2, 5) + c(5, 3, 4) = 0
⇒ a + b + 5c = 0
a + 2b + 3c = 0
2a + 5b + 4c = 0.
The determinant of the coefficient matrix of this homogeneous system is 0, so the system has nonzero solutions.
Hence S is linearly dependent and does not form a basis for R3.
Example 1.48. Show that B = {p1, p2, p3}, where p1 = 1 + 2x + x2, p2 = 2 + x, p3 = 1 − x + 2x2,
is a basis for P2.
Solution: (i) For linear independence, suppose a p1 + b p2 + c p3 = 0. Comparing the coefficients of 1, x and x2 gives the homogeneous system
[1  2  1; 2  1  −1; 1  0  2] [a; b; c] = [0; 0; 0] (1.9)
The coefficient matrix of this homogeneous system is
A = [1  2  1; 2  1  −1; 1  0  2]
⇒ |A| = −9 ≠ 0.
Therefore, the above system has only trivial solution as follows
a = b = c = 0.
Therefore, set B is linearly independent.
(ii) Suppose
a0 + a1x + a2x2 = ap1 + bp2 + cp3
Then replacing [0; 0; 0] by [a0; a1; a2] in (1.9), we get
[1  2  1; 2  1  −1; 1  0  2] [a; b; c] = [a0; a1; a2]
∴ [a; b; c] = A⁻¹ [a0; a1; a2] = (1/|A|) adj(A) [a0; a1; a2] = (−1/9) [2  −4  −3; −5  1  3; −1  2  −3] [a0; a1; a2]
∴ a = (1/9)(−2a0 + 4a1 + 3a2),
b = (1/9)(5a0 − a1 − 3a2),
c = (1/9)(a0 − 2a1 + 3a2) ∈ R
such that
a0 + a1x + a2x2 = ap1 + bp2 + cp3.
∴ L(B) = P2.
Hence, from (i) and (ii), we prove B is basis for P2.
Theorem 1.7. If B = {v1, v2, · · · , vn} is a basis for the vector space V , then any vector v ∈ V is
uniquely expressed as linear combination of the basis vector.
Proof : Suppose
v = α1v1 + α2v2 + · · · + αnvn = β1v1 + β2v2 + · · · + βnvn for some αi's, βi's ∈ F.
Then
(α1 − β1)v1 + (α2 − β2)v2 + · · · + (αn − βn)vn = 0
Since B is linearly independent,
∴ αi − βi = 0, ∀ i = 1, 2, · · · , n.
∴ αi = βi, ∀ i = 1, 2, · · · , n.
Hence, any vector v ∈ V is uniquely expressed as the linear combination of basis vectors.
Definition: Co-ordinate vector: If B = {v1, v2, · · · , vn} is a basis of V and
v = α1v1 + α2v2 + · · · + αnvn, then (v)B = (α1, α2, · · · , αn)
is called the co-ordinate vector of v relative to the basis B.
Example 1.49.
Let B = {(1, 0), (1, 2)} be a basis of R2 and (x)B = (−2, 3); then x = −2(1, 0) + 3(1, 2) = (1, 6).
Example 1.50. Find the co-ordinate vector of x = (4, 5) relative to the basis B = {(2, 1), (−1, 1)} of R2.
Solution: Writing x = c1(2, 1) + c2(−1, 1) and comparing components,
⇒ 2c1 − c2 = 4
c1 + c2 = 5.
Solving these equations, we get
c1 = 3, c2 = 2.
∴ (x)B = (3, 2).
Example 1.51. Find co-ordinate vectors of (1, 0, 0), (0, 1, 0), (0, 0, 1) and (1, 1,−1) with respect
to basis B = {(1, 2, 1), (2, 1, 0), (1,−1, 2)} of Euclidean space R3.
Solution: Let (x, y, z) = a(1, 2, 1) + b(2, 1, 0) + c(1, −1, 2). Comparing components gives
a + 2b + c = x, 2a + b − c = y, a + 2c = z.
Solving this system (its coefficient matrix is the same matrix A as in Example 1.48), we get
a = (1/9)(−2x + 4y + 3z),
b = (1/9)(5x − y − 3z),
c = (1/9)(x − 2y + 3z).
The co-ordinate vectors of the general and the given vectors relative to the basis are as
follows
∴ (v)B = (a, b, c) = ((1/9)(−2x + 4y + 3z), (1/9)(5x − y − 3z), (1/9)(x − 2y + 3z))
∴ (1, 0, 0)B = (1/9)(−2, 5, 1)
(0, 1, 0)B = (1/9)(4, −1, −2)
(0, 0, 1)B = (1/9)(3, −3, 3) = (1/3)(1, −1, 1)
(1, 1, −1)B = (1/9)(−1, 7, −4).
Example 1.52. Let B = { [1  0; 0  0], [1  1; 0  0], [1  1; 1  0], [1  1; 1  1] } be a basis for the vector space M2(R).
Find the co-ordinate vectors of the matrices
[2  −3; −1  0], [1  0; 0  1], [2  1; 5  −1]
Solution: Let
[x  y; z  w] = a [1  0; 0  0] + b [1  1; 0  0] + c [1  1; 1  0] + d [1  1; 1  1].
This gives the following system of equations
a + b + c + d = x
b + c + d = y
c + d = z
d = w
Solving this system, we get
a = x − y, b = y − z, c = z − w, d = w.
The co-ordinate vectors of the general and the given matrices relative to the basis B are as follows
([x  y; z  w])B = (x − y, y − z, z − w, w)
([2  −3; −1  0])B = (5, −2, −1, 0),
([1  0; 0  1])B = (1, 0, −1, 1),
([2  1; 5  −1])B = (1, −4, 6, −1).
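The coordinate vectors of Example 1.52 can also be obtained by flattening each 2 × 2 matrix to a vector in R4 and solving one linear system. Below is a Python/NumPy sketch using the basis of the example (as reconstructed above) and the first of the given matrices.

```python
import numpy as np

# basis matrices of Example 1.52, flattened row-wise into columns of B
basis = [np.array([[1, 0], [0, 0]]),
         np.array([[1, 1], [0, 0]]),
         np.array([[1, 1], [1, 0]]),
         np.array([[1, 1], [1, 1]])]
B = np.column_stack([M.flatten() for M in basis]).astype(float)

def coord_vector(M):
    # coordinates (a, b, c, d) with M = a*B1 + b*B2 + c*B3 + d*B4
    return np.linalg.solve(B, M.flatten().astype(float))

M = np.array([[2, -3], [-1, 0]])
print(coord_vector(M))           # [ 5. -2. -1.  0.], matching (x-y, y-z, z-w, w)
```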
Definition: Dimension: The number of vectors in a basis of a finite dimensional vector space V is called the dimension of V and is denoted by dim.V.
Example 1.53.
(i) dim{Rn} = n.
(ii) dim{P2} = 3 because one of its bases is a natural basis {1, t, t2}.
(iii) dim{Pn} = n + 1 because one of its bases is a natural basis {1, t, t2, · · · , tn}.
(iv) dim{P} = ∞
(v) dim{Span{v1, v2}} = 2 if v1 and v2 are linearly independent vectors.
Subspaces of R3: the zero subspace { 0 }, lines through the origin, planes through the origin, and R3 itself.
Theorem: Let V be an n-dimensional vector space and let S be a subset of V containing exactly n vectors.
(i) If S is linearly independent, then S is a basis of V. (ii) If L(S) = V, then S is a basis of V.
Proof: (i) Let S = { v1, v2, · · · , vn } be a linearly independent set in V. To prove S is a basis of V,
it is sufficient to prove that L(S) = V.
Let v ∈ V. Since dim V = n, the set S′ = { v1, v2, · · · , vn, v } of n + 1 vectors is linearly dependent,
so there exist scalars α1, α2, · · · , αn, α, not all zero, such that
α1v1 + α2v2 + · · · + αnvn + αv = 0
⇒ at least one αi ≠ 0 or α ≠ 0.
If α = 0 then α1v1 + · · · + αnvn = 0
⇒ αi = 0, ∀ i, as S is linearly independent.
Then all the scalars are zero, i.e. S′ satisfies only the trivial relation, so S′ is linearly independent. This is a contradiction to S′ being linearly dependent.
∴ α ≠ 0.
∴ v = −(α1/α)v1 − (α2/α)v2 − · · · − (αn/α)vn
Thus, any vector in V is expressed as a linear combination of vectors in S.
Therefore, L(S) = V, and S is a basis of V.
(ii) Let S be subset of a n- dimensional vector space V containing n vectors such that L(S) = V .
To prove S is a basis of V , it is sufficient to prove that S is linearly independent.
On the contrary, suppose that set S is linearly dependent then at least one vector in S is expressed
as linear combination of others.
Let
v1 = α2v2 + α3v3 + · · · + αnvn (1.10)
Now, L(S) = V. Therefore, any vector v ∈ V is expressed as
v = β1v1 + β2v2 + · · · + βnvn
∴ v = β1(α2v2 + α3v3 + · · · + αnvn) + β2v2 + · · · + βnvn (by equation (1.10))
= (β1α2 + β2)v2 + (β1α3 + β3)v3 + · · · + (β1αn + βn)vn
Therefore, every vector of V is expressed as a linear combination of the n − 1 vectors v2, v3, · · · , vn,
so that dim V ≤ n − 1 < n, which is a contradiction.
Hence S is linearly independent, and therefore S is a basis of V.
Example 1.56.
(i) B = {(1, 0, 0), (2, 3, 0)} is a set of 2 linearly independent vectors.
But it cannot span R3, because spanning R3 requires 3 vectors; so it cannot
be a basis.
(ii) B = {(1, 0, 0), (2, 3, 0), (4, 5, 6)} is a set of 3 linearly independent vectors
that spans R3, so it is a basis of R3.
(iii) B = {(1, 0, 0), (2, 3, 0), (4, 5, 6), (7, 8, 9)} is a set of 4 linearly dependent
vectors that spans R3, so it cannot be a basis, as n(B) = 4 > 3 = dim.R3.
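Each case of Example 1.56, and exercise 1.3(xvi) below, reduces to a rank computation: n vectors form a basis of Rn exactly when the matrix having them as rows has rank n. A minimal Python/NumPy sketch (the helper name is illustrative, not from the notes):

```python
import numpy as np

def is_basis_of_R3(vectors):
    A = np.array(vectors, dtype=float)      # one candidate vector per row
    return A.shape == (3, 3) and np.linalg.matrix_rank(A) == 3

print(is_basis_of_R3([(1, 0, 0), (2, 3, 0)]))                        # False: only 2 vectors
print(is_basis_of_R3([(1, 0, 0), (2, 3, 0), (4, 5, 6)]))             # True
print(is_basis_of_R3([(1, 0, 0), (2, 3, 0), (4, 5, 6), (7, 8, 9)]))  # False: 4 vectors
```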
Theorem 1.11. Any two bases of a finite dimensional vector space V have the same number
of elements.
Proof : Let B = {v1, v2, · · · , vn} and B′ = {u1, u2, · · · , um} be the bases of the vector space V .
Then B and B′ are linearly independent sets.
Now, if B is basis and B′ is linearly independent set then
No. of elements in B′ ≤ No. of elements in B (1.11)
While if B′ is basis and B is linearly independent set then
No. of elements in B ≤ No. of elements in B′ (1.12)
From equations (1.11) and (1.12), we get
No. of elements in B = No. of elements in B′
Thus, any two bases of a finite dimensional vector space V have the same number of elements,
as was to be proved.
Note: From the definitions of a linearly dependent set and of a basis, we have the following results.
Theorem 1.12. Let S = {v1, v2, · · · , vr} be the set of vectors in Rn. If r > n then S is linearly
dependent.
Proof: Suppose
v1 = (v11, v12, · · · , v1n)
v2 = (v21, v22, · · · , v2n)
...
vr = (vr1, vr2, · · · , vrn)
Consider the equation
k1v1 + k2v2 + · · · + krvr = 0
k1(v11, v12, · · · , v1n) + k2(v21, v22, · · · , v2n) + · · ·+kr(vr1, vr2, · · · , vrn) = 0
Comparing both sides, we get
k1v11 + k2v21 + · · · + krvr1 = 0
k1v12 + k2v22 + · · · + krvr2 = 0
...
k1v1n + k2v2n + · · · + krvrn = 0
This is a homogeneous system of n equations in the r unknowns k1, k2, · · · , kr. Since r > n, the system has
more unknowns than equations and hence has nontrivial solutions, i.e. not all ki are zero.
Therefore, S is linearly dependent.
Theorem 1.13. Let V be n dimensional vector space and S = { v1, v2, · · · , vr } be linearly
independent set in V then S can be extended to a basis
S′ = {v1, v2, · · · vr, vr+1, · · · , vn} of V.
Proof : If r = n then S itself is a basis of V.
If r < n, then L(S) ≠ V, so we can choose vr+1 ∈ V with vr+1 ∉ L(S); then
S1 = { v1, v2, · · · , vr, vr+1}
is a linearly independent set in V.
If r + 1 = n then S1 = S′ is a basis of V.
If r + 1 < n, then similarly choose vr+2 ∈ V with vr+2 ∉ L(S1), so that
S2 = { v1, v2, · · · , vr, vr+1, vr+2}
is a linearly independent set in V.
If r + 2 = n then S2 = S′ is a basis of V.
If r + 2 < n, then continue this process till S is extended to a basis
S′ = { v1, v2, · · · , vr, vr+1, · · · , vn } of V.
Exercise: 1.3
(i) Determine which of following are linear combinations of u = (0,−1, 2) and v = (1, 3,−1) ?
(a) (2, 2, 2) (b) (3, 1, 5) c) (0, 4, 5) (d) (0, 0, 0).
(ii) Express the following as linear combinations of u = (2, 1, 4) , v = (1,−1, 3) and w = (3, 2, 5).
(a) (−9,−7,−15) (b) (6, 11, 6) (c) (7, 8, 9) (d) (0, 0, 0).
(iii) Express the following as linear combinations of p1 = 2 + x + 4x2 , p2 = 1− x + 3x2
and p3 = 3 + 2x + 5x2.
(a) −9 − 7x − 15x2 (b) 6 + 11x + 6x2 (c) 7 + 8x + 9x2 (d) 0.
(iv) Determine which of the following are linear combinations of the matrices
A = [4  0; −2  −2], B = [1  −1; 2  3], C = [0  2; 1  4] ?
(a) [6  −8; −1  −8] (b) [0  0; 0  0] (c) [6  0; 3  8] (d) [−1  5; 7  1]
(v) Determine whether the given sets span R3 ?
(a) S = {(2, 2, 2), (0, 0, 3), (0, 1, 1)}
(b) S = {(2,−1, 3), (4, 1, 2), (8,−1, 8)}
(c) S = {(3, 1, 4), (2,−3, 5), (5,−2, 9), (1, 4,−1)}
(d) S = {(1, 2, 6), (3, 4, 1), (4, 3, 1), (3, 3, 1)}
(vi) Determine whether the polynomials 1−x + 2x2, 3 + x, 5− x + 4x2, −2−2x +2x2 span P2 ?
(vii) Let S = {(2, 1, 0, 3), (3,−1, 5, 2), (−1, 0, 2, 1)}. Which of the following vectors
are in linear span of S ?
(a) (2, 3,−7, 3) (b) (1, 1, 1, 1) (c) (−4, 6,−13, 4).
(viii) By inspection, explain why the following are linearly dependent sets of vectors?
(a) S = {(−1, 2, 4), (5,−10,−20)} in R3
(b) S = {p1 = 1 − 2x + 4x2, p2 = −3 + 6x − 12x2} in P2
(c) S = {(3, −1), (4, 5), (−2, 9)} in R2
(d) S = { A = [4  0; 2  2], B = [−4  0; −2  −2] } in M22.
(ix) Determine which of the following sets are linearly dependent in R3 ?
(a) S = {(2, 2, 2), (0, 0, 3), (0, 1, 1)}
(b) S = {(8,−1, 3), (4, 0, 1)}
(c) S = {(3, 1, 4), (2,−3, 5), (5,−2, 9)}
(d) S = {(1, 2, 6), (3, 4, 1), (4, 3, 1), (3, 3, 1)}.
(x) Show that the vectors v1 = (0, 3, 1,−1), v2 = (6, 0, 5, 1), v3 = (4,−7, 1, 3) form a linearly
dependent set in R4. Express each vector as a linear combination of remaining two.
(xi) For which values of λ do the following set vectors form a linearly dependent set in R3 ?
v1 = (λ,−1/2,−1/2), v2 = (−1/2, λ,−1/2), v3 = (−1/2,−1/2, λ).
(xii) Show that if S = {v1, v2, ..., vr} is a linearly independent set of vectors, then so is every
nonempty subset of S.
(xiii) Show that if S = {v1, v2, ..., vr} is a linearly dependent set of vectors, then
S′ = {v1, v2, ..., vr, vr+1, ..., vn} is also a linearly dependent set.
(xiv) Prove: For any vectors u, v and w, the vectors u − v, v − w and w − u form a linearly
dependent set.
(xv) By inspection, explain why the following set of vectors are not bases for the indicated
vector spaces?
(a) S = {(−1, 2, 4), (5,−10,−20), (1, 0, 2), (1, 2, 3)} for R3
(b) S = {p1 = 1 − 2x + 4x2, p2 = −3 + 6x − 12x2} for P2
(c) S = {(3,−1), (4, 5), (−2, 9)} for R2
(d) S = { A = [4  0; 2  2], B = [−4  0; −2  −2] } in M22
(e) S = {(1, 2, 3), (−8, 2, 4), (2, 4, 6)} for R3.
(xvi) Determine which of the following sets are bases for R3 ?
(a) S = {(2, 2, 2), (0, 0, 3), (0, 1, 1)}
(b) S = {(8,−1, 3), (4, 0, 1) (12,−1, 4)}
(c) S = {(3, 1, 4), (2,−3, 5), (5,−2, 9)}
(d) S = {(1, 2, 6), (3, 4, 1), (4, 3, 1)}
(xvii) Determine which of the following sets are bases for P2 ?
(a) S = {1 − 3x + 2x2, 1 + x + 4x2, 1 − 7x}
(b) S = {4 + 6x + x2, −1 + 4x + 2x2, 5 + 2x − x2}
(c) S = {1, 1 + x, 1 + x − x2}
(d) S = {−4 + x + 3x2, 6 + 5x + 2x2, 8 + 4x + x2}
(xviii) Show that the following set of matrices is a basis for M22.
A = [3  6; 3  −6], B = [0  −1; −1  0], C = [0  −8; −12  −4], D = [1  0; −1  2]
(xix) Let V be the space spanned by the set S = { v1 = cos²x, v2 = sin²x, v3 = cos 2x}.
Show that S is not a basis of V.
(xx) Find the coordinate vector of w relative to the basis B = {v1, v2} for R2.
(a) v1 = (1, 0), v2 = (0, 1); w = (3,−7)
(b) v1 = (2,−4), v2 = (3, 8); w = (1, 1)
(c) v1 = (1, 1), v2 = (0, 2); w = (a, b)
(xxi) Find the coordinate vector of w relative to the basis B = {v1, v2, v3} for R3.
(a) v1 = (1, 0, 0), v2 = (1, 1, 0), v3 = (1, 1, 1); w = (2,−1, 3)
(b) v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (0, 0, 1); w = (a, b, c)
(c) v1 = (1, 2, 3), v2 = (−4, 5, 6), v3 = (7,−8, 9); w = (5,−12, 3)
(xxii) Find the coordinate vector of p relative to the basis B = {p1, p2, p3} for P2.
(a) p1 = 1, p2 = x, p3 = x2; p = 2 − x + 3x2
(b) p1 = 1 + x, p2 = 1 + x2, p3 = x + x2; p = 2 − x + x2
(xxiii) Find the coordinate vector of A relative to the basis B = {A1, A2, A3, A4} for M22, where
A = [2  0; −1  3], A1 = [−1  1; 0  0], A2 = [1  1; 0  0], A3 = [0  0; 1  0], A4 = [0  0; 0  1]
(xxiv) If B = {v1, v2, v3} is a basis for vector space V then show that
B′ = {v1, v1 + v2, v1 + v2 + v3} is also a basis of V .
(xxv) Determine a basis and the dimension of each of the following subspaces of R3.
(a) S = {(x, y, z) / 3x − 2y + 5z = 0}
(b) S = {(x, y, z) / x − y = 0}
(c) S = {(x, y, z) / z = x − y, y = 2x + 3z}
(xxvi) Determine a basis and the dimension of each of the following subspaces of R4.
(a) S = {(x, y, z, w) /w = 0}
(b) S = {(x, y, z, w) / w = x + y, z = x − y}
(c) S = {(x, y, z, w) / x = y = z = w}
(xxvii) Determine a basis and the dimension of the subspace of P3 consisting of all polynomials
a0 + a1x + a2x2 + a3x3 for which a0 = 0.
(xxviii) Find t, for which u = (e^(at), a e^(at)), v = (e^(bt), b e^(bt)) form a linearly independent set in R2.
Definition: Sum and direct sum: If W1 and W2 are subspaces of a vector space V, then W1 + W2 = {w1 + w2 / w1 ∈ W1, w2 ∈ W2}.
V is said to be the direct sum of W1 and W2, written V = W1 ⊕ W2, if every v ∈ V can be expressed uniquely as v = w1 + w2 with w1 ∈ W1 and w2 ∈ W2.
For example, in R3 take W1 = {(x, y, 0) / x, y ∈ R} and W2 = {(0, y, z) / y, z ∈ R}. Then R3 = W1 + W2, but the sum is not direct, since
(x, y, z) = (x, y, 0) + (0, 0, z) = (x, y/2, 0) + (0, y/2, z), where (x, y/2, 0) ∈ W1 and (0, y/2, z) ∈ W2,
gives two different decompositions of the same vector; here W1 ∩ W2 = {(0, y, 0) / y ∈ R} ≠ { 0 }.
Theorem 1.14. If W1 and W2 are the subspaces of a vector space V then V = W1 ⊕W2 if and
only if (a) V = W1+W2 and (b) W1 ∩W2 = { 0 }.
Proof: Suppose V = W1 ⊕W2. then v ∈ V is uniquely expressed as v = w1 + w2 for
w1 ∈ W1 and w2 ∈ W2. Therefore, V = W1+W2.
Moreover, if w ∈ W1 ∩W2 then
w = w + 0 for w ∈ W1, 0 ∈ W2
w = 0 + w for w ∈ W2, 0 ∈ W1
Since, these expressions are unique, w = 0 and consequently W1 ∩W2 = { 0 }.
Conversely, suppose (a) and (b) hold; to show that V = W1 ⊕ W2,
suppose v ∈ V has two representations,
v = w1 + w2 = w1′ + w2′ for w1, w1′ ∈ W1 and w2, w2′ ∈ W2.
Then,
w1 − w1′ = w2′ − w2 ∈ W1 ∩ W2 = { 0 }
Therefore, w1 = w1′ and w2 = w2′.
Thus any vector v ∈ V has a unique representation as a sum of a vector of W1 and a vector of W2.
Therefore, V = W1 ⊕ W2.
Theorem 1.15. If W1,W2, · · · ,Wn are the subspaces of a vector space V then
V = W1 ⊕W2 ⊕ ... ⊕Wn if and only if
(a) V = W1 +W2 + .... +Wn and
(b) Wi ∩ (W1 + · · · + Wi−1 + Wi+1 + · · · + Wn) = { 0 }, for each i = 1, 2, ..., n.
Corollary 1.1. If the vector space is the direct sum of its subspaces W1,W2, · · · ,Wn
i.e. V = W1 ⊕W2 ⊕ ... ⊕Wn then dim.V = dim.W1 + dim.W2 + ... + dim.Wn.
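A classical instance of Theorem 1.14, worked out in exercise 1 of Exercise 1.4 below, is Mn(R) = (symmetric matrices) ⊕ (skew-symmetric matrices). The Python/NumPy sketch below illustrates the decomposition numerically for a random 3 × 3 matrix; it is an illustration of the theorem, not a proof.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))

S = (A + A.T) / 2          # symmetric part, lies in W1
K = (A - A.T) / 2          # skew-symmetric part, lies in W2

print(np.allclose(S, S.T), np.allclose(K, -K.T))  # True True
print(np.allclose(S + K, A))                      # True: A = S + K
# W1 ∩ W2 = {0}: a matrix that is both symmetric and skew-symmetric is zero,
# so the decomposition is unique and Mn(R) = W1 ⊕ W2.
```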
Exercise: 1.4
1. Let V = Mn(R) be real vector space of all n × n matrices. Then show that,
(a) W1 = {A + At / A ∈ V } is a subspace of V.
(b) W2 = {A − At / A ∈ V } is a subspace of V.
(c) V = W1 ⊕W2
2. Let V = P100 be the real vector space of all polynomials of degree less than or equal to 100. Show that,
(a) W1 = {a0 + a2x2 + a4x4 + .... + a100x100 / a0, a2, ...., a100 ∈ R}
is subspace of P100
(b) W2 = {a1x + a3x3 + a5x5 + .... + a99x99 / a1, a3, ...., a99 ∈ R}
is subspace of P100
(c) V = W1 ⊕W2.
3. Let V = P100 be the real vector space of all polynomials of degree less than or equal to 100.
Show that,
(a) W1 = {a0 + a1x + a2x2 + .... + a60x60 / a0, a1, ...., a60 ∈ R} is a subspace of P100
(b) W2 = {a41x41 + a42x42 + a43x43 + .... + a100x100 / a41, a42, ...., a100 ∈ R}
is a subspace of P100
(c) V = W1 + W2 but V ≠ W1 ⊕ W2.
4. If W1 and W2 are subspaces of the vector space V such that V = W1+W2 then
show that, dim.V = dim.W1 + dim.W2 − dim(W1 ∩W2).
5. If W1 and W2 are subspaces of the vector space V such that V = W1 ⊕W2 then
show that, dim.V = dim.W1 + dim.W2
Definition 1.14. If A = [aij] is an m × n matrix, then the vectors
r1 = [a11, a12, · · · , a1n]
r2 = [a21, a22, · · · , a2n]
...
rm = [am1, am2, · · · , amn]
formed from the rows of A are called the row vectors of A, and the vectors
c1 = [a11; a21; · · · ; am1], c2 = [a12; a22; · · · ; am2], · · · , cn = [a1n; a2n; · · · ; amn]
formed from the columns of A are called the column vectors of A.
Definition 1.15. If A is an m × n matrix, then the subspace of Rn spanned by the row vectors of
A is called the row space of A, and the subspace of Rm spanned by the column vectors of A is
called the column space of A.
Definition 1.16. If A is an m × n matrix, then the dimension of the null space of A (the solution space of the homogeneous system AX = 0) is called the nullity of
A, and it is denoted by Null(A) or Nullity(A).
The dimension of the row space of A is called the row-rank of A, and the dimension of the column space of A is
called the column-rank of A.
The dimension of the range space (column space) of A is called the rank of A, and it is denoted by rank(A).
Definition 1.17. Let A and B be m × n matrices with real entries. If B can be obtained by applying
successively a number of elementary row operations on A, then A is said to be row equivalent to
B, written as A ∼R B.
Theorem 1.16. Let A and B be m × n matrices. If A ∼R B then A and B have the same row rank
and column rank.
Theorem 1.17. If A is an m × n row reduced echelon matrix with r non-zero rows,
then the row rank of A is r.
Theorem 1.18. If a matrix is in reduced row echelon form, then the column vectors that contain
the leading 1's form a basis for the column space of the matrix.
Example 1.59.
Find basis for the subspace of R3 spanned by the vectors (1, 2,−1),(4, 1, 3), (5, 3, 2) and (2, 0, 2).
Solution: The subspace spanned by the vectors is the row space of the matrix
A = [1  2  −1; 4  1  3; 5  3  2; 2  0  2]
We shall reduce the matrix A to its row echelon form by elementary row transformations.
After applying successive row transformations, we obtain the following row
echelon form of matrix A.
R = [1  2  −1; 0  1  −1; 0  0  0; 0  0  0]
∴ Basis for row space of A = basis for space generated by given vectors
= {(1, 2,−1), (0, 1,−1)}.
Example 1.60. Find basis for column space of matrix
A = [1  2  3  1  5; 2  1  3  1  4; 1  1  2  1  3]
Solution:
After applying successive elementary row transformations on matrix A, we obtain
the following row reduced echelon form of matrix A,
R = [1  0  1  0  1; 0  1  1  0  2; 0  0  0  1  0]
The leading 1's occur in columns 1, 2 and 4; hence the first, second and fourth column vectors of A,
c1 = [1; 2; 1], c2 = [2; 1; 1], c4 = [1; 1; 1],
form a basis for the column space of A.
Example 1.61. Determine basis for (a) range space and (b) null space of A given
by
A = [1  2  3  1  5; 2  1  3  1  4; 1  1  2  1  3]
Solution: (a) The basis for the range space of A is the basis for the column space of A.
From the above example, the column vectors
c1 = [1; 2; 1], c2 = [2; 1; 1], c4 = [1; 1; 1]
form a basis for the range (column) space of A, so rank(A) = 3.
(b) The null space of A is the solution space of the homogeneous system AX = 0. Using the row reduced echelon form of A,
R = [1  0  1  0  1; 0  1  1  0  2; 0  0  0  1  0]
Therefore, the reduced system of equations is
x1 + x3 + x5 = 0
x2 + x3 + 2x5 = 0
x4 = 0
This system has 3 equations and five unknowns, so we assign arbitrary values to 5 − 3 = 2 of the unknowns (the free variables).
Let x3 = s and x5 = t, where s, t ∈ R.
So, we obtain the solution set
x1 = −s – t , x2 = −s − 2t , x3 = s, x4 = 0, x5 = t
The matrix form of the solution is given as follows
[x1; x2; x3; x4; x5] = [−s − t; −s − 2t; s; 0; t] = s [−1; −1; 1; 0; 0] + t [−1; −2; 0; 0; 1]
Hence, the vectors (−1,−1, 1, 0, 0) and (−1,−2, 0, 0, 1) form the basis for null space of A.
∴ Nullity(A) = 2.
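The row reduction, column space and null space computations of Examples 1.60 and 1.61 can be reproduced with SymPy: rref() returns the reduced form together with the pivot columns, and nullspace() returns a basis of the null space. A sketch using the same matrix A (standard SymPy calls; not part of the notes):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 1, 5],
               [2, 1, 3, 1, 4],
               [1, 1, 2, 1, 3]])

R, pivots = A.rref()
print(R)         # the row reduced echelon form shown in Example 1.60
print(pivots)    # (0, 1, 3): columns 1, 2 and 4 of A form a basis of the column space

print(A.nullspace())
# two basis vectors, spanning the null space: multiples of (-1, -1, 1, 0, 0) and (-1, -2, 0, 0, 1)

print(A.rank(), len(A.nullspace()), A.cols)   # 3 + 2 = 5: rank + nullity = number of columns
```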
Example 1.62. Find a basis and the dimension of the row space of the matrix A given by
A = [1  2  −1  2; 3  0  1  4; 1  −1  1  1]
Solution: We shall reduce the matrix A to its row echelon form.
Applying successive row transformations to A, we get
R = [1  2  −1  2; 0  1  −2/3  1/3; 0  0  0  0]
∴ Basis for the row space of A = {(1, 2, −1, 2), (0, 1, −2/3, 1/3)}, and the dimension of the row space (the row rank of A) is 2.
Corollary 1.2. If Pm×m and Qn×n are two nonsingular matrices, then
rank(PA) = rank(A) and rank(AQ) = rank(A),
where A is an m × n matrix.
(iv) Find basis for subspace W of an Euclidean space R4 spanned by the set