
UNIVERSITY OF ZIMBABWE

MATHEMATICAL METHODS FOR PHYSICS/


METEOROLOGY/FORENSIC PHYSICS 2

HIPH203/HMCS102/HMPH103/HMS204/HFOSCP203/HSST103

Lecturer : Mr. T. Mazikana


Course title : Math Methods 2
Department : Mathematics and Computational Sciences
Course duration : 1 Semester

Linear Equations and Matrix Algebra.

1
Chapter 1

Introduction to Matrices

Definition 1.0.1. A matrix over a field K (elements of K are called numbers or scalars) is a
rectangular array of scalars presented in the following form
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}.

The rows of such a matrix are the m horizontal lists of scalars, that is

(a11 , a12 , · · · , a1n ), (a21 , a22 , · · · , a2n ), · · · , (am1 , am2 , · · · , amn )

and the columns of A are the n vertical lists of scalars,

\begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \\ \vdots \\ a_{m1} \end{pmatrix}, \quad \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \\ \vdots \\ a_{m2} \end{pmatrix}, \quad \cdots, \quad \begin{pmatrix} a_{1n} \\ a_{2n} \\ a_{3n} \\ \vdots \\ a_{mn} \end{pmatrix}.

The element aij , called the ij-entry or ij-element appears in row i and column j. Denote a matrix
simply by A = [aij ].

2
A matrix with m rows and n columns is called an m by n matrix, written m × n. The pair of
numbers m and n are called the size of the matrix. We will use capital letters to denote matrices
and lowercase letters to denote numerical quantities.

Two matrices are equal, written A = B, if they have the same size and if corresponding elements
are equal.

Example 1.0.1. Find x, y, z, t such that


   
\begin{pmatrix} x+y & 2z+t \\ x-y & z-t \end{pmatrix} = \begin{pmatrix} 3 & 7 \\ 1 & 5 \end{pmatrix}.

Solution: By definition of equality of matrices, the four corresponding entries must be equal. Thus

x + y = 3, 2z + t = 7, x − y = 1, z − t = 5.

Solving the above system of equations yields

x = 2, y = 1, z = 4, t = −1.

A matrix whose entries are all zero is called a zero matrix.

Example 1.0.2.
A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad P = \begin{pmatrix} 0 & 0 \end{pmatrix}.

Matrices whose entries are all real numbers are called real matrices and are said to be matrices
over R. Matrices whose entries are all complex numbers are called complex matrices and are
said to be matrices over C. These notes will be mainly concerned with such real and complex
matrices.

1.1 Matrix Addition and Scalar Multiplication

Let A = [aij ] and B = [bij ] be two matrices with the same size, say m × n matrices. The sum of
A and B, written A + B, is the matrix obtained by adding corresponding elements from A and B,

that is
A + B = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2n}+b_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \cdots & a_{mn}+b_{mn} \end{pmatrix}.
The product of a matrix A by a scalar k, written kA, is the matrix obtained by multiplying each
element of A by k, that is
kA = \begin{pmatrix} ka_{11} & ka_{12} & \cdots & ka_{1n} \\ ka_{21} & ka_{22} & \cdots & ka_{2n} \\ \vdots & \vdots & & \vdots \\ ka_{m1} & ka_{m2} & \cdots & ka_{mn} \end{pmatrix}.

Observe that A + B and kA are also m × n matrices.

We also define
−A = (−1)A and A − B = A + (−B).

The matrix −A is called the negative of matrix A and the matrix A − B is called the difference
of matrix A and B.

Example 1.1.1. Let
A = \begin{pmatrix} 1 & -2 & 3 \\ 0 & 4 & 5 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 4 & 6 & 8 \\ 1 & -3 & -7 \end{pmatrix},
then
A + B = \begin{pmatrix} 1+4 & -2+6 & 3+8 \\ 0+1 & 4+(-3) & 5+(-7) \end{pmatrix} = \begin{pmatrix} 5 & 4 & 11 \\ 1 & 1 & -2 \end{pmatrix}
and
3A = \begin{pmatrix} 3(1) & 3(-2) & 3(3) \\ 3(0) & 3(4) & 3(5) \end{pmatrix} = \begin{pmatrix} 3 & -6 & 9 \\ 0 & 12 & 15 \end{pmatrix}.

1.1.1 Properties

Theorem 1.1.1. Consider any matrices A, B and C (with same size) and scalars k and l. Then

(i) (A + B) + C = A + (B + C).

4
(ii) A + 0 = 0 + A = A.

(iii) A + (−A) = (−A) + A = 0.

(iv) A + B = B + A.

(v) k(A + B) = kA + kB.

(vi) (k + l)A = kA + lA.

(vii) (kl)A = k(lA).

(viii) 1 · A = A.

1.2 Matrix Multiplication

The product of matrices A and B is written AB. First consider the product AB of a row matrix
A = [a_k] and a column matrix B = [b_k] with the same number of elements; it is defined to be the
scalar obtained by multiplying corresponding entries and adding, that is
AB = [a_1, a_2, \cdots, a_n] \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \sum_{k=1}^{n} a_k b_k.

The product AB is not defined when A and B have different number of elements.

Example 1.2.1.
[7, -4, 5] \begin{pmatrix} 3 \\ 2 \\ -1 \end{pmatrix} = 7(3) + (-4)(2) + 5(-1) = 21 - 8 - 5 = 8.

We now define matrix multiplication in general.

Definition 1.2.1. Suppose A = [aik ] and B = [bkj ] are matrices such that the number of columns
of A is equal to the number of rows of B, say, A is an m × p matrix and B is a p × n matrix. Then

the product AB is the m × n matrix whose ij-entry is obtained by multiplying the ith row of A by
the jth column of B, that is
\begin{pmatrix} a_{11} & \cdots & a_{1p} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{ip} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mp} \end{pmatrix} \begin{pmatrix} b_{11} & \cdots & b_{1j} & \cdots & b_{1n} \\ \vdots & & \vdots & & \vdots \\ b_{p1} & \cdots & b_{pj} & \cdots & b_{pn} \end{pmatrix} = \begin{pmatrix} c_{11} & \cdots & \cdots & c_{1n} \\ \vdots & & c_{ij} & \vdots \\ c_{m1} & \cdots & \cdots & c_{mn} \end{pmatrix}
where
c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{ip} b_{pj} = \sum_{k=1}^{p} a_{ik} b_{kj}.

The product AB is not defined if A is an m × p matrix and B is a q × n matrix, where p ≠ q.

Example 1.2.2. Find AB where


   
A = \begin{pmatrix} 1 & 3 \\ 2 & -1 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 2 & 0 & -4 \\ 5 & -2 & 6 \end{pmatrix}.

Solution: Since A is 2 × 2 and B is 2 × 3, the product AB is defined and AB is a 2 × 3 matrix.


Hence
AB = \begin{pmatrix} 2+15 & 0-6 & -4+18 \\ 4-5 & 0+2 & -8-6 \end{pmatrix} = \begin{pmatrix} 17 & -6 & 14 \\ -1 & 2 & -14 \end{pmatrix}.
   
Example 1.2.3. Suppose A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} and B = \begin{pmatrix} 5 & 6 \\ 0 & -2 \end{pmatrix}. Then
AB = \begin{pmatrix} 5+0 & 6-4 \\ 15+0 & 18-8 \end{pmatrix} = \begin{pmatrix} 5 & 2 \\ 15 & 10 \end{pmatrix}
and
BA = \begin{pmatrix} 5+18 & 10+24 \\ 0-6 & 0-8 \end{pmatrix} = \begin{pmatrix} 23 & 34 \\ -6 & -8 \end{pmatrix}.

The above example shows that matrix multiplication is not commutative, that is, the products AB
and BA of matrices need not be equal. Matrix multiplication satisfies the following properties

Theorem 1.2.1. Let A, B and C be matrices. Then, whenever the following products and sums are defined,

6
(i) (AB)C = A(BC).
(ii) A(B + C) = AB + AC.
(iii) (B + C)A = BA + CA.
(iv) k(AB) = (kA)B = A(kB), where k is a scalar.
Exercise 1.2.1. Prove that A(B + C) = AB + AC.
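For readers who want to check such products numerically, the following short Python/NumPy sketch (not part of the original notes; it assumes NumPy is available) reproduces Example 1.2.3 and confirms that AB and BA differ, i.e. that matrix multiplication is not commutative.

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [0, -2]])

AB = A @ B   # matrix product AB
BA = B @ A   # matrix product BA

print(AB)                      # [[ 5  2] [15 10]]
print(BA)                      # [[23 34] [-6 -8]]
print(np.array_equal(AB, BA))  # False: AB != BA in general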

1.3 Transpose of a Matrix

Definition 1.3.1. The transpose of a matrix A, written At , is the matrix obtained by writing the
columns of A, in order, as rows.
Example 1.3.1.
\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}^{t} = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix} \quad \text{and} \quad [1, -3, -5]^{t} = \begin{pmatrix} 1 \\ -3 \\ -5 \end{pmatrix}.

In other words, if A = [aij ] is an m × n matrix, then At = [bij ] is the n × m matrix, where bij = aji .
Observe that the transpose of a row vector is a column vector. Similarly, the transpose of a column
vector is a row vector. The basic properties of the transpose operation are
Theorem 1.3.1. Let A and B be matrices and let k be a scalar. Then, whenever the sum and
product are defined, we have

(i) (A + B)t = At + B t .
(ii) (At )t = A.
(iii) (kA)t = kAt .
(iv) (AB)t = B t At .
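Property (iv) is the one most often misremembered, so here is a hedged NumPy check (an illustration only, reusing the matrices of Example 1.2.2) that (AB)^t equals B^t A^t.

import numpy as np

A = np.array([[1, 3], [2, -1]])
B = np.array([[2, 0, -4], [5, -2, 6]])

lhs = (A @ B).T      # (AB)^t
rhs = B.T @ A.T      # B^t A^t
print(np.array_equal(lhs, rhs))   # True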

1.4 Square Matrices

Definition 1.4.1. A square matrix is a matrix with the same number of rows as columns.

7
An n × n square matrix is said to be of order n and is sometimes called an n-square matrix.

Recall that not every two matrices can be added or multiplied. However, if we only consider square
matrices of some given order n, then this inconvenience disappears. Specifically, the operations
of addition, multiplication, scalar multiplication, and transpose can be performed on any n × n
matrices, and the result is again an n × n matrix.
Example 1.4.1. The following are square matrices of order 3.
A = \begin{pmatrix} 1 & 2 & 3 \\ -4 & -4 & -4 \\ 5 & 6 & 7 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 2 & -5 & 1 \\ 0 & 3 & -2 \\ 1 & 2 & -4 \end{pmatrix}.

1.5 Diagonal and Trace

Definition 1.5.1. Let A = [aij ] be an n-square matrix. The diagonal or main diagonal of A
consists of the elements with the same subscripts, that is,

a11 , a22 , a33 , . . . , ann .

Definition 1.5.2. The trace of A, written tr(A), is the sum of the diagonal elements. Namely,

tr(A) = a11 + a22 + a33 + · · · + ann .

1.5.1 Identity Matrix

The n-square identity or unit matrix, denoted by I, is the n-square matrix with 1’s on the diagonal
and 0’s elsewhere.

For any n-square matrix A,


AI = IA = A.
Example 1.5.1. The following are identity matrices of order 3 and 4.
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

8
1.6 Powers of Matrices

Let A be an n-square matrix over a field K. Powers of A are defined as follows

A2 = AA,
A3 = A2 A, · · · , An+1 = An A and A0 = I.
 
Example 1.6.1. Suppose A = \begin{pmatrix} 1 & 2 \\ 3 & -4 \end{pmatrix}. Then
A^2 = \begin{pmatrix} 1 & 2 \\ 3 & -4 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & -4 \end{pmatrix} = \begin{pmatrix} 7 & -6 \\ -9 & 22 \end{pmatrix}
and
A^3 = A^2 A = \begin{pmatrix} 7 & -6 \\ -9 & 22 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & -4 \end{pmatrix} = \begin{pmatrix} -11 & 38 \\ 57 & -106 \end{pmatrix}.
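Powers can be checked with NumPy's matrix_power routine; the sketch below (illustrative, assuming NumPy) uses the matrix of Example 1.6.1.

import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 2], [3, -4]])
print(matrix_power(A, 2))   # [[  7  -6] [ -9  22]]
print(matrix_power(A, 3))   # [[ -11   38] [  57 -106]]
print(matrix_power(A, 0))   # the identity matrix, since A^0 = I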

1.7 Special Types of Square Matrices

This section describes a number of special kinds of square matrices.

1.7.1 Diagonal Matrix

A square matrix D = [dij ] is diagonal if its non diagonal entries are all zero.

Example 1.7.1.
A = \begin{pmatrix} 3 & 0 & 0 \\ 0 & -7 & 0 \\ 0 & 0 & 2 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 4 & 0 \\ 0 & -5 \end{pmatrix}.

1.7.2 Triangular Matrices

A square matrix A = [aij ] is upper triangular if all entries below the main diagonal are equal to
zero.

9
Example 1.7.2.
A = \begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ 0 & b_{22} & b_{23} \\ 0 & 0 & b_{33} \end{pmatrix}.

A lower triangular matrix is a square matrix whose entries above the main diagonal are all zero.

Suppose A is a square matrix with real entries. The relationship between A and its transpose At
yields important kinds of matrices.

1.7.3 Symmetric Matrices

Definition 1.7.1. A matrix A is symmetric if

At = A.
   
Example 1.7.3. Let A = \begin{pmatrix} 2 & -3 & 5 \\ -3 & 6 & 7 \\ 5 & 7 & -8 \end{pmatrix}, then A^t = \begin{pmatrix} 2 & -3 & 5 \\ -3 & 6 & 7 \\ 5 & 7 & -8 \end{pmatrix}. Hence A^t = A, thus A is symmetric.

1.7.4 Skew-Symmetric Matrices

Definition 1.7.2. A matrix A is skew-symmetric if

At = −A.

The diagonal elements of such a matrix must be zero.

Example 1.7.4.
B = \begin{pmatrix} 0 & 3 & -4 \\ -3 & 0 & 5 \\ 4 & -5 & 0 \end{pmatrix}.

10
1.7.5 Orthogonal Matrices

Definition 1.7.3. A real matrix A is orthogonal if


At = A−1 ,
that is
AAt = At A = I.

A must necessarily be square and invertible.

1.8 Complex Matrices

Let A be a complex matrix. The conjugate of a complex matrix A, written A, is the matrix
obtained from A by taking the conjugate of each entry of A.

A∗ is used for the conjugate transpose of A, that is


A∗ = (A)t = (At ).

If A is real then A∗ = At .
 
Example 1.8.1. Let A = \begin{pmatrix} 2+8i & 5-3i & 4-7i \\ 6i & 1-4i & 3+2i \end{pmatrix}, then A^* = \begin{pmatrix} 2-8i & -6i \\ 5+3i & 1+4i \\ 4+7i & 3-2i \end{pmatrix}.

Consider a complex matrix A. The relationship between A and its conjugate transpose A∗ yields
important kinds of complex matrices (which are analogous to the kinds of real matrices described
above).

1.8.1 Hermitian Matrices

Definition 1.8.1. A complex matrix A is said to be Hermitian if


A∗ = A.

11
Skew-Hermitian Matrices

Definition 1.8.2. A complex matrix A is said to be skew-Hermitian if


A∗ = −A.

1.8.2 Unitary Matrices

Definition 1.8.3. A complex matrix A is unitary if


A^* A = A A^* = I, \quad \text{i.e.,} \quad A^* = A^{-1}.
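The Hermitian and unitary conditions are easy to test numerically. The sketch below is an illustration only (the matrices H and U are arbitrarily chosen for this purpose, not taken from the notes); it forms the conjugate transpose with .conj().T and checks H* = H and U*U = I.

import numpy as np

# A Hermitian example: equal to its own conjugate transpose
H = np.array([[3, 2 + 1j], [2 - 1j, 0]])
print(np.allclose(H.conj().T, H))               # True: H is Hermitian

# A unitary example: conjugate transpose equals inverse
U = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U*U = I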

1.9 Tutorial Questions


   
1. Given that A = \begin{pmatrix} 1 & -2 & 3 \\ 4 & 5 & -6 \end{pmatrix} and B = \begin{pmatrix} 3 & 0 & 2 \\ -7 & 1 & 8 \end{pmatrix}. Find (i) A + B (ii) 2A − 3B.

2. Find x, y, z, t where 3 \begin{pmatrix} x & y \\ z & t \end{pmatrix} = \begin{pmatrix} x & 6 \\ -1 & 2t \end{pmatrix} + \begin{pmatrix} 4 & x+y \\ z+t & 3 \end{pmatrix}.

3. Calculate [8, -4, 5] \begin{pmatrix} 3 \\ 2 \\ -1 \end{pmatrix} and [3, 8, -2, 4] \begin{pmatrix} 5 \\ -1 \\ 6 \end{pmatrix}.

4. Let A = \begin{pmatrix} 1 & 3 \\ 2 & -1 \end{pmatrix} and B = \begin{pmatrix} 2 & 0 & -4 \\ 3 & -2 & 6 \end{pmatrix}. Find AB and BA.

5. Find the transpose of each of the following matrices.
(i) A = \begin{pmatrix} 1 & -2 & 3 \\ 7 & 8 & -9 \end{pmatrix} (ii) B = [1, -3, 5, -7] (iii) C = \begin{pmatrix} 2 \\ -4 \\ 6 \end{pmatrix} (iv) D = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{pmatrix}.

6. Let A = \begin{pmatrix} 1 & 2 & 0 \\ 3 & -1 & 4 \end{pmatrix}. Calculate A^t A. What type of matrix is A^t A? Do A^t and A commute? Verify.

7. Find the diagonal and trace of each of the following matrices.
(i) A = \begin{pmatrix} 1 & 3 & 6 \\ 2 & -5 & 8 \\ 4 & -2 & 9 \end{pmatrix} (ii) B = \begin{pmatrix} 2 & 4 & 8 \\ 3 & -7 & 9 \\ -5 & 0 & 2 \end{pmatrix} (iii) C = \begin{pmatrix} 1 & 2 & -3 \\ 4 & -5 & 6 \end{pmatrix}.

8. Suppose B = \begin{pmatrix} 4 & x+2 \\ 2x-3 & x+1 \end{pmatrix} is symmetric. Find x and B.

9. Find real numbers x, y, z such that A is Hermitian, where A = \begin{pmatrix} 3 & x+2i & yi \\ 3-2i & 0 & 1+zi \\ yi & 1-xi & -1 \end{pmatrix}.

13
Chapter 2

Inversion of Matrices

Here we are dealing with square matrices.

In real arithmetic, every nonzero number a has a reciprocal a^{-1} = \frac{1}{a} with the property

a · a−1 = a−1 · a = 1.

The number a−1 is sometimes called the multiplicative inverse of a. Our next objective is to
develop an analog of this result for matrix arithmetic.

Proposition 2.0.1. For every n × n matrix A,

AI = IA = A.

This raises the following question: Given an n × n matrix A, is it possible to find
another n × n matrix B such that AB = BA = I?

Definition 2.0.1. An n × n matrix A is said to be invertible, if there exists an n × n matrix B,


such that
AB = BA = I.
In this case, we say that B is the inverse of A and write

B = A−1 .

If no such matrix B can be found, then A is said to be singular.

14
Proposition 2.0.2. Suppose that A is an invertible n × n matrix. Then its inverse A−1 is unique.

Proof. Suppose that B satisfies the requirements for being the inverse of A. Then AB = I = BA.
It follows that
A−1 = A−1 I = A−1 (AB) = (A−1 A)B = IB = B.
Hence the inverse A−1 is unique.

Exercise 2.0.1. Suppose that A and B are invertible n × n matrices. Prove that

(AB)−1 = B −1 A−1 .

Exercise 2.0.2. Suppose that A is an invertible n × n matrix. Prove that

(A−1 )−1 = A.

Theorem 2.0.3. The matrix
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
is invertible if and only if ad − bc ≠ 0, in which case the inverse is given by the formula
A^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
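The 2 × 2 formula translates directly into code. The following Python sketch is an illustration only (the helper name and test values are chosen here, not taken from the notes); it applies the cofactor formula and rejects the singular case ad − bc = 0.

def inverse_2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]] using the 2x2 cofactor formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inverse_2x2(2, 3, 2, 2))   # [[-1.0, 1.5], [1.0, -1.0]]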

2.1 Determinants

Each n-square matrix A = [aij ] is assigned a special scalar called the determinant of A, denoted
by det A or |A| or
\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}.

We emphasize that an n×n array of scalars enclosed by straight lines, called a determinant of order
n, is not a matrix but denotes the determinant of the enclosed array of scalars (i.e., the enclosed
matrix). We shall see that the determinant is an indispensable tool in investigating and obtaining
properties of square matrices.

15
2.1.1 Determinants of Order 1 and 2

Determinants of order 1 and 2 are defined as


|a_{11}| = a_{11} \quad \text{and} \quad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}.

Example 2.1.1. (a) det(27) = 27 and (b) \begin{vmatrix} 5 & 3 \\ 4 & 6 \end{vmatrix} = 5(6) - 3(4) = 30 - 12 = 18.

2.2 Application to Linear Equations

Consider two linear equations in two unknowns, say


a1 x + b1 y = c1
a2 x + b2 y = c2 .
Let D = a1 b2 − a2 b1 , the determinant of the matrix of coefficients. Then the system has a unique
solution if and only if D 6= 0. In such a case, the unique solution may be expressed completely in
terms of determinants as follows,
x = \frac{N_x}{D} = \frac{\begin{vmatrix} c_1 & b_1 \\ c_2 & b_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}} = \frac{b_2 c_1 - b_1 c_2}{a_1 b_2 - a_2 b_1}, \qquad y = \frac{N_y}{D} = \frac{\begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}} = \frac{a_1 c_2 - a_2 c_1}{a_1 b_2 - a_2 b_1}.
Here D appears in the denominator of both quotients. The numerators Nx and Ny of the quotients
for x and y, respectively, can be obtained by substituting the column of constant terms in place of
the column of coefficients of the given unknown in the matrix of coefficients. On the other hand, if
D = 0, the system may have no solution or more than one solution.
Example 2.2.1. Solve by determinants the system
4x − 3y = 15
2x + 5y = 1.

Solution: First find the determinant D of the matrix of coefficients,
D = \begin{vmatrix} 4 & -3 \\ 2 & 5 \end{vmatrix} = 4(5) - (-3)(2) = 20 + 6 = 26.

Because D ≠ 0, the system has a unique solution. Therefore,
N_x = \begin{vmatrix} 15 & -3 \\ 1 & 5 \end{vmatrix} = 75 + 3 = 78, \qquad N_y = \begin{vmatrix} 4 & 15 \\ 2 & 1 \end{vmatrix} = 4 - 30 = -26.

Then the unique solution of the system is
x = \frac{N_x}{D} = \frac{78}{26} = 3, \qquad y = \frac{N_y}{D} = \frac{-26}{26} = -1.
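The same two-unknown procedure is easy to script. This is a hedged Python sketch (the function name and layout are choices made here for illustration); it reproduces Example 2.2.1 by Cramer's rule.

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by determinants."""
    D = a1 * b2 - a2 * b1          # determinant of the coefficient matrix
    if D == 0:
        raise ValueError("D = 0: no unique solution")
    Nx = c1 * b2 - c2 * b1         # constants replace the x-coefficients
    Ny = a1 * c2 - a2 * c1         # constants replace the y-coefficients
    return Nx / D, Ny / D

print(cramer_2x2(4, -3, 15, 2, 5, 1))   # (3.0, -1.0)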

2.3 Determinants of Order 3

Consider an arbitrary 3 × 3 matrix A = [aij ]. The determinant of A is defined as follows

\det A = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{12} a_{21} a_{33} - a_{11} a_{23} a_{32}.

A procedure for evaluating the determinant of a 3 × 3 matrix is called Sarrus' Rule: copy the first two columns to the right of the matrix, add the three products along the diagonals running down to the right, and subtract the three products along the diagonals running up to the right.

Example 2.3.1. Let A = \begin{pmatrix} 2 & 1 & 1 \\ 0 & 5 & -2 \\ 1 & -3 & 4 \end{pmatrix} and B = \begin{pmatrix} 3 & 2 & 1 \\ -4 & 5 & -1 \\ 2 & -3 & 4 \end{pmatrix}. Find det A and det B.

 
\det A = \begin{vmatrix} 2 & 1 & 1 \\ 0 & 5 & -2 \\ 1 & -3 & 4 \end{vmatrix} = 2(5)(4) + 1(-2)(1) + 1(0)(-3) - (1)(5)(1) - (1)(0)(4) - (2)(-2)(-3) = 40 - 2 + 0 - 5 - 0 - 12 = 21.

\det B = \begin{vmatrix} 3 & 2 & 1 \\ -4 & 5 & -1 \\ 2 & -3 & 4 \end{vmatrix} = 3(5)(4) + 2(-1)(2) + 1(-4)(-3) - (1)(5)(2) - (2)(-4)(4) - (3)(-1)(-3) = 60 - 4 + 12 - 10 + 32 - 9 = 81.

Sarrus’ rule applies for evaluating the determinant of 3 × 3 matrices only.
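Hand computations such as those above can be checked with NumPy's determinant routine, which works for any order (a sketch assuming NumPy; the results are floating point, so they are rounded here).

import numpy as np

A = np.array([[2, 1, 1], [0, 5, -2], [1, -3, 4]])
B = np.array([[3, 2, 1], [-4, 5, -1], [2, -3, 4]])

print(round(np.linalg.det(A)))   # 21
print(round(np.linalg.det(B)))   # 81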

2.4 Evaluation of Determinants of Any Order

2.4.1 Minors and Co-factors

Definition 2.4.1. If A = [aij] is an n × n matrix, then the minor of the element aij, denoted by
Mij, is defined as the determinant of the (n − 1) × (n − 1) sub-matrix which is obtained by

18
deleting all the entries in the ith row and the jth column.
Example 2.4.1. For the matrix
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},
the minor of a_{11} is
M_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix},
the minor of a_{12} is
M_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix},
and the minor of a_{13} is
M_{13} = \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.
Definition 2.4.2. The co-factor of an element aij denoted by Aij is defined as the product of
(−1)i+j and the minor of aij , that is
Aij = (−1)i+j Mij .

The co-factor of an element is merely the signed minor of that element; with the definition above, both Mij and Aij are scalars.
Example 2.4.2. If A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, then the co-factor of a_{11} is
A_{11} = (-1)^{1+1} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = + \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix},
and the co-factor of a_{12} is
A_{12} = (-1)^{1+2} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = - \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}.

2.5 Laplace Expansion of the Determinant

To compute the determinant of an n×n matrix we make use of the concept of co-factors and minors
to reduce the matrix to lower ones whose determinants we already know how to calculate.

The determinant of a square matrix A = [aij ] is equal to the sum of the products obtained by
multiplying the elements of any row (column) by their respective co-factors.
|A| = a_{i1} A_{i1} + a_{i2} A_{i2} + \cdots + a_{in} A_{in} = \sum_{j=1}^{n} a_{ij} A_{ij}.

19
This expansion can be carried out along any row of the matrix in question and the value of the
determinant is the same.
 
Example 2.5.1. Given that A = \begin{pmatrix} 3 & -1 & 5 \\ 0 & 4 & -3 \\ 2 & 1 & 2 \end{pmatrix}. Find |A|.

Solution: Expanding along the first row, gives


\det A = 3(-1)^{1+1} \begin{vmatrix} 4 & -3 \\ 1 & 2 \end{vmatrix} + (-1)(-1)^{1+2} \begin{vmatrix} 0 & -3 \\ 2 & 2 \end{vmatrix} + 5(-1)^{1+3} \begin{vmatrix} 0 & 4 \\ 2 & 1 \end{vmatrix}
= 3 \begin{vmatrix} 4 & -3 \\ 1 & 2 \end{vmatrix} + \begin{vmatrix} 0 & -3 \\ 2 & 2 \end{vmatrix} + 5 \begin{vmatrix} 0 & 4 \\ 2 & 1 \end{vmatrix}
= 3(8 + 3) + (0 + 6) + 5(0 - 8)
= 33 + 6 - 40
= -1.

Expanding along the second row, gives


\det A = 0(-1)^{2+1} \begin{vmatrix} -1 & 5 \\ 1 & 2 \end{vmatrix} + 4(-1)^{2+2} \begin{vmatrix} 3 & 5 \\ 2 & 2 \end{vmatrix} + (-3)(-1)^{2+3} \begin{vmatrix} 3 & -1 \\ 2 & 1 \end{vmatrix}
= 0 - 16 + 15
= -1.

Note that expanding by a row or column that contains zeros significantly reduces the number
of cumbersome calculations that need to be done. It is sensible to evaluate the determinant by
co-factor expansion along a row or column with the greatest number of zeros.
 
Example 2.5.2. Given that A = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 3 & 5 & 0 & -1 \\ 0 & 3 & -2 & 5 \\ 1 & 0 & 0 & 2 \end{pmatrix}. Find det A.

Solution: Expanding along the first row, and then along the third row of the resulting minor,
\det A = -\begin{vmatrix} 3 & 5 & 0 \\ 0 & 3 & -2 \\ 1 & 0 & 0 \end{vmatrix} = -\begin{vmatrix} 5 & 0 \\ 3 & -2 \end{vmatrix} = -(-10) = 10.

20
Note : the determinant of the identity matrix is 1. The determinant of a diagonal matrix D of
order n × n is given by the product of the elements on its main diagonal. The determinant of a
triangular matrix of order n × n is given by the product of the elements on its main diagonal.
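The cofactor (Laplace) expansion described above can be written as a short recursive routine. The sketch below is illustrative Python only (not optimized, expanding always along the first row); it reproduces det A = −1 for the matrix of Example 2.5.1.

def det(M):
    """Determinant of a square list-of-lists matrix by cofactor expansion along row 1."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)   # (-1)**j is the cofactor sign for row 1
    return total

print(det([[3, -1, 5], [0, 4, -3], [2, 1, 2]]))   # -1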

2.6 Properties

1. For any two n × n matrices A and B,

|AB| = |A||B|.

2. In general, for an n × n matrix A,

det A = det At .

3. If A and B are n × n matrices, then

|AB| = |BA|.

4. If two rows (columns) of an n × n matrix A are interchanged, then the determinant of the
resulting matrix is − det A.

5. If the elements of any rows (columns) of an n × n matrix A are multiplied by the same scalar
k, then the value of the determinant of the new matrix is k times the determinant of A.

6. If the elements of any row (column) of A are all zeros, then the determinant of A is zero.

7. If an n × n matrix A is multiplied by a scalar k, then the determinant of kA is k n det A, that


is
det kA = k n det A.

8. If A is an n × n matrix, with any two of its rows (columns) equal, then the determinant of A
is zero.

9. If A is an n × n matrix, in which one row (column) is proportional to another, then the


determinant of the matrix is zero.

21
2.7 Adjoint

Definition 2.7.1. Let A = [aij ] be an n × n matrix and let Aij denote the co-factors of aij . The
adjoint of A, denoted by adj A is the transpose of the matrix of co-factors of A, that is
adj A = [Aij ]t .
 
Example 2.7.1. Let A = \begin{pmatrix} 2 & 3 & -4 \\ 0 & -4 & 2 \\ 1 & -1 & 5 \end{pmatrix}. The co-factors of the nine elements of A are as follows:

A_{11} = +\begin{vmatrix} -4 & 2 \\ -1 & 5 \end{vmatrix} = -18, \quad A_{12} = -\begin{vmatrix} 0 & 2 \\ 1 & 5 \end{vmatrix} = 2, \quad A_{13} = +\begin{vmatrix} 0 & -4 \\ 1 & -1 \end{vmatrix} = 4,
A_{21} = -\begin{vmatrix} 3 & -4 \\ -1 & 5 \end{vmatrix} = -11, \quad A_{22} = +\begin{vmatrix} 2 & -4 \\ 1 & 5 \end{vmatrix} = 14, \quad A_{23} = -\begin{vmatrix} 2 & 3 \\ 1 & -1 \end{vmatrix} = 5,
A_{31} = +\begin{vmatrix} 3 & -4 \\ -4 & 2 \end{vmatrix} = -10, \quad A_{32} = -\begin{vmatrix} 2 & -4 \\ 0 & 2 \end{vmatrix} = -4, \quad A_{33} = +\begin{vmatrix} 2 & 3 \\ 0 & -4 \end{vmatrix} = -8.

[A_{ij}] = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} = \begin{pmatrix} -18 & 2 & 4 \\ -11 & 14 & 5 \\ -10 & -4 & -8 \end{pmatrix}.

The transpose of the above matrix of co-factors yields the adjoint of A, that is
adj A = \begin{pmatrix} -18 & -11 & -10 \\ 2 & 14 & -4 \\ 4 & 5 & -8 \end{pmatrix}.
Theorem 2.7.1. Let A be any square matrix. Then
A(adj A) = (adj A)A = |A| I,
where I is the identity matrix. Thus, if |A| ≠ 0,
A^{-1} = \frac{1}{|A|} \, adj A.

Example 2.7.2. Let A be the matrix above. We have
det A = -40 + 6 + 0 - 16 + 4 + 0 = -46.
Thus A does have an inverse and
A^{-1} = \frac{1}{|A|} \, adj A = -\frac{1}{46} \begin{pmatrix} -18 & -11 & -10 \\ 2 & 14 & -4 \\ 4 & 5 & -8 \end{pmatrix}.
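A hedged NumPy check of Theorem 2.7.1 for the matrix of Example 2.7.1 (an illustration only): it rebuilds adj A from cofactors and confirms A(adj A) = |A| I and A^{-1} = adj A / |A|.

import numpy as np

A = np.array([[2, 3, -4], [0, -4, 2], [1, -1, 5]], dtype=float)
n = A.shape[0]

cof = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # delete row i and column j
        cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

adjA = cof.T                          # adjoint = transpose of the cofactor matrix
detA = np.linalg.det(A)               # -46
print(np.allclose(A @ adjA, detA * np.eye(n)))       # True
print(np.allclose(np.linalg.inv(A), adjA / detA))    # True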

22
2.8 Properties of Inverses

1. If an n × n matrix A is invertible, then det A ≠ 0.


Definition 2.8.1. A matrix which has an inverse is said to be invertible. A matrix whose
determinant is non-zero is said to be non-singular and if a matrix has determinant equal to
zero it is called a singular matrix.

2. If an n × n matrix A is invertible, then


(A−1 )−1 = A.

3. If an n × n matrix A is invertible, then At is also invertible, and


(At )−1 = (A−1 )t .

Proof. We can establish the invertibility and obtain the formula at the same time by showing
that
At (A−1 )t = (A−1 )t At = I.
But we know that I t = I, we have
At (A−1 )t = (A−1 A)t = I t = I
(A−1 )t At = (AA−1 )t = I t = I
which completes the proof.

4. If A is an n × n invertible matrix, then


1
det A−1 = .
det A

2.9 Tutorial Questions


 
1. Find the co-factor of the 7 in the matrix \begin{pmatrix} 2 & 1 & -3 & 4 \\ 5 & -4 & 7 & -2 \\ 4 & 0 & 6 & -3 \\ 3 & -2 & 5 & 2 \end{pmatrix}.

2. For the matrix \begin{pmatrix} 1 & 2 & -2 & 3 \\ 3 & -1 & 5 & 0 \\ 4 & 0 & 0 & 1 \\ 1 & 7 & 2 & -3 \end{pmatrix}, find the co-factors of (i) the entry 4, (ii) the entry 5 and
(iii) the entry 7.

3. Evaluate the determinants of
(i) \begin{pmatrix} 2 & 5 & -3 & -2 \\ -2 & -3 & 2 & -5 \\ 1 & 3 & -2 & 2 \\ -1 & -6 & 4 & 3 \end{pmatrix} (ii) \begin{pmatrix} 3 & -2 & -5 & 4 \\ -5 & 2 & 8 & -5 \\ -2 & 4 & 7 & -3 \\ 2 & -3 & -5 & 8 \end{pmatrix} (iii) \begin{pmatrix} 2 & 0 & -1 \\ 3 & 0 & 2 \\ 4 & -3 & 7 \end{pmatrix} (iv) \begin{pmatrix} t+3 & -1 & 1 \\ 5 & t-3 & 1 \\ 6 & -6 & t+4 \end{pmatrix}.

4. Evaluate the determinants \begin{vmatrix} 0 & 0 & 0 & a_1 \\ 0 & 0 & b_1 & a_2 \\ 0 & c_1 & b_2 & a_3 \\ d_1 & c_2 & b_3 & a_4 \end{vmatrix} and \begin{vmatrix} b_2 & b_3 & b_4 & b_5 \\ 0 & c_3 & 0 & 0 \\ 0 & d_3 & d_4 & d_5 \\ 0 & e_3 & 0 & e_5 \end{vmatrix}.

5. Show that \begin{vmatrix} a+b & c & c \\ a & b+c & a \\ b & b & c+a \end{vmatrix} = 4abc.

6. A and B are defined as A = \begin{pmatrix} 1 & -2 \\ -2 & 3 \end{pmatrix} and B = \begin{pmatrix} -2 & 1 \\ 1 & 1 \end{pmatrix}.

(i) Compute the determinants |A|, |B|, |AB| and verify that |AB| = |A||B|.
(ii) Compute A^{-1}, B^{-1} and verify that |A^{-1}| = \frac{1}{|A|} and |B^{-1}| = \frac{1}{|B|}.

7. Consider the matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 1 & 5 & 7 \end{pmatrix}. Compute |A| and adj A. Verify that A(adj A) = |A| I
and find A^{-1}.

8. Consider an arbitrary 2 × 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. Find adj A and show that adj(adj A) = A.

9. Let A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}. Find adj A and A^{-1}.

10. Suppose A is orthogonal, that is, A^t A = I. Show that |A| = ±1.

11. Let A = \begin{pmatrix} 2 & 0 & 10 \\ 0 & 7+x & -3 \\ 0 & 4 & x \end{pmatrix}. Find all values of x such that A is invertible.

12. Let A and B be n × n matrices, where n is an integer greater than 1. Is it true that
det(A + B) = det(A) + det(B)?
If so, then give a proof. If not, then give a counterexample.

13. Find the determinant of \begin{pmatrix} 100 & 101 & 102 \\ 101 & 102 & 103 \\ 102 & 103 & 104 \end{pmatrix}.
14. Use determinants to solve the system

3y + 2x = z + 1
3x + 2z = 8 − 5y
3z − 1 = x − 2y.

15. Solve the following systems by determinants


(a)

3x + 5y = 8
4x − 2y = 1,

(b)

ax − 2by = c
3ax − 5by = 2c, (ab 6= 0).

25
Chapter 3

Application of Matrices

3.1 Elementary Row Operations

Let ri denote row i of matrix A. There are 3 elementary row operations, namely

1. ri ↔ rj meaning interchanging row i with row j.


2. ri → kri, k ≠ 0, meaning multiply ri by a scalar k.
3. ri → kri + rj meaning multiply row i by k and add row j.
 
Example 3.1.1. Consider the matrix A = \begin{pmatrix} 1 & 0 & 2 \\ 4 & 1 & 3 \\ 3 & 2 & 6 \end{pmatrix}. Then
r_1 \leftrightarrow r_2 gives \begin{pmatrix} 4 & 1 & 3 \\ 1 & 0 & 2 \\ 3 & 2 & 6 \end{pmatrix},
r_2 \to 2 r_2 gives \begin{pmatrix} 1 & 0 & 2 \\ 8 & 2 & 6 \\ 3 & 2 & 6 \end{pmatrix},
r_1 \to 2 r_1 + r_3 gives \begin{pmatrix} 5 & 2 & 10 \\ 4 & 1 & 3 \\ 3 & 2 & 6 \end{pmatrix}.

26
3.2 Inverses Using Row Operations

We can use row operations to find the inverse of A by writing a matrix (A|In ), then use row
operations to get (In |A−1 ).
 
Example 3.2.1. Consider A = \begin{pmatrix} 2 & 3 \\ 2 & 2 \end{pmatrix}.

Solution: We write
\left( \begin{array}{cc|cc} 2 & 3 & 1 & 0 \\ 2 & 2 & 0 & 1 \end{array} \right).
Then performing row operations we have
r_2 \to r_2 - r_1: \left( \begin{array}{cc|cc} 2 & 3 & 1 & 0 \\ 0 & -1 & -1 & 1 \end{array} \right)
r_1 \to r_1 + 3 r_2: \left( \begin{array}{cc|cc} 2 & 0 & -2 & 3 \\ 0 & -1 & -1 & 1 \end{array} \right)
r_1 \to \frac{1}{2} r_1: \left( \begin{array}{cc|cc} 1 & 0 & -1 & 3/2 \\ 0 & -1 & -1 & 1 \end{array} \right)
r_2 \to -r_2: \left( \begin{array}{cc|cc} 1 & 0 & -1 & 3/2 \\ 0 & 1 & 1 & -1 \end{array} \right).

Therefore A^{-1} = \begin{pmatrix} -1 & 3/2 \\ 1 & -1 \end{pmatrix}. Checking can be done by verifying that A^{-1} A = I:
\begin{pmatrix} -1 & 3/2 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 2 & 3 \\ 2 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I.

3.3 Linear Equations

An equation of the kind


a1 x1 + a2 x2 + · · · + an xn = b
is called a linear equation in the n variables x1 , x2 , · · · , xn and a1 , a2 , · · · , an and b are real
constants.

A solution of a linear equation a1 x1 + a2 x2 + · · · + an xn = b is a sequence of n numbers s1 , s2 , · · · , sn


such that the equation is satisfied when we substitute x1 = s1 , x2 = s2 , · · · , xn = sn . The set of all
solutions of the equation is called its solution set.

27
A finite set of linear equations in the variables x1 , x2 , · · · , xn is called a system of linear equations
or a linear system.

A sequence of numbers s1 , s2 , · · · , sn is called a solution of the system if x1 = s1 , x2 = s2 , · · · ,


xn = sn is a solution of every equation in the system. Not all systems of linear equations have
solutions. A system of equations that has no solutions is said to be inconsistent. If there is at
least one solution, it is called consistent.

Every system of linear equations has either no solutions, exactly one solution or infinitely
many solutions.

An arbitrary system of m linear equations in n unknowns will be written as


a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\vdots
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m,
where x1 , x2 , · · · , xn are the unknowns. We can write a rectangular array of numbers, as
 
\left( \begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array} \right).
This is called the augmented matrix for the system.
Example 3.3.1. The augmented matrix for the system of equations
x1 + x2 + 2x3 = 9
2x1 + 4x2 − 3x3 = 1
3x1 + 6x2 − 5x3 = 0
is
\left( \begin{array}{ccc|c} 1 & 1 & 2 & 9 \\ 2 & 4 & -3 & 1 \\ 3 & 6 & -5 & 0 \end{array} \right).
Example 3.3.2. Find the solution set of
2x1 − x2 + x3 = 4
−3x1 + 2x2 − 4x3 = 1
x1 − 5x3 = 0

28
Solution: The augmented matrix for the linear system is
\left( \begin{array}{ccc|c} 2 & -1 & 1 & 4 \\ -3 & 2 & -4 & 1 \\ 1 & 0 & -5 & 0 \end{array} \right).

Doing row operations, we have

r_3 \leftrightarrow r_1: \left( \begin{array}{ccc|c} 1 & 0 & -5 & 0 \\ -3 & 2 & -4 & 1 \\ 2 & -1 & 1 & 4 \end{array} \right)
r_2 \to r_2 + 3r_1, \; r_3 \to r_3 - 2r_1: \left( \begin{array}{ccc|c} 1 & 0 & -5 & 0 \\ 0 & 2 & -19 & 1 \\ 0 & -1 & 11 & 4 \end{array} \right)
r_2 \leftrightarrow r_3: \left( \begin{array}{ccc|c} 1 & 0 & -5 & 0 \\ 0 & -1 & 11 & 4 \\ 0 & 2 & -19 & 1 \end{array} \right)
r_2 \to (-1) r_2: \left( \begin{array}{ccc|c} 1 & 0 & -5 & 0 \\ 0 & 1 & -11 & -4 \\ 0 & 2 & -19 & 1 \end{array} \right)
r_3 \to r_3 - 2 r_2: \left( \begin{array}{ccc|c} 1 & 0 & -5 & 0 \\ 0 & 1 & -11 & -4 \\ 0 & 0 & 3 & 9 \end{array} \right).

The corresponding system of linear equations derived from the augmented matrix is

x1 − 5x3 = 0
x2 − 11x3 = −4
3x3 = 9.

Now using the method of back substitution, we find the values of the unknown as follows

x3 = 3
x2 = −4 + 33 = 29
x1 = 0 + 15 = 15.

The solution set is


(x1 , x2 , x3 ) = (15, 29, 3).
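For larger systems the same elimination is usually delegated to a library routine. This is a hedged NumPy sketch (illustration only) that solves the system of Example 3.3.2 directly and returns the same solution.

import numpy as np

A = np.array([[2, -1, 1], [-3, 2, -4], [1, 0, -5]], dtype=float)   # coefficient matrix
b = np.array([4, 1, 0], dtype=float)                               # right-hand side

x = np.linalg.solve(A, b)
print(x)   # [15. 29.  3.]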

29
3.3.1 Applications of Linear Equations

Linear equations arise in many applications, for example, quadratic interpolation, temperature
distribution, global positioning system (gps), e.t.c.

3.4 Row Echelon Form

To be in this form, a matrix must have the following properties

1. If a row does not consist entirely of zeros, then the first non-zero number in the row is a 1
(a leading 1).

2. If there are any rows that consist entirely of zeros, then they are grouped together at the
bottom of the matrix.

3. In any two successive rows that do not consist entirely of zeros, the leading 1 in the lower
row occurs further to the right than the leading 1 in the higher row.

4. Each column that contains a leading 1 has zeros elsewhere in that column.

A matrix having properties 1, 2 and 3 is said to be in row-echelon form.

A matrix in reduced row-echelon form must have zeros above and below each leading 1.

Example 3.4.1. These are in row echelon form


 
\begin{pmatrix} 1 & -3 & 1 & 0 & -8 \\ 0 & 1 & -9 & 6 & 0 \\ 0 & 0 & 0 & 1 & 7 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 1 & -3 & 4 \\ 0 & 0 & 1 & 9 \\ 0 & 0 & 0 & 0 \end{pmatrix}.

Example 3.4.2. These are in reduced row-echelon form

\begin{pmatrix} 1 & 0 & -4 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.

30
The procedure for reducing a matrix to a reduced row-echelon form is called Gauss-Jordan Elim-
ination and the procedure which produces a row-echelon form is called the Gauss Elimination.
The Gauss Elimination method requires fewer elementary row operations than the Gauss-Jordan
method.

Two matrices are row equivalent if one can be obtained from the other by a sequence of elementary
row operations. The matrix in reduced row echelon that is row equivalent to A is denoted by
rref (A). The rank of a matrix A is the number of non-zero rows in rref (A).

Example 3.4.3. For each of the following matrices, find a row-equivalent matrix which is in reduced
row echelon form. Then determine the rank of each matrix.
(a) A = \begin{pmatrix} 1 & 3 \\ -2 & 2 \end{pmatrix} (b) C = \begin{pmatrix} 2 & -2 & 4 \\ 4 & 1 & -2 \\ 6 & -1 & 2 \end{pmatrix}.

Solution : (a) The matrix A has rank 2, which can be seen by computing
\begin{pmatrix} 1 & 3 \\ -2 & 2 \end{pmatrix} \xrightarrow{r_2 + 2r_1} \begin{pmatrix} 1 & 3 \\ 0 & 8 \end{pmatrix} \xrightarrow{\frac{1}{8} r_2} \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix} \xrightarrow{r_1 - 3r_2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Example 3.4.4. Solve the following system of linear equations

−3x2 + 4x3 = −2
x1 + 5x2 + 2x3 = 9
x1 + x2 − 6x3 = −7.

Solution: The augmented matrix is
\left( \begin{array}{ccc|c} 0 & -3 & 4 & -2 \\ 1 & 5 & 2 & 9 \\ 1 & 1 & -6 & -7 \end{array} \right).

Doing row operations yields
r_1 \leftrightarrow r_2, \; r_2 \leftrightarrow r_3, \; r_2 \to r_2 - r_1: \left( \begin{array}{ccc|c} 1 & 5 & 2 & 9 \\ 0 & -4 & -8 & -16 \\ 0 & -3 & 4 & -2 \end{array} \right)
r_2 \to -\frac{1}{4} r_2, \; r_3 \to r_3 + 3 r_2: \left( \begin{array}{ccc|c} 1 & 5 & 2 & 9 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & 10 & 10 \end{array} \right)
r_3 \to \frac{1}{10} r_3: \left( \begin{array}{ccc|c} 1 & 5 & 2 & 9 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & 1 & 1 \end{array} \right),
which is now in echelon form and is equivalent to

x1 + 5x2 + 2x3 = 9
x2 + 2x3 = 4
x3 = 1.

This means x3 = 1, x2 = 4 − 2x3 = 2 and finally x1 = 9 − 5x2 − 2x3 = −3. Hence

(x1 , x2 , x3 ) = (−3, 2, 1).

Alternatively we could continue with the elementary row operations as follows


 
r_1 \to r_1 - 5 r_2, \; r_2 \to r_2 - 2 r_3: \left( \begin{array}{ccc|c} 1 & 0 & -8 & -11 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 \end{array} \right)
r_1 \to r_1 + 8 r_3: \left( \begin{array}{ccc|c} 1 & 0 & 0 & -3 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 1 \end{array} \right),
which is now in reduced row-echelon form and is equivalent to

x1 = −3
x2 = 2
x3 = 1.

Therefore
(x1 , x2 , x3 ) = (−3, 2, 1).

To see the types of solutions one can get when solving a system of linear equations, we shall look at
several augmented matrices that have already been reduced to echelon form.

32
Case 1

 
\left( \begin{array}{ccc|c} 1 & 2 & 0 & 3 \\ 0 & 1 & 1 & 4 \\ 0 & 0 & 1 & -1 \end{array} \right)
is equivalent to

x1 + 2x2 = 3
x2 + x3 = 4
x3 = −1,

which implies that x1 = −7, x2 = 5 and x3 = −1.

Case 2

 
\left( \begin{array}{ccc|c} 1 & 0 & -2 & 1 \\ 0 & 1 & -1 & 3 \\ 0 & 0 & 0 & 0 \end{array} \right)
which is equivalent to

x1 − 2x3 = 1
x2 − x3 = 3

which implies that

x1 = 2x3 + 1
x2 = x3 + 3
x3 = x3 .

Since x1 and x2 correspond to the leading 1's in the augmented matrix, we call them the leading
variables. The remaining variables (in this case x3) are called free variables. In this case all the
solutions are expressed in terms of x3 . Any arbitrary value can be assigned to x3 and the resulting
values of x1 , x2 and x3 will satisfy all the equations in the system. The solution set therefore is
infinite and written as follows

(x1 , x2 , x3 ) = (2t + 1, t + 3, t).

33
Case 3

 
\left( \begin{array}{ccc|c} 1 & -2 & 4 & 0 \\ 0 & 1 & 3 & -2 \\ 0 & 0 & 0 & -4 \end{array} \right).
The equation represented by the last row is 0x1 + 0x2 + 0x3 = −4. Clearly, we can never find
suitable values for x1 , x2 and x3 which satisfy this equation. Therefore the solution does not exist.

The reduced row-echelon form of a matrix is unique and a row-echelon form is not unique, by
changing the sequence of elementary row operations it is possible to arrive at different row-echelon
forms.
Example 3.4.5. Find the value of α for which the following system of equations is (a) consistent
(b) inconsistent.
−3x1 + x2 = −2
x1 + 2x2 = 3
2x1 + 3x2 = α.

Solution: The augmented matrix is


 
\left( \begin{array}{cc|c} -3 & 1 & -2 \\ 1 & 2 & 3 \\ 2 & 3 & \alpha \end{array} \right).
Doing elementary row operations, we have
r_1 \leftrightarrow r_2: \left( \begin{array}{cc|c} 1 & 2 & 3 \\ -3 & 1 & -2 \\ 2 & 3 & \alpha \end{array} \right)
r_2 \to r_2 + 3r_1, \; r_3 \to r_3 - 2r_1: \left( \begin{array}{cc|c} 1 & 2 & 3 \\ 0 & 7 & 7 \\ 0 & -1 & \alpha - 6 \end{array} \right)
r_2 \to \frac{1}{7} r_2: \left( \begin{array}{cc|c} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & -1 & \alpha - 6 \end{array} \right)
r_3 \to r_3 + r_2: \left( \begin{array}{cc|c} 1 & 2 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & \alpha - 5 \end{array} \right),
which is now in echelon form. The last row is equivalent to 0x1 + 0x2 = α − 5.

34
(a) The system can be consistent only if α − 5 = 0, that is, when α = 5.

(b) The system is inconsistent if α − 5 ≠ 0, that is, when α ≠ 5.

3.5 Homogeneous System of Linear Equations

Definition 3.5.1. A homogeneous system of linear equations is a system in which all the constant
terms are zero.

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = 0
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = 0
\vdots
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = 0.

Any homogeneous system of equations will always have a solution no matter what the coefficient
matrix is like, and so can never be inconsistent. The system either has only the trivial solution or it
has infinitely many solutions in addition to the trivial solution.

The obvious solution is x1 = x2 = · · · = xn = 0. This solution is known as the trivial solution; if


there are other solutions, they are called nontrivial solutions.

3.5.1 Solution of Homogeneous Systems

Example 3.5.1. Find the solution set of the following homogeneous system of linear equations

x1 − 2x2 + x3 = 0
2x1 + x2 − 3x3 = 0
−3x2 + x3 = 0.

35
Solution: The augmented matrix is
\left( \begin{array}{ccc|c} 1 & -2 & 1 & 0 \\ 2 & 1 & -3 & 0 \\ 0 & -3 & 1 & 0 \end{array} \right).

Doing the elementary row operations, we have
r_2 \to r_2 - 2r_1: \left( \begin{array}{ccc|c} 1 & -2 & 1 & 0 \\ 0 & 5 & -5 & 0 \\ 0 & -3 & 1 & 0 \end{array} \right)
r_2 \to \frac{1}{5} r_2: \left( \begin{array}{ccc|c} 1 & -2 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & -3 & 1 & 0 \end{array} \right)
r_3 \to r_3 + 3 r_2, \; r_3 \to -\frac{1}{2} r_3: \left( \begin{array}{ccc|c} 1 & -2 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right)

which is equivalent to

x1 − 2x2 + x3 = 0
x2 − x3 = 0
x3 = 0.

Therefore (x1 , x2 , x3 ) = (0, 0, 0) has only one solution, the trivial solution.

In general, (i) if m = n and the coefficient matrix is invertible, the system has only the zero solution;
(ii) if m < n (fewer equations than unknowns), the system has non-zero solutions.

Theorem 3.5.1. A homogeneous system of linear equations with more unknowns than equations
has a non-zero solution.

Practice is the best of all instructors.

—Publilius

36
3.6 Tutorial Questions
   
1. Let C = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 5 & 1 \\ 1 & 1 & 3 \end{pmatrix}. What is the dimension of the product B = (x, y, z) \cdot C \cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix}? Calculate
B. Find the determinant |C| of C and, using Gauss-Jordan elimination, find the inverse of C.

2. Find the inverse of the matrix A = \begin{pmatrix} 3 & 1 & 0 \\ -2 & -4 & 3 \\ 5 & 4 & -2 \end{pmatrix}.

3. Use elementary row operations to find A^{-1} if A = \begin{pmatrix} 1 & 2 & 4 \\ 2 & 0 & 2 \\ 1 & 1 & 3 \end{pmatrix} and A = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 0 & 4 \\ 5 & 1 & 7 \end{pmatrix}.

4. Given that E(x) = \begin{pmatrix} x & 1 \\ -1 & 0 \end{pmatrix}. Show that (a) E(x)E(0)E(y) = -E(x + y) and (b) the inverse
of E(x) is E(0)E(-x)E(0).

5. Use row operations to determine the inverse of A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & -1 & 2 \\ 1 & -1 & 2 & 1 \\ 1 & 3 & 3 & 2 \end{pmatrix}.

6. Let W = \begin{pmatrix} 1 & 2 & 1 & a \\ 2 & 5 & 1 & a \\ -1 & 1 & -3 & -3a+1 \\ 3 & 8 & 1 & 2a \end{pmatrix}. Find all values of a for which the matrix W has an
inverse and calculate the inverse of W for one such value of a.

7. Solve the following systems
(a) 2x + y - 2z = 10, \; 3x + 2y + 2z = 1, \; 5x + 4y + 3z = 4
(b) x + 2y - 3z = 6, \; 2x - y + 4z = 2, \; 4x + 3y - 2z = 14
(c) x + 2y + 2z = 2, \; 3x - 2y - z = 5, \; 2x - 5y + 3z = -4, \; x + 4y + 6z = 0
(d) x + 5y + 4z - 13w = 3, \; 3x - y + 2z + 5w = 2, \; 2x + 2y + 3z - 4w = 1.

8. Let A and I be 2 × 2 matrices defined as follows
A = \begin{pmatrix} 1 & b \\ c & d \end{pmatrix}, \qquad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
Prove that the matrix A is row equivalent to the matrix I if d - cb ≠ 0.

9. Suppose that the following matrix A is the augmented matrix for a system of linear equations.
A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & -1 & -2 & a^2 \\ -1 & -7 & -11 & a \end{pmatrix}
where a is a real number. Determine all the values of a so that the corresponding system is
consistent.

10. Find the rank of the following real matrix
\begin{pmatrix} a & 1 & 2 \\ 1 & 1 & 1 \\ -1 & 1 & 1-a \end{pmatrix}
where a is a real number.
11. If A and B have the same rank, can we conclude that they are row-equivalent? If so, then
prove it. If not, then provide a counterexample.
12. (a) Find all 3 × 3 matrices which are in reduced row echelon form and have rank 1.
(b) Find all matrices with rank 2.
13. Consider the system
ax + by = 1
cx + dy = 0.
Show that if ad − bc ≠ 0, then the system has the unique solution
x = \frac{d}{ad-bc}, \qquad y = \frac{-c}{ad-bc}.
Also show that if ad − bc = 0, c ≠ 0, d ≠ 0, the system has no solution.
14. Show that the homogeneous system
(a − r)x + dy = 0
cx + (b − r)y = 0
has a non-trivial solution if and only if r satisfies the equation (a − r)(b − r) − cd = 0.
15. In the following linear systems, determine all the values of a for which the resulting linear
system has (a) no solution (b) a unique solution and (c) infinitely many solutions.
(i)
x+y−z = 2
x + 2y + z = 3
x + y + (a2 − 5)z = a

38
(ii)

x+y+z = 2
2x + 3y + 2z = 5
2x + 3y + (a2 − 1)z = a + 1

(iii)

x + y = 3
x + (a^2 - 8)y = a.

16. Consider the system of equations

2x + y + αz = β
2x − αy + 2z = β
x − 2y + 2αz = 1.

For what values of α and β does the system have a unique solution?

17. For what values of c does the following system of linear equations have no solution, a unique
solution, infinitely many solutions.

x + 2y − 3z = 4
3x − y + 5z = 2
4x + y + (c2 − 14)z = c + 2.

Show that in the case of infinitely many solutions, the solution may be written as
(\frac{8}{7} - \alpha, \; \frac{10}{7} + 2\alpha, \; \alpha) for any real \alpha.

18. Find a cubic polynomial p(x) = a + bx + cx^2 + dx^3 such that p(1) = 1, p'(1) = 5, p(−1) = 3
and p'(−1) = 1.

19. A 2−digit number has two properties. The digits sum to 11, and if the number is written with
digits reversed, and subtracted from the original number, the result is 45. Find the number.

39
Chapter 4

Vectors and the Geometry of Space

4.1 Introduction

Many quantities in geometry and physics, such as area, volume,energy, work, electrical resistance,
temperature, mass and time, can be characterized by single real numbers scaled to appropriate
units of measure. We call these scalar quantities, and the real number associated with each is
called a scalar. A scalar quantity has magnitude, including the sense of being positive or negative,
but no assigned position and no assigned direction.

Other quantities such as force, displacement, acceleration, momentum and velocity involve both
magnitude and direction and cannot be characterized by single real numbers. A vector is a quantity
having both magnitude and direction.

−−→
Graphically a vector is represented by an arrow OP defining the direction, the magnitude of the
vector being indicated by the length of the arrow. The tail end O of the arrow is called the origin
or initial point of the vector, and the head P is called the terminal point or terminus. This arrow
−−→
representing the vector is called a directed line segment. The length |OP | is the magnitude of the
line segment from O to P .

Vectors can be represented in text by bold-case letters, such as A, B, C and so on or lower-case


boldface letters such as a, b, c and so on. When written by hand, however, vectors are often


denoted by letters with arrows above them, such as → −a , b and so on or a bar above, such as a, b

40
and so on or a bar below, such as a, b and so on. When the initial point of the vector is fixed, it is
called a fixed or localized vector, otherwise, it is a free vector.

4.2 Unit Vectors

A unit vector is a vector of unit length. A unit vector is sometimes denoted by \hat{e}. Therefore,
|\hat{e}| = 1.

Any vector can be made into a unit vector by dividing it by its length, that is,
\hat{e} = \frac{u}{|u|}.

So \frac{u}{|u|} is a unit vector in the direction of the vector u.

An important set of unit vectors are those having the directions of the positive x, y, and z axes of
a three dimensional rectangular coordinate system. Vectors will be denoted as

A = (A1 , A2 , A3 ) = A1 i + A2 j + A3 k,

where i, j and k are unit base vectors defined by

i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1).

The vectors A1 i, A2 j, and A3 k are called the rectangular component vectors or simply com-
ponent vectors of A in the x, y and z directions respectively. A1 , A2 and A3 are called the
rectangular components or simply components of A in the x, y and z directions respectively. The
magnitude or length of A is
A = |A| = \sqrt{A_1^2 + A_2^2 + A_3^2}.
In particular, the position vector or radius vector r from O to the point (x, y, z) is written as
r = xi + yj + zk
and has magnitude r = |r| = \sqrt{x^2 + y^2 + z^2}. That is, i, j and k are three mutually perpendicular
vectors pointing along the Ox, Oy and Oz axes respectively. These vectors are often called the basis
vectors.

41
Example 4.2.1. Given A = 3i − 2j + k, B = 2i − 4j − 3k and C = −i + 2j + 2k, find the magnitudes
of (i) C, (ii) A + B + C and (iii) 2A − 3B − 5C.

Solution: (i) |C| = |−i + 2j + 2k| = \sqrt{(-1)^2 + 2^2 + 2^2} = 3.

(ii) A + B + C = (3 + 2 − 1)i + (−2 − 4 + 2)j + (1 − 3 + 2)k = 4i − 4j + 0k. Then
|A + B + C| = |4i − 4j + 0k| = \sqrt{4^2 + (-4)^2} = \sqrt{32} = 4\sqrt{2}.

(iii) 2A − 3B − 5C = 2(3i − 2j + k) − 3(2i − 4j − 3k) − 5(−i + 2j + 2k) = 5i − 2j + k. Then
|2A − 3B − 5C| = |5i − 2j + k| = \sqrt{5^2 + (-2)^2 + 1^2} = \sqrt{30}.
Example 4.2.2. Find the component form and magnitude of the vector A having initial point
(−2, 3, 1) and terminal point (0, −4, 4). Then find a unit vector in the direction of A.

Solution: The component form of A is


A = (0 − (−2), −4 − 3, 4 − 1) = (2, −7, 3)
which implies that its magnitude is
|A| = \sqrt{2^2 + (-7)^2 + 3^2} = \sqrt{62}.
The unit vector in the direction of A is
U = \frac{A}{|A|} = \frac{1}{\sqrt{62}} (2, -7, 3) = \left( \frac{2}{\sqrt{62}}, \frac{-7}{\sqrt{62}}, \frac{3}{\sqrt{62}} \right).

4.3 Vectors in Rn

Euclidean 2-space, denoted by R^2, is the set of all vectors with two entries, that is
R^2 = \left\{ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} : x_1, x_2 \in R \right\}.

Similarly, Euclidean 3-space, denoted by R^3, is the set of all vectors with three entries, that is
R^3 = \left\{ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} : x_1, x_2, x_3 \in R \right\}.

42
In general, Euclidean n-space, usually denoted by R^n, consists of vectors with n entries and is defined
by
R^n = \left\{ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} : x_i \in R, \; i = 1, 2, \ldots, n \right\}.
4.3.1 Linear Combination

In 3-dimensional Euclidean space R^3, the coordinate vectors that define the three axes are the
vectors
e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
Every vector in R^3 can be obtained from these coordinate vectors.
Example 4.3.1.
v = \begin{pmatrix} 2 \\ 3 \\ 3 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + 3 \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + 3 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.

A vector written as a combination of other vectors using addition and scalar multiplication is
called a linear combination.
Example 4.3.2. If
v_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} \quad \text{and} \quad v_3 = \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix},
then 3v_1 - v_2 + v_3 = \begin{pmatrix} 2 \\ 3 \\ 3 \end{pmatrix}.
Definition 4.3.1. Let S = {v_1, v_2, \cdots, v_k} be a set of vectors in R^n, and let c_1, c_2, \cdots, c_k be
scalars. An expression of the form
c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = \sum_{i=1}^{k} c_i v_i
is called a linear combination of the vectors of S.

43
 
Example 4.3.3. Determine whether the vector v = \begin{pmatrix} -1 \\ 1 \\ 10 \end{pmatrix} is a linear combination of the vectors
v_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} -2 \\ 3 \\ -2 \end{pmatrix}, \quad v_3 = \begin{pmatrix} -6 \\ 7 \\ 5 \end{pmatrix}.

Solution: The vector v is a linear combination of the vectors v_1, v_2 and v_3, if there are scalars
c_1, c_2 and c_3, such that
v = \begin{pmatrix} -1 \\ 1 \\ 10 \end{pmatrix} = c_1 v_1 + c_2 v_2 + c_3 v_3 = c_1 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} -2 \\ 3 \\ -2 \end{pmatrix} + c_3 \begin{pmatrix} -6 \\ 7 \\ 5 \end{pmatrix} = \begin{pmatrix} c_1 - 2c_2 - 6c_3 \\ 3c_2 + 7c_3 \\ c_1 - 2c_2 + 5c_3 \end{pmatrix}.

Equating components gives the linear system
c_1 - 2c_2 - 6c_3 = -1
3c_2 + 7c_3 = 1
c_1 - 2c_2 + 5c_3 = 10.

To solve this linear system, we reduce the augmented matrix
\left( \begin{array}{ccc|c} 1 & -2 & -6 & -1 \\ 0 & 3 & 7 & 1 \\ 1 & -2 & 5 & 10 \end{array} \right) \quad \text{to} \quad \left( \begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 1 \end{array} \right).

From the last matrix, we see that the linear system is consistent with the unique solution
c_1 = 1, \quad c_2 = -2, \quad c_3 = 1.

Using the scalars, we can write v as the linear combination
v = \begin{pmatrix} -1 \\ 1 \\ 10 \end{pmatrix} = 1 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + (-2) \begin{pmatrix} -2 \\ 3 \\ -2 \end{pmatrix} + 1 \begin{pmatrix} -6 \\ 7 \\ 5 \end{pmatrix}.
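Deciding whether v is a linear combination of v1, v2, v3 amounts to solving the linear system above; here is a hedged NumPy sketch (illustration only) for Example 4.3.3.

import numpy as np

v1 = np.array([1, 0, 1], dtype=float)
v2 = np.array([-2, 3, -2], dtype=float)
v3 = np.array([-6, 7, 5], dtype=float)
v  = np.array([-1, 1, 10], dtype=float)

M = np.column_stack([v1, v2, v3])   # columns are v1, v2, v3
c = np.linalg.solve(M, v)           # solve M c = v for the coefficients
print(c)                            # [ 1. -2.  1.]
print(np.allclose(M @ c, v))        # True: v = 1*v1 - 2*v2 + 1*v3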

44
 
Exercise 4.3.1. Determine whether the vector v = \begin{pmatrix} -5 \\ 11 \\ -7 \end{pmatrix} is a linear combination of the vectors
v_1 = \begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 0 \\ 5 \\ 5 \end{pmatrix} \quad \text{and} \quad v_3 = \begin{pmatrix} 2 \\ 0 \\ 8 \end{pmatrix}.

4.4 Linear Independence

Definition 4.4.1. A set of vectors {v1 , v2 , · · · , vk } is said to be linearly independent, if the


only scalars c1 , c2 , · · · , ck satisfying

c1 v1 + c2 v2 + · · · + ck vk = 0

are c1 = c2 = · · · = ck = 0. We also say that the vectors v1 , v2 , · · · , vk are linearly independent.

If vectors are not linearly independent, they are linearly dependent.

The set of n-dimensional vectors {v1 , v2 , · · · , vk } are linearly dependent if k > n. If there are more
vectors than the dimension, then the vectors are linearly dependent.

Example 4.4.1. Find the value(s) of h for which the following set of vectors
     
v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad v_2 = \begin{pmatrix} h \\ 1 \\ -h \end{pmatrix}, \quad v_3 = \begin{pmatrix} 1 \\ 2h \\ 3h+1 \end{pmatrix}

is linearly independent.

Solution: Let us consider the linear combination
x_1 v_1 + x_2 v_2 + x_3 v_3 = 0.

If this homogeneous system has only the zero solution x_1 = x_2 = x_3 = 0, then the vectors v_1, v_2, v_3 are
linearly independent. We reduce the augmented matrix for the system as follows:
\left( \begin{array}{ccc|c} 1 & h & 1 & 0 \\ 0 & 1 & 2h & 0 \\ 0 & -h & 3h+1 & 0 \end{array} \right) \xrightarrow{R_3 + hR_2} \left( \begin{array}{ccc|c} 1 & h & 1 & 0 \\ 0 & 1 & 2h & 0 \\ 0 & 0 & 2h^2 + 3h + 1 & 0 \end{array} \right).

From this, we see that the homogeneous system has only the zero solution if and only if
2h^2 + 3h + 1 \neq 0.

Since 2h^2 + 3h + 1 = (2h + 1)(h + 1), if h \neq -\frac{1}{2}, -1, then 2h^2 + 3h + 1 \neq 0. In summary, the vectors
v_1, v_2, v_3 are linearly independent for any h except -\frac{1}{2} and -1.

4.5 Tutorial Questions


 
1. Express the vector b = \begin{pmatrix} 2 \\ 13 \\ 6 \end{pmatrix} as a linear combination of the vectors
v_1 = \begin{pmatrix} 1 \\ 5 \\ -1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 1 \\ 4 \\ 3 \end{pmatrix}.

2. Prove that any set of vectors which contains the zero vector is linearly dependent.

3. Let
v_1 = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 1 \\ a \\ 5 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 0 \\ 4 \\ b \end{pmatrix}
be vectors in R^3. Determine a condition on the scalars a, b so that the set of vectors {v_1, v_2, v_3}
is linearly dependent.

4.6 General Vector Spaces

4.6.1 Real Vector Spaces

In this section we shall generalize the concept of a vector still further. We shall state a set of axioms
which, if satisfied by a class of objects, will entitle those objects to be called ”vectors”.

46
Definition 4.6.1. Let V be an arbitrary non-empty set of objects on which two operations are
defined, addition and multiplication by scalars. If the following axioms are satisfied by all objects
u, v, w in V and all scalars k and l, then we call V a vector space and we call the objects
vectors.

1. If u and v are objects in V, then u + v is in V.

2. u + v = v + u.

3. u + (v + w) = (u + v) + w

4. There is an object 0 in V, called the zero vector for V, such that
0 + u = u + 0 = u for all u in V.

5. For each u in V, there is an object −u in V, called the negative of u, such that u + (−u) =
(−u) + u = 0.

6. If k is any scalar and u is any object in V, then ku is in V.

7. k(u + v) = ku + kv

8. (k + l)u = ku + lu

9. k(lu) = (kl)u

10. 1u = u

Exercises

1. Prove that the set V = R2 is a vector space.

2. Prove that the set of all 2 × 2 matrices, Ms×2 , is a vector space.


Theorem 4.6.1. Let V be a vector space, u a vector in V, and k a scalar; then :
(a) 0u = 0
(b) k0 = 0
(c) (−1)u = −u
(d) If ku = 0, then k = 0 or u = 0.

Proof. (a) By axiom 8 and the property of the number 0 we can write
0u + 0u = (0 + 0)u = 0u.
By axiom 5 the vector 0u has a negative, −0u. Adding this to both sides of the above yields
[0u + 0u] + (−0u) = 0u + (−0u)
0u + [0u + (−0u)] = 0
0u + 0 = 0
0u = 0.

Proof. (b), (c) and (d) have been left as homework.

Subspaces

Definition 4.6.2. A subset W of a vector space V is called a subspace of V if W is itself a vector
space under the addition and scalar multiplication defined on V. Equivalently, a non-empty subset
W is a subspace of V if and only if
(a) If u and v are vectors in W, then u + v is in W.
(b) If k is any scalar and u is any vector in W, then ku is in W.

Example Let n be a positive integer, and let W consist of all functions expressible in the form

p(x) = a0 + a1 x + a2 x2 + · · · + an xn ,

where a0 , · · · , an are all real numbers. Show that W is a subspace.

Proof. Classwork

Linear Combinations of Vectors

Definition 4.6.3. A vector w is called a linear combination of vectors v1 , v2 , · · · , vr if it can


be expressed in the form
w = k1 v1 + k2 v2 + · · · + kr vr
where k1 , k2 , · · · , kr are scalars.

Example

48
1. Show that every vector v = (a, b, c) in R3 is expressible as a linear combination of the standard
basis vectors
i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1).

2. Consider the vectors u = (1, 2, −1) and v = (6, 4, 2) in R3 . Show that w = (9, 2, 7) is a linear
combination of u and v and that z = (4, −1, 8) is not a linear combination of u and v.
Solution: (i) There must be scalars k1 and k2 such that w = k1 u + k2 v.

Spanning
Theorem 4.6.2. If v1, v2, · · · , vr are vectors in a vector space V, then:
(a) The set W of all linear combinations of v1, v2, · · · , vr is a subspace of V.
(b) W is the smallest subspace of V that contains v1, v2, · · · , vr in the sense that every other
subspace of V that contains v1, v2, · · · , vr must contain W.

Proof. To show that W is a subspace of V, we must prove it is closed under addition and scalar
multiplication. 0 ∈ V since 0 = 0v1 + 0v2 + · · · + 0vr
If u, v are vectors in W, then
u = c1 v1 + c2 v2 + · · · + cr vr
v = k1 v1 + k2 v2 + · · · + kr vr
where c1 , c2 , · · · , cr , k1 , k2 , · · · , kr are scalars. Therefore

u + v = (c1 + k1 )v1 + (c2 + k2 )v2 + · · · + (cr + kr )vr

and, for any scalar k,
ku = (kc1)v1 + (kc2)v2 + · · · + (kcr)vr.
Thus, u + v and ku are linear combinations of v1, v2, · · · , vr and consequently lie in W. Therefore,
W is closed under addition and scalar multiplication.

Proof. (b) Classwork

Definition 4.6.4. If S = {v1 , v2 , · · · , vr } is a set of vectors in a vector space V, then the subspace
W of V consisting of all linear combinations of the vectors in S is called the the space spanned
by v1 , v2 , · · · , vr , and we say that the vectors v1 , v2 , · · · , vr span W.

Example: Determine whether v1 = (1, 1, 2), v2 = (1, 0, 1), and v3 = (2, 1, 3), span the vector
space R3
Solution: Classwork

49
Tutorial 1

1. Determine which sets are vector spaces under the given operations. For those that are
not, list all axioms that fail to hold.

(a) The set of all triples of real numbers (x, y, z) with the operations
(x, y, z) + (x', y', z') = (x + x', y + y', z + z') and k(x, y, z) = (kx, ky, kz).

(b) The set of all pairs of real numbers (x, y) with the operations
(x, y) + (x', y') = (x + x', y + y') and k(x, y) = (2kx, 2ky).

(c) The set of all pairs of numbers (x, y) with the operations
(x, y) + (x', y') = (x + x' + 1, y + y' + 1) and k(x, y) = (kx, ky).


 
(d) The set of all 2 × 2 matrices of the form \begin{pmatrix} a & 1 \\ 1 & b \end{pmatrix} with matrix addition and scalar
multiplication.

(e) The set of all 2 × 2 matrices of the form \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} with matrix addition and scalar
multiplication.

2. Determine which of the following are subspaces of M2×2

(a) all 2 × 2 matrices with integer entries.


(b) all 2 × 2 matrices A such that det(A) = 0.

3. Determine which of the following are subspaces of P3 .

(a) all polynomials a0 + a1 x + a2 x2 + a3 x3 such that a0 = 0.


(b) all polynomials a0 + a1 x + a2 x2 + a3 x3 such that a0 + a1 + a2 + a3 = 0.

4. Express the following as linear combinations of u = (2, 1, 4), v = (1, −1, 3) and w = (3, 2, 5).
(a) (0, 0, 0) (b) (−9, −7, −15) (c) (6, 11, 6)

5. Express the following as linear combinations of p1 = 2 + x + 4x2 , p2 = 1 − x + 3x2 and


p3 = 3 + 2x + 5x2 .
(a) 0 (b) −9 − 7x − 15x2

6. In each part determine whether the given vectors span R3

50
(a) v1 = (2, 2, 2), v2 = (0, 0, 3), v3 = (0, 1, 1).
(b) v1 = (2, −1, 3), v2 = (4, 1, 2) v3 = (8, −1, 8).
(c) v1 = (1, 2, 6), v2 = (3, 4, 1), v3 = (4, 3, 1), v4 = (3, 3, 1)

7. Let v1 = (2, 1, 0, 3) v2 = (3, −1, 5, 2) v3 = (−1, 0, 2, 1). Which of the following vectors are in
span{v1 , v2 , v3 }
(a) (0, 0, 0, 0) (b) (2, 3, −7, 3) (c) (1, 1, 1, 1)

4.6.2 Linear Independence

In the preceding section we learned that set of vectors S = {v1 , v2 , · · · , vr } span a given vector
space V if every vector in V can be expressed as a linear combination of the vectors in S. In general,
there may be more than one way to express a vector in V as a linear combination of vectors in
the spanning set. In this section we study the conditions under which each vector in V can be
expressed as a linear combination of the spanning vectors in exactly one way.

Definition 4.6.5. If S = {v1, v2, · · · , vr} is a non-empty set of vectors, then the vector equation

k1 v1 + k2 v2 + · · · + kr vr = 0

has at least one solution, namely

k1 = 0, k2 = 0, · · · , kr = 0.

If this is the only solution, then S is called a linearly independent set. If there are other solutions,
then S is called a linearly dependent set.

Examples

1. If v1 = (2, −1, 0, 3), v2 = (1, 2, 5, −1) and v3 = (7, −1, 5, 8), then the set of vectors
S = {v1, v2, v3} is linearly dependent since 3v1 + v2 − v3 = 0.

2. The polynomials p1 = 1−x, p2 = 5+ 3x−2x2 and p3 = 1+ 3x−x2 form a linearly dependent


set since 3p1 − p2 + 2p3 = 0.

3. The vectors i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1) in R3 are linearly independent.

4. Determine whether the vectors v1 = (1, −2, 3), v2 = (5, 6, −1) and v3 = (3, 2, 1) form a
linearly dependent or independent set. {k1 = -\frac{1}{2}t, k2 = -\frac{1}{2}t, k3 = t}

51
4.6.3 Basis For a Vector Space

Definition 4.6.6. If V is any vector space and S = {v1 , v2 , · · · , vr } is a set of vectors in V, then
S is called a basis for V if the following two conditions hold
(a) S is linearly independent.
(b) S spans V.

Theorem 4.6.3. If S = {v1 , v2 , · · · , vn } is a basis for a vector space V, then every vector v in V
can be expressed in the form v = c1 v1 + c2 v2 + · · · + cn vn in exactly one way

Proof. Since S spans V, it follows that every vector in V can be expressed as linear combination
of vectors in S. To see that there is only one way to express a vector as a linear combination of
vectors in S, suppose that some vector v can be expressed as

v = c1 v1 + c2 v2 + · · · + cn vn

and also as
v = k1 v1 + k2 v2 + · · · + kn vn .
Subtracting the second equation from the first gives

0 = (c1 − k1)v1 + (c2 − k2)v2 + · · · + (cn − kn)vn

Since the right hand side of this equation is a linear combination of vectors in S, the linear inde-
pendence of S implies that

c1 − k1 = 0, c2 − k2 = 0, · · · , cn − kn = 0

that is
c1 = k1 , c2 = k2 , · · · , cn = kn
Thus, the two expressions for v are the same.

Example Let v1 = (1, 2, 1), v2 = (2, 9, 0) and v3 = (3, 3, 4). Show that the set S = {v1, v2, v3}
is a basis for R3.
Solution: Classwork

Definition 4.6.7. A non-zero vector space V is called finite-dimensional if it contains a finite


set of vectors {v1 , v2 , · · · , vn } that forms a basis. If no such set exists, V is called infinite-
dimensional. The zero vector space shall be regarded to be finite dimensional.

52
Theorem 4.6.4. If V is a finite-dimensional vector space and {v1 , v2 , · · · , vn } is any basis, then:
(a) Each set with more than n vectors is linearly dependent.
(b) No set with fewer than n vectors spans V.

Proof. Exercise

Definition 4.6.8. The dimension of a finite dimensional vector space V, denoted by dim(V ),
is defined to be the number of vectors in a basis for V. The zero vector space is defined to have
dimension zero.

Example: Determine a basis for, and the dimension of, the solution space of the homogeneous
system
2x_1 + 2x_2 - x_3 + x_5 = 0
-x_1 - x_2 + 2x_3 - 3x_4 + x_5 = 0
x_1 + x_2 - 2x_3 - x_5 = 0
x_3 + x_4 + x_5 = 0.
Solution: Using the Gauss-Jordan elimination, we see that the general solution of the given system
is
x_1 = -s - t, \quad x_2 = s, \quad x_3 = -t, \quad x_4 = 0, \quad x_5 = t.
Thus, the solution vectors can be written as
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} -s-t \\ s \\ -t \\ 0 \\ t \end{pmatrix} = \begin{pmatrix} -s \\ s \\ 0 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} -t \\ 0 \\ -t \\ 0 \\ t \end{pmatrix} = s \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} + t \begin{pmatrix} -1 \\ 0 \\ -1 \\ 0 \\ 1 \end{pmatrix},

which shows that the vectors


   
v_1 = \begin{pmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} \quad \text{and} \quad v_2 = \begin{pmatrix} -1 \\ 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}

span the solution space. Since they are also linearly independent (verify), {v1 v2 } is a basis, and
the solution space is two-dimensional.

53
4.6.4 Row Space, Column Space and Null Space

Definition 4.6.9. For an m × n matrix
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},
the vectors
r_1 = (a_{11}, a_{12}, \cdots, a_{1n}), \quad r_2 = (a_{21}, a_{22}, \cdots, a_{2n}), \quad \ldots, \quad r_m = (a_{m1}, a_{m2}, \cdots, a_{mn})
in R^n formed from the rows of A are called the row vectors of A, and the vectors
c_1 = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix}, \quad c_2 = \begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{pmatrix}, \quad \cdots, \quad c_n = \begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix}
in R^m formed from the columns of A are called the column vectors of A.
Definition 4.6.10. If A is an m × n matrix, then the subspace of Rn spanned by the row vectors of
A is called the row space of A, and the subspace of Rm spanned by the column vectors is called the
column space of A. The solution space of the homogeneous system of equations Ax = 0, which is
a subspace of Rn , is called the null space of A.

Example Find a basis for the space spanned by the vectors v1 = (1, −2, 0, 0, 3), v2 = (2, −5, −3, −2, 6),
v3 = (0, 5, 15, 10, 0) and v4 = (2, 6, 18, 8, 6).
Solution: The space spanned by these vectors is the row space of the matrix
 
1 −2 0 0 3
2 −5 −3 −2 6
0 5 15 10 0
2 6 18 8 6
Reducing this matrix to row-echelon form we obtain
 
1 −2 0 0 3
 0 1 3 2 0 
 
 0 0 1 1 0 
0 0 0 0 0

The non-zero row vectors of this matrix are

w1 = (1, −2, 0, 0, 3), w2 = (0, 1, 3, 2, 0), w3 = (0, 0, 1, 1, 0).

These vectors form a basis for the row space and consequently a basis for the subspace of R5
spanned by v1 , v2 , v3 and v4 .
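The same computation can be reproduced with a sketch like the one below (not part of the original notes, assuming sympy is available).

```python
# Sketch: a basis for span{v1, v2, v3, v4} = the row space of the matrix with those rows.
from sympy import Matrix

A = Matrix([
    [1, -2,  0,  0, 3],
    [2, -5, -3, -2, 6],
    [0,  5, 15, 10, 0],
    [2,  6, 18,  8, 6],
])

rref, pivots = A.rref()   # reduced row-echelon form and the pivot columns
print(rref)               # its non-zero rows form a basis for the row space
print(A.rowspace())       # sympy's built-in basis for the row space
```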

4.6.5 Rank and Nullity

Definition 4.6.11. The common dimension of the row space and column space of a matrix A is
called the rank of A and is denoted by rank(A); the dimension of the null space of A is called the
nullity of A and is denoted by nullity(A).

Example: Find the rank and nullity of the matrix


 
A =
−1 2 0 4 5 −3
3 −7 2 0 1 4
2 −5 2 4 6 1
4 −9 2 −4 −4 7

Solution: Reducing A to echelon form we obtain


 
1 0 −4 −28 −37 13
0 1 −2 −12 −16 5
0 0 0 0 0 0
0 0 0 0 0 0

Since there are two non-zero rows, the row space and column space are both two-dimensional, so
rank(A) = 2. To find the nullity of A, we must find the dimension of the solution space of the linear
system Ax = 0; this solution space is four-dimensional, so nullity(A) = 4 (consistent with Theorem
4.6.5 below, since rank(A) + nullity(A) = 2 + 4 = 6, the number of columns of A).

Theorem 4.6.5. If A is a matrix with n columns, then

rank(A) + nullity(A) = n
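For the matrix of the example above, rank and nullity can be computed directly, which also illustrates the theorem with n = 6 (a sketch, not part of the original notes, assuming sympy is available).

```python
# Sketch: verify rank(A) + nullity(A) = n for the rank/nullity example.
from sympy import Matrix

A = Matrix([
    [-1,  2, 0,  4,  5, -3],
    [ 3, -7, 2,  0,  1,  4],
    [ 2, -5, 2,  4,  6,  1],
    [ 4, -9, 2, -4, -4,  7],
])

rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity)   # 2, 4 and 6 = number of columns of A
```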

Tutorial 2

1. Which of the following sets of vectors in R3 are linearly independent?


(a) (4, −1, 2), (−4, 10, 2) (b) (−3, 0, 4), (5, −1, 2), (1, 1, 3) (c) (8, −1, 3), (4, 0, 1).
2. Which of the following sets of vectors in R4 are linearly independent?
(a) (3, 8, 7, −3), (1, 5, 3, −1), (2, −1, 2, 6), (1, 4, 0, 3) (b) (0, 0, 2, 2), (3, 3, 0, 0), (1, 1, 0, −1)
3. Which of the following sets of vectors in P2 are linearly independent?
(a) 2 − x + 4x2 , 3 + 6x + 2x2 , 2 + 10x − 4x2 (b) 3 + x + x2 , 2 − x + 5x2 , 4 − 3x2
4. For which values of λ do the following vectors form a linearly dependent set in R3 ?
v1 = (λ, −1/2, −1/2), v2 = (−1/2, λ, −1/2), v3 = (−1/2, −1/2, λ)
5. Which of the following sets of vectors are bases for the indicated vector space?
(a) (2, 1), (3, 0) for R2 (b) (1, 0, 0), (2, 2, 0), (3, 3, 3) for R3
(c) 1 − 3x + 2x2 , 1 + x + 4x2 , 1 − 7x for P2
6. Let {v1 , v2 , v3 } be a basis for a vector space V. Show that {u1 , u2 , u3 } is also a basis, where
u1 = v1 , u2 = v1 + v2 and u3 = v1 + v2 + v3
7. Determine the dimension of, and a basis for, the solution space of each of the following systems.
(a) x1 + x2 − x3 = 0
−2x1 − x2 + 2x3 = 0
−x1 + x3 = 0
(b) 3x1 + x2 + x3 + x4 = 0
5x1 − x2 + x3 − x4 = 0
(c) x + y + z = 0
3x + 2y − 2z = 0
4x + 3y − z = 0
6x + 5y + z = 0
8. Let
(a) A =
1 −1 3
5 −4 −4
7 −6 2
(b) A =
1 4 5 2
2 1 3 0
−1 3 2 2
(c) A =
1 4 5 6 9
3 −2 1 4 −1
−1 0 −1 −2 −1
2 3 5 7 8
(a) Find a basis for the null space of A.
(b) Find a basis for the row space of A by reducing the matrix to row-echelon form.
(c) Find a basis for the column space of A.
(d) Find a basis for the row space of A consisting entirely of row vectors of A.
(e) Find the rank and nullity of A.
9. Find a basis for the subspace of R4 spanned by the given vectors.
(a) (1, 1, −4, −3), (2, 0, 2, −2), (2, −1, 3, 2) (b) (−1, 1, −2, 0), (3, 3, 6, 0), (9, 0, 0, 3)

4.7 EIGENVALUES AND EIGENVECTORS

Definition 4.7.1. If A is an n×n matrix, then a non-zero vector x in Rn is called an eigenvector


of A if Ax is a scalar multiple of x; that is
Ax = λx
for some scalar λ. The scalar λ is called an eigenvalue of A, and x is said to be an eigenvector of
A corresponding to λ.

To find the eigenvalues of an n × n matrix A we rewrite Ax = λx as


Ax = λIx
or equivalently,
(λI − A)x = 0. (4.7.1)
For λ to be an eigenvalue, there must be a non-zero solution of this equation. Equation (4.7.1) has a
non-zero solution if and only if
det(λI − A) = 0. (4.7.2)
This is called the characteristic equation of A; the scalars satisfying this equation are the
eigenvalues of A. When expanded, the determinant det(λI − A) is a polynomial in λ called the
characteristic polynomial of A.

Example: Find the eigenvalues of
 
0 1 0
A= 0 0 1 
4 −17 8
Solution: The characteristic polynomial of A is
 
λ −1 0
det(λI − A) = det  0 λ −1  = λ3 − 8λ2 + 17λ − 4.
−4 17 λ − 8
The eigenvalues of A must therefore satisfy the cubic equation
λ3 − 8λ2 + 17λ − 4 = 0.
Factoring the left-hand side gives
(λ − 4)(λ2 − 4λ + 1) = 0.
Thus, the eigenvalues of A are
λ = 4, λ = 2 + √3, λ = 2 − √3.
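The characteristic polynomial and the eigenvalues above can be reproduced as follows (a sketch, not part of the original notes, assuming sympy is available).

```python
# Sketch: characteristic polynomial and eigenvalues of A.
from sympy import Matrix, symbols

lam = symbols('lambda')
A = Matrix([
    [0,   1, 0],
    [0,   0, 1],
    [4, -17, 8],
])

print(A.charpoly(lam).as_expr())   # lambda**3 - 8*lambda**2 + 17*lambda - 4
print(A.eigenvals())               # eigenvalues 4, 2 + sqrt(3), 2 - sqrt(3), each of multiplicity 1
```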

Theorem 4.7.1. If A is an n×n triangular matrix (upper triangular, lower triangular or diagonal),
then the eigenvalues of A are the entries on the main diagonal of A.
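A quick illustration of Theorem 4.7.1 (a sketch, not part of the original notes; the upper triangular matrix below is chosen arbitrarily):

```python
# Sketch: the eigenvalues of a triangular matrix are its diagonal entries.
from sympy import Matrix

U = Matrix([            # an arbitrary upper triangular matrix, used only for illustration
    [3, 5, -1],
    [0, 7,  2],
    [0, 0, -4],
])
print(U.eigenvals())    # {3: 1, 7: 1, -4: 1}, exactly the diagonal entries
```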

The eigenvectors of A corresponding to an eigenvalue λ are the non-zero vectors x satisfying the
equation Ax = λx. Equivalently, the eigenvectors corresponding to λ are non-zero vectors in the
solution space of (λI −A)x = 0. We call this solution space the eigenspace of A corresponding to λ.

Example: Find bases for the eigenspaces of


 
0 0 −2
 1 2 1 .
1 0 3

Solution: The characteristic equation of A is λ3 − 5λ2 + 8λ − 4 = 0, or in factored form,
(λ − 1)(λ − 2)2 = 0. Thus, the eigenvalues of A are λ = 1 and λ = 2, so there are two eigenspaces of A. By definition
 
x1
x =  x2 
x3

is an eigenvector of A corresponding to λ if and only if x is a non-trivial solution of (λI − A)x = 0,


that is, of     
λ 0 2 x1 0
 −1 λ − 2 −1   x2  =  0  .
−1 0 λ−3 x3 0
If λ = 2, the above equation becomes
    
2 0 2 x1 0
 −1 0 −1   x2  =  0  .
−1 0 −1 x3 0
Solving this system yields
x1 = −s, x2 = t, x3 = s.
Thus, the eigenvectors of A corresponding to λ = 2 are the non-zero vectors of the form
x = (−s, t, s) = (−s, 0, s) + (0, t, 0) = s(−1, 0, 1) + t(0, 1, 0).
Since the vectors (−1, 0, 1) and (0, 1, 0)

are linearly independent, these vectors form a basis for the eigenspace corresponding to λ = 2.
Classwork: Find a basis for the eigenspace corresponding to λ = 1.
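Both eigenspaces, including the classwork case λ = 1, can be checked with the following sketch (not part of the original notes, assuming sympy is available).

```python
# Sketch: bases for the eigenspaces of A.
from sympy import Matrix

A = Matrix([
    [0, 0, -2],
    [1, 2,  1],
    [1, 0,  3],
])

# eigenvects() returns triples (eigenvalue, algebraic multiplicity, basis of the eigenspace).
for eigenvalue, multiplicity, basis in A.eigenvects():
    print(eigenvalue, multiplicity, [v.T for v in basis])
# For lambda = 2 the basis contains two vectors, matching (-1, 0, 1) and (0, 1, 0) found above.
```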

Theorem 4.7.2. If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding
eigenvector, then λk is an eigenvalue of Ak and x is a corresponding eigenvector.
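Theorem 4.7.2 can be illustrated with the matrix of the previous example, for which x = (−1, 0, 1) is an eigenvector corresponding to λ = 2 (a sketch, not part of the original notes, assuming sympy is available).

```python
# Sketch: if A*x = 2*x, then A**3 * x = 2**3 * x (Theorem 4.7.2 with k = 3).
from sympy import Matrix

A = Matrix([
    [0, 0, -2],
    [1, 2,  1],
    [1, 0,  3],
])
x = Matrix([-1, 0, 1])   # eigenvector of A for the eigenvalue 2 (from the example above)

print(A * x == 2 * x)            # True
print(A**3 * x == 2**3 * x)      # True
```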

Tutorial 4

1. For the matrices below find;

(a) the characteristic equations.


(b) the eigenvalues.
(c) bases for the eigenspaces.
     
(a)
3 0
8 −1
(b)
−2 −7
1 2
(c)
0 0
0 0
(d)
−2 0 1
−6 −2 0
19 5 −4
(e)
5 0 1
1 1 0
−7 1 0
(f)
5 6 2
0 −1 −8
1 0 −2
(g)
0 0 2 0
1 0 1 0
0 1 −2 0
0 0 0 1
(h)
10 −9 0 0
4 −2 0 0
0 0 −2 −7
0 0 1 2
2. Determine whether A is diagonalizable. If so, find a matrix P that diagonalizes A, and
determine P −1 AP.
(a)
19 −9 6
25 −11 −9
17 −9 −4
(b)
−1 4 −2
−3 4 0
−3 1 3
(c)
5 0 0
1 5 0
0 1 5
