
Mathematical Methods

in Electrical Engineering
Lecture 𝟎𝟑 – Matrices and Determinants
Part II

Dr. Elie Abou Diwan


Adjoint and Inverse
THEOREM:
If matrix A is invertible (i.e. if A⁻¹ exists), then det(A⁻¹) = 1/det(A) and det(A) ≠ 0.

Proof:
Matrix A is invertible ⇒ AA⁻¹ = Iₙ ⇒ det(AA⁻¹) = det(Iₙ) = 1

Since det(AA⁻¹) = det(A)·det(A⁻¹), it follows that:

det(A)·det(A⁻¹) = 1 ⇒ det(A⁻¹) = 1/det(A)
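As a quick numerical sanity check of this theorem, the sketch below (assuming NumPy is available; the 3 × 3 matrix is simply the invertible example used later in this lecture) compares det(A⁻¹) with 1/det(A):

```python
# Minimal check of det(A^-1) = 1/det(A); A is an arbitrary invertible example.
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [-2.0, -4.0, 3.0],
              [5.0, 4.0, -2.0]])

det_A = np.linalg.det(A)                      # determinant of A (here -1)
det_A_inv = np.linalg.det(np.linalg.inv(A))   # determinant of A^-1

print(det_A, det_A_inv, 1.0 / det_A)
assert np.isclose(det_A_inv, 1.0 / det_A)
```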
Adjoint and Inverse (cont.)
EXAMPLE:
Find α where
A = α     −3
    4    1 − α
is not invertible.

If 𝐴 is not invertible, then det 𝐴 = 0. Therefore:


det(A) = α(1 − α) + 12 = −α² + α + 12 = 0

Δ = b² − 4ac = 1 − 4(−1)(12) = 49

Finally:

α₁ = (−b + √Δ) / (2a) = (−1 + 7) / (−2) = −3
α₂ = (−b − √Δ) / (2a) = (−1 − 7) / (−2) = 4
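The same roots can be checked symbolically; a small sketch assuming SymPy is available:

```python
# Solve det(A) = 0 for alpha, where A = [[alpha, -3], [4, 1 - alpha]].
import sympy as sp

alpha = sp.symbols('alpha')
A = sp.Matrix([[alpha, -3],
               [4, 1 - alpha]])

# det(A) = alpha*(1 - alpha) + 12 = -alpha**2 + alpha + 12
print(sp.solve(sp.Eq(A.det(), 0), alpha))   # [-3, 4]
```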
Adjoint and Inverse (cont.)
DEFINITION:
The matrix of cofactors is the matrix 𝐶 found by replacing each element of a matrix 𝐴 by its
cofactor:

     C11   C12   ⋯   C1j   ⋯   C1n
     C21   C22   ⋯   C2j   ⋯   C2n
      ⋮     ⋮    ⋱    ⋮          ⋮
C =  Ci1   Ci2   ⋯   Cij   ⋯   Cin
      ⋮     ⋮         ⋮    ⋱    ⋮
     Cn1   Cn2   ⋯   Cnj   ⋯   Cnn

The adjoint of a matrix A is the transpose of the matrix of cofactors. It is denoted by adj(A), such
that adj(A) = Cᵗ. An adjoint matrix is also called an adjugate matrix.
Adjoint and Inverse (cont.)
EXAMPLE:
Find the adjoint of the matrix
A = −1    5
     3   −2

The matrix of cofactors is

C = C11   C12
    C21   C22

where:

C11 = (−1)^(1+1) M11 = −2        C12 = (−1)^(1+2) M12 = −3
C21 = (−1)^(2+1) M21 = −5        C22 = (−1)^(2+2) M22 = −1
Adjoint and Inverse (cont.)
Therefore:

C = C11   C12   =   −2   −3
    C21   C22       −5   −1

Finally:

adj(A) = Cᵗ = −2   −5
              −3   −1
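For reference, the cofactor matrix and the adjugate can be cross-checked with SymPy (assuming it is installed), which exposes them as Matrix.cofactor_matrix() and Matrix.adjugate():

```python
import sympy as sp

A = sp.Matrix([[-1, 5],
               [3, -2]])

C = A.cofactor_matrix()   # matrix of cofactors
adj_A = A.adjugate()      # transpose of the cofactor matrix

print(C)       # Matrix([[-2, -3], [-5, -1]])
print(adj_A)   # Matrix([[-2, -5], [-3, -1]])
assert adj_A == C.T
```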
Adjoint and Inverse (cont.)
THEOREM:
If det(A) ≠ 0, then A is invertible and A⁻¹ = adj(A) / det(A) (if det(A) = 0, then A is not invertible).

EXAMPLE:
Let
A =  3    1    0
    −2   −4    3
     5    4   −2
Find A⁻¹.

Expanding along the third column:

det(A) = a13·C13 + a23·C23 + a33·C33 = −3(12 − 5) − 2(−12 + 2) = −21 + 20 = −1 ≠ 0 ⇒ A is invertible
Adjoint and Inverse (cont.)
The matrix of cofactors is

C = C11   C12   C13
    C21   C22   C23
    C31   C32   C33

where:

C11 = (−1)^(1+1) M11 = 8 − 12 = −4          C12 = (−1)^(1+2) M12 = −(4 − 15) = 11

C13 = (−1)^(1+3) M13 = −8 + 20 = 12         C21 = (−1)^(2+1) M21 = −(−2 − 0) = 2
Adjoint and Inverse (cont.)
C22 = (−1)^(2+2) M22 = −6 − 0 = −6          C23 = (−1)^(2+3) M23 = −(12 − 5) = −7

C31 = (−1)^(3+1) M31 = 3 − 0 = 3            C32 = (−1)^(3+2) M32 = −(9 − 0) = −9

C33 = (−1)^(3+3) M33 = −12 + 2 = −10
Adjoint and Inverse (cont.)
Therefore:

C = C11   C12   C13     −4   11    12
    C21   C22   C23  =   2   −6    −7
    C31   C32   C33      3   −9   −10

And:

adj(A) = Cᵗ = −4    2    3
              11   −6   −9
              12   −7  −10

Finally:

A⁻¹ = adj(A) / det(A) = −adj(A) =   4   −2   −3
                                  −11    6    9
                                  −12    7   10
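A hedged cross-check of this computation, assuming SymPy is available: build adj(A), divide by det(A), and compare with the built-in inverse.

```python
import sympy as sp

A = sp.Matrix([[3, 1, 0],
               [-2, -4, 3],
               [5, 4, -2]])

det_A = A.det()          # -1
adj_A = A.adjugate()     # transpose of the cofactor matrix
A_inv = adj_A / det_A    # A^-1 = adj(A) / det(A)

print(A_inv)             # Matrix([[4, -2, -3], [-11, 6, 9], [-12, 7, 10]])
assert A_inv == A.inv()
assert A * A_inv == sp.eye(3)
```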
Adjoint and Inverse (cont.)
REMARK:
For a 2 × 2 matrix

A = a11   a12
    a21   a22

the cofactor matrix and the adjoint are:

C = C11   C12   =    a22   −a21
    C21   C22       −a12    a11

adj(A) = Cᵗ =  a22   −a12
              −a21    a11

A⁻¹ = adj(A) / det(A) = (1 / (a11·a22 − a12·a21)) ·   a22   −a12
                                                     −a21    a11

To get A⁻¹, negate the off-diagonal elements, switch the main-diagonal elements, and then
divide by det(A).
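The shortcut translates directly into a small helper. This is only a sketch (inverse_2x2 is a hypothetical name), using exact fractions so the result matches hand computation:

```python
from fractions import Fraction

def inverse_2x2(a11, a12, a21, a22):
    """Invert [[a11, a12], [a21, a22]] via the 2x2 shortcut."""
    det = Fraction(a11 * a22 - a12 * a21)
    if det == 0:
        raise ValueError("matrix is not invertible")
    # swap the diagonal, negate the off-diagonal, divide by det(A)
    return [[ a22 / det, -a12 / det],
            [-a21 / det,  a11 / det]]

print(inverse_2x2(-1, 5, 3, -2))   # entries 2/13, 5/13, 3/13, 1/13 (as Fractions)
```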
Determinant and Linear Independence
THEOREM:
det⁡(𝐴) ≠ 0 if and only if the rows and columns of square matrix 𝐴 are linearly independent.

EXAMPLE 𝟏:
The three vectors v₁ = (1, −2, 0)ᵗ, v₂ = (4, 0, 8)ᵗ and v₃ = (3, −1, 5)ᵗ are linearly dependent.
Proof:
Let A be the matrix whose columns are v₁, v₂, and v₃:

A =  1   4   3
    −2   0  −1
     0   8   5

det(A) = 1(0 + 8) − 4(−10 − 0) + 3(−16 − 0) = 8 + 40 − 48 = 0

Since A is a 3 × 3 matrix and det(A) = 0, the vectors v₁, v₂, and v₃ are linearly dependent.


Determinant and Linear Independence (cont.)
EXAMPLE 𝟐:
The three vectors v₁ = (−2, 1, 4)ᵗ, v₂ = (1, 6, 2)ᵗ and v₃ = (0, 0, 1)ᵗ are linearly independent.
Proof:
Let A be the matrix whose columns are v₁, v₂, and v₃:

A = −2   1   0
     1   6   0
     4   2   1

det(A) = −2(6 − 0) − 1(1 − 0) = −13 ≠ 0

Since A is a 3 × 3 matrix and det(A) ≠ 0, the vectors v₁, v₂, and v₃ are linearly independent.


Determinant and Linear Independence (cont.)
EXAMPLE 𝟑:
The four vectors v₁ = (12, 0, 4, 0)ᵗ, v₂ = (3, 1, 1, 1)ᵗ, v₃ = (3, 0, 2, 0)ᵗ and v₄ = (3, 2, 0, 0)ᵗ are linearly independent.
Proof:
Let A be the matrix whose columns are v₁, v₂, v₃, and v₄:

A = 12   3   3   3
     0   1   0   2
     4   1   2   0
     0   1   0   0

Expanding along the fourth row (only a42 = 1 is non-zero):

det(A) = a41·C41 + a42·C42 + a43·C43 + a44·C44 = C42
Determinant and Linear Independence (cont.)
det(A) = C42 = (−1)^(4+2)·M42 = M42, where M42 is the minor obtained by deleting row 4 and column 2:

M42 = det  12   3   3
            0   0   2   = 12(0 − 4) − 3(0 − 8) = −48 + 24 = −24
            4   2   0

Therefore det(A) = −24 ≠ 0.

Since A is a 4 × 4 matrix and det(A) ≠ 0, the vectors v₁, v₂, v₃, and v₄ are linearly independent.


Determinant and Linear Independence (cont.)
APPLICATION:
Determine whether the vectors v₁ = (1, 4, 1)ᵗ, v₂ = (3, 5, 2)ᵗ and v₃ = (4, 9, 3)ᵗ constitute a basis of R³.

Let A be the matrix whose columns are v₁, v₂, and v₃:

A = 1   3   4
    4   5   9
    1   2   3

det(A) = 1(15 − 18) − 3(12 − 9) + 4(8 − 5) = −3 − 9 + 12 = 0

Since A is a 3 × 3 matrix and det(A) = 0, the vectors v₁, v₂, and v₃ are linearly dependent.

Consequently, v₁, v₂, and v₃ do not constitute a basis of R³.
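The determinant test used in these examples is easy to automate; a sketch assuming SymPy is available (is_basis is a hypothetical helper name):

```python
import sympy as sp

def is_basis(vectors):
    """n vectors in R^n form a basis iff the matrix having them as columns
    has a non-zero determinant."""
    A = sp.Matrix(vectors).T      # the given vectors become the columns of A
    return A.det() != 0

print(is_basis([[1, 4, 1], [3, 5, 2], [4, 9, 3]]))    # False (application above)
print(is_basis([[-2, 1, 4], [1, 6, 2], [0, 0, 1]]))   # True  (Example 2)
```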
Rank and Nullity of a Matrix
DEFINITION:
The rank of an 𝑚 × 𝑛 matrix 𝐴 is the maximum number of linearly independent rows of 𝐴.

How to Find Matrix Rank:


The maximum number of linearly independent rows in a matrix is equal to the number of non-
zero rows in its row echelon matrix. Therefore, to find the rank of a matrix, we simply transform
the matrix to its row echelon form and count the number of non-zero rows.

EXAMPLE:
Consider the matrix

A = 0   1   2
    1   2   1
    2   7   8

and its row echelon form

A_ref = 1   2   1
        0   1   2
        0   0   0

Because the row echelon form A_ref has two non-zero rows, matrix A has two linearly independent
rows, and therefore the rank of A is 2.
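The same rank computation can be reproduced with SymPy (assuming it is available): rref() returns the reduced row echelon form together with the pivot columns, and rank() counts the non-zero rows.

```python
import sympy as sp

A = sp.Matrix([[0, 1, 2],
               [1, 2, 1],
               [2, 7, 8]])

A_rref, pivot_cols = A.rref()
print(A_rref)           # reduced row echelon form: two non-zero rows
print(len(pivot_cols))  # 2
print(A.rank())         # 2
```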
Rank and Nullity of a Matrix (cont.)
DEFINITION:
Let r1, r2, …, rm denote the rows of A; the row space of A is R(A) = span{r1, r2, …, rm}. The rank of A is rank(A) = dim R(A) (the dimension of R(A)).

REMARKS:
• REMARK 𝟏: Elementary row operations do not change the rank of the matrix
• REMARK 𝟐: If a matrix 𝐴 is in row echelon form, then the rank of 𝐴 is equal to the number of
leading 1’s
Rank and Nullity of a Matrix (cont.)
EXAMPLE 𝟏:
Let
A = 1   2   3   4
    2   6   7   11
    2   2   5   5
Find the rank of A.

1   2   3   4        1   2   3   4      Row 1
2   6   7   11   →   0   2   1   3      −2·Row 1 + Row 2
2   2   5   5        0  −2  −1  −3      −2·Row 1 + Row 3

1   2   3   4        1   2   3    4     Row 1
0   2   1   3    →   0   1   1/2  3/2   (1/2)·Row 2
0  −2  −1  −3        0   0   0    0     Row 2 + Row 3

Finally, rank(A) = number of leading 1's = 2


Rank and Nullity of a Matrix (cont.)
EXAMPLE 𝟐:
Let
A = −1   0  −1   2
     2   0   2   0
     1   0   1  −1
Find the rank of A.

−1   0  −1   2        −1   0  −1   2     Row 1
 2   0   2   0    →    0   0   0   4     2·Row 1 + Row 2
 1   0   1  −1         0   0   0   1     Row 1 + Row 3

−1   0  −1   2         1   0   1  −2     −Row 1
 0   0   0   4    →    0   0   0   0     Row 2 − 4·Row 3
 0   0   0   1         0   0   0   1     Row 3

Swapping rows 2 and 3 puts the matrix in row echelon form, and rank(A) = number of leading 1's = 2


Rank and Nullity of a Matrix (cont.)
DEFINITION:
The kernel (or null space) of an m × n matrix A is the set of all n-dimensional column vectors x
such that Ax = 0. That is, ker(A) = null space of A = N(A) = {x ∈ Rⁿ | Ax = 0}.
The nullity of A is the dimension of ker(A), i.e. N_A = dim{N(A)}.

a11   a12   ⋯   a1j   ⋯   a1n       x1       0
a21   a22   ⋯   a2j   ⋯   a2n       x2       0
 ⋮     ⋮    ⋱    ⋮          ⋮        ⋮    =   ⋮
ai1   ai2   ⋯   aij   ⋯   ain       xi       0
 ⋮     ⋮         ⋮    ⋱    ⋮        ⋮        ⋮
am1   am2   ⋯   amj   ⋯   amn       xn       0
Rank and Nullity of a Matrix (cont.)
EXAMPLE:
Let
A =  2   3   5
    −4   2   3
Find ker(A).

The kernel of this 2 × 3 matrix consists of all vectors x = (a, b, c)ᵗ ∈ R³ for which Ax = 0, i.e.:

 2   3   5      a       0
                b   =
−4   2   3      c       0

Therefore (a homogeneous system of linear equations in a, b, and c):

 2a + 3b + 5c = 0
−4a + 2b + 3c = 0
Rank and Nullity of a Matrix (cont.)
The system can be written in augmented matrix form as:

 2a + 3b + 5c = 0            2   3   5 | 0
−4a + 2b + 3c = 0     ⇔     −4   2   3 | 0

Row reduction:

 2   3   5 | 0          2   3    5 | 0      Row 1
−4   2   3 | 0    →     0   8   13 | 0      2·Row 1 + Row 2

 2   3    5 | 0         1   3/2   5/2 | 0      (1/2)·Row 1
 0   8   13 | 0   →     0    1   13/8 | 0      (1/8)·Row 2

⇒   a + (3/2)b + (5/2)c = 0       ⇒   a = −(1/16)c
    b + (13/8)c = 0                   b = −(13/8)c
Rank and Nullity of a Matrix (cont.)
Now we can express an element of the kernel:

x = (a, b, c)ᵗ = (−(1/16)c, −(13/8)c, c)ᵗ = c·(−1/16, −13/8, 1)ᵗ, where c is a scalar.

The kernel of A, ker(A), is precisely the solution set of these equations.

That is, ker(A) = N(A) = span{(−1/16, −13/8, 1)ᵗ} and N_A = dim N(A) = 1.
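The same kernel can be cross-checked with SymPy's nullspace(), assuming it is available; it returns a basis of ker(A) as column vectors.

```python
import sympy as sp

A = sp.Matrix([[2, 3, 5],
               [-4, 2, 3]])

basis = A.nullspace()
print(basis)        # [Matrix([[-1/16], [-13/8], [1]])]
print(len(basis))   # nullity N_A = 1
```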
Rank and Nullity of a Matrix (cont.)
THEOREM:
If A is an m × n matrix, then the sum of the rank of A and the nullity of A is the number of
columns of A, i.e. rank(A) + N_A = n.

EXAMPLE 𝟏:
Let
A = −1   2   1
     2  −4  −2
    −3   6   3
Find the rank of A and the nullity of A.

−1   2   1        1  −2  −1     −Row 1
 2  −4  −2   →    0   0   0     2·Row 1 + Row 2
−3   6   3        0   0   0     −3·Row 1 + Row 3

rank(A) = number of leading 1's = 1
Rank and Nullity of a Matrix (cont.)
Remark:
The row space of A is R(A) = span{(1, −2, −1)} (the row space of A is the span of the linearly
independent rows).

To find the nullity of A, N_A, we solve Ax = 0 with x = (a, b, c)ᵗ. Therefore:

1  −2  −1     a     0
0   0   0     b  =  0    ⇒   a − 2b − c = 0   ⇒   a = 2b + c
0   0   0     c     0

Hence:

x = (2b + c, b, c)ᵗ = b(2, 1, 0)ᵗ + c(1, 0, 1)ᵗ
Rank and Nullity of a Matrix (cont.)
Therefore:

(2, 1, 0)ᵗ and (1, 0, 1)ᵗ span the solution set (we can write x as a linear combination of these two vectors).

Consequently, N(A) = span{(2, 1, 0)ᵗ, (1, 0, 1)ᵗ} and N_A = dim N(A) = 2.

Check:
rank(A) + N_A = 1 + 2 = 3 = n (the number of columns of A)
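A hedged sketch of the same check with SymPy (assuming it is installed): rank(A) plus the nullity should equal the number of columns.

```python
import sympy as sp

A = sp.Matrix([[-1, 2, 1],
               [2, -4, -2],
               [-3, 6, 3]])

rank = A.rank()               # 1
nullity = len(A.nullspace())  # 2 (dimension of the kernel)
print(rank, nullity, A.cols)  # 1 2 3
assert rank + nullity == A.cols
```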
Rank and Nullity of a Matrix (cont.)
EXAMPLE 𝟐:
Let
A = 1   1   2   3   2
    1   1   3   1   4
Find the nullity of A.

N_A = dim{N(A)} where N(A) = {x ∈ R⁵ | Ax = 0}.

STEP 𝟏: Find the reduced row echelon form (RREF) of 𝐴.

1   1   2   3   2        1   1   2   3   2     Row 1
1   1   3   1   4   →    0   0   1  −2   2     −Row 1 + Row 2

1   1   2   3   2        1   1   0   7  −2     Row 1 − 2·Row 2
0   0   1  −2   2   →    0   0   1  −2   2     Row 2
Rank and Nullity of a Matrix (cont.)
STEP 𝟐: Find x = (x₁, x₂, x₃, x₄, x₅)ᵗ such that Ax = 0.

The reduced system reads:

x₁ + x₂ + 7x₄ − 2x₅ = 0
x₃ − 2x₄ + 2x₅ = 0

Solving for the leading variables in terms of the non-leading variables gives:

x₁ = −x₂ − 7x₄ + 2x₅
x₃ = 2x₄ − 2x₅
Rank and Nullity of a Matrix (cont.)
Therefore:

x = (x₁, x₂, x₃, x₄, x₅)ᵗ = (−x₂ − 7x₄ + 2x₅, x₂, 2x₄ − 2x₅, x₄, x₅)ᵗ
  = x₂(−1, 1, 0, 0, 0)ᵗ + x₄(−7, 0, 2, 1, 0)ᵗ + x₅(2, 0, −2, 0, 1)ᵗ
Rank and Nullity of a Matrix (cont.)
STEP 𝟑: Find N(A) and deduce the nullity N_A of A.

N(A) = span{(−1, 1, 0, 0, 0)ᵗ, (−7, 0, 2, 1, 0)ᵗ, (2, 0, −2, 0, 1)ᵗ}

Also, these three vectors are linearly independent, thus they form a basis for N(A).

Finally, N_A = dim N(A) = 3.
Rank and Nullity of a Matrix (cont.)
EXAMPLE 𝟑:
Let
A = 1   1   1   1
    2   3   4   5
Find the nullity of A.

N_A = dim{N(A)} where N(A) = {x ∈ R⁴ | Ax = 0}.

STEP 𝟏: Find the reduced row echelon form (RREF) of 𝐴.

1   1   1   1        1   1   1   1     Row 1
2   3   4   5   →    0   1   2   3     −2·Row 1 + Row 2

1   1   1   1        1   0  −1  −2     Row 1 − Row 2
0   1   2   3   →    0   1   2   3     Row 2
Rank and Nullity of a Matrix (cont.)
STEP 𝟐: Find x = (x₁, x₂, x₃, x₄)ᵗ such that Ax = 0.

The reduced system reads:

x₁ − x₃ − 2x₄ = 0
x₂ + 2x₃ + 3x₄ = 0

Solving for the leading variables in terms of the non-leading variables gives:

x₁ = x₃ + 2x₄
x₂ = −2x₃ − 3x₄
Rank and Nullity of a Matrix (cont.)
Therefore:

x = (x₁, x₂, x₃, x₄)ᵗ = (x₃ + 2x₄, −2x₃ − 3x₄, x₃, x₄)ᵗ = x₃(1, −2, 1, 0)ᵗ + x₄(2, −3, 0, 1)ᵗ

STEP 𝟑: Find N(A) and deduce the nullity N_A of A.

N(A) = span{(1, −2, 1, 0)ᵗ, (2, −3, 0, 1)ᵗ}

Also, these two vectors are linearly independent, thus they form a basis for N(A).

Finally, N_A = dim N(A) = 2.
Theorem
THEOREM:
Let 𝐴 be an 𝑛 × 𝑛 matrix, then each of the following eight statements implies the other seven.
• STATEMENT 𝟏: 𝐴 is invertible
• STATEMENT 𝟐: det⁡(𝐴) ≠ 0
• STATEMENT 𝟑: The rows of A are linearly independent, and so are the columns
• STATEMENT 𝟒: 𝑟𝑎𝑛𝑘 𝐴 = 𝑛
• STATEMENT 𝟓: N_A = 0
• STATEMENT 𝟔: 𝐴 is row equivalent to 𝐼𝑛 (we can change 𝐴 to 𝐼𝑛 by a sequence of elementary
row operations)
• STATEMENT 𝟕: The only solution to the homogeneous system 𝐴𝑥 = 0 is the trivial solution
𝑥=0
• STATEMENT 𝟖: The system Ax = b has a unique solution for every n × 1 matrix b (b is a
vector of n rows): x = A⁻¹b (see the sketch after this list)
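As an illustration of Statement 8, a sketch assuming SymPy is available (the right-hand side b below is an arbitrary choice; the 3 × 3 matrix is the invertible example from earlier):

```python
import sympy as sp

A = sp.Matrix([[3, 1, 0],
               [-2, -4, 3],
               [5, 4, -2]])
b = sp.Matrix([1, 0, 2])   # any 3x1 right-hand side

x = A.inv() * b            # unique solution, since det(A) = -1 != 0
print(x)
assert A * x == b
```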
Linear Transformations
DEFINITION:
Let 𝑉 and 𝑊 be two vector spaces. A mapping 𝑓: ⁡𝑉 → 𝑊 is said to be a linear transformation if:
• CONDITION 𝟏: f(u + v) = f(u) + f(v) ∀ u, v ∈ V
• CONDITION 𝟐: f(αv) = αf(v) ∀ α ∈ R and v ∈ V

EXAMPLE 𝟏:

The mapping f: V → W, v ↦ f(v) = 0, is a linear transformation called the zero transformation.

Proof:
f(u + v) = 0 = 0 + 0 = f(u) + f(v) (COND. 1 verified)
f(αv) = 0 = α·0 = αf(v) (COND. 2 verified)
Linear Transformations (cont.)
EXAMPLE 𝟐:

The mapping 𝑓: 𝑉 → 𝑉 is a linear transformation called the identity transformation.


𝑣→𝑓 𝑣 =𝑣
Proof:
𝑓 𝑢 + 𝑣 = 𝑢 + 𝑣 = 𝑓 𝑢 + 𝑓 𝑣 (COND. 𝟏 verified)
𝑓 α𝑣 = α𝑣 = α𝑓(𝑣) (COND. 𝟐 verified)

EXAMPLE 𝟑:
Let A be an m × n matrix and let f: Rⁿ → Rᵐ be the mapping defined by f(v) = Av ∀ v ∈ Rⁿ.
Therefore, f is a linear transformation called multiplication by A.

Proof:
f(u + v) = A(u + v) = Au + Av = f(u) + f(v) (COND. 1 verified)
f(αv) = A(αv) = α(Av) = αf(v) (COND. 2 verified)
Linear Transformations (cont.)
INTEGRAL OPERATOR:
Let C[0,1] be the set of all continuous functions defined on the interval [0,1].
Let J: C[0,1] → R be the mapping defined by J[f(x)] = ∫₀¹ f(x) dx.
Therefore, J is a linear transformation.

Proof:
J[f(x) + g(x)] = ∫₀¹ (f(x) + g(x)) dx = ∫₀¹ f(x) dx + ∫₀¹ g(x) dx = J[f(x)] + J[g(x)]

J[αf(x)] = ∫₀¹ αf(x) dx = α ∫₀¹ f(x) dx = αJ[f(x)]
Linear Transformations (cont.)
DIFFERENTIAL OPERATOR:
Let D: C¹[0,1] → C[0,1] be the mapping defined by D[f(x)] = f′(x).
Therefore, D is a linear transformation.

Proof:
D[f(x) + g(x)] = (f(x) + g(x))′ = f′(x) + g′(x) = D[f(x)] + D[g(x)]
D[αf(x)] = (αf(x))′ = αf′(x) = αD[f(x)]
Linear Transformations (cont.)
ROTATION:
Let 𝒗′ = (x′, y′) be the vector obtained by rotating 𝒗 = (x, y) by an angle θ counterclockwise.
[Figure: 𝒗 and 𝒗′ in the plane, with angles α and θ and |𝒗′| = |𝒗| = r]

Write x = r cos α and y = r sin α.

Using cos(π − x) = −cos x and cos(a + b) = cos a cos b − sin a sin b:
x′ = −r cos(π − (α + θ)) = r cos(α + θ) = r cos α cos θ − r sin α sin θ = x cos θ − y sin θ

Using sin(π − x) = sin x and sin(a + b) = sin a cos b + cos a sin b:
y′ = r sin(π − (α + θ)) = r sin(α + θ) = r sin α cos θ + r cos α sin θ = x sin θ + y cos θ
Linear Transformations (cont.)
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ

In matrix notation:

x′       cos θ   −sin θ     x
     =
y′       sin θ    cos θ     y

Consequently, 𝒗′ = A𝒗.

Therefore, rotation around the origin is a linear transformation.
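A small numerical illustration of the rotation matrix, assuming NumPy is available (rotate is a hypothetical helper name): rotating (1, 0) by 90° counterclockwise should give approximately (0, 1).

```python
import numpy as np

def rotate(v, theta):
    """Rotate a 2D vector v by an angle theta (radians) counterclockwise."""
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return A @ v

print(rotate(np.array([1.0, 0.0]), np.pi / 2))   # approximately [0. 1.]
```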


Linear Transformations (cont.)
PROPERTIES:
If 𝑓: 𝑉 → 𝑊 is a linear transformation then:

PROPERTY 𝟏: f(0) = 0
Proof: f(0) = f(0 + 0) = f(0) + f(0) (since f is a linear transformation)
⇒ f(0) = 2f(0) ⇒ f(0) = 0

PROPERTY 𝟐: f(−v) = −f(v)
Proof: f(αv) = αf(v) with α = −1 (since f is a linear transformation)

PROPERTY 𝟑: f(αu + βv) = αf(u) + βf(v)
Proof: f(αu + βv) = f(αu) + f(βv) (since f is a linear transformation)
f(αu) + f(βv) = αf(u) + βf(v) (since f is a linear transformation)
Linear Transformations (cont.)
DEFINITION (Matrix 𝑴(𝒇) of 𝒇):
Let 𝑓:⁡R𝑛 → R𝑚 be a linear transformation. Let 𝐵 = 𝑒1 , 𝑒2 , … , 𝑒𝑛 be the standard basis of R𝑛 .
The matrix 𝑀(𝑓) of 𝑓 is the 𝑚 × 𝑛 matrix whose columns are the vectors 𝑓(𝑒1 ), 𝑓(𝑒2 ), …, 𝑓(𝑒𝑛 ).

EXAMPLE:
Let f: R² → R² and let B = {e₁, e₂} be the standard basis of R². The matrix M(f) of f is:

M(f) = m11   m12
       m21   m22

where the first column is f(e₁) and the second column is f(e₂).
Linear Transformations (cont.)
THEOREM:
f(v) = M(f)·v

EXAMPLE 𝟏:
Let f: R² → R³ be the linear transformation defined by f(x, y) = (x − 2y, 3x, 2x + y).
Find the matrix M(f) of f.

B = {e₁ = (1, 0), e₂ = (0, 1)} is the standard basis of R².

f(e₁) = f(1, 0) = (1, 3, 2)
f(e₂) = f(0, 1) = (−2, 0, 1)

The matrix M(f) of f is:

M(f) = 1   −2
       3    0
       2    1

Check: M(f)·(x, y)ᵗ = (x − 2y, 3x, 2x + y)ᵗ = f(x, y)
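The column-by-column construction of M(f) can be sketched in code (assuming NumPy; matrix_of is a hypothetical helper name): apply f to each standard basis vector and stack the results as columns.

```python
import numpy as np

def f(v):
    # the linear map from Example 1: f(x, y) = (x - 2y, 3x, 2x + y)
    x, y = v
    return np.array([x - 2 * y, 3 * x, 2 * x + y])

def matrix_of(f, n):
    """Build M(f): its j-th column is f(e_j) for the standard basis of R^n."""
    columns = [f(np.eye(n)[:, j]) for j in range(n)]
    return np.column_stack(columns)

M = matrix_of(f, 2)
print(M)                 # [[ 1. -2.] [ 3.  0.] [ 2.  1.]]
v = np.array([1.0, 1.0])
print(f(v), M @ v)       # both give [-1.  3.  3.]
```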
Linear Transformations (cont.)
EXAMPLE 𝟐:
Let T: R³ → R² be defined by T(x, y, z)ᵗ = (x, y)ᵗ. Is T linear? If yes, find M(T).

Let u = (x, y, z)ᵗ and v = (x′, y′, z′)ᵗ ∈ R³.

T(u + v) = T(x + x′, y + y′, z + z′)ᵗ = (x + x′, y + y′)ᵗ = (x, y)ᵗ + (x′, y′)ᵗ = T(u) + T(v)

T(αu) = T(αx, αy, αz)ᵗ = (αx, αy)ᵗ = α(x, y)ᵗ = αT(u)

Therefore, T is a linear transformation.
Linear Transformations (cont.)
B = {e₁ = (1, 0, 0), e₂ = (0, 1, 0), e₃ = (0, 0, 1)} is the standard basis of R³.

T(e₁) = T(1, 0, 0)ᵗ = (1, 0)ᵗ
T(e₂) = T(0, 1, 0)ᵗ = (0, 1)ᵗ
T(e₃) = T(0, 0, 1)ᵗ = (0, 0)ᵗ

M(T) = 1   0   0
       0   1   0

Check: for u = (x, y, z)ᵗ, M(T)·u = (x, y)ᵗ = T(u).
