
Matrices and Determinants

1. Matrices
1.1. Basic concepts
A matrix is a rectangular array of (real or complex) numbers:

A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1m} \\
a_{21} & a_{22} & \cdots & a_{2m} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nm}
\end{pmatrix}
The numbers in the matrix are called its entries.
The size of a matrix is described by the number of its rows and its columns. A
has n rows and m columns, thus it is an n × m (or: n by m) matrix.
Matrices A and B are equal if they are of the same size and a_{ij} = b_{ij} for all i and j.
A matrix with just one column is a column vector:

b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
A matrix with just one row is a row vector:

c = \begin{pmatrix} c_1 & c_2 & \cdots & c_m \end{pmatrix}
A square matrix has an equal number of rows and columns:

S = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
A zero matrix/zero vector is a matrix/vector all of whose entries are zeroes:

0 = \begin{pmatrix}
0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0
\end{pmatrix}

A square matrix is called diagonal if all of its entries, apart from those on the leading diagonal (top left to lower right), are zeroes:

D = \begin{pmatrix}
a_{11} & 0 & \cdots & 0 \\
0 & a_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_{nn}
\end{pmatrix}

A diagonal matrix is called a unit matrix if all of its entries on the leading diagonal are 1's (and all of its other entries are zeroes):

I = \begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}

A square matrix is symmetric if a_{ij} = a_{ji} for any i and j:

F = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{12} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{nn}
\end{pmatrix}

A square matrix is skew-symmetric if a_{ij} = -a_{ji} for any i and j. In this case every entry on the leading diagonal must be 0:

G = \begin{pmatrix}
0 & a_{12} & \cdots & a_{1n} \\
-a_{12} & 0 & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
-a_{1n} & -a_{2n} & \cdots & 0
\end{pmatrix}

1.2. Operations on Matrices
1.2.1. The Transpose of a Matrix
The transpose A^T of a matrix A is obtained by interchanging the rows and columns of A.

Property:

(A^T)^T = A.

1.2.2. Addition of Matrices


If two matrices are of the same size, they can be added by summing the corresponding entries. (Subtraction can be defined similarly.)

Properties:

(a) A + B = B + A (addition is commutative);


(b) (A + B) + C = A + (B + C) (addition is associative);
(c) (A + B)^T = A^T + B^T.

1.2.3. Multiplication by a Scalar


If a_{ij} is a typical entry of A, then λA is the matrix whose corresponding entry is λa_{ij}.

Properties:
(a) (λµ)A = λ(µA);
(b) (λ + µ)A = λA + µA;
(c) λ(A + B) = λA + λB.
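
A minimal NumPy sketch (the numpy dependency and the small test matrices are assumptions made purely for illustration) that checks the addition, scalar-multiplication and transpose rules numerically:

import numpy as np

A = np.array([[1.0, 4.0], [2.0, -5.0]])
B = np.array([[0.0, 3.0], [-1.0, 2.0]])
lam, mu = 2.0, -3.0

# Addition is commutative and associative
assert np.allclose(A + B, B + A)
assert np.allclose((A + B) + A, A + (B + A))

# Transposing distributes over a sum: (A + B)^T = A^T + B^T
assert np.allclose((A + B).T, A.T + B.T)

# Scalar rules: (lam*mu)A = lam(mu*A), (lam+mu)A = lam*A + mu*A, lam(A+B) = lam*A + lam*B
assert np.allclose((lam * mu) * A, lam * (mu * A))
assert np.allclose((lam + mu) * A, lam * A + mu * A)
assert np.allclose(lam * (A + B), lam * A + lam * B)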

1.2.4. Multiplication of Matrices


(a) A row vector and a column vector with the same number of entries can be multiplied; their product is the scalar product:

\begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix} \cdot \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = a_1 b_1 + a_2 b_2 + \ldots + a_n b_n

(b) The definition can be extended to cover the product AB of any two matrices, provided that the number of columns in A is the same as the number of rows in B.

If A is of size n × m and B is of size m × p, then the product AB is a matrix C of size n × p, whose entry c_{ij} is the scalar product of the i-th row of A and the j-th column of B:

\begin{pmatrix}
a_{11} & \cdots & a_{1m} \\
\vdots & & \vdots \\
a_{i1} & \cdots & a_{im} \\
\vdots & & \vdots \\
a_{n1} & \cdots & a_{nm}
\end{pmatrix}
\cdot
\begin{pmatrix}
b_{11} & \cdots & b_{1j} & \cdots & b_{1p} \\
\vdots & & \vdots & & \vdots \\
b_{m1} & \cdots & b_{mj} & \cdots & b_{mp}
\end{pmatrix}
=
\begin{pmatrix}
c_{11} & \cdots & c_{1j} & \cdots & c_{1p} \\
\vdots & & \vdots & & \vdots \\
c_{i1} & \cdots & c_{ij} & \cdots & c_{ip} \\
\vdots & & \vdots & & \vdots \\
c_{n1} & \cdots & c_{nj} & \cdots & c_{np}
\end{pmatrix},
\qquad
c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}

Properties:
(a) If AB exists, BA does not necessarily exist. If both AB and BA exist,
then in general AB ≠ BA (multiplication is not commutative);
(b) Taking the product of A and the unit matrix (I) of suitable size, the result is A:

A_{n×m} · I_m = A_{n×m} = I_n · A_{n×m};

(c) Multiplication is associative:

(A · B) · C = A · (B · C);

(d) The distributive laws are valid:

(A + B) · C = A · C + B · C and A · (B + C) = A · B + A · C;

(e) Transposing reverses the order of products:

(A · B)^T = B^T · A^T.
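
A short NumPy sketch (the numpy import and the particular matrices are illustrative assumptions, not part of the original notes) demonstrating properties (a), (b) and (e):

import numpy as np

A = np.array([[1, 4, -3],
              [2, -5, 3]])          # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])              # 3 x 2

# (a) AB is 2 x 2 while BA is 3 x 3, so AB and BA cannot be equal here
print((A @ B).shape, (B @ A).shape)

# (b) multiplying by unit matrices of suitable size leaves A unchanged
assert np.array_equal(np.eye(2, dtype=int) @ A, A)
assert np.array_equal(A @ np.eye(3, dtype=int), A)

# (e) transposing reverses the order of the product
assert np.array_equal((A @ B).T, B.T @ A.T)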

1.3. Applications of Matrices
In practice there is often a need for summing some of the entries of a matrix, interchanging two rows (or two columns), etc. Computers use matrix multiplication for these tasks.

1.3.1. Summing Vectors


The entries in each row of a matrix can be summed by post-multiplying it by a column vector all of whose entries are 1's (a summing vector). Similarly, pre-multiplying the matrix by a row vector of 1's sums the entries in each column.
E.g.:

A = \begin{pmatrix} 1 & 4 & -3 \\ 2 & -5 & 3 \end{pmatrix}, \quad
B = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad
C = \begin{pmatrix} 1 & 1 \end{pmatrix}

AB = \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
CA = \begin{pmatrix} 3 & -1 & 0 \end{pmatrix}.
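
The same example checked with NumPy (values taken from the matrices above; the import is assumed):

import numpy as np

A = np.array([[1, 4, -3],
              [2, -5, 3]])
B = np.ones((3, 1), dtype=int)      # summing column vector
C = np.ones((1, 2), dtype=int)      # summing row vector

print(A @ B)    # row sums:    [[2], [0]]
print(C @ A)    # column sums: [[3, -1, 0]]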

1.3.2. Permutation Matrices


As seen above (in multiplication property (b)), multiplying a matrix by the unit matrix of suitable size leaves the original matrix unchanged. If the columns of a unit matrix are interchanged (permuted), and a matrix is post-multiplied by this permutation matrix, then the columns of the original matrix are permuted in the same way.
E.g.:

\begin{pmatrix} 1 & 4 & -3 \\ 2 & -5 & 3 \end{pmatrix} \cdot
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} =
\begin{pmatrix} -3 & 1 & 4 \\ 3 & 2 & -5 \end{pmatrix}

Here the columns of the permutation matrix are, in order, c_3, c_1, c_2 of the unit matrix, and the columns of the result are, correspondingly, c_3, c_1, c_2 of the original matrix.

Similarly, if the rows of a unit matrix are interchanged (permuted), and a matrix is pre-multiplied by this permutation matrix, then the rows of the original matrix are permuted in the same way.
E.g.:

\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \cdot
\begin{pmatrix} 1 & 4 & -3 \\ 2 & -5 & 3 \end{pmatrix} =
\begin{pmatrix} 2 & -5 & 3 \\ 1 & 4 & -3 \end{pmatrix}

Here the rows of the permutation matrix are r_2, r_1 of the unit matrix, and the rows of the result are r_2, r_1 of the original matrix.
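
Both permutation examples can be reproduced with NumPy (a sketch; the permutation matrices are obtained by reordering the columns or rows of an identity matrix):

import numpy as np

A = np.array([[1, 4, -3],
              [2, -5, 3]])

# Post-multiplication: columns of I3 taken in the order c3, c1, c2
P_cols = np.eye(3, dtype=int)[:, [2, 0, 1]]
print(A @ P_cols)     # [[-3  1  4]
                      #  [ 3  2 -5]]

# Pre-multiplication: rows of I2 swapped
P_rows = np.eye(2, dtype=int)[[1, 0], :]
print(P_rows @ A)     # [[ 2 -5  3]
                      #  [ 1  4 -3]]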

1.3.3. Other Examples
Any change made in the columns of the unit matrix that we post-multiply a matrix with results in the same change in the columns of the original matrix. Similarly, any change made in the rows of the unit matrix that we pre-multiply a matrix with results in the same change in the rows of the original matrix.

E.g.:

B = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}, \quad
C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad
D = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

In the case of BC, the third column of B is multiplied by 3:

BC = \begin{pmatrix} 1 & 2 & 9 \\ 4 & 5 & 18 \\ 7 & 8 & 27 \end{pmatrix}

In the case of CB, the third row of B is multiplied by 3:

CB = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 21 & 24 & 27 \end{pmatrix}

In the case of BD, twice the first column of B is added to its second column:

BD = \begin{pmatrix} 1 & 4 & 3 \\ 4 & 13 & 6 \\ 7 & 22 & 9 \end{pmatrix}

In the case of DB, twice the second row of B is added to its first row:

DB = \begin{pmatrix} 9 & 12 & 15 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
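
A NumPy check of the four products above (values copied from B, C and D; the import is assumed):

import numpy as np

B = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
C = np.diag([1, 1, 3])
D = np.array([[1, 2, 0],
              [0, 1, 0],
              [0, 0, 1]])

print(B @ C)   # third column of B multiplied by 3
print(C @ B)   # third row of B multiplied by 3
print(B @ D)   # twice the first column of B added to its second column
print(D @ B)   # twice the second row of B added to its first row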

2. Determinants
2.1. The determinant of a 2 × 2 matrix
If A is a 2 × 2 matrix, then its determinant is the product of the entries of A on the leading diagonal minus the product of the entries on the other diagonal.

A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}; \qquad \det A = ad - bc
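
For instance, with concrete values (chosen here purely for illustration):

\begin{vmatrix} 3 & 1 \\ 2 & 5 \end{vmatrix} = 3 \cdot 5 - 1 \cdot 2 = 13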

2.2. Matrices of bigger size


For matrices of larger size the definition is recursive: for a 3 × 3 matrix the value is found, essentially, by extracting 2 × 2 determinants from the 3 × 3 determinant.
If, in an n × n matrix A, we cross out the row and column through the entry a_{ij}, we are left with a matrix of size (n − 1) × (n − 1). The determinant of this matrix is known as the minor of the entry a_{ij}.

E.g. the minor of entry 4 of the matrix B = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} is \begin{vmatrix} 2 & 3 \\ 8 & 9 \end{vmatrix}.
Minors have associated signs, + or −, depending on the position in the determinant of the entry of which they are a minor. The associated sign of the minor of entry a_{ij} is the sign of (−1)^{i+j}. The signs are shown in this diagram (chessboard pattern):

\begin{pmatrix}
+ & - & + & \cdots \\
- & + & - & \cdots \\
+ & - & + & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
The minor of a particular entry, together with its associated sign, is called the cofactor of that entry.

E.g. in the case of B, the cofactor of entry 4 is -\begin{vmatrix} 2 & 3 \\ 8 & 9 \end{vmatrix} = -(2 \cdot 9 - 3 \cdot 8) = 6, and the cofactor of entry 3 is \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 4 \cdot 8 - 5 \cdot 7 = -3.

Now the determinant of a matrix can be defined as the sum of the products of
the entries of the first row with their respective cofactors (first-row expansion).

A square matrix is regular if its determinant is not 0.


A square matrix is singular if its determinant is 0.
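
A minimal Python sketch of this definition (plain lists of lists, no libraries; written for clarity rather than speed, and the function name is my own):

def det(M):
    """Determinant by first-row cofactor expansion."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor of entry M[0][j]: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        # chessboard sign of the first-row entry in column j is (-1)^j
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # 0  -> this matrix is singular
print(det([[3, 1], [2, 5]]))                    # 13 -> this matrix is regular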

2.3. Properties of Determinants
(a) If two rows are interchanged, the determinant changes sign.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = -\begin{vmatrix} b_1 & b_2 & b_3 \\ a_1 & a_2 & a_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
(b) If two rows are identical, the value of the determinant is 0.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ a_1 & a_2 & a_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0
(c) The determinant can be expanded by any row, instead of the first.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = -b_1 \begin{vmatrix} a_2 & a_3 \\ c_2 & c_3 \end{vmatrix} + b_2 \begin{vmatrix} a_1 & a_3 \\ c_1 & c_3 \end{vmatrix} - b_3 \begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \end{vmatrix}
(d) If all entries in a row are zeroes, then the value of the determinant is zero.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ 0 & 0 & 0 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0
(e) If all the entries of a row of a determinant are multiplied by λ, the resulting
determinant is the λ-multiple of the original.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ \lambda c_1 & \lambda c_2 & \lambda c_3 \end{vmatrix} = \lambda \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
(f) If any row is added to or subtracted from any other row, the value of the
determinant is not changed.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 + a_1 & b_2 + a_2 & b_3 + a_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
(g) The value of the determinant is unaltered if a multiple of any row is added
to any other row.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 + \lambda a_1 & b_2 + \lambda a_2 & b_3 + \lambda a_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
(h) The value of the determinant is unaltered when the rows and columns are
completely interchanged.
E.g. \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}
The last property ensures that any property proved for rows is also valid for
columns.
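
Two of these properties checked numerically with NumPy (the 3 × 3 test matrix is an arbitrary illustrative choice):

import numpy as np

M = np.array([[1.0, 4.0, -3.0],
              [2.0, -5.0, 3.0],
              [0.0, 1.0, 2.0]])

# (a) interchanging two rows changes the sign of the determinant
assert np.isclose(np.linalg.det(M[[1, 0, 2], :]), -np.linalg.det(M))

# (h) interchanging rows and columns (transposing) leaves the determinant unchanged
assert np.isclose(np.linalg.det(M.T), np.linalg.det(M))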

3. The Inverse of a Square Matrix

Let A be a square matrix of size n × n. If there exists a matrix A^{-1} such that

A · A^{-1} = A^{-1} · A = I_n,

where I_n is the n × n unit matrix, then A^{-1} is called the inverse matrix of A.

Properties:
(a) If A is singular, then it does not have an inverse.
(b) If A is regular, then it has one and only one inverse.

(c) If, for n × n matrices, A · B = I_n, then B = A^{-1} and A = B^{-1}.

(d) (A^{-1})^{-1} = A.
(e) (A · B)^{-1} = B^{-1} · A^{-1}.

Finding the inverse of matrix A:

(a) First, the entries of A are replaced by their cofactors, and the resulting
matrix is transposed. This is the adjoint or adjugate of A (adj A).

(b) Secondly, to obtain the inverse, each entry of adj A is divided by the value
of the determinant of A.
A^{-1} = \frac{\operatorname{adj} A}{\det A}
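
A sketch of this procedure in Python/NumPy (it follows the cofactor-then-transpose route described above rather than calling numpy.linalg.inv directly; the function name and test matrix are my own illustrative choices):

import numpy as np

def inverse_via_adjugate(A):
    """Inverse of a regular square matrix via its adjugate."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular, so it has no inverse")
    # cofactor matrix: C[i, j] = (-1)^(i+j) times the minor of a_ij
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / d            # adjugate (transposed cofactor matrix) divided by det A

A = np.array([[3.0, 1.0],
              [2.0, 5.0]])
print(inverse_via_adjugate(A))    # agrees with np.linalg.inv(A)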
