
1ZC3 Linear Algebra – Final Lectures Summary – Winter 2021

Lecture 1 – Systems of Linear Equations (Section 1.1)


A system of linear equations can have a unique solution, no solution (inconsistent), or infinitely many solutions.
Augmented Matrix: A coefficient matrix with the right-hand-side column tacked on, used to represent a system.

Elementary Row Operations of a matrix:


a) Multiply a row by a nonzero constant, 𝑟𝑖 → 𝑘𝑟𝑖 .
b) Add to a given row some multiple of another row, 𝑟𝑖 → 𝑟𝑖 + 𝑘𝑟𝑗 .
c) Exchange two rows, 𝑟𝑖 ↔ 𝑟𝑗 .
Combining (a) and (b) in a single step (𝑟𝑖 → 𝑎𝑟𝑖 + 𝑏𝑟𝑗) is NOT an elementary row operation.

Lecture 2 – Gaussian Elimination (Section 1.2)


Row-Echelon Form of a matrix:
a) The first non-zero entry in each row is a leading 1.
b) Leading 1s occur further to the right for every lower row.
c) Zero rows are at the bottom.
Reduced Row-Echelon Form:
d) In addition, each leading 1 is the only nonzero entry in its column.

Gaussian Elimination: Use elementary row operations to find the row-echelon form. Eliminate entries one
column at a time from the left, below the diagonal entries. To solve the system, recreate the equations and
use back substitution.
Gauss-Jordan Elimination: Continue from Gaussian Elimination to find the reduced row-echelon form.
Eliminate entries one column at a time from the right, above the diagonal entries. Recreate the equations
and solve for leading variables. No back substitution is required.
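
For illustration (not part of the original notes), here is a minimal Python sketch of Gauss-Jordan elimination using SymPy's rref(); the system being solved is made up:

import sympy as sp

# Augmented matrix for a hypothetical system: x + 2y = 5, 3x + 4y = 6
M = sp.Matrix([[1, 2, 5],
               [3, 4, 6]])

# rref() carries out Gauss-Jordan elimination, returning the reduced
# row-echelon form together with the indices of the pivot columns
R, pivots = M.rref()
print(R)       # Matrix([[1, 0, -4], [0, 1, 9/2]])  ->  x = -4, y = 9/2
print(pivots)  # (0, 1)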

Homogeneous Linear System: 𝐴𝑥⃗ = 0⃗ always has the trivial solution 𝑥⃗ = 0⃗.
Theorem 1.2.2: A homogeneous system with more unknowns than equations must have infinitely many solutions.

Lecture 3 – Matrix Operations (Section 1.3)


Matrix: A rectangular array of numbers (𝑚 rows by 𝑛 columns, or 𝑛 × 𝑛 if square).
Addition: If 𝐴 and 𝐵 are the same size, then 𝐴 + 𝐵 adds the matrices together element-wise.
Scaling: If 𝑐 is a scalar, then 𝑐𝐴 multiplies each entry in 𝐴 by 𝑐.

Matrix Multiplication: If 𝐴 is an 𝑚 × 𝑟 matrix and 𝐵 is an 𝑟 × 𝑛 matrix, then 𝐶 = 𝐴𝐵 is the 𝑚 × 𝑛 matrix


whose (𝑖, 𝑗)th entry is the dot product 𝑐𝑖𝑗 = (row 𝑖 of 𝐴) ∙ (column 𝑗 of 𝐵).

The product 𝐴𝐵 exists only when the inner size 𝑟 matches (the number of columns of 𝐴 equals the number of rows of 𝐵).
𝐴𝐵 ≠ 𝐵𝐴 in general; when 𝐴𝐵 = 𝐵𝐴, the two matrices are said to commute.
A linear system of equations can be written in matrix notation as 𝐴𝑥⃗ = 𝑏⃗.

Transpose: denoted by 𝐴𝑇 , is the 𝑛 × 𝑚 matrix obtained by interchanging the rows and columns of 𝐴.


Associative Rule: (𝐴𝐵)𝐶 = 𝐴(𝐵𝐶) holds for matrices whenever the products are defined.
Trace: denoted by 𝑡𝑟(𝐴), is the sum of the diagonal entries of an 𝑛 × 𝑛 matrix. Note that 𝑡𝑟(𝐴𝐵) = 𝑡𝑟(𝐵𝐴) and
𝑡𝑟(𝐴 + 𝐵) = 𝑡𝑟(𝐴) + 𝑡𝑟(𝐵).
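
As a quick numerical sanity check (my addition, with made-up matrices), NumPy confirms the dot-product definition of matrix multiplication and the identity 𝑡𝑟(𝐴𝐵) = 𝑡𝑟(𝐵𝐴):

import numpy as np

A = np.array([[1, 2], [3, 4]])   # made-up 2 x 2 matrices
B = np.array([[5, 6], [7, 8]])

C = A @ B
# The (i, j) entry of AB is the dot product of row i of A and column j of B
assert C[0, 1] == A[0, :] @ B[:, 1]

# AB != BA in general, yet tr(AB) = tr(BA)
print(np.allclose(A @ B, B @ A))           # False
print(np.trace(A @ B), np.trace(B @ A))    # 69 69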

Lecture 4, 5 – Inverses and Properties (Section 1.4)


Matrix Properties: Matrices satisfy most of the usual rules of arithmetic, except that 𝐴𝐵 ≠ 𝐵𝐴 in general, and:
a) 𝐴𝐵 = 𝐴𝐶 does not imply 𝐵 = 𝐶.
b) 𝐴𝐵 = 0 does not imply 𝐴 = 0 or 𝐵 = 0.

Identity Matrix: An 𝑛 × 𝑛 matrix with 1 on the diagonal and 0 elsewhere. It has the property that 𝐴𝐼 = 𝐴 and
𝐼𝐴 = 𝐴 whenever the products are defined.
Inverse Matrix: If 𝐴 is a square matrix, then 𝐵 = 𝐴−1 is the inverse matrix of 𝐴 if 𝐴𝐵 = 𝐵𝐴 = 𝐼. If such 𝐵
exists, then the matrix 𝐴 is called invertible or non-singular.
Theorem 1.4.4: If 𝐵 and 𝐶 are both inverses of 𝐴, then 𝐵 = 𝐶
Theorem 1.4.5: Let 𝐴 = [𝑎 𝑏; 𝑐 𝑑]. 𝐴 is invertible if and only if 𝑎𝑑 − 𝑏𝑐 ≠ 0, in which case 𝐴−1 = (1/(𝑎𝑑 − 𝑏𝑐)) [𝑑 −𝑏; −𝑐 𝑎].

Solving a system: If 𝐴𝑥⃗ = 𝑏⃗⃗ and 𝐴 is invertible, then 𝑥⃗ = 𝐴−1 𝑏⃗⃗.
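
Here is a small Python sketch (not from the lectures) of the 2 × 2 inverse formula and of solving 𝐴𝑥⃗ = 𝑏⃗ with it; the matrix values are made up:

import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # ad - bc = 1, so A is invertible
b = np.array([3.0, 8.0])

# 2 x 2 inverse formula from Theorem 1.4.5
a, bb, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
A_inv = (1 / (a * d - bb * c)) * np.array([[d, -bb], [-c, a]])

print(A_inv @ b)              # x = A^{-1} b  ->  [1. 1.]
print(np.linalg.solve(A, b))  # same answer without forming the inverse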

Theorem 1.4.6: If 𝐴 and 𝐵 are 𝑛 × 𝑛 and invertible, then 𝐴𝐵 is also invertible and (𝐴𝐵)−1 = 𝐵−1 𝐴−1 .

Theorem 1.4.7: If 𝐴 is invertible, then:


a) 𝐴−1 is invertible and (𝐴−1 )−1 = 𝐴
b) If 𝑛 is a non-negative integer, then 𝐴𝑛 is invertible and (𝐴𝑛 )−1 = (𝐴−1 )𝑛
c) If 𝑘 is a nonzero scalar, then (𝑘𝐴)−1 = (1/𝑘)𝐴−1
Theorem 1.4.8: Properties of transposes
a) (𝐴𝑇 )𝑇 = 𝐴
b) (𝐴 + 𝐵)𝑇 = 𝐴𝑇 + 𝐵𝑇
c) (𝐴 − 𝐵)𝑇 = 𝐴𝑇 − 𝐵𝑇
d) (𝑘𝐴)𝑇 = 𝑘𝐴𝑇
e) (𝐴𝐵)𝑇 = 𝐵𝑇 𝐴𝑇

Theorem 1.4.9: If 𝐴 is invertible, then (𝐴𝑇 )−1 = (𝐴−1 )𝑇

Lecture 5, 6 – Elementary Matrices (Section 1.5)


Elementary Matrix: denoted by 𝐸, is a matrix obtained from 𝐼 using an elementary row operation.
Theorem 1.5.1: Multiplying by 𝐸 performs on 𝐴 the same row operation that was used to create 𝐸 from 𝐼.
Theorem 1.5.2: Every elementary matrix is invertible, and E −1 is also elementary.

Theorem 1.5.3: (TFAE) If 𝐴 is an 𝑛 × 𝑛 matrix, then the following are equivalent:


a) 𝐴 is invertible.
b) 𝐴𝑥⃗ = ⃗0⃗ has only the trivial solution (𝑥⃗ = ⃗0⃗).
c) The reduced row-echelon form of 𝐴 is 𝐼.
d) 𝐴 can be written as a product of elementary matrices.

Inversion Algorithm: the inverse can be found by row reducing [𝐴 | 𝐼] into [𝐼 | 𝐴−1 ].
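
The inversion algorithm can be sketched in a few lines of SymPy (an illustration of mine, using an assumed 2 × 2 matrix):

import sympy as sp

A = sp.Matrix([[2, 1], [5, 3]])   # assumed invertible matrix

# Row reduce the augmented block [A | I]; the right half becomes A^{-1}
R, _ = A.row_join(sp.eye(2)).rref()
A_inv = R[:, 2:]
print(A_inv)                   # Matrix([[3, -1], [-5, 2]])
assert A * A_inv == sp.eye(2)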

Lecture 7 – More on Linear Systems (Section 1.6)


Theorem 1.6.2: If 𝐴 is 𝑛 × 𝑛 and invertible, then 𝑥⃗ = 𝐴−1 𝑏⃗⃗ is the only solution to 𝐴𝑥⃗ = 𝑏⃗⃗ for every 𝑛 × 1 𝑏⃗⃗.
Theorem 1.6.3: to do.

Theorem 1.6.4: Additions to TFAE


e) 𝐴𝑥⃗ = 𝑏⃗⃗ is consistent for every 𝑏⃗⃗.
f) 𝐴𝑥⃗ = 𝑏⃗⃗ has exactly one solution for every 𝑏⃗⃗.

Theorem 1.6.5: If 𝐴𝐵 is invertible, and 𝐴 and 𝐵 are square, then 𝐴 and 𝐵 are invertible.

Lecture 8 – Diagonal, Triangular, and Symmetric Matrices (Section 1.7)


Diagonal Matrix: An 𝑛 × 𝑛 matrix whose off-diagonal entries are all zero. If all diagonal entries of a diagonal matrix 𝐷 are nonzero, then 𝐷 is invertible and (𝐷−1 )𝑖𝑖 = 1/𝑑𝑖𝑖 . If 𝑘 is a positive integer, then (𝐷𝑘 )𝑖𝑖 = (𝐷𝑖𝑖 )𝑘 .
Upper Triangular Matrix: A matrix with all zeroes below the diagonal.

Lower Triangular Matrix: A matrix with all zeroes above the diagonal.
Triangular: A matrix that is either upper or lower triangular.

Theorem 1.7.1:
a) If 𝐴 is upper triangular then 𝐴𝑇 is lower triangular. If 𝐴 is lower triangular then 𝐴𝑇 is upper triangular.
b) If 𝐴 and 𝐵 are lower triangular, then 𝐴𝐵 is lower triangular. If 𝐴 and 𝐵 are upper triangular, then 𝐴𝐵 is
upper triangular.
c) If 𝐴 is triangular and all diagonal entries are nonzero, then 𝐴 is invertible.
d) Suppose 𝐴 is invertible. If 𝐴 is upper triangular, then 𝐴−1 is upper triangular. If 𝐴 is lower triangular, then
𝐴−1 is lower triangular.

Symmetric Matrix: A square matrix 𝐴 that satisfies 𝐴 = 𝐴𝑇 , where entries are symmetric across the diagonal.
Theorem 1.7.2: If 𝐴 and 𝐵 are 𝑛 × 𝑛 symmetric matrices, then:
a) 𝐴𝑇 and 𝐵𝑇 are symmetric.
b) 𝐴 + 𝐵 and 𝐴 − 𝐵 are symmetric.
c) If 𝑘 is a scalar, then 𝑘𝐴 and 𝑘𝐵 are symmetric.

Theorem 1.7.4: If 𝐴 is invertible and symmetric, then 𝐴−1 is also symmetric.

Skew Symmetric: A square matrix 𝐴 that satisfies 𝐴𝑇 = −𝐴.

Lecture 9 – Determinants (Section 2.1)


2 × 2 Determinant: If 𝐴 = [𝑎 𝑏; 𝑐 𝑑], then det(𝐴) = |𝑎 𝑏; 𝑐 𝑑| = 𝑎𝑑 − 𝑏𝑐.

Minor: If 𝐴 is a square matrix, then the minor of 𝑎𝑖𝑗 , denoted by 𝑀𝑖𝑗 , is the determinant of the matrix obtained
from 𝐴 by deleting row 𝑖 and column 𝑗.
Cofactor: denoted by 𝑐𝑖𝑗 , is equal to 𝑐𝑖𝑗 = (−1)𝑖+𝑗 𝑀𝑖𝑗 . The sign factor (−1)𝑖+𝑗 alternates in a checkerboard pattern across the rows and columns.

Determinant: If 𝐴 is an 𝑛 × 𝑛 matrix then det(𝐴) = 𝑎11 𝑐11 + 𝑎12 𝑐12 + ⋯ + 𝑎1𝑛 𝑐1𝑛 is called the cofactor
expansion of 𝐴 along row 1. The determinant can be obtained using cofactor expansion along any row or
column, with the row/column with the most 0s being the easiest.

Theorem 2.1.2: If 𝐴 is triangular then det(𝐴) is the product of the diagonal entries of 𝐴.
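
For concreteness, here is a direct (and deliberately naive, O(n!)) Python implementation of cofactor expansion along row 1; the triangular test matrix is made up and also illustrates Theorem 2.1.2:

def det(A):
    """Determinant by cofactor expansion along the first row (for illustration only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1j}: delete row 1 and column j (0-indexed below)
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # Cofactor sign (-1)^{1+j} reduces to (-1)^j with 0-based j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# Triangular example: the determinant is the product of the diagonal
print(det([[1, 2, 3], [0, 4, 5], [0, 0, 6]]))   # 24 = 1 * 4 * 6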

Lecture 10 – Determinants by Row Reduction (Section 2.2)


Theorem 2.2.1: If 𝐴 is an 𝑛 × 𝑛 matrix and has a row or column of zeroes, then det(𝐴) = 0.
Theorem 2.2.2: If 𝐴 is an 𝑛 × 𝑛 matrix then det(𝐴) = det(𝐴𝑇 ).

Theorem 2.2.3: Let 𝐴 be an 𝑛 × 𝑛 matrix. Then:


a) A nonzero scalar can be factored out of any row or column of a determinant
b) If 𝐵 is obtained from 𝐴 by a row/column exchange, then det(𝐵) = − det(𝐴).
c) If 𝐵 is obtained from 𝐴 by adding to a row/column some multiple of another, then det(𝐵) = det(𝐴).

Theorem 2.2.4: If 𝐸 is an elementary matrix then det(𝐸) ≠ 0.

Lecture 11, 12 – Properties of Determinants (Section 2.3)


Lemma 2.3.2: If 𝐸 and 𝐵 are 𝑛 × 𝑛 matrices, and 𝐸 is elementary, then det(𝐸𝐵) = det(𝐸) det (𝐵).

Theorem 2.3.3: (TFAE) 𝐴 is invertible if and only if det(𝐴) ≠ 0.

Theorem 2.3.4: If 𝐴 and 𝐵 are 𝑛 × 𝑛 matrices, then det(𝐴𝐵) = det(𝐴) det(𝐵). Since determinants are scalars
and scalar multiplication commutes, this also gives det(𝐴𝐵) = det(𝐴) det(𝐵) = det(𝐵𝐴).
Theorem 2.3.5: If 𝐴 is invertible then det(𝐴−1 ) = 1/det(𝐴).

Adjoint: denoted by adj(𝐴), is the transpose of the matrix of cofactors.


Theorem 2.3.6: If 𝐴 is invertible then 𝐴−1 = (1/det(𝐴)) adj(𝐴).

Lecture 12, 13 – Eigenvalues and Eigenvectors (Section 5.1)


Eigenvectors and Eigenvalues: Let 𝐴 be an 𝑛 × 𝑛 matrix. 𝑥⃗ is called an eigenvector of 𝐴 with eigenvalue 𝜆 if
𝑥⃗ ≠ 0⃗ and 𝐴𝑥⃗ = 𝜆𝑥⃗. Notice that 𝐴𝑥⃗ = 𝜆𝑥⃗ = 𝜆𝐼𝑥⃗ can be rearranged to (𝜆𝐼 − 𝐴)𝑥⃗ = 0⃗ with 𝑥⃗ ≠ 0⃗. By
TFAE, 𝜆𝐼 − 𝐴 must be singular (not invertible), and therefore det(𝜆𝐼 − 𝐴) = 0. To find the eigenvalues,
solve det(𝜆𝐼 − 𝐴) = 0 for 𝜆. To find all the eigenvectors, find the nonzero solutions to (𝜆𝐼 − 𝐴)𝑥⃗ = 0⃗
for each eigenvalue found in the previous step.
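
In practice the computation can be checked with NumPy, as in this sketch of mine (the matrix is hypothetical):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and a matrix whose columns are unit eigenvectors
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                    # e.g. [5. 2.]

# Verify A x = lambda x for each eigenpair
for lam, x in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ x, lam * x)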

Linear Combination: Multiplying vectors by constants and adding them up: 𝑥⃗ = 𝑐1 𝑣⃗1 + 𝑐2 𝑣⃗2 + ⋯ + 𝑐𝑛 𝑣⃗𝑛 .
Independence: Let 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑛 }. 𝑆 is independent if none of the vectors in 𝑆 can be written as a linear
combination of the other vectors in 𝑆.
Eigenspace: 𝑆 is a basis for the eigenspace of an eigenvalue 𝜆 if 𝑆 is independent and every eigenvector 𝑥⃗ for 𝜆
can be written as a linear combination of vectors in 𝑆.

Theorem 5.1.2: If 𝐴 is triangular then the eigenvalues are the diagonal entries of 𝐴.
Theorem 5.1.4: (TFAE) 𝐴 is invertible if and only if 𝜆 = 0 is not an eigenvalue of 𝐴.

Lecture 14, 15 – Diagonalization (Section 5.2)


Similarity: If 𝐴 and 𝐵 are 𝑛 × 𝑛 matrices, then they are similar if there is an invertible 𝑛 × 𝑛 matrix 𝑃, such that
𝐵 = 𝑃−1 𝐴𝑃. If they are indeed similar, then the following are the same for both 𝐴 and 𝐵:
a) Determinant
b) Trace
c) Characteristic Polynomial det(𝜆𝐼 − 𝐴)
d) Eigenvalues
e) Invertibility

Diagonalization: An 𝑛 × 𝑛 matrix 𝐴 is diagonalizable if there exists an invertible matrix 𝑃 and a diagonal matrix
𝐷 such that 𝑃−1 𝐴𝑃 = 𝐷 (in other words, 𝐴 is similar to 𝐷). To find such a 𝑃 and 𝐷, let 𝑃 =
[𝑝⃗1 𝑝⃗2 𝑝⃗3 …], where the columns 𝑝⃗𝑖 are basis vectors for the eigenspaces. Let 𝐷 be the diagonal matrix
with the eigenvalues 𝜆1 , 𝜆2 , 𝜆3 , … (in the same order as the 𝑝⃗𝑖 ) as the diagonal entries. Then 𝑃−1 𝐴𝑃 = 𝐷,
provided there are enough independent eigenvectors to fill the matrix 𝑃.

Algebraic Multiplicity: of eigenvalue 𝜆0 , is the power of (𝜆 − 𝜆0 ) in the characteristic polynomial.


Geometric Multiplicity: The number of vectors in a basis for the eigenspace corresponding to 𝜆0 .

Theorem 5.2.4: Let 𝐴 be a square matrix. Then:


a) For every eigenvalue 𝜆 of 𝐴, the G.M. is less than or equal to the A.M.
b) 𝐴 is diagonalizable if and only if det(𝜆𝐼 − 𝐴) can be expressed as a product of linear factors, and for
every eigenvalue 𝜆 G.M. is equal to A.M.

Powers of a Matrix: If 𝐴 is diagonalizable and 𝑘 is a positive integer, then 𝐴𝑘 = 𝑃𝐷 𝑘 𝑃−1 (since all middle 𝑃𝑃−1
terms cancel out), which is much easier to compute because (𝐷 𝑘 )𝑖𝑖 = (𝐷𝑖𝑖 )𝑘 .
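
A small NumPy sketch (my own example matrix, not from the lectures) tying the diagonalization recipe to matrix powers:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors; D holds the eigenvalues in the same order
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)   # P^{-1} A P = D

# A^k = P D^k P^{-1}; D^k just raises each diagonal entry to the k-th power
k = 5
Ak = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)
assert np.allclose(Ak, np.linalg.matrix_power(A, k))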

Lecture 16, 17 – Differential Equations (Section 5.4)


Differential Equation: An equation involving a function and its derivatives (ex. 𝑦 ′′ + 3𝑦 ′ − 2𝑦 = 0). A solution
is a function that satisfies such an equation.
Natural Exponentials: The general solution to 𝑦′ = 𝑎𝑦 is 𝑦 = 𝑐𝑒^(𝑎𝑥), where 𝑎 and 𝑐 are constants.

First-Order Linear Systems: A first-order homogeneous linear system with constant coefficients can be expressed
as 𝑦⃗′ = 𝐴𝑦⃗, where 𝐴 is a matrix of constants. If 𝐴 is diagonal, then each individual equation has the form
𝑦′ = 𝑎𝑦 and can be solved using natural exponentials.
Solution by Diagonalization: Suppose that 𝑦⃗′ = 𝐴𝑦⃗, where 𝐴 = 𝑃𝐷𝑃−1 is diagonalizable. Let 𝑦⃗ = 𝑃𝑢⃗ and
𝑦⃗′ = 𝑃𝑢⃗′, where 𝑢⃗ is an unknown vector. Substituting into the first equation gives 𝑃𝑢⃗′ = 𝐴𝑃𝑢⃗. Multiplying
both sides by 𝑃−1 gives 𝑢⃗′ = 𝑃−1 𝐴𝑃𝑢⃗. But 𝑃−1 𝐴𝑃 = 𝐷, so 𝑢⃗′ = 𝐷𝑢⃗. This system is now solvable
because the coefficient matrix is diagonal. Once 𝑢⃗ is found, 𝑦⃗ can be recovered from the assumption 𝑦⃗ = 𝑃𝑢⃗.

Theorem 5.4.1 (restatement of the above): Suppose that 𝐴 is diagonalizable with eigenvectors 𝑥⃗1 , 𝑥⃗2 , … , 𝑥⃗𝑛 and
corresponding eigenvalues 𝜆1 , 𝜆2 , … , 𝜆𝑛 . Then the general solution to the system of differential equations
𝑦⃗′ = 𝐴𝑦⃗ is 𝑦⃗ = 𝑐1 𝑥⃗1 𝑒^(𝜆1 𝑥) + 𝑐2 𝑥⃗2 𝑒^(𝜆2 𝑥) + ⋯ + 𝑐𝑛 𝑥⃗𝑛 𝑒^(𝜆𝑛 𝑥).
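
A numerical sketch of Theorem 5.4.1 (the system and constants are invented): build the general solution from an eigen-decomposition and check 𝑦⃗′ = 𝐴𝑦⃗ with a finite difference:

import numpy as np

A = np.array([[3.0, 0.0],
              [8.0, -1.0]])
eigvals, eigvecs = np.linalg.eig(A)
c = np.array([2.0, -1.0])        # arbitrary constants c1, c2

def y(t):
    # y(t) = c1 x1 e^(lambda1 t) + c2 x2 e^(lambda2 t)
    return eigvecs @ (c * np.exp(eigvals * t))

# Central finite difference confirms y' = A y at a sample point
t, h = 0.3, 1e-6
assert np.allclose((y(t + h) - y(t - h)) / (2 * h), A @ y(t), atol=1e-4)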

Higher-Order ODE: Introduce new functions and create a linear system.

Lecture 18 – Complex Numbers (Section 10.1, 10.2)


Complex Number: A number of the form 𝑧 = 𝑎 + 𝑏𝑖, where 𝑎, 𝑏 are real numbers and 𝑖 = √−1 (so 𝑖² = −1).
𝑧 can be represented as a vector in the complex plane, where 𝑦 is the imaginary axis and 𝑥 is the real axis.
Modulus: If 𝑧 = 𝑎 + 𝑏𝑖, then the modulus of 𝑧 is |𝑧| = √(𝑎² + 𝑏²), |𝑧| ≥ 0.
Conjugate: If 𝑧 = 𝑎 + 𝑏𝑖, then the conjugate of 𝑧 is 𝑧̅ = 𝑎 − 𝑏𝑖.

Complex Addition: Like vectors, (𝑎 + 𝑏𝑖) + (𝑐 + 𝑑𝑖) = (𝑎 + 𝑐) + (𝑏 + 𝑑)𝑖.


Complex Multiplication: Expanding, (𝑎 + 𝑏𝑖)(𝑐 + 𝑑𝑖) = 𝑎𝑐 + 𝑎𝑑𝑖 + 𝑏𝑐𝑖 + 𝑏𝑑𝑖² = (𝑎𝑐 − 𝑏𝑑) + (𝑎𝑑 + 𝑏𝑐)𝑖.
Complex Division: (𝑎 + 𝑏𝑖)/(𝑐 + 𝑑𝑖) = ((𝑎 + 𝑏𝑖)(𝑐 − 𝑑𝑖))/((𝑐 + 𝑑𝑖)(𝑐 − 𝑑𝑖)) (multiply top and bottom by the conjugate of the denominator).

Lecture 19 – Polar Form of a Complex Number (Section 10.3)


Polar Form: 𝑧 = 𝑎 + 𝑏𝑖 can instead be specified by 𝑟 and 𝜃, in the form of 𝑧 = 𝑟(cos 𝜃 + 𝑖 sin 𝜃). 𝜃 represents
the angle from the positive real axis and repeats every 2𝜋 radians. If −𝜋 < 𝜃 ≤ 𝜋, then 𝜃 is called the
principal argument of 𝑧.

Exponential Notation: 𝑒^(𝑖𝜃) means the same thing as cos 𝜃 + 𝑖 sin 𝜃. It has the property that if 𝑧1 = 𝑟1 𝑒^(𝑖𝜃1) and
𝑧2 = 𝑟2 𝑒^(𝑖𝜃2), then 𝑧1 𝑧2 = 𝑟1 𝑟2 𝑒^(𝑖(𝜃1 + 𝜃2)), an alternate way to multiply complex numbers.
DeMoivre’s Theorem: If 𝑧 = 𝑟𝑒^(𝑖𝜃), then 𝑧^𝑛 = 𝑟^𝑛 𝑒^(𝑖𝑛𝜃).
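
Python's standard cmath module can illustrate polar form and DeMoivre's Theorem; this sketch (not from the course) uses arbitrary sample values:

import cmath, math

z = 1 + 1j
r, theta = cmath.polar(z)        # modulus and principal argument
print(r, theta)                  # 1.414... 0.785... (sqrt(2), pi/4)

# DeMoivre: z^n = r^n e^(i n theta)
n = 6
print(abs(z ** n - (r ** n) * cmath.exp(1j * n * theta)) < 1e-9)   # True

# Multiplying adds the angles: z1 z2 = r1 r2 e^(i(theta1 + theta2))
z2 = cmath.rect(2, math.pi / 3)  # the complex number 2 e^(i pi/3)
print(cmath.polar(z * z2))       # (2*sqrt(2), pi/4 + pi/3)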

Lecture 21 – Vectors (Section 3.1, 3.2)


Linear Combination: 𝑤⃗ is a linear combination of the vectors 𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑘 if there exist scalars 𝑐1 , 𝑐2 , … , 𝑐𝑘 such
that 𝑤⃗ = 𝑐1 𝑣⃗1 + 𝑐2 𝑣⃗2 + ⋯ + 𝑐𝑘 𝑣⃗𝑘 .
Norm: If 𝑣⃗ = (𝑣1 , 𝑣2 , … , 𝑣𝑛 ) is in ℝ𝑛 then the norm/length/magnitude of 𝑣⃗ is ‖𝑣⃗‖ = √(𝑣1² + 𝑣2² + ⋯ + 𝑣𝑛²). It
is always nonnegative and is zero only when 𝑣⃗ = 0⃗. It also has the properties that ‖𝑘𝑣⃗‖ = |𝑘|‖𝑣⃗‖, and
(1/‖𝑣⃗‖)𝑣⃗ produces a unit vector in the direction of 𝑣⃗.
Distance: The distance between two vectors 𝑢⃗ and 𝑣⃗ is 𝑑(𝑢⃗, 𝑣⃗) = ‖𝑢⃗ − 𝑣⃗‖. In ℝ², this is √((𝑥2 − 𝑥1)² + (𝑦2 − 𝑦1)²).

Dot Product: The dot product of two vectors 𝑢⃗ and 𝑣⃗ is 𝑢⃗ ∙ 𝑣⃗ = 𝑢1 𝑣1 + 𝑢2 𝑣2 + ⋯ + 𝑢𝑛 𝑣𝑛 . It has the properties
that 𝑣⃗ ∙ 𝑣⃗ = ‖𝑣⃗‖², 𝑢⃗ ∙ 𝑣⃗ = 𝑣⃗ ∙ 𝑢⃗, and 𝑢⃗ ∙ (𝑣⃗ + 𝑤⃗) = 𝑢⃗ ∙ 𝑣⃗ + 𝑢⃗ ∙ 𝑤⃗.

Theorem 3.2.4: (The Cauchy-Schwarz Inequality) If 𝑢⃗ and 𝑣⃗ are in ℝ𝑛 then |𝑢⃗ ∙ 𝑣⃗| ≤ ‖𝑢⃗‖‖𝑣⃗‖.

Angle Formula: cos 𝜃 = (𝑢⃗ ∙ 𝑣⃗)/(‖𝑢⃗‖‖𝑣⃗‖), or 𝑢⃗ ∙ 𝑣⃗ = ‖𝑢⃗‖‖𝑣⃗‖ cos 𝜃, where 𝜃 is the angle between 𝑢⃗ and 𝑣⃗.
Theorem 3.2.5: (The Triangle Inequality) If 𝑢⃗ and 𝑣⃗ are in ℝ𝑛 then ‖𝑢⃗ + 𝑣⃗‖ ≤ ‖𝑢⃗‖ + ‖𝑣⃗‖.
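
These inequalities and the angle formula are easy to spot-check in NumPy; the vectors below are arbitrary examples of mine:

import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

dot = u @ v
nu, nv = np.linalg.norm(u), np.linalg.norm(v)
print(abs(dot) <= nu * nv)                  # Cauchy-Schwarz: True
print(np.linalg.norm(u + v) <= nu + nv)     # Triangle inequality: True

theta = np.arccos(dot / (nu * nv))          # angle formula
print(np.degrees(theta))                    # angle between u and v in degrees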

Lecture 22 – Orthogonality, Geometry (Section 3.3, 3.4)


Orthogonality: The vectors 𝑢⃗ and 𝑣⃗ are orthogonal (perpendicular) if 𝑢⃗ ∙ 𝑣⃗ = 0.
Normal Vectors: If 𝑛⃗⃗ = (𝑎, 𝑏) is perpendicular to a line then the equation of the line is 𝑎𝑥 + 𝑏𝑦 + 𝑐 = 0, with
some constant 𝑐. If 𝑛⃗⃗ = (𝑎, 𝑏, 𝑐) is perpendicular to a two-dimensional plane then the equation of the
plane is 𝑎𝑥 + 𝑏𝑦 + 𝑐𝑧 + 𝑑 = 0, with some constant 𝑑.

Projections: Given two vectors 𝑎⃗ and 𝑢⃗, we want to find a new vector 𝑢⃗1 = Proj𝑎⃗ 𝑢⃗, called the projection of
𝑢⃗ along 𝑎⃗, such that 𝑢⃗1 is parallel to 𝑎⃗ and 𝑢⃗ − 𝑢⃗1 is perpendicular to 𝑎⃗. In other words, we want to find a
constant 𝑡 such that 𝑢⃗1 = 𝑡𝑎⃗ and (𝑢⃗ − 𝑡𝑎⃗) ∙ 𝑎⃗ = 0. Rearranging, we get 𝑡 = (𝑢⃗ ∙ 𝑎⃗)/‖𝑎⃗‖², and
𝑢⃗1 = ((𝑢⃗ ∙ 𝑎⃗)/‖𝑎⃗‖²) 𝑎⃗. Note that 𝑢⃗ − 𝑢⃗1 is called the component of 𝑢⃗ orthogonal to 𝑎⃗. Also, if only the
magnitude of the projection is needed, the expression simplifies to ‖Proj𝑎⃗ 𝑢⃗‖ = |𝑢⃗ ∙ 𝑎⃗|/‖𝑎⃗‖.
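
A short sketch (my illustration, with made-up vectors) of the projection formula:

import numpy as np

def proj(u, a):
    # Projection of u along a: ((u . a) / ||a||^2) a
    return (u @ a) / (a @ a) * a

u = np.array([2.0, 3.0])
a = np.array([4.0, 0.0])

u1 = proj(u, a)
print(u1)                              # [2. 0.]
assert np.isclose((u - u1) @ a, 0.0)   # the orthogonal component really is perpendicular to a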

Theorem 3.4.1: Let 𝐿 be the line in ℝ2 or ℝ3 that contains the point 𝑥⃗0 and is parallel to the nonzero vector 𝑣⃗.
Then the equation of the line 𝐿 is 𝑥⃗ = 𝑥⃗0 + 𝑡𝑣⃗, where the components are parametric equations.
Theorem 3.4.2: Similarly, for ℝ3, a plane can be expressed as 𝑥⃗ = 𝑥⃗0 + 𝑡1 𝑣⃗1 + 𝑡2 𝑣⃗2 .
Theorem 3.4.3: If 𝐴 is an 𝑚 × 𝑛 matrix then the solution set of the homogeneous linear system 𝐴𝑥⃗ = 0⃗ consists of
all vectors in ℝ𝑛 that are orthogonal to every row vector of 𝐴.

Lecture 23 – Cross Product (Section 3.5)


Cross Product: If 𝑢⃗ = (𝑢1 , 𝑢2 , 𝑢3 ) and 𝑣⃗ = (𝑣1 , 𝑣2 , 𝑣3 ) are in ℝ3 then the cross product 𝑢⃗ × 𝑣⃗ is defined as the
determinant 𝑢⃗ × 𝑣⃗ = |𝑖̂ 𝑗̂ 𝑘̂; 𝑢1 𝑢2 𝑢3; 𝑣1 𝑣2 𝑣3|, where 𝑖̂ = (1, 0, 0), 𝑗̂ = (0, 1, 0), 𝑘̂ = (0, 0, 1). The
result is a vector that is orthogonal to both 𝑢⃗ and 𝑣⃗.
Properties of the Cross Product:
a) 𝑢⃗ × 𝑣⃗ = −(𝑣⃗ × 𝑢⃗)
b) 𝑢⃗ × (𝑣⃗ + 𝑤⃗) = 𝑢⃗ × 𝑣⃗ + 𝑢⃗ × 𝑤⃗
c) 𝑢⃗ × 𝑢⃗ = 𝑢⃗ × 0⃗ = 0⃗
d) 𝑘(𝑢⃗ × 𝑣⃗) = (𝑘𝑢⃗) × 𝑣⃗ = 𝑢⃗ × (𝑘𝑣⃗)

Lagrange’s Identity: ‖𝑢⃗ × 𝑣⃗‖² = ‖𝑢⃗‖²‖𝑣⃗‖² − (𝑢⃗ ∙ 𝑣⃗)²

Theorem 3.5.3: If 𝑢⃗ and 𝑣⃗ are in ℝ3 then ‖𝑢⃗ × 𝑣⃗‖ is the area of the parallelogram determined by 𝑢⃗ and 𝑣⃗.
Scalar Triple Product: 𝑢⃗ ∙ (𝑣⃗ × 𝑤⃗) = |𝑢1 𝑢2 𝑢3; 𝑣1 𝑣2 𝑣3; 𝑤1 𝑤2 𝑤3|; its absolute value is the volume of the
parallelepiped formed by 𝑢⃗, 𝑣⃗, and 𝑤⃗.
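
NumPy's cross product can illustrate the area and volume interpretations; the vectors here are invented for the example:

import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 0.0, 3.0])

uxv = np.cross(u, v)
print(uxv)                    # [0. 0. 2.], orthogonal to both u and v
print(np.linalg.norm(uxv))    # 2.0 = area of the parallelogram on u, v

# Scalar triple product u . (v x w) equals the 3 x 3 determinant
print(u @ np.cross(v, w))                   # 6.0 = parallelepiped volume here
print(np.linalg.det(np.array([u, v, w])))   # 6.0 (may differ in sign in general)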

Lecture 24 – Real Vector Spaces (Section 4.1)


Vector Space: Let 𝑉 be a nonempty set of objects together with two operations called “addition” and “scalar
multiplication”. 𝑉 is called a vector space, and its elements called vectors in 𝑉, if and only if the
following ten axioms are satisfied:
1) Closure under addition: If 𝑢 ⃗⃗ and 𝑣⃗ are in 𝑉, then the sum 𝑢 ⃗⃗ + 𝑣⃗ is also in 𝑉.
2) 𝑢⃗⃗ + 𝑣⃗ = 𝑣⃗ + 𝑢 ⃗⃗ for all 𝑢 ⃗⃗, 𝑣⃗ in 𝑉.
3) 𝑢⃗ + (𝑣⃗ + 𝑤⃗) = (𝑢⃗ + 𝑣⃗) + 𝑤⃗ for all 𝑢⃗, 𝑣⃗, 𝑤⃗ in 𝑉.
4) There is an object in 𝑉 called the zero vector, denoted by 0⃗, such that 𝑢⃗ + 0⃗ = 𝑢⃗ for all 𝑢⃗ in 𝑉.
5) For each 𝑢⃗ in 𝑉 there is an object in 𝑉 called the negative of 𝑢⃗, denoted by −𝑢⃗, such that 𝑢⃗ + (−𝑢⃗) = 0⃗.
6) Closure under scalar multiplication: If 𝑢 ⃗⃗ is in 𝑉 and 𝑘 is a scalar, then the product 𝑘𝑢 ⃗⃗ is also in 𝑉.
7) 𝑘(𝑢 ⃗⃗ + 𝑣⃗) = 𝑘𝑢 ⃗⃗ + 𝑘𝑣⃗.
8) (𝑘 + 𝑚)𝑢 ⃗⃗ = 𝑘𝑢 ⃗⃗ + 𝑚𝑢 ⃗⃗.
9) 𝑘(𝑚𝑢 ⃗⃗) = (𝑘𝑚)𝑢 ⃗⃗
10) 1𝑢⃗⃗ = 𝑢 ⃗⃗

Theorem 4.1.1: If 𝑉 is a vector space then it also has the following properties:
a) −𝑢 ⃗⃗ = (−1)𝑢 ⃗⃗
b) 0𝑢⃗⃗ = 0 ⃗⃗
c) 𝑘0⃗ = ⃗0⃗

d) If 𝑘𝑢⃗ = 0⃗ then either 𝑘 = 0 or 𝑢⃗ = 0⃗

Lecture 25 – Subspaces (Section 4.2)


Subspace: A subset 𝑊 of a vector space 𝑉 is a subspace of 𝑉 if 𝑊 itself is a vector space using the same addition
and scalar multiplication operations defined on 𝑉.
Theorem 4.2.1: If 𝑊 is a nonempty subset of a vector space 𝑉, then 𝑊 is a subspace if and only if the following
conditions hold for 𝑊 (no other axioms need to be checked):
a) Closure under addition
b) Closure under scalar multiplication

Lecture 26 – Span (Section 4.3)


Span: Let 𝑆 = {𝑤⃗1 , 𝑤⃗2 , … , 𝑤⃗𝑟 } be a nonempty set of vectors in a vector space 𝑉. The set of all possible linear
combinations of vectors in 𝑆 is called the span of 𝑆, denoted by span(𝑆). In set notation, the span is
written as {𝑐1 𝑤⃗1 + 𝑐2 𝑤⃗2 + ⋯ + 𝑐𝑟 𝑤⃗𝑟 | 𝑐1 , 𝑐2 , … , 𝑐𝑟 ∈ ℝ}. If 𝑉 = span(𝑆), then the vectors in 𝑆 are said
to span 𝑉, and every vector in 𝑉 can be written as a linear combination of the vectors in 𝑆.

Theorem 4.3.1: Let 𝑆 = {𝑤⃗1 , 𝑤⃗2 , … , 𝑤⃗𝑟 } be a nonempty set of vectors in a vector space 𝑉. Then 𝑊 = span(𝑆) is
a subspace of 𝑉.
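
Testing whether a vector lies in a span reduces to checking that a linear system is consistent, as in this sketch (the vectors are my own example):

import numpy as np

w1 = np.array([1.0, 0.0, 1.0])
w2 = np.array([0.0, 1.0, 1.0])
b  = np.array([2.0, 3.0, 5.0])

# b is in span{w1, w2} iff c1 w1 + c2 w2 = b is consistent;
# the spanning vectors go in as columns
M = np.column_stack([w1, w2])
c, *_ = np.linalg.lstsq(M, b, rcond=None)
print(c)                        # [2. 3.]
print(np.allclose(M @ c, b))    # True: b = 2 w1 + 3 w2, so b is in the span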

Lecture 27 – Linear Independence (Section 4.4)


Theorem 4.4.1: Let 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 } be a set of vectors in the vector space 𝑉. 𝑆 is independent if and only if
the equation 𝑘1 𝑣⃗1 + 𝑘2 𝑣⃗2 + ⋯ + 𝑘𝑟 𝑣⃗𝑟 = 0⃗ has only the trivial solution (i.e. 𝑘1 = 𝑘2 = ⋯ = 𝑘𝑟 = 0).

Theorem 4.4.3: Let 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 } be a set of vectors in ℝ𝑛. If 𝑟 > 𝑛 then 𝑆 is linearly dependent.
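
In ℝ𝑛, independence can also be tested via the rank of the matrix whose columns are the vectors; a sketch of mine with an intentionally dependent set:

import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2                    # deliberately a combination of v1 and v2

# S is independent iff the matrix with the vectors as columns has full column rank
M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))   # 2 < 3, so S is linearly dependent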

Lecture 28, 29 – Coordinates and Basis (Section 4.5)


Basis: Let 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 } be a set of vectors in the vector space 𝑉. 𝑆 is called a basis for 𝑉 if and only if 𝑆 is
independent and 𝑆 spans 𝑉.

Theorem 4.5.1: If 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 } is a basis for a vector space 𝑉 then every 𝑣⃗ ∈ 𝑉 can be written as the
combination 𝑣⃗ = 𝑐1 𝑣⃗1 + 𝑐2 𝑣⃗2 + ⋯ + 𝑐𝑟 𝑣⃗𝑟 in exactly one way.
Coordinates: Given 𝑣⃗ = 𝑐1 𝑣⃗1 + 𝑐2 𝑣⃗2 + ⋯ + 𝑐𝑟 𝑣⃗𝑟 in a vector space 𝑉 with basis 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 }, the scalars
𝑐1 , 𝑐2 , … , 𝑐𝑟 are called the coordinates of 𝑣⃗ relative to 𝑆, and the vector (𝑐1 , 𝑐2 , … , 𝑐𝑟 ) is denoted (𝑣⃗)𝑆 .

Lecture 31 – Gram-Schmidt (Section 6.3)


Orthogonal Set: A set of vectors 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 } in ℝ𝑛 is orthogonal if and only if 𝑣⃗𝑖 ∙ 𝑣⃗𝑗 = 0 for all 𝑖 ≠ 𝑗.
If, in addition, each vector in 𝑆 has length 1, then 𝑆 is called orthonormal.

Theorem 6.3.1: Every set 𝑆 of nonzero orthogonal vectors is linearly independent.


Theorem 6.3.2: Let 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 } be a basis for a subspace 𝑊 of ℝ𝑛 and let 𝑢⃗ be any vector in 𝑊.
a) If 𝑆 is orthogonal then 𝑢⃗ = ((𝑢⃗ ∙ 𝑣⃗1)/‖𝑣⃗1‖²) 𝑣⃗1 + ((𝑢⃗ ∙ 𝑣⃗2)/‖𝑣⃗2‖²) 𝑣⃗2 + ⋯ + ((𝑢⃗ ∙ 𝑣⃗𝑟)/‖𝑣⃗𝑟‖²) 𝑣⃗𝑟 .
b) If 𝑆 is orthonormal then 𝑢⃗ = (𝑢⃗ ∙ 𝑣⃗1)𝑣⃗1 + (𝑢⃗ ∙ 𝑣⃗2)𝑣⃗2 + ⋯ + (𝑢⃗ ∙ 𝑣⃗𝑟)𝑣⃗𝑟 .

Theorem 6.3.3: If a set 𝑆 of 𝑛 vectors in ℝ𝑛 spans ℝ𝑛 or is independent, then 𝑆 is a basis for ℝ𝑛 .

Orthogonal Projections: Let 𝑊 be a subspace of ℝ𝑛 and let 𝑆 = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 } be an orthogonal basis for 𝑊.
If 𝑢⃗ ∈ ℝ𝑛 then the projection of 𝑢⃗ onto 𝑊 is
Proj𝑊 𝑢⃗ = ((𝑢⃗ ∙ 𝑣⃗1)/‖𝑣⃗1‖²) 𝑣⃗1 + ((𝑢⃗ ∙ 𝑣⃗2)/‖𝑣⃗2‖²) 𝑣⃗2 + ⋯ + ((𝑢⃗ ∙ 𝑣⃗𝑟)/‖𝑣⃗𝑟‖²) 𝑣⃗𝑟 , with these properties:

a) Proj𝑊 𝑢⃗ is in 𝑊.
b) 𝑢⃗1 = 𝑢⃗ − Proj𝑊 𝑢⃗ is orthogonal to every vector in 𝑆.
c) 𝑢⃗1 = 𝑢⃗ − Proj𝑊 𝑢⃗ is orthogonal to every vector in 𝑊.
d) Proj𝑊 𝑢⃗ is independent of the choice of orthogonal basis.
Gram-Schmidt Process: Let 𝑆 = {𝑢⃗1 , 𝑢⃗2 , … , 𝑢⃗𝑟 } be a basis for a subspace 𝑊 of ℝ𝑛. To find an orthogonal basis
𝑆⊥ = {𝑣⃗1 , 𝑣⃗2 , … , 𝑣⃗𝑟 }, let
𝑣⃗1 = 𝑢⃗1 ,
𝑣⃗2 = 𝑢⃗2 − ((𝑢⃗2 ∙ 𝑣⃗1)/‖𝑣⃗1‖²) 𝑣⃗1 ,
𝑣⃗3 = 𝑢⃗3 − ((𝑢⃗3 ∙ 𝑣⃗1)/‖𝑣⃗1‖²) 𝑣⃗1 − ((𝑢⃗3 ∙ 𝑣⃗2)/‖𝑣⃗2‖²) 𝑣⃗2 ,
and so on through all 𝑟 vectors, until
𝑣⃗𝑟 = 𝑢⃗𝑟 − ((𝑢⃗𝑟 ∙ 𝑣⃗1)/‖𝑣⃗1‖²) 𝑣⃗1 − ((𝑢⃗𝑟 ∙ 𝑣⃗2)/‖𝑣⃗2‖²) 𝑣⃗2 − ⋯ − ((𝑢⃗𝑟 ∙ 𝑣⃗𝑟−1)/‖𝑣⃗𝑟−1‖²) 𝑣⃗𝑟−1 ,
which forms the orthogonal basis (a code sketch follows below).
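
A compact Python version of the process (my sketch; it assumes the input vectors are independent and skips normalization):

import numpy as np

def gram_schmidt(us):
    """Return an orthogonal basis for span(us); no normalization is done."""
    vs = []
    for u in us:
        v = u.astype(float)
        for w in vs:
            v = v - (u @ w) / (w @ w) * w   # subtract the projection of u onto w
        vs.append(v)
    return vs

S = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
V = gram_schmidt(S)
for i in range(len(V)):
    for j in range(i + 1, len(V)):
        assert abs(V[i] @ V[j]) < 1e-12    # pairwise orthogonal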

Lecture 32 – Dimension (Section 4.6)


Dimension: The dimension of a vector space 𝑉 is the number of vectors in a basis for 𝑉.
Theorem 4.6.4: Let 𝑉 be a vector space with dimension 𝑛 and let 𝑆 be a set of vectors in 𝑉 with 𝑛 vectors. Then 𝑆
is a basis for 𝑉 if 𝑆 spans 𝑉 or 𝑆 is independent.

Lecture 33, 34 – Row Space, Column Space, and Null Space (Section 4.8)
Matrix Subspaces: Let 𝐴 be an 𝑚 × 𝑛 matrix.
a) The subspace of ℝ𝑛 spanned by the rows of 𝐴 is the row space of 𝐴.
b) The subspace of ℝ𝑚 spanned by the columns of 𝐴 is the column space of 𝐴.
c) The subspace of ℝ𝑛 consisting of all solutions to the equation 𝐴𝑥⃗ = 0⃗ is the null space of 𝐴.

Theorem 4.8.5: 𝐴𝑥⃗ = 𝑏⃗⃗ is consistent if and only if 𝑏⃗⃗ is in the column space of 𝐴.
Theorem 4.8.3: Elementary row operations do not change the row space or the null space of 𝐴. They do, however,
generally change the column space of 𝐴.
Theorem 4.8.4: Let 𝑅 be a row-echelon form of 𝐴.
a) The nonzero rows of 𝑅 form a basis for the row space of 𝐴.
b) The columns of 𝐴 corresponding to the columns of 𝑅 with leading 1s form a basis for the column space of 𝐴.
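
SymPy's rref() makes both bases easy to read off, as in this sketch with a made-up matrix:

import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

R, pivots = A.rref()
row_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]  # nonzero rows of R
col_basis = [A.col(j) for j in pivots]   # columns of A at the pivot positions
print(row_basis)
print(col_basis)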

Lecture 35, 36 – Cryptography (Section 10.13)


Modular Arithmetic: If 𝑚 is a positive integer and 𝑎 and 𝑏 are any integers, then 𝑎 is equivalent to 𝑏 modulo 𝑚,
denoted by 𝑎 ≡ 𝑏 (mod 𝑚), if 𝑎 − 𝑏 is an integer multiple of 𝑚.
Result 13.1: Every integer 𝑎 is equivalent modulo 𝑚 to exactly one element of ℤ𝑚 = {0, 1, 2, … , 𝑚 − 1}.

Residue: If 𝑎 ≡ 𝑏 (mod 𝑚) and 𝑏 ∈ ℤ𝑚 then 𝑏 is the residue of 𝑎 modulo 𝑚.


Hill-2 Cipher:
1) Choose a 2 × 2 enciphering matrix 𝐴 = [𝑎 𝑏; 𝑐 𝑑].
2) Group letters into pairs and replace each letter by its numerical value (from 0 to 25).
3) For each pair 𝑝⃗ = (𝑝1, 𝑝2), let 𝑐⃗ = 𝐴𝑝⃗ be the ciphertext vector.
4) Replace each entry in 𝑐⃗ by its residue modulo 26.
5) Convert the number pairs back into letters to obtain the ciphertext.

Reciprocals: If 𝑎 ∈ ℤ𝑚 then there is a number 𝑎−1 ∈ ℤ𝑚 such that 𝑎𝑎−1 ≡ 1 (mod 𝑚) called the reciprocal or
the multiplicative inverse of 𝑎 modulo 𝑚.
Result 13.2: If 𝑎 ∈ ℤ𝑚 then 𝑎 has a reciprocal modulo 𝑚 if and only if 𝑎 and 𝑚 have no common prime factors.
Result 13.3: Let 𝐴 = [𝑎 𝑏; 𝑐 𝑑] have entries in ℤ𝑚 . If the residue of 𝑎𝑑 − 𝑏𝑐 has a reciprocal modulo 𝑚 then 𝐴 has
an inverse modulo 𝑚, and 𝐴−1 = (𝑎𝑑 − 𝑏𝑐)−1 [𝑑 −𝑏; −𝑐 𝑎] (mod 𝑚).

Deciphering: Use the same algorithm as the cipher but replace 𝐴 with 𝐴−1.
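
Putting the whole cipher together in Python (a sketch of mine: the key matrix, message, and helper names are all invented; pow(a, -1, m) for the modular reciprocal needs Python 3.8+):

import numpy as np

A = np.array([[1, 2],
              [0, 3]])   # hypothetical key; det = 3 shares no prime factor with 26

def hill(text, M):
    """Apply a Hill-2 matrix to pairs of letters (even-length, uppercase A-Z), mod 26."""
    nums, out = [ord(ch) - ord('A') for ch in text], []
    for i in range(0, len(nums), 2):
        p = np.array(nums[i:i + 2])
        out.extend(int(x) for x in (M @ p) % 26)   # c = A p (mod 26)
    return ''.join(chr(n + ord('A')) for n in out)

# A^{-1} mod 26 via Result 13.3: (ad - bc)^{-1} [d -b; -c a]
det_val = int(A[0, 0]) * int(A[1, 1]) - int(A[0, 1]) * int(A[1, 0])
det_inv = pow(det_val % 26, -1, 26)                # reciprocal of the determinant mod 26
A_inv = (det_inv * np.array([[A[1, 1], -A[0, 1]],
                             [-A[1, 0], A[0, 0]]])) % 26

cipher = hill("HELP", A)
print(cipher)               # 'PMPT'
print(hill(cipher, A_inv))  # 'HELP'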

And that’s the end of *these* notes.
