1ZC3 Final Lectures Summary
Gaussian Elimination: Use elementary row operations to find the row-echelon form. Eliminate entries one
column at a time from the left, below the diagonal entries. To solve the system, recreate the equations and
use back substitution.
Gauss-Jordan Elimination: Continue from Gaussian Elimination to find the reduced row-echelon form.
Eliminate entries one column at a time from the right, above the diagonal entries. Recreate the equations
and solve for leading variables. No back substitution is required.
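A minimal Python sketch of the procedure (not from the original notes; it assumes exact arithmetic and skips the partial pivoting a numerical solver would use). Clearing entries only below the pivots and then back substituting is Gaussian elimination; clearing above as well, as here, is Gauss-Jordan:

```python
import numpy as np

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form
    using the three elementary row operations."""
    M = aug.astype(float)
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols - 1):                     # last column holds b
        candidates = np.flatnonzero(M[pivot_row:, col])
        if candidates.size == 0:
            continue                                # no pivot in this column
        r = pivot_row + candidates[0]
        M[[pivot_row, r]] = M[[r, pivot_row]]       # swap pivot row into place
        M[pivot_row] /= M[pivot_row, col]           # scale the pivot to 1
        for i in range(rows):
            if i != pivot_row:
                M[i] -= M[i, col] * M[pivot_row]    # clear the rest of the column
        pivot_row += 1
    return M

# Solve x + 2y = 5, 3x + 4y = 6  ->  x = -4, y = 4.5
print(gauss_jordan(np.array([[1, 2, 5], [3, 4, 6]])))
```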
Homogeneous Linear System: 𝐴𝑥⃗ = 0⃗ always has the trivial solution 𝑥⃗ = 0⃗.
Theorem 1.2.2: A homogeneous system with more unknowns than equations must have infinitely many solutions.
The product 𝐶 = 𝐴𝐵 might not exist: it is defined only when the inner sizes match, i.e. 𝐴 is 𝑚 × 𝑟 and 𝐵 is 𝑟 × 𝑛.
𝐴𝐵 ≠ 𝐵𝐴 in general. When they do, they are said to commute.
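A quick numpy check of non-commutativity (matrices chosen arbitrarily for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

print(A @ B)   # [[2 1] [4 3]] -- multiplying by B on the right swaps columns
print(B @ A)   # [[3 4] [1 2]] -- multiplying by B on the left swaps rows
```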
A linear system of equations can be written in matrix notation as 𝐴𝑥⃗ = 𝑏⃗.
Identity Matrix: An 𝑛 × 𝑛 matrix with 1 on the diagonal and 0 elsewhere. It has the property that 𝐴𝐼 = 𝐴 and
𝐼𝐴 = 𝐴 whenever the products are defined.
Inverse Matrix: If 𝐴 is a square matrix, then 𝐵 = 𝐴⁻¹ is the inverse matrix of 𝐴 if 𝐴𝐵 = 𝐵𝐴 = 𝐼. If such a 𝐵 exists, then the matrix 𝐴 is called invertible or non-singular.
Theorem 1.4.4: If 𝐵 and 𝐶 are both inverses of 𝐴, then 𝐵 = 𝐶.
Theorem 1.4.5: Let 𝐴 = [𝑎 𝑏; 𝑐 𝑑] (rows separated by semicolons). Then 𝐴 is invertible if and only if 𝑎𝑑 − 𝑏𝑐 ≠ 0, in which case 𝐴⁻¹ = (1/(𝑎𝑑 − 𝑏𝑐)) [𝑑 −𝑏; −𝑐 𝑎].
Theorem 1.4.6: If 𝐴 and 𝐵 are 𝑛 × 𝑛 and invertible, then 𝐴𝐵 is also invertible and (𝐴𝐵)⁻¹ = 𝐵⁻¹𝐴⁻¹.
Inversion Algorithm: The inverse can be found by row reducing [𝐴 | 𝐼] into [𝐼 | 𝐴⁻¹].
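A sketch of the algorithm in Python (illustrative only; it assumes 𝐴 really is invertible, so a pivot can always be found):

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row reduce [A | I] into [I | A^-1] and return the right half."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])    # build [A | I]
    for col in range(n):
        r = col + np.flatnonzero(aug[col:, col])[0]  # first usable pivot row
        aug[[col, r]] = aug[[r, col]]                # swap it into place
        aug[col] /= aug[col, col]                    # scale the pivot to 1
        for i in range(n):
            if i != col:
                aug[i] -= aug[i, col] * aug[col]     # clear the rest of the column
    return aug[:, n:]

print(inverse_by_row_reduction(np.array([[1, 2], [3, 4]])))  # [[-2, 1], [1.5, -0.5]]
```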
Theorem 1.6.5: If 𝐴𝐵 is invertible, and 𝐴 and 𝐵 are square, then 𝐴 and 𝐵 are invertible.
Lower Triangular Matrix: A matrix with all zeroes above the diagonal.
Triangular: A matrix that is either upper or lower triangular.
Theorem 1.7.1:
a) If 𝐴 is upper triangular then 𝐴𝑇 is lower triangular. If 𝐴 is lower triangular then 𝐴𝑇 is upper triangular.
b) If 𝐴 and 𝐵 are lower triangular then 𝐴𝐵 is lower triangular. If 𝐴 and 𝐵 are upper triangular then 𝐴𝐵 is upper triangular.
c) If 𝐴 is triangular and all diagonal entries are nonzero, then 𝐴 is invertible.
d) Suppose 𝐴 is invertible. If 𝐴 is upper triangular, then 𝐴⁻¹ is upper triangular. If 𝐴 is lower triangular, then 𝐴⁻¹ is lower triangular.
Symmetric Matrix: A square matrix 𝐴 that satisfies 𝐴 = 𝐴𝑇 , where entries are symmetric across the diagonal.
Theorem 1.7.2: If 𝐴 and 𝐵 are 𝑛 × 𝑛 symmetric matrices, then:
a) 𝐴𝑇 and 𝐵𝑇 are symmetric.
b) 𝐴 + 𝐵 and 𝐴 − 𝐵 are symmetric.
c) If 𝑘 is a scalar, then 𝑘𝐴 and 𝑘𝐵 are symmetric.
Minor: If 𝐴 is a square matrix, then the minor of 𝑎𝑖𝑗 , denoted by 𝑀𝑖𝑗 , is the determinant of the matrix obtained
from 𝐴 by deleting row 𝑖 and column 𝑗.
Cofactor: denoted by 𝑐𝑖𝑗, is equal to 𝑐𝑖𝑗 = (−1)ⁱ⁺ʲ 𝑀𝑖𝑗. The sign factor alternates in a checkerboard pattern across rows and columns.
Determinant: If 𝐴 is an 𝑛 × 𝑛 matrix then det(𝐴) = 𝑎₁₁𝑐₁₁ + 𝑎₁₂𝑐₁₂ + ⋯ + 𝑎₁ₙ𝑐₁ₙ is called the cofactor expansion of 𝐴 along row 1. The determinant can be obtained using cofactor expansion along any row or column; the row or column with the most 0s is the easiest.
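A direct Python transcription of cofactor expansion along row 1 (fine as an illustration for small matrices; the recursion does on the order of 𝑛! work, so it is not how determinants are computed in practice):

```python
def det(A):
    """Determinant of a square matrix (list of lists) by cofactor
    expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)       # a_1j * (-1)^(1+j) * M_1j
    return total

print(det([[2, 1, 3], [0, 4, 0], [5, 6, 7]]))  # -4
```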
Theorem 2.1.2: If 𝐴 is triangular then det(𝐴) is the product of the diagonal entries of 𝐴.
Theorem 2.3.4: If 𝐴 and 𝐵 are 𝑛 × 𝑛 matrices, then det(𝐴𝐵) = det(𝐴) det(𝐵). Since det(𝐴) and det(𝐵) are scalars and scalar multiplication commutes, this also gives det(𝐴𝐵) = det(𝐵𝐴), even when 𝐴𝐵 ≠ 𝐵𝐴.
Theorem 2.3.5: If 𝐴 is invertible then det(𝐴⁻¹) = 1/det(𝐴).
Linear Combination: Multiplying vectors by constants and adding them up: 𝑥⃗ = 𝑐₁𝑣⃗₁ + 𝑐₂𝑣⃗₂ + ⋯ + 𝑐ₙ𝑣⃗ₙ.
Independence: Let 𝑆 = {𝑣1 , 𝑣2 , … , 𝑣𝑛 }. 𝑆 is independent if none of the vectors in 𝑆 can be written as a linear
combination of other vectors in 𝑆.
Eigenspace: 𝑆 is a basis for the eigenspace of an eigenvalue 𝜆 if 𝑆 is independent and every eigenvector 𝑥⃗ corresponding to 𝜆 can be written as a linear combination of vectors in 𝑆.
Theorem 5.1.2: If 𝐴 is triangular then the eigenvalues are the diagonal entries of 𝐴.
Theorem 5.1.4: (TFAE) 𝐴 is invertible if and only if 𝜆 = 0 is not an eigenvalue of 𝐴.
Diagonalization: An 𝑛 × 𝑛 matrix 𝐴 is diagonalizable if there exists an invertible matrix 𝑃 and a diagonal matrix 𝐷 such that 𝑃⁻¹𝐴𝑃 = 𝐷 (in other words, 𝐴 is similar to 𝐷). To find such a 𝑃 and 𝐷, let 𝑃 = [𝑝⃗₁ 𝑝⃗₂ 𝑝⃗₃ …], where the columns 𝑝⃗ᵢ are basis vectors of the eigenspaces, and let 𝐷 be the diagonal matrix with the eigenvalues 𝜆₁, 𝜆₂, 𝜆₃, … (in the same order as the 𝑝⃗ᵢ) as its diagonal entries. Then 𝑃⁻¹𝐴𝑃 = 𝐷, provided there are enough independent eigenvectors to fill the matrix 𝑃.
Powers of a Matrix: If 𝐴 is diagonalizable and 𝑘 is a positive integer, then 𝐴ᵏ = 𝑃𝐷ᵏ𝑃⁻¹ (since all middle 𝑃⁻¹𝑃 terms cancel out), which is much easier to compute because (𝐷ᵏ)ᵢᵢ = (𝐷ᵢᵢ)ᵏ.
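A short numpy sketch of diagonalization and of computing 𝐴ᵏ through it (the example matrix is symmetric, so it is guaranteed to have enough independent eigenvectors):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

evals, P = np.linalg.eig(A)                # columns of P are eigenvectors
D = np.diag(evals)                         # eigenvalues in matching order

print(np.allclose(np.linalg.inv(P) @ A @ P, D))        # True: P^-1 A P = D

k = 5
A_k = P @ np.diag(evals ** k) @ np.linalg.inv(P)       # A^k = P D^k P^-1
print(np.allclose(A_k, np.linalg.matrix_power(A, k)))  # True
```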
First-Order Linear Systems: A first-order homogeneous linear system with constant coefficients can be expressed as 𝑦⃗′ = 𝐴𝑦⃗, where 𝐴 is a matrix of constants. If 𝐴 is diagonal, then each individual equation has the form 𝑦′ = 𝑎𝑦 and can be solved using natural exponentials.
Solution by Diagonalization: Suppose that 𝑦⃗′ = 𝐴𝑦⃗, where 𝐴 = 𝑃𝐷𝑃⁻¹ is diagonalizable. Let 𝑦⃗ = 𝑃𝑢⃗ and 𝑦⃗′ = 𝑃𝑢⃗′, where 𝑢⃗ is an unknown vector. Substituting into the first equation gives 𝑃𝑢⃗′ = 𝐴𝑃𝑢⃗. Multiplying both sides by 𝑃⁻¹ gives 𝑢⃗′ = 𝑃⁻¹𝐴𝑃𝑢⃗. But 𝑃⁻¹𝐴𝑃 = 𝐷, so 𝑢⃗′ = 𝐷𝑢⃗. This system is now solvable because the coefficient matrix is diagonal. Once 𝑢⃗ is solved, 𝑦⃗ can be recovered from the assumption 𝑦⃗ = 𝑃𝑢⃗.
Theorem 5.4.1 (restatement of above): Suppose that 𝐴 is diagonalizable with eigenvectors 𝑥⃗₁, 𝑥⃗₂, …, 𝑥⃗ₙ and corresponding eigenvalues 𝜆₁, 𝜆₂, …, 𝜆ₙ. Then the general solution to the system of differential equations 𝑦⃗′ = 𝐴𝑦⃗ is 𝑦⃗ = 𝑐₁𝑥⃗₁e^(𝜆₁𝑥) + 𝑐₂𝑥⃗₂e^(𝜆₂𝑥) + ⋯ + 𝑐ₙ𝑥⃗ₙe^(𝜆ₙ𝑥).
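As a worked example (not from the original notes): for 𝐴 = [3 1; 1 3], the eigenvalues are 𝜆₁ = 2 with eigenvector 𝑥⃗₁ = (1, −1) and 𝜆₂ = 4 with eigenvector 𝑥⃗₂ = (1, 1), so the general solution of 𝑦⃗′ = 𝐴𝑦⃗ is 𝑦⃗ = 𝑐₁(1, −1)e^(2𝑥) + 𝑐₂(1, 1)e^(4𝑥).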
Exponential Notation: e^(i𝜃) means the same thing as cos 𝜃 + 𝑖 sin 𝜃. It has the property that if 𝑧₁ = 𝑟₁e^(i𝜃₁) and 𝑧₂ = 𝑟₂e^(i𝜃₂), then 𝑧₁𝑧₂ = 𝑟₁𝑟₂e^(i(𝜃₁+𝜃₂)), an alternate way to multiply complex numbers.
DeMoivre’s Theorem: If 𝑧 = 𝑟e^(i𝜃), then 𝑧ⁿ = 𝑟ⁿe^(in𝜃).
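For example, since 1 + 𝑖 = √2 e^(i𝜋/4), DeMoivre’s theorem gives (1 + 𝑖)⁸ = (√2)⁸ e^(i·2𝜋) = 16.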
Angle Formula: cos 𝜃 = (𝑢⃗ ∙ 𝑣⃗)/(‖𝑢⃗‖‖𝑣⃗‖), or 𝑢⃗ ∙ 𝑣⃗ = ‖𝑢⃗‖‖𝑣⃗‖ cos 𝜃, where 𝜃 is the angle between 𝑢⃗ and 𝑣⃗.
Theorem 3.2.5 (The Triangle Inequality): If 𝑢⃗ and 𝑣⃗ are in ℝⁿ then ‖𝑢⃗ + 𝑣⃗‖ ≤ ‖𝑢⃗‖ + ‖𝑣⃗‖.
Projections: Given two vectors 𝑎⃗ and 𝑢⃗, we want to find a new vector 𝑢⃗₁ = Proj𝑎⃗ 𝑢⃗, called the projection of 𝑢⃗ along 𝑎⃗, such that 𝑢⃗₁ is parallel to 𝑎⃗ and 𝑢⃗ − 𝑢⃗₁ is perpendicular to 𝑎⃗. In other words, we want to find a constant 𝑡 such that 𝑢⃗₁ = 𝑡𝑎⃗ and (𝑢⃗ − 𝑡𝑎⃗) ∙ 𝑎⃗ = 0. Rearranging, we get 𝑡 = (𝑢⃗ ∙ 𝑎⃗)/‖𝑎⃗‖², and 𝑢⃗₁ = ((𝑢⃗ ∙ 𝑎⃗)/‖𝑎⃗‖²) 𝑎⃗. Note that 𝑢⃗ − 𝑢⃗₁ is called the component of 𝑢⃗ orthogonal to 𝑎⃗. Also, if only the magnitude of the projection is needed, the expression simplifies to ‖Proj𝑎⃗ 𝑢⃗‖ = |𝑢⃗ ∙ 𝑎⃗|/‖𝑎⃗‖.
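A small numpy sketch of these formulas (the vectors are arbitrary illustrations):

```python
import numpy as np

def proj(a, u):
    """Projection of u along a: ((u . a) / ||a||^2) a."""
    return (np.dot(u, a) / np.dot(a, a)) * a

a = np.array([1.0, 0.0, 2.0])
u = np.array([3.0, 4.0, 5.0])

u1 = proj(a, u)
print(u1)                 # [2.6 0.  5.2] -- parallel to a
print(np.dot(u - u1, a))  # 0.0 -- the orthogonal component is perpendicular to a
```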
Theorem 3.4.1: Let 𝐿 be the line in ℝ² or ℝ³ that contains the point 𝑥⃗₀ and is parallel to the nonzero vector 𝑣⃗. Then the equation of the line 𝐿 is 𝑥⃗ = 𝑥⃗₀ + 𝑡𝑣⃗, where the components are parametric equations.
Theorem 3.4.2: Similarly, for ℝ³, a plane can be expressed as 𝑥⃗ = 𝑥⃗₀ + 𝑡₁𝑣⃗₁ + 𝑡₂𝑣⃗₂.
Theorem 3.4.3: If 𝐴 is an 𝑚 × 𝑛 matrix then the solution set of the homogeneous linear system 𝐴𝑥⃗ = 0⃗ consists of all vectors in ℝⁿ that are orthogonal to every row vector of 𝐴.
Lagrange’s Identity: ‖𝑢⃗ × 𝑣⃗‖² = ‖𝑢⃗‖²‖𝑣⃗‖² − (𝑢⃗ ∙ 𝑣⃗)²
Theorem 4.1.1: If 𝑉 is a vector space then it also has the following properties:
a) −𝑢⃗ = (−1)𝑢⃗
b) 0𝑢⃗ = 0⃗
c) 𝑘0⃗ = 0⃗
d) If 𝑘𝑢⃗ = 0⃗ then either 𝑘 = 0 or 𝑢⃗ = 0⃗
Properties of Projection onto a Subspace: Let 𝑊 be a subspace of ℝⁿ with orthogonal basis 𝑆, and let 𝑢⃗ be any vector. Then:
a) Proj𝑊 𝑢⃗ is in 𝑊.
b) 𝑢⃗₁ = 𝑢⃗ − Proj𝑊 𝑢⃗ is orthogonal to every vector in 𝑆.
c) 𝑢⃗₁ = 𝑢⃗ − Proj𝑊 𝑢⃗ is orthogonal to every vector in 𝑊.
d) Proj𝑊 𝑢⃗ is independent of the choice of orthogonal basis.
Gram-Schmidt Process: Let 𝑆 = {𝑢⃗₁, 𝑢⃗₂, …, 𝑢⃗ᵣ} be a basis for a subspace 𝑊 of ℝⁿ. Then to find an orthogonal basis 𝑆⊥ = {𝑣⃗₁, 𝑣⃗₂, …, 𝑣⃗ᵣ}, let 𝑣⃗₁ = 𝑢⃗₁, 𝑣⃗₂ = 𝑢⃗₂ − (𝑢⃗₂ ∙ 𝑣⃗₁ / ‖𝑣⃗₁‖²)𝑣⃗₁, 𝑣⃗₃ = 𝑢⃗₃ − (𝑢⃗₃ ∙ 𝑣⃗₁ / ‖𝑣⃗₁‖²)𝑣⃗₁ − (𝑢⃗₃ ∙ 𝑣⃗₂ / ‖𝑣⃗₂‖²)𝑣⃗₂, and so on through all 𝑟, until 𝑣⃗ᵣ = 𝑢⃗ᵣ − (𝑢⃗ᵣ ∙ 𝑣⃗₁ / ‖𝑣⃗₁‖²)𝑣⃗₁ − (𝑢⃗ᵣ ∙ 𝑣⃗₂ / ‖𝑣⃗₂‖²)𝑣⃗₂ − ⋯ − (𝑢⃗ᵣ ∙ 𝑣⃗ᵣ₋₁ / ‖𝑣⃗ᵣ₋₁‖²)𝑣⃗ᵣ₋₁, which forms the orthogonal basis.
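A compact Python sketch of the process (illustrative; it assumes the inputs really form a basis, so no 𝑣⃗ᵢ collapses to the zero vector):

```python
import numpy as np

def gram_schmidt(basis):
    """Orthogonalize u_1..u_r by subtracting, from each u_i, its
    projection onto every previously produced v_j."""
    ortho = []
    for u in basis:
        v = u.astype(float)
        for w in ortho:
            v = v - (np.dot(u, w) / np.dot(w, w)) * w   # subtract Proj_w(u)
        ortho.append(v)
    return ortho

v1, v2 = gram_schmidt([np.array([1, 1, 0]), np.array([1, 0, 1])])
print(v1, v2)           # [1. 1. 0.] [ 0.5 -0.5  1. ]
print(np.dot(v1, v2))   # 0.0 -- orthogonal
```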
Lecture 33, 34 – Row Space, Column Space, and Null Space (Section 4.8)
Matrix Subspaces: Let 𝐴 be an 𝑚 × 𝑛 matrix.
a) The subspace of ℝ𝑛 spanned by the rows of 𝐴 is the row space of 𝐴.
b) The subspace of ℝᵐ spanned by the columns of 𝐴 is the column space of 𝐴.
c) The subspace of ℝ𝑛 consisting of all solutions to the equation 𝐴𝑥⃗ = ⃗0⃗ is the null space of 𝐴.
Theorem 4.8.5: 𝐴𝑥⃗ = 𝑏⃗⃗ is consistent if and only if 𝑏⃗⃗ is in the column space of 𝐴.
Theorem 4.8.3: Elementary row operations do not change the row space or the null space of 𝐴. They do, however, change the column space of 𝐴.
Theorem 4.8.4: Let 𝑅 be a row-echelon form of 𝐴.
a) The nonzero rows of 𝑅 form a basis for the row space of 𝐴.
b) The columns of 𝐴 corresponding to those of 𝑅 with leading 1s form a basis for the column space of 𝐴.
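A short sympy illustration of Theorems 4.8.4 and 4.8.3 on an arbitrary example matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

R, pivot_cols = A.rref()   # reduced row-echelon form and pivot column indices
print(R)                   # Matrix([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
print(pivot_cols)          # (0, 1): the first two columns of A span the column space
print(A.rowspace())        # the nonzero rows of R: a basis for the row space
print(A.nullspace())       # [Matrix([[-1], [-1], [1]])]: a basis for the null space
```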
Reciprocals: If 𝑎 ∈ ℤₘ and there is a number 𝑎⁻¹ ∈ ℤₘ such that 𝑎𝑎⁻¹ ≡ 1 (mod 𝑚), then 𝑎⁻¹ is called the reciprocal or the multiplicative inverse of 𝑎 modulo 𝑚.
Result 13.2: If 𝑎 ∈ ℤ𝑚 then 𝑎 has a reciprocal modulo 𝑚 if and only if 𝑎 and 𝑚 have no common prime factors.
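In Python 3.8+ the built-in pow computes reciprocals modulo 𝑚 directly, raising ValueError exactly when no reciprocal exists:

```python
print(pow(7, -1, 26))        # 15, since 7 * 15 = 105 ≡ 1 (mod 26)
try:
    pow(13, -1, 26)          # 13 and 26 share the prime factor 13 ...
except ValueError:
    print("13 has no reciprocal modulo 26")
```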
Result 13.3: Let 𝐴 = [𝑎 𝑏; 𝑐 𝑑] have entries in ℤₘ. If the residue of 𝑎𝑑 − 𝑏𝑐 has a reciprocal modulo 𝑚, then 𝐴 has an inverse modulo 𝑚, and 𝐴⁻¹ = (𝑎𝑑 − 𝑏𝑐)⁻¹ [𝑑 −𝑏; −𝑐 𝑎] (mod 𝑚).
Deciphering: Use the same algorithm as the cipher but replace 𝐴 with 𝐴−1.
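A small Python sketch of Result 13.3 (the matrix and the modulus 𝑚 = 26 are chosen only for illustration):

```python
import numpy as np

def inverse_mod_2x2(A, m):
    """Inverse of a 2x2 matrix modulo m via Result 13.3:
    A^-1 = (ad - bc)^-1 * [d -b; -c a]  (mod m)."""
    (a, b), (c, d) = A
    det_inv = pow(int(a * d - b * c) % m, -1, m)   # reciprocal of the determinant
    return det_inv * np.array([[d, -b], [-c, a]]) % m

A = np.array([[3, 3], [2, 5]])
A_inv = inverse_mod_2x2(A, 26)
print(A_inv)             # [[15 17] [20  9]]
print(A @ A_inv % 26)    # [[1 0] [0 1]] -- A times A^-1 is I modulo 26
```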