
Notes on MIT 18.06 Linear Algebra


Jun 14, 2016 • sighingnow | Algebra

Lecture 1
1. Linear equations and linear combination.

$$\begin{cases} 2x - y = 0 \\ -x + 2y = 3 \end{cases}$$

The coefficient matrix of this system of linear equations is:

$$A = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}$$

Then the system can be written as $Ax = b$:

$$\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 3 \end{bmatrix}$$

Drawing the row picture of these two equations, we can easily find the solution: $(1, 2)$. Now come to the column picture:

$$x\begin{bmatrix} 2 \\ -1 \end{bmatrix} + y\begin{bmatrix} -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 3 \end{bmatrix}$$
Now the question is to find how to combine these two vectors in the right amounts to get the
result vector. It's a linear combination of the columns, and the values of $x$ and $y$ give the
solution of the equations above: $(1, 2)$.
The linear combinations of these two column vectors produce every possible right-hand side;
in other words, they fill the whole plane. When it comes to three-dimensional space, for the
linear combinations of the columns to fill the whole space, $A$ must be a non-singular matrix,
an invertible matrix.

1. Matrix multiplication by columns and by rows

Represent

$$\begin{bmatrix} 2 & 5 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

as

$$\begin{bmatrix} 2 & 5 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = 1\begin{bmatrix} 2 \\ 1 \end{bmatrix} + 2\begin{bmatrix} 5 \\ 3 \end{bmatrix} = \begin{bmatrix} 12 \\ 7 \end{bmatrix}$$

1. Matrix form of equations

Ax is a combination of columns of A.
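
This column picture can be checked numerically. A minimal NumPy sketch, using the $A$ and $b$ from this lecture:

```python
import numpy as np

# The system from this lecture: 2x - y = 0, -x + 2y = 3.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
b = np.array([0.0, 3.0])

x = np.linalg.solve(A, b)
print(x)                                                  # [1. 2.]

# Ax is the combination x[0]*(column 1) + x[1]*(column 2) of the columns:
print(np.allclose(x[0] * A[:, 0] + x[1] * A[:, 1], b))    # True
```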

Lecture 2
1. elimination

Accept the first equation, multiply it by the right number, then subtract it from the
second equation. The purpose is to eliminate $x$ from the second equation; that decides what the
multiplier should be.

1. Two steps:

o elimination: transform the coefficient matrix into an upper triangular matrix $U$ (choose proper pivots and multipliers, and exchange rows if necessary).
o back substitution.
2. Augmented matrix: the coefficient matrix with the right-hand side appended, $[A\ b]$.

3. Elimination is also a matrix multiplication.

$$\begin{bmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 2 & 1 \\ 3 & 8 & 1 \\ 0 & 4 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 4 & 1 \end{bmatrix}$$

Multiply the first row by $-3$, then add it to the second row. The transform matrix on the left is called an elementary matrix. (A quick check follows.)
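
A small NumPy check of this elimination step, with the elementary matrix written as `E21` (the conventional name for the matrix that fixes the (2,1) entry):

```python
import numpy as np

# The elimination step above: subtract 3 * (row 1) from (row 2).
E21 = np.array([[1, 0, 0],
                [-3, 1, 0],
                [0, 0, 1]])
A = np.array([[1, 2, 1],
              [3, 8, 1],
              [0, 4, 1]])
print(E21 @ A)
# [[ 1  2  1]
#  [ 0  2 -2]
#  [ 0  4  1]]
```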

1. Permutation: exchange rows to simplify the trouble.

Exchange rows: multiply on the left:

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} c & d \\ a & b \end{bmatrix}$$

Exchange columns: multiply on the right:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} b & a \\ d & c \end{bmatrix}$$

1. Inverses and the reverse transform: how to get back from $U$ to $A$: if $EA = U$, then $A = E^{-1}U$.

Lecture 3
1. Five ways of matrix multiplication:

1. Entry by entry: $c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$.
2. By columns: think of matrix multiplication as multiplying a matrix by each column vector of $B$. The columns of $C$ lie in the column space of $A$ ($A$ times a vector is a combination of the columns of $A$, and the numbers in $B$ decide what combination it is):

$$AB = A\begin{bmatrix} b_1 & b_2 & \dots & b_p \end{bmatrix} = \begin{bmatrix} Ab_1 & Ab_2 & \dots & Ab_p \end{bmatrix}$$

3. By rows: the rows of $C$ are combinations of the rows of $B$ (row space), and the numbers in $A$ decide what combination it is.

4. Columns of $A$ times rows of $B$: $AB$ is the sum of (column $k$ of $A$) times (row $k$ of $B$), a sum of rank-one matrices (a sketch of this view follows the block rule below):

$$AB = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix}\begin{bmatrix} b_1^T \\ b_2^T \\ \vdots \\ b_n^T \end{bmatrix} = \sum_{k=1}^{n} a_kb_k^T$$

5. By blocks:

$$\begin{bmatrix} A_1 & A_2 \\ A_3 & A_4 \end{bmatrix}\begin{bmatrix} B_1 & B_2 \\ B_3 & B_4 \end{bmatrix} = \begin{bmatrix} A_1B_1 + A_2B_3 & A_1B_2 + A_2B_4 \\ A_3B_1 + A_4B_3 & A_3B_2 + A_4B_4 \end{bmatrix}$$
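
A small NumPy sketch of way 4, the column-times-row view, on assumed random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 2))
B = rng.integers(-3, 4, size=(2, 4))

# AB as the sum of (column k of A) times (row k of B):
# a sum of rank-one outer products.
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
print(np.array_equal(outer_sum, A @ B))  # True
```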

2. Inverses

1. A left inverse is also a right inverse (for square matrices).

2. Why may a matrix have no inverse?

If the columns of $A$ are dependent, then for any matrix $X$ the columns of $AX$ are combinations of the columns of $A$, so some column of the result must be a combination (multiple) of other columns; the result can't be the identity.
Another explanation: if there's a non-zero matrix $X$ such that $AX = O$ holds, then $A$ must have no inverse (an invertible $A$ would force $X = A^{-1}O = O$).

3. Conclusion: for non-invertible (singular) matrices, some combination of their columns gives the zero column.
4. $A$ times column $j$ of $A^{-1}$ is column $j$ of the identity.
5. Gauss-Jordan elimination: start with the long matrix $[A\ I]$, eliminate, and transform it into $[I\ E]$; then $E$ is the inverse of $A$.
Explanation: let $E$ be the elimination matrix; then

$$E[A\ I] = [EA\ E] = [I\ E]$$

So $E$ is the inverse of the matrix $A$. (A sketch of the procedure follows.)
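
A minimal sketch of Gauss-Jordan inversion in NumPy; the function name and the partial-pivoting details are my own, and it assumes a square, invertible input:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce the long matrix [A I] to [I E]; E is the inverse of A."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # choose a proper pivot
        M[[col, pivot]] = M[[pivot, col]]              # exchange rows if necessary
        M[col] /= M[col, col]                          # scale the pivot row to 1
        for row in range(n):
            if row != col:                             # eliminate above and below
                M[row] -= M[row, col] * M[col]
    return M[:, n:]

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2)))  # True
```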

Lecture 4
1. $(AB)^{-1} = B^{-1}A^{-1}$. Explanation: $AB(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = I$.
2. Inverse and transpose: $(A^{-1})^T = (A^T)^{-1}$.
3. Transposes and permutations: use matrix multiplication to produce permutations of matrices.
4. A permutation matrix $P$ is the identity matrix with reordered rows; $P^TP = I$.

Lecture 5
1. Symmetric matrices are the family of matrices that are unchanged by transposing; for any matrix $P$, $P^TP$ is a symmetric matrix.
2. Vector space: a bunch of vectors where all linear combinations (addition and scalar multiplication) still stay in the space.
3. Example of a vector space: three-dimensional space $R^3$.
4. Subspace: some vectors inside the given space that still make up a vector space of their own; a vector space inside a vector space. Every subspace has to contain the origin, in other words it contains the zero vector.
5. Example of a subspace: any plane through the origin is a subspace of $R^3$.

Lecture 6
1. Union and intersection of subspaces.
o The union of subspaces may not be a subspace.
o The intersection of subspaces is also a subspace. (For $u, v \in U \cap V$, $u + v$ is in both $U$ and $V$, so $u + v \in U \cap V$.)
2. Column space of a matrix: the subspace of all linear combinations of the columns.
3. The equation $Ax = b$ can be solved exactly when $b$ is a linear combination of the columns of $A$ ($b$ is in the column space). In other words, $\operatorname{rank} A = \operatorname{rank} [A\ b]$.
4. Null space: the set of solutions of $Ax = 0$.

Lecture 7
1. What's the algorithm for solving $Ax = 0$?
2. The algorithm: elimination. Elimination doesn't change the solutions or the null space.
3. The rank of $A$: the number of pivots after elimination.
4. Pivot columns: the columns with the pivots. The other columns are called free columns
(without pivots); the free variables can take any value. Then we get the solutions of the original
equation. The number of free variables is the dimension of the null space.
5. Special solutions: "special" means giving the free variables special values!
6. Reduced row echelon form: do elimination upward as well, transforming the
upper triangular matrix into a form with zeros above and below the pivots.
Matlab: `rref` (reduced row echelon form). A SymPy counterpart is sketched below.
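
A SymPy sketch of the same computation (the example matrix is an assumed one, not from these notes); `Matrix.rref()` plays the role of Matlab's `rref`:

```python
from sympy import Matrix

# rref returns the reduced row echelon form plus the pivot column indices.
A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])
R, pivot_cols = A.rref()
print(R)           # zeros above and below the pivots
print(pivot_cols)  # (0, 2): columns 0 and 2 are pivot columns, so rank = 2
```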

Lecture 8
1. The condition for $Ax = b$ to have a solution (the equation is solvable): the rank of the coefficient
matrix must equal the rank of the augmented matrix. In other words: $b$ must be in the column space
of $A$.
2. Complete solution: $x_{particular} + x_{nullspace}$. $x_{nullspace}$ is a subspace, and $x_{particular}$ is a
shift of it.
o $x_{particular}$: a particular solution of $Ax = b$, found by setting every free variable to zero.
o $x_{nullspace}$: $A(x_p + x_n) = Ax_p + Ax_n = b + 0 = b$.
3. The rank tells everything about the number of solutions. (A sketch of the complete solution follows.)
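
A SymPy sketch of the complete solution, reusing the assumed example matrix from the rref sketch above with a right-hand side chosen to be solvable:

```python
from sympy import Matrix, linsolve, symbols

A = Matrix([[1, 2, 2, 2],
            [2, 4, 6, 8],
            [3, 6, 8, 10]])
b = Matrix([1, 5, 6])

# Solvable exactly when rank(A) == rank([A b]):
print(A.rank(), A.row_join(b).rank())    # 2 2

print(A.nullspace())                     # the n - r = 2 special solutions
x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
print(linsolve((A, b), x1, x2, x3, x4))  # particular solution shifted by the null space
```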

Lecture 9
1. A bunch of vectors is linearly independent if no combination gives the zero vector, except the
zero combination.

If the column vectors of a matrix $A$ are independent, the null space of $A$ will only
contain the zero vector.

2. A bunch of vectors spanning a space means the space consists of all linear combinations
of those vectors.
3. A bunch of vectors being a basis: independent vectors that span the space.
4. Dimension: the rank of a matrix equals the dimension of its column space.

Lecture 10
1. Column space: the space spanned by the column vectors, $C(A)$.
2. Row space: the space spanned by the row vectors, $C(A^T)$.
3. Null space: $N(A)$, the solutions of $Ax = 0$.
4. Null space of $A$ transpose: $N(A^T)$, the null space of the row vectors (the left null space).
5. Row operations preserve the row space, but change the column space.
6. Let $A$ be a matrix with $m$ rows and $n$ columns, and let $r = \operatorname{rank}(A)$:

| subspace | basis | dimension |
|----------|-------|-----------|
| $C(A)$ | pivot columns | $r$ |
| $N(A)$ | special solutions of $Ax = 0$ | $n - r$ |
| $C(A^T)$ | the first $r$ rows after row reduction (not of the original $A$) | $r$ |
| $N(A^T)$ | special solutions of $A^Tx = 0$ | $m - r$ |

Lecture 11
1. Matrix spaces.
2. The basis and dimensions of matrix spaces.
3. Intersection and sum of matrix spaces are also subspaces.
4. Solutions of differential equations: every solution is a combination of the special solutions.
5. Rank-one matrices are like the building blocks for all matrices.

Lecture 12
1. Graph and the matrix associated with it.
2. Incidence matrix of a graph: dependence among some rows means there is a loop in the
corresponding subgraph.
3. Matrix application: Kirchhoff's Law.
4. Euler's formula: $\text{nodes} - \text{edges} + \text{loops} = 1$, which corresponds to $\dim N(A^T) = m - r$ where $A$ is the
incidence matrix of the graph.

Lecture 13
1. Rectangular matrices.
2. Four subspaces: $C(A)$, $N(A)$, $C(A^T)$, $N(A^T)$.

3. For a matrix $A$, $N(A)$ and $C(A^T)$ are perpendicular (orthogonal).

Lecture 14
1. Orthogonality of vectors $x$ and $y$:

o $x^Ty = 0$
o $\|x\|^2 + \|y\|^2 = \|x + y\|^2$
2. Length of a vector: $\|x\|^2 = x^Tx$.
3. Subspace $S$ is orthogonal to subspace $T$: every vector in $S$ is orthogonal to every vector
in $T$, i.e. $s^Tt = 0$ for all $s \in S$, $t \in T$.
4. How to solve equations that can't be solved: we want to solve $Ax = b$, but $b$ is not in the column space
of $A$, so solve the closest problem instead (replace $b$ by the closest vector in the column space of $A$, in other words
the projection of $b$ onto $C(A)$); the new equation $A^TA\hat{x} = A^Tb$ can be solved.
5. $A^TA$ is invertible exactly if the null space $N(A)$ only has the zero vector, in other
words if $A$ has independent columns.

Lecture 15
1. Let the projection of vector $b$ onto vector $a$ be $p = xa$. Because of orthogonality, we get

$$a^T(b - xa) = 0$$

Simplifying,

$$x = \frac{a^Tb}{a^Ta}$$

Then let the projection matrix be $P = \dfrac{aa^T}{a^Ta}$, so that $p = Pb$.


2. Let the projection of vector $b$ onto the column space of $A$ be $p = A\hat{x}$. Because of orthogonality, we get

$$A^T(b - A\hat{x}) = 0$$

Simplifying,

$$\hat{x} = (A^TA)^{-1}A^Tb$$

Then let the projection matrix be $P = A(A^TA)^{-1}A^T$, so that $p = Pb$. If $A$ is an invertible square matrix,
then obviously $b$ is in the column space of $A$, the projection $p$ is the vector $b$ itself, and the
projection matrix is

$$P = A(A^TA)^{-1}A^T = A(A^{-1}(A^T)^{-1})A^T = I$$

3. The properties of the projection matrix $P$:

o $P^T = P$: $P$ is symmetric.
o $P^2 = P$.
o $\operatorname{rank}(P) = \operatorname{rank}(A)$; for projection onto a single vector, $\operatorname{rank}(P) = 1$.
o All eigenvalues of $P$ are $0$ and $1$, because $P^2 = P \implies \lambda^2 = \lambda$.
4. If $b$ is in the column space of $A$, then $p = Pb = b$. If $b$ is perpendicular to the column space
of $A$, then $p = Pb = 0$. Vectors in the null space of $A$ transpose are perpendicular to the column
space of $A$.
5. Application: least squares, fitting by a line. (A sketch checking the properties above follows.)
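
A NumPy sketch checking these projection-matrix properties on assumed example data:

```python
import numpy as np

# Build P = A (A^T A)^{-1} A^T for a tall matrix A with independent columns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P, P.T))                 # P is symmetric
print(np.allclose(P @ P, P))               # P^2 = P
print(np.round(np.linalg.eigvals(P), 6))   # eigenvalues are 0s and 1s

b = np.array([6.0, 0.0, 0.0])
p = P @ b                                  # projection of b onto C(A)
print(np.allclose(A.T @ (b - p), 0))       # the error b - p is perpendicular to C(A)
```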

Lecture 16
1. If $A$ has independent columns, then $A^TA$ is invertible.

o Proof.
 Suppose $A^TAx = 0$; then $x^TA^TAx = 0$, which means $(Ax)^T(Ax) = \|Ax\|^2 = 0$, so $Ax = 0$.
 Because $A$ has independent columns, $x$ must be $0$.
 So the null space of $A^TA$ contains only the zero vector, and $A^TA$ is invertible.
o Q.E.D.
2. Linear regression.

Target formula:

$$y = b + cx$$

Solve for the best approximate parameters $\hat{x} = \begin{bmatrix} b \\ c \end{bmatrix}$ that make $Y = A\hat{x}$ hold as nearly as possible. According to the idea
of projection, it's easy to get

$$A\hat{x} = A(A^TA)^{-1}A^TY$$

Then

$$A^TA\hat{x} = A^TA(A^TA)^{-1}A^TY = A^TY$$

So the best vector $\hat{x} = \begin{bmatrix} b \\ c \end{bmatrix}$ is the solution of the equation $A^TA\hat{x} = A^TY$.

3. Understanding least squares regression from the point of view of projection: project the
vector $Y$ onto the space spanned by the all-ones vector $\begin{bmatrix} 1 & 1 & \dots & 1 \end{bmatrix}^T$ and the vector $X$; $\hat{x} = \begin{bmatrix} b \\ c \end{bmatrix}$ holds the
coefficients of that projection. (A numerical sketch follows.)
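
A NumPy sketch of the fit $y = b + cx$ through three assumed data points, solving the normal equations $A^TA\hat{x} = A^TY$:

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0])
Y = np.array([1.0, 1.0, 3.0])
A = np.column_stack([np.ones_like(xs), xs])  # columns: all-ones and x

x_hat = np.linalg.solve(A.T @ A, A.T @ Y)    # normal equations
print(x_hat)                                 # [b, c]

# The library least-squares routine gives the same answer:
print(np.linalg.lstsq(A, Y, rcond=None)[0])
```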

Lecture 17
1. Orthogonal basis.
2. Orthonormal vectors and orthonormal matrices:

$$q_i^Tq_j = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases}$$

1. For an orthonormal matrix $Q = \begin{bmatrix} q_1 & q_2 & \dots & q_n \end{bmatrix}$, we have $Q^TQ = I$ and $Q^T = Q^{-1}$.
2. Hadamard construction: a way to construct orthonormal matrices.

Let the matrix $A = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$; $A$ is an orthonormal matrix. Then $\frac{1}{\sqrt{2}}\begin{bmatrix} A & A \\ A & -A \end{bmatrix}$ is also an
orthonormal matrix.
3. Gram-Schmidt calculation.

For a matrix $A = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix}$ and the corresponding orthonormal matrix $Q = \begin{bmatrix} q_1 & q_2 & \dots & q_n \end{bmatrix}$. First,
orthogonalize:

$$b_i = a_i - \sum_{j=1}^{i-1}\frac{\langle b_j, a_i \rangle}{\langle b_j, b_j \rangle}b_j$$

Then, normalize:

$$q_i = \frac{b_i}{\|b_i\|}$$

4. Gram-Schmidt calculation and elimination: $A = QR$, where $Q$ is the orthonormal
matrix and $R$ is an upper triangular matrix. (A sketch follows.)
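
A minimal Gram-Schmidt sketch in NumPy (the classical version, assuming independent columns; the helper name is my own):

```python
import numpy as np

def gram_schmidt(A):
    """Orthogonalize the columns of A, then normalize; returns Q with Q^T Q = I."""
    Q = np.zeros_like(A, dtype=float)
    for i in range(A.shape[1]):
        b = A[:, i].astype(float)
        for j in range(i):                       # subtract the earlier components
            b -= (Q[:, j] @ A[:, i]) * Q[:, j]   # q_j already has unit length
        Q[:, i] = b / np.linalg.norm(b)          # normalize
    return Q

A = np.array([[1.0, 1.0], [1.0, 0.0], [1.0, 2.0]])
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
print(np.round(Q.T @ A, 6))             # R = Q^T A is upper triangular: A = QR
```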

Lecture 18
1. Three basic properties of determinant:

o $\det(I) = 1$.
o Exchanging two rows reverses the sign of the determinant.
o Consequently, for permutation matrices (reordered rows of $I$), the determinant is $1$ or $-1$.
2. Other useful properties of determinant:

o $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$

o $\begin{vmatrix} ta & tb \\ c & d \end{vmatrix} = t\begin{vmatrix} a & b \\ c & d \end{vmatrix}$

o The determinant is a linear function of each row separately:

$$\begin{vmatrix} a+a' & b+b' \\ c & d \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a' & b' \\ c & d \end{vmatrix}$$

o If a matrix $A$ has two equal rows, the determinant of $A$ is zero.

o Elimination doesn't change the determinant:

$$\begin{vmatrix} a & b \\ c-ka & d-kb \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a & b \\ -ka & -kb \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$$

o Determinant of a diagonal matrix:

$$\det(A) = \prod_{i=1}^{n}d_i$$

o $\det(A) = 0$ exactly when $A$ is singular.

o For every invertible matrix $A$,

$$\det(A^{-1}) = \frac{1}{\det(A)}$$

o $\det(AB) = \det(A) \cdot \det(B)$.
o $\det(kA) = k^n\det(A)$ for an $n \times n$ matrix $A$.
o Transposing doesn't change the determinant:

$$\det(A^T) = \det(A)$$

Lecture 19
1. The formula for determinant.
2. Cofactors: the cofactor $C_{ij}$ of entry $a_{ij}$ collects all the terms in the formula for the
determinant that involve the element $a_{ij}$.
3. Cofactors and determinant (expansion along row $i$): $\det(A) = \sum_{j=1}^{n}a_{ij}C_{ij}$

Lecture 20
1. Inverse matrix: $A^{-1} = \dfrac{A^*}{\det(A)}$, where $A^*$ (the adjugate) is the transpose of the cofactor matrix.
2. Cramer's rule: the solution of the equation $Ax = b$ is $x_i = \dfrac{\det(A_i)}{\det(A)}$, where $A_i$ is $A$ with column $i$ replaced by $b$.
3. $|\det(A)|$ is the volume of the box spanned by the rows (or columns) of $A$.

Lecture 21
1. Eigenvalues and eigenvectors: $Ax = \lambda x$.
2. $\sum\lambda_i = \operatorname{trace}(A)$.
3. $\prod\lambda_i = \det(A)$.
4. For the $n \times n$ identity matrix $I_n$, the only eigenvalue is $1$, and the eigenvectors are all the vectors in
$n$-dimensional space.
5. If $B = A + kI$, then $\lambda_i^B = \lambda_i^A + k$, and $A$ and $B$ have the same eigenvectors.
6. For all symmetric matrices, the eigenvalues are real numbers.
7. For all anti-symmetric matrices, the eigenvalues are purely imaginary.

Lecture 22
1. Diagonalizing a matrix: if the matrix $A$ has $n$ linearly independent eigenvectors, then $S^{-1}AS = \Lambda$,
where $S$ is the eigenvector matrix of $A$ and $\Lambda$ is the diagonal matrix of eigenvalues.
2. Application: $A^k = (S\Lambda S^{-1})^k = S\Lambda^kS^{-1}$.
Example: the Fibonacci sequence is $F_{k+2} = F_{k+1} + F_k$; write this recurrence as a linear system:

$$u_{k+1} = Au_k = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}u_k$$

then

$$u_k = A^ku_0 = S\Lambda^kS^{-1}u_0 = \begin{bmatrix} \frac{1-\sqrt{5}}{2} & \frac{1+\sqrt{5}}{2} \\ 1 & 1 \end{bmatrix}\begin{bmatrix} \frac{1-\sqrt{5}}{2} & 0 \\ 0 & \frac{1+\sqrt{5}}{2} \end{bmatrix}^k\begin{bmatrix} -\frac{1}{\sqrt{5}} & \frac{5+\sqrt{5}}{10} \\ \frac{1}{\sqrt{5}} & \frac{5-\sqrt{5}}{10} \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

3. A formula for the $n$th Fibonacci number (a numerical check follows):

$$f(n) = \frac{\phi^n - (-\phi)^{-n}}{\sqrt{5}}$$

where

$$\phi = \frac{1+\sqrt{5}}{2}$$
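
A quick NumPy check that the matrix powers and the closed formula agree:

```python
import numpy as np

# Powers of [[1, 1], [1, 0]] step the Fibonacci recurrence.
A = np.array([[1, 1],
              [1, 0]])
u = np.array([1, 0])        # u_0 = (F_1, F_0)
for _ in range(10):
    u = A @ u               # after k steps, u = (F_{k+1}, F_k)
print(u[1])                 # F_10 = 55

phi = (1 + 5 ** 0.5) / 2
print(round((phi ** 10 - (-phi) ** (-10)) / 5 ** 0.5))  # 55, from the formula above
```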

Lecture 23
1. The solution of first-order constant-coefficient linear differential equations, and the
stability of the coefficient matrix.
2. Matrix exponential:

o Because $e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \dots = \sum_{i=0}^{\infty}\frac{x^i}{i!}$, for a matrix $A = S\Lambda S^{-1}$

we have

$$e^A = I + A + \frac{A^2}{2} + \frac{A^3}{6} + \dots = I + S\Lambda S^{-1} + \frac{S\Lambda^2S^{-1}}{2} + \frac{S\Lambda^3S^{-1}}{6} + \dots = Se^{\Lambda}S^{-1}$$

o Because $\frac{1}{1-x} = 1 + x + x^2 + x^3 + \dots = \sum_{i=0}^{\infty}x^i$, for a matrix $A = S\Lambda S^{-1}$ (with all $|\lambda_i| < 1$)
we have

$$(I-A)^{-1} = I + A + A^2 + \dots = I + S\Lambda S^{-1} + S\Lambda^2S^{-1} + \dots = S(I-\Lambda)^{-1}S^{-1}$$

3. Matrix exponential for some special kinds of matrices:

o If $A$ is a diagonal matrix,

$$A = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$

then

$$e^A = \begin{bmatrix} e^{\lambda_1} & & \\ & \ddots & \\ & & e^{\lambda_n} \end{bmatrix}$$

o If $A$ is a block-diagonal (partitioned) matrix,

$$A = \begin{bmatrix} A_1 & & \\ & \ddots & \\ & & A_n \end{bmatrix}$$

then

$$e^A = \begin{bmatrix} e^{A_1} & & \\ & \ddots & \\ & & e^{A_n} \end{bmatrix}$$

4. For the differential equation system $\frac{du}{dt} = Au$, the solution is

$$u(t) = e^{At}u(0) = Se^{\Lambda t}S^{-1}u(0)$$

For example, take the differential equations

$$\frac{du}{dt} = \begin{bmatrix} -1 & 2 \\ 1 & -2 \end{bmatrix}u$$

or, written out,

$$\begin{cases} \frac{du_1}{dt} = -u_1 + 2u_2 \\ \frac{du_2}{dt} = u_1 - 2u_2 \end{cases}$$

The two eigenvalues of the coefficient matrix $A$ are $0$ and $-3$, and the eigenvector matrix $S$ is

$$\begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}$$

With $u(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, the solution of the original differential equations is (a numerical check follows)

$$u = \frac{1}{3}e^{0t}\begin{bmatrix} 2 \\ 1 \end{bmatrix} + \frac{1}{3}e^{-3t}\begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
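
A SciPy sketch comparing $e^{At}u(0)$ with the eigenvector form of the solution above:

```python
import numpy as np
from scipy.linalg import expm

# du/dt = Au with u(0) = [1, 0]^T, as in the example above.
A = np.array([[-1.0, 2.0],
              [1.0, -2.0]])
u0 = np.array([1.0, 0.0])

t = 1.0
u_t = expm(A * t) @ u0  # u(t) = e^{At} u(0)

# Eigenvector form: (1/3) e^{0t} [2, 1] + (1/3) e^{-3t} [1, -1]
u_closed = (1/3) * np.array([2.0, 1.0]) + (1/3) * np.exp(-3 * t) * np.array([1.0, -1.0])
print(np.allclose(u_t, u_closed))  # True
```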

5. Changing one second-order equation into an equivalent first-order system.

For example, for the second-order differential equation $y'' + by' + ky = 0$, let $u = \begin{bmatrix} y' \\ y \end{bmatrix}$; then

$$u' = \begin{bmatrix} y'' \\ y' \end{bmatrix} = \begin{bmatrix} -b & -k \\ 1 & 0 \end{bmatrix}\begin{bmatrix} y' \\ y \end{bmatrix}$$

6. The key idea of this application of diagonalization is using the similar matrix $\Lambda = S^{-1}AS$ to
uncouple the coupled equations.

Lecture 24
1. Markov matrix:

1. Every entry is greater than or equal to $0$ and less than or equal to $1$.
2. All columns add to $1$. (If the probability transition vectors are written as row
vectors, all rows add to $1$ instead.)
2. If $A$ and $B$ are both Markov matrices, then $AB$ is also a Markov matrix.

3. The eigenvalues of a Markov matrix:

1. $\lambda = 1$ is an eigenvalue.
Proof: all columns add to $1$, so $A - I$ is singular (all columns of the matrix $A - I$ add to $0$).
2. Every eigenvalue $\lambda$ of a Markov matrix satisfies $|\lambda| \leq 1$.
Proof: suppose $\lambda$ is an eigenvalue of $A$ and $x$ is the corresponding eigenvector,
so $Ax = \lambda x$. Let $k$ be such that $|x_j| \leq |x_k|$ for all $j$, $1 \leq j \leq n$; then

$$\sum_{j=1}^{n}A_{kj}x_j = \lambda x_k$$

Hence

$$|\lambda x_k| = |\lambda| \cdot |x_k| = \left|\sum_{j=1}^{n}A_{kj}x_j\right| \leq \sum_{j=1}^{n}A_{kj}|x_j| \leq \sum_{j=1}^{n}A_{kj}|x_k| = |x_k|$$

So $|\lambda| \leq 1$ for every eigenvalue $\lambda$.

4. The Markov chain steady state: $v_k = S\Lambda^kS^{-1}v_0$, where only one entry of $\Lambda$ equals $1$ and all
other eigenvalues satisfy $|\lambda_i| < 1$. (A numerical sketch follows.)
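
A NumPy sketch of convergence to the steady state, on an assumed 2-by-2 Markov matrix:

```python
import numpy as np

# Columns sum to 1; eigenvalues are 1 and 0.7.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
v = np.array([0.0, 1.0])
for _ in range(100):
    v = A @ v
print(v)  # the steady state, proportional to the lambda = 1 eigenvector

w, V = np.linalg.eig(A)
steady = V[:, np.argmax(w)] / V[:, np.argmax(w)].sum()
print(np.allclose(v, steady))  # True
```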
5. Two functions $f(x)$ and $g(x)$ being orthonormal
means $\int f(x) \cdot f(x)\,dx = 1$, $\int g(x) \cdot g(x)\,dx = 1$ and $\int f(x) \cdot g(x)\,dx = 0$.
6. Fourier series: project a function onto an infinite-dimensional function space.

1. The basis is

$$1, \sin x, \cos x, \sin 2x, \cos 2x, \dots$$

2. Fourier expansion:

$$f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}a_k\cos kx + \sum_{k=1}^{\infty}b_k\sin kx$$

where

$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx, \quad a_k = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos kx\,dx, \quad b_k = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin kx\,dx$$

Lecture 25
1. Real symmetric matrix: $A = A^T$.
2. Theorem: if $A$ is a real symmetric matrix, the eigenvalues of $A$ are all real.

Proof: suppose $\lambda$ is an eigenvalue of $A$ and $x$ is the corresponding eigenvector, $Ax = \lambda x$. Taking
conjugates and then transposing (note $\bar{A} = A$ because $A$ is real, and $A^T = A$):

$$Ax = \lambda x \implies \bar{A}\bar{x} = \bar{\lambda}\bar{x} \implies \bar{x}^T\bar{A}^T = \bar{\lambda}\bar{x}^T \implies \bar{x}^T\bar{A}^Tx = \bar{\lambda}\bar{x}^Tx$$

and

$$Ax = \lambda x \implies \bar{x}^TAx = \lambda\bar{x}^Tx$$

The two left-hand sides are equal (since $\bar{A}^T = A$), so

$$\lambda\bar{x}^Tx = \bar{\lambda}\bar{x}^Tx$$

Because $\bar{x}^Tx \neq 0$ (as $x \neq 0$), $\lambda = \bar{\lambda}$: $\lambda$ is a real number, which means the eigenvalues of $A$ are all real.


3. Theorem: if $A$ is a real symmetric matrix, and $p_1$, $p_2$ are two eigenvectors
of $A$ corresponding to two different eigenvalues, then $p_1$ and $p_2$ are orthogonal ($p_1^Tp_2 = 0$).
Proof: $\lambda_1p_1 = Ap_1$, $\lambda_2p_2 = Ap_2$ and $\lambda_1 \neq \lambda_2$; then

$$\lambda_1p_1^T = (\lambda_1p_1)^T = (Ap_1)^T = p_1^TA^T = p_1^TA$$

and

$$\lambda_1p_1^Tp_2 = p_1^TAp_2 = p_1^T(\lambda_2p_2) = \lambda_2p_1^Tp_2$$

We have $\lambda_1 \neq \lambda_2$, thus $p_1^Tp_2 = 0$.

4. Spectral theorem: if $A$ is a real symmetric matrix, the eigenvectors of $A$ can be chosen perpendicular
(orthogonal). The spectrum is the set of eigenvalues of a matrix. $A = Q\Lambda Q^{-1} = Q\Lambda Q^T$
5. For symmetric matrices, the product of the eigenvalues equals the product of the pivots, and both
equal the determinant.
6. Positive definite matrix: a symmetric matrix whose eigenvalues and pivots are all positive.

Lecture 26
1. For complex vectors:
o $|z|^2 = \bar{z}^Tz = z^Hz$ ($H$ means Hermitian).
o $\langle x, y \rangle = \bar{x}^Ty = x^Hy$
2. The complex analogue of a symmetric matrix is a Hermitian matrix: $\bar{A}^T = A$.
3. Unitary matrix: the complex analogue of an orthogonal matrix, $\bar{Q}^TQ = I$
4. Fourier matrix:

$$F_n = \begin{bmatrix} 1 & 1 & 1 & \dots & 1 \\ 1 & w & w^2 & \dots & w^{n-1} \\ 1 & w^2 & w^4 & \dots & w^{2(n-1)} \\ \vdots & & & & \vdots \\ 1 & w^{n-1} & w^{2(n-1)} & \dots & w^{(n-1)(n-1)} \end{bmatrix}$$

where $w^n = 1$ (that is, $w = e^{2\pi i/n}$). $\frac{1}{\sqrt{n}}F_n$ is a unitary matrix, so $F_n^{-1} = \frac{1}{n}F_n^H$. (A numerical sketch follows.)
5. FFT (Fast Fourier Transform).
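
A NumPy sketch of the Fourier matrix for an assumed size $n = 4$; note that `np.fft.fft` applies the conjugate of the $F_n$ written above:

```python
import numpy as np

n = 4
w = np.exp(2j * np.pi / n)  # w^n = 1
F = np.array([[w ** (j * k) for k in range(n)] for j in range(n)])

U = F / np.sqrt(n)
print(np.allclose(U.conj().T @ U, np.eye(n)))  # unitary: U^H U = I

# np.fft.fft uses the e^{-2*pi*i*jk/n} convention, i.e. conj(F):
x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(np.fft.fft(x), F.conj() @ x))  # True
```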

Lecture 27
1. Positive definite matrix: $A$ is a symmetric matrix and $x^TAx > 0$ for all $x \neq 0$.
2. The determinants of the leading submatrices are all positive.
3. If an $n$-dimensional function $u(x)$ has a minimum, its first-derivative vector $\frac{du}{dx}$ (the gradient) equals zero
and its second-derivative matrix $\frac{d^2u}{dx^2}$ (the Hessian) is positive definite.
4. Principal axis theorem: $A$ is symmetric and $A = Q\Lambda Q^T$; the eigenvalues (all positive) tell
the lengths of the principal axes and the eigenvectors tell the directions of the principal axes.

Lecture 28

1. Inverse of a symmetric positive definite matrix is also a positive definite matrix.

2. Least squares: $Ax = Y \implies A^TA\hat{x} = A^TY$; when $A^TA$ is positive definite, $\hat{x}$ is the best
solution.
For a rectangular matrix $A \in R^{m \times n}$, $m > n$, $A^TA$ is a symmetric matrix. When the columns of $A$ are linearly
independent ($\operatorname{rank}(A) = n$), $A^TA$ is positive definite:

$$x^TA^TAx = (Ax)^T(Ax) = \|Ax\|^2 \geq 0$$

with equality only when $Ax = 0$, which by the independence of the columns forces $x = 0$.

3. Similar matrices: $A$ and $B$ are similar means that for some invertible matrix $M$,

$$B = M^{-1}AM$$

Similar matrices have the same eigenvalues.

4. $A$ is diagonalizable: $A = S\Lambda S^{-1}$, where $S$ is the eigenvector matrix of $A$.

5. Jordan matrix: every Jordan block has one repeated eigenvalue and one eigenvector.

Lecture 29
1. SVD: singular value decomposition. For a matrix $A \in R^{m \times n}$, the singular value
decomposition describes the linear transformation from an orthonormal basis of the row space
($V = \begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}$) of $A$ to an orthonormal basis of the column space ($U = \begin{bmatrix} u_1 & u_2 & \dots & u_m \end{bmatrix}$) of $A$. Of
course, $U$ and $V$ are both orthogonal, hence invertible.
2. We have $Av_i = \sigma_iu_i$, where $\sigma_i$ is the scalar factor, called a singular value.
For $i$ beyond the rank of $A$, $\sigma_i = 0$.
3. We have $AV = U\Sigma$, where $\Sigma$ is a diagonal matrix. Then

$$A = U\Sigma V^{-1} = U\Sigma V^T$$

thus

$$A^TA = V\Sigma^TU^TU\Sigma V^T = V\Sigma^2V^T$$

Because $A^TA$ is a symmetric positive semi-definite matrix with eigenvalues $\lambda_i$,

$$\sigma_i = \sqrt{\lambda_i}$$

and obviously $V$ is the eigenvector matrix of $A^TA$.

4. With the same approach we can get (a sketch follows)

$$AA^T = U\Sigma^2U^T$$
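
A NumPy sketch connecting the SVD to the eigenvalues of $A^TA$, on an assumed random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(A, U @ np.diag(s) @ Vt))  # A = U Sigma V^T

# Singular values are the square roots of the eigenvalues of A^T A:
lam = np.linalg.eigvalsh(A.T @ A)[::-1]     # sorted into descending order
print(np.allclose(s, np.sqrt(lam)))         # True
```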

Lecture 30
1. Linear transformation: every linear transformation leads to a matrix.
2. Mapping: x→T(x) where x is a vector.
3. The two rules of linear transformation:

o T(v+w)=T(v)+T(w)
o T(cv)=cT(v)
4. If the matrix A is the matrix of some linear transformation T, then T(x)=Ax.
5. Coordinates come from a basis: let $v = c_1v_1 + c_2v_2 + \dots + c_nv_n$; then the coordinates of $v$ in the
basis $\begin{bmatrix} v_1 & v_2 & \dots & v_n \end{bmatrix}$ are $(c_1, c_2, \dots, c_n)$.

Lecture 31
1. Lossy compression.
2. Change of basis matrix: $B = M^{-1}AM$.

Lecture 32
1. What are all the matrices that have orthogonal eigenvectors?
o Symmetric matrices.
o Anti-symmetric matrices.
o Orthogonal matrices.
o More generally, all matrices with $AA^T = A^TA$ (normal matrices).
2. All eigenvalues of projection matrices are $0$ and $1$, because $P^2 = P \implies \lambda^2 = \lambda$.
3. If $\lambda$ is an arbitrary eigenvalue of an orthogonal matrix $Q$, then $|\lambda| = 1$.
Proof: $Q$ preserves length, so

$$Qx = \lambda x \implies \|x\| = \|Qx\| = |\lambda|\|x\|$$

4. All symmetric matrices and all orthogonal matrices can be diagonalized.


5. If $A$ is a symmetric and orthogonal matrix, then $\frac{1}{2}(A + I)$ is a projection matrix.
Proof: because $A$ is symmetric and orthogonal,

$$A^2 = AA^T = AA^{-1} = I$$

thus

$$\left(\frac{1}{2}(A + I)\right)^2 = \frac{1}{4}(A^2 + 2A + I) = \frac{1}{4}(2A + 2I) = \frac{1}{2}(A + I)$$

and $\frac{1}{2}(A + I)$ is symmetric, so it is a projection matrix.

Lecture 33
1. Left inverses and right inverses

o Let $A$ be an $m \times n$ matrix ($m \geq n$) with rank $n$ (full column rank).
Then $A^TA$ is invertible ($\operatorname{rank}(A^TA) = n$) and $A$ has the left inverse $A^{-1}_{left} = (A^TA)^{-1}A^T$,
with $A^{-1}_{left}A = I$. $AA^{-1}_{left} = A(A^TA)^{-1}A^T$ is the projection matrix that projects onto the
column space of $A$.
o Let $A$ be an $m \times n$ matrix ($m \leq n$) with rank $m$ (full row rank).
Then $AA^T$ is invertible ($\operatorname{rank}(AA^T) = m$) and $A$ has the right inverse $A^{-1}_{right} = A^T(AA^T)^{-1}$,
with $AA^{-1}_{right} = I$. $A^{-1}_{right}A = A^T(AA^T)^{-1}A$ is the projection matrix that projects onto
the row space of $A$.
2. Pseudo-inverses

Let $A$ be an $m \times n$ matrix with $\operatorname{rank}(A) = r$. The pseudo-inverse of $A$ is the matrix $A^+$ such that

o $AA^+A = A$
o $A^+AA^+ = A^+$
o $AA^+$ and $A^+A$ are both symmetric matrices

And $A^+A$ is the row-space projection matrix, while $AA^+$ is the column-space projection matrix.
3. Solving for the pseudo-inverse

Via the SVD: $A = U\Sigma V^T \implies A^+ = V\Sigma^+U^T$, where $\Sigma^+$ is obtained from $\Sigma$ by replacing each
nonzero singular value $\sigma_i$ with $\frac{1}{\sigma_i}$ (and transposing the shape). A numerical check follows.
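
A NumPy sketch of the pseudo-inverse via the SVD, checked against `np.linalg.pinv` on an assumed rank-one example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])  # rank(A) = 1

U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_plus = np.array([1.0 / x if x > 1e-12 else 0.0 for x in s])  # reciprocals of nonzero sigma
A_plus = Vt.T @ np.diag(s_plus) @ U.T                          # A+ = V Sigma+ U^T

print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
print(np.allclose(A @ A_plus @ A, A))          # A A+ A = A
```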
