Linear Algebra

The document discusses eigenvalues and eigenvectors of matrices. It defines key concepts such as the characteristic polynomial, algebraic and geometric multiplicities of eigenvalues. It presents procedures to find eigenvalues by solving the characteristic equation. Examples are provided to illustrate how to determine eigenvalues and eigenspaces of matrices and calculate their algebraic and geometric multiplicities. The Multiplicity Theorem states that for any eigenvalue λ of an n×n matrix A, the geometric multiplicity is less than or equal to the algebraic multiplicity.

Uploaded by

Sandeep Saini

1 Eigenvalues and Eigenvectors

1.1 Characteristic Polynomial and Characteristic Equation

Procedure. How to find the eigenvalues? A vector x is an eigenvector if x is nonzero and satisfies Ax = λx ⇒ (A − λI)x = 0 must have nontrivial solutions ⇒ (A − λI) is not invertible ⇒ (by the theorem on properties of determinants) det(A − λI) = 0. Solve det(A − λI) = 0 for λ to find the eigenvalues.
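For a 2 × 2 matrix this procedure can be carried out by hand: the characteristic equation is λ² − Tr(A)λ + det(A) = 0, solved by the quadratic formula. A minimal illustrative Python sketch (the helper name `eigenvalues_2x2` is our own, not from the notes):

```python
import math

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix A, found by solving
    det(A - lambda*I) = lambda^2 - tr(A)*lambda + det(A) = 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4 * det          # discriminant of the characteristic equation
    if disc < 0:
        raise ValueError("complex eigenvalues")
    r = math.sqrt(disc)
    return sorted([(tr - r) / 2, (tr + r) / 2])

# The matrix from the example below: eigenvalues 2 and 3.
print(eigenvalues_2x2([[0, 1], [-6, 5]]))  # -> [2.0, 3.0]
```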

Definition. P(λ) = det(A − λI) is called the characteristic polynomial; det(A − λI) = 0 is called the characteristic equation.

Proposition. A scalar λ is an eigenvalue of an n × n matrix A if and only if λ satisfies P(λ) = det(A − λI) = 0.

Example. Find the eigenvalues of

A = [  0  1 ]
    [ −6  5 ].

Since

A − λI = [  0  1 ] − [ λ  0 ] = [ −λ    1   ]
         [ −6  5 ]   [ 0  λ ]   [ −6  5 − λ ],

we have the characteristic equation det(A − λI) = −λ(5 − λ) + 6 = (λ − 2)(λ − 3) = 0. So λ = 2 and λ = 3 are the eigenvalues of A.

Theorem. Let A be an n × n matrix. Then A is invertible if and only if: a) λ = 0 is not an eigenvalue of A; equivalently, b) det A ≠ 0.

Proof. For b) we have discussed the proof in the section on determinants. For a): (⇒) Let A be invertible ⇒ det A ≠ 0 ⇒ det(A − 0I) ≠ 0 ⇒ λ = 0 is not an eigenvalue. (⇐) Let 0 not be an eigenvalue of A ⇒ det(A − 0I) ≠ 0 ⇒ det A ≠ 0 ⇒ A is invertible.

Theorem. The eigenvalues of a triangular matrix are the entries of the main diagonal.

Proof. Recall that the determinant of a triangular matrix is the product of its main-diagonal elements. Hence, if

A = [ a11  a12  ...  a1n ]
    [  0   a22  ...  a2n ]
    [  .    .   ..    .  ]
    [  0    0   ...  ann ],

then the characteristic equation is

det(A − λI) = det [ a11−λ   a12   ...   a1n  ]
                  [   0    a22−λ  ...   a2n  ]
                  [   .      .    ..     .   ]
                  [   0      0    ...  ann−λ ]
            = (a11 − λ)(a22 − λ) ... (ann − λ) = 0

⇒ a11, a22, ..., ann are the eigenvalues of A.

Example. Find the eigenvalues of

A = [ 3  2  3  ]
    [ 0  6  10 ]
    [ 0  0  2  ].

Solution. det(A − λI) = det [ 3−λ  2  3 ; 0  6−λ  10 ; 0  0  2−λ ]. Thus the characteristic equation is (3 − λ)(6 − λ)(2 − λ) = 0 ⇒ the eigenvalues are 3, 6, 2.

Example. Suppose λ is an eigenvalue of A. Determine an eigenvalue of A² and of A³. What is an eigenvalue of Aⁿ?



Solution. Since λ is an eigenvalue of A, there exists a nonzero vector x such that Ax = λx ⇒ A(Ax) = A(λx) = λAx = λ²x. Therefore λ² is an eigenvalue of A². Analogously for A³: we have Ax = λx and A²x = λ²x ⇒ A³x = A(A²x) = A(λ²x) = λ²Ax = λ³x. Thus λ³ is an eigenvalue of A³. In general, λⁿ is an eigenvalue of Aⁿ.
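This is easy to check numerically. A small illustrative Python sketch (the matrix A = [[1, 2], [2, 1]] with eigenpair λ = 3, x = (1, 1) is our own choice, not from the notes):

```python
def matvec(A, x):
    """Matrix-vector product for a list-of-lists matrix."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# A has eigenvector x = (1, 1) with eigenvalue 3 (both row sums equal 3).
A = [[1, 2], [2, 1]]
x = [1, 1]

# Apply A three times: the result should be 3**3 * x = (27, 27).
y = x
for _ in range(3):
    y = matvec(A, y)
print(y)  # -> [27, 27]
```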
1.2 Similar Matrices

Definition. An n × n matrix B is called similar to a matrix A if there exists an invertible matrix P such that B = P⁻¹AP.

Theorem. If n × n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues.

Proof. If B = P⁻¹AP, then B − λI = P⁻¹AP − λP⁻¹P = P⁻¹(AP − λP) = P⁻¹(A − λI)P. Using the multiplicative property of the determinant, we have det(B − λI) = det(P⁻¹(A − λI)P) = det(P⁻¹) det(A − λI) det(P) = det(A − λI). Hence the matrices A and B have the same eigenvalues.

Theorem (Cayley–Hamilton). (Without proof; try to prove it as an exercise.) If P(λ) = det(A − λI) = (−1)ⁿλⁿ + c_{n−1}λⁿ⁻¹ + … + c₁λ + c₀, then P(A) = (−1)ⁿAⁿ + c_{n−1}Aⁿ⁻¹ + … + c₁A + c₀I = 0.
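For a 2 × 2 matrix the theorem says A² − Tr(A)·A + det(A)·I = 0, since P(λ) = λ² − Tr(A)λ + det(A). A minimal Python check using the matrix from Section 1.1:

```python
def matmul(A, B):
    """Matrix product for list-of-lists matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 1], [-6, 5]]
tr = A[0][0] + A[1][1]                       # Tr(A) = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # det(A) = 6
A2 = matmul(A, A)

# P(A) = A^2 - tr(A)*A + det(A)*I should be the zero matrix.
P_of_A = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
           for j in range(2)] for i in range(2)]
print(P_of_A)  # -> [[0, 0], [0, 0]]
```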
1.3 Algebraic and Geometric Multiplicity of Eigenvalues

Definition. The algebraic multiplicity of an eigenvalue λ is its multiplicity as a root of the characteristic equation (written multa(λ)).

Example. Find the characteristic polynomial of

A = [ 2  0  0  0 ]
    [ 5  3  0  0 ]
    [ 9  1  3  0 ]
    [ 1  2  5  1 ]

and find its eigenvalues with their algebraic multiplicities.

Solution. The characteristic equation is

det(A − λI) = det [ 2−λ   0    0    0  ]
                  [  5   3−λ   0    0  ]
                  [  9    1   3−λ   0  ]
                  [  1    2    5   1−λ ]
            = (2 − λ)(3 − λ)(3 − λ)(1 − λ) = 0.

Thus the eigenvalues are λ₁ = 2, λ₂,₃ = 3 and λ₄ = 1. The algebraic multiplicity of λ = 3 is 2, i.e. multa(3) = 2.

Definition. The eigenspace E_λ consists of the zero vector and all eigenvectors corresponding to an eigenvalue λ.

Definition. The geometric multiplicity of an eigenvalue λ is the dimension of the corresponding eigenspace E_λ (written multg(λ)). Recall that the dimension of a vector space equals the number of vectors in any of its bases, i.e. the maximal number of linearly independent vectors it contains.

Example. Find the eigenvalues and their algebraic and geometric multiplicities for

A = [ 0  1  1 ]
    [ 1  0  1 ]
    [ 1  1  0 ].

Solution. The characteristic equation is

det(A − λI) = det [ −λ   1   1 ]
                  [  1  −λ   1 ]
                  [  1   1  −λ ]
            = −λ³ + 3λ + 2 = −(λ − 2)(λ + 1)² = 0.

So the eigenvalues are λ₁ = 2, λ₂,₃ = −1. Solving the equation (A − λᵢI)x = 0 for i = 1, 2, 3 we find that

E_{λ=2} = Span{ (1, 1, 1)ᵀ },

E_{λ=−1} = Span{ (−1, 0, 1)ᵀ, (0, −1, 1)ᵀ }.

Thus multa(2) = multg(2) = 1 and multa(−1) = multg(−1) = 2.

Example. Find the eigenvalues and their algebraic and geometric multiplicities for

A = [ 0  0   1 ]
    [ 1  0  −3 ]
    [ 0  1   3 ].

Solution. The characteristic equation is

det(A − λI) = det [ −λ   0    1  ]
                  [  1  −λ   −3  ]
                  [  0   1  3−λ  ]
            = −λ³ + 3λ² − 3λ + 1 = −(λ − 1)³ = 0.

So the eigenvalues are λ₁,₂,₃ = 1. Solving the equation (A − λᵢI)x = 0 for i = 1, 2, 3 we find that

E_{λ=1} = Span{ (1, −2, 1)ᵀ }.

Thus multa(1) = 3 and multg(1) = 1.
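The geometric multiplicity can be computed as n − rank(A − λI), since the eigenspace is the null space of A − λI. A small illustrative Python sketch (the `rank` helper is our own, using exact fractions to avoid rounding error in the elimination):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix via Gaussian elimination over exact fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]       # swap pivot row into place
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A - 1*I for the last example; geometric multiplicity = 3 - rank.
AmI = [[-1, 0, 1], [1, -1, -3], [0, 1, 2]]
print(3 - rank(AmI))  # -> 1, i.e. multg(1) = 1
```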

Multiplicity Theorem. For any eigenvalue λᵢ, i = 1, 2, …, n, of an n × n matrix A, multg(λᵢ) ≤ multa(λᵢ).

Proof. Let λᵢ be an eigenvalue of A. Let B_{λᵢ} = {v₁, …, v_m} be a basis of the corresponding eigenspace E_{λᵢ}, where multg(λᵢ) = m. Note that each v_j in B_{λᵢ} is an eigenvector of A corresponding to λᵢ. Thus Av_j = λᵢv_j, j = 1, 2, …, m.

Extend B_{λᵢ} to form a basis B = {v₁, …, v_m, v_{m+1}, …, v_n}. Note that B is a basis of the whole n-dimensional space (ℝⁿ or ℂⁿ), while B_{λᵢ} is only a basis of the eigenspace corresponding to the eigenvalue λᵢ. Note also that the eigenspace E_{λᵢ} is only a subspace of the whole n-dimensional space (E_{λᵢ} ⊆ ℝⁿ or E_{λᵢ} ⊆ ℂⁿ).

Let Q = (v₁ | v₂ | … | v_m | v_{m+1} | … | v_n) be the matrix whose columns are the vectors v₁, …, v_m, v_{m+1}, …, v_n of the basis B. Since these vectors are linearly independent, the matrix Q is invertible. Notice that v_j = Qe_j, where e_j = (0, 0, …, 0, 1, 0, …, 0)ᵀ with the 1 in position j; such a vector e_j is called the j-th coordinate (unit) vector. Now, using the definition of an eigenvalue: Q⁻¹Av_j = Q⁻¹λᵢv_j = λᵢQ⁻¹v_j = λᵢe_j, j = 1, 2, …, m.
Thus

Ã = Q⁻¹AQ = Q⁻¹A(v₁ | … | v_m | v_{m+1} | … | v_n)
  = (λᵢe₁ | λᵢe₂ | … | λᵢe_m | Q⁻¹Av_{m+1} | … | Q⁻¹Av_n)
  = [ λᵢI_m  C ]
    [   0    D ],

where I_m is the m × m identity matrix. The matrix Ã is similar to A, since Ã = Q⁻¹AQ.

Hence, using the property of determinants for block-triangular matrices (see Assignment 2),

P_A(λ) = P_Ã(λ) = det(Ã − λI_n) = det((λᵢ − λ)I_m) · det(D − λI_{n−m}) = (λᵢ − λ)ᵐ P_D(λ).

Here I_n and I_{n−m} are the n × n and (n − m) × (n − m) identity matrices, respectively. Thus the characteristic polynomial P_A(λ) has λᵢ as a root of degree at least m, where m = multg(λᵢ) ⇒ multg(λ) ≤ multa(λ).

Example. Find the eigenvalues and eigenspace of

A = [ 1  0  0  0 ]
    [ 0  1  0  0 ]
    [ 0  0  1  1 ]
    [ 0  0  0  1 ].

Determine the algebraic and geometric multiplicities of the eigenvalue.

Solution. Since the matrix A is upper-triangular, its only eigenvalue is λ₁,₂,₃,₄ = 1. Thus multa(λ) = 4. To find the eigenspace and the geometric multiplicity we need to solve the equation (A − 1I)x = 0 and find a basis for the null space of (A − 1I):

A − 1I = [ 0  0  0  0 ]
         [ 0  0  0  0 ]
         [ 0  0  0  1 ]
         [ 0  0  0  0 ].

Hence the solution is: x₁, x₂ and x₃ are arbitrary numbers, x₄ = 0. Thus we can choose 3 linearly independent vectors, for example,

{ (1, 0, 0, 0)ᵀ, (0, 1, 0, 0)ᵀ, (0, 0, 1, 0)ᵀ }.

Therefore multg(λ) = 3. Notice that the eigenspace E₁ is a 3-dimensional hyperplane in ℝ⁴.


Example. Suggest 4 × 4 matrices with eigenvalue λ = 1 and multa(1) = 4 for the other possible values of multg(1).

Solution. From the Multiplicity Theorem we have the following options: multg(λ) = 1, multg(λ) = 2 and multg(λ) = 4. From the previous example it is easy to see that if

A = I = [ 1  0  0  0 ]
        [ 0  1  0  0 ]
        [ 0  0  1  0 ]
        [ 0  0  0  1 ],

then multa(1) = 4 and A − 1I = 0 (the zero matrix). Hence the solution of (A − 1I)x = 0 is: x₁, x₂, x₃ and x₄ are arbitrary numbers. Thus we can choose 4 linearly independent vectors, for example,

{ (1, 0, 0, 0)ᵀ, (0, 1, 0, 0)ᵀ, (0, 0, 1, 0)ᵀ, (0, 0, 0, 1)ᵀ }.

Therefore multg(λ) = 4. Notice that in this case the eigenspace E₁ coincides with ℝ⁴.

To get multg(λ) = 2, it is possible to think backward and choose a matrix A such that the null space of (A − 1I) has only 2 linearly independent vectors. This would imply that x₃ = 0 and x₄ = 0 while x₁ and x₂ are arbitrary. For example,

A − 1I = [ 0  0  0  0 ]
         [ 0  0  1  0 ]
         [ 0  0  0  1 ]
         [ 0  0  0  0 ],

hence

A = [ 1  0  0  0 ]
    [ 0  1  1  0 ]
    [ 0  0  1  1 ]
    [ 0  0  0  1 ].

Notice that

A = [ 1  0  0  0 ]
    [ 0  1  α  0 ]
    [ 0  0  1  β ]
    [ 0  0  0  1 ]

also works for arbitrary α, β ≠ 0.

Exercise. Find a 4 × 4 matrix with eigenvalue λ = 1, multa(1) = 4 and multg(1) = 3.


1.4 Trace of a Matrix

Definition. The trace of an n × n matrix A is defined to be Tr(A) = Sp(A) = Σᵢ₌₁ⁿ aᵢᵢ, i.e. the sum of the diagonal elements. (Tr is English; Sp is German, from Spur.)

Properties.
Tr(A) = Tr(Aᵀ)
Tr(αA) = α Tr(A)
Tr(A + B) = Tr(A) + Tr(B)
Tr(AB) = Tr(BA)
Proof as an exercise.

Theorem. Let A be an n × n matrix and λ₁, λ₂, …, λₙ be its eigenvalues. Then

Tr(A) = Σᵢ₌₁ⁿ λᵢ  and  det(A) = Πᵢ₌₁ⁿ λᵢ.

Proof. Let us for simplicity assume that A is similar to a diagonal matrix D = diag{λ₁, λ₂, …, λₙ}, so A = P⁻¹DP. From the properties of the trace,

Tr(A) = Tr(P⁻¹DP) = Tr(PP⁻¹D) = Tr(D) = Σᵢ₌₁ⁿ λᵢ.

From the properties of determinants,

det(A) = det(P⁻¹DP) = det(P⁻¹) det(D) det(P) = det(D) = Πᵢ₌₁ⁿ λᵢ.
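The theorem is easy to check numerically for the 2 × 2 example of Section 1.1, where A = [[0, 1], [−6, 5]] has eigenvalues 2 and 3 (a quick illustrative check, not part of the original notes):

```python
A = [[0, 1], [-6, 5]]
eigs = [2, 3]  # eigenvalues found earlier for this matrix

trace = A[0][0] + A[1][1]                      # 0 + 5 = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # 0 - (-6) = 6

print(trace == sum(eigs))          # -> True  (5 = 2 + 3)
print(det == eigs[0] * eigs[1])    # -> True  (6 = 2 * 3)
```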

Example. Find the eigenvalues of

A = [ a  a ]
    [ a  a ]

without calculation.

Solution. Notice that det(A) = λ₁λ₂ = 0 and Tr(A) = λ₁ + λ₂ = 2a ⇒ λ₁ = 0 and λ₂ = 2a.
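A quick numerical check of this trick (illustrative Python with a = 3; the eigenvectors (1, −1) and (1, 1) are easy to guess for this symmetric matrix and are our own addition):

```python
def matvec(M, x):
    """Matrix-vector product for a list-of-lists matrix."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

a = 3
A = [[a, a], [a, a]]

print(matvec(A, [1, -1]))  # -> [0, 0]  (eigenvalue 0)
print(matvec(A, [1, 1]))   # -> [6, 6]  (eigenvalue 2a = 6)
```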


1.5 Diagonalization

Definition. A matrix is diagonal if all of its nonzero entries lie on the main diagonal.

Example.

I = [ 1  0  ...  0 ]        D = [ 1  0  0 ]
    [ 0  1  ...  0 ]            [ 0  2  0 ]
    [ .  .  ..   . ]            [ 0  0  3 ].
    [ 0  0  ...  1 ],

If a matrix is diagonal, it is trivial to compute Dᵏ, det D, etc.

Example 1. Let

D = [ 2  0 ]
    [ 0  3 ].

Compute D², D³, and Dᵏ, k ∈ ℕ.

Solution.

D² = [ 2  0 ] [ 2  0 ] = [ 4  0 ] = [ 2²  0  ]
     [ 0  3 ] [ 0  3 ]   [ 0  9 ]   [ 0   3² ].

Analogously,

D³ = [ 2³  0  ]
     [ 0   3³ ].

In general,

Dᵏ = [ 2ᵏ  0  ]
     [ 0   3ᵏ ].

Example 2. Let A = PDP⁻¹; find a formula for Aᵏ, k ∈ ℕ.

Solution. By induction: A² = (PDP⁻¹)(PDP⁻¹) = PD²P⁻¹. If the formula is true for (k − 1), then Aᵏ = (PDP⁻¹)(PD^(k−1)P⁻¹) = PDᵏP⁻¹.

Definition. An n × n matrix A is said to be diagonalizable if A is similar to a diagonal matrix, i.e. ∃ an invertible P such that A = PDP⁻¹, where D is diagonal.

Theorem. An n × n matrix A is diagonalizable iff A has n linearly independent eigenvectors. In fact, A = PDP⁻¹ with D a diagonal matrix iff the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are the eigenvalues of A corresponding to the eigenvector columns of P.
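The formula Aᵏ = PDᵏP⁻¹ from Example 2 can be illustrated numerically. A minimal Python sketch (the matrix A = [[1, 2], [2, 1]] with eigenvalues 3 and −1 and eigenvectors (1, 1), (1, −1) is our own illustrative choice; P⁻¹ is written out by hand):

```python
def matmul(A, B):
    """Matrix product for list-of-lists matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A = [[1, 2], [2, 1]] = P D P^{-1} with D = diag(3, -1).
P = [[1, 1], [1, -1]]
P_inv = [[0.5, 0.5], [0.5, -0.5]]
k = 5
Dk = [[3 ** k, 0], [0, (-1) ** k]]   # D^k is computed entrywise

Ak = matmul(matmul(P, Dk), P_inv)
print(Ak)  # -> [[121.0, 122.0], [122.0, 121.0]]

# Cross-check by repeated multiplication.
A = [[1, 2], [2, 1]]
B = A
for _ in range(k - 1):
    B = matmul(B, A)
print(B)   # -> [[121, 122], [122, 121]]
```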


Proof. (⇒): Given A = PDP⁻¹. Notice that if P is an n × n matrix with columns v₁, …, vₙ, and if D is any diagonal matrix with diagonal entries λ₁, …, λₙ, then

AP = A(v₁ | v₂ | … | vₙ) = (Av₁ | Av₂ | … | Avₙ),

while

PD = P [ λ₁  0  ...  0  ]
       [ 0   λ₂ ...  0  ]  = (λ₁v₁ | λ₂v₂ | … | λₙvₙ).
       [ .   .  ..   .  ]
       [ 0   0  ...  λₙ ]

If A = PDP⁻¹ ⇒ AP = PD ⇒ (Av₁ | Av₂ | … | Avₙ) = (λ₁v₁ | λ₂v₂ | … | λₙvₙ) ⇒ Av₁ = λ₁v₁, Av₂ = λ₂v₂, …, Avₙ = λₙvₙ. Since P is invertible, v₁, v₂, …, vₙ are linearly independent nonzero vectors. Hence, by definition, λ₁, λ₂, …, λₙ are eigenvalues of A and v₁, v₂, …, vₙ are the corresponding eigenvectors of A.

(⇐): Given n linearly independent eigenvectors v₁, v₂, …, vₙ, use them to construct P, and use the n eigenvalues λ₁, λ₂, …, λₙ (not necessarily distinct) to construct a diagonal matrix D. Then by the definition of eigenvalues and eigenvectors, Av₁ = λ₁v₁, Av₂ = λ₂v₂, …, Avₙ = λₙvₙ ⇒ (Av₁ | Av₂ | … | Avₙ) = (λ₁v₁ | λ₂v₂ | … | λₙvₙ) ⇒ AP = PD. Since P is invertible (all of v₁, v₂, …, vₙ are linearly independent), A = PDP⁻¹.

Definition. Linearly independent eigenvectors v₁, v₂, …, vₙ form an eigenvector basis of ℝⁿ.

Example. Diagonalize

A = [  2  0  0 ]
    [  1  2  1 ]
    [ −1  0  1 ]

if possible.

1) Find the eigenvalues of A:

det(A − λI) = det [ 2−λ   0    0  ]
                  [  1   2−λ   1  ]
                  [ −1    0   1−λ ]
            = (2 − λ)²(1 − λ) = 0.

Thus λ₁ = 1, λ₂,₃ = 2 are the eigenvalues.

2) Find three linearly independent eigenvectors if possible. By solving (A − λᵢI)x = 0, i = 1, 2, 3, we get

v₁ = (0, −1, 1)ᵀ,  v₂ = (0, 1, 0)ᵀ,  v₃ = (−1, 0, 1)ᵀ,

corresponding to λ₁ = 1 and λ₂,₃ = 2. The vectors v₁, v₂, v₃ are clearly linearly independent and form a basis.

3) Construct P from v₁, v₂, v₃:

P = [  0  0  −1 ]
    [ −1  1   0 ]
    [  1  0   1 ].

4) Construct D from the corresponding eigenvalues λ₁, λ₂, λ₃:

D = [ 1  0  0 ]
    [ 0  2  0 ]
    [ 0  0  2 ].

5) Check the results by verifying that AP = PD:

AP = [  2  0  0 ] [  0  0  −1 ]   [  0  0  −2 ]
     [  1  2  1 ] [ −1  1   0 ] = [ −1  2   0 ],
     [ −1  0  1 ] [  1  0   1 ]   [  1  0   2 ]

PD = [  0  0  −1 ] [ 1  0  0 ]   [  0  0  −2 ]
     [ −1  1   0 ] [ 0  2  0 ] = [ −1  2   0 ].
     [  1  0   1 ] [ 0  0  2 ]   [  1  0   2 ]

Example. Diagonalize

A = [ 2  4  6 ]
    [ 0  2  2 ]
    [ 0  0  4 ]

if possible.


2) Solve (A i I )x = 0, i = 1, 2, 3, and nd the eigenbasis. 5 Eigenvector for 1 = 4 is v = 1, 1 1 = 2 is v = 0. 0

Eigenvector for 2,3

Thus dimension of eigenspace corresponding to 2,3 = 1 is 1 (multg (2) = 1 and multa (2) = 1) P is singular (it is not enough eigenvectors to form a basis for R3 ) A is not diagonalizable. 5 0 4 Example. Diagonalize A = 0 3 1 if possible. 0 0 2 Solution. Since it is a triangular matrix, the e.v. are 1 = 5, 1 = 5, 3 = 2. For each 1 , 2 , 3 we can nd corresponding eigenvectors

21

4 1 0 7 . v1 = 0 , v2 = 1 , v3 = 1 5 0 0 1 Clearly v1 , v2 , v3 are linearly independent form an eigenvector basis in R3 P = v1 | v2 | v3 is invertible A is diagonalizable and D = diag(5, 3, 2).

Why are

A = [  2  0  0 ]
    [  1  2  1 ]    (λ₁ = 1, λ₂,₃ = 2)
    [ −1  0  1 ]

and

A = [ 5  0  4 ]
    [ 0  3  1 ]    (λ₁ = 5, λ₂ = 3, λ₃ = 2)
    [ 0  0  2 ]

diagonalizable?

Theorem 1. If v₁, v₂, …, vᵣ are eigenvectors corresponding to distinct eigenvalues λ₁, λ₂, …, λᵣ of an n × n matrix A, then v₁, v₂, …, vᵣ are linearly independent.

Proof. By contradiction. Suppose v₁, v₂, …, vᵣ are linearly dependent. Let p be the least index such that vₚ is a linear combination of the preceding (linearly independent) vectors v₁, …, v_{p−1} (p ≤ r). Then there exist c₁, c₂, …, c_{p−1}, not all zero, such that

c₁v₁ + c₂v₂ + … + c_{p−1}v_{p−1} = vₚ.    (*)

Multiply both sides of this equation by A:

c₁Av₁ + c₂Av₂ + … + c_{p−1}Av_{p−1} = Avₚ.

Using the fact that Avᵢ = λᵢvᵢ, we get

c₁λ₁v₁ + c₂λ₂v₂ + … + c_{p−1}λ_{p−1}v_{p−1} = λₚvₚ.    (**)

Multiply (*) by λₚ and subtract the result from (**):

c₁(λ₁ − λₚ)v₁ + c₂(λ₂ − λₚ)v₂ + … + c_{p−1}(λ_{p−1} − λₚ)v_{p−1} = λₚvₚ − λₚvₚ = 0.

Since by construction v₁, v₂, …, v_{p−1} are linearly independent, by the definition of linear independence

c₁(λ₁ − λₚ) = c₂(λ₂ − λₚ) = … = c_{p−1}(λ_{p−1} − λₚ) = 0.

But some cᵢ ≠ 0, while λᵢ − λₚ ≠ 0 for every i < p ⇒ contradiction, since by the statement of Theorem 1 the eigenvalues λ₁, λ₂, …, λᵣ are distinct.

Corollary. If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.

Proof. If A has distinct eigenvalues λ₁, λ₂, …, λₙ ⇒ the eigenvectors v₁, v₂, …, vₙ are linearly independent and form an eigenvector basis ⇒ P = (v₁ | v₂ | … | vₙ) is invertible and A is diagonalizable.

Diagonalization Theorem 2. An n × n matrix A is diagonalizable iff multg(λᵢ) = multa(λᵢ) for every eigenvalue λᵢ, i.e. iff the eigenvectors of A form a basis of ℝⁿ.

Proof. (⇐): Let multg(λᵢ) = multa(λᵢ) for each i ⇒ each eigenspace of dimension nᵢ has nᵢ linearly independent vectors, but


nᵢ = multa(λᵢ), and, summing over the distinct eigenvalues, multa(λ₁) + multa(λ₂) + … + multa(λ_k) = n

⇒ n₁ + n₂ + … + n_k = n ⇒ there are n linearly independent eigenvectors, which form an eigenvector basis of ℝⁿ ⇒ A is diagonalizable by the Theorem above.

(⇒): Let A be diagonalizable. Then by the Theorem above there exist n linearly independent eigenvectors v₁, v₂, …, vₙ, which form a matrix P such that A = PDP⁻¹ with D diagonal. Let B be the set {v₁, v₂, …, vₙ}. If all eigenvalues λ₁, λ₂, …, λₙ have algebraic multiplicity 1 (multa(λᵢ) = 1, i = 1, 2, …, n), then clearly multa(λᵢ) = multg(λᵢ) = 1 for every i, since the dimension of an eigenspace cannot be less than 1 (the trivial case).

(General case.) Assume for simplicity that there is only one eigenvalue λ_k such that multa(λ_k) = p, and that all other eigenvalues λᵢ, i ≠ k, have algebraic multiplicity 1 (multa(λᵢ) = 1).


Since eigenspaces corresponding to different eigenvalues intersect only at the zero vector, i.e. an eigenvector v_j corresponding to the eigenvalue λ_j does not belong to the eigenspace E_{λᵢ} corresponding to an eigenvalue λᵢ ≠ λ_j (verify this at home), the n − p eigenvectors corresponding to the eigenvalues λᵢ, i ≠ k, do not belong to E_{λ_k}. The p vectors which remain in B after removing them belong to the eigenspace E_{λ_k}. Note that these p vectors are linearly independent by the statement of the Theorem, so multg(λ_k) = dim E_{λ_k} ≥ p = multa(λ_k); combined with the Multiplicity Theorem (multg ≤ multa), this gives multg(λ_k) = multa(λ_k).

Example. Why is

A = [ 2  0  0 ]
    [ 2  6  0 ]
    [ 3  2  1 ]

diagonalizable?

Solution. A has 3 distinct eigenvalues λ₁ = 2, λ₂ = 6, λ₃ = 1 ⇒ A is diagonalizable.

Example. Diagonalize, if possible,

A = [  2   0   0   0 ]
    [  0   2   0   0 ]
    [ 24  12  −2   0 ]
    [  0   0   0  −2 ].

Solution. A has two eigenvalues, each of algebraic multiplicity 2: λ₁,₂ = 2 and λ₃,₄ = −2. Solve (A − λᵢI)x = 0 and find the eigenvectors.

A basis for λ₁,₂ = 2: v₁ = (1, 0, 6, 0)ᵀ, v₂ = (0, 1, 3, 0)ᵀ. Thus multg(2) = 2.

A basis for λ₃,₄ = −2: v₃ = (0, 0, 1, 0)ᵀ, v₄ = (0, 0, 0, 1)ᵀ. Thus multg(−2) = 2.

Hence, by the Diagonalization Theorem, since multa(2) = multg(2) = 2 and multa(−2) = multg(−2) = 2, the matrix A is diagonalizable.
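As in step 5) of the earlier diagonalization example, the conclusion can be verified by checking that AP = PD. A minimal Python sketch (assuming the matrix A and the eigenvectors v₁, …, v₄ as reconstructed in this example):

```python
def matmul(A, B):
    """Matrix product for list-of-lists matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 0, 0, 0],
     [0, 2, 0, 0],
     [24, 12, -2, 0],
     [0, 0, 0, -2]]
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [6, 3, 1, 0],
     [0, 0, 0, 1]]   # columns are v1, v2, v3, v4
D = [[2, 0, 0, 0],
     [0, 2, 0, 0],
     [0, 0, -2, 0],
     [0, 0, 0, -2]]

print(matmul(A, P) == matmul(P, D))  # -> True
```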

