
Applied Linear Algebra: Problem set-5

Instructor: Dwaipayan Mukherjee∗


Indian Institute of Technology Bombay, Mumbai 400076, India

1. What are all possible eigenvalues of (a) an orthogonal projection matrix, and (b) a square matrix whose kernel
is equal to its image?
2. Show that for a nilpotent matrix A ∈ Fn×n such that A^n = 0, the matrix A − I must be nonsingular.
3. For A ∈ Cn×n , if every vector in Cn is an eigenvector of A, show that A must be a scalar multiple of the
identity matrix.
4. Suppose for a symmetric matrix A ∈ Rn×n , U is an A-invariant subspace. Show that U⊥ must also be A-
invariant.
5. For a matrix, A ∈ Cn×n , show that if rank(A) = r, then A can have at most r + 1 distinct eigenvalues.
6. For A ∈ Rn×n, B ∈ Rn×m and C ∈ Rp×n, show that the following subspaces are A-invariant: (a) C =
im([B AB . . . A^{n−1}B]), and (b) Ō = ker([C^T A^T C^T . . . (A^{n−1})^T C^T]^T).
7. Let W ⊆ Rn be an A-invariant subspace such that ⟨w1 , w2 , . . . , wk ⟩ = W. Show that ∃ S ∈ Rk×k such that
AW = WS, where W = [w1 w2 . . . wk]. Further, prove that A and S share at least r eigenvalues, where
r = rank(W ).
8. For β ∈ C, which is not an eigenvalue of A ∈ Cn×n , show that v ∈ Cn is an eigenvector of A if and only if it is
an eigenvector of (A − βI)−1 .
9. Show that for a matrix A ∈ Fn×n, with n distinct eigenvalues, there are 2^n A-invariant subspaces.
10. The spectral norm of a matrix A ∈ Rm×n is defined as ||A||2 := max_{x≠0} ||Ax||2 / ||x||2, where x ∈ Rn and the
vector norms in Rn and Rm result from the conventional inner products on those spaces. If the non-zero singular
values of A are given by σ1 ≥ σ2 ≥ . . . ≥ σr > 0 (r ≤ min(m, n)), then prove that ||A||2 = σ1.
11. The Frobenius norm of a matrix A ∈ Rm×n is given by ||A||F := (Σ_{i=1}^{m} Σ_{j=1}^{n} |aij|^2)^{1/2}. If the non-zero singular
values of A are given by σ1 ≥ σ2 ≥ . . . ≥ σr > 0 (r ≤ min(m, n)), then prove that ||A||F = (σ1^2 + σ2^2 + . . . + σr^2)^{1/2}.
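A quick numerical check of these two definitions (an illustration only, not part of the exercise; it assumes NumPy is available and uses an arbitrary random matrix): the maximum of ||Ax||2/||x||2 over many random directions approaches σ1 from below, and the entry-wise Frobenius sum matches the sum of squared singular values.

```python
import numpy as np

# Illustrative check of Problems 10-11 (not a proof); A is a random example.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

sigma = np.linalg.svd(A, compute_uv=False)      # singular values, sorted descending

# Spectral norm: maximize ||Ax|| / ||x|| over many random directions x.
xs = rng.standard_normal((3, 10000))
ratios = np.linalg.norm(A @ xs, axis=0) / np.linalg.norm(xs, axis=0)
print(ratios.max(), sigma[0])                   # close to sigma_1, from below

# Frobenius norm: entry-wise definition vs. (sum of squared singular values)^(1/2).
print(np.sqrt((A**2).sum()), np.sqrt((sigma**2).sum()))
```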
 
12. For a matrix A ∈ Rm×n whose SVD is given by A = U [Σr×r 0; 0 0] V^T, the Moore-Penrose pseudoinverse is
given by A† = V [Σr×r^{−1} 0; 0 0] U^T. Prove the following:
(a) A^{−1} = A† when A is invertible
(b) (A†)† = A
(c) (A†)^T = (A^T)†
(d) A† = (A^T A)^{−1} A^T when A has full column rank, and A† = A^T (AA^T)^{−1} when A has full row rank.
(e) (A^T A)† = A†(A^T)† and (AA^T)† = (A^T)† A†
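As a numerical illustration of the pseudoinverse construction in Problem 12 (a sketch only, assuming NumPy; the rank-deficient matrix below is an arbitrary example, not from the problem), the code builds A† directly from the SVD blocks and compares it with np.linalg.pinv:

```python
import numpy as np

# Sketch of the SVD-based pseudoinverse definition (illustration, not a proof).
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])          # rank-deficient example (rank 2)

U, s, Vt = np.linalg.svd(A)
r = np.sum(s > 1e-10)                  # numerical rank
S_pinv = np.zeros((A.shape[1], A.shape[0]))
S_pinv[:r, :r] = np.diag(1.0 / s[:r])  # the block [Sigma_r^{-1} 0; 0 0]

A_dagger = Vt.T @ S_pinv @ U.T
print(np.allclose(A_dagger, np.linalg.pinv(A)))   # True
print(np.allclose(A @ A_dagger @ A, A))           # one of the Penrose identities
```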
13. (a) Argue why complex matrices A and A^H (for matrices over C, (·)^H denotes the conjugate transpose of a
matrix) must have eigenvalues that are conjugates of one another.
(b) If u and v are eigenvectors of complex matrices A and A^H, respectively, for two eigenvalues that are not
complex conjugates of one another, show that u and v must be orthogonal to each other.
(c) For a matrix A ∈ Rn×n, prove that if λ is an eigenvalue with algebraic multiplicity of one (also called
a simple eigenvalue), the left and right eigenvectors corresponding to λ cannot be orthogonal, irrespective of
whether A is diagonalizable or not (a left eigenvector is defined as v ≠ 0 such that v^*A = λv^* for some eigenvalue
λ).
14. Suppose we have an ordered basis, B = {v1 , v2 , . . . , vn }, for V such that ⟨v1 , v2 , . . . , vk ⟩ is A-invariant for
all k = 1, 2, . . . , n. What can you say about the structure of the matrix representation of a linear operator
φ : V → V, given by [φ]B ?
∗ Asst. Professor, Electrical Engineering, Office: EE 214D, e-mail: [email protected]

15. For a vector space V over C and a polynomial p ∈ C[s], consider a linear operator A : V → V and z ∈ C. Show
that z is an eigenvalue of p(A) if and only if z = p(λ) for some eigenvalue, λ, of A.
16. For a linear operator A : V → V on a finite dimensional vector space, a positive integer m, and a vector v ∈ V,
suppose A^{m−1}v ≠ 0, but A^m v = 0. Show that {v, Av, A^2 v, . . . , A^{m−1}v} is linearly independent.
17. Prove that A = uv^T ∈ Rn×n, for u, v ∈ Rn, is diagonalizable if and only if v^T u ≠ 0.
18. Using the principle of mathematical induction, show that a set of eigenvectors corresponding to distinct eigen-
values is linearly independent.
19. If A ∈ Cn×n has a set of n orthonormal eigenvectors, it must be a Hermitian matrix. Prove it or give a
counterexample if it is not true.
20. Which of the following are ideals and why (or why not)?
(a) {all polynomials in C[x] having the constant term equal to zero}
(b) {all polynomials in C[x] containing only even degree terms}
(c) {0, 2, 4} ⊆ Z6
(d) {all polynomials in Z[x] with even coefficients}
(e) R ⊆ R[x]
21. Explain why for the Jordan form representation of a nilpotent matrix, N ∈ Cn×n, the number of Jordan blocks
of size i × i or greater is equal to rank(N^{i−1}) − rank(N^i).
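A small numerical illustration of this counting formula (not an explanation; it assumes NumPy and SciPy, and the block sizes below are an arbitrary example):

```python
import numpy as np
from scipy.linalg import block_diag

# Illustration of the block-counting formula in Problem 21 (not a proof).
# Nilpotent Jordan blocks: ones on the superdiagonal, zeros elsewhere.
def jordan_nilpotent(size):
    return np.eye(size, k=1)

sizes = [3, 2, 2, 1]                                  # example block sizes
N = block_diag(*[jordan_nilpotent(k) for k in sizes])

for i in range(1, max(sizes) + 1):
    lhs = np.linalg.matrix_rank(np.linalg.matrix_power(N, i - 1)) \
          - np.linalg.matrix_rank(np.linalg.matrix_power(N, i))
    rhs = sum(1 for k in sizes if k >= i)             # blocks of size >= i
    print(i, lhs, rhs)                                # the two counts agree for every i
```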
22. For a non-zero vector v ∈ Cn, and a matrix A ∈ Cn×n, the sequence of vectors {v, Av, A^2 v, . . . , A^j v} is called
a Krylov sequence (named after the Russian mathematician Alexei Nikolaevich Krylov), and the subspace Kj
spanned by vectors in the sequence is called a Krylov subspace. Suppose a Krylov subspace for vector v, say
K_{n−1}, spans Cn, while the characteristic polynomial for A is given by χA(x) = Σ_{i=0}^{n} αi x^i. Represent the
matrix A in terms of the ordered basis given by the corresponding Krylov sequence. What is the minimal
polynomial, µA(x), and why?
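A hedged numerical experiment related to this problem (assuming NumPy; it does not state the answer, it only lets one inspect the structure that appears for a random example):

```python
import numpy as np

# Represent A with respect to the Krylov basis {v, Av, ..., A^{n-1} v} and
# look at the structure of the resulting matrix (illustration only).
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

K = np.column_stack([np.linalg.matrix_power(A, i) @ v for i in range(n)])
assert np.linalg.matrix_rank(K) == n        # the Krylov sequence spans C^n (generically true)

A_in_krylov_basis = np.linalg.solve(K, A @ K)   # [A] in the Krylov basis
print(np.round(A_in_krylov_basis, 3))
```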
23. For A ∈ Rn×n, suppose {w1, w2, . . . , wn} is a set of n linearly independent eigenvectors and for w = Σ_{k=1}^{n} k wk,
we have Aw = Σ_{k=1}^{n} (k^2 + k) wk. Then the minimal polynomial, µA(x), and characteristic polynomial, χA(x),
of A are identical. Prove or disprove this assertion.
24. Is Z[x] a PID? Justify your answer.
25. Show that for any matrix A ∈ Cn×n, we have lim_{k→∞} A^k = 0 if and only if the spectral radius (largest modulus
among all eigenvalues) of A is less than unity.
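A quick numerical illustration of this dichotomy (assuming NumPy; the matrix is random and rescaled to sit on either side of the threshold):

```python
import numpy as np

# Powers of a matrix shrink or blow up according to whether its spectral
# radius is below or above one (illustration of Problem 25, not a proof).
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
rho = max(abs(np.linalg.eigvals(A)))        # spectral radius of A

for scale in (0.9 / rho, 1.1 / rho):        # rescale to spectral radius 0.9 and 1.1
    B = scale * A
    norms = [np.linalg.norm(np.linalg.matrix_power(B, k)) for k in (1, 10, 50, 200)]
    print(round(max(abs(np.linalg.eigvals(B))), 2), [f"{x:.2e}" for x in norms])
```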
26. Show that a matrix is nilpotent if and only if all its eigenvalues are zero. Can a non-zero diagonalizable matrix
be nilpotent? Justify your answer. Suppose for N ∈ Rn×n, we have trace(N^k) = 0 ∀ k ∈ Z+. Show that N
must be nilpotent.
27. Consider two diagonalizable matrices, A, B ∈ Rn×n . Show that the following statements are equivalent: 1.
AB = BA, and 2. There exists an invertible matrix S ∈ Rn×n such that S −1 AS and S −1 BS are both diagonal.
 
28. Evaluate cos(A), when A = [−π/2 π/2; π/2 −π/2].
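A way to cross-check a hand computation for this problem numerically (a sketch assuming NumPy; it applies cos to the eigenvalues via the spectral decomposition of the symmetric matrix, which is one standard way to define cos of a matrix):

```python
import numpy as np

# Numerical cross-check for Problem 28: cos(A) via the spectral decomposition
# of the symmetric matrix A (cos acts on the eigenvalues).
A = np.array([[-np.pi / 2,  np.pi / 2],
              [ np.pi / 2, -np.pi / 2]])

eigvals, Q = np.linalg.eigh(A)                 # A = Q diag(eigvals) Q^T
cos_A = Q @ np.diag(np.cos(eigvals)) @ Q.T
print(np.round(cos_A, 6))
```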

29. Show that e^A is an orthogonal matrix whenever A = −A^T.
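A quick sanity check of this statement on a random example (assuming NumPy and SciPy; not a proof):

```python
import numpy as np
from scipy.linalg import expm

# The exponential of a randomly generated skew-symmetric matrix is orthogonal
# to machine precision (illustration of Problem 29).
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M - M.T                              # A = -A^T by construction

Q = expm(A)
print(np.allclose(Q.T @ Q, np.eye(4)))   # True: Q^T Q = I
```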


30. (Population migration) Consider the current populations of two cities, A and B, to be given by a0 and b0 ,
respectively. We assume (quite unrealistically, of course!) that the total population of the two cities remains
constant year after year. Suppose every year a fraction 0 < pA < 1 of the population in city A migrates to city
B, while a fraction 0 < pB < 1 of the population in city B migrates to city A.
(a) Write down a discrete time representation for the population in the two cities after k + 1 years, in terms
of the population after k years and other relevant parameters, in the form [a(k + 1); b(k + 1)] = Φ [a(k); b(k)], where
[a(k); b(k)] ∈ R^2 denotes the population of the two cities after k years. Argue why Φ(·) is a linear map.
(b) Show that [a(k); b(k)] = Φ^k [a0; b0].
(c) Obtain the eigenvalues and eigenvectors of Φ(·) (or of [Φ(·)]B , i.e., the representation of Φ(·) under some
basis).
(d) What happens to the populations of the two cities in the long run (as k → ∞)? If the total initial popula-
tion remained the same, i.e., a0 + b0 = c, but we had instead started with â0 people in city A, and b̂0 people
in city B (with â0 + b̂0 = c), how would the final population be redistributed?
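A small simulation sketch of this migration model (assuming NumPy; the migration fractions and initial populations below are made-up illustrative values, not part of the problem):

```python
import numpy as np

# Simulation sketch for Problem 30; pA, pB, a0, b0 are hypothetical values.
pA, pB = 0.10, 0.05            # yearly migration fractions A -> B and B -> A
a0, b0 = 600_000, 400_000      # hypothetical initial populations

Phi = np.array([[1 - pA, pB],
                [pA, 1 - pB]])      # population update matrix (columns sum to 1)

x = np.array([a0, b0], dtype=float)
for k in range(100):                 # iterate the map for 100 years
    x = Phi @ x

print(x, x.sum())    # the populations settle toward a fixed ratio; the total is preserved
```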

31. Show that det(e^A) = e^{trace(A)}.
32. Show that whenever AB = BA, we have e^{A+B} = e^A e^B for A, B ∈ Rn×n. Provide a counterexample to this
assertion if AB ≠ BA.
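Numerical illustrations of these two statements (assuming NumPy and SciPy; the matrices are random examples, and the commuting matrix is built as a polynomial in A):

```python
import numpy as np
from scipy.linalg import expm

# Illustrations for Problems 31-32 (sketches only, not proofs).
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Problem 31: det(e^A) = e^{trace(A)}.
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))     # True

# Problem 32: for generic (non-commuting) A and B the product rule fails...
print(np.allclose(expm(A + B), expm(A) @ expm(B)))                 # generally False
# ...but it holds when the matrices commute, e.g. when B is a polynomial in A.
C = 2 * A + A @ A                                                  # commutes with A
print(np.allclose(expm(A + C), expm(A) @ expm(C)))                 # True
```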

33. (Competition vs cooperation)


(a) Consider two species, say P and Q, in an environment competing for the same resources. Thus, each
species' population increases in proportion to its current population and decreases in proportion to that of
its competitor. Suppose the constants of proportionality for both P and Q are 0.02 (for increase) and −0.01
(for decrease), respectively, while their initial populations are 10,000 and 20,000, respectively. Obtain the
expressions for the population evolution of the two species. Does either of the two species go extinct within some
finite time? If so, which one?
(b) Suppose two symbiotic species, say R and S, inhabit the same environment, so that each species' population
increases in proportion to the current population of its symbiotic neighbour, while it decreases in proportion
to its own current population. Further, suppose the two constants of proportionality (for increase and
decrease, respectively) are 0.01 and −0.01 for both species. If the initial populations of R and S are 10,000
and 20,000, respectively, obtain the expressions for the population evolution of the two species. What can you
say about the long-run trend of either population?
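A numerical sketch of the competition model in part (a), read here as the continuous-time linear system d/dt [p, q]^T = M [p, q]^T (this interpretation, and the use of the matrix exponential, are assumptions for illustration; the problem asks for closed-form expressions, which the sketch does not derive):

```python
import numpy as np
from scipy.linalg import expm

# Numerical sketch of the competition model in Problem 33(a), assuming the
# stated rates define a continuous-time linear system.
M = np.array([[0.02, -0.01],
              [-0.01, 0.02]])
x0 = np.array([10_000.0, 20_000.0])     # initial populations of P and Q

for t in (0, 50, 100, 150, 200):
    p, q = expm(M * t) @ x0
    print(t, round(p), round(q))        # watch which population crosses zero first
```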
34. For a symmetric matrix A ∈ Rn×n with zeros on its diagonal and aij ∈ {0, 1} for i ≠ j, show that the
largest eigenvalue, λM, is bounded as mini Rowsumi(A) ≤ λM ≤ maxi Rowsumi(A), where Rowsumi denotes
the sum of the entries of the ith row.
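A quick numerical check of this bound on a random example (assuming NumPy; not a proof):

```python
import numpy as np

# Row-sum bound of Problem 34 on a random 0/1 symmetric matrix with zero diagonal.
rng = np.random.default_rng(5)
n = 6
U = np.triu(rng.integers(0, 2, size=(n, n)), k=1)   # random 0/1 strictly upper part
A = U + U.T                                         # symmetric, zero diagonal

lam_max = np.linalg.eigvalsh(A).max()
row_sums = A.sum(axis=1)
print(row_sums.min() <= lam_max <= row_sums.max())  # True
print(row_sums.min(), round(lam_max, 3), row_sums.max())
```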
 
35. Obtain the minimal polynomial for the matrix A = [1 3 5; 2 4 7; 3 6 8]. Hence, evaluate its eigenvalues.

36. (Graph theory) The distance between two nodes u, v ∈ V in an undirected graph G = (V, E), denoted dG(u, v),
is the length of the shortest path between them (i.e., the number of edges in that path). The diameter of
an undirected graph G, denoted dia(G), is the maximum distance between any pair of vertices in G. Show
that for a connected, undirected graph G with n vertices, the adjacency matrix, A(G), satisfies the following
properties:
(a) If u, v are vertices of G with dG(u, v) = m, then In, A(G), A(G)^2, . . . , A(G)^m are linearly independent.
(b) If A(G) has k distinct eigenvalues and dia(G) = q, then k > q.
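A numerical illustration of both claims on one concrete graph (assuming NumPy; the path graph on 5 vertices, with diameter 4, is an arbitrary example):

```python
import numpy as np

# Illustration of Problem 36 on the path graph 0-1-2-3-4 (diameter 4): the powers
# I, A, ..., A^4 are checked for linear independence by flattening each power into
# a vector and computing the rank of the stack.
n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1              # edges of the path

m = 4                                           # d(0, 4) = 4 on this path
powers = [np.linalg.matrix_power(A, i).ravel() for i in range(m + 1)]
print(np.linalg.matrix_rank(np.vstack(powers)) == m + 1)    # True: independent
print(len(set(np.round(np.linalg.eigvalsh(A), 8))))          # k distinct eigenvalues, and k > 4
```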
37. Suppose the minimal polynomial of a matrix A ∈ Cn×n, say µA(x), admits a coprime factorization given by
µA(x) = (x − λ)^k p(x), while the algebraic multiplicity of the eigenvalue, λ, equals m. Show that dim(Ker((A −
λI)^k)) = m.
38. If A ∈ Cn×n is a diagonalizable matrix with characteristic polynomial χA(x) = (x − 1)^{k1}(x + 1)^{k2} x^{k3}, then the
rank of A can be increased by adding the identity matrix to A or subtracting it from A. Prove or disprove the assertion.
39. Suppose A ∈ Rn×n has a characteristic polynomial given by χA(x) = (x − λ1)^{k1}(x − λ2)^{k2} and rank(A − λ1I) =
n − k1. Then A must be diagonalizable. Prove it or give a counterexample if it is not true.

40. Prove that for an operator φ : V → V, with dim(V) = n < ∞, we have V = Ker(φ^n) ⊕ im(φ^n).

[I was advised] to read [Camille] Jordan’s ‘Cours d’analyse’; and I shall never forget the astonishment with which
I read that remarkable work, the first inspiration for so many mathematicians of my generation, and learnt for the
first time as I read it what mathematics really meant.
— G. H. Hardy, ‘A Mathematician’s Apology’ (1940)
