
Delhi School of Economics

M.A. ECONOMICS SUMMER SEMESTER 2011


COURSE 002. INTRODUCTORY MATHEMATICAL ECONOMICS
Midterm 1 Solutions
1. Prove the following:
(a) Find the matrix P that projects every vector b in R^3 onto the line in the direction of a = (2, 1, 3).
Solution: The general formula for the orthogonal projection onto the column space of a matrix A is

    P = A(A^T A)^{-1} A^T.
Here, A = [2 1 3]^T, so that

    P = (1/14) [ 4 2 6 ]
               [ 2 1 3 ]
               [ 6 3 9 ]
Remarks:
Since we're projecting onto a one-dimensional space, A^T A is just a number and we can write things like P = (A A^T)/(A^T A). This won't work in general.
You don't have to know the formula to do this. The ith column of P is, pretty much by definition, the projection of ei (e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1)) onto the line in the direction of a. And this is something you should know how to do without a formula.
RUBRIC: There was some leniency for computational errors, but otherwise there weren't many opportunities for partial credit.
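A minimal numpy sketch (not part of the original solution) that checks the computation in part (a): it rebuilds P from the rank-one formula P = a a^T / (a^T a) and confirms that the result matches (1/14)[4 2 6; 2 1 3; 6 3 9] and is idempotent, as a projection matrix must be.

import numpy as np

a = np.array([[2.0], [1.0], [3.0]])                      # the direction vector a as a column
P = (a @ a.T) / float(a.T @ a)                           # projection onto the line through a
expected = np.array([[4, 2, 6], [2, 1, 3], [6, 3, 9]]) / 14
print(np.allclose(P, expected))                          # True
print(np.allclose(P @ P, P))                             # True: P is idempotent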
(b) What are the column space and nullspace of P ? Describe them geometrically and also give a
basis for each space.
Solution: The column space is the line in R^3 in the direction of a = (2, 1, 3). One basis for it is [2 1 3]^T, and there's not really much choice in giving this basis (you can rescale by a non-zero constant).
The nullspace is the plane in R^3 that is perpendicular to a = (2, 1, 3) (i.e., 2x + y + 3z = 0).
One basis for it is {[3 0 -2]^T, [-1 2 0]^T}, though there are a lot of different-looking choices for it (any two vectors that are perpendicular to a and not on the same line will work).
RUBRIC: 6 points for giving a correct basis, and 4 points for giving the complete geometric description. Note that it is not correct to say, e.g., N(P) = R^2. It is correct to say that N(P) is a (2-dimensional) plane in R^3, but this is not a complete geometric description unless you say (geometrically) which plane it is: the one perpendicular to a / to the line through a.
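As a quick sketch (assuming the P computed in part (a)), one can also confirm numerically that the two proposed basis vectors lie in the nullspace:

import numpy as np

P = np.array([[4, 2, 6], [2, 1, 3], [6, 3, 9]]) / 14.0
for v in ([3.0, 0.0, -2.0], [-1.0, 2.0, 0.0]):
    print(np.allclose(P @ np.array(v), 0))               # True for both: each vector is in N(P)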
(c) What are all the eigenvectors of P and their corresponding eigenvalues? (You can use the geometry
of projections, not a messy calculation.) The diagonal entries of P add up to _______.
Solution: The diagonal entries of P add up to 1 = the sum of the eigenvalues.
Since P is a projection, its only possible eigenvalues are λ = 0 (with multiplicity equal to the dimension of the nullspace, here 2) and λ = 1 (with multiplicity equal to the dimension of the column space, here 1). So, a complete list of eigenvectors and eigenvalues is:
λ = 0 with multiplicity 2. The eigenvectors for λ = 0 are precisely the (non-zero) vectors in the nullspace. That is, all linear combinations of [3 0 -2]^T and [-1 2 0]^T.
λ = 1 with multiplicity 1. The eigenvectors for λ = 1 are precisely the (non-zero) vectors in the column space. That is, all multiples of [2 1 3]^T.
RUBRIC: Points for the sum of eigenvalues, points for a full list (with multiplicities) of eigenvalues, and points for a complete description of all eigenvectors. In light of the emphasized "all", you'd lose 1 point if you gave two eigenvectors for λ = 0 and didn't say that all (at least non-zero) linear combinations were also eigenvectors for λ = 0.
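A hedged numerical sketch of the geometric argument above: numpy's eigendecomposition of P should return the eigenvalues {0, 0, 1} (up to rounding), and the trace should equal their sum.

import numpy as np

P = np.array([[4, 2, 6], [2, 1, 3], [6, 3, 9]]) / 14.0
vals, vecs = np.linalg.eig(P)
print(np.round(vals, 6))                                 # approximately 0, 0 and 1 (in some order)
print(np.trace(P))                                       # approximately 1 = the sum of the eigenvalues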

2. (a) Let S be the standard basis of R^2, and B be the basis {(1,4), (2,9)}. Let T: R^2 → R^2 be the linear map defined by T(x, y) = (2x - y, x - 2y) for all (x, y) ∈ R^2.
(i) Find the coordinates of each of the standard basis vectors in basis B.
(ii) Find the matrix [T]_B for T relative to basis B.
Solution: (i) e1 = (1, 0) = c1(1, 4) + c2(2, 9) = (c1 + 2c2, 4c1 + 9c2),
thus c1 + 2c2 = 1 and 4c1 + 9c2 = 0
⇒ c1 = 9 and c2 = -4, so that [e1]_B = [9 -4]^T.
Similarly, e2 = (0, 1) = c1(1, 4) + c2(2, 9) = (c1 + 2c2, 4c1 + 9c2),
thus c1 + 2c2 = 0 and 4c1 + 9c2 = 1
⇒ c1 = -2 and c2 = 1, so that [e2]_B = [-2 1]^T.
(ii) Mat(T; B) = C^{-1} A C, where A = [T]_SS is the matrix of T in the standard basis and C = [I]_BS has the vectors of B as its columns. That is,

    [T]_B = [I]_SB [T]_SS [I]_BS

          = [  9 -2 ] [ 2 -1 ] [ 1 2 ]
            [ -4  1 ] [ 1 -2 ] [ 4 9 ]

          = [ -4 -13 ]
            [  1   4 ]
(b) True or False (Explain your reasoning):


(i) If {u1, u2} is linearly independent and {v1, v2} is linearly independent, then {u1, u2, v1, v2} is linearly independent.
(ii) If {u1, u2} is a spanning set of V and {v1, v2} is another spanning set of V, then {u1, u2, v1, v2} is also a spanning set of V.
Solution: (i) False! For a counterexample, let u1 = (1, 0), u2 = (0, 1), v1 = (2, 0) and v2 = (0, 2). Then v1 = 2u1, so {u1, u2, v1, v2} is linearly dependent.
(ii) True! Since {u1, u2} is a spanning set of V, any vector v ∈ V can be written as a linear combination of u1 and u2. Thus any vector v ∈ V can also be written as a linear combination of u1, u2, v1 and v2 (for instance, the coefficients of v1 and v2 can even be 0). Here v1 and v2 can be any vectors; they do not need to form a spanning set themselves.
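A small sketch of the counterexample in (b)(i): stacking u1, u2, v1, v2 as the columns of a matrix and asking numpy for its rank shows that the four vectors span only a 2-dimensional space, so they cannot be linearly independent.

import numpy as np

M = np.column_stack([(1, 0), (0, 1), (2, 0), (0, 2)])    # columns u1, u2, v1, v2
print(np.linalg.matrix_rank(M))                          # 2, not 4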
3. (a) Assume that V = U ⊕ W for two subspaces U and W of V. Let {u1, . . . , um} be a basis for U and let {w1, . . . , wn} be a basis for W. Prove that {u1, . . . , um, w1, . . . , wn} is a basis for V. (Hint: what do you know about dim(U ⊕ W)?)
Solution: We know that dim V = dim(U ⊕ W) = dim U + dim W = m + n. Thus, to show that {u1, . . . , um, w1, . . . , wn} is a basis for V, it suffices to show that these vectors span V (since there are m + n = dim V of them). Let v ∈ V and write v = u + w for u ∈ U and w ∈ W, which we can do since V = U + W. Now we can write u = c1 u1 + · · · + cm um and w = d1 w1 + · · · + dn wn for scalars ci, dj ∈ F, since {u1, . . . , um} is a basis for U and {w1, . . . , wn} is a basis for W. Finally,
v = u + w = c1 u1 + · · · + cm um + d1 w1 + · · · + dn wn ∈ span(u1, . . . , um, w1, . . . , wn),
which shows that these m + n vectors span V.
(b) Let T : R^n → R^k be a real matrix (not necessarily square). If the nullspace of T is {0}, show that the matrix T^T T is invertible and positive definite.
Solution. First we show that T^T T is positive definite. For any vector x ∈ R^n,
⟨x, T^T T x⟩ = ⟨T x, T x⟩ = ||T x||^2 ≥ 0.
This also shows that ⟨x, T^T T x⟩ = 0 only when T x = 0. Since the nullspace of T is {0}, this occurs only when x = 0, so T^T T is positive definite.
The above computation contains the proof that the nullspace of the square matrix T^T T is {0}; thus it is invertible.
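A numerical sketch of 3(b), using an arbitrary example matrix chosen here for illustration (it is not from the exam): a 3x2 matrix T with full column rank has nullspace {0}, and T^T T then has strictly positive eigenvalues, so it is positive definite and invertible.

import numpy as np

T = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])       # rank 2, so N(T) = {0}
G = T.T @ T                                              # the square matrix T^T T
print(np.linalg.eigvalsh(G))                             # all eigenvalues > 0 (here 1 and 3)
print(np.linalg.inv(G))                                  # the inverse exists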
OR,
(a) Let P3(F) be the vector space of polynomials with coefficients in F and degree ≤ 3, and let T : P3(F) → P3(F) be defined as T(f(x)) = f(x + 1) for all f(x) ∈ P3(F).
For example: T(x^2 + x) = (x + 1)^2 + (x + 1).
(i) T is a linear map.
Solution. T(c f(x)) = c f(x + 1) = c T(f(x)) for all c ∈ F and f(x) ∈ P3(F), and
T(f(x) + g(x)) = f(x + 1) + g(x + 1) = T(f(x)) + T(g(x)) for all f, g ∈ P3(F).
⇒ T is linear.
(ii) Compute Mat(T; B), i.e. [T]_BB, where B is the basis {1, x, x^2, x^3} of P3(F).
Solution. Mat(T; B) = [ [T(1)]_B  [T(x)]_B  [T(x^2)]_B  [T(x^3)]_B ], where

    T(1) = 1                                  = [1 0 0 0]^T in basis B,
    T(x) = x + 1                              = [1 1 0 0]^T in basis B,
    T(x^2) = (x + 1)^2 = x^2 + 2x + 1         = [1 2 1 0]^T in basis B,
    T(x^3) = (x + 1)^3 = x^3 + 3x^2 + 3x + 1  = [1 3 3 1]^T in basis B.

Therefore,

    Mat(T; B) = [ 1 1 1 1 ]
                [ 0 1 2 3 ]
                [ 0 0 1 3 ]
                [ 0 0 0 1 ]
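A sketch check of Mat(T; B): representing each polynomial by its coefficient vector in the basis {1, x, x^2, x^3} (lowest degree first), the k-th column is just the coefficient vector of (x + 1)^k.

import numpy as np

cols = []
for k in range(4):
    c = np.polynomial.polynomial.polypow([1.0, 1.0], k)  # coefficients of (x + 1)^k
    cols.append(np.pad(c, (0, 4 - len(c))))               # pad with zeros up to degree 3
print(np.column_stack(cols))                               # matches Mat(T; B) above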

(b) Prove the statement: A set X ⊆ R^n is convex iff it contains any convex combination of vectors x1, . . . , xm ∈ X.
Solution: Refer to the prelims note.

You might also like