
Lecture # 12 - Derivatives of Functions of Two or More Variables (cont.)
Some Definitions: Matrices of Derivatives

• Jacobian matrix

— Associated with a system of equations

— Suppose we have a system of 2 equations in 2 exogenous variables:

$$y_1 = f^1(x_1, x_2)$$
$$y_2 = f^2(x_1, x_2)$$

∗ Each equation has two first-order partial derivatives, so there are $2 \times 2 = 4$ first-order partial derivatives in total

— Jacobian matrix: the $2 \times 2$ array of first-order partial derivatives, ordered as follows:

$$
J = \begin{bmatrix}
\frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} \\[1ex]
\frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2}
\end{bmatrix}
$$

— Jacobian determinant: the determinant of the Jacobian matrix

Example 1 Suppose $y_1 = x_1 x_2$ and $y_2 = x_1 + x_2$. Then the Jacobian matrix is

$$
J = \begin{bmatrix}
x_2 & x_1 \\
1 & 1
\end{bmatrix}
$$

and the Jacobian determinant is $|J| = x_2 - x_1$

— Caveat: Mathematicians (and economists) use 'the Jacobian' to refer to both the matrix and the determinant

— Generalization to a system of $n$ equations in $n$ exogenous variables:

$$y_1 = f^1(x_1, x_2, \ldots, x_n)$$
$$y_2 = f^2(x_1, x_2, \ldots, x_n)$$
$$\vdots$$
$$y_n = f^n(x_1, x_2, \ldots, x_n)$$

Then, the Jacobian matrix is:


$$
J = \begin{bmatrix}
\frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} & \cdots & \frac{\partial y_1}{\partial x_n} \\[1ex]
\frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2} & \cdots & \frac{\partial y_2}{\partial x_n} \\[1ex]
\vdots & \vdots & \ddots & \vdots \\[1ex]
\frac{\partial y_n}{\partial x_1} & \frac{\partial y_n}{\partial x_2} & \cdots & \frac{\partial y_n}{\partial x_n}
\end{bmatrix}
$$

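As an illustration of the definition above, the Jacobian of Example 1 can be verified symbolically. The following is a minimal sketch, assuming Python with the sympy library is available; it is a check added to these notes, not part of the original lecture.

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')

    # System from Example 1: y1 = x1*x2, y2 = x1 + x2
    F = sp.Matrix([x1 * x2, x1 + x2])

    J = F.jacobian([x1, x2])   # Matrix([[x2, x1], [1, 1]])
    detJ = J.det()             # x2 - x1, the Jacobian determinant |J|

    print(J)
    print(detJ)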
• Hessian matrix:

— Associated with a single equation

— Suppose $y = f(x_1, x_2)$

∗ There are 2 first-order partial derivatives: $\frac{\partial y}{\partial x_1}, \frac{\partial y}{\partial x_2}$

∗ There are $2 \times 2 = 4$ second-order partial derivatives: $\frac{\partial^2 y}{\partial x_1^2}, \frac{\partial^2 y}{\partial x_1 \partial x_2}, \frac{\partial^2 y}{\partial x_2 \partial x_1}, \frac{\partial^2 y}{\partial x_2^2}$

— Hessian matrix: the $2 \times 2$ array of second-order partial derivatives, ordered as follows:


 2 2

∂ y ∂ y
 ∂x21 ∂x2 ∂x1 
H [f (x1 , x2 )] = 



∂y2 ∂2y
∂x1 ∂x2 ∂x22

Example 2 Suppose $y = x_1^4 + x_2^2 x_1^2 + x_2^3$. Then the Hessian matrix is

$$
H[f(x_1, x_2)] = \begin{bmatrix}
12x_1^2 + 2x_2^2 & 4x_1 x_2 \\
4x_1 x_2 & 2x_1^2 + 6x_2
\end{bmatrix}
$$

— Young's Theorem: The order of differentiation does not matter, so that if $z = h(x, y)$:

$$
\frac{\partial}{\partial x}\left(\frac{\partial z}{\partial y}\right) = \frac{\partial}{\partial y}\left(\frac{\partial z}{\partial x}\right) = \frac{\partial^2 z}{\partial y \partial x} = \frac{\partial^2 z}{\partial x \partial y}
$$

— Generalization: Suppose $y = f(x_1, x_2, x_3, \ldots, x_n)$

∗ There are $n$ first-order partial derivatives

∗ There are $n \times n$ second-order partial derivatives

— Hessian matrix: the $n \times n$ matrix of second-order partial derivatives, ordered as follows:


$$
H[f(x_1, x_2, \ldots, x_n)] = \begin{bmatrix}
\frac{\partial^2 y}{\partial x_1^2} & \frac{\partial^2 y}{\partial x_2 \partial x_1} & \cdots & \frac{\partial^2 y}{\partial x_n \partial x_1} \\[1ex]
\frac{\partial^2 y}{\partial x_1 \partial x_2} & \frac{\partial^2 y}{\partial x_2^2} & \cdots & \frac{\partial^2 y}{\partial x_n \partial x_2} \\[1ex]
\vdots & \vdots & \ddots & \vdots \\[1ex]
\frac{\partial^2 y}{\partial x_1 \partial x_n} & \frac{\partial^2 y}{\partial x_2 \partial x_n} & \cdots & \frac{\partial^2 y}{\partial x_n^2}
\end{bmatrix}
$$

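The Hessian of Example 2, and the symmetry promised by Young's Theorem, can be checked in the same way. This is a minimal sketch assuming Python with sympy; it is an addition to these notes, not part of the original lecture.

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')

    # Function from Example 2: y = x1^4 + x2^2*x1^2 + x2^3
    y = x1**4 + x2**2 * x1**2 + x2**3

    H = sp.hessian(y, (x1, x2))
    print(H)   # entries: 12*x1**2 + 2*x2**2, 4*x1*x2, 4*x1*x2, 2*x1**2 + 6*x2

    # Young's Theorem: the two cross-partials coincide
    assert sp.diff(y, x1, x2) == sp.diff(y, x2, x1)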
Chain Rules for Many Variables

• Suppose $y = f(x, w)$, while in turn $x = g(t)$ and $w = h(t)$. How does y change when t changes?

$$
\frac{dy}{dt} = \frac{\partial y}{\partial x}\frac{dx}{dt} + \frac{\partial y}{\partial w}\frac{dw}{dt}
$$

• Suppose $y = f(x, w)$, while in turn $x = g(t, s)$ and $w = h(t, s)$. How does y change when t changes? When s changes?

$$
\frac{\partial y}{\partial t} = \frac{\partial y}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial y}{\partial w}\frac{\partial w}{\partial t}
$$
$$
\frac{\partial y}{\partial s} = \frac{\partial y}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial y}{\partial w}\frac{\partial w}{\partial s}
$$

• Notice that the derivative in the first point is called the total derivative, while those in the second point are called 'partial total' derivatives

Example 3 Suppose $y = 4x - 3w$, where $x = 2t$ and $w = t^2$

$\Longrightarrow$ the total derivative $\frac{dy}{dt}$ is $\frac{dy}{dt} = (4)(2) + (-3)(2t) = 8 - 6t$

Example 4 Suppose $z = 4x^2 y$, where $y = e^x$

$\Longrightarrow$ the total derivative $\frac{dz}{dx}$ is $\frac{dz}{dx} = \frac{\partial z}{\partial x}\frac{dx}{dx} + \frac{\partial z}{\partial y}\frac{dy}{dx} = (8xy) + \left(4x^2\right)(e^x) = 8xy + 4x^2 y = 4xy(2 + x)$

Example 5 Suppose $z = x^2 + \frac{1}{2}y^2$ where $x = st$ and $y = t - s^2$

$\Longrightarrow \frac{\partial z}{\partial t} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial t} = (2x)(s) + \frac{1}{2}(2)(y)(1) = 2xs + y = 2s^2 t + t - s^2$

$\Longrightarrow \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial s} = (2x)(t) + \frac{1}{2}(2)(y)(-2s) = 2xt - 2sy = 2st^2 - 2st + 2s^3$

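Examples 3 and 5 can be double-checked by substituting the inner functions first and then differentiating, which should reproduce the total and 'partial total' derivatives above. A minimal sketch, assuming Python with sympy (an addition to these notes):

    import sympy as sp

    t, s = sp.symbols('t s')

    # Example 3: y = 4x - 3w with x = 2t, w = t^2
    x, w = 2 * t, t**2
    y = 4 * x - 3 * w
    print(sp.diff(y, t))                # 8 - 6*t

    # Example 5: z = x^2 + (1/2)*y^2 with x = s*t, y = t - s^2
    x, y = s * t, t - s**2
    z = x**2 + sp.Rational(1, 2) * y**2
    print(sp.expand(sp.diff(z, t)))     # equals 2*s**2*t + t - s**2
    print(sp.expand(sp.diff(z, s)))     # equals 2*s*t**2 - 2*s*t + 2*s**3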
Derivatives of Implicit Functions

• So far, we have had functions like $y = f(x)$ or $z = g(x, w)$, where an (endogenous) variable is expressed as a function of other (exogenous) variables $\Longrightarrow$ explicit functions. Examples: $y = 4x^2$, or $z = 3xw + \ln w$

• Suppose we instead have an equation $y^2 - 2xy - x^2 = 0$. We can write $F(y, x) = 0$, but we cannot express y explicitly as a function of x. However, it is possible to define a set of conditions so that an implicit function $y = f(x)$ exists:

1. The function $F(y, x)$ has continuous partial derivatives $F_y$, $F_x$

2. $F_y \neq 0$

• Derivative of an implicit function. Suppose we have a function $F(y, x) = 0$, and we know an implicit function $y = f(x)$ exists. How do we find how much y changes when x changes? (i.e., we want $f_x = \frac{dy}{dx}$)

— Find the total differential for $F(y, x) = 0 \Longrightarrow F_y \cdot dy + F_x \cdot dx = d0 = 0$

— Find the total differential for $y = f(x) \Longrightarrow dy = f_x \cdot dx$

— Substitute $dy = f_x \cdot dx$ into $F_y \cdot dy + F_x \cdot dx = 0$:

$$F_y \cdot dy + F_x \cdot dx = 0$$
$$F_y \cdot (f_x \cdot dx) + F_x \cdot dx = 0$$
$$[F_y \cdot f_x + F_x]\, dx = 0$$

— Since $dx \neq 0$, the term in brackets has to be zero:

$$F_y \cdot f_x + F_x = 0 \Longrightarrow f_x = -\frac{F_x}{F_y}$$

— Alternative notation:

$$\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y}$$

Example 6 $F(y, x) = y^2 - 2xy - x^2 = 0$. Then $\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y} = -\frac{-2y - 2x}{2y - 2x} = \frac{y + x}{y - x}$

Example 7 $F(y, x) = y^x + 1 = 0$. Then $\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y} = -\frac{y^x \ln y}{x y^{x-1}} = -\frac{y \ln y}{x}$

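The implicit-function rule $f_x = -F_x / F_y$ from Example 6 can also be reproduced symbolically. A minimal sketch, assuming Python with sympy (added here for verification, not part of the original notes):

    import sympy as sp

    x, y = sp.symbols('x y')

    # Example 6: F(y, x) = y^2 - 2*x*y - x^2 = 0
    F = y**2 - 2 * x * y - x**2

    # dy/dx = -Fx/Fy, valid wherever Fy != 0
    dydx = -sp.diff(F, x) / sp.diff(F, y)
    print(sp.simplify(dydx))   # simplifies to (y + x)/(y - x), matching Example 6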
• Generalization: One Implicit Equation

— Suppose $F(y, x_1, x_2) = 0$. Then

$$\frac{dy}{dx_1} = -\frac{\partial F / \partial x_1}{\partial F / \partial y}$$
$$\frac{dy}{dx_2} = -\frac{\partial F / \partial x_2}{\partial F / \partial y}$$

Example 8 Suppose $F(y, x, w) = y^3 x + 2yw + xw^2 = 0$. Then

$$\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y} = -\frac{y^3 + w^2}{3y^2 x + 2w}$$
$$\frac{dy}{dw} = -\frac{\partial F / \partial w}{\partial F / \partial y} = -\frac{2y + 2xw}{3y^2 x + 2w}$$

— Suppose $F(y, x_1, x_2, x_3, \ldots, x_n) = 0$. Then

$$\frac{dy}{dx_i} = -\frac{\partial F / \partial x_i}{\partial F / \partial y}, \quad \text{for any } i = 1, 2, 3, \ldots, n$$
