Lecture 18

The lecture discusses optimization problems with equality constraints, introducing the Lagrange multiplier method to maximize an objective function subject to a constraint. It provides examples demonstrating the formulation of the Lagrangian function and the derivation of first-order conditions to find stationary points. Additionally, it covers second-order conditions and the bordered Hessian for determining whether these points are maxima or minima.

Lecture # 18 - Optimization with Equality Constraints

• So far, in all the (economic) optimization problems we have seen, we have assumed that the
variables to be chosen do not face any restrictions.

• However, on other occasions such variables are required to satisfy certain constraints. Examples:

— A consumer chooses how much to buy of each product, so that his budget constraint
is satisfied.

— A firm would look to minimize its cost of production, subject to a given output level.

• What do we do? Use the Lagrange multiplier method

— Suppose we want to maximize the function f (x, y) where x and y are restricted to
satisfy the equality constraint g (x, y) = c

max f (x, y) subject to g (x, y) = c

∗ The function f (x, y) is called the objective function

— Then, we define the Lagrangian function, a modified version of the objective function
that incorporates the constraint:

Z (x, y, λ) = f (x, y) + λ [c − g (x, y)]

where the term λ is an (unknown) constant called a Lagrange multiplier, associated
with the constraint

∗ Notice that Z (x, y, λ) = f (x, y) when the constraint holds, i.e., when g (x, y) = c,
regardless of the value of λ

— So Z (x, y, λ) is an unconstrained function (in three variables), and we can find its
stationary points from the first-order conditions:
∂Z/∂λ = c − g (x, y) = 0
∂Z/∂x = fx − λgx = 0
∂Z/∂y = fy − λgy = 0
The first equation automatically ensures that the constraint is satisfied

— So we have a system of 3 equations in 3 unknowns → solve it to find the stationary points

∗ So we obtain the stationary points of the constrained function f (·) (with two choice
variables) by looking at the stationary points of the unconstrained function Z (·)
(three choice variables, one of which is associated with the constraint).

Example 1 Suppose we want to find the extrema of f (x, y) = xy subject to the constraint
x + y = 6.
The Lagrangian is: Z (x, y, λ) = xy + λ [6 − x − y], so the first-order conditions are:

x + y = 6
∂Z/∂x = y − λ = 0
∂Z/∂y = x − λ = 0
There is then a stationary point at x∗ = y ∗ = λ∗ = 3
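
As a quick check of the algebra, the first-order-condition system can be set up and solved symbolically. Below is a minimal sketch in Python using sympy (my own addition, not part of the lecture notes):

```python
# Sketch: build the Lagrangian of Example 1 and solve its first-order conditions.
import sympy as sp

x, y, lam = sp.symbols('x y lam')

f = x * y                      # objective function
g = x + y                      # constraint function, with g(x, y) = 6
Z = f + lam * (6 - g)          # Lagrangian Z(x, y, lam)

# First-order conditions: all partial derivatives of Z equal to zero.
foc = [sp.diff(Z, v) for v in (x, y, lam)]
print(sp.solve(foc, (x, y, lam), dict=True))   # x = y = lam = 3
```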

Example 2 Suppose a consumer has utility function U (x, y) = A x^α y^(1−α) and faces the budget
constraint px · x + py · y = m
The Lagrangian is: Z (x, y, λ) = A x^α y^(1−α) + λ [m − px · x − py · y], so the first-order conditions
are:

px · x + py · y = m
∂Z/∂x = αA x^(α−1) y^(1−α) − λpx = 0
∂Z/∂y = (1 − α) A x^α y^(−α) − λpy = 0
We can express the last two equations as follows:

λ = αA x^(α−1) y^(1−α) / px = (1 − α) A x^α y^(−α) / py

Simplifying:

αpy y = (1 − α) px x

Substituting this into the budget constraint, we obtain the demand functions:


x (px , py , m) = α m / px
y (px , py , m) = (1 − α) m / py

General case:

• So far, I have introduced examples where the problem has:

— Two choice variables: f (x, y)

— One constraint: g (x, y) = c

• Suppose we have:

— Four choice variables: f (x1 , x2 , x3 , x4 )

— Two constraints: g1 (x1 , x2 , x3 , x4 ) = c1 and g2 (x1 , x2 , x3 , x4 ) = c2

• Then the Lagrangian function is:

Z = f (x1 , x2 , x3 , x4 ) + λ1 [c1 − g1 (x1 , x2 , x3 , x4 )] + λ2 [c2 − g2 (x1 , x2 , x3 , x4 )]

• By taking the first-order conditions of Z, we obtain the stationary points of f (·) that satisfy
the constraints (see the sketch below).
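
A sketch of this general setup in sympy (my own illustration; f, g1 and g2 are left as abstract placeholder functions):

```python
# Sketch: the Lagrangian and its six first-order conditions for the
# four-variable, two-constraint case.
import sympy as sp

x1, x2, x3, x4, lam1, lam2, c1, c2 = sp.symbols('x1 x2 x3 x4 lambda1 lambda2 c1 c2')
f = sp.Function('f')(x1, x2, x3, x4)
g1 = sp.Function('g1')(x1, x2, x3, x4)
g2 = sp.Function('g2')(x1, x2, x3, x4)

Z = f + lam1 * (c1 - g1) + lam2 * (c2 - g2)

# One condition per choice variable plus one per multiplier; the multiplier
# conditions reproduce the two constraints.
for v in (x1, x2, x3, x4, lam1, lam2):
    print(sp.Eq(sp.diff(Z, v), 0))
```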

Second Order Conditions

• The second-order conditions for a constrained optimization problem are slightly more complicated
than for an unconstrained one. As such, we will only look at the case of two choice variables
and one constraint.

• Suppose f (x, y) and g (x, y) are both twice differentiable in an interval I, and suppose
(x∗ , y ∗ ) is an interior stationary point of I that satisfies the first-order conditions of
Z (x, y, λ) = f (x, y) + λ [c − g (x, y)] .

• In this case, the Hessian for the choice variables is:


 
H [Z] = [ Zxx  Zxy ]
        [ Zxy  Zyy ]

where:

— Zxx = fxx − λgxx

— Zyy = fyy − λgyy

— Zxy = fxy − λgxy

• Define the bordered Hessian as follows:


 
H̄ = [ 0   gx   gy  ]
     [ gx  Zxx  Zxy ]
     [ gy  Zxy  Zyy ]

• Then:
— (x∗ , y ∗ ) is a maximum point if |H̄| > 0 when evaluated at x = x∗ , y = y ∗ , λ = λ∗ .

— (x∗ , y ∗ ) is a minimum point if |H̄| < 0 when evaluated at x = x∗ , y = y ∗ , λ = λ∗ .
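
A minimal computational sketch of this test (my own helper, written with sympy; the lecture itself does not use any software):

```python
# Sketch: build the bordered Hessian of Z = f + lam*(c - g) for two choice
# variables and classify a stationary point by the sign of its determinant.
import sympy as sp

x, y, lam, c = sp.symbols('x y lam c')

def bordered_hessian(f, g):
    """Hessian of Z bordered by the gradient of the constraint g."""
    Z = f + lam * (c - g)              # c drops out of all second derivatives
    gx, gy = sp.diff(g, x), sp.diff(g, y)
    Zxx, Zxy, Zyy = sp.diff(Z, x, 2), sp.diff(Z, x, y), sp.diff(Z, y, 2)
    return sp.Matrix([[0,  gx,  gy],
                      [gx, Zxx, Zxy],
                      [gy, Zxy, Zyy]])

def classify(f, g, point):
    """point must map x, y, lam (and any parameters) to numbers."""
    d = bordered_hessian(f, g).det().subs(point)
    return 'maximum' if d > 0 else ('minimum' if d < 0 else 'inconclusive')
```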

Example 3 Suppose we want to find the extrema of f (x, y) = xy subject to the constraint
x + y = 6. We found there is a stationary point at x∗ = y ∗ = λ∗ = 3.

For the bordered Hessian we need five derivatives:

— Zxx = fxx − λgxx = 0

— Zyy = fyy − λgyy = 0

— Zxy = fxy − λgxy = 1


— gx = 1

— gy = 1

As a result, the bordered Hessian is:


 
H̄ = [ 0  1  1 ]
     [ 1  0  1 ]
     [ 1  1  0 ]
and its determinant is |H̄| = 2 > 0, so the stationary point is a maximum.
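
This determinant can be spot-checked directly (a quick sympy check of the arithmetic, not part of the original notes):

```python
import sympy as sp

Hbar = sp.Matrix([[0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]])
print(Hbar.det())   # 2, so |H_bar| > 0 and the stationary point is a maximum
```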

Example 4 Suppose a consumer has utility function U (x, y) = A x^α y^(1−α) and faces the
budget constraint px · x + py · y = m. We found that there is a stationary point that satisfies
the constraint at:

x (px , py , m) = α m / px
y (px , py , m) = (1 − α) m / py

For the bordered Hessian we need five derivatives:

— Zxx = fxx − λgxx = −α (1 − α) A x^(α−2) y^(1−α) < 0

— Zyy = fyy − λgyy = −α (1 − α) A x^α y^(−α−1) < 0

— Zxy = fxy − λgxy = α (1 − α) A x^(α−1) y^(−α) > 0


— gx = px

— gy = py

As a result, the bordered Hessian is:


 
H̄ = [ 0    px                            py                           ]
     [ px   −α (1 − α) A x^(α−2) y^(1−α)  α (1 − α) A x^(α−1) y^(−α)   ]
     [ py   α (1 − α) A x^(α−1) y^(−α)    −α (1 − α) A x^α y^(−α−1)    ]
and its determinant is

|H̄| = 2α (1 − α) A x^(α−1) y^(−α) px py + α (1 − α) A x^α y^(−α−1) (px)^2 + α (1 − α) A x^(α−2) y^(1−α) (py)^2 > 0

for any A > 0 and α ∈ (0, 1), so the stationary point is a maximum.
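
Because the sign argument involves several terms, a numeric spot check may be reassuring; the parameter values below (A = 1, α = 0.3, px = 2, py = 5, m = 100) are arbitrary choices of mine, not from the lecture:

```python
# Sketch: evaluate the Example 4 bordered-Hessian determinant at the demand
# bundle for arbitrary admissible parameter values.
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
A, alpha, px, py, m = sp.symbols('A alpha p_x p_y m', positive=True)

U = A * x**alpha * y**(1 - alpha)
Z = U + lam * (m - px*x - py*y)

Hbar = sp.Matrix([[0,  px,                py],
                  [px, sp.diff(Z, x, 2),  sp.diff(Z, x, y)],
                  [py, sp.diff(Z, x, y),  sp.diff(Z, y, 2)]])

vals = {A: 1, alpha: 0.3, px: 2, py: 5, m: 100}
point = {x: (alpha * m / px).subs(vals), y: ((1 - alpha) * m / py).subs(vals)}
print(Hbar.det().subs(vals).subs(point))   # a positive number -> maximum
```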
