Econ 605 - Static Optimization

This document discusses static optimization techniques used in economics. It covers: 1) The components of a static optimization problem, including an objective function and a feasible set; the goal is to maximize or minimize the objective function subject to constraints. 2) Necessary first-order conditions for an extreme point (maximum or minimum), which require calculating partial derivatives and finding where they equal zero. 3) Sufficient conditions for concave/convex functions, where stationary points are guaranteed to be extreme points; local extreme points are discussed when functions are not concave/convex. 4) Second-order conditions involving the Hessian matrix that provide further information about maxima, minima, or saddle points. Examples are provided to demonstrate these techniques.


Addis Ababa University
College of Business and Economics
Econ 605: Mathematics for Economists
2020/21 AY
Tewodros Negash Kahsay (PhD)

Tewodros Negash (PhD), Addis Ababa University, Department of Economics
3. Static Optimization
• Economic analysis heavily relies on static optimization
– Producers seek input combinations that maximize profits
or minimize costs
– Consumers seek commodity bundles that maximize utility
subject to a budget constraint.
• In a static optimization problem
– There is an objective function 𝑓(𝑥1 , 𝑥2 , ⋯ , 𝑥𝑛 ) , a
function of 𝑛 variables whose value is to be optimized
(maximized or minimized).
– There is also an admissible (or feasible) set 𝑆 that
is a subset of ℝ𝑛 (the set of real 𝑛-vectors).
• Then the problem is to find maximum or minimum
points of 𝑓 in 𝑆:
  max (min) 𝑓(𝒙) subject to 𝒙 ∈ 𝑆
3.1 Extreme points
• Let 𝑓 be a function of n variables 𝑥1 , 𝑥2 , ⋯ , 𝑥𝑛 defined
on a set 𝑆 in ℝ𝑛 .
• Suppose that 𝒙∗ = (𝑥1∗ , 𝑥2∗ , … , 𝑥𝑛∗ ) belongs to 𝑆 and the
value of 𝑓at 𝒙∗ is greater than or equal to the values of
𝑓 at all other points 𝒙 = (𝑥1 , 𝑥2 , ⋯ , 𝑥𝑛 ). That is
𝑓(𝒙∗ ) ≥ 𝑓(𝒙) for all 𝒙 in 𝑆 (∗)
• Then 𝒙∗ is called a (global) maximum point for 𝑓 in 𝑆
and 𝑓(𝒙∗ ) is called the maximum value.
• If the inequality in (∗) is strict, then 𝒙∗ is a strict
maximum point for 𝑓 in 𝑆.
• We define (strict) minimum point and minimum value
by reversing the inequality sign in (∗)
• We use the terms extreme points and extreme values to
indicate both maxima and minima.
Necessary First – Order Conditions
• Let 𝑓 be defined on a set 𝑆 in ℝ𝑛 and let 𝒙∗ =
(𝑥1∗ , 𝑥2∗ , … , 𝑥𝑛∗ ) be an interior point in 𝑆 at which 𝑓 has
partial derivatives. A necessary condition for 𝒙∗ to be
an extreme point (maximum or minimum point) for 𝑓 is
that 𝒙∗ is a stationary point for 𝑓. That is, it satisfies the
equations
𝑓𝑖′ (𝒙∗ ) = 0 , 𝑖 = 1,2, ⋯ , 𝑛 (1)
• A stationary point of 𝑓 is a point where all the first –
order partial derivatives are 0
• Interior stationary points for concave or convex
functions are automatically extreme points
Sufficient Conditions with concavity/convexity
• Suppose that the function 𝑓(𝒙) is defined in a convex
set 𝑆 in ℝ𝑛 and let 𝒙∗ be an interior point of 𝑆. Assume
also that f is continuously differentiable around 𝒙∗ .
(a) If 𝑓 is concave in 𝑆, then 𝒙∗ is a (global) maximum
point for 𝑓 in 𝑆 iff 𝒙∗ is a stationary point for 𝑓
(b) If 𝑓 is convex in S, then 𝒙∗ is a (global) minimum
point for 𝑓 in 𝑆 iff 𝒙∗ is a stationary point for 𝑓.
Example
(1) Find all (global) extreme points of
    𝑓(𝑥, 𝑦, 𝑧) = 𝑥² + 2𝑦² + 3𝑧² + 2𝑥𝑦 + 2𝑥𝑧
(2) Find the extreme value(s) of
    𝑓(𝑥1, 𝑥2, 𝑥3) = 2𝑥1² + 𝑥1𝑥2 + 4𝑥2² + 𝑥1𝑥3 + 𝑥3² + 2
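Example (1) can be checked mechanically: solve the first-order conditions, then test the (constant) Hessian for definiteness. A sketch in SymPy (assumed available):

```python
import sympy as sp

# Example (1): f(x, y, z) = x^2 + 2y^2 + 3z^2 + 2xy + 2xz
x, y, z = sp.symbols('x y z')
f = x**2 + 2*y**2 + 3*z**2 + 2*x*y + 2*x*z

# First-order conditions: set all partial derivatives to zero
grad = [sp.diff(f, v) for v in (x, y, z)]
stationary = sp.solve(grad, (x, y, z), dict=True)
print(stationary)  # [{x: 0, y: 0, z: 0}]

# The Hessian is constant and positive definite, so f is convex and the
# stationary point (0, 0, 0) is a (global) minimum point
H = sp.hessian(f, (x, y, z))
print(H.is_positive_definite)  # True
```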
3.2 Local Extreme Points
• Suppose one is trying to find the maximum of a function
that is not concave, or a minimum of a function that is
not convex. Then the sufficient conditions with
concavity/convexity stated above cannot be used.
• Instead, one possible procedure is to identify the local
extreme points and then compare the values of the
function at the different local extreme points in the
hope of finding the global maximum (minimum)
• The point 𝒙∗ is a local maximum point of 𝑓 in 𝑆 if
𝑓(𝒙) ≤ 𝑓(𝒙∗ ) for all 𝒙 in S sufficiently close to 𝒙∗ (∗)
• If the inequality in (∗) is strict for 𝒙 ≠ 𝒙∗ , then 𝒙∗ is a
strict local maximum point for 𝑓 in 𝑆
• A (strict) local minimum point is defined by reversing
the inequality in (∗).
• Local extreme points – local maximum or minimum
point
• Local extreme values – local maximum or minimum
values.
• The necessary first-order conditions apply to local
extreme points as well.
A local extreme point in the interior of the domain of
a differentiable function must be a stationary point
• A stationary point 𝒙∗ of 𝑓 that is neither a maximum
point nor a minimum point is called a saddle point of 𝑓

[Figure: saddle point]
• Recall the second-order conditions for local extreme
points for functions of two variables
• If 𝑓(𝑥, 𝑦) is a twice continuously differentiable function
with (𝑥∗, 𝑦∗) as an interior stationary point, then:
• 𝑓11(𝑥∗, 𝑦∗) > 0 and 𝑓11(𝑥∗, 𝑦∗)𝑓22(𝑥∗, 𝑦∗) − 𝑓12(𝑥∗, 𝑦∗)𝑓21(𝑥∗, 𝑦∗) > 0
  ⇒ local min at (𝑥∗, 𝑦∗)
• 𝑓11(𝑥∗, 𝑦∗) < 0 and 𝑓11(𝑥∗, 𝑦∗)𝑓22(𝑥∗, 𝑦∗) − 𝑓12(𝑥∗, 𝑦∗)𝑓21(𝑥∗, 𝑦∗) > 0
  ⇒ local max at (𝑥∗, 𝑦∗)
• 𝑓11(𝑥∗, 𝑦∗)𝑓22(𝑥∗, 𝑦∗) − 𝑓12(𝑥∗, 𝑦∗)𝑓21(𝑥∗, 𝑦∗) < 0
  ⇒ (𝑥∗, 𝑦∗) is a saddle point
(Here 𝑓11𝑓22 − 𝑓12𝑓21 is the determinant of the Hessian of 𝑓 at (𝑥∗, 𝑦∗).)
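A small helper applies this two-variable test directly; this is a sketch in SymPy (assumed available), and `classify_2d` is an illustrative name of my own, not from the notes:

```python
import sympy as sp

# Two-variable second-order test at an interior stationary point `point`
def classify_2d(f, x, y, point):
    f11 = sp.diff(f, x, 2).subs(point)
    # Hessian determinant f11*f22 - (f12)^2 at the point
    det = (sp.diff(f, x, 2) * sp.diff(f, y, 2) - sp.diff(f, x, y)**2).subs(point)
    if det > 0:
        return 'local min' if f11 > 0 else 'local max'
    if det < 0:
        return 'saddle point'
    return 'inconclusive'  # det = 0: the test gives no information

x, y = sp.symbols('x y')
print(classify_2d(x**2 + y**2, x, y, {x: 0, y: 0}))  # local min
print(classify_2d(x**2 - y**2, x, y, {x: 0, y: 0}))  # saddle point
```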
General case: Second-order sufficient conditions for local
extreme points
• Suppose that 𝑓(𝒙) = 𝑓(𝑥1, 𝑥2, ⋯, 𝑥𝑛) is defined on a set 𝑆
and that 𝒙∗ is an interior stationary point. Assume that 𝑓 is
twice continuously differentiable around 𝒙∗. For 𝑘 = 1, …, 𝑛,
let 𝐷𝑘(𝒙) be the leading principal minor of order 𝑘 of the
Hessian matrix, i.e. the determinant

             | 𝑓11(𝒙)  𝑓12(𝒙)  ⋯  𝑓1𝑘(𝒙) |
  𝐷𝑘(𝒙) =    | 𝑓21(𝒙)  𝑓22(𝒙)  ⋯  𝑓2𝑘(𝒙) | ,   𝑘 = 1, …, 𝑛
             |   ⋮        ⋮     ⋱    ⋮    |
             | 𝑓𝑘1(𝒙)  𝑓𝑘2(𝒙)  ⋯  𝑓𝑘𝑘(𝒙) |
Then we consider the 𝑛 leading principal minors evaluated at 𝒙∗:
(𝑎) 𝐷𝑘(𝒙∗) > 0 for 𝑘 = 1, ⋯, 𝑛 ⇒ 𝒙∗ is a local minimum point
(𝑏) (−1)ᵏ𝐷𝑘(𝒙∗) > 0 for 𝑘 = 1, ⋯, 𝑛 ⇒ 𝒙∗ is a local maximum point
(𝑐) 𝐷𝑛(𝒙∗) ≠ 0 and neither (𝑎) nor (𝑏) is satisfied ⇒ 𝒙∗ is a saddle point
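A numerical sketch of this minor test in NumPy (assumed available); `classify_by_minors` is an illustrative name, its argument is the Hessian of 𝑓 evaluated at an interior stationary point, and the small tolerance standing in for "𝐷𝑛 ≠ 0" is an assumption of the sketch:

```python
import numpy as np

def classify_by_minors(hessian):
    H = np.asarray(hessian, dtype=float)
    n = H.shape[0]
    # Leading principal minors D_1, ..., D_n
    D = [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]
    if all(d > 0 for d in D):
        return 'local minimum'
    if all((-1) ** k * d > 0 for k, d in enumerate(D, start=1)):
        return 'local maximum'
    if abs(D[-1]) > 1e-12:   # D_n != 0 but neither sign pattern holds
        return 'saddle point'
    return 'inconclusive'

print(classify_by_minors([[2, 0], [0, 3]]))   # local minimum
print(classify_by_minors([[2, 0], [0, -3]]))  # saddle point
```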
• An alternative way of formulating (𝑎) and (𝑏) above is
– A sufficient condition for an interior stationary point
𝒙∗ of 𝑓(𝒙) to be a minimum point is that the Hessian
matrix 𝑓′′(𝒙∗ ) is positive definite at 𝒙∗
– A sufficient condition for an interior stationary point
𝒙∗ of 𝑓(𝒙) to be a maximum point is that the Hessian
matrix 𝑓′′(𝒙∗ ) is negative definite at 𝒙∗
Example
• Given the function 𝑓(𝑥, 𝑦, 𝑧) = 𝑥³ + 3𝑥𝑦 + 3𝑥𝑧 + 𝑦³ + 3𝑦𝑧 + 𝑧³
(𝑎) Find the stationary point(s) and
(𝑏) Determine whether the function is at a local max,
local min or a saddle point at the stationary point(s)
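A sketch of how (a) and (b) can be worked through in SymPy (assumed available); the real stationary points turn out to be (0, 0, 0) and (−2, −2, −2), classified via the leading principal minors of the Hessian:

```python
import sympy as sp

# f = x^3 + 3xy + 3xz + y^3 + 3yz + z^3
x, y, z = sp.symbols('x y z', real=True)
f = x**3 + 3*x*y + 3*x*z + y**3 + 3*y*z + z**3

# (a) stationary points: solve the first-order conditions, keep real roots
grad = [sp.diff(f, v) for v in (x, y, z)]
real_pts = [p for p in sp.solve(grad, (x, y, z), dict=True)
            if all(v.is_real for v in p.values())]

# (b) classify via the leading principal minors D_1, D_2, D_3 of the Hessian
H = sp.hessian(f, (x, y, z))
for p in real_pts:
    Hp = H.subs(p)
    print(p, [Hp[:k, :k].det() for k in (1, 2, 3)])
# At (0, 0, 0):    D = [0, -9, 54];       D_3 != 0, neither pattern -> saddle
# At (-2, -2, -2): D = [-12, 135, -1350]; (-1)^k D_k > 0 -> local maximum
```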
Necessary Second – Order Conditions for Local Extrema
• Suppose the function 𝑓(𝒙) = 𝑓(𝑥1 , 𝑥2 , ⋯ , 𝑥𝑛 ) is defined
on set S and assume that 𝒙∗ = (𝑥1∗ , 𝑥2∗ , … , 𝑥𝑛∗ ) is an
interior stationary point of 𝑓(𝒙)
• The condition that the Hessian matrix 𝑓′′(𝒙∗ ) is negative
definite [i.e. 𝑓′′(𝒙∗ ) < 0] is sufficient for 𝑓 to have a
local maximum at the stationary point 𝒙∗ . But the
condition is not necessary.
• For example, 𝑓(𝑥, 𝑦) = −𝑥⁴ − 𝑦⁴ has a global maximum
at (0, 0), and yet 𝑓11(0, 0) = 0, so that 𝑓′′(𝒙∗) is not
negative definite.
• However, we claim that 𝑓 ′′ 𝒙∗ has to be negative
semidefinite in order for 𝒙∗ to be a local maximum point.
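The −𝑥⁴ − 𝑦⁴ claim is easy to confirm symbolically; a sketch assuming SymPy is available:

```python
import sympy as sp

# f(x, y) = -x^4 - y^4 has a global maximum at (0, 0), but its Hessian there
# is the zero matrix: negative semidefinite, not negative definite
x, y = sp.symbols('x y')
H0 = sp.hessian(-x**4 - y**4, (x, y)).subs({x: 0, y: 0})
print(H0)                           # Matrix([[0, 0], [0, 0]])
print(H0.is_negative_definite)      # False
print(H0.is_negative_semidefinite)  # True
```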
Second Order Necessary condition for local extreme
points
• Suppose that 𝑓(𝒙) = 𝑓(𝑥1 , 𝑥2 , ⋯ , 𝑥𝑛 ) is defined on a
set 𝑆 and 𝒙∗ is an interior stationary point in 𝑆. Assume
that 𝑓 is twice continuously differentiable around 𝒙∗ . Let
∆𝑘(𝒙) denote an arbitrary principal minor of order 𝑘 of
the Hessian matrix. Then:
– (𝑎) 𝒙∗ is a local minimum point ⇒ ∆𝑘(𝒙∗) ≥ 0
for all principal minors of order 𝑘 = 1, …, 𝑛.
– (𝑏) 𝒙∗ is a local maximum point ⇒ (−1)ᵏ∆𝑘(𝒙∗) ≥ 0
for all principal minors of order 𝑘 = 1, …, 𝑛.
• That is, a second – order necessary condition for 𝑓(𝒙) to
have a minimum (maximum) at 𝒙∗ is that the Hessian
matrix of 𝑓 at 𝒙∗ is positive (negative) semidefinite.
Example:
• Determine the extreme points of the following
functions and classify them into max, min,
saddle point
(a) 𝑓(𝑥1, 𝑥2, 𝑥3) = 𝑥1² + 𝑥2² + 3𝑥3² − 𝑥1𝑥2 + 2𝑥1𝑥3 + 𝑥2𝑥3
(b) 𝑓(𝑥1, 𝑥2, 𝑥3, 𝑥4) = 20𝑥2 + 48𝑥3 + 6𝑥4 + 8𝑥1𝑥2 − 4𝑥1² − 12𝑥3² − 𝑥4² − 4𝑥2³
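For part (a) the Hessian is constant, so a single definiteness check settles the classification. A sketch in SymPy (assumed available):

```python
import sympy as sp

# Part (a): f = x1^2 + x2^2 + 3*x3^2 - x1*x2 + 2*x1*x3 + x2*x3
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 + x2**2 + 3*x3**2 - x1*x2 + 2*x1*x3 + x2*x3

# The only stationary point ...
grad = [sp.diff(f, v) for v in (x1, x2, x3)]
sol = sp.solve(grad, (x1, x2, x3))
print(sol)  # {x1: 0, x2: 0, x3: 0}

# ... is a minimum, since the constant Hessian is positive definite
H = sp.hessian(f, (x1, x2, x3))
print(H.is_positive_definite)  # True
```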

3.3 Equality Constraints: The Lagrange Problem
• A general optimization problem with equality constraints is
of the form
Max (min) 𝑓(𝑥1, 𝑥2, …, 𝑥𝑛) s.t. 𝑔1(𝑥1, …, 𝑥𝑛) = 𝑏1, ⋯, 𝑔𝑚(𝑥1, …, 𝑥𝑛) = 𝑏𝑚 (𝑚 < 𝑛) (1)
where the 𝑏𝑗 are constants
• In vector formulation, the problem is
Max(min) 𝑓(𝒙) subject to 𝑔𝑗 𝒙 = 𝑏𝑗 ,
𝑗 = 1, … , 𝑚 ( 𝑚 < 𝑛) (2)
• Defining the vector functions 𝑔 = (𝑔1, 𝑔2, … , 𝑔𝑚 ) and 𝑏 =
(𝑏1, 𝑏2, … , 𝑏𝑚 ), the constraint can be expressed as the
vector equality 𝑔(𝒙) = 𝒃
• The standard procedure for solving this problem is first
to define the Lagrange function or Lagrangian
𝐿(𝒙) = 𝑓(𝒙) + 𝜆1(𝑏1 – 𝑔1(𝒙)) + … + 𝜆𝑚(𝑏𝑚 – 𝑔𝑚(𝒙)) (3)
where 𝜆1, … , 𝜆𝑚 are Lagrange multipliers
• Necessary first-order conditions are
  𝜕𝐿(𝒙)/𝜕𝑥𝑖 = 𝜕𝑓(𝒙)/𝜕𝑥𝑖 − Σⱼ₌₁ᵐ 𝜆𝑗 𝜕𝑔𝑗(𝒙)/𝜕𝑥𝑖 = 0, 𝑖 = 1, ⋯, 𝑛 (4)

• The 𝑛 equations in (4) and the 𝑚 equations in (1) are
solved for the 𝑛 + 𝑚 variables 𝑥1, …, 𝑥𝑛 and 𝜆1, …, 𝜆𝑚
• The resulting solution vectors (𝑥1, …, 𝑥𝑛) are then the
candidates for optimality

Necessary and Sufficient Conditions
(a) Suppose that the functions 𝑓 and 𝑔1, … , 𝑔𝑚 are
defined on a set 𝑆 in ℝ𝑛 , and that 𝒙∗ = (𝑥1∗ , … , 𝑥𝑛∗ ) is
an interior point of 𝑆 that solves problem (1). Suppose
further that 𝑓 and 𝑔1, … , 𝑔𝑚 are differentiable around
𝒙∗ , and that the 𝑚 × 𝑛 matrix of first partial derivatives
of the constraint functions
               | 𝜕𝑔1(𝒙∗)/𝜕𝑥1  ⋯  𝜕𝑔1(𝒙∗)/𝜕𝑥𝑛 |
• 𝑔′(𝒙∗) =     |      ⋮        ⋮        ⋮      |    has rank 𝑚
               | 𝜕𝑔𝑚(𝒙∗)/𝜕𝑥1  ⋯  𝜕𝑔𝑚(𝒙∗)/𝜕𝑥𝑛 |
Then there exist unique numbers 𝜆1, …, 𝜆𝑚 such that the
necessary first-order conditions in (4) are valid
(b) If there exist numbers λ1, …, λm and an
admissible 𝒙∗ which together satisfy the first –
order conditions (4), and if the Lagrangian
𝐿(𝒙) defined by (3) is concave (convex) in 𝒙,
then 𝒙∗ solves the maximization
(minimization) problem (1)

Tewodros Negash (PhD), Addis Ababa


University, Department of Economics
Example: Solve the problem
• Max 𝑓(𝑥, 𝑦, 𝑧) = 𝑥 + 2𝑧
  s.t. 𝑔1(𝑥, 𝑦, 𝑧) = 𝑥 + 𝑦 + 𝑧 = 1
       𝑔2(𝑥, 𝑦, 𝑧) = 𝑥² + 𝑦² + 𝑧 = 7/4
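One way to carry out the standard procedure is to hand the stationarity conditions and the constraints to a solver. A sketch in SymPy (assumed available):

```python
import sympy as sp

# Lagrangian for the example: L = f + lam1*(1 - g1) + lam2*(7/4 - g2)
x, y, z, lam1, lam2 = sp.symbols('x y z lam1 lam2', real=True)
f = x + 2*z
g1 = x + y + z
g2 = x**2 + y**2 + z

L = f + lam1*(1 - g1) + lam2*(sp.Rational(7, 4) - g2)

# First-order conditions plus the two constraints
eqs = [sp.diff(L, v) for v in (x, y, z)] + [g1 - 1, g2 - sp.Rational(7, 4)]
candidates = sp.solve(eqs, (x, y, z, lam1, lam2), dict=True)
for c in candidates:
    print((c[x], c[y], c[z]), 'f =', f.subs(c))
# (0, -1/2, 3/2) gives f = 3 (the maximum); (1, 3/2, -3/2) gives f = -2
```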
Interpreting the Lagrange Multipliers
• The optimal values of 𝑥1, … , 𝑥𝑛 in problem (1) depend on
the vector of constants 𝒃 = (𝑏1, … , 𝑏𝑚).
• If 𝒙∗ (𝒃) = (𝑥1∗ (𝒃), … , 𝑥𝑛∗ (𝒃) ) denotes the vector of
optimal values of the choice variables, then the
corresponding value
𝑓∗(𝒃) = 𝑓(𝑥1∗(𝒃), …, 𝑥𝑛∗(𝒃)) is called the (optimal)
value function of problem (1)
• The values of the Lagrange multipliers also depend on b
𝜆𝑗 = 𝜆𝑗 (𝒃), for 𝑗 = 1, … , 𝑚
• Then 𝜕𝑓∗(𝒃)/𝜕𝑏𝑗 = 𝜆𝑗(𝒃), 𝑗 = 1, ⋯, 𝑚 (5)
• The Lagrange multiplier 𝜆𝑗 = 𝜆𝑗 (𝒃) for the 𝑗𝑡ℎ
constraint is the rate at which the optimal value of
the objective function changes w.r.t. changes in the
constant 𝑏𝑗
• In economics, the number 𝜆𝑗 (𝒃) is referred to as a
shadow price (marginal value) imputed on to a unit
of resource 𝑗.
Example
(a) Solve the problem
Max 100 − 𝑥 2 − 𝑦 2 − 𝑧 2 subject to 𝑥 + 2𝑦 + 𝑧 = 𝑎
(b) Compute the optimal value function 𝑓 ∗ (𝑎) and
verify that (5) holds
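A sketch of (a) and (b) in SymPy (assumed available), checking the envelope property (5) directly:

```python
import sympy as sp

# Max 100 - x^2 - y^2 - z^2  s.t.  x + 2y + z = a
x, y, z, lam, a = sp.symbols('x y z lam a', real=True)
f = 100 - x**2 - y**2 - z**2
L = f + lam*(a - x - 2*y - z)

# First-order conditions plus the constraint
sol = sp.solve([sp.diff(L, v) for v in (x, y, z)] + [x + 2*y + z - a],
               (x, y, z, lam), dict=True)[0]
print(sol)  # {x: a/6, y: a/3, z: a/6, lam: -a/3}

# The value function f*(a) and (5): df*/da equals the multiplier
f_star = sp.simplify(f.subs(sol))
print(f_star)                                           # 100 - a**2/6
print(sp.simplify(sp.diff(f_star, a) - sol[lam]) == 0)  # True: both are -a/3
```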
Local Second – Order Conditions
– Local second – order conditions for the general
optimization problem with equality constraint
– Consider the case with only one constraint
• Local max (min) 𝑓(𝒙) = 𝑓 (𝑥1, … . , 𝑥𝑛) subject to
𝑔(𝒙) = 𝑔 (𝑥1, … . , 𝑥𝑛) = 𝑏 (𝟏)
– The Lagrangian is 𝐿 = 𝑓(𝒙) + 𝜆(𝑏 − 𝑔(𝒙))
– Suppose 𝒙∗ satisfies the first – order conditions, so
there exists a 𝜆 such that the Lagrangian is stationary
at 𝒙∗
• For 𝑘 = 2, ⋯, 𝑛, define the bordered Hessian determinant

              | 0        𝑔1(𝒙∗)   ⋯  𝑔𝑘(𝒙∗)  |
  𝐵𝑘(𝒙∗) =    | 𝑔1(𝒙∗)  𝐿11(𝒙∗)  ⋯  𝐿1𝑘(𝒙∗) |    (𝟐)
              |   ⋮         ⋮     ⋱     ⋮     |
              | 𝑔𝑘(𝒙∗)  𝐿𝑘1(𝒙∗)  ⋯  𝐿𝑘𝑘(𝒙∗) |
• Then we have the following results
  (𝑎) 𝐵𝑘(𝒙∗) < 0 for 𝑘 = 2, …, 𝑛 ⇒ 𝒙∗ solves the local min problem in (1) (3)
  (𝑏) (−1)ᵏ𝐵𝑘(𝒙∗) > 0 for 𝑘 = 2, …, 𝑛 ⇒ 𝒙∗ solves the local max problem in (1) (4)
• Consider the general optimization problem with several
equality constraints
Local max (min) 𝑓(𝒙) subject to
𝑔𝑗(𝒙) = 𝑏𝑗 , 𝑗 = 1, … , 𝑚 (𝑚 < 𝑛) (5)
• The Lagrangian is
  𝐿(𝒙) = 𝑓(𝒙) + Σⱼ₌₁ᵐ 𝜆𝑗 (𝑏𝑗 − 𝑔𝑗(𝒙))
The bordered Hessian looks like the following, where 𝑔𝑖𝑗(𝒙∗) denotes 𝜕𝑔𝑗(𝒙∗)/𝜕𝑥𝑖:

              | 0         ⋯  0         𝑔11(𝒙∗)  ⋯  𝑔𝑘1(𝒙∗) |
              | ⋮         ⋱  ⋮            ⋮      ⋱     ⋮     |
              | 0         ⋯  0         𝑔1𝑚(𝒙∗)  ⋯  𝑔𝑘𝑚(𝒙∗) |
  𝐵𝑘(𝒙∗) =    | 𝑔11(𝒙∗)  ⋯  𝑔1𝑚(𝒙∗)   𝐿11(𝒙∗)  ⋯  𝐿1𝑘(𝒙∗) |    (6)
              | ⋮         ⋱  ⋮            ⋮      ⋱     ⋮     |
              | 𝑔𝑘1(𝒙∗)  ⋯  𝑔𝑘𝑚(𝒙∗)   𝐿𝑘1(𝒙∗)  ⋯  𝐿𝑘𝑘(𝒙∗) |

Second – Derivative Test: The General Case
• Suppose that the functions 𝑓 and 𝑔1, …, 𝑔𝑚 are defined on a set 𝑆,
and let 𝒙∗ be an interior point in 𝑆 satisfying the necessary
conditions. Suppose that 𝑓 and 𝑔1, …, 𝑔𝑚 are twice differentiable
around 𝒙∗. Then:
  (a) If (−1)ᵐ𝐵𝑘(𝒙∗) > 0 for 𝑘 = 𝑚 + 1, …, 𝑛, then 𝒙∗
  solves the local minimization problem in (5).
  (b) If (−1)ᵏ𝐵𝑘(𝒙∗) > 0 for 𝑘 = 𝑚 + 1, …, 𝑛, then 𝒙∗
  solves the local maximization problem in (5).
• Various bordered leading principal minors can be formed:
• 𝐵2(𝒙∗) has 𝐿22 as the last element of its principal diagonal
• 𝐵3(𝒙∗) has 𝐿33 as the last element of its principal diagonal
• 𝐵𝑛(𝒙∗) has 𝐿𝑛𝑛 as the last element of its principal diagonal
• With this notation, the second-order conditions in (𝑎) and (𝑏)
can be stated in terms of the signs of the bordered leading
principal minors 𝐵𝑚+1, 𝐵𝑚+2, ⋯, 𝐵𝑛
• The sign factor (−1)ᵐ in (𝑎) is the same for all 𝑘, while the
sign factor (−1)ᵏ in (𝑏) varies with 𝑘 (= 𝑚 + 1, …, 𝑛).

Example:
• Find the local max (min) of 𝑓(𝑥, 𝑦, 𝑧) = 𝑥² + 𝑦² + 𝑧²
  s.t. 𝑔1(𝑥, 𝑦, 𝑧) = 𝑥 + 2𝑦 + 𝑧 = 30
       𝑔2(𝑥, 𝑦, 𝑧) = 2𝑥 − 𝑦 − 3𝑧 = 10
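A SymPy sketch (assumed available) of the first-order system; since 𝑓 is convex and the constraints are linear, the single candidate is the local (indeed global) minimum, and there is no local maximum:

```python
import sympy as sp

# min x^2 + y^2 + z^2  s.t.  x + 2y + z = 30,  2x - y - 3z = 10
x, y, z, l1, l2 = sp.symbols('x y z l1 l2', real=True)
f = x**2 + y**2 + z**2
L = f + l1*(30 - x - 2*y - z) + l2*(10 - 2*x + y + 3*z)

sol = sp.solve([sp.diff(L, v) for v in (x, y, z)]
               + [x + 2*y + z - 30, 2*x - y - 3*z - 10],
               (x, y, z, l1, l2), dict=True)[0]
print(sol)          # {x: 10, y: 10, z: 0, l1: 12, l2: 4}
print(f.subs(sol))  # 200, the minimum value
```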

3.4 Inequality Constraints: Nonlinear Programming
• Replacing the equality constraints with inequality
constraints results in the following nonlinear
programming problem
  Max 𝑓(𝑥1, …, 𝑥𝑛) subject to 𝑔1(𝑥1, …, 𝑥𝑛) ≤ 𝑏1, ⋯, 𝑔𝑚(𝑥1, …, 𝑥𝑛) ≤ 𝑏𝑚 (1)
  where 𝑏1, …, 𝑏𝑚 are constants
• A vector 𝒙 = (𝑥1, … , 𝑥𝑛) that satisfies all the
constraints is called admissible or feasible
• The set of all admissible vectors is the admissible or the
feasible set
• Minimizing 𝑓(𝒙) is equivalent to maximizing – 𝑓(𝒙)
• The constraint 𝑔𝑗(𝒙) ≥ 𝑏𝑗 can be rewritten as −𝑔𝑗(𝒙) ≤ −𝑏𝑗
The procedure for solving (1) involves
• Defining the Lagrangian exactly as before
𝐿 𝑥 = 𝑓 𝑥 + 𝜆1 𝑏1 − 𝑔1 𝑥 + ⋯ + 𝜆𝑚 𝑏𝑚 − 𝑔𝑚 𝑥
Where 𝜆1 ⋯ 𝜆𝑚 are the Lagrange multipliers
• Again the first-order partial derivatives of the Lagrangian
are equated to 0:
  𝜕𝐿(𝒙)/𝜕𝑥𝑖 = 𝜕𝑓(𝒙)/𝜕𝑥𝑖 − Σⱼ₌₁ᵐ 𝜆𝑗 𝜕𝑔𝑗(𝒙)/𝜕𝑥𝑖 = 0, 𝑖 = 1, ⋯, 𝑛 (2)
• In addition, we introduce the complementary slackness
conditions
𝜆𝑗 ≥ 0, with 𝜆𝑗 = 0 if 𝑔𝑗(𝒙) < 𝑏𝑗 , 𝑗 = 1, ⋯, 𝑚 (3)
If 𝜆𝑗 > 0, we must have 𝑔𝑗(𝒙) = 𝑏𝑗
• Finally, the inequality constraints themselves have to be
satisfied.
Kuhn – Tucker Conditions
→ Maximization Problem
• Max 𝑧 = 𝑓(𝑥1, 𝑥2 … . , 𝑥𝑛)
Subject to 𝑔1 𝑥1, … . , 𝑥𝑛 ≤ 𝑏1
……………………
𝑔𝑚(𝑥1, … . , 𝑥𝑛) ≤ 𝑏𝑚
and 𝑥1, 𝑥2 … . , 𝑥𝑛 ≥ 0
(a) Form the Lagrangian function
  𝐿 = 𝑓(𝑥1, 𝑥2, …, 𝑥𝑛) + Σᵢ₌₁ᵐ 𝜆𝑖 [𝑏𝑖 − 𝑔𝑖(𝑥1, 𝑥2, …, 𝑥𝑛)]
(b) The Kuhn – Tucker conditions
𝐿𝑥𝑖 ≤ 0, 𝑥𝑖 ≥ 0 and 𝑥𝑖𝐿𝑥𝑖 = 0
𝐿𝜆𝑖 ≥ 0, 𝜆𝑖 ≥ 0 and 𝜆𝑖𝐿𝜆𝑖 = 0
𝑥𝑖𝐿𝑥𝑖 = 0 and 𝜆𝑖𝐿𝜆𝑖 = 0 are the complementary slackness conditions
Kuhn – Tucker conditions: Minimization Problem
• Min 𝑧 = 𝑓(𝑥1, 𝑥2 … . , 𝑥𝑛)
Subject to 𝑔1(𝑥1, … . , 𝑥𝑛) ≥ 𝑏1
………………………
𝑔𝑚(𝑥1, … . , 𝑥𝑛) ≥ 𝑏𝑚
and 𝑥1, 𝑥2 … . , 𝑥𝑛 ≥ 0
(a) Form the Lagrangian function
  𝐿 = 𝑓(𝑥1, 𝑥2, …, 𝑥𝑛) + Σᵢ₌₁ᵐ 𝜆𝑖 [𝑏𝑖 − 𝑔𝑖(𝑥1, 𝑥2, …, 𝑥𝑛)]
(b) The Kuhn – Tucker condition
𝐿𝑥𝑖 ≥ 0, 𝑥𝑖 ≥ 0 and 𝑥𝑖𝐿𝑥𝑖 = 0
𝐿𝜆𝑖 ≤ 0, 𝜆𝑖 ≥ 0 and 𝜆𝑖𝐿𝜆𝑖 = 0
• If 𝑔𝑗(𝒙∗) = 𝑏𝑗 , the 𝑗th constraint is active or binding at 𝒙∗.
Examples
(1) Max 𝑓(𝑥, 𝑦) = −(𝑥 − 2)² − (𝑦 − 3)²
    subject to 𝑥 ≤ 1, 𝑦 ≤ 1
    and 𝑥, 𝑦 ≥ 0
(2) Max 𝑓(𝑥, 𝑦) = 𝑥𝑦 + 𝑥²
    subject to 𝑔1(𝑥, 𝑦) = 𝑥² + 𝑦² ≤ 2
               𝑔2(𝑥, 𝑦) = −𝑦 ≤ −1
    and 𝑥, 𝑦 ≥ 0
(3) Min 𝐶(𝑥1, 𝑥2) = (𝑥1 − 4)² + (𝑥2 − 4)²
    subject to 𝑔1(𝑥1, 𝑥2) = 2𝑥1 + 3𝑥2 ≥ 6
               𝑔2(𝑥1, 𝑥2) = −3𝑥1 − 2𝑥2 ≥ −12
    and 𝑥1, 𝑥2 ≥ 0
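Example (1) can be cross-checked numerically: its constraints and nonnegativity conditions amount to the box 0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 1, so a bound-constrained minimizer of −𝑓 suffices. A sketch assuming SciPy is available:

```python
from scipy.optimize import minimize

# Example (1): max -(x-2)^2 - (y-3)^2 over the box [0, 1] x [0, 1],
# done here by minimizing -f with a bound-constrained solver
def neg_f(v):
    return (v[0] - 2)**2 + (v[1] - 3)**2

res = minimize(neg_f, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print(res.x)     # approximately [1, 1]: both constraints bind at the optimum
print(-res.fun)  # approximately -5, the maximum value of f
```

Both multipliers are strictly positive at the solution, consistent with the complementary slackness conditions: each constraint holds with equality.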