
Mathematics for Economics (ECON 104)

Lecture 9: Optimization with Equality Constraints II and Optimization with Inequality Constraints (Ch. 14)
Sufficient Condition: Global Optimum
(Ch. 14.5)

Theorem: Suppose that f and g are continuously differentiable functions defined on an open convex subset S ⊆ R^2. Suppose that there exists a number λ* such that (x*, y*) is a stationary point of

L(x, y) = f(x, y) − λ*(g(x, y) − c).

Lastly, suppose that g(x*, y*) = c. Then

• If the Hessian matrix associated with L(x, y, λ*) is negative semidefinite (NSD) "for all (x, y)," then (x*, y*) solves the constrained maximization problem.

• If the Hessian matrix associated with L(x, y, λ*) is positive semidefinite (PSD) "for all (x, y)," then (x*, y*) solves the constrained minimization problem.

Example: Consider the consumer's utility maximization problem:

max_{x,y} f(x, y) = x^α y^β subject to p_x x + p_y y = I.

Assume that x ≥ 0, y ≥ 0, α > 0, β > 0 and α + β ≤ 1. Then, the stationary point (from the FOCs of the Lagrangian),

(x*, y*) = ( (α/(α+β)) (I/p_x), (β/(α+β)) (I/p_y) ),

is a global max point. To see that, consider

L(x, y, λ) = x^α y^β − λ(p_x x + p_y y − I).

It suffices to show that the Hessian matrix associated with L(x, y, λ) is NSD for all (x, y).

Example (continued): Find

L'_1 = αx^{α−1}y^β − λp_x,   L'_2 = βx^α y^{β−1} − λp_y

L''_{11} = α(α − 1)x^{α−2}y^β
L''_{12} = αβx^{α−1}y^{β−1} = L''_{21}
L''_{22} = β(β − 1)x^α y^{β−2}

The Hessian matrix is

H_L(x, y) = [ α(α − 1)x^{α−2}y^β     αβx^{α−1}y^{β−1}    ]
            [ αβx^{α−1}y^{β−1}       β(β − 1)x^α y^{β−2} ],

and thus

α(α − 1)x^{α−2}y^β ≤ 0;   β(β − 1)x^α y^{β−2} ≤ 0;   and

|H_L(x, y)| = αβ[(α − 1)(β − 1) − αβ] x^{2α−2}y^{2β−2}
            = αβ[1 − α − β] x^{2α−2}y^{2β−2} ≥ 0.

This means that H_L(x, y) is negative semidefinite for all nonnegative values of x and y.
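For readers who want to verify this mechanically, here is a small SymPy sketch of the check; the parameter values α = β = 1/3, p_x = 2, p_y = 1, I = 12 are illustrative assumptions, not part of the lecture.

```python
# SymPy sketch of the check above. The parameter values are
# illustrative assumptions only (alpha + beta <= 1 holds).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
alpha, beta = sp.Rational(1, 3), sp.Rational(1, 3)
px, py, I = 2, 1, 12

xstar = alpha / (alpha + beta) * I / px               # stationary point from the FOCs
ystar = beta / (alpha + beta) * I / py
lam = alpha * xstar**(alpha - 1) * ystar**beta / px   # lambda* from L'_1 = 0

L = x**alpha * y**beta - lam * (px * x + py * y - I)
print(sp.diff(L, x).subs({x: xstar, y: ystar}).equals(0))  # True: L'_1 = 0
print(sp.diff(L, y).subs({x: xstar, y: ystar}).equals(0))  # True: L'_2 = 0

# The constraint is linear, so H_L equals the Hessian of f = x^a y^b.
H = sp.hessian(x**alpha * y**beta, (x, y))
print(sp.simplify(H[0, 0]))   # -(2/9) x^(-5/3) y^(1/3)  <= 0
print(sp.simplify(H.det()))   # (1/27) x^(-4/3) y^(-4/3) >= 0
```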

Summary

• Without constraint:

             Local Max or Min                Global Max or Min
  FOC        f'_i(x*, y*) = 0 for all i      f'_i(x*, y*) = 0 for all i
  SOC        H is ND or PD at (x*, y*)       H is NSD or PSD for all (x, y)

• With constraint:

             Local Max or Min                Global Max or Min
  FOC        L'_i(x*, y*) = 0 for all i      L'_i(x*, y*) = 0 for all i
  SOC        |H̄| > 0 or < 0 at (x*, y*)      H_L is NSD or PSD for all (x, y)

where H_L denotes the Hessian matrix associated with L(x, y, λ*), given the Lagrange multiplier λ* we obtained.
Optimization with 3 Variables and 2
Equality Constraints (Ch. 14.6)

The Lagrangian method can be extended to a problem with three variables and two equality constraints:

max_{x,y,z} f(x, y, z) subject to g^1(x, y, z) = c_1 and g^2(x, y, z) = c_2.

The Lagrangian for this problem is

L = f(x, y, z) − λ_1 (g^1(x, y, z) − c_1) − λ_2 (g^2(x, y, z) − c_2).

FOCs with 3 Variables and 2 Equality Constraints:

L'_1 = ∂f/∂x − λ_1 (∂g^1/∂x) − λ_2 (∂g^2/∂x) = 0
L'_2 = ∂f/∂y − λ_1 (∂g^1/∂y) − λ_2 (∂g^2/∂y) = 0
L'_3 = ∂f/∂z − λ_1 (∂g^1/∂z) − λ_2 (∂g^2/∂z) = 0
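As a bookkeeping aid, a minimal SymPy sketch that derives these three FOCs for abstract f, g^1, g^2 (the symbols are placeholders for any concrete problem):

```python
# Minimal SymPy sketch: derive the three FOCs above for abstract
# functions f, g1, g2 (placeholders for any concrete problem).
import sympy as sp

x, y, z, l1, l2, c1, c2 = sp.symbols('x y z lambda_1 lambda_2 c_1 c_2')
f = sp.Function('f')(x, y, z)
g1 = sp.Function('g1')(x, y, z)
g2 = sp.Function('g2')(x, y, z)

L = f - l1 * (g1 - c1) - l2 * (g2 - c2)
for v in (x, y, z):
    sp.pprint(sp.Eq(sp.diff(L, v), 0))  # df/dv - l1*dg1/dv - l2*dg2/dv = 0
```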

Theorem (Necessity for Lagrangian Method): Suppose that f, g^1 and g^2 are continuously differentiable and (x*, y*, z*) is an interior point of the domain S which is also a local maximum (or minimum) point of f subject to g^1(x, y, z) = c_1 and g^2(x, y, z) = c_2. Assume also that the following constraint qualification is satisfied: for any feasible point (x, y, z) satisfying g^1(x, y, z) = c_1 and g^2(x, y, z) = c_2, the two vectors ∇g^1(x, y, z) and ∇g^2(x, y, z) are not parallel to each other.

Then, there exists a unique pair of numbers (λ_1, λ_2) such that

L'_1 = ∂f(x*, y*, z*)/∂x − λ_1 ∂g^1(x*, y*, z*)/∂x − λ_2 ∂g^2(x*, y*, z*)/∂x = 0;
L'_2 = ∂f(x*, y*, z*)/∂y − λ_1 ∂g^1(x*, y*, z*)/∂y − λ_2 ∂g^2(x*, y*, z*)/∂y = 0;
L'_3 = ∂f(x*, y*, z*)/∂z − λ_1 ∂g^1(x*, y*, z*)/∂z − λ_2 ∂g^2(x*, y*, z*)/∂z = 0.
Example: Consider the problem:

max_{x,y,z} f(x, y, z) = x + y subject to g^1(x, y, z) = x^2 + 2y^2 + z^2 = 1 and g^2(x, y, z) = x + y + z = 1.

Define

D_1 = {(x, y, z) ∈ R^3 | x^2 + 2y^2 + z^2 = 1},
D_2 = {(x, y, z) ∈ R^3 | x + y + z = 1}.

Since g^1 and g^2 are polynomials, they are continuous. Thus, D_1 and D_2 are closed.

Define D = D_1 ∩ D_2, which is also closed. Why? (An intersection of closed sets is closed.)

It suffices to show that D_1 is bounded for D to be bounded. Define

B = {(x, y, z) ∈ R^3 | x^2 + y^2 + z^2 ≤ 1}.

For any (x, y, z) ∈ D_1,

x^2 + y^2 + z^2 = x^2 + 2y^2 + z^2 − y^2 = 1 − y^2 ≤ 1  ⇒  (x, y, z) ∈ B.

So, B contains D_1, and hence D_1 is bounded.

f is continuous because it is a polynomial. So, by the extreme value theorem, a solution to the problem is guaranteed to exist.

To find the solutions, we set up the Lagrangian:

L(x, y, z, λ_1, λ_2) = x + y − λ_1 (x^2 + 2y^2 + z^2 − 1) − λ_2 (x + y + z − 1).

The first-order conditions are

L'_1 = 1 − 2λ_1 x − λ_2 = 0     (1)
L'_2 = 1 − 4λ_1 y − λ_2 = 0     (2)
L'_3 = −2λ_1 z − λ_2 = 0        (3)

and the constraints are

x^2 + 2y^2 + z^2 = 1     (4)
x + y + z = 1.           (5)

From (1) and (2), we obtain

2λ_1(x − 2y) = 0.

Then, we divide our argument into two cases: (i) λ_1 = 0 or (ii) x = 2y.

Case (i) λ_1 = 0: we obtain λ_2 = 1 from (2) and λ_2 = 0 from (3). Contradiction. Thus, we can ignore this case.

Case (ii) x = 2y: Plugging this relationship into the constraints (4) and (5), we get

6y^2 + z^2 = 1     (6)
3y + z = 1.        (7)

From equations (6) and (7), we get

6y^2 + (1 − 3y)^2 = 1
⇒ 15y^2 − 6y = 0
⇒ 3y(5y − 2) = 0.

Thus, y = 0 or y = 2/5.

We summarize the two candidates:

y* = 0,   x* = 0,   z* = 1,    λ*_1 = −1/2,  λ*_2 = 1;
y* = 2/5, x* = 4/5, z* = −1/5, λ*_1 = 1/2,   λ*_2 = 1/5.

Hence, (x*, y*, z*) = (4/5, 2/5, −1/5) is the candidate for the maximum point (note that the objective function is x + y, which equals 6/5 there versus 0 at the other candidate).
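The case analysis can be cross-checked mechanically; a short SymPy sketch that solves (1)-(5) and compares the objective values:

```python
# SymPy sketch: solve the FOCs (1)-(3) together with the constraints
# (4)-(5), then compare the objective x + y across the solutions.
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z lambda_1 lambda_2', real=True)
eqs = [1 - 2*l1*x - l2,             # (1)
       1 - 4*l1*y - l2,             # (2)
       -2*l1*z - l2,                # (3)
       x**2 + 2*y**2 + z**2 - 1,    # (4)
       x + y + z - 1]               # (5)

for s in sp.solve(eqs, [x, y, z, l1, l2], dict=True):
    print(s, '  x + y =', s[x] + s[y])
# (0, 0, 1) gives x + y = 0; (4/5, 2/5, -1/5) gives x + y = 6/5
```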

Claim: The constraint qualification is satisfied.

Compute

∇g^1(x, y, z) = (∂g^1/∂x, ∂g^1/∂y, ∂g^1/∂z) = (2x, 4y, 2z)  and
∇g^2(x, y, z) = (∂g^2/∂x, ∂g^2/∂y, ∂g^2/∂z) = (1, 1, 1).

Suppose, on the contrary, that there exist a feasible vector (x, y, z) and α ≠ 0 such that ∇g^1(x, y, z) = α∇g^2(x, y, z), i.e., the constraint qualification is violated.

This implies that (x, y, z) = (α/2, α/4, α/2).

Since (x, y, z) is a feasible point, it must satisfy g^2(x, y, z) = 1, i.e., α/2 + α/4 + α/2 = 1, so α = 4/5. This implies that (x, y, z) = (2/5, 1/5, 2/5).

Then, we confirm

g^1(x, y, z) = x^2 + 2y^2 + z^2 = 4/25 + 2/25 + 4/25 = 2/5 ≠ 1.

Therefore, (2/5, 1/5, 2/5) is not a feasible point. This shows that the constraint qualification is satisfied. □

Hence, (x*, y*, z*) = (4/5, 2/5, −1/5) is the (global) maximum point for the constrained optimization problem.
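As a numerical illustration, the two gradients at the maximizer are easily seen to be non-parallel (nonzero cross product), consistent with the claim above; a quick NumPy check:

```python
# NumPy check: the gradients of g1 and g2 at the maximizer are not
# parallel, since their cross product is nonzero.
import numpy as np

x, y, z = 4/5, 2/5, -1/5
grad_g1 = np.array([2*x, 4*y, 2*z])  # gradient of x^2 + 2y^2 + z^2
grad_g2 = np.array([1.0, 1.0, 1.0])  # gradient of x + y + z
print(np.cross(grad_g1, grad_g2))    # [ 2. -2.  0.] != 0, so not parallel
```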

Example: Consider the problem:

min_{x,y,z} x^2 + y^2 + z^2 subject to x + 2y + z = 30 and 2x − y − 3z = 10.

We set up the Lagrangian:

L = x^2 + y^2 + z^2 − λ_1 (x + 2y + z − 30) − λ_2 (2x − y − 3z − 10).

The first-order conditions are

2x − λ_1 − 2λ_2 = 0     (1)
2y − 2λ_1 + λ_2 = 0     (2)
2z − λ_1 + 3λ_2 = 0,    (3)

and the constraints are

x + 2y + z = 30 and 2x − y − 3z = 10.

Equations (1), (2), and (3) imply

x = (1/2)(λ_1 + 2λ_2)
y = (1/2)(2λ_1 − λ_2)
z = (1/2)(λ_1 − 3λ_2).

Substituting these into the two equality constraints yields

x + 2y + z = 3λ_1 − (3/2)λ_2 = 30      (4)
2x − y − 3z = −(3/2)λ_1 + 7λ_2 = 10.   (5)

So, we have two equations in the two unknowns (λ_1, λ_2). Adding (4) to twice (5) yields λ_2 = 4.

Substituting this into (5), we then obtain λ_1 = 12.

When λ_1 = 12 and λ_2 = 4, we have (x, y, z) = (10, 10, 0).

So, (x*, y*, z*) = (10, 10, 0) is the unique solution candidate.
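Because the FOCs and constraints here are all linear, the candidate can also be obtained in one step; a NumPy sketch (assuming the equations as numbered above):

```python
# NumPy sketch: the FOCs (1)-(3) plus the two constraints form a linear
# system in (x, y, z, lambda_1, lambda_2); solve it directly.
import numpy as np

A = np.array([[2,  0,  0, -1, -2],   # 2x -   l1 - 2*l2 = 0
              [0,  2,  0, -2,  1],   # 2y - 2*l1 +   l2 = 0
              [0,  0,  2, -1,  3],   # 2z -   l1 + 3*l2 = 0
              [1,  2,  1,  0,  0],   # x + 2y + z = 30
              [2, -1, -3,  0,  0]])  # 2x - y - 3z = 10
b = np.array([0, 0, 0, 30, 10])
print(np.linalg.solve(A, b))  # [10. 10.  0. 12.  4.]
```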

Claim: (x*, y*, z*) is the global min point.

Proof: We show this directly. Using the two equality constraints, we express y and z as functions of x:

y = 20 − x
z = x − 10.

Substituting these into the objective function, we obtain

x^2 + y^2 + z^2 = x^2 + (20 − x)^2 + (x − 10)^2
               = 3x^2 − 60x + 500
               = 3(x − 10)^2 + 200.

Since (x − 10)^2 is always nonnegative, x = 10 gives the global minimum of f subject to these two equality constraints. This indeed coincides with the answer we obtained using the Lagrangian method. □
Optimization with an Inequality
Constraint (Ch. 14.8)

Many models in economics have inequality constraints. Consider the problem:

max_{x,y} f(x, y) subject to g(x, y) ≤ c.

We say that (x, y) is a feasible point if g(x, y) ≤ c.

Denote the optimal choice by (x*, y*), and define f*(c) = f(x*, y*). Then, we already established the following interpretation of λ:

df*(c)/dc = λ.

Case 1: g(x*, y*) = c

At the optimal choice (x*, y*), the constraint holds with equality: g(x*, y*) = c.

If we relax the constraint by raising c, then the optimal choice (x*, y*) changes and the maximum value of the function increases. ⇒ λ > 0.

Hence, the optimal choice is restricted by the constraint. We then say "the constraint is binding."

Case 2: g(x*, y*) < c

The optimal choice (x*, y*) is the highest peak of f(x, y).

If we relax the constraint by raising c, the optimal choice is not affected, and the maximum value of the function therefore remains the same. ⇒ λ = 0.

We then say "the constraint is not binding (or slack)."

Theorem [Necessity of Kuhn-Tucker Conditions]: Suppose that (x*, y*) solves the problem:

max_{x,y} f(x, y) subject to g(x, y) ≤ c,

where f and g are continuously differentiable, and that the following constraint qualification is satisfied: for any feasible point (x, y) satisfying g(x, y) = c,

(g'_1(x, y), g'_2(x, y)) ≠ (0, 0).

Form

L = f(x, y) − λ(g(x, y) − c).

Then, there is a unique λ* ∈ R such that

L'_1(x*, y*) = 0,
L'_2(x*, y*) = 0,
λ* ≥ 0, g(x*, y*) ≤ c and λ*[g(x*, y*) − c] = 0.

Proof

Step 1: L'_1(x*, y*) = L'_2(x*, y*) = 0.

Consider the case where the constraint is binding: g(x*, y*) = c. In this case, the optimal choice lies on the boundary g(x*, y*) = c.

Thus, the optimization problem with an inequality constraint is the same as the optimization problem with an equality constraint. At the optimum of the equality-constrained problem,

L'_1(x*, y*) = L'_2(x*, y*) = 0.

Observe also that

∇f(x*, y*) = λ · ∇g(x*, y*).
Consider the case where the constraint is slack: g(x*, y*) < c.

The optimization problem with an inequality constraint is then the same as the optimization problem without a constraint. Thus, at the solution, we have f'_i(x*, y*) = 0 for all i = 1, 2.

Since λ = 0, L'_i(x*, y*) = f'_i(x*, y*) for all i = 1, 2. Hence, L'_i(x*, y*) = 0 for all i = 1, 2.

Step 2: λ ≥ 0 and g(x*, y*) ≤ c.

The inequality g(x*, y*) ≤ c is obviously true (feasibility).

Suppose that c increases to c' > c. The constraint set expands, so there are more options than before. The best choice with more options cannot be worse than the best choice with fewer options.

⇒ the maximum value cannot decrease when c rises. Hence, λ < 0 cannot be true.

Step 3: λ[g(x*, y*) − c] = 0.

If λ > 0 (as in the first case of Step 1), then g(x*, y*) = c. And if g(x*, y*) < c (as in the second case of Step 1), then λ = 0. □

We reproduce the conditions:

L'_1(x*, y*) = 0
L'_2(x*, y*) = 0
λ ≥ 0, g(x*, y*) ≤ c and λ[g(x*, y*) − c] = 0.

These (necessary) conditions are known as the Kuhn-Tucker conditions, after Harold W. Kuhn and Albert W. Tucker.

The inequalities λ ≥ 0 and g(x*, y*) ≤ c, together with λ[g(x*, y*) − c] = 0, are called the complementary slackness conditions: at most one of the two inequalities (not both) can be slack, and equivalently at least one must hold with equality.

NOTE: It is possible that both λ = 0 and g(x*, y*) − c = 0.
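In practice, the Kuhn-Tucker conditions are applied by enumerating the complementary-slackness cases: either λ = 0 or the constraint binds. A minimal SymPy sketch of this recipe for one constraint g ≤ c; the helper name kt_candidates and the toy problem max −x^2 s.t. x ≤ 1 are illustrative assumptions, not from the lecture:

```python
# Sketch of the case-enumeration recipe implied by complementary
# slackness for a single constraint g <= c.
import sympy as sp

def kt_candidates(f, g, c, variables):
    lam = sp.Symbol('lam')
    L = f - lam * (g - c)
    focs = [sp.diff(L, v) for v in variables]
    out = []
    # Case 1: lambda = 0 (constraint slack); keep feasible solutions.
    for s in sp.solve([e.subs(lam, 0) for e in focs], variables, dict=True):
        if g.subs(s) <= c:
            out.append({**s, lam: 0})
    # Case 2: constraint binds; keep solutions with lambda >= 0.
    for s in sp.solve(focs + [g - c], list(variables) + [lam], dict=True):
        if s[lam] >= 0:
            out.append(s)
    return out

x = sp.Symbol('x', real=True)
print(kt_candidates(-x**2, x, 1, [x]))  # [{x: 0, lam: 0}]: constraint slack
```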

Sufficiency of Kuhn-Tucker Conditions: Single Variable (Ch. 14.9)

In an optimization problem with an inequality constraint, a feasible point x* satisfying the Kuhn-Tucker conditions is sufficient for a global optimum:

• if L''_xx(x, λ) ≤ 0 for "all x," where λ is obtained from the Kuhn-Tucker conditions, then x* is the global max.

• if L''_xx(x, λ) ≥ 0 for "all x," where λ is obtained from the Kuhn-Tucker conditions, then x* is the global min.

Sufficiency of Kuhn-Tucker Conditions: One Constraint (Ch. 14.9)

In an optimization problem with an inequality constraint, a feasible point (x*, y*) satisfying the Kuhn-Tucker conditions is sufficient for a global optimum:

• if the Hessian matrix associated with L(x, y, λ), where λ is obtained from the Kuhn-Tucker conditions, is negative semidefinite (NSD) for "all (x, y)," then (x*, y*) is a global max point.

• if the Hessian matrix associated with L(x, y, λ), where λ is obtained from the Kuhn-Tucker conditions, is positive semidefinite (PSD) for "all (x, y)," then (x*, y*) is a global min point.
Example 1: (one variable and one constraint) Consider the problem:

max_x −(x − 2)^2 subject to x ≥ 1.

The problem can be rewritten as

max_x −(x − 2)^2 subject to 1 − x ≤ 0.

The Lagrangian is

L(x, λ) = −(x − 2)^2 − λ(1 − x).

The Kuhn-Tucker conditions are

L'_x = −2(x − 2) + λ = 0
λ ≥ 0, x ≥ 1 and λ(1 − x) = 0.
From λ(1 − x) = 0, we have either λ = 0 or x = 1. Consider two cases:

Case 1: λ = 0. We have −2(x − 2) = 0 ⇒ x = 2.

Case 2: x = 1. We have 2 + λ = 0 ⇒ λ = −2. This violates λ ≥ 0.

Hence, the solution of the K-T conditions is (x*, λ*) = (2, 0).

Since L''_xx(x, λ* = 0) = −2 < 0 for all x, x* = 2 is the solution to the constrained optimization problem.
Example 2: (one variable and one constraint) Consider the problem:

max_x −(x − 2)^2 subject to x ≥ 3.

The Lagrangian is

L(x, λ) = −(x − 2)^2 − λ(3 − x).

The Kuhn-Tucker conditions are

L'_x = −2(x − 2) + λ = 0
λ ≥ 0, x ≥ 3 and λ(3 − x) = 0.

From λ(3 − x) = 0, we have either λ = 0 or x = 3.

Case 1: λ = 0. We get −2(x − 2) = 0 ⇒ x = 2, which, however, cannot be a solution, since it does not satisfy the constraint x ≥ 3.

Case 2: x = 3. From the first equation, we have λ = 2.

Thus, the solution that satisfies the K-T conditions is (x*, λ*) = (3, 2).

Since L''_xx(x, λ* = 2) = −2 < 0 for all x, x* = 3 is the solution to the constrained optimization problem.
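Examples 1 and 2 can be cross-checked numerically (maximizing −(x − 2)^2 is the same as minimizing (x − 2)^2); a short SciPy sketch:

```python
# SciPy cross-check of Examples 1 and 2: minimize (x-2)^2 under the
# lower bounds x >= 1 and x >= 3, matching x* = 2 and x* = 3.
from scipy.optimize import minimize

for lower in (1.0, 3.0):
    res = minimize(lambda v: (v[0] - 2)**2, x0=[5.0], bounds=[(lower, None)])
    print(lower, res.x)
```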
Example 3: (two variables and one constraint) Consider the problem:

max_{x,y} xy s.t. x^2 + y^2 ≤ 1.

Set up the Lagrangian:

L(x, y, λ) = xy − λ(x^2 + y^2 − 1).

The first-order conditions are

L'_1 = y − 2λx = 0
L'_2 = x − 2λy = 0
λ ≥ 0, x^2 + y^2 ≤ 1 and λ(x^2 + y^2 − 1) = 0.

From λ(x^2 + y^2 − 1) = 0, we consider two cases.

Case 1: λ = 0. We get x = y = 0 from the first two equations. This satisfies all the first-order conditions, and so it is a candidate for the solution.

Case 2: x^2 + y^2 = 1. From the first two equations, we get

λ = y/(2x) = x/(2y), so x^2 = y^2.

Then we get

x^2 = y^2 = 1/2, i.e., x = ±1/√2, y = ±1/√2.

Now the candidates are:

x = 1/√2,   y = 1/√2,   λ = 1/2
x = −1/√2,  y = −1/√2,  λ = 1/2
x = 1/√2,   y = −1/√2,  λ = −1/2
x = −1/√2,  y = 1/√2,   λ = −1/2.

We disregard the last two cases since λ < 0.

Hence, we have three candidates that satisfy the K-T conditions:

x* = 0,      y* = 0,      λ* = 0
x* = 1/√2,   y* = 1/√2,   λ* = 1/2
x* = −1/√2,  y* = −1/√2,  λ* = 1/2.

Since the objective function is xy, the solutions are the last two (each gives xy = 1/2, whereas the first gives xy = 0).

Note also that the constraint qualification is satisfied at the solutions. This is because ∇g(x, y) = (2x, 2y), so (0, 0) is the only point at which the constraint qualification could fail, and the constraint is slack there: x^2 + y^2 < 1 at (x, y) = (0, 0).

Define

D = {(x, y) ∈ R^2 | x^2 + y^2 ≤ 1}.

D is bounded, as it is a closed ball. Let g(x, y) = x^2 + y^2. Since g(x, y) is a polynomial, it is continuous; hence D is closed. Thus, D is compact. f(x, y) = xy is a product of polynomials, so f is continuous.

By the extreme value theorem, the constrained maximization problem has a solution.

We thus conclude that (x*, y*) = (1/√2, 1/√2) and (−1/√2, −1/√2) are the solutions, and f achieves 1/2 as the maximum value.
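A numerical cross-check with SciPy (minimizing −xy under the constraint; with a symmetric positive starting point, the solver converges to the positive solution):

```python
# SciPy cross-check of Example 3: maximize xy on the unit disk by
# minimizing -xy with the inequality constraint x^2 + y^2 <= 1.
from scipy.optimize import minimize

cons = {'type': 'ineq', 'fun': lambda v: 1 - v[0]**2 - v[1]**2}  # >= 0 form
res = minimize(lambda v: -v[0] * v[1], x0=[0.5, 0.5],
               constraints=[cons], method='SLSQP')
print(res.x, -res.fun)  # approx (0.707, 0.707) and 0.5
```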

m Inequality Constraints where m ≥ 2
(Ch. 14.9)

Consider the problem:

max_{x,y} f(x, y) s.t. g^1(x, y) ≤ c_1, ..., g^m(x, y) ≤ c_m.

Suppose that (x*, y*) maximizes f subject to these constraints and that f and each g^j are continuously differentiable.

We say that (x, y) is a feasible point if g^1(x, y) ≤ c_1, . . . , g^m(x, y) ≤ c_m.

Let (x, y) be a feasible point. Define

J(x, y) = {k ∈ {1, . . . , m} | g^k(x, y) = c_k}

as the set of constraints that are binding at (x, y).
Constraint Qualification

There are three cases to consider, since we handle only two variables (x, y), which cannot (except degenerately) satisfy three distinct equality constraints simultaneously.

For any feasible point (x, y), the constraint qualification requires us to check the following three cases:

Case 1: |J(x, y)| = 0, i.e., no constraints are binding. In this case, the constraint qualification is automatically satisfied.
Case 2: |J(x, y)| = 1, i.e., only one constraint is binding. Assume that g^j(x, y) = c_j. Then, the constraint qualification requires

∇g^j(x, y) ≠ (0, 0).

Case 3: |J(x, y)| = 2, i.e., two constraints are binding. Assume that g^j(x, y) = c_j and g^k(x, y) = c_k. Then, the constraint qualification requires that ∇g^j(x, y) and ∇g^k(x, y) not be parallel to each other.

We set up the Lagrangian:

L(x, y, λ_1, ..., λ_m) = f(x, y) − λ_1 (g^1(x, y) − c_1) − · · · − λ_m (g^m(x, y) − c_m).

If the constraint qualification is satisfied at every feasible point, there exists a unique λ = (λ_1, . . . , λ_m) such that

∂L(x*, y*, λ_1, . . . , λ_m)/∂x = 0
∂L(x*, y*, λ_1, . . . , λ_m)/∂y = 0
λ_j ≥ 0, g^j(x*, y*) ≤ c_j and λ_j [g^j(x*, y*) − c_j] = 0 for all j = 1, ..., m.
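Numerically, the m-constraint case works the same way; a SciPy sketch on an assumed toy problem with m = 2 (max x + y s.t. x^2 + y^2 ≤ 1 and x ≤ 1/2, where both constraints bind at the optimum):

```python
# SciPy sketch with m = 2 inequality constraints on an assumed toy
# problem: max x + y s.t. x^2 + y^2 <= 1 and x <= 1/2.
from scipy.optimize import minimize

cons = [{'type': 'ineq', 'fun': lambda v: 1 - v[0]**2 - v[1]**2},
        {'type': 'ineq', 'fun': lambda v: 0.5 - v[0]}]
res = minimize(lambda v: -(v[0] + v[1]), x0=[0.0, 0.0],
               constraints=cons, method='SLSQP')
print(res.x)  # both constraints bind: approx (0.5, sqrt(3)/2)
```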

Minimization with Inequality Constraints

The inequality constraints in a "minimization" problem are usually presented as

g^1(x, y) ≥ c_1, . . . , g^m(x, y) ≥ c_m instead of g^1(x, y) ≤ c_1, . . . , g^m(x, y) ≤ c_m.

An increase in c_k tightens the constraint g^k(x, y) ≥ c_k (by making it more difficult to satisfy). In general, for the minimization problem

min_{x,y} f(x, y) s.t. g^1(x, y) ≥ c_1, . . . , g^m(x, y) ≥ c_m,

we form

L = f(x, y) − λ_1(g^1(x, y) − c_1) − · · · − λ_m(g^m(x, y) − c_m).

Sufficiency of Kuhn-Tucker Conditions: m Constraints (Ch. 14.9)

In an optimization problem with inequality constraints, a feasible point (x*, y*) satisfying the Kuhn-Tucker conditions is sufficient for a global optimum:

• If the Hessian matrix associated with L(x, y, λ_1, . . . , λ_m), where λ_1, . . . , λ_m are obtained from the Kuhn-Tucker conditions, is negative semidefinite (NSD) for "all (x, y)," then (x*, y*) is a global maximum point.

• If the Hessian matrix associated with L(x, y, λ_1, . . . , λ_m), where λ_1, . . . , λ_m are obtained from the Kuhn-Tucker conditions, is positive semidefinite (PSD) for "all (x, y)," then (x*, y*) is a global minimum point.