
26:711:564 OPTIMIZATION MODELS IN FINANCE

Professor Andrzej Ruszczynski

Optimality Conditions in Nonlinear Optimization


Let us consider the nonlinear optimization problem

min  f(x)
s.t. gi(x) ≤ 0,  i = 1, ..., m,                    (1)
     hi(x) = 0,  i = 1, ..., p.

We assume that the functions f : Rⁿ → R, gi : Rⁿ → R, i = 1, ..., m, and hi : Rⁿ → R,
i = 1, ..., p, are continuously differentiable. We call such a nonlinear optimization problem
smooth. The feasible set of this problem is denoted X.
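As a sketch, here is one concrete instance of problem (1) with a pointwise membership test for the feasible set X; the particular functions f, g1 and h1 below are illustrative choices, not taken from the notes:

```python
# A hedged sketch of one instance of problem (1):
#   min f(x) = x1^2 + x2^2
#   s.t. g1(x) = 1 - x1 - x2 <= 0,   h1(x) = x1 - x2 = 0.
# (These particular functions are assumptions for illustration.)

def f(x):
    return x[0] ** 2 + x[1] ** 2

def in_X(x, tol=1e-9):
    """Membership in the feasible set X of this instance of problem (1)."""
    g1 = 1.0 - x[0] - x[1]
    h1 = x[0] - x[1]
    return g1 <= tol and abs(h1) <= tol

print(in_X((0.5, 0.5)))   # True:  g1 = 0 and h1 = 0
print(in_X((1.0, 0.0)))   # False: h1 = 1 != 0
print(f((0.5, 0.5)))      # 0.5
```

The tolerance parameter guards against floating-point round-off when testing the equality constraint.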
Let I⁰(x) be the set of active inequality constraints at x:

I⁰(x) = {i : gi(x) = 0}.
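The active set can be computed directly from this definition; a minimal sketch, where the two constraint functions are illustrative assumptions:

```python
# Hedged sketch: computing the active set I0(x) = {i : gi(x) = 0}
# for an illustrative pair of inequality constraints:
#   g1(x) = x1 + x2 - 1  and  g2(x) = -x1.

def active_set(x, gs, tol=1e-9):
    """Indices i with gi(x) = 0, up to a numerical tolerance."""
    return {i for i, g in enumerate(gs, start=1) if abs(g(x)) <= tol}

gs = [lambda x: x[0] + x[1] - 1.0, lambda x: -x[0]]

print(active_set((0.0, 1.0), gs))   # {1, 2}: both constraints active
print(active_set((0.2, 0.3), gs))   # set(): neither is active
```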

We say that problem (1) satisfies at x the Mangasarian-Fromovitz condition if

- the gradients ∇hi(x), i = 1, ..., p, are linearly independent; and
- there exists a direction d such that

  ⟨∇gi(x), d⟩ < 0,  i ∈ I⁰(x),
  ⟨∇hi(x), d⟩ = 0,  i = 1, ..., p.
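Given a candidate direction d, both inner-product requirements are easy to test numerically. A hedged sketch, where the constraints, the point, and the direction are all assumptions chosen for illustration:

```python
# Hedged sketch: testing whether a given direction d certifies the
# Mangasarian-Fromovitz condition at a point x for the illustrative problem
#   g1(x) = x1 + x2 - 1 <= 0,   h1(x) = x1 - x2 = 0,
# evaluated at x = (0.5, 0.5), where g1 is active.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mf_direction_ok(d, active_grad_g, grad_h, tol=1e-9):
    """<grad gi(x), d> < 0 for active i, and <grad hi(x), d> = 0 for all i."""
    strict = all(dot(g, d) < -tol for g in active_grad_g)
    tangent = all(abs(dot(h, d)) <= tol for h in grad_h)
    return strict and tangent

grad_g_active = [(1.0, 1.0)]    # gradient of g1 at x
grad_h = [(1.0, -1.0)]          # gradient of h1 (a single nonzero vector,
                                # so linear independence holds trivially)

print(mf_direction_ok((-1.0, -1.0), grad_g_active, grad_h))  # True
print(mf_direction_ok((1.0, 0.0), grad_g_active, grad_h))    # False
```

Note this only checks one candidate d; establishing that no such direction exists would require solving a small linear feasibility problem.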

Problem (1) is said to satisfy Slater's condition if the functions gi, i = 1, ..., m, are convex, the
functions hi, i = 1, ..., p, are affine, and there exists a feasible point xS such that gi(xS) < 0,
i = 1, ..., m.
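Verifying Slater's condition at a candidate point amounts to checking strict inequality feasibility together with the affine equalities. A minimal sketch for one convex instance of problem (1); the constraint functions and candidate points are assumptions for illustration:

```python
# Hedged sketch: checking a candidate Slater point for the convex instance
#   g1(x) = x1^2 + x2^2 - 4 <= 0   (convex),
#   h1(x) = x1 - x2 = 0            (affine).

def is_slater_point(x, gs, hs, tol=1e-9):
    """True if x is feasible with every inequality strictly satisfied."""
    strict = all(g(x) < 0 for g in gs)
    feasible = all(abs(h(x)) <= tol for h in hs)
    return strict and feasible

gs = [lambda x: x[0] ** 2 + x[1] ** 2 - 4.0]
hs = [lambda x: x[0] - x[1]]

print(is_slater_point((1.0, 1.0), gs, hs))   # True:  g1 = -2 < 0, h1 = 0
print(is_slater_point((2.0, 2.0), gs, hs))   # False: g1 = 4 > 0
```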
In what follows we shall say that problem (1) satisfies the constraint qualification condition
if either the Mangasarian-Fromovitz condition or Slater's condition holds.

Theorem
Let x be a local minimum of problem (1), where the functions f, gi and hi are continuously
differentiable. Assume that at x the constraint qualification condition is satisfied. Then there
exist multipliers λi ≥ 0, i = 1, ..., m, and µi ∈ R, i = 1, ..., p, such that

∇f(x) + Σ_{i=1}^{m} λi ∇gi(x) + Σ_{i=1}^{p} µi ∇hi(x) = 0,        (2)

and

λi gi(x) = 0,  i = 1, ..., m.        (3)

Conversely, if the functions f and gi are convex, and the functions hi are affine, then every point
x ∈ X satisfying these conditions for some λ ≥ 0 and µ ∈ Rᵖ is an optimal solution of (1).
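For a convex problem, the converse direction of the theorem can be exercised numerically: any point satisfying (2) and (3) with λ ≥ 0 is optimal. A hedged sketch for one convex instance with a single inequality constraint; the functions, the candidate point, and the multiplier value are assumptions chosen for illustration:

```python
# Hedged sketch: verifying the KKT conditions (2)-(3) at a candidate point
# for the convex example  min x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0.
# At x* = (0.5, 0.5) with multiplier lambda = 1, both conditions hold,
# so by the theorem's converse x* is optimal.

def grad_f(x):           # gradient of f(x) = x1^2 + x2^2
    return (2 * x[0], 2 * x[1])

def g(x):                # inequality constraint g(x) = 1 - x1 - x2
    return 1.0 - x[0] - x[1]

def grad_g(x):
    return (-1.0, -1.0)

def kkt_holds(x, lam, tol=1e-9):
    """Check primal/dual feasibility, stationarity (2), and slackness (3)."""
    if lam < 0 or g(x) > tol:          # dual and primal feasibility
        return False
    gf, gg = grad_f(x), grad_g(x)
    stationary = all(abs(gf[j] + lam * gg[j]) <= tol for j in range(2))
    complementary = abs(lam * g(x)) <= tol
    return stationary and complementary

print(kkt_holds((0.5, 0.5), 1.0))   # True:  (0.5, 0.5) is optimal
print(kkt_holds((1.0, 0.0), 0.0))   # False: stationarity (2) fails
```

At (0.5, 0.5), ∇f = (1, 1) and λ∇g = (-1, -1) cancel exactly, and g = 0 makes the complementarity product vanish, which is precisely conditions (2) and (3).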
