
4 Handling Constraints

Engineering design optimization problems are very rarely unconstrained. The constraints in these problems are most often nonlinear, and it is therefore important that we learn about methods for the solution of nonlinearly constrained optimization problems. As we will see, such problems can be converted into a sequence of unconstrained problems, and we can then use the methods of solution that we are already familiar with. Recall the statement of a general optimization problem,

\[
\begin{aligned}
\text{minimize} \quad & f(x) \\
\text{w.r.t.} \quad & x \in \mathbb{R}^n \\
\text{subject to} \quad & c_j(x) = 0, \quad j = 1, \ldots, m \\
& c_k(x) \ge 0, \quad k = 1, \ldots, m
\end{aligned}
\]

Example 4.1: Graphical Solution of a Constrained Optimization Problem

\[
\begin{aligned}
\text{minimize} \quad & f(x) = 4x_1^2 - x_1 - x_2 - 2.5 \\
\text{w.r.t.} \quad & x_1, x_2 \\
\text{subject to} \quad & c_1(x) = x_2^2 - 1.5x_1^2 + 2x_1 - 1 \ge 0, \\
& c_2(x) = x_2^2 + 2x_1^2 - 2x_1 - 4.25 \le 0
\end{aligned}
\]
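Although this example is meant to be solved graphically, it can also be cross-checked numerically. The following is a minimal sketch using scipy.optimize.minimize with the SLSQP method; note that the inequality directions ($c_1 \ge 0$, $c_2 \le 0$) follow the reconstruction above, and the starting point is an arbitrary choice.

```python
# Numerical cross-check of Example 4.1 (assumes c1 >= 0 and c2 <= 0).
import numpy as np
from scipy.optimize import minimize

def f(x):
    return 4 * x[0]**2 - x[0] - x[1] - 2.5

constraints = [
    # SLSQP expects inequality constraints in the form g(x) >= 0.
    {"type": "ineq", "fun": lambda x: x[1]**2 - 1.5 * x[0]**2 + 2 * x[0] - 1},
    {"type": "ineq", "fun": lambda x: -(x[1]**2 + 2 * x[0]**2 - 2 * x[0] - 4.25)},
]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=constraints)
print(res.x, res.fun)
```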

4.1 Optimality Conditions for Constrained Problems

The optimality conditions for nonlinearly constrained problems are important because they form the basis for algorithms for solving such problems.

4.1.1 Nonlinear Equality Constraints

Suppose we have the following optimization problem with equality constraints,

\[
\begin{aligned}
\text{minimize} \quad & f(x) \\
\text{w.r.t.} \quad & x \in \mathbb{R}^n \\
\text{subject to} \quad & c_j(x) = 0, \quad j = 1, \ldots, m
\end{aligned}
\]

To solve this problem, we could solve for m components of x by using the equality constraints to express them in terms of the other components. The result would be an unconstrained problem with n − m variables (see the sketch below). However, this procedure is only feasible for simple explicit functions. Lagrange devised a method to solve this problem. . .
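To make the elimination idea concrete before turning to Lagrange's method, here is a minimal sketch on a toy problem chosen for illustration (not one from the notes): minimize $f = x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$, eliminating $x_2$ symbolically with SymPy.

```python
# Eliminate x2 via the constraint x1 + x2 - 1 = 0, then minimize the
# resulting unconstrained function of the remaining n - m = 1 variable.
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
f = x1**2 + x2**2
f_reduced = f.subs(x2, 1 - x1)                   # substitute the constraint
stationary = sp.solve(sp.diff(f_reduced, x1), x1)
print(stationary)  # [1/2], i.e. x* = (1/2, 1/2)
```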

At a stationary point, the total differential of the objective function has to be equal to zero, i.e.,

\[
df = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 + \cdots + \frac{\partial f}{\partial x_n} dx_n = \nabla f^T dx = 0. \tag{4.1}
\]

For a feasible point, the total differential of the constraints $(c_1, \ldots, c_m)$ must also be zero, and so

\[
dc_j = \frac{\partial c_j}{\partial x_1} dx_1 + \cdots + \frac{\partial c_j}{\partial x_n} dx_n = \nabla c_j^T dx = 0. \tag{4.2}
\]

Lagrange suggested that one could multiply each constraint variation by a scalar $\lambda_j$ and subtract it from the objective function,

\[
df - \sum_{j=1}^{m} \lambda_j \, dc_j = 0 \quad \Rightarrow \quad \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} - \sum_{j=1}^{m} \lambda_j \frac{\partial c_j}{\partial x_i} \right) dx_i = 0 \tag{4.3}
\]

Note that the components of the variation vector dx are independent and arbitrary, since we have already accounted for the constraints. Thus, for this equation to be satisfied, we need a vector $\lambda$ such that the expression inside the parentheses vanishes, i.e.,

\[
\frac{\partial f}{\partial x_i} - \sum_{j=1}^{m} \lambda_j \frac{\partial c_j}{\partial x_i} = 0, \quad (i = 1, 2, \ldots, n) \tag{4.4}
\]

We define the Lagrangian function as

\[
\mathcal{L}(x, \lambda) = f(x) - \sum_{j=1}^{m} \lambda_j c_j(x), \tag{4.5}
\]

or, in matrix form,

\[
\mathcal{L}(x, \lambda) = f(x) - \lambda^T c(x).
\]

If x is a stationary point of this function, then applying the necessary conditions for unconstrained problems we obtain

\[
\begin{aligned}
\frac{\partial \mathcal{L}}{\partial x_i} &= \frac{\partial f}{\partial x_i} - \sum_{j=1}^{m} \lambda_j \frac{\partial c_j}{\partial x_i} = 0, \quad (i = 1, \ldots, n) \\
\frac{\partial \mathcal{L}}{\partial \lambda_j} &= -c_j = 0, \quad (j = 1, \ldots, m).
\end{aligned}
\]

These first-order conditions are known as the Karush-Kuhn-Tucker (KKT) conditions and are necessary conditions for the optimum of a constrained problem. Note that the Lagrangian function is defined such that finding its stationary points with respect to both the design variables and the Lagrange multipliers recovers the constraints of the original problem. We have transformed a constrained optimization problem of n variables and m constraints into an unconstrained problem of n + m variables.
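As an illustration of this transformation, the stationarity conditions can be solved directly as a square nonlinear system in the $n + m$ unknowns $(x, \lambda)$. The sketch below does this with scipy.optimize.fsolve for the same toy problem used earlier (minimize $x_1^2 + x_2^2$ subject to $x_1 + x_2 - 1 = 0$); the expected solution is $x^* = (0.5, 0.5)$ with $\lambda_1 = 1$.

```python
# Solve grad_x L = 0 and c(x) = 0 as one nonlinear system in (x1, x2, lambda1).
import numpy as np
from scipy.optimize import fsolve

def kkt_residual(z):
    x1, x2, lam = z
    return [
        2 * x1 - lam,   # dL/dx1 = df/dx1 - lam * dc1/dx1
        2 * x2 - lam,   # dL/dx2 = df/dx2 - lam * dc1/dx2
        x1 + x2 - 1,    # the constraint c1(x) = 0 (recovered from dL/dlam = 0)
    ]

print(fsolve(kkt_residual, np.ones(3)))  # expect [0.5, 0.5, 1.0]
```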

Example 4.2: Problem with Single Equality Constraint

Consider the following equality constrained problem:

\[
\begin{aligned}
\text{minimize} \quad & f(x) = x_1 + x_2 \\
\text{subject to} \quad & c_1(x) = x_1^2 + x_2^2 - 2 = 0
\end{aligned}
\]

By inspection we can see that the feasible region for this problem is a circle of radius $\sqrt{2}$. The solution $x^*$ is obviously $(-1, -1)^T$. From any other point on the circle it is easy to find a way to move in the feasible region (the boundary of the circle) while decreasing $f$.

[Figure: contours of $f$ and the circular feasible region $c_1(x) = 0$ in the $(x_1, x_2)$ plane, with both axes from $-2$ to $2$.]

In this example, the Lagrangian is

\[
\mathcal{L} = x_1 + x_2 - \lambda_1 (x_1^2 + x_2^2 - 2) \tag{4.6}
\]

and the optimality conditions are

\[
\nabla_x \mathcal{L} = \begin{bmatrix} 1 - 2\lambda_1 x_1 \\ 1 - 2\lambda_1 x_2 \end{bmatrix} = 0 \quad \Rightarrow \quad x_1 = x_2 = \frac{1}{2\lambda_1} \tag{4.7}
\]
\[
\frac{\partial \mathcal{L}}{\partial \lambda_1} = -\left(x_1^2 + x_2^2 - 2\right) = 0 \quad \Rightarrow \quad \lambda_1 = \pm\frac{1}{2} \tag{4.8}
\]

In this case $\lambda_1 = -\frac{1}{2}$ corresponds to the minimum, and the positive value of the Lagrange multiplier corresponds to the maximum. We can distinguish these two situations by checking for positive definiteness of the Hessian of the Lagrangian.
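As a sanity check, the system (4.7)-(4.8) can be solved symbolically and the Hessian of the Lagrangian with respect to $x$ inspected at each stationary point. A minimal sketch, assuming SymPy is available:

```python
# Symbolic check of Example 4.2: stationary points of the Lagrangian and
# positive definiteness of its Hessian with respect to the design variables.
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lambda1", real=True)
L = (x1 + x2) - lam * (x1**2 + x2**2 - 2)

grad = [sp.diff(L, v) for v in (x1, x2, lam)]
solutions = sp.solve(grad, [x1, x2, lam], dict=True)

H = sp.hessian(L, (x1, x2))  # Hessian w.r.t. x only
for s in solutions:
    print(s, H.subs(s).is_positive_definite)
# lam = -1/2 gives x = (-1, -1) with a positive definite Hessian (the minimum);
# lam = +1/2 gives x = (1, 1), the maximum.
```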

Also note that at the solution the constraint normal $\nabla c_1(x^*)$ is parallel to $\nabla f(x^*)$, i.e., there is a scalar $\lambda_1^*$ such that

\[
\nabla f(x^*) = \lambda_1^* \nabla c_1(x^*). \tag{4.9}
\]

We can derive this expression by examining the first-order Taylor series approximations to the objective and constraint functions. To retain feasibility with respect to $c_1(x) = 0$ we require that

\[
c_1(x + d) = 0, \qquad c_1(x) = 0. \tag{4.10}
\]

Linearizing this we get,

\[
c_1(x + d) = \underbrace{c_1(x)}_{=0} + \nabla c_1^T(x) \, d + O(d^T d) \tag{4.11}
\]
\[
\Rightarrow \quad \nabla c_1^T(x) \, d = 0. \tag{4.12}
\]

We also know that a direction of improvement must result in a decrease in $f$, i.e.,

\[
f(x + d) - f(x) < 0. \tag{4.13}
\]

Thus to first order we require that

\[
f(x) + \nabla f^T(x) \, d < f(x) \tag{4.14}
\]
\[
\Rightarrow \quad \nabla f^T(x) \, d < 0. \tag{4.15}
\]

A necessary condition for optimality is that there is no direction satisfying both of these conditions. The only way that such a direction cannot exist is if $\nabla f(x)$ and $\nabla c_1(x)$ are parallel, that is, if $\nabla f(x) = \lambda_1 \nabla c_1(x)$ holds. By defining the Lagrangian function

\[
\mathcal{L}(x, \lambda_1) = f(x) - \lambda_1 c_1(x), \tag{4.16}
\]

and noting that $\nabla_x \mathcal{L}(x, \lambda_1) = \nabla f(x) - \lambda_1 \nabla c_1(x)$, we can state the necessary optimality condition as follows: at the solution $x^*$ there is a scalar $\lambda_1^*$ such that $\nabla_x \mathcal{L}(x^*, \lambda_1^*) = 0$.

Thus we can search for solutions of the equality-constrained problem by searching for a stationary point of the Lagrangian function. The scalar $\lambda_1$ is called the Lagrange multiplier for the constraint $c_1(x) = 0$.
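A quick numerical confirmation of the parallel-gradient condition (4.9) at the solution of Example 4.2, as a minimal NumPy sketch:

```python
# Verify grad f = lambda1 * grad c1 at x* = (-1, -1) with lambda1 = -1/2.
import numpy as np

x_star, lam1 = np.array([-1.0, -1.0]), -0.5
grad_f = np.array([1.0, 1.0])                       # f(x) = x1 + x2
grad_c1 = np.array([2 * x_star[0], 2 * x_star[1]])  # c1(x) = x1^2 + x2^2 - 2
print(np.allclose(grad_f, lam1 * grad_c1))          # True
```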
