Quadratic Programming

Quadratic programming involves minimizing a quadratic objective function subject to linear constraints. It has many applications, including image and signal processing, financial portfolio optimization, least-squares regression, and chemical process control. The formulation minimizes a quadratic function of the form (1/2)x'Qx + q'x subject to linear equality constraints Ax = a and linear inequality constraints Bx ≤ b. If the objective function is convex, any local minimum is also a global minimum. Common solution methods include conjugate gradients for equality-constrained problems and interior point and active set methods for problems with inequality constraints. The Karush-Kuhn-Tucker conditions provide a way to test solutions for optimality in quadratic programs.

Quadratic Programming

Introduction

 The objective function can contain bilinear or up to second-order polynomial terms, and the constraints are linear and can be both equalities and inequalities
 QP is widely used in image and signal processing, to optimize financial portfolios, to perform least-squares regression, to control scheduling in chemical plants, and as a subproblem in sequential quadratic programming
Formulation
 A general quadratic programming formulation contains a quadratic
objective function and linear equality and inequality constraints

 The objective function is arranged such that the vector q contains all of the
(singly-differentiated) linear terms and Q contains all of the (twice-
differentiated) quadratic terms.
 Put more simply, Q is the Hessian matrix of the objective function and q is
its gradient.
 The matrix equation Ax=a contains all of the linear equality constraints,
and Bx <= b are the linear inequality constraints
 When there are only inequality constraints (Bx ≤ b), the Lagrangean is formed by adding the multiplier-weighted constraint terms to the objective, as sketched below
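A sketch of the standard textbook form of this problem and of the inequality-only Lagrangean, written in the notation above:

\[
\min_{x}\; \tfrac{1}{2}\,x^{\top}Q\,x + q^{\top}x
\quad\text{subject to}\quad Ax = a,\;\; Bx \le b
\]

so that the gradient and Hessian of the objective are \(\nabla f(x) = Qx + q\) and \(\nabla^{2} f(x) = Q\). With only the inequality constraints and multipliers \(\mu \ge 0\), the Lagrangean reads

\[
L(x,\mu) = \tfrac{1}{2}\,x^{\top}Q\,x + q^{\top}x + \mu^{\top}(Bx - b).
\]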
Global solutions

 If the objective function is convex, then any local minimum found is also a global minimum; if it is strictly convex, that minimum is the sole global minimum.
 To analyze the function’s convexity, one can compute its Hessian matrix and verify that all eigenvalues are positive, or, equivalently, verify that the matrix Q is positive definite.
 This is a sufficient condition: it is not required for a local minimum to be the unique global minimum, but it guarantees this property whenever it holds.
Example
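A minimal numerical sketch of such an example, with Q and q chosen purely for illustration: check convexity through the eigenvalues of Q, then read off the unconstrained minimizer from the zero-gradient condition.

import numpy as np

# Illustrative data: f(x) = 1/2 x'Qx + q'x with a symmetric Q
Q = np.array([[2.0, 0.0],
              [0.0, 4.0]])
q = np.array([-2.0, -8.0])

# Convexity check: all eigenvalues of Q positive -> strictly convex
print("eigenvalues of Q:", np.linalg.eigvalsh(Q))   # [2. 4.], positive definite

# Unconstrained minimizer from the zero-gradient condition Qx + q = 0
x_star = np.linalg.solve(Q, -q)
print("minimizer x*:", x_star)                       # [1. 2.]
print("gradient at x*:", Q @ x_star + q)             # [0. 0.]

Because Q is positive definite here, this x* is the unique global minimum.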
KKT conditions

 Solutions can be tested for optimality using Karush-Kuhn-Tucker conditions


Solution strategies
 An unconstrained quadratic programming problem is the most straightforward to solve: simply set the derivative (gradient) of the objective function equal to zero and solve, i.e. solve Qx + q = 0, which gives x∗ = −Q⁻¹q whenever Q is invertible
 The typical solution technique when the objective function is strictly
convex and there are only equality constraints is the conjugate gradient
method
 If there are inequality constraints (Bx<= b), then the interior point and
active set methods are the preferred solution methods.
 When there is a range on the allowable values of x (bound constraints), trust-region methods are most frequently used; a small solver sketch follows below
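As a rough sketch of handing an inequality-constrained QP to an off-the-shelf constrained solver (SciPy's trust-region constrained method is used here as one convenient choice; the problem data are illustrative, not taken from the slides):

import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Illustrative problem: minimize 1/2 x'Qx + q'x  subject to  Bx <= b
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])
q = np.array([-4.0, -4.0])
B = np.array([[1.0, 1.0]])      # single constraint: x1 + x2 <= 1
b = np.array([1.0])

def objective(x):
    return 0.5 * x @ Q @ x + q @ x

def gradient(x):
    return Q @ x + q

def hessian(x):
    return Q

# Bx <= b written as -inf <= Bx <= b
constraint = LinearConstraint(B, -np.inf, b)

result = minimize(objective, np.zeros(2), jac=gradient, hess=hessian,
                  constraints=[constraint], method='trust-constr')
print("x* =", result.x)
print("Bx* =", B @ result.x)    # the constraint is active at the optimum here

Dedicated QP solvers exploit the quadratic structure directly, but a general constrained optimizer like this is often enough for small problems.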
Quadratic Function and Eigenvectors

 the eigenvectors of the symmetric matrix Q tell us about the gradient directions of the quadratic function
 first compute the eigenvectors and eigenvalues of the matrix
 then draw the contour map of the function and mark the directions of the two eigenvectors
 the eigenvector with the largest eigenvalue indicates the steepest gradient direction
 the eigenvector with the smallest eigenvalue indicates the most gradual gradient direction
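A small numerical sketch of this, with an illustrative symmetric matrix: compute the eigen-decomposition and read off the steep and gradual directions.

import numpy as np

# Illustrative symmetric matrix for the quadratic f(x) = 1/2 x'Qx
Q = np.array([[5.0, 2.0],
              [2.0, 2.0]])

# eigh returns eigenvalues in ascending order with orthonormal eigenvectors
eigenvalues, eigenvectors = np.linalg.eigh(Q)

print("smallest eigenvalue:", eigenvalues[0])
print("most gradual direction:", eigenvectors[:, 0])    # f grows slowest along this eigenvector
print("largest eigenvalue:", eigenvalues[-1])
print("steepest direction:", eigenvectors[:, -1])       # f grows fastest along this eigenvector

On a contour plot of f, the contours are stretched along the most gradual direction and squeezed along the steepest one.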
Karush-Kuhn-Tucker
 Find the solution that minimizes f(x), as long as all equalities hi(x)=0 and
all inequalities gi(x)≤0 hold
 Put the cost function as well as the constraints into a single minimization problem, multiplying each equality constraint by a factor λi and each inequality constraint by a factor μi (the KKT multipliers)
 We would have m equalities and n inequalities. Hence the expression for the
optimization problem becomes:
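In its standard form (sketched here in the notation of the surrounding text), the combined problem is the minimization over x of the Lagrangian

\[
L(x,\lambda,\mu) \;=\; f(x) \;+\; \sum_{i=1}^{m}\lambda_i\,h_i(x) \;+\; \sum_{j=1}^{n}\mu_j\,g_j(x)
\]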

 where L(x,λ,μ) is the Lagrangian and depends also on λ and μ, which are vectors
of the multipliers
 we find the roots of the gradient of the Lagrangian with respect to x to find the extremum of the function.
 the constraints in the function will make x depend on λ and μ

 we have a number of unknowns equal to the number of elements in x (say k) plus the number of multipliers (m+n)
 but we only have k equations coming from the gradient with respect to x.
 we can differentiate the Lagrangian with respect to each Lagrange multiplier λi to get m more equations, which simply recover the equality constraints hi(x)=0
 the remaining question is how to come up with n more equations coming from the inequality constraints
 If the extremum of the original function lies in the region gi(x∗)<0, then this constraint will never play any role in changing the extremum compared with the problem without the constraint.
 Therefore, its coefficient μi can be set to
zero.
 If, on the other hand, the new solution is at the border of the constraint, then gi(x∗)=0 (a graphical representation of the constraint boundary helps to understand this concept).
 In both situations, the equation μi·gi(x∗) = 0 (complementary slackness) is necessary for the solution to our new problem.


 Therefore, we get n equations from the inequality constraints
 The coefficients λi can have any value. However, the coefficients μi are limited to nonnegative values.
 imagine x∗ is in the region where gi(x)=0, so that μi can be different from zero
 the constraint terms μi·gi(x) are always zero in the set of possible solutions, so they do not change the value of the objective there

 At such a point x∗, the gradients of f(x) and of gi(x) with respect to x have opposite directions, i.e. ∇f(x∗) = −μi·∇gi(x∗), which is exactly why μi must be nonnegative.
The KKT conditions
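In their standard textbook form, written in the notation above, the KKT conditions for a candidate minimizer x∗ with multipliers λ and μ are:

\[
\begin{aligned}
&\nabla f(x^{*}) + \sum_{i}\lambda_i\,\nabla h_i(x^{*}) + \sum_{j}\mu_j\,\nabla g_j(x^{*}) = 0 &&\text{(stationarity)}\\
&h_i(x^{*}) = 0,\qquad g_j(x^{*}) \le 0 &&\text{(primal feasibility)}\\
&\mu_j \ge 0 &&\text{(dual feasibility)}\\
&\mu_j\, g_j(x^{*}) = 0 &&\text{(complementary slackness)}
\end{aligned}
\]

For the quadratic program above, the stationarity condition specializes to Qx∗ + q + Aᵀλ + Bᵀμ = 0.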
