YALMIP : A toolbox for modeling and
optimization in MATLAB
Johan Löfberg
Automatic Control Laboratory, ETHZ
CH-8092 Zürich, Switzerland.
[email protected]

Abstract— The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory.

I. INTRODUCTION

Two of the most important mathematical tools introduced in control and systems theory in the last decade are probably semidefinite programming (SDP) and linear matrix inequalities (LMI). Semidefinite programming unifies a large number of control problems, ranging from the more than 100 year old classical Lyapunov theory for linear systems, modern control theory from the 60's based on the algebraic Riccati equation, and more recent developments such as H∞ control in the 80's. More importantly, LMIs and SDP have led to many new results on stability analysis and synthesis for uncertain systems, robust model predictive control, control of piecewise affine systems and robust system identification, just to mention a few applications.

In the same sense that we earlier agreed that a control problem was solved if the problem boiled down to a Riccati equation, as in linear quadratic control, we have now come to a point where a problem with a solution implicitly described by an SDP can be considered solved, even though there is no analytic closed-form expression of the solution. It was recognized in the 90's that SDPs are convex optimization problems that can be solved efficiently in polynomial time [13]. Hence, for a problem stated using an SDP, not only can we solve the problem, but we can solve it relatively efficiently.

The large number of applications of SDP has led to intense research and development of software for solving the optimization problems. There are today around 10 public solvers available, most of them free and easily accessible on the Internet. However, these solvers typically take the problem description in a very compact format, making immediate use of the solvers time-consuming and error prone. To overcome this, modeling languages and interfaces are needed.

This paper introduces the free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing external solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. Rapid prototyping of an algorithm based on SDP can be done in a matter of minutes using standard MATLAB commands. In fact, learning 3 YALMIP specific commands will be enough for most users to model and solve their optimization problem.

YALMIP was initially intended for SDP and LMIs (hence the now obsolete name Yet Another LMI Parser), but has evolved substantially over the years. The most recent release, YALMIP 3, supports linear programming (LP), quadratic programming (QP), second order cone programming (SOCP), semidefinite programming, determinant maximization, mixed integer programming, posynomial geometric programming, semidefinite programs with bilinear matrix inequalities (BMI), and multiparametric linear and quadratic programming. To solve these problems, around 20 solvers are interfaced. This includes both freeware solvers such as SeDuMi [16] and SDPT3 [17], and commercial solvers such as the PENNON solvers [7], LMILAB [4] and CPLEX [1]. Due to a flexible solver interface and internal format, adding new solvers, and even new problem classes, can often be done with modest effort.

YALMIP automatically detects what kind of problem the user has defined, and selects a suitable solver based on this analysis. If no suitable solver is available, YALMIP tries to convert the problem to be able to solve it. As an example, if the user defines second order cone constraints, but no second order cone programming solver is available, YALMIP converts the constraints to LMIs and solves the problem using any installed SDP solver.
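The conversion mentioned here is not spelled out in the paper; one standard way to rewrite a second order cone constraint as an LMI, included purely for illustration, is

\[
\|Ax+b\|_2 \le c^T x + d
\quad\Longleftrightarrow\quad
\begin{bmatrix} (c^T x + d)\, I & Ax + b\\ (Ax+b)^T & c^T x + d \end{bmatrix} \succeq 0 ,
\]

so an SOCP can always be passed to an SDP solver, at the price of a larger semidefinite constraint.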
One of the most important extensions in YALMIP 3 compared to earlier versions is the possibility to work with nonlinear expressions. This has enabled YALMIP users to define optimization problems involving BMIs, which then can be solved using the solver PENBMI [6], the first public solver for problems with BMI constraints. These optimization problems are unfortunately extremely hard to solve, at least globally, but since an enormous number of problems in control theory fall into this problem class, it is our hope that YALMIP will inspire researchers to develop efficient BMI solvers and make them publicly available.
Another addition in YALMIP 3 is an internal branch-and-bound framework. This enables YALMIP to solve integer programs for all supported convex optimization classes, i.e. mixed integer linear, quadratic, second order cone and semidefinite programs. The built-in integer solver should not be considered a competitor to any dedicated integer solver such as CPLEX [1]. However, if the user has no integer solver installed, he or she will at least be able to solve some small integer problems using YALMIP. Moreover, there are currently no other free public solvers available for solving mixed integer second order cone and semidefinite programs.
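As a minimal sketch of how such a problem is entered (using the integer declaration introduced in Section III-B; the data A, b and c are placeholders), a small mixed integer linear program could look as follows.

>> x = sdpvar(3,1);
>> F = set(A*x < b) + set(integer(x));  % integrality handled by the built-in branch-and-bound
>> solvesdp(F,c'*x);                    % categorized and solved as a mixed integer LP
>> double(x)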
The latest release of YALMIP has been extended to include a set of mid-level commands to facilitate advanced YALMIP programming. These commands have been used to develop scripts for moment relaxation problems [10] and sum-of-squares decompositions [14], two recent approaches, based on SDP and LMIs, for solving global polynomial optimization problems. There are dedicated, more efficient, packages available for solving these problems (GloptiPoly [5] and SOSTOOLS [15]), and the inclusion of these functionalities is mainly intended to give advanced users hints on how the mid-level commands can be used. The sum-of-squares functionality does however have a novel feature in that the sum-of-squares problem can be nonlinearly parameterized. In theory, this means that this function can be used, e.g., to synthesize controllers for nonlinear systems. However, the resulting optimization problem is a semidefinite program with BMIs instead of LMIs.

Although SDPs can be solved relatively efficiently using polynomial time algorithms, large-scale control problems can easily become problematic, even for state-of-the-art semidefinite solvers. To reduce computational complexity, problem-specific solvers are needed in some cases. One problem class where structure can be exploited is KYP problems, a generalization of Lyapunov inequalities. YALMIP comes with a specialized command for defining KYP constraints, and interfaces the dedicated solver KYPD [20].

Other features worth mentioning are the capabilities to work transparently with complex-valued data and constraints, easy extraction of dual variables and automatic reduction of variables in equality constrained problems.

II. PRELIMINARIES AND NOTATION

A symmetric matrix P is denoted positive semidefinite (P ⪰ 0) if z^T Pz ≥ 0 for all z. Positive definite (P ≻ 0) is the strict version, z^T Pz > 0 for all z ≠ 0. A linear matrix inequality (LMI) denotes a constraint of the form F_0 + ∑_{i=1}^n F_i x_i ⪰ 0, where the F_i are fixed symmetric matrices and x ∈ R^n is the decision variable. Constraints of the form F_0 + ∑_{i=1}^n F_i x_i + ∑_{j=1}^n ∑_{i=1}^n F_{ij} x_i x_j ⪰ 0 are denoted BMIs (bilinear matrix inequalities). Constraints involving either LMIs or BMIs are called semidefinite constraints. Optimization problems involving semidefinite constraints are termed semidefinite programs (SDPs).

MATLAB commands and variables will be displayed using typewriter font. Commands will be written on separate lines and start with >>.

III. INTRODUCTION TO YALMIP

This paper does not serve as a manual to YALMIP. Nevertheless, a short introduction to the basic commands is included here to allow new users to get started. It is assumed that the reader is familiar with MATLAB.

A. Defining decision variables

The central component in an optimization problem is the decision variables. Decision variables are represented in YALMIP by sdpvar objects. Using full syntax, a symmetric matrix P ∈ R^{n×n} is defined by the following command.

>> P = sdpvar(n,n,'symmetric','real');

Square matrices are by default symmetric and real, so the same variable can be defined using only the dimension arguments.

>> P = sdpvar(n,n);

A set of standard parameterizations are predefined and can be used to create, e.g., fully parameterized matrices and various types of matrices with complex variables.

>> Y = sdpvar(n,n,'full');
>> X = sdpvar(n,n,'hermitian','complex');

Important to realize is that most standard MATLAB commands and operators can be applied to sdpvar variables. Hence, the following construction is valid.

>> X = [P P(:,1);ones(1,n) sum(sum(P))];

B. Defining constraints

The most commonly used constraints in YALMIP are element-wise, semidefinite and equality constraints. The command to define these is called set (not to be confused with the built-in function set in MATLAB).

The code below generates a list of constraints, gathered in the set object F, constraining a matrix to be positive definite, having all elements positive, and with the sum of all elements being n.

>> P = sdpvar(n,n);
>> F = set(P > 0);
>> F = F + set(P(:) > 0);
>> F = F + set(sum(sum(P)) == n);

Note that the operators > and < are used to describe both semidefinite constraints and standard element-wise constraints (non-strict inequalities, >= and <=, are also supported; the reader is referred to the YALMIP manual for details). A constraint is interpreted in terms of semidefiniteness if both the left-hand side and the right-hand side of the constraint are symmetric, and as an element-wise constraint otherwise. In addition to these standard constraints, YALMIP also supports convenient definition of integrality constraints, second order cone constraints and sum-of-squares constraints. Without going into details, typical notation for these constraints would be
>> F = set(integer(x));
>> F = set(cone(A*x+b,c'*x+d));
>> F = set(sos(1+x+x^7+x^8));
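As an illustration of the cone operator, which models the second order cone constraint ||z||₂ ≤ h when called as cone(z,h), a small norm-minimization problem could be set up as follows. This is only a usage sketch; the data A and b are arbitrary placeholders.

>> A = randn(6,3); b = randn(6,1);
>> x = sdpvar(3,1); t = sdpvar(1,1);
>> F = set(cone(A*x-b,t));   % models ||A*x-b|| <= t
>> solvesdp(F,t);            % minimize the norm bound t
>> double(x)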
C. Solving optimization problems

Once all variables and constraints have been defined, the optimization problem can be solved. Let us for simplicity assume that we have matrices c, A and b and we wish to minimize c^T x subject to the constraints Ax ≤ b and ∑ x_i = 1. The YALMIP code to define and solve this problem is extremely intuitive and is essentially a one-to-one mapping from the mathematical description. The command solvesdp is used for all optimization problems and typically takes two arguments, a set of constraints and the objective function. (The name solvesdp was chosen since YALMIP initially only solved semidefinite programs; for compatibility reasons, the name is kept even though the command is now used also for LP, QP, SOCP etc. Special higher level problems such as moment relaxations, sum-of-squares decompositions and multiparametric programs are invoked using specialized, but syntactically similar, commands.)

>> x = sdpvar(length(c),1);
>> F = set(A*x < b)+set(sum(x)==1);
>> solvesdp(F,c'*x);

YALMIP will automatically categorize this as a linear programming problem and call a suitable solver. The optimal solution can be extracted with the command double(x). A third argument can be used to guide YALMIP in the selection of solver, to set display levels, to change solver specific options etc.

>> ops = sdpsettings('solver','glpk');
>> ops = sdpsettings(ops,'glpk.dual',0);
>> ops = sdpsettings(ops,'verbose',1);
>> solvesdp(F,c'*x,ops);
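After a successful solve, the solution can be inspected further. A usage sketch (double is the extraction command used throughout the paper; checkset is assumed available here for inspecting constraint residuals):

>> xopt = double(x);        % numerical value of the decision variable
>> fopt = double(c'*x);     % objective value at the computed solution
>> checkset(F)              % display constraint residuals as a feasibility check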
IV. CONTROL RELATED OPTIMIZATION USING YALMIP

As stated in the introduction, YALMIP is a general purpose toolbox for modeling and solving optimization problems using MATLAB. The focus in the remainder of this paper will however be on control related problems, and we will illustrate how straightforward it is to model complex optimization problems using YALMIP.

A. Standard SDP problems in control

The perhaps most fundamental problem in control and systems theory is stability analysis using Lyapunov theory. A linear system ẋ = Ax is asymptotically stable if and only if the real parts of all eigenvalues of A are negative, or equivalently, there exists a solution P to the following Lyapunov inequality.

A^T P + PA ≺ 0,  P = P^T ≻ 0

It is easy to realize that this is a linear matrix inequality, and the decision variables are the elements of the matrix P. The YALMIP implementation of this feasibility problem is given below.

>> P = sdpvar(n,n);
>> F = set(P > 0) + set(A'*P+P*A < 0);
>> solvesdp(F)

The problem above can be addressed more efficiently by solving the classical Lyapunov equation, and the benefit of semidefinite programming and YALMIP first becomes apparent when we try to solve more complex problems. Consider the problem of finding a common Lyapunov function for two different systems with state matrices A1 and A2. Furthermore, let us assume that we want to find a diagonal solution P satisfying P ⪰ Q for some given symmetric matrix Q, and moreover, we want to find the minimum trace solution. Stating this as an SDP using YALMIP is straightforward.

>> P = diag(sdpvar(n,1));
>> F = set(P > Q);
>> F = F + set(A1'*P+P*A1 < 0);
>> F = F + set(A2'*P+P*A2 < 0);
>> solvesdp(F,trace(P))

To make things even more complicated, consider the minimum Frobenius norm (Tr PP^T) problem. The changes in the code are minimal.

>> solvesdp(F,trace(P*P'))

YALMIP will analyze the objective function and detect that it is a convex quadratic function. Since no public SDP solver currently supports quadratic objective functions, YALMIP will internally convert the problem by performing suitable epigraph formulations, and solve the problem using any available SDP solver.
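One standard epigraph construction of this kind (shown for illustration only; the exact reformulation YALMIP performs internally may differ) introduces a scalar bound t on the quadratic objective:

\[
\min_{P}\ \operatorname{Tr}(PP^T)
\quad\Longleftrightarrow\quad
\min_{P,\,t}\ t \quad\text{s.t.}\quad
\begin{bmatrix} t & \operatorname{vec}(P)^T\\ \operatorname{vec}(P) & I \end{bmatrix} \succeq 0 ,
\]

since a Schur complement shows that the LMI is equivalent to t ≥ ||vec(P)||² = Tr(PP^T), leaving only a linear objective and semidefinite constraints.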
B. Determinant maximization problems

As an example of a more advanced optimization problem based on semidefinite programming, let us address the problem of computing a state feedback controller u = Lx together with a maximally large invariant region R, for a saturated single-input system ẋ = Ax + Bu, |u| ≤ 1. If we work with an ellipsoidal region R = {x : x^T Px ≤ 1}, this can be addressed using semidefinite programming. To see this, let us first state the problem in a mathematical framework.

\[
\begin{aligned}
(A+BL)^T P + P(A+BL) &\prec 0\\
P &\succ 0\\
|Lx| \le 1 \quad &\forall x : x^T P x \le 1
\end{aligned}
\]

The first constraint ensures invariance of R (x^T Px is non-increasing), the second constraint ensures that x^T Px ≤ 1 defines an ellipsoidal region, while the last constraint ensures |u| ≤ 1 in R.

The constraints above are not LMIs, but a couple of standard tricks can be used to overcome this [3]. To begin with, multiply the first constraint from left and right with P^{-1} and introduce the new variables Q = P^{-1} and Y = LP^{-1}.
This yields the LMI A^T Q + QA + Y^T B^T + BY ≺ 0. Furthermore, it can easily be shown that max_{x^T Px ≤ 1} |Lx| = √(LP^{-1}L^T). Squaring this expression, inserting the definition of Q and Y, and performing a Schur complement shows that the third constraint is equivalent to

\[
\begin{bmatrix} 1 & Y \\ Y^T & Q \end{bmatrix} \succeq 0 .
\]
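Written out (an intermediate step added here for clarity, using Y = LQ and Q = P^{-1}), the chain of equivalences is

\[
\max_{x^T P x \le 1} |Lx| \le 1
\;\Longleftrightarrow\;
L P^{-1} L^T \le 1
\;\Longleftrightarrow\;
1 - Y Q^{-1} Y^T \ge 0
\;\Longleftrightarrow\;
\begin{bmatrix} 1 & Y\\ Y^T & Q \end{bmatrix} \succeq 0 ,
\]

where the last step is a Schur complement with respect to the block Q ≻ 0, and Y Q^{-1} Y^T = (LQ)Q^{-1}(LQ)^T = L P^{-1} L^T.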
A natural objective function is the volume of the ellipsoid R. The volume of this ellipsoid is proportional to det P^{-1}, or equivalently det Q. Hence, if we search for the maximal volume invariant ellipsoid, and the corresponding state feedback, we need to solve the following problem.

\[
\begin{aligned}
\max_{Q,Y}\quad & \det Q\\
\text{s.t.}\quad & A^T Q + QA + Y^T B^T + BY \prec 0\\
& Q \succ 0\\
& \begin{bmatrix} 1 & Y \\ Y^T & Q \end{bmatrix} \succeq 0
\end{aligned}
\]

Still, this is not a standard SDP, but a so called determinant maximization (MAXDET) problem [19]. Surprisingly, this class of problems can be solved with just slightly extended SDP solvers [21], or be converted to a standard SDP problem [13]. YALMIP supports the dedicated MAXDET solver [21], but can also use the construction in [13] to convert the problem to a standard SDP and solve the problem using any installed SDP solver. The following code implements the whole synthesis problem.

>> Q = sdpvar(n,n);
>> Y = sdpvar(1,n);
>> F = set(Q > 0);
>> F = F + set(A'*Q+Q*A+Y'*B'+B*Y < 0);
>> F = F + set([1 Y;Y' Q] > 0);
>> solvesdp(F,-logdet(Q));
>> P = inv(double(Q));
>> L = double(Y)*P;

Notice the objective function logdet(Q). This command is the key to declaring a MAXDET problem in YALMIP. (This somewhat strange notation is a heritage from an earlier version of YALMIP, when MAXDET problems could only be solved using [21]; in that solver, the objective function is c^T x − log det Q(x).)

C. Large-scale KYP-SDPs

Despite the celebrated polynomial complexity of convex SDPs, they do admittedly scale poorly when applied to large-scale system analysis and control synthesis. The reason is most often the introduction of a large Lyapunov-like matrix in the problem.

A substantial number of problems in systems and control theory can be addressed using the Kalman-Yakubovich-Popov lemma, often giving rise to SDPs of the following form.

\[
\begin{aligned}
\min_{x,P_i}\quad & c^T x\\
\text{s.t.}\quad & \begin{bmatrix} A_i^T P_i + P_i A_i & P_i B_i\\ B_i^T P_i & 0 \end{bmatrix} + M_i(x) \preceq 0, \quad i = 1,\dots,N
\end{aligned}
\]

By exploiting connections to classical Lyapunov equalities and using duality theory for SDP, specially crafted algorithms can eliminate the computational impact of the complicating matrices P_i, and improve performance by several orders of magnitude [11]. A clever implementation of the ideas in [11] can be found in the MATLAB package KYPD [20]. This special purpose solver can be used together with YALMIP.

Consider the problem of computing the worst case L2 gain from u to y for the system ẋ = Ax + Bu, y = Cx. This can be written as an SDP with one KYP constraint.

\[
\begin{aligned}
\min_{t,P}\quad & t\\
\text{s.t.}\quad & \begin{bmatrix} A^T P + PA + C^T C & PB\\ B^T P & -tI \end{bmatrix} \preceq 0
\end{aligned}
\]

A KYP constraint can conveniently be defined in YALMIP by using the command kyp. The use of kyp not only simplifies the code but, more importantly, enables YALMIP to categorize the problem as a KYP-SDP and call KYPD if available.

>> P = sdpvar(n,n);
>> t = sdpvar(1,1);
>> M = blkdiag(C'*C,-t*eye(m));
>> F = set(kyp(A,B,P,M) < 0);
>> solvesdp(F,t,ops);
D. Non-convex semidefinite programming

Although many problems in control and systems theory can be modeled using LMIs and solved using convex semidefinite programming, even more problems turn out to be non-convex.

One of the most basic problems in control theory is static output feedback, where we search for a controller u = Ky and a Lyapunov function x^T Px. The closed-loop Lyapunov stability condition gives the following constraints.

(A + BKC)^T P + P(A + BKC) ≺ 0,  P = P^T ≻ 0

Due to products between elements in P and K, the constraint is not linear, but a bilinear matrix inequality (BMI). Optimization problems with BMIs are known to be non-convex and NP-hard in general [18], hence intractable in theory. However, there is code available to attack these problems, and it is possible to find solutions in some practical cases.

YALMIP code to define BMI problems is no more complicated than code to define standard LMIs. The following program defines stability conditions for the output feedback problem, and tries to obtain a feasible solution by calling any installed BMI solver (YALMIP can currently only interface the BMI solver PENBMI [6]).

>> P = sdpvar(n,n);
>> K = sdpvar(m,n);
>> Ac = A+B*K*C;
>> F = set(P > 0);
>> F = F + set(Ac'*P+P*Ac < 0);
>> solvesdp(F)
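If the solver should be selected explicitly, or the computed matrices inspected afterwards, the same mechanisms as before apply. A usage sketch (the solver tag 'penbmi' and the empty objective argument for pure feasibility problems are assumptions; PENBMI must of course be installed):

>> ops = sdpsettings('solver','penbmi');
>> solvesdp(F,[],ops);        % feasibility problem, no objective
>> Kfound = double(K);        % candidate output feedback gain
>> Pfound = double(P);        % candidate Lyapunov matrix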
E. Sum-of-squares decompositions

Sum-of-squares decomposition (SOS) is a recent technique for analyzing positivity of polynomials using semidefinite programming [14]. The basic idea is to decompose a polynomial p(x) as a product v(x)^T Q v(x) for some polynomial vector v(x) and positive semidefinite matrix Q, thus trivially showing non-negativity. As an example, consider the following decomposition.

\[
p(x) = 1 + x + x^4 =
\begin{bmatrix} 1\\ x\\ x^2 \end{bmatrix}^T
\begin{bmatrix} 1 & 1/2 & -1/8\\ 1/2 & 1/4 & 0\\ -1/8 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1\\ x\\ x^2 \end{bmatrix}
\]

Since the matrix on the right-hand side is positive semidefinite, the polynomial on the left side is non-negative. Finding a decomposition for this example in YALMIP is done with the following code.

>> x = sdpvar(1,1);
>> p = 1+x+x^4;
>> F = set(sos(p));
>> solvesos(F)

An SOS-solver essentially derives a set of constraints that have to hold on the matrix Q, and then solves a semidefinite program to find a positive semidefinite Q satisfying all constraints. There is already user-friendly and efficient software available for these decompositions [15], but there might be cases when YALMIP is a valuable alternative.
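For the example above, these constraints follow from matching coefficients of v(x)^T Q v(x) = Q₁₁ + 2Q₁₂x + (Q₂₂ + 2Q₁₃)x² + 2Q₂₃x³ + Q₃₃x⁴ against p(x) = 1 + x + x⁴ (a worked illustration added here, not taken from the paper):

\[
Q_{11} = 1,\quad 2Q_{12} = 1,\quad Q_{22} + 2Q_{13} = 0,\quad 2Q_{23} = 0,\quad Q_{33} = 1,\quad Q \succeq 0 .
\]

Any Q satisfying these linear equations together with the semidefiniteness condition certifies non-negativity of p; searching for such a Q is exactly the semidefinite program that solvesos sets up.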
As an example of a unique feature of the SOS-functionality in YALMIP, let us study nonlinear control synthesis for the following model (taken from [8]).

\[
\begin{aligned}
\dot{x}_1 &= -1.5x_1^2 - 0.5x_1^3 - x_2\\
\dot{x}_2 &= u
\end{aligned}
\]

Define the vector z = [x1 x2 x1²]^T. Our goal is to find a stabilizing nonlinear controller u = Kz and a non-quadratic Lyapunov function V = z^T Pz, P ≻ 0. To prove stability, we would like to enforce V̇ ≤ −x^T x − u^T u. Note that this is a polynomial inequality in x, parameterized in the decision variables P and K. In order to address this problem using SOS, we search for P ≻ 0 and K such that −x^T x − u^T u − V̇ is a sum-of-squares.

For an implementation in YALMIP, we begin by defining states and decision variables.

>> x1 = sdpvar(1,1); x2 = sdpvar(1,1);
>> x = [x1;x2];
>> z = [x1;x2;x1^2];
>> K = sdpvar(1,3);
>> P = sdpvar(3,3);

We then form the closed-loop dynamics, the Lyapunov function and its derivative V̇(x).

>> u = K*z;
>> f = [-1.5*x1^2-0.5*x1^3-x2;u];
>> V = z'*P*z;
>> Vdot = jacobian(V,x)*f;

The constraints are a mix of standard constraints and SOS constraints (the bounds on K are added for numerical reasons).

>> F = set(P > 0) + set(-25 < K < 25);
>> F = F + set(sos(-x'*x-u'*u-Vdot));

The important catch here is that the SOS-constraint is bilinearly parameterized in P and K. YALMIP will automatically realize this and formulate the SOS-decomposition using BMIs, and call a BMI solver to find the decomposition. The internal SOS-module is invoked (YALMIP automatically detects the parametric decision variables), using Tr P as objective function.

>> solvesos(F,trace(P));

If a feasible solution is found, double(P) and double(K) recover the Lyapunov function and the controller.

It should be stressed that this functionality currently is rather academic, since it requires the solution of a non-convex semidefinite program. Hence, it is only applicable to small systems. For this to become a viable approach, vastly improved robustness and efficiency of BMI solvers is needed.

F. Multiparametric programming

Another field in control theory where optimization has had a tremendous impact is model predictive control (MPC) [12]. The basic idea in model predictive control is to pose optimal control problems on-line and solve these optimization problems continuously. MPC has had a substantial impact in practice, and is probably one of the most successful modern control algorithms. However, since MPC is based on optimization, it requires a considerable amount of on-line computer resources to solve the optimization problems fast enough. Hence, the impact in systems requiring fast sampling or cheap on-line computers has been limited.

The on-line optimization problems for model predictive control, applied to a constrained linear discrete-time system x_{k+1} = Ax_k + Bu_k, y_k = Cx_k, essentially boil down to optimization problems of the following form.

\[
z^*(x) = \arg\min_{z}\ \tfrac{1}{2} z^T H z + (c + Fx)^T z + d^T x
\quad \text{s.t.} \quad
Gz \le w + Ex
\]

The variable x is typically the current state x_k, whereas the decision variable z normally denotes a control trajectory to be optimized. The variables H, F, c, d, G, w and E are constant data depending on the model and controller tuning. Note that the problem is a QP in z for fixed x.

The function z^*(x), i.e., the optimal future control trajectory as a function of the state, can be shown to be piecewise affine. Hence, if we can find this function off-line, the on-line effort essentially reduces to a function evaluation. The concept of explicit solutions to parameterized optimization problems is called multiparametric programming.
An introduction to this field, with a bias towards control applications, can be found in [2].

Efficient algorithms to calculate explicit solutions to multiparametric LPs and QPs have recently been made publicly available in the MATLAB toolbox MPT [9]. This toolbox is interfaced in YALMIP, enabling extremely convenient definition and solution of multiparametric problems.

Giving a detailed description of MPC is beyond the scope of this paper, so let us just state and solve a typical MPC problem, for a given state x_k, using YALMIP.

>> U = sdpvar(N,1);
>> Y = T*x_k+S*U;
>> F = set(-1 < U < 1) + set(Y > 0);
>> sol = solvesdp(F,Y'*Y+U'*U);

The variable U is the decision variable and describes the future control trajectory. The model of the dynamic system is captured in the matrices T and S, which give the prediction of future outputs Y, given the current state x_k and the control sequence U. The input U is constrained and the output Y has to be positive. The performance measure is the standard unweighted quadratic cost, so the optimization problem solved when solvesdp is called will be a QP.
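The paper does not spell out how T and S are formed; for a single-output system over a horizon N they could, for instance, be built as follows (a hypothetical helper, included only to make the example self-contained).

>> T = []; S = zeros(N,N);
>> for i = 1:N
     T = [T; C*A^i];              % y(k+i) = C*A^i*x(k) + ...
     for j = 1:i
       S(i,j) = C*A^(i-j)*B;      % ... + C*A^(i-j)*B*u(k+j-1)
     end
   end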
The changes needed in the YALMIP code above to calculate an explicit solution U^*(x_k) instead are minimal. To begin with, we define x_k as an sdpvar variable. The explicit solution can only be calculated over a bounded set, so we constrain x_k to the region −10 ≤ x_k ≤ 10. Instead of using the command solvesdp, we invoke the function solvemp, with one additional argument to define the so called parametric variable, in our case x_k. The function solvemp serves as an interface to MPT and returns a MATLAB object defining the function U^*(x).

>> U = sdpvar(N,1);
>> x_k = sdpvar(n,1);
>> Y = T*x_k+S*U;
>> F = set(-10 < x_k < 10);
>> F = F + set(-1 < U < 1) + set(Y > 0);
>> sol = solvemp(F,Y'*Y+U'*U,x_k);

Of course, explicit solutions can be applied also in fields other than MPC. It should however be kept in mind that the currently available algorithms to calculate explicit LP and QP solutions are limited to problems with a small number of parametric variables, typically 5 or less.

V. CONCLUSION AND FUTURE PERSPECTIVES

We hope that this paper has convinced the reader that YALMIP is a powerful tool for optimization based algorithm development in MATLAB. The reader is encouraged to download YALMIP and experiment.

YALMIP has grown substantially since its first public release in early 2001, but is still evolving. The goal is to simplify the whole process of using optimization as an engineering tool, bring state-of-the-art solvers and methods to the casual MATLAB user, and, ultimately, deliver a general framework for control relevant optimization in MATLAB.

REFERENCES

[1] CPLEX. http://www.ilog.com/products/cplex.
[2] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos. The explicit linear quadratic regulator for constrained systems. Automatica, 38(1):3–20, 2002.
[3] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM Studies in Applied Mathematics. SIAM, Philadelphia, Pennsylvania, 1994.
[4] P. Gahinet and A. Nemirovskii. LMI Control Toolbox: the LMI Lab. The MathWorks, Inc., 1995.
[5] D. Henrion and J. B. Lasserre. GloptiPoly: Global optimization over polynomials with Matlab and SeDuMi. ACM Transactions on Mathematical Software, 29(2):165–194, 2003.
[6] M. Kočvara and M. Stingl. PENBMI. Available at http://www.penopt.com.
[7] M. Kočvara and M. Stingl. PENNON: a code for convex nonlinear and semidefinite programming. Optimization Methods and Software, 18(3):317–333, 2003.
[8] M. Krstić and P. Kokotović. Lean backstepping design for a jet engine compressor model. In Proceedings of the 4th IEEE Conference on Control Applications, pages 1047–1052, Albany, New York, 1995.
[9] M. Kvasnica, P. Grieder, M. Baotic, and M. Morari. Multi-Parametric Toolbox (MPT). Automatic Control Laboratory, ETHZ, Zürich, 2003.
[10] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817, 2001.
[11] L. Vandenberghe, R. V. Balakrishnan, R. Wallin, and A. Hansson. On the implementation of primal-dual interior-point methods for semidefinite programming problems derived from the KYP lemma. In Proceedings of the IEEE Conference on Decision and Control, volume 5, pages 4658–4663, Maui, HI, USA, December 9-12 2003. IEEE.
[12] J. M. Maciejowski. Predictive Control with Constraints. Prentice Hall, 2002.
[13] Y. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM Studies in Applied Mathematics. SIAM, Philadelphia, Pennsylvania, 1993.
[14] P. A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. Mathematical Programming Ser. B, 96(2):293–320, 2003.
[15] S. Prajna, A. Papachristodoulou, and P. A. Parrilo. SOSTOOLS: a general purpose sum of squares programming solver. In Proceedings of the 41st Conference on Decision & Control, Las Vegas, USA, 2002.
[16] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11-12(1-4):625–653, 1999.
[17] K. C. Toh, M. J. Todd, and R. H. Tütüncü. SDPT3 - a Matlab software package for semidefinite programming, version 2.1. Optimization Methods and Software, 11-12(1-4):545–581, 1999.
[18] O. Toker and H. Özbay. On NP-hardness of solving bilinear matrix inequalities and simultaneous stabilization with static output feedback. In Proceedings of the American Control Conference, Seattle, Washington, USA, 1995.
[19] L. Vandenberghe, S. Boyd, and S.-P. Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19(2):499–533, 1998.
[20] R. Wallin and A. Hansson. KYPD: a solver for semidefinite programs derived from the Kalman-Yakubovich-Popov lemma. In Proceedings of the CACSD Conference, Taipei, Taiwan, 2004.
[21] S.-P. Wu, L. Vandenberghe, and S. Boyd. MAXDET - Software for Determinant Maximization Problems - User's Guide. Information Systems Laboratory, Electrical Engineering Department, Stanford University, 1996.
