MPT Manual
∗ Institut für Automatik, ETH - Swiss Federal Institute of Technology, CH-8092 Zürich
† Corresponding Author: E-mail: [email protected], Tel. +41 01 632 4274
Contents
1 Introduction 1
2 Installation 3
2.1 Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Additional software requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.3 Setting up default parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
4 MPT in 15 minutes 14
4.1 First Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2 How to Obtain a Tractable State Feedback Controller . . . . . . . . . . . . . . . . 14
4.3 Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
6 Control Design 35
6.1 Controller computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.2 Fields of the mptctrl object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.3 Functions defined for mptctrl objects . . . . . . . . . . . . . . . . . . . . . . . . 37
6.4 Design of custom MPC problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.5 Soft constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.6 Control of time-varying systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.7 On-line MPC for nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.8 Move blocking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6.9 Problem Structure probStruct . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
9 Visualization 64
9.1 Plotting of polyhedral partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
9.2 Visualization of closed-loop and open-loop trajectories . . . . . . . . . . . . . . . 64
9.3 Visualization of general PWA and PWQ functions . . . . . . . . . . . . . . . . . . 65
10 Examples 67
11 Polytope Library 70
11.1 Creating a polytope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
11.2 Accessing data stored in a polytope object . . . . . . . . . . . . . . . . . . . . . . 70
11.3 Polytope arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
11.4 Geometric operations on polytopes . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
12 Acknowledgment 77
Bibliography 78
1 Introduction
Optimal control of constrained linear and piecewise affine (PWA) systems has garnered great
interest in the research community due to the ease with which complex problems can be stated
and solved. The aim of the Multi-Parametric Toolbox (MPT ) is to provide efficient computational
means to obtain feedback controllers for these types of constrained optimal control problems
in a Matlab [27] programming environment. By multi-parametric programming, a linear or
quadratic optimization problem is solved off-line. The associated solution takes the form of a
PWA state feedback law. In particular, the state-space is partitioned into polyhedral sets and
for each of those sets the optimal control law is given as one affine function of the state. In
the on-line implementation of such controllers, computation of the controller action reduces to
a simple set-membership test, which is one of the reasons why this method has attracted so
much interest in the research community.
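The on-line evaluation described above amounts to finding the region containing the current state and applying the associated affine law. A minimal sketch (Python is used purely for illustration — MPT itself runs in Matlab — and the one-dimensional partition below is a made-up example, not one produced by the toolbox):

```python
import numpy as np

def pwa_control(x, regions):
    """Evaluate an explicit PWA feedback law: find the polyhedral region
    {z | H z <= K} containing x and apply its affine law u = F x + G."""
    for H, K, F, G in regions:
        if np.all(H @ x <= K + 1e-9):   # the set-membership test
            return F @ x + G
    raise ValueError("x outside the controller's feasible set")

# Hypothetical partition of a saturated law u = max(-1, min(1, -0.5 x))
regions = [
    (np.array([[1.0]]),  np.array([-2.0]),
     np.zeros((1, 1)),   np.array([1.0])),    # region x <= -2: u = 1
    (np.array([[1.0], [-1.0]]), np.array([2.0, 2.0]),
     np.array([[-0.5]]), np.array([0.0])),    # region -2 <= x <= 2: u = -0.5 x
    (np.array([[-1.0]]), np.array([-2.0]),
     np.zeros((1, 1)),   np.array([-1.0])),   # region x >= 2: u = -1
]
print(pwa_control(np.array([1.0]), regions))   # -> [-0.5]
```

In the real toolbox the number of regions can be large, which is why region-lookup complexity matters; a linear scan as above is the simplest (not the fastest) strategy.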
As shown in [8] for quadratic objectives, a feedback controller may be obtained for constrained
linear systems by applying multi-parametric programming techniques. The linear objective was
tackled in [4] by the same means. The multi-parametric algorithms for constrained finite time
optimal control (CFTOC) of linear systems contained in the MPT are based on [1] and are
similar to [28]. Both [1] and [28] give algorithms that are significantly more efficient than the
original procedure proposed in [8].
It is current practice to approximate the constrained infinite time optimal control (CITOC) by
receding horizon control (RHC), a strategy where a CFTOC problem is solved at each time step,
and then only the initial value of the optimal input sequence is applied to the plant. The main
problem of RHC is that it does not, in general, guarantee stability. In order to make reced-
ing horizon control stable, conditions (e.g., terminal set constraints) have to be added to the
original problem which may result in degraded performance [25, 24]. The extensions to make
RHC stable are part of the MPT . It is furthermore possible to impose a minimax optimization
objective which allows for the computation of robust controllers for linear systems subject to
polytopic and additive uncertainties [6, 19]. As an alternative to computing suboptimal stabi-
lizing controllers, the procedures to compute the infinite time optimal solution for constrained
linear systems [13] are also provided.
Optimal control of piecewise affine systems has also received great interest in the research
community since PWA systems represent a powerful tool for approximating non-linear sys-
tems and because of their equivalence to hybrid systems [17]. The algorithms for computing
the feedback controllers for constrained PWA systems were presented for quadratic and linear
objectives in [10] and [3] respectively, and are also included in this toolbox. Instead of computing the feedback controllers which minimize a finite time cost objective, it is also possible to
obtain the infinite time optimal solution for PWA systems [2].
Even though the multi-parametric approaches rely on off-line computation of a feedback law,
the computation can quickly become prohibitive for larger problems. This is not only due
to the high complexity of the multi-parametric programs involved, but mainly because of
the exponential number of transitions between regions which can occur when a controller
is computed in a dynamic programming fashion [10, 20]. The MPT therefore also includes
schemes to obtain controllers of low complexity for linear and PWA systems as presented in
[15, 14, 16].
2 Installation
2.1 Installation
Remove any previous copy of MPT from your disk before installing a new version!
In order to use MPT, add the whole mpt/ directory and all its subdirectories to your Matlab path. If you are using Matlab for Windows, go to the "File - Set Path..." menu, choose "Add with Subfolders...", and select the MPT directory. Click the "Save" button to store the updated path setting. Under Unix, you can either manually edit the file "startup.m" or use the same procedure described above.
Once you have installed the toolbox, please consult Section 2.3 on how to set default values of certain parameters.
To explore functionality of MPT , try one of the following:
help mpt
help mpt/polytope
help mpt_sysStruct
help mpt_probStruct
mpt_demo1
mpt_demo2
mpt_demo3
mpt_demo4
mpt_demo5
3
2 Installation 4
mpt_demo6
runExample
The MPT toolbox comes with a set of pre-defined examples which the user can go through to become familiar with the basic features of the toolbox.
If you wish to be informed about new releases of the toolbox, subscribe to our mailing list by sending an email with
subscribe
in the subject field. To unsubscribe, send an email to the same address with
unsubscribe
in the subject field.
LP and QP solvers
Please consult Section 2.3 on how to make CDD a default LP solver for the MPT toolbox.
The NAG Foundation Toolbox for Matlab provides fast and reliable routines for many different optimization problems. Its LP and QP solvers are fully supported by MPT.
Another alternative is the commercial CPLEX solver from ILOG. The authors provide an interface to call CPLEX directly from Matlab; source codes and pre-compiled libraries for Windows, Solaris and Linux can be downloaded from
http://control.ee.ethz.ch/∼hybrid/cplexint.php
Please note that you need to be in possession of a valid CPLEX license in order to use CPLEX
solvers.
The free GLPK (GNU Linear Programming Kit) solver is also supported by the MPT toolbox, and a MEX interface is included in the distribution. You can download the latest version of GLPKMEX, written by Nicolo Giorgetti, from:
http://www-dii.ing.unisi.it/∼giorgetti/downloads.php
Note that we have experienced several numerical inconsistencies when using GLPK.
Some routines of the MPT toolbox rely on Linear Matrix Inequality (LMI) theory. Certain functions therefore require solving a semidefinite optimization problem. The YALMIP interface by Johan Lofberg http://control.ee.ethz.ch/∼joloef/
is included in this release of the MPT toolbox. Since the interface is a wrapper that calls an external LMI solver, we strongly recommend installing one of the solvers supported by YALMIP. You can obtain a list of free LMI solvers here:
http://control.ee.ethz.ch/∼joloef/yalmip.php
YALMIP supports a large variety of semidefinite programming packages. One of them, the SeDuMi solver written by Jos Sturm, comes along with MPT. Source codes as well as binaries for Windows are included directly; for other operating systems you can compile the code by following the instructions in mpt/solvers/SeDuMi105/Install.unix. For more information consult http://fewcal.kub.nl/sturm/software/sedumi.html
MPT allows the computation of orthogonal projections of polytopes. Several projection methods are available in the toolbox for this task. Two of them, ESP and Fourier-Motzkin elimination, are coded in C and need to be accessible as MEX libraries. These libraries are provided in compiled form for Linux and Windows. For other architectures you will need to compile the corresponding library on your own; to do so, follow the instructions in mpt/solvers/esp and mpt/solvers/fourier, respectively.
2.3 Setting up default parameters
By default, it is not necessary to modify the settings stored in mpt_init.m. However, if you decide to do so, we strongly recommend using the GUI setup function
mpt_setup
Any routine of the MPT toolbox can be called with user-specified values of certain global parameters. To make usage of the MPT toolbox as user-friendly as possible, we provide the option to store default values of these parameters in the variable mptOptions, which is kept in MATLAB's workspace as a global variable (i.e., it stays there unless one types clear all).
The variable is created when the toolbox gets initialized through a call to mpt_init.
Default LP solver: In order to set the default LP solver, open the file mpt_init.m in your editor and scroll down to the following line:
mptOptions.lpsolver = [];
The integer value on the right-hand side specifies the default LP solver. Allowed values are:
0 NAG Foundation LP solver
3 CDD Criss-Cross Method
2 CPLEX
4 GLPK
5 CDD Dual-Simplex Method
1 linprog
If the argument is empty, the fastest available solver will be enabled. Solvers presented
in the table above are sorted in the order of preference.
Default QP solver: To change the default QP solver, locate and modify this line in mpt init.m:
mptOptions.qpsolver = [];
Allowed values for the right-hand side argument are the following:
0 NAG Foundation QP solver
2 CPLEX
1 quadprog
Again, if there is no specification provided, the fastest alternative will be used.
Note: Quadratic Program solver is not necessarily required by MPT . If you are not in
possession of any QP solver, you still will be able to use large part of functionality in-
volved in the toolbox. But the optimization problems will be limited to linear performance
objectives.
Default solver for extreme points computation: Some of the functions in MPT toolbox require
computing of extreme points of polytopes given by their H-representation and calculating
convex hulls of given vertices respectively. Since efficient analytical methods are limited
to low dimensions only, we provide the possibility to pass this computation to an external
2 Installation 7
software package (CDD). However, if the user for any reason does not want to use third-
party tools, the problem can still be tackled in an analytical way (with all the limitations
mentioned earlier).
To change the default method for extreme points computation, locate the following line
in mpt init.m:
mptOptions.extreme_solver = [];
and change the right-hand side argument to one of these values:
3 CDD (faster computation, works also for higher dimensions)
0 Analytical computation (limited to dimensions up to 3)
Default tolerances: The Multi-Parametric Toolbox internally works with two types of tolerances:
- absolute tolerance
- relative tolerance
Default values for these two constants can be set by modifying the following lines of mpt_init.m:
mptOptions.rel_tol = 1e-6;
mptOptions.abs_tol = 1e-7;
Default values for multi-parametric solvers: Solving a given QP/LP in a multi-parametric way involves making "steps" across region boundaries. The length of this step is given by the following variable:
mptOptions.step_size = 1e-4;
Due to numerical problems, tiny regions are sometimes difficult to calculate, i.e., they are not identified at all. This may create "gaps" in the computed control law. During the exploration these gaps are jumped over and the exploration of the state space continues. See [1] for details.
Level of detecting those gaps is given by the following variable:
mptOptions.debug_level = 1;
The right-hand side argument can take three values:
0 No debugging is performed.
1 A tolerance is used to find gaps in the region partition; small empty regions inside the partition will be discarded. Note that this is generally not a problem, since the feedback law is continuous and can therefore be interpolated easily. A correction to the calculation of the outer hull is performed as well.
2 Zero tolerance is used to find gaps in the region partition; empty regions, if they exist, will be detected and the user will be notified. A correction to the calculation of the outer hull is performed.
Default Infinity-box: MPT internally converts Rⁿ to a box with large bounds. The following parameter specifies the size of this box:
mptOptions.infbox = 1e4;
Note that polyhedra (unbounded polytopes) are also converted to bounded polytopes by intersecting them with the "Infinity-box".
Default values for plotting: The overloaded plot function can be forced to open a new figure window every time the user calls it. This behavior is controlled by the following line in mpt_init.m:
mptOptions.newfigure = 0;
where 1 means "enabled" and 0 stands for "disabled".
Default level of verbosity: Text output from functions can be limited or suppressed completely by changing the following option in mpt_init.m:
mptOptions.display = 1;
Allowed values are:
0 only important messages are displayed
1 intermediate information is displayed as well
2 no output suppression
Level of details: Defines how many details about the solution should be stored in the resulting controller structure. This can have a significant impact on the size of the controller structure. If you want to evaluate the open-loop solution for PWA systems, set this to 1. Otherwise leave the default value to save memory and disk space.
mptOptions.details = 0;
Once you have modified the mpt_init.m file, type:
mpt_init
3 Theory of Polytopes and Multi-Parametric Programming

3.1 Polytopes
Polytopic (or, more generally, polyhedral) sets are an integral part of multi-parametric programming. For this reason we recall some of the definitions and fundamental operations with polytopes. For more details we refer the reader to [30, 12]. A convex set of the form
P = { x ∈ Rⁿ | P^x x ≤ P^c }, (3.1)
is called a polyhedron. A bounded polyhedron
P = { x ∈ Rⁿ | P^x x ≤ P^c }, (3.2)
is called a polytope.
It is obvious from the above definitions that every polytope represents a convex, compact (i.e., bounded and closed) set. We say that a polytope P ⊂ Rⁿ, P = { x ∈ Rⁿ | P^x x ≤ P^c }, is full dimensional if ∃ x ∈ Rⁿ : P^x x < P^c. Furthermore, if ||(P^x)_i|| = 1, where (P^x)_i denotes the i-th row of the matrix P^x, we say that the polytope P is normalized. One of the fundamental properties of a polytope is that it can also be described by its vertices,
P = { x ∈ Rⁿ | x = Σ_{i=1}^{v_P} α_i V_P^(i), 0 ≤ α_i ≤ 1, Σ_{i=1}^{v_P} α_i = 1 }, (3.3)
where V_P^(i) denotes the i-th vertex of P, and v_P is the total number of vertices of P.
We will henceforth refer to the half-space representation (3.2) and vertex representation (3.3)
as H and V representation respectively.
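The two representations can be converted into one another (vertex and facet enumeration). As a language-neutral illustration, independent of the MPT routines, the H-representation of the unit square can be recovered from its vertices with scipy:

```python
import numpy as np
from scipy.spatial import ConvexHull

# V-to-H conversion for a full-dimensional polytope: scipy's ConvexHull
# returns one row [a, b] per facet, encoding the inequality a'x + b <= 0.
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
hull = ConvexHull(V)
A = hull.equations[:, :-1]   # facet normals
b = hull.equations[:, -1]    # facet offsets

# every vertex satisfies all facet inequalities (up to numerical tolerance)
assert all(np.all(A @ v + b <= 1e-9) for v in V)
assert len(A) == 4           # the square has four facets
```

Conversion in either direction becomes expensive in high dimensions, which is why the toolbox delegates it to dedicated packages such as CDD (see Section 2.3).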
A face of the polytope P is a set of the form
F = P ∩ { x ∈ Rⁿ | a′x = b }, (3.4)
where a′x ≤ b holds for all x ∈ P. The set difference of two polytopes P and Q is the set
R = P \ Q := { x ∈ Rⁿ | x ∈ P, x ∉ Q }. (3.5)
The Pontryagin difference of two polytopes P and W is the set
P ⊖ W := { x ∈ Rⁿ | x + w ∈ P, ∀w ∈ W }. (3.6)
The Minkowski sum of two polytopes P and W is the set
P ⊕ W := { x + w ∈ Rⁿ | x ∈ P, w ∈ W }. (3.7)
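For axis-aligned boxes, the two operations (3.6)–(3.7) reduce to interval arithmetic, which makes for a compact sanity check. This is an illustrative Python sketch for the special box case, not the general polytope algorithms used by MPT:

```python
import numpy as np

# Minkowski sum (3.7) and Pontryagin difference (3.6) specialized to
# axis-aligned boxes given as (lower, upper) corner pairs.
def minkowski_sum(P, W):
    (plo, phi), (wlo, whi) = P, W
    return (plo + wlo, phi + whi)

def pontryagin_diff(P, W):
    (plo, phi), (wlo, whi) = P, W
    return (plo - wlo, phi - whi)    # shrink P by the extent of W

P = (np.array([-2.0, -2.0]), np.array([2.0, 2.0]))   # box [-2, 2]^2
W = (np.array([-0.5, -0.5]), np.array([0.5, 0.5]))   # box [-0.5, 0.5]^2

lo, hi = pontryagin_diff(P, W)       # -> box [-1.5, 1.5]^2
```

Note that for boxes (P ⊖ W) ⊕ W = P, while for general polytopes only (P ⊖ W) ⊕ W ⊆ P holds.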
The convex hull of the union of p polytopes P_1, …, P_p is the set
hull( ∪_{i=1}^p P_i ) := { x ∈ Rⁿ | x = Σ_{i=1}^p α_i x_i, x_i ∈ P_i, 0 ≤ α_i ≤ 1, Σ_{i=1}^p α_i = 1 }. (3.8)
The envelope of two H-polyhedra P and Q is the polyhedron
env(P, Q) = { x ∈ Rⁿ | P̄^x x ≤ P̄^c, Q̄^x x ≤ Q̄^c }, (3.9)
where P̄^x x ≤ P̄^c is the subsystem of P^x x ≤ P^c obtained by removing all the inequalities not valid for the polyhedron Q, and Q̄^x x ≤ Q̄^c is defined in a similar way with respect to Q^x x ≤ Q^c and P [7].
This section first covers some of the fundamentals of multi-parametric programming for linear systems before restating the results for PWA systems. Consider a discrete-time linear time-invariant system
x(t + 1) = Ax(t) + Bu(t), (3.10)
with A ∈ Rⁿˣⁿ and B ∈ Rⁿˣᵐ. Let x(t) denote the state at time t and x_{t+k|t} denote the predicted
state at time t + k given the state at time t. For brevity we denote xk|0 as xk . Let uk be the
computed input for time k, given x(0). Assume now that the states and the inputs of the
system in (3.10) are subject to the following constraints
x ∈ X ⊂ Rn , u ∈ U ⊂ Rm (3.11)
where X and U are compact polyhedral sets containing the origin in their interior, and consider
the constrained finite-time optimal control (CFTOC) problem
J*_N(x(0)) = min_{u_0,…,u_{N−1}}  ||Q_f x_N||_ℓ + Σ_{k=0}^{N−1} ( ||R u_k||_ℓ + ||Q x_k||_ℓ )  (3.12a)
subj. to  x_k ∈ X, ∀k ∈ {1, …, N}, (3.12b)
          x_N ∈ X_set, (3.12c)
          u_k ∈ U, ∀k ∈ {0, …, N − 1}, (3.12d)
          x_0 = x(0), x_{k+1} = A x_k + B u_k, ∀k ∈ {0, …, N − 1}, (3.12e)
          Q = Q′ ⪰ 0, Q_f = Q_f′ ⪰ 0, R = R′ ≻ 0, if ℓ = 2,  (3.12f)
          rank(Q) = n, rank(R) = m, if ℓ ∈ {1, ∞},
where (3.12c) is a user defined set-constraint on the final state which may be chosen such
that stability of the closed-loop system is guaranteed [24]. The cost (3.12a) may be linear (e.g.,
ℓ ∈ {1, ∞}) [4] or quadratic (e.g., ℓ = 2) [8] whereby the matrices Q, R and Q f represent
user-defined weights on the states and inputs.
Definition 3.3.1: We define the N-step feasible set X_f^N ⊆ Rⁿ as the set of initial states x(0) for which the CFTOC problem (3.12) is feasible, i.e.
X_f^N = { x(0) ∈ Rⁿ | ∃ (u_0, …, u_{N−1}) such that (3.12b)–(3.12e) hold }. (3.13)
For a given initial state x(0), problem (3.12) can be solved as an LP or QP for linear or quadratic
cost objectives respectively. However, this type of on-line optimization may be prohibitive for
control of fast processes.
By substituting x_k = A^k x(0) + Σ_{j=0}^{k−1} A^j B u_{k−1−j}, problem (3.12) for the quadratic cost objective can be reformulated as
J*_N(x(0)) = x(0)′ Y x(0) + min_{U_N}  U_N′ H U_N + x(0)′ F U_N  subj. to  G U_N ≤ W + E x(0), (3.14)
where the column vector U_N := [u_0′, …, u_{N−1}′]′ ∈ Rˢ is the optimization vector, s := mN, and H, F, Y, G, W, E are easily obtained from Q, R, Q_f, (3.10) and (3.11) (see [8] for details). The same transformation can trivially be applied to linear cost objectives in (3.12a). Because problem (3.14) depends on x(0), it can also be solved as a multi-parametric program [8]. Considering x(0) as a parameter, problem (3.12) can then be solved for all parameters x(0) to obtain a feedback solution with the following properties.
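The substitution above can be checked numerically. The following sketch (Python for illustration only) builds the condensed prediction x = Φ x(0) + Γ U_N, whose blocks are exactly the terms A^k x(0) and A^{k−1−j} B of the substitution; the double-integrator data is a made-up example:

```python
import numpy as np

# Stack the predictions x_1, ..., x_N as x = Phi x(0) + Gamma U_N,
# where U_N = [u_0', ..., u_{N-1}']' -- the substitution used for (3.14).
def prediction_matrices(A, B, N):
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    Gamma = np.zeros((n * N, m * N))
    for k in range(1, N + 1):                 # block row for x_k
        for j in range(k):                    # contribution of u_j
            Gamma[(k - 1) * n:k * n, j * m:(j + 1) * m] = \
                np.linalg.matrix_power(A, k - 1 - j) @ B
    return Phi, Gamma

A = np.array([[1.0, 1.0], [0.0, 1.0]])        # double integrator
B = np.array([[0.0], [1.0]])
Phi, Gamma = prediction_matrices(A, B, N=3)

# cross-check against a step-by-step simulation of x_{k+1} = A x_k + B u_k
x0 = np.array([1.0, 0.0])
U = np.array([0.5, -0.2, 0.1])
x, traj = x0, []
for u in U:
    x = A @ x + B[:, 0] * u
    traj.append(x)
assert np.allclose(np.concatenate(traj), Phi @ x0 + Gamma @ U)
```

With Φ and Γ in hand, the cost matrices H, F, Y of (3.14) follow by expanding the quadratic objective in U_N; that expansion is omitted here.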
Theorem 3.3.2: [8, 9] Consider the CFTOC problem (3.12). Then, the set of feasible parameters X_f^N is convex, the optimizer U*_N : X_f^N → R^{Nm} is continuous and piecewise affine (PWA), i.e.
U*_N(x(0)) = F_r x(0) + G_r  if  x(0) ∈ P_r = { x ∈ Rⁿ | H_r x ≤ K_r },  r = 1, …, R, (3.15)
and the optimal cost J*_N : X_f^N → R is continuous, convex and piecewise quadratic (ℓ = 2) or piecewise linear (ℓ ∈ {1, ∞}).
According to Theorem 3.3.2, the feasible state space X fN is partitioned into R polytopic regions,
i.e., X fN = {Pr }rR=1 . Though the initial approach was presented in [8], more efficient algorithms
for the computation are given in [1, 28]. With sufficiently large horizons or appropriate terminal set constraints (3.12c), closed-loop stability is guaranteed under receding horizon control [13, 24]. However, no robustness guarantees can be given. This issue is addressed
in [19, 6] where the authors present minimax methods which are able to cope with additive
disturbances
x(t + 1) = Ax(t) + Bu(t) + w, w ∈ W, (3.16)
where W is a polytope with the origin in its interior. The minimax approach can be applied
also when there is polytopic uncertainty in the system dynamics,
x(t + 1) = A(λ) x(t) + B(λ) u(t), (3.17)
with λ ∈ R^L and
Ω := conv( [A^(1) | B^(1)], [A^(2) | B^(2)], …, [A^(L) | B^(L)] ), (3.18a)
[A(λ) | B(λ)] ∈ Ω, (3.18b)
i.e., there exist L nonnegative coefficients λ_l ∈ R (l = 1, …, L) such that
Σ_{l=1}^L λ_l = 1,  [A(λ) | B(λ)] = Σ_{l=1}^L λ_l [A^(l) | B^(l)]. (3.19)
The set of admissible λ can be written as Λ := { λ ∈ [0, 1]^L | ||λ||₁ = 1 }. In order to guarantee robust stability of the closed-loop system, the objective (3.12a) is modified such that the feedback law which minimizes the worst case is computed, hence the name minimax control.
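Per (3.18)–(3.19), any admissible system matrix is a convex combination of the L vertex realizations, and because the worst case of a convex function over a polytope is attained at a vertex, worst-case quantities over Ω can be evaluated at the vertex systems. A small numerical sketch (Python for illustration; the two vertex systems are made up):

```python
import numpy as np

# Two hypothetical vertex systems [A^(l) | B^(l)] spanning Omega in (3.18a)
A_vert = [np.array([[1.0, 0.1], [0.0, 1.0]]),
          np.array([[1.0, 0.2], [0.0, 0.9]])]
B_vert = [np.array([[0.0], [1.0]]),
          np.array([[0.05], [1.0]])]

def vertex_combination(lam, A_vert, B_vert):
    """Realize [A(lam) | B(lam)] as the convex combination (3.19)."""
    assert min(lam) >= 0 and np.isclose(sum(lam), 1.0)   # lam in Lambda
    A = sum(l * Al for l, Al in zip(lam, A_vert))
    B = sum(l * Bl for l, Bl in zip(lam, B_vert))
    return A, B

A, B = vertex_combination([0.25, 0.75], A_vert, B_vert)

# worst-case next-state magnitude over Omega is attained at a vertex
x, u = np.array([1.0, 1.0]), np.array([0.1])
worst = max(np.linalg.norm(Al @ x + Bl @ u, np.inf)
            for Al, Bl in zip(A_vert, B_vert))
```

This vertex-enumeration structure is what makes the minimax objective tractable: only the L vertex systems, not the whole continuum Ω, enter the optimization.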
The results in [8] were extended in [5, 10, 3] to compute the optimal explicit feedback controller
for PWA systems of the form
x(k + 1) = A_i x(k) + B_i u(k) + f_i, (3.20a)
L_i x(k) + E_i u(k) ≤ W_i,  i ∈ I, (3.20b)
if [x(k)′ u(k)′]′ ∈ D_i, (3.20c)
whereby the dynamics (3.20a) with the associated constraints (3.20b) are valid in the polyhedral
set Di defined in (3.20c). The set I ⊂ N, I = {1, . . . , d} represents all possible dynamics, and d
denotes the number of different dynamics. Henceforth, we will abbreviate (3.20a) and (3.20c)
with x(k + 1) = f PWA ( x(k), u(k)). Note that we do not require x(k + 1) = f PWA ( x(k), u(k)) to
be continuous. The optimization problem considered here is given by
J*_N(x(0)) = min_{u_0,…,u_{N−1}}  ||Q_f x_N||_ℓ + Σ_{k=0}^{N−1} ( ||R u_k||_ℓ + ||Q x_k||_ℓ )  (3.21a)
subj. to  L_i x_k + E_i u_k ≤ W_i, if [x_k′ u_k′]′ ∈ D_i, i ∈ I, ∀k ∈ {0, …, N − 1}, (3.21b)
          x_N ∈ X_set, (3.21c)
          x_{k+1} = f_PWA(x_k, u_k), x_0 = x(0), ∀k ∈ {0, …, N − 1}, (3.21d)
          Q = Q′ ⪰ 0, Q_f = Q_f′ ⪰ 0, R = R′ ≻ 0, if ℓ = 2,  (3.21e)
          rank(Q) = n, rank(R) = m, if ℓ ∈ {1, ∞}.
Here (3.21c) is a user-specified set constraint on the terminal state which may be used to
guarantee stability [23, 14, 9]. As an alternative, the infinite horizon solution to (3.21) guarantees
stability as well [2]. In order to robustify controllers with respect to additive disturbances, a
minimax approach is taken [20] which is identical to what was proposed for linear systems
[19, 9].
All multi-parametric programming methods suffer from the curse of dimensionality. As the pre-
diction horizon N increases, the number of partitions R (X fN = {Pr }rR=1 ) grows exponentially
making the computation and application of the solution intractable. Therefore, there is a clear
need to reduce the complexity of the solution. This was tackled in [16, 15, 14] where the authors
present two methods for obtaining feedback solutions of low complexity for constrained linear
and PWA systems. The first controller drives the state in minimum time into a convex set Xset ,
where the cost-optimal feedback law is applied [15, 14]. This is achieved by iteratively solving
one-step multi-parametric optimization problems. Instead of solving one problem of size N,
the algorithm solves N problems of size 1, which decreases both the on-line and off-line complexity.
This scheme guarantees closed-loop stability. If a linear system is considered, an even simpler
controller may be obtained by solving only one problem of size 1, with the additional constraint
that x1 ∈ X fN [15, 16]. In order to guarantee stability of this closed-loop system, an LMI analysis
is performed which aims at identifying a Lyapunov function [18, 11].
4 MPT in 15 minutes
This short introduction is not meant to (and does not) replace the MPT manual. It serves to
clarify the key points of Model Predictive Control and application thereof within the framework
of the MPT toolbox. Specifically, the main problems which arise in practice are illustrated in a
concise manner without going into the technical details.
Before reading the rest of this introduction, have a close look at the provided demonstrations and go through them slowly. At the Matlab command prompt, type mpt_demo1, mpt_demo2, …, mpt_demo6. After completing the demos, run some examples by typing runExample
at the command prompt. More demos can be found in the mpt/examples/ownmpc and
mpt/examples/nonlin directories of your MPT installation. Finally, for a good overview,
type help mpt and help polytope to get the list and short descriptions of (almost) all
available functions.
In this section the regulation problem will be treated. See the subsequent section for the special
case of tracking.
The most important aspects in system modelling for MPT are given below:
1. Always make sure your dynamic matrices and states/inputs are well scaled. Ideally, all variables span the range between ±10. See [26] for details.
2. Try to have as few different dynamics as possible when designing your PWA system
model.
3. The fewer states and inputs your system model has, the easier all subsequent computa-
tions will be.
4. Use the largest possible sampling time when discretizing your system.
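On point 4, the sampling time enters through the zero-order-hold discretization of the continuous-time model. A minimal sketch of that step (Python with scipy, in place of Matlab's c2d; the double-integrator data below is illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Zero-order-hold discretization of dx/dt = Ac x + Bc u with sampling
# time Ts, via the standard block matrix-exponential trick.
def c2d_zoh(Ac, Bc, Ts):
    n, m = Bc.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = Ac
    M[:n, n:] = Bc
    Md = expm(M * Ts)          # exp([[Ac, Bc], [0, 0]] * Ts)
    return Md[:n, :n], Md[:n, n:]

Ac = np.array([[0.0, 1.0], [0.0, 0.0]])   # continuous-time double integrator
Bc = np.array([[0.0], [1.0]])
Ad, Bd = c2d_zoh(Ac, Bc, Ts=1.0)
# Ad = [[1, 1], [0, 1]], Bd = [[0.5], [1]]
```

A larger Ts means fewer prediction steps are needed to cover a given time span, which is why point 4 directly reduces the horizon N and hence controller complexity.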
Control Schemes
For a detailed description of how to define your system structure sysStruct and problem structure probStruct, see Sections 5.7 and 6.9, respectively. We also suggest you examine the m-files in the 'Examples' directory of the MPT toolbox and take a closer look at the runExample.m file. Detailed examples for controller computations are also provided in the MPT manual (Section Examples).
Computing explicit state feedback controllers via multi-parametric programming may easily
lead to controllers with prohibitive complexity (both in runtime and solution) and the following
is intended to give a brief overview of the existing possibilities to obtain tractable controllers
for the problems MPT users may face. Specifically, there are three aspects which are important
in this respect: performance, stability and constraint satisfaction.
Infinite Time Optimal Control [13, 2]
To use this method, set probStruct.N=Inf and probStruct.subopt_lev=0. This will yield the infinite time optimal controller, i.e., the best possible performance for the problem at hand. Asymptotic stability and constraint satisfaction are guaranteed, and all controllable states (the maximum controllable set) will be covered by the resulting controller. However, the complexity of the associated controller may be prohibitive, and its computation may take a very long time.
Finite Time Optimal Control [8, 3, 9, 24]
To use this method, set probStruct.N to a finite horizon N ∈ {1, 2, …} and probStruct.subopt_lev=0. This will yield the finite time optimal controller, i.e., performance will be N-step optimal but may not be infinite horizon optimal. The complexity of the resulting controller depends strongly on the prediction horizon (large N → complex controller). It is furthermore necessary to differentiate the following cases:
probStruct.Tset=P: User defined terminal set. Depending on the properties (e.g., in-
variance, size) of the target set P, any combination of the two cases previously described
may occur.
Minimum Time Control [15, 14]
To use this method, set probStruct.subopt_lev=1. This will yield the minimum time controller with respect to a target set around the origin, i.e., the controller will drive the state into this set in minimum time. In general, the complexity of minimum time controllers is significantly lower than that of their 1/2/∞-norm cost optimal counterparts. The controller is guaranteed to cover all controllable states, and asymptotic stability and constraint satisfaction are guaranteed. Note that if you choose to manually define your target set by setting probStruct.Tset=P, these properties may not hold.
Low Complexity Controller [15, 16]
To use this method, set probStruct.subopt_lev=2. This will yield a controller for a prediction horizon N = 1 with additional constraints which guarantee asymptotic stability and constraint satisfaction in closed loop. The controller covers all controllable states. The complexity of this 1-step controller is generally significantly lower than that of all other control schemes in MPT which cover the maximal controllable set. However, the computation of the controller may take a long time.
Conclusion
4.3 Tracking
If you are solving a tracking problem, everything becomes more complicated. It is necessary to
differentiate between the case of constant reference tracking (reference state is fixed a priori)
and varying reference tracking (reference is arbitrarily time varying).
Assume a 3rd order system with 2 inputs. In the ∆u-tracking formulation, the resulting dynamical model will have 8 states (3 system states x + 3 reference states x_ref + 2 input states u(k − 1)) and 2 inputs (∆u(k)). If we solve the regulation problem for this augmented system (see previous sections), we obtain a controller which allows for time-varying references. For control purposes, the reference state x_ref is imposed by the user, i.e., x_ref is set to a specific value. The regulation controller then automatically steers the state x to the reference state.
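The augmented ∆u-model described above can be assembled mechanically from (A, B). A sketch (Python for illustration; the numeric A, B are placeholders, and the reference is modelled as constant over the prediction):

```python
import numpy as np

# Delta-u tracking: augment the state as z(k) = [x(k); x_ref; u(k-1)] and
# take du(k) as the new input, so x(k+1) = A x + B u(k-1) + B du(k).
def augment_for_tracking(A, B):
    n, m = B.shape
    Aa = np.block([
        [A,                np.zeros((n, n)), B               ],
        [np.zeros((n, n)), np.eye(n),        np.zeros((n, m))],  # x_ref held constant
        [np.zeros((m, n)), np.zeros((m, n)), np.eye(m)       ],  # u(k) = u(k-1) + du(k)
    ])
    Ba = np.vstack([B, np.zeros((n, m)), np.eye(m)])
    return Aa, Ba

A, B = np.eye(3), np.ones((3, 2))      # 3 states, 2 inputs as in the text
Aa, Ba = augment_for_tracking(A, B)
assert Aa.shape == (8, 8) and Ba.shape == (8, 2)   # 3 + 3 + 2 = 8 states
```

The dimension growth visible here (n states become 2n + m) is exactly why tracking problems are more expensive to solve than regulation problems.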
Note that time varying tracking problems are generally of high dimension, such that controller
computation is expensive. If you can reduce your control objective to a regulation problem
for a set of predefined reference points, we suggest you solve a sequence of fixed state track-
ing problems instead of the time varying tracking problem, in order to obtain computational
tractability.
5 Modelling of Dynamical Systems
In this chapter we will show how to model dynamical systems in the MPT framework. As already described before, each system is defined by means of a sysStruct structure, which is described in more detail in Section 5.7.
The behavior of a plant is in general driven by two major components: system dynamics and system constraints. Both of these components have to be described in the system structure.
LTI dynamics
Linear time-invariant (LTI) systems are described by the state-space equations
x(k + 1) = A x(k) + B u(k), (5.1)
y(k) = C x(k) + D u(k), (5.2)
where x(k) ∈ R^{n_x} is the state vector at time instance k, x(k + 1) denotes the state vector at time k + 1, and u(k) ∈ R^{n_u} and y(k) ∈ R^{n_y} are the values of the control input and system output, respectively. A, B, C and D are matrices of appropriate dimensions, i.e., A is an n_x × n_x matrix, the dimension of B is n_x × n_u, C is n_y × n_x and D is n_y × n_u.
Dynamical matrices are stored in the following fields of the system structure:
sysStruct.A = A
sysStruct.B = B
sysStruct.C = C
sysStruct.D = D
sysStruct.A = [1 1; 0 1];
sysStruct.B = [1; 0.5];
sysStruct.C = [1 0; 0 1];
sysStruct.D = [0; 0];
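The matrices entered above describe a double integrator sampled at 1 s, and a quick simulation shows the expected behavior. Python is used here only as a sanity check, independent of MPT:

```python
import numpy as np

# The model entered above: x(k+1) = A x(k) + B u(k);
# simulate three steps of a unit input.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.5]])

x = np.array([0.0, 0.0])
for _ in range(3):
    x = A @ x + B[:, 0] * 1.0    # unit input at every step
print(x)    # velocity ramps up, position accumulates it
```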
PWA dynamics
Piecewise-affine (PWA) systems are systems whose dynamics are affine and can differ in different parts of the state-input space. In particular, they are defined by
x(k + 1) = A_i x(k) + B_i u(k) + f_i, (5.5)
y(k) = C_i x(k) + D_i u(k) + g_i, (5.6)
if [x(k)′ u(k)′]′ ∈ D_i. (5.7)
The subindex i takes values 1, …, N_PWA, where N_PWA is the total number of PWA dynamics defined over a polyhedral partition D. The dimensions of the matrices in (5.5)–(5.7) are summarized in Table 5.1.
Matrix | Dimension
A | n_x × n_x
B | n_x × n_u
f | n_x × 1
C | n_y × n_x
D | n_y × n_u
g | n_y × 1
Tab. 5.1: Dimensions of matrices of a PWA system.
Matrices in equations (5.5) and (5.6) are stored in the following fields of the system struc-
ture:
sysStruct.A{i} = Ai
sysStruct.B{i} = Bi
sysStruct.f{i} = fi
sysStruct.C{i} = Ci
sysStruct.D{i} = Di
sysStruct.g{i} = gi
Equation (5.7) defines a polyhedral partition of the state-input space over which the different
dynamics are active. Different segments of the polyhedral partition D are defined using so-
called guard lines, i.e. constraints on state and input variables. In general, the guard lines are
described by constraints of the form
Gix x(k) + Giu u(k) ≤ Gic
which are stored in the fields sysStruct.guardX{i}, sysStruct.guardU{i} and
sysStruct.guardC{i}.
In the PWA case, each field of the structure has to be a cell array of matrices of appropriate
dimensions. Each index i ∈ 1, 2, . . . , n corresponds to one PWA dynamics, i.e. to one tuple
[Ai, Bi, fi, Ci, Di, gi] and one set of constraints Gix x(k) + Giu u(k) ≤ Gic.
Unlike the LTI case, you can omit sysStruct.f and sysStruct.g if they are zero. All other
matrices have to be defined in the structure.
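The semantics of these cell arrays can be sketched as follows; this plain-Python fragment (an illustration using a hypothetical 1-D system, not MPT code) picks the dynamics whose guard is satisfied and applies the corresponding affine update:

```python
# Illustration (hypothetical 1-D PWA system, not MPT): the active dynamics i
# is the one whose guard Gx*x + Gu*u <= Gc holds; the update is then
# x+ = A_i*x + B_i*u + f_i.
pwa = [
    dict(Gx=1.0, Gu=0.0, Gc=0.0, A=0.5, B=1.0, f=0.0),   # active for x <= 0
    dict(Gx=-1.0, Gu=0.0, Gc=0.0, A=0.9, B=1.0, f=0.1),  # active for x >= 0
]

def pwa_step(x, u):
    """Apply the first dynamics whose guard is satisfied."""
    for dyn in pwa:
        if dyn["Gx"] * x + dyn["Gu"] * u <= dyn["Gc"]:
            return dyn["A"] * x + dyn["B"] * u + dyn["f"]
    raise ValueError("state-input pair lies outside the partition")
```

Note that points on a guard boundary may satisfy more than one guard; here the first matching dynamics is applied, mirroring the fact that the index i selects one tuple of matrices together with one set of guard constraints.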
We will illustrate modelling of PWA systems on the following example:
Example 5.1.2: Assume a frictionless car moving horizontally on a hill with different slopes, as
illustrated in Figure 5.1.
It can be seen that the speed of the car depends only on the force applied to the car (the
manipulated variable u) and on the slope of the road α. The slope differs between sectors of
the road. In particular we have:
Sector 1: p ≥ −0.5 ⇒ α = 0°
Sector 2: −3 ≤ p ≤ −0.5 ⇒ α = 10°
Sector 3: −4 ≤ p ≤ −3 ⇒ α = 0°
Sector 4: p ≤ −4 ⇒ α = −5°    (5.13)
Substituting the slopes α from (5.13) into (5.11), we obtain 4 tuples [Ai, Bi, fi, Ci, Di, gi] for
i ∈ 1, . . . , 4. Furthermore we need to define the parts of the state-input space where each
dynamics is active. We do that using the guard lines Gix x(k) + Giu u(k) ≤ Gic. With this
formulation we can describe each sector as follows:
Sector 1: [−1 0] x(k) ≤ 0.5
Sector 2: [1 0; −1 0] x(k) ≤ [−0.5; 3]
Sector 3: [1 0; −1 0] x(k) ≤ [−3; 4]
Sector 4: [1 0] x(k) ≤ −4    (5.14)
Note that the state vector x consists of two components (position and velocity) and that our
sectors do not depend on the value of the manipulated variable u, hence Gu is zero in our case
and can be omitted from the definition. Once the different dynamics and the corresponding
guard lines are defined, they must be linked together to tell MPT which dynamics is active in
which sector. To do so, one needs to fill out the system structure in a prescribed way, i.e. by
putting dynamics i and guard lines i at the same position in the corresponding cell arrays. If
you, for instance, put the guard lines defining sector 1 at the first position in the cell array
sysStruct.guardX, you link this sector with the proper dynamics by putting A1, B1, f1, C1, D1
at the first position in the corresponding fields as well. The whole system structure will then
look as follows:
• Sector 1 - Guard-lines and dynamics:
sysStruct.guardX{1} = [-1 0]
sysStruct.guardC{1} = 0.5
sysStruct.A{1} = [1 0.1; 0 1]
We now consider a slight extension of Example 5.1.2 and show how to define a PWA system
which also depends on the values of the manipulated variable(s) u.
Example 5.1.3: Assume the car on a PWA hill system as depicted in Figure 5.1. In addition to
the original setup we assume different behavior of the car when applying positive and negative
control action. In particular, we assume that the force u transmitted to the car is reduced by
half when u is negative. We can then consider two cases:
1. u ≥ 0:
dp/dt = v    (5.15)
m dv/dt = u − mg sin α    (5.16)
2. u ≤ 0:
dp/dt = v    (5.17)
m dv/dt = (1/2) u − mg sin α    (5.18)
With m = 1 and x = [p v]^T, discretization with a sampling time of 0.1 seconds leads to the
following state-space representation:
1. u ≥ 0:
x(k + 1) = [1 0.1; 0 1] x(k) + [0.005; 0.1] u(k) + [c; −g sin α]    (5.19)
         = A x(k) + B1 u(k) + fi    (5.20)
y(k) = [1 0; 0 1] x(k) + [0; 0] u(k) + [0; 0]    (5.21)
     = C x(k) + D u(k) + g    (5.22)
2. u ≤ 0:
x(k + 1) = [1 0.1; 0 1] x(k) + [0.0025; 0.05] u(k) + [c; −g sin α]    (5.23)
         = A x(k) + B2 u(k) + fi    (5.24)
y(k) = [1 0; 0 1] x(k) + [0; 0] u(k) + [0; 0]    (5.25)
     = C x(k) + D u(k) + g    (5.26)
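The numerical values of B1 and B2 can be cross-checked: for the double integrator, zero-order-hold discretization with sampling time Ts gives B = [b Ts²/2; b Ts], where b is the input gain (1 for u ≥ 0, 1/2 for u ≤ 0). A plain-Python sketch of this check (an illustration, not MPT code):

```python
# Illustration (not MPT): zero-order-hold discretization of the double
# integrator dp/dt = v, dv/dt = b*u with Ts = 0.1 reproduces the input
# matrices of (5.19) and (5.23): A = [1 Ts; 0 1], B = [b*Ts^2/2; b*Ts].
Ts = 0.1

def discretize(b):
    A = [[1.0, Ts], [0.0, 1.0]]
    B = [b * Ts ** 2 / 2.0, b * Ts]
    return A, B

A, B1 = discretize(1.0)   # u >= 0: full force
_, B2 = discretize(0.5)   # u <= 0: force reduced by half
```

Halving the input gain simply halves every entry of B, which is why B2 = [0.0025; 0.05] is exactly half of B1 = [0.005; 0.1].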
The value of the slope α again depends on the horizontal position of the car according to the
sector conditions (5.13). The model of such a system now consists of 8 PWA dynamics (4 for
positive u, 4 for negative u) which are defined over 8 sectors of the state-input space. Note that
the matrices A, C, D and g in (5.19)–(5.26) are constant and depend neither on the slope α nor
on the value of the control input u. With fi we abbreviate the matrices obtained by substituting
α from (5.13) into the equations of motion. B1 and B2 take different values depending on the
sign of the manipulated variable u. We can now define 8 segments of the state-input space and
link the dynamics to these sectors. We define these sectors using guard lines on states and
inputs as follows:
• Sectors for u ≥ 0:
Sector 1: p ≥ −0.5 ⇒ α = 0°
Sector 2: −3 ≤ p ≤ −0.5 ⇒ α = 10°
Sector 3: −4 ≤ p ≤ −3 ⇒ α = 0°
Sector 4: p ≤ −4 ⇒ α = −5°    (5.27)
• Sectors for u ≤ 0:
Sector 5: p ≥ −0.5 ⇒ α = 0°
Sector 6: −3 ≤ p ≤ −0.5 ⇒ α = 10°
Sector 7: −4 ≤ p ≤ −3 ⇒ α = 0°
Sector 8: p ≤ −4 ⇒ α = −5°    (5.28)
which we can translate into guard-line setup Gix x(k) + Giu u(k) ≤ Gic as follows:
• Sectors for u ≥ 0:
Sector 1: [−1 0; 0 0] x(k) + [0; −1] u(k) ≤ [0.5; 0]
Sector 2: [1 0; −1 0; 0 0] x(k) + [0; 0; −1] u(k) ≤ [−0.5; 3; 0]
Sector 3: [1 0; −1 0; 0 0] x(k) + [0; 0; −1] u(k) ≤ [−3; 4; 0]
Sector 4: [1 0; 0 0] x(k) + [0; −1] u(k) ≤ [−4; 0]    (5.29)
• Sectors for u ≤ 0:
Sector 5: [−1 0; 0 0] x(k) + [0; 1] u(k) ≤ [0.5; 0]
Sector 6: [1 0; −1 0; 0 0] x(k) + [0; 0; 1] u(k) ≤ [−0.5; 3; 0]
Sector 7: [1 0; −1 0; 0 0] x(k) + [0; 0; 1] u(k) ≤ [−3; 4; 0]
Sector 8: [1 0; 0 0] x(k) + [0; 1] u(k) ≤ [−4; 0]    (5.30)
Now we can define the system by filling out the system structure:
• Sector 1 - Guard-lines and dynamics:
sysStruct.guardX{1} = [-1 0; 0 0]
sysStruct.guardU{1} = [0; -1]
sysStruct.guardC{1} = [0.5; 0]
sysStruct.A{1} = A
sysStruct.B{1} = B_1
sysStruct.f{1} = f_1
sysStruct.C{1} = C
sysStruct.D{1} = D
• Sector 2 - Guard-lines and dynamics:
sysStruct.guardX{2} = [1 0; -1 0; 0 0]
sysStruct.guardU{2} = [0; 0; -1]
sysStruct.guardC{2} = [-0.5; 3; 0]
sysStruct.A{2} = A
sysStruct.B{2} = B_1
sysStruct.f{2} = f_2
sysStruct.C{2} = C
sysStruct.D{2} = D
... (sectors 3 to 7 are filled out analogously)
• Sector 8 - Guard-lines and dynamics:
sysStruct.guardX{8} = [1 0; 0 0]
sysStruct.guardU{8} = [0; 1]
sysStruct.guardC{8} = [-4; 0]
sysStruct.A{8} = A
sysStruct.B{8} = B_2
sysStruct.f{8} = f_4
sysStruct.C{8} = C
sysStruct.D{8} = D
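Since the velocity v enters none of the guard lines, the active sector is determined by the position p and the sign of u alone. The following plain-Python sketch (an illustration, not MPT code; boundary points belong to two sectors and this helper returns the lowest index) mimics the lookup encoded by (5.29)–(5.30):

```python
# Illustration (not MPT): sector lookup for the car example. Only the
# position p and the sign of u enter the guard lines (5.29)-(5.30); on
# sector boundaries, which belong to two sectors, the lowest index wins.
def position_sector(p):
    if p >= -0.5:
        return 1
    if -3.0 <= p <= -0.5:
        return 2
    if -4.0 <= p <= -3.0:
        return 3
    return 4

def active_sector(p, u):
    """Sectors 1-4 correspond to u >= 0, sectors 5-8 to u <= 0."""
    s = position_sector(p)
    return s if u >= 0 else s + 4
```

For example, a car at p = −2 is in sector 2 when pushed forward and in sector 6 when braked, matching how the same position guards are duplicated for the two signs of u.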
MPT can design control laws for discrete-time constrained linear, switched linear and hybrid
systems. Hybrid systems can be described in the Piecewise-Affine (PWA) or Mixed Logical
Dynamical (MLD) representation, and an efficient algorithm is provided to switch from one
representation to the other and vice-versa. For the user's convenience, models of dynamical
systems can be imported from various sources:
• Models of hybrid systems designed in the HYSDEL [29] language,
• MLD structures generated by mpt_pwa2mld,
• Nonlinear models defined by the mpt_nonlinfcn template,
• State-space and transfer function objects of the Control toolbox,
• System identification toolbox objects,
• MPC toolbox objects.
In order to import a dynamical system, one has to call
sysStruct = mpt_sys(object, Ts)
where object can either be a string (in which case the model is imported from the corresponding
HYSDEL source file), or a variable of one of the above mentioned object types. The second
input parameter Ts denotes the sampling time and can be omitted, in which case Ts = 1
is assumed.
Example 5.2.1: The following code first defines a continuous-time state-space object, which is
then imported into MPT:
% sampling time
Ts = 1;
Note: If the state-space object is already in the discrete-time domain, it is not necessary to
provide the sampling time parameter Ts to mpt_sys. After importing a model using mpt_sys
it is still necessary to define the system constraints as described previously.
Models of hybrid systems can be imported from HYSDEL source files (see the HYSDEL
documentation for more details on HYSDEL modelling), e.g.
sysStruct = mpt_sys('hysdelfile.hys', Ts)
Note: Hybrid systems modelled in HYSDEL are already defined in the discrete-time domain;
the additional sampling time parameter Ts is only used to set the sampling interval for
simulations. If Ts is not provided, it is set to 1.
The model of a hybrid system defined in hysdelfile.hys is first transformed into a Mixed
Logical Dynamical (MLD) form using the HYSDEL compiler, and an equivalent PWA
representation is then created by MPT. It is possible to avoid the PWA transformation by
calling mpt_sys appropriately, in which case only an MLD representation is created. Note,
however, that systems available only in MLD form can be controlled only with the on-line
MPC schemes.
After calling mpt_sys it is still necessary to define the system constraints as described in the
next section.
The output equation is in general given by the following relation for PWA systems
y ( k ) = Ci x ( k ) + D i u ( k ) + g i (5.31)
and by
y(k) = Cx(k) + Du(k) (5.32)
for LTI systems. It is therefore clear that by choosing C = I one can use these constraints to
restrict the system states as well. Min/max output constraints have to be given in the following
fields of the system structure:
sysStruct.ymax = outmax
sysStruct.ymin = outmin
sysStruct.xmax = xmax
sysStruct.xmin = xmin
The goal of each control technique is to design a controller which chooses a proper value of the
manipulated variable in order to achieve a given goal (usually to guarantee stability, but other
aspects like optimality may also be considered). In most real plants the values of manipulated
variables are restricted, and these constraints have to be taken into account in the controller
design procedure. These limitations are usually saturation constraints and can be captured by
min/max bounds. In MPT, constraints on the control input are given in:
sysStruct.umax = inpmax
sysStruct.umin = inpmin
Another important type of constraints are rate constraints. These restrict the variation of two
consecutive control inputs (δu = u(k) − u(k − 1)) to be within prescribed bounds. One can use
slew-rate constraints when a "smooth" control action is required, e.g. when controlling a gas
pedal in a car to prevent the car from jerking due to sudden changes of the controller action.
Min/max bounds on the slew rate can be given in:
sysStruct.dumax = slewmax
sysStruct.dumin = slewmin
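What the dumin/dumax fields require can be stated compactly: every difference of consecutive inputs must lie within the given bounds. A plain-Python sketch of this check (an illustration, not MPT code):

```python
# Illustration (not MPT): a sequence of inputs satisfies the slew-rate
# bounds iff every difference du = u(k) - u(k-1) lies in [dumin, dumax].
def satisfies_slew_rate(u_seq, dumin, dumax):
    return all(dumin <= b - a <= dumax for a, b in zip(u_seq, u_seq[1:]))
```

With bounds of plus/minus 1, the sequence [0, 0.5, 1.0, 0.8] is admissible, while a jump from 0 directly to 2 is not.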
MPT also supports one additional constraint, the so-called Pbnd constraint. If you define
sysStruct.Pbnd as a polytope object of the dimension of your state vector, this entry will be
used as a polytopic constraint on the initial condition, i.e.
x0 ∈ sysStruct.Pbnd
This is especially important for explicit controllers, since sysStruct.Pbnd limits the part of
the state space which will be explored. If sysStruct.Pbnd is not specified, it will be set to a
"large" box of the size defined by mptOptions.infbox (see help mpt_init for details).
Note: sysStruct.Pbnd does NOT impose any constraints on the predicted
states!
If you want to enforce polytopic constraints on predicted states, inputs and outputs, you
need to add them manually using the ”Design your own MPC” function described in Sec-
tion 6.4.
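The membership test behind sysStruct.Pbnd can be sketched as follows; this plain-Python fragment (an illustration with a hypothetical box {x : H x ≤ K}, not an MPT polytope object) checks whether an initial state lies in the polytope:

```python
# Illustration (hypothetical H-representation, not an MPT polytope object):
# x0 satisfies the Pbnd constraint iff H*x0 <= K holds row-wise. Here Pbnd
# is the box -5 <= x_i <= 5, written with one row of (H, K) per facet.
H = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
K = [5.0, 5.0, 5.0, 5.0]

def in_pbnd(x0):
    return all(h[0] * x0[0] + h[1] * x0[1] <= k for h, k in zip(H, K))
```

Only the initial condition x0 is checked against this polytope; as noted above, the predicted states are not constrained by Pbnd.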
MPT allows systems with discrete-valued control inputs to be defined. This is especially
important in the framework of hybrid systems, where control inputs are often required to
belong to a certain set of values. We distinguish between two cases:
1. All inputs are discrete
2. Some inputs are discrete, the rest are continuous
Typical applications of discrete-valued inputs are various on/off switches, gears, selectors, etc.
All of these can be modelled in MPT and taken into account in the controller design. Defining
discrete inputs is fairly easy; all you need to do is to fill out
sysStruct.Uset = Uset
where Uset is a cell array which defines all possible values for every control input. If your
system has, for instance, 2 control inputs, where the first one is just an on/off switch (i.e.
u1 ∈ {0, 1}) and the second one can take values from the set {−5, 0, 5}, you define them as
follows:
sysStruct.Uset{1} = [0, 1]
sysStruct.Uset{2} = [-5, 0, 5]
where the first line corresponds to u1 and the second to u2. If your system has only one manip-
ulated variable, the cell operator can be omitted, i.e. one could write:
sysStruct.Uset = [-1, 0, 1]
to state that the manipulated variable can only take values from the set {−1, 0, 1}. The
corresponding MPT model would look like this:
sysStruct.A = [1 1; 0 1]
sysStruct.B = [1; 0.5]
sysStruct.C = [1 0]
sysStruct.D = 0
sysStruct.Uset = [-1 0 1]
sysStruct.ymin = -10
sysStruct.ymax = 10
sysStruct.umax = 1
sysStruct.umin = -1
Notice that the constraints on control inputs umax, umin have to be provided even when the
manipulated variable is declared to be discrete.
Example 5.5.2: We consider the system defined in Example 5.5.1. In addition we assume that
for u ≤ 0 the dynamics of the system are driven by equation (5.33), while for u = 1 the
state-update equation takes the following form:
x(k + 1) = [1 1; 0 1] x(k) + [2; 1] u(k),    (5.35)
sysStruct.A = { [1 1; 0 1], [1 1; 0 1] }
sysStruct.B = { [1; 0.5], [2; 1] }
sysStruct.C = { [1 0], [1 0] }
sysStruct.D = { 0, 0}
sysStruct.guardX = { [0 0], [0 0] }
sysStruct.guardU = { 1, -1 }
sysStruct.guardC = { 0, -1 }
sysStruct.Uset = [-1 0 1]
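Conceptually, discrete-valued inputs turn the optimization into a search over the admissible input set. The following plain-Python sketch (an illustration, not an MPC solver) enumerates Uset for the system of Example 5.5.2 (dynamics 1 for u ≤ 0, dynamics 2 for u ≥ 1) and keeps the value with the smallest one-step cost ||x(k+1)||²:

```python
# Illustration (not an MPC solver): with discrete inputs the optimizer can
# simply enumerate Uset. Dynamics 1 (B = [1; 0.5]) is active for u <= 0,
# dynamics 2 (B = [2; 1]) for u >= 1, as encoded by the guards above.
Uset = [-1, 0, 1]

def step(x, u):
    B = [1.0, 0.5] if u <= 0 else [2.0, 1.0]
    return [x[0] + x[1] + B[0] * u, x[1] + B[1] * u]  # A = [1 1; 0 1]

def best_input(x):
    """Pick the admissible discrete input minimizing ||x(k+1)||^2."""
    return min(Uset, key=lambda u: sum(c * c for c in step(x, u)))

u_star = best_input([1.0, 0.0])
```

For x = [1; 0] the best one-step choice is u = −1, which drives the position to zero; a real MPC controller performs this enumeration over the whole prediction horizon.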
Mixed inputs
Mixed discrete and continuous inputs can be modelled by an appropriate choice of
sysStruct.Uset. For each continuous input it is necessary to set the corresponding entry to
[-Inf Inf], indicating to MPT that this particular input variable should be treated as a
continuous input. For a system with two manipulated variables, where the first one takes
values from the set {−2.5, 0, 3.5} and the second one is continuous, one would set:
sysStruct.Uset{1} = [-2.5, 0, 3.5]
sysStruct.Uset{2} = [-Inf, Inf]
State, input and output variables can be assigned text labels which override the default axis
labels in trajectory and partition plotting (xi, ui and yi, respectively). To assign a text label,
set the corresponding fields of the system structure, as is done e.g. in the Double Integrator
example. Each field is an array of strings corresponding to a given variable. If the user does
not define any (or some) labels, they will be replaced by the default strings (xi, ui and yi).
The strings are used once the polyhedral partition of the explicit controller or closed-loop
(open-loop) trajectories are visualized.
Both system types can be subject to constraints imposed on control inputs and sys-
tem outputs. In addition, constraints on slew rate of the control inputs can also be
given.
LTI systems
sysStruct.A = A;
sysStruct.B = B;
sysStruct.C = C;
sysStruct.D = D;
sysStruct.ymax = ymax;
sysStruct.ymin = ymin;
sysStruct.umax = umax;
sysStruct.umin = umin;
Constraints on slew rate of the control input u(k) can also be imposed by:
sysStruct.dumax = dumax;
sysStruct.dumin = dumin;
sysStruct.noise = W
where W is a polytope object bounding the disturbance. MPT also supports lower-dimensional
noise polytopes. If you want to define noise only on a subset of the system states, you can do
so by defining sysStruct.noise as a set of vertices representing the noise. Say you want to
impose a ±0.1 noise on x1, but no noise on x2. You can do that by:
sysStruct.noise = [0.1 -0.1; 0 0];
Just keep in mind that the noise polytope must have vertices stored column-wise.
A polytopic uncertainty can be specified by cell arrays of matrices Aunc and Bunc as fol-
lows:
sysStruct.Aunc = {A1, ..., An};
sysStruct.Bunc = {B1, ..., Bn};
PWA Systems
PWA systems are models for describing hybrid systems. Dynamical behavior of such systems
is captured by relations of the following form:
x ( k + 1) = A i x ( k ) + Bi u ( k ) + f i
y ( k ) = Ci x ( k ) + D i u ( k ) + g i
subj. to
ymin ≤ y(k) ≤ ymax
umin ≤ u(k) ≤ umax
∆umin ≤ u(k) − u(k − 1) ≤ ∆umax
Note that all fields have to be cell arrays of matrices of compatible dimensions, where n stands
for the total number of different dynamics. If sysStruct.guardU is not provided, it is assumed
to be zero.
System constraints are defined by:
sysStruct.ymax = ymax;
sysStruct.ymin = ymin;
sysStruct.umax = umax;
sysStruct.umin = umin;
sysStruct.dumax = dumax;
sysStruct.dumin = dumin;
x ( k + 1) = A i x ( k ) + Bi u ( k ) + f i + w ( k )
where the disturbance w(k) is assumed to be bounded for all time instances by some polytope
W. To indicate that your system is subject to such a disturbance, set
sysStruct.noise = W;
6 Control Design
For constrained linear and hybrid systems, MPT can design optimal and sub-optimal control
laws either in implicit form, where an optimization problem of finite size is solved on-line at
every time step and applied in a Receding Horizon Control (RHC) manner, or, alternatively,
it can solve the optimal control problem in a multi-parametric fashion. If the latter approach
is used, an explicit representation of the control law is obtained.
The solution to an optimal control problem can be obtained by a simple call of mpt_control.
The general syntax to obtain an explicit representation of the control law is:
ctrl = mpt_control(sysStruct, probStruct)
Based on the system definition described by sysStruct (cf. Section 5.7) and problem descrip-
tion provided in probStruct (cf. Section 6.9), the main control routine automatically calls
one of the functions reported in Table 6.1 to calculate the explicit solution to a given problem.
mpt_control first verifies that all mandatory fields in the sysStruct and probStruct
structures are filled out; if not, the procedure breaks with an appropriate error message. Note
that the validation process sets the optional fields to default values if they are not present in
the two respective structures. Again, an appropriate message is displayed.
Once the control law is calculated, the solution (here ctrl) is returned as an instance of the
mptctrl object. Internal fields of this object, described in Section 6.2, can be accessed directly
using the sub-referencing operator. For instance
Pn = ctrl.Pn;
will return the polyhedral partition of the explicit controller defined in the variable
ctrl.
Control laws can further be analyzed and/or implemented by functions reported in Chapters 8
and 9.
MPT provides a variety of control routines which are called from mpt_control. Depending
on the properties of the system model and the optimization problem, one of the following
control problems can be solved:
A. Constrained Finite Time Optimal Control (CFTOC) Problem.
B. Constrained Infinite Time Optimal Control Problem (CITOC).
C. Constrained Minimum Time Optimal Control (CMTOC) Problem.
D. Low complexity setup.
The problem which will be solved depends on the parameters of the system and problem
structures, namely on the type of the system (LTI or PWA), the prediction horizon (fixed or
infinite) and the level of sub-optimality (optimal solution, minimum-time solution, low
complexity). Different combinations of these three parameters lead to different optimization
procedures, as reported in Table 6.1.
See the documentation of the individual functions for more details. For a good overview of
receding horizon control we refer the reader to [24, 22].
The controller object includes all results obtained as a solution of a given optimal control
problem. In general, it describes the obtained control law and can be used both for analysis of
the solution, as well as for an implementation of the control law.
Fields of the object are summarized in Table 6.2. Every field can be accessed using the standard
. (dot) sub-referencing operator, e.g.
Pn = ctrl.Pn;
Fi = ctrl.Fi;
runtime = ctrl.details.runtime;
Pn: The polyhedral partition over which the control law is defined is returned in this field. It
is, in general, a polytope array.
Fi, Gi: The PWA control law for a given state x(k) is given by u = Fi{r} x(k) + Gi{r}. Fi
and Gi are cell arrays.
Ai, Bi, Ci: The value function is returned in these three cell arrays; for a given state x(k) it
can be evaluated as J = x(k)' Ai{r} x(k) + Bi{r} x(k) + Ci{r}, where the prime denotes the
transpose and r is the index of the active region, i.e. the region of Pn containing the given
state x(k).
Pfinal: In this field, the maximum (achieved) feasible set is returned. In general, it
corresponds to the union of all polytopes in Pn.
dynamics: A vector which denotes which dynamics is active in which region of Pn. (Only
important for PWA systems.)
details: More details about the solution, e.g. the total run time.
overlaps: Boolean variable denoting whether regions of the controller partition overlap.
sysStruct: System description in the sysStruct format.
probStruct: Problem description in the probStruct format.
Tab. 6.2: Fields of MPT controller objects.
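The way these fields are used on-line can be sketched as follows; this plain-Python fragment (an illustration with hypothetical 1-D controller data, not an mptctrl object) locates the region r of Pn containing x and evaluates u = Fi{r} x + Gi{r} and J = x' Ai{r} x + Bi{r} x + Ci{r}:

```python
# Illustration (hypothetical 1-D controller data, not an mptctrl object):
# find the region r of Pn containing x, then evaluate the control law
# u = Fi[r]*x + Gi[r] and value function J = Ai[r]*x^2 + Bi[r]*x + Ci[r].
Pn = [(-1.0, 0.0), (0.0, 1.0)]   # regions of Pn as intervals [lo, hi]
Fi = [-0.5, -1.0]
Gi = [0.0, 0.0]
Ai = [1.0, 2.0]
Bi = [0.0, 0.0]
Ci = [0.0, 0.0]

def evaluate(x):
    for r, (lo, hi) in enumerate(Pn):
        if lo <= x <= hi:
            return Fi[r] * x + Gi[r], Ai[r] * x * x + Bi[r] * x + Ci[r]
    raise ValueError("x lies outside the controller partition")
```

In the real toolbox the regions are polytopes rather than intervals and states are vectors, but the lookup-then-evaluate structure is the same.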
Once the explicit control law is obtained, the corresponding controller object is returned to the
user. The following functions can then be applied:
analyze
Analyzes a given explicit controller and suggests which actions to take in order to improve the
controller.
>> analyze(ctrl)
isexplicit
Returns 1 if a given controller is an explicit controller, and 0 if it describes an on-line MPC
controller.
>> isexplicit(expc)
ans =
     1
>> isexplicit(onlc)
ans =
     0
isinvariant
>> isinvariant(ctrl)
ans =
isstabilizable
>> isstabilizable(ctrl)
ans =
length
Returns the number of regions over which the explicit control law is defined.
>> length(ctrl)
ans =
25
plot
>> plot(ctrl)
runtime
>> runtime(ctrl)
ans =
0.5910
sim
The sim command computes trajectories of the closed-loop system subject to a given controller.
For a more detailed description, please see help mptctrl/sim.
simplot
The simplot command plots closed-loop trajectories of a given system subject to a given
control law. For a more detailed description, please see help mptctrl/simplot.
In order to obtain a control action associated to a given initial state x0 , it is possible to evaluate
the controller object as follows:
>> u = ctrl(x0)
ans =
-0.7801
This is the coolest feature in the whole history of MPT! And the credits go to Johan Löfberg
and his YALMIP [21]. The function mpt_ownmpc allows you to add (almost) arbitrary
constraints to an MPC setup and to define a custom objective function.
First we explain the general usage of the function. The design of custom MPC controllers is
divided into three parts:
1. Design phase. In this part, general constraints and a corresponding cost function are de-
signed.
2. Modification phase. In this part, the user is allowed to add custom constraints and/or to
modify the cost function
3. Computation phase. In this part, either an explicit or an on-line controller which respects
user constraints is computed.
Design phase
The aim of this step is to obtain the constraints which define a given MPC setup, along with
an associated cost function and the variables which represent system states, inputs and
outputs at various prediction steps. In order to obtain these elements for the case of explicit
MPC controllers, call:
>> [CON, OBJ, VARS] = mpt_ownmpc(sysStruct, probStruct);
Here the variable CON represents a set of constraints, OBJ denotes the optimization objective
and VARS is a structure with the fields VARS.x (predicted states), VARS.u (predicted inputs)
and VARS.y (predicted outputs). Each field is given as a cell array, where each element
corresponds to one step of the prediction (i.e. VARS.x{1} denotes the initial state x0,
VARS.x{2} is the first predicted state x1, etc.). If a particular variable is a vector, it can be
indexed directly to refer to a particular element, e.g. VARS.x{3}(1) refers to the first element
of the 2nd predicted state (i.e. x2).
Modification phase
Now you can start modifying the MPC setup by adding your own constraints and/or by modi-
fying the objective. See examples below for more information about this topic.
Note: You should always add constraints on system states (sysStruct.xmin, sysStruct.xmax),
inputs (sysStruct.umin, sysStruct.umax) and outputs (sysStruct.ymin, sysStruct.ymax)
if you either design a controller for PWA/MLD systems or intend to add logic constraints
later. Omitting these constraints will make your problem very badly scaled, which can lead
to very bad solutions.
Computation phase
Once you have modified the constraints and/or the objective according to your needs, you can
compute an explicit controller by
>> ctrl = mpt_ownmpc(sysStruct, probStruct, CON, OBJ, VARS);
Example 6.4.1 (Polytopic constraints 1): Say we would like to introduce polytopic constraints of
the form Hxk ≤ K on each predicted state, including the initial state x0 . To do that, we simply
add these constraints to our set CON:
for k = 1:length(VARS.x)
CON = CON + set(H * VARS.x{k} <= K);
end
You can now proceed with the computation phase, which will give you a controller which
respects given constraints.
Example 6.4.2 (Polytopic constraints 2): We now extend the previous example and add the spec-
ification that polytopic constraints should only be applied on the 1st, 3rd and 4th predicted
state, i.e. on x1, x3 and x4. It is important to notice that the variables contained in the VARS
structure are organized in cell arrays, where the first element of VARS.x corresponds to x0, i.e.
to the initial condition. Therefore, to meet our specifications, we would write the following
code:
for k = [1 3 4],
% VARS.x{1} corresponds to x(0)
% VARS.x{2} corresponds to x(1)
% VARS.x{3} corresponds to x(2)
% VARS.x{4} corresponds to x(3)
% VARS.x{5} corresponds to x(4)
% VARS.x{6} corresponds to x(5)
CON = CON + set(H * VARS.x{k+1} <= K);
end
Example 6.4.3 (Move blocking): Say we want to use a more complicated move-blocking scheme
with the following properties: u0 = u1, (u1 − u2) = (u2 − u3), and u3 = K x2. These
requirements can be implemented by
% u_0 == u_1
>> CON = CON + set(VARS.u{1} == VARS.u{2});
% (u_1-u_2) == (u_2-u_3)
>> CON = CON + set((VARS.u{2}-VARS.u{3}) == (VARS.u{3}-VARS.u{4}));
% u_3 == K*x_2
>> CON = CON + set(VARS.u{4} == K * VARS.x{3});
Example 6.4.4 (Mixed constraints): As illustrated in the move blocking example above, one can
easily create constraints which involve variables at various stages of the prediction. In addition,
it is also possible to add constraints which involve different types of variables. For instance, we
may want to add a constraint that the sum of control inputs and system outputs at each step
must be between certain bounds. This specification can be expressed by:
for k = 1:length(VARS.u)
CON = CON + set(lowerbound < VARS.y{k} + VARS.u{k} < upperbound);
end
Example 6.4.5 (Equality constraints): Say we want to add a constraint that the sum of all
predicted control actions along the prediction horizon should be equal to zero. This can easily
be done by summing up the input variables and constraining the result to zero:
usum = 0;
for k = 1:length(VARS.u)
    usum = usum + VARS.u{k};
end
CON = CON + set(usum == 0);
Example 6.4.6 (Constraints involving norms): We can extend the previous example and add a
specification that the sum of absolute values of all predicted control actions should be less
than some given bound. To achieve this goal, we can make use of the 1-norm function, which
exactly represents the sum of absolute values of each element. The same technique can also be
used, for instance, to require that the 1-norm of the predicted states is non-increasing along
the prediction:
for k = 1:length(VARS.x)-1
    CON = CON + set(norm(VARS.x{k+1}, 1) <= norm(VARS.x{k}, 1));
end
Note that these types of constraints are not convex, and the resulting problems will be difficult
to solve (time-wise).
% now define the complement of the "usafe" set versus some large box,
% to obtain the set of states which are "safe":
>> Pbox = unitbox(dimension(Punsafe), 100);
>> Psafe = Pbox \ Punsafe;
Here set(ismember(VARS.x{k}, Psafe)) will impose a constraint which tells MPT that it
must guarantee that the state xk belongs to at least one polytope of the polytope array Psafe,
and hence avoids the "unsafe" set Punsafe. Notice that this type of constraint requires
binary variables to be introduced, making the optimization problem difficult to solve.
Example 6.4.9 (Logic constraints): Logic constraints in the form of IF-THEN conditions can be
added as well. For example, we may want to require that if the first predicted input u0 is
smaller than or equal to zero, then the next input u1 has to be bigger than 0.5:
Notice that this constraint only acts in one direction, i.e. if u0 ≤ 0 then u1 ≥ 0.5, but it does not
say what should be the value of u1 if u0 > 0.
To add an ”if and only if” constraint, use the iff() operator:
which will guarantee that if u0 > 0, then the value of u1 will be smaller than 0.5.
Example 6.4.10 (Custom optimization objective): In this last example we show how to define your
own objective function. Depending on the value of probStruct.norm, the objective can
either be quadratic or linear. By default, it is defined according to standard MPC theory (see
help mpt_probStruct for details).
To write a custom cost function, simply sum up the terms you want to penalize. For instance,
the standard quadratic cost function can be defined by hand as follows:
OBJ = 0;
for k = 1:length(VARS.u),
% cost for each step is given by x’*Q*x + u’*R*u
OBJ = OBJ + VARS.x{k}’ * Q * VARS.x{k};
OBJ = OBJ + VARS.u{k}’ * R * VARS.u{k};
end
For 1/Inf-norm cost functions, you can use the overloaded norm() operator, e.g.
OBJ = 0;
for k = 1:length(VARS.u),
% cost for each step is given by ||Q*x|| + ||R*u||
OBJ = OBJ + norm(Q * VARS.x{k}, Inf);
OBJ = OBJ + norm(R * VARS.u{k}, Inf);
end
If you, for example, want to penalize deviations of the predicted outputs and inputs from
given time-varying trajectories, you can do so by defining a cost function as follows:
yref = [4 3 2 1];
uref = [0 0.5 0.1 -0.2];
OBJ = 0;
for k = 1:length(yref)
OBJ = OBJ + (VARS.y{k} - yref(k))’ * Qy * (VARS.y{k} - yref(k));
OBJ = OBJ + (VARS.u{k} - uref(k))’ * R * (VARS.u{k} - uref(k));
end
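The value of such a tracking objective for a candidate output/input trajectory can be sketched in plain Python (an illustration with scalar signals and hypothetical weights Qy and R, not MPT code):

```python
# Illustration (scalar signals, hypothetical weights Qy and R, not MPT):
# the tracking cost sums quadratic penalties on deviations of the outputs
# and inputs from the reference trajectories yref and uref.
yref = [4.0, 3.0, 2.0, 1.0]
uref = [0.0, 0.5, 0.1, -0.2]
Qy, R = 1.0, 0.1

def tracking_cost(y, u):
    return sum(Qy * (yk - yr) ** 2 + R * (uk - ur) ** 2
               for yk, yr, uk, ur in zip(y, yref, u, uref))
```

A trajectory matching the references exactly incurs zero cost; any deviation is penalized quadratically, which is what the MATLAB loop above expresses with matrix weights.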
Example 6.4.11 (Defining new variables): Remember the avoidance example? There we used
constraints to tell the controller that it should avoid a given set of unsafe states. Let us now
modify that example a bit. Instead of adding constraints, we will introduce a binary variable
which takes a true value if the system states are inside a given location. Subsequently we will
add a high penalty on that variable, which tells the MPC controller that it should avoid the
set if possible.
Example 6.4.12 (Removing constraints): When mpt_ownmpc constructs the constraints and
objective, it adds constraints on system states, inputs and outputs, provided they are defined
in the respective fields of sysStruct. However, one may want to remove certain constraints,
for instance the target set constraint imposed on the last predicted state. To do so, first notice
that each constraint has an associated string tag:
>> Double_Integrator
>> sysStruct.xmax = sysStruct.ymax; sysStruct.xmin = sysStruct.ymin;
>> probStruct.N = 2;
>> [CON, OBJ, VARS] = mpt_ownmpc(sysStruct, probStruct);
>> CON
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
| ID| Constraint| Type| Tag|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
| #1| Numeric value| Element-wise 2x1| umin < u_1 < umax|
| #2| Numeric value| Element-wise 4x1| xmin < x_1 < xmax|
| #3| Numeric value| Element-wise 4x1| xmin < x_2 < xmax|
| #4| Numeric value| Element-wise 4x1| ymin < y_1 < ymax|
| #5| Numeric value| Equality constraint 2x1| x_2 == A*x_1 + B*u_1|
| #6| Numeric value| Equality constraint 2x1| y_1 == C*x_1 + D*u_1|
| #7| Numeric value| Element-wise 6x1| x_2 in Tset|
| #8| Numeric value| Element-wise 4x1| x_0 in Pbnd|
| #9| Numeric value| Element-wise 2x1| umin < u_0 < umax|
| #10| Numeric value| Element-wise 4x1| xmin < x_0 < xmax|
| #11| Numeric value| Element-wise 4x1| ymin < y_0 < ymax|
| #12| Numeric value| Equality constraint 2x1| x_1 == A*x_0 + B*u_0|
| #13| Numeric value| Equality constraint 2x1| y_0 == C*x_0 + D*u_0|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
To remove certain constraints, all you need to do is subtract the constraint you want to get rid
of, identified by its tag. For instance
| #2| Numeric value| Element-wise 4x1| xmin < x_1 < xmax|
| #3| Numeric value| Element-wise 4x1| ymin < y_1 < ymax|
| #4| Numeric value| Equality constraint 2x1| x_2 == A*x_1 + B*u_1|
| #5| Numeric value| Equality constraint 2x1| y_1 == C*x_1 + D*u_1|
| #6| Numeric value| Element-wise 4x1| x_0 in Pbnd|
| #7| Numeric value| Element-wise 2x1| umin < u_0 < umax|
| #8| Numeric value| Element-wise 4x1| xmin < x_0 < xmax|
| #9| Numeric value| Element-wise 4x1| ymin < y_0 < ymax|
| #10| Numeric value| Equality constraint 2x1| x_1 == A*x_0 + B*u_0|
| #11| Numeric value| Equality constraint 2x1| y_0 == C*x_0 + D*u_0|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
will remove any state constraints imposed on the last predicted state x_2. Alternatively, it is also possible to identify constraints by their index (the ID number in the first column of the above table). For example, to remove the constraint on u_0 (constraint number 7 in the list above), one can do
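the following, assuming indexed deletion on the constraint set (a sketch, not the manual's verbatim listing):

```matlab
% Remove constraint #7 (umin < u_0 < umax) by its position in the list:
CON(7) = [];
```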
Notice that while the string tags associated with each constraint remain fixed, the relative position of a constraint, given by its ID number, may change as other constraints are added or removed.
6.5 Soft constraints

Since MPT 2.6 it is possible to denote certain constraints as soft. This means that the respective constraint can be violated, but such a violation is penalized. To soften certain constraints, it is necessary to define a penalty on their violation:
• probStruct.Sx - if given as an nx-by-nx matrix, all state constraints will be treated as soft constraints, and violations will be penalized by the value of this field.
• probStruct.Su - if given as an nu-by-nu matrix, all input constraints will be treated as soft constraints, and violations will be penalized by the value of this field.
• probStruct.Sy - if given as an ny-by-ny matrix, all output constraints will be treated as soft constraints, and violations will be penalized by the value of this field.
In addition, one can also specify the maximum value by which a given constraint may be exceeded:
• probStruct.sxmax - must be given as an nx-by-1 vector, where each element defines the maximum admissible violation of the corresponding state constraint.
• probStruct.sumax - must be given as an nu-by-1 vector, where each element defines the maximum admissible violation of the corresponding input constraint.
• probStruct.symax - must be given as an ny-by-1 vector, where each element defines the maximum admissible violation of the corresponding output constraint.
The aforementioned fields also allow one to specify that only a subset of the state, input, or output constraints should be treated as soft, while the rest remain hard. Say, for instance, that we have a system with 2 states and we want to soften only the second state constraint. Then we would write:
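A sketch of such a setup (the field values follow the description below):

```matlab
% Penalize violations of state constraints by 1000:
probStruct.Sx = 1000;
% Keep the first state constraint hard (0), allow the second one
% to be exceeded by at most 10:
probStruct.sxmax = [0; 10];
```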
Here probStruct.sxmax(1)=0 tells MPT that the first constraint should be treated as hard, while the second constraint may be exceeded by at most 10, and every such violation is penalized by a factor of 1000.
Please note that soft constraints are not available for the minimum-time (probStruct.subopt_lev=1) and low-complexity (probStruct.subopt_lev=2) strategies.
6.6 Control of time-varying systems

Time-varying system dynamics or systems with time-varying constraints can also be used for the synthesis of optimal controllers. There are a couple of limitations, though:
• The number of states, inputs and outputs must remain identical for each system.
• You cannot use time-varying systems in the time-optimal (probStruct.subopt_lev=1) and low-complexity (probStruct.subopt_lev=2) strategies.
To tell MPT that it should consider a time-varying system, define one system structure for each
step of the prediction, e.g.
>> Double_Integrator
>> S1 = sysStruct;
>> S2 = sysStruct; S2.C = 0.9*S1.C;
>> S3 = sysStruct; S3.C = 0.8*S1.C;
Here we have three different models which differ in the C matrix. Now we can define the time-varying model as a cell array of system structures:
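A minimal sketch of the elided listing:

```matlab
% Time-varying model: S1 governs the first prediction step,
% S2 the second, S3 the third.
model = {S1, S2, S3};
```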
Notice that the order of systems in the model variable determines that system S1 will be used to predict state x(1), the predicted value of x(2) will be determined from model S2, and so on. Once the model is defined, you can compute either the explicit or an on-line MPC controller using the standard syntax:
>> Double_Integrator
>> S1 = sysStruct; S1.ymax = [5; 5]; S1.ymin = [-5; -5];
>> S2 = sysStruct; S2.ymax = [4; 4]; S2.ymin = [-4; -4];
>> S3 = sysStruct; S3.ymax = [3; 3]; S3.ymin = [-3; -3];
>> S4 = sysStruct; S4.ymax = [2; 2]; S4.ymin = [-2; -2];
>> probStruct.N = 4;
>> ctrl = mpt_control({S1, S2, S3, S4}, probStruct);
You can go as far as combining different classes of dynamical systems at various stages of the prediction; for instance, you can arbitrarily combine linear, Piecewise-Affine (PWA) and Mixed Logical Dynamical (MLD) systems. You could, for example, use a detailed PWA model for the first prediction step, while keeping a simple LTI model for the rest:
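A sketch of such a mixed model; the structure names pwa_sys and lti_sys are hypothetical placeholders for system structures defined elsewhere:

```matlab
% PWA model for the first prediction step, LTI model afterwards:
model = {pwa_sys, lti_sys, lti_sys, lti_sys};
ctrl = mpt_control(model, probStruct);
```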
6.7 On-line MPC for nonlinear systems

With MPT 2.6 you can now solve on-line MPC problems based on nonlinear or piecewise nonlinear systems. In order to define models of such systems, one has to create a special function based on the mpt_nonlinfcn.m template (see for instance the duffing_oscillator.m or pw_nonlin.m examples contained in your MPT distribution). Once the describing function is defined, you can use mpt_sys to convert it into a format suitable for further computation:
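A sketch of the conversion call:

```matlab
% function_name = name of the describing function you created
sysStruct = mpt_sys(@function_name);
```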
where function_name is the name of the function you have just created. Now you can construct an on-line MPC controller using the standard syntax:
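A hedged sketch of the construction call; the 'online' flag requests an on-line controller (explicit solutions are not available for nonlinear systems):

```matlab
% Construct an on-line MPC controller for the nonlinear model:
ctrl = mpt_control(sysStruct, probStruct, 'online');
```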
After that you can use the controller either in Simulink, or in Matlab-based simulations invoked either by
>> u = ctrl(x0);
or by
>> u = mpt_getInput(ctrl, x0);
Note: nonlinear problems are very difficult to solve, so don't be surprised by long computation times. Check the help of mpt_getInput for a description of parameters which can affect the quality and speed of the nonlinear solvers. Also note that currently only polynomial nonlinearities are supported, i.e. no 1/x terms or log/exp functions are allowed. Moreover, don't even try to use nonlinear models for things like reachability or stability analysis; it won't work.
6.8 Move blocking

Move blocking is a popular technique used to decrease the complexity of MPC problems. In this strategy the number of free control moves is kept low, while the remaining control moves are assumed fixed. To enable move blocking in MPT, define the control horizon in probStruct.Nc:
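A minimal sketch, assuming the probStruct.Nc field:

```matlab
probStruct.N = 5;    % prediction horizon
probStruct.Nc = 2;   % control horizon: only u_0 and u_1 are free
```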
where Nc specifies the number of free control moves; this value should be less than the prediction horizon probStruct.N. Control moves u_0 up to u_{Nc-1} will then be treated as free control moves, while u_{Nc}, ..., u_{N-1} will be kept identical to u_{Nc-1}, i.e. u_k = u_{Nc-1} for k = Nc, ..., N-1.
6.9 Problem Structure probStruct

The optimal control problem with a linear performance index is given by:

min_{u(0),...,u(N-1)}  ||P_N x(N)||_p + ∑_{k=0}^{N-1} ( ||R u(k)||_p + ||Q x(k)||_p )
subj. to
x(k + 1) = f dyn ( x(k), u(k), w(k))
umin ≤ u(k) ≤ umax
∆umin ≤ u(k) − u(k − 1) ≤ ∆umax
ymin ≤ gdyn ( x(k), u(k)) ≤ ymax
x( N ) ∈ Tset
where:
u vector of manipulated variables over which the optimization is performed
N prediction horizon
p linear norm, can be 1 or Inf for 1- and Infinity-norm, respectively
Q weighting matrix on the states
R weighting matrix on the manipulated variables
PN weight imposed on the terminal state
umin , umax constraints on the manipulated variable(s)
∆umin , ∆umax constraints on slew rate of the manipulated variable(s)
ymin , ymax constraints on the system outputs
Tset terminal set
the function f dyn ( x(k), u(k), w(k)) is the state-update function and is different for LTI and for
PWA systems (see Section 5.7 for more details).
In the case of a performance index based on quadratic forms, the optimal control problem takes the following form:

min_{u(0),...,u(N-1)}  x(N)^T P_N x(N) + ∑_{k=0}^{N-1} ( u(k)^T R u(k) + x(k)^T Q x(k) )
subj. to
x(k + 1) = f dyn ( x(k), u(k), w(k))
umin ≤ u(k) ≤ umax
∆umin ≤ u(k) − u(k − 1) ≤ ∆umax
ymin ≤ gdyn ( x(k), u(k)) ≤ ymax
x( N ) ∈ Tset
For N = ∞, the Constrained Infinite Time Optimal Control (CITOC) problem is formulated. The objective of the optimization is to choose the manipulated variables such that the performance index is minimized.
In order to specify which problem the user wants to solve, mandatory fields of the problem
structure probStruct are listed in Table 6.3.
Level of Optimality
7 Analysis and Post-Processing

The toolbox offers broad functionality for analysis of hybrid systems and verification of safety and liveness properties of explicit control laws. In addition, stability of closed-loop systems can be verified using different types of Lyapunov functions.
MPT can compute forward N-step reachable sets for linear and hybrid systems, assuming the system input either belongs to some bounded set of inputs, or is driven by a given explicit control law.
To compute the set of states which are reachable from a given set of initial conditions X0 in N steps, assuming the system input u(k) ∈ U0, one has to call:
R = mpt_reachSets(sysStruct, X0, U0, N);
where sysStruct is the system structure, X0 is a polytope which defines the set of initial conditions (x(0) ∈ X0), U0 is a polytope which defines the set of admissible inputs, and N is an integer which specifies for how many steps the reachable set should be computed. The resulting reachable sets R are returned as a polytope array. We illustrate the computation on the following example:
Example 7.1.1: First we define the dynamical system for which we want to compute reachable
sets
Now we can define a set of initial conditions X0 and a set of admissible inputs U0 as polytope
objects.
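For illustration, the sets could be built from unitbox objects; the concrete values are placeholders, not the manual's original example:

```matlab
% X0: a small box of initial states, U0: admissible inputs (assumed values)
X0 = unitbox(2, 0.1);   % box in R^2 with half-width 0.1
U0 = unitbox(1, 1);     % box in R^1 with half-width 1
```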
N = 50;
R = mpt_reachSets(sysStruct, X0, U0, N);
The reachable sets (green) as well as the set of initial conditions (red) are depicted in Figure 7.1.
Figure 7.1: Reachable sets (green) and the set of initial conditions X0 (red), plotted in the (x1, x2) plane.
To compute reachable sets for linear or hybrid systems whose inputs are driven by an explicit control law, the following syntax can be used:
R = mpt_reachSets(ctrl, X0, N);
where ctrl is the controller object as generated by mpt_control, X0 is a polytope which defines the set of initial conditions (x(0) ∈ X0), and N is an integer which specifies for how many steps the reachable set should be computed. The resulting reachable sets R are again returned as a polytope array.
Example 7.1.2: In this example we illustrate the reachability computation on the double integra-
tor example
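The elided setup can be sketched as follows (the controller settings and set sizes are assumptions, not the manual's exact values):

```matlab
Double_Integrator                 % load the double integrator demo
ctrl = mpt_control(sysStruct, probStruct);   % explicit controller
X0 = unitbox(2, 1) + [3; 0];      % set of initial conditions (assumed)
N = 5;                            % number of steps (assumed)
R = mpt_reachSets(ctrl, X0, N);
```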
% plot results
plot(ctrl.Pn, ’y’, X0, ’r’, R, ’g’);
The reachable sets (green) as well as the set of initial conditions (red) are depicted on top of
the controller regions (yellow) in Figure 7.2.
Figure 7.2: Reachable sets (green), set of initial conditions X0 (red), controller regions (yellow).
7.2 Verification
Reachability computation can be directly extended to answer the following question: Do states
of a dynamical system (whose inputs either belong to some set of admissible inputs, or whose in-
puts are driven by an explicit control law) enter some set of “unsafe” states in a given number of
steps?
Example 7.2.1: In this example we show how to answer the verification question for the first
case, i.e. system inputs belong to some set of admissible inputs (u(k) ∈ U0 ). Although we use
a linear system here, exactly the same procedure applies to hybrid systems in PWA represen-
tation as well.
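The elided set definitions can be sketched as follows (all values are placeholders):

```matlab
Double_Integrator                 % load a demo system structure (assumed)
X0 = unitbox(2, 1) + [3; 0];      % set of initial conditions (assumed)
Xf = unitbox(2, 0.5) - [3; 0];    % set of "unsafe" states (assumed)
U0 = unitbox(1, 1);               % set of admissible inputs (assumed)
```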
% number of steps
N = 50;
% perform verification
[canreach, Nf] = mpt_verify(sysStruct, X0, Xf, N, U0);
If the system states can reach the set Xf, canreach will be true; otherwise the function returns false. In case Xf can be reached, the optional second output argument Nf returns the number of steps in which Xf can be reached from X0.
Example 7.2.2: It is also possible to answer the verification question if the system inputs are
driven by an explicit control law:
X0 = unitbox(2,1) + [3;0];
% number of steps
N = 10;
% perform verification
[canreach, Nf] = mpt_verify(expc, X0, Xf1, N);
For controllers for which no feasibility guarantee can be given a priori, the function mpt_invariantSet can compute an invariant subset of a controller, such that constraint satisfaction is guaranteed for all time:
ctrl_inv = mpt_invariantSet(ctrl)
In terms of stability analysis, MPT offers functions which aim at identifying quadratic, sum-of-squares, piecewise quadratic, piecewise affine or piecewise polynomial Lyapunov functions. If such a function is found, it can be used to show stability of the closed-loop system even in cases where no such guarantee can be given a priori. To compute a Lyapunov function, one has to call
ctrl_lyap = mpt_lyapunov(ctrl, lyaptype)
where ctrl is an explicit controller and lyaptype is a string parameter which defines the type of Lyapunov function to compute. Allowed values of the second parameter are summarized in Table 7.1. Parameters of the Lyapunov function, if one exists, will be stored in
lyapfunction = ctrl_lyap.details.lyapunov
MPT also addresses the issue of complexity reduction of the resulting explicit control laws. As mentioned in previous sections, in order to apply an explicit controller to a real plant, the proper control law has to be identified. This involves checking which region of an explicit controller contains a given measured state. Although this effort is usually small, it can become prohibitive for very complex controllers with several thousand or more regions. MPT therefore allows one to reduce this complexity by simplifying the controller partition over which the control law is defined. This simplification is performed by merging regions which contain the same expression of the control law. By doing so, the number of regions is greatly reduced, while maintaining the same performance as the original controller. Results of the merging procedure for a sample explicit controller of a hybrid system are depicted in Figure 7.3.
Figure 7.3: (a) Regions of an explicit controller before simplification (252 regions). (b) Regions of the explicit controller after simplification (39 regions).
To simplify the representation of a given explicit controller by merging regions which contain
the same control law, one has to call:
ctrl_simple = mpt_simplify(ctrl)
If the function is called as indicated above, heuristic merging will be used. It is also possible to use optimal merging based on boolean minimization:
ctrl_simple = mpt_simplify(ctrl, 'optimal')
Note, however, that optimal merging can be prohibitive for dimensions above 2.
8 Implementation of Control Law

8.1 Algorithm

The control law obtained as a result of mpt_control is stored in the respective controller object mptctrl (see Section 6.2 for more details). The explicit controller takes the form of a Piecewise Affine control law where the actual control action is given by

u(k) = (F^r + K) x(k) + G^r    (8.1)

where the superindex r denotes the active region, i.e. the region which contains the given state x(k). If the solution was obtained with feedback pre-stabilization enabled (probStruct.feedback=1), K is the feedback gain (either user-provided, or computed and stored in ctrl.probStruct.FBgain). K will be zero if pre-stabilization was not requested.
In the controller structure, the matrices F^r and G^r are stored as cell arrays in the fields ctrl.Fi and ctrl.Gi. Regions of the state-space where each affine control law (8.1) is active are stored as a polytope array in the field ctrl.Pn. The cost associated to a given state x(k) can therefore easily be obtained by evaluating the cost expression

J(x(k)) = x(k)^T A^r x(k) + B^r x(k) + C^r    (8.2)

whose parameters are stored in the fields ctrl.Ai, ctrl.Bi and ctrl.Ci.
The procedure to obtain the control action for a given state x(k) therefore reduces to a simple membership test. First, the index of the active region r has to be identified. Since the polyhedral partition is a polytope object, the function isinside will return the indices of all regions which contain the given state x(k). Since certain types of optimization problems naturally generate overlapping regions, the active region is the one in which the cost expression (8.2) is minimal. Once the active region is identified, the control action is calculated according to (8.1) and can be applied to the system.
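The membership test can be sketched as follows (field names from Section 6.2; the error handling and the assumption of a quadratic cost are illustrative):

```matlab
% Evaluate an explicit controller at state x (sketch of Algorithm 8.1.1):
[isin, inwhich] = isinside(ctrl.Pn, x);   % indices of regions containing x
if ~isin
    error('x lies outside of the controller partition');
end
J = zeros(length(inwhich), 1);
for i = 1:length(inwhich)                 % resolve overlaps via the cost (8.2)
    r = inwhich(i);
    J(i) = x'*ctrl.Ai{r}*x + ctrl.Bi{r}*x + ctrl.Ci{r};
end
[minJ, imin] = min(J);
r = inwhich(imin);                        % active region
u = ctrl.Fi{r}*x + ctrl.Gi{r};            % control action (8.1)
```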
If the optimal control problem was solved for a fixed prediction horizon N, evaluation of (8.1) gives a vector of control moves which minimize the given performance criterion, i.e.

U := [u(0)^T u(1)^T ... u(N-1)^T]^T    (8.3)

When applying the obtained control law in closed loop, only the first input u(0) is extracted from the sequence U and applied to the system. This policy is referred to as the Receding Horizon Policy.
The algorithm to identify the active control law is summarized below:
8.2 Implementation
Algorithm 8.1.1 is implemented by the function mpt_getInput. The syntax of the function is the following:
u = mpt_getInput(ctrl, x0)
In addition, the function mpt_getOptimizer can be used to extract the full sequence of optimizer variables which minimizes the given performance criterion. Note that, unlike Algorithm 8.1.1, mpt_getOptimizer does not take overlaps into account. This is because overlapping regions are usually not generated by the mpLP and mpQP algorithms implemented in MPT.
The function sim calculates the open-loop or closed-loop state evolution from a given initial
state x0 . In each time step, the optimal control action is calculated according to Algorithm 8.1.1
by calling mpt getInput. Subsequently, the obtained control move is applied to the system to
obtain the successor state x(k + 1). The evolution is terminated once the state trajectory reaches
the origin. Because of numerical issues, a small box centered at the origin is constructed and the evolution is stopped as soon as all states enter this small box. The size of the box can be specified by the user. For tracking problems, the evolution is terminated when all states reach their respective
reference signals. Validation of input and output constraints is performed automatically and
the user is provided with a textual output if the bounds are exceeded.
General syntax is the following:
[X,U,Y]=sim(ctrl,x0)
[X,U,Y]=sim(ctrl,x0,N)
[X,U,Y]=sim(ctrl,x0,N,Options)
[X,U,Y,cost,feasible]=sim(ctrl,x0,N)
[X,U,Y,cost,feasible]=sim(ctrl,x0,N,Options)
where the input and output arguments are summarized in Table 8.2. Note: if the third argument is an empty matrix (N = []), the evolution will be stopped automatically once the system states (or system outputs) reach a given reference point with a pre-defined tolerance.
The trajectories can be visualized using the simplot function:
simplot(ctrl)
simplot(ctrl, x0)
simplot(ctrl, x0, N)
If x0 is not provided and the controller partition is in R2 , you will be able to specify the initial state just by clicking
on the controller partition.
It is possible to specify your own dynamical system to use for simulations. In this case, control actions obtained by a given controller can be applied to a different system than the one which was used for computing the controller:
Note that the N and Options arguments are optional. You can specify your own dynamics in two
ways:
1. By setting the system parameter to a system structure, i.e.
sim(ctrl, sysStruct, x0, N, Options)
2. By setting the system parameter to a handle of a function which will provide updates of system states in a
discrete-time fashion:
sim(ctrl, @sim_function, x0, N, Options)
Take a look at help di_sim_fun on how to write simulation functions compatible with this feature.
>> mpt_sim
mpt_exportc(ctrl)
mpt_exportc(ctrl, filename)
If the function is called with only one input argument, a file called mpt_getInput.h will be created in the working directory. The filename can be changed by providing a second input argument to mpt_exportc. The header file is then compiled along with mpt_getInput.c and your target application. For more information, see the demo in mpt/examples/ccode/mpt_example.c:
where the filename argument specifies the name of the file which should be created. The controller ctrl used in this example must have a search tree stored inside. If it does not, use the mpt_searchTree function to calculate it first:
ctrl = mpt_searchTree(ctrl)
9 Visualization
MPT provides various functions for the visualization of polytopes, polyhedral partitions, control laws, value functions, and general PWA and PWQ functions defined over polyhedral partitions. Part of the functions operate directly on the resulting controller object ctrl obtained by mpt_control, while other functions accept more general input arguments. Please consult the help files of individual functions for more details.
plot(ctrl)
simplot(ctrl)
which allows one to pick the initial state x(0) by a mouse click, provided the controller object represents an explicit controller and the dimension of the associated polyhedral partition is equal to 2. Subsequently, the state trajectory is calculated and plotted on top of the polyhedral partition over which the control law is defined. If the solution was obtained for a tracking problem, the user is first prompted to choose the reference point, again by a mouse click. Afterwards, the initial state x(0) has to be selected. Finally, the evolution of the states is again plotted versus the polyhedral partition.
If the same command is used with additional input arguments, e.g. simplot(ctrl, x0, horizon), then the computed trajectories are visualized with respect to time. The system is not limited in dimension or number of manipulated variables. Unlike the point-and-click interface, the initial point x(0) has to be provided by the user. In addition, the maximal number of steps can be specified in horizon. If this variable is missing, or set to an empty matrix, the evolution will continue until the origin (or the reference point for tracking problems) is reached.
An additional optional argument Options can be provided to specify further requirements. Similarly as described in Section 8.2, the simplot function also allows the user to use different system dynamics when calculating the system evolution. Check the help description of mptctrl/simplot for more details.
Piecewise Affine (PWA) functions take the form

f(x) = L^r x + C^r   if x ∈ P_n^r    (9.1)

where the superindex r indicates that the expression for the function is different in every region r of the polyhedral partition P_n.
Piecewise Quadratic (PWQ) functions can be described as follows:

f(x) = x^T M^r x + L^r x + C^r   if x ∈ P_n^r    (9.2)

Again, the expression varies in different regions of the polyhedral set P_n.
MPT allows you to visualize both aforementioned types of functions.
The command
mpt_plotPWA(Pn, L, C)
plots the PWA function (9.1) defined over the polyhedral partition Pn. A typical application of this function is to visualize the control law or value function obtained as a solution to a given optimal control problem. For the first case (visualization of the control action), one would type:
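a hedged one-liner, using the controller fields introduced in Section 6.2:

```matlab
% Fi/Gi define the PWA control law over the partition Pn:
mpt_plotPWA(ctrl.Pn, ctrl.Fi, ctrl.Gi)
```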
to get the desired result. The same limitation applies also in this case.
Piecewise quadratic functions defined by (9.2) can be plotted by function
mpt_plotPWQ(Pn, Q, L, C, meshgridpoints)
The inputs are the polytope array Pn and the cell arrays Q, L and C. When plotting a PWQ function, the space covered by Pn has to be divided into a mesh grid. The fourth input argument (meshgridpoints) states into how many points each axis of the space of interest should be divided; the default value is 30. Note that the dimension of Pn has to be at most 2.
MPT provides a "shortcut" function to plot the value of the control action over the polyhedral partition directly, without the need to pass each input (Pn, L, C) separately:
mpt_plotU(ctrl)
If the function is called with a valid controller object, the value of the control action in each region will be depicted. If the polyhedral partition Pn contains overlapping regions, the user will be prompted to run the appropriate reduction scheme (mpt_removeOverlaps) first to get a proper result. See help mpt_plotU for more details.
Similarly, values of the cost function associated to a given explicit controller can be plotted by
mpt_plotJ(ctrl)
Also in this case the partition is assumed to contain no overlaps. See help mpt_plotJ for more details and a list of available options.
10 Examples
In order to obtain a feedback controller, it is necessary to specify both the system and the problem. We demonstrate the procedure on a simple second-order double integrator, with bounded input |u| ≤ 1 and output ||y(k)||∞ ≤ 5:
Example 10.0.1:
For this system we will now formulate the problem with quadratic cost objective in (3.12) and a prediction horizon
of N = 5:
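The elided listings can be sketched as follows; the matrix values are the usual double-integrator demo values and should be treated as assumptions:

```matlab
% System: double integrator x(k+1) = A x(k) + B u(k), y = x
sysStruct.A = [1 1; 0 1];
sysStruct.B = [1; 0.5];
sysStruct.C = eye(2);
sysStruct.D = [0; 0];
sysStruct.umin = -1;        sysStruct.umax = 1;        % |u| <= 1
sysStruct.ymin = [-5; -5];  sysStruct.ymax = [5; 5];   % ||y||_inf <= 5

% Problem: quadratic cost, prediction horizon N = 5
probStruct.norm = 2;
probStruct.Q = eye(2);
probStruct.R = 1;
probStruct.N = 5;
```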
If we now call
>> ctrl = mpt_control(sysStruct, probStruct);
the controller for the given problem is returned and can be plotted (see Figure 10.1(a)), i.e., if the state x ∈ PA(i), then the optimal input for prediction horizon N = 5 is given by u = Fi{i} x + Gi{i}. If we wish to compute a low complexity solution, we can run the following:
Figure 10.1: (a) The N = 5 step optimal feedback solution. (b) The iterative low complexity solution for the double integrator. (c) Lyapunov function for the low complexity solution.
>> Q = ctrl.details.lyapunov.Q;
>> L = ctrl.details.lyapunov.L;
>> C = ctrl.details.lyapunov.C;
>> mpt_plotPWQ(ctrl.finalPn,Q,L,C); % Plot the Lyapunov Function
The resulting partition and Lyapunov function are depicted in Figures 10.1(b) and 10.1(c), respectively. In the following we will solve the PWA problem introduced in [23] by defining two different dynamics in the left and right half-planes of the state space, respectively.
Example 10.0.2:
We can now compute the low complexity feedback controller by defining the problem
>> ctrl=mpt_control(sysStruct,probStruct);
>> plot(ctrl)
Figure 10.2: Controller partition of the low complexity solution, plotted in the (x1, x2) plane.
For more examples we recommend looking at the demos which can be found in the respective subdirectories of the mpt/examples directory of your MPT installation.
11 Polytope Library
As already mentioned in Section 3.1, a polytope is a convex bounded set which can be represented either as an intersection of a finite number of half-spaces (H-representation) or as a convex hull of vertices (V-representation). Both ways of defining a polytope are allowed in MPT and you can switch from one representation to the other. However, by default all polytopes are generated in H-representation only, to avoid unnecessary computation.
P = polytope(H,K)
creates a polytope from its H-representation, i.e. the matrices H and K which form the polytope P = {x ∈ R^n | Hx ≤ K}. If the input matrices define redundant constraints, these are automatically removed to form a minimal representation of the polytope. In addition, the center and diameter of the largest ball which can be inscribed into the polytope are computed, and the H-representation is normalized to avoid numerical problems. The constructor then returns a polytope object.
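For instance, a sketch of creating the unit box in R^2:

```matlab
% P = { x in R^2 : -1 <= x_i <= 1 }, written as Hx <= K:
H = [eye(2); -eye(2)];
K = ones(4, 1);
P = polytope(H, K)
```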
A polytope can also be defined by its vertices as follows:
P = polytope(V)
where V is a matrix which contains vertices of the polytope in the following format:
V = [ v_{1,1} ... v_{1,n} ;
      ...              ... ;
      v_{k,1} ... v_{k,n} ]    (11.1)
where k is the total number of vertices and n is the dimension; hence vertices are stored row-wise. Before the polytope object is created, the V-representation is first converted to a half-space description by eliminating all points of V which are not extreme points. The convex hull of the remaining points is then computed to obtain the corresponding H-representation. The extreme points are stored in the polytope object and can be returned upon request without additional computational effort.
[H,K] = double(P)
HK = double(P)
flag = isnormal(P)
flag = isminrep(P)
returns 1 if the polytope P is in minimal representation (i.e. the H-representation contains no redundant half-spaces), 0 otherwise.
The polytope P is bounded if
flag = isbounded(P)
returns 1. The dimension of a polytope is returned by
d = dimension(P)
and
nc = nconstr(P)
returns the number of constraints (i.e. the number of half-spaces) defining the given polytope P.
Vertex representation of a polytope can be obtained by:
V = extreme(P)
which returns the vertices stored row-wise in the matrix V. As enumeration of extreme vertices is an expensive operation, the computed vertices can be stored in the polytope object. To do so, we recommend always calling the function as follows:
[V,R,P] = extreme(P)
which returns the extreme points V, extreme rays R, and the updated polytope object P with the vertices stored inside.
To check if a given point x lies in a polytope P, use the following call:
flag = isinside(P,x)
The function returns 1 if x ∈ P and 0 otherwise. If P is a polyarray (see Section 11.3 for more details about polyarrays), the function call can be extended to provide additional information:
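The extended call presumably has the form below (output names taken from the description that follows):

```matlab
[isin, inwhich, closest] = isinside(P, x)
```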
which returns a 1/0 flag denoting whether the given point x belongs to any polytope of the polyarray P. If the given point lies in more than one polytope, inwhich contains the indices of all regions which contain x. If there is no such region, the index of the region which is closest to the given point x is returned in closest.
Functions mentioned in this chapter are summarized in Table 11.2.
It does not matter whether the concatenated elements are single polytopes or polyarrays. To illustrate this, assume
we’ve defined polytopes P1, P2, P3, P4, P5 and polyarrays A = [P1 P2] and B = [P3 P4 P5]. Then the
following polyarrays M and N are equivalent:
M = [A B]
N = [P1 P2 P3 P4 P5]
Individual elements of a polyarray can be obtained using the standard referencing (i) operator, i.e.
P = M(2)
will return the second element of the polyarray M, which is equal to P2 in this case. More complicated expressions can be used for referencing:
Q = M([1,3:5])
will return a polyarray Q which contains the first, third, fourth and fifth elements of the polyarray M.
If you want to remove some element from a polyarray, use the referencing command as follows:
M(2) = []
which will remove the second element from the polyarray M. Again, multiple indices can be specified, e.g.
M([1 3]) = []
For example, if N = [P1 P2 P3 P4 P5], then after
N([1 3]) = []
the polyarray N = [P2 P4 P5] and the length of the array is 3. No empty positions in a polyarray are allowed! Similarly, empty polytopes are not added to a polyarray.
A polyarray is still a polytope object, hence all functions which work on polytopes also support polyarrays. This is an important feature, mainly in the geometric functions. The length of a given polyarray is obtained by
l = length(N)
and the order of its elements can be reversed by
Nf = fliplr(N)
The resulting plot is depicted in Figure 11.1. When a polytope object is created, the constructor automatically normalizes its representation and removes all redundant constraints. Note that all elements of the polytope class are private and can only be accessed as described in the tables. Furthermore, all information about a polytope is stored in the internal polytope structure; in this way, unnecessary repetition of computations during future polytopic manipulations can be avoided.
Example 11.4.2:
Figure 11.4: (a) The sets P and Q in Example 11.4.2. (b) The set difference P \ Q in Example 11.4.2.
The polytopes P and Q are depicted in Figure 11.4. The following will illustrate the hull and extreme func-
tions.
Example 11.4.3:
The hull function is overloaded such that it accepts both elements of the polytope class and matrices of points as input arguments.
12 Acknowledgment
We would like to thank all contributors and those who report bugs. Specifically (in alphabetical order): Miroslav
Baric, Alberto Bemporad, Francesco Borrelli, Frank J. Christophersen, Tobias Geyer, Eric Kerrigan, Adam Lager-
berg, Arne Linder, Marco Lüthi, Saša V. Raković, Fabio Torrisi and Kari Unneland. A special thanks goes to
Komei Fukuda (cdd), Johan Löfberg (Yalmip) and Colin Jones (ESP) for allowing us to include their respective
packages in the distribution. Thanks to their help we are able to say that MPT truly is an ’unpack-and-use’ tool-
box.
Bibliography
[1] Baotić, M.: An Efficient Algorithm for Multi-Parametric Quadratic Programming. Technical Report AUT02-04,
Automatic Control Laboratory, ETH Zurich, Switzerland, February 2002.
[2] Baotić, M., F. J. Christophersen and M. Morari: Infinite Time Optimal Control of Hybrid Systems with a Linear
Performance Index. In Proc. of the Conf. on Decision and Control, Maui, Hawaii, USA, December 2003.
[3] Baotić, M., F.J. Christophersen and M. Morari: A new Algorithm for Constrained Finite Time Optimal Control
of Hybrid Systems with a Linear Performance Index. In European Control Conference, Cambridge, UK, September
2003.
[4] Bemporad, A., F. Borrelli and M. Morari: Explicit Solution of LP-Based Model Predictive Control. In Proc. 39th
IEEE Conf. on Decision and Control, Sydney, Australia, December 2000.
[5] Bemporad, A., F. Borrelli and M. Morari: Optimal Controllers for Hybrid Systems: Stability and Piecewise Linear
Explicit Form. In Proc. 39th IEEE Conf. on Decision and Control, Sydney, Australia, December 2000.
[6] Bemporad, A., F. Borrelli and M. Morari: Min-max Control of Constrained Uncertain Discrete-Time Linear
Systems. IEEE Trans. Automatic Control, 48(9):1600–1606, 2003.
[7] Bemporad, A., K. Fukuda and F.D. Torrisi: Convexity Recognition of the Union of Polyhedra. Computational
Geometry, 18:141–154, April 2001.
[8] Bemporad, A., M. Morari, V. Dua and E.N. Pistikopoulos: The Explicit Linear Quadratic Regulator for Con-
strained Systems. Automatica, 38(1):3–20, January 2002.
[9] Borrelli, F.: Constrained Optimal Control Of Linear And Hybrid Systems, volume 290 of Lecture Notes in Control
and Information Sciences. Springer, 2003.
[10] Borrelli, F., M. Baotić, A. Bemporad and M. Morari: An Efficient Algorithm for Computing the State Feed-
back Optimal Control Law for Discrete Time Hybrid Systems. In Proc. 2003 American Control Conference, Denver,
Colorado, USA, June 2003.
[11] Ferrari-Trecate, G., F. A. Cuzzola, D. Mignone and M. Morari: Analysis of discrete-time piecewise affine and
hybrid systems. Automatica, 38:2139–2146, 2002.
[12] Fukuda, K.: Polyhedral computation FAQ, 2000. On line document. Both html and ps versions available from
http://www.ifor.math.ethz.ch/staff/fukuda.
[13] Grieder, P., F. Borrelli, F.D. Torrisi and M. Morari: Computation of the Constrained Infinite Time Linear
Quadratic Regulator. In Proc. 2003 American Control Conference, Denver, Colorado, USA, June 2003.
[14] Grieder, P., M. Kvasnica, M. Baotić and M. Morari: Low Complexity Control of Piecewise Affine Systems with
Stability Guarantee. In American Control Conference, Boston, USA, June 2004.
[15] Grieder, P. and M. Morari: Complexity Reduction of Receding Horizon Control. In Proc. 42th IEEE Conf. on
Decision and Control, Maui, Hawaii, USA, December 2003.
[16] Grieder, P., P. Parillo and M. Morari: Robust Receding Horizon Control - Analysis & Synthesis. In Proc. 42th
IEEE Conf. on Decision and Control, Maui, Hawaii, USA, December 2003.
[17] Heemels, W.P.M.H., B. De Schutter and A. Bemporad: Equivalence of Hybrid Dynamical Models. Automatica,
37(7):1085–1091, July 2001.
[18] Johannson, M. and A. Rantzer: Computation of piece-wise quadratic Lyapunov functions for hybrid systems. IEEE
Trans. Automatic Control, 43(4):555–559, 1998.
[19] Kerrigan, E. C. and J. M. Maciejowski: Robustly stable feedback min-max model predictive control. In Proc. 2003
American Control Conference, Denver, Colorado, USA, June 2003.
78
Bibliography 79
[20] Kerrigan, E. C. and D. Q. Mayne: Optimal control of constrained, piecewise affine systems with bounded disturbances.
In Proc. 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, USA, December 2002.
[21] Löfberg, J.: YALMIP : A Toolbox for Modeling and Optimization in MATLAB. In Proceedings of the CACSD Confer-
ence, Taipei, Taiwan, 2004. Available from http://control.ee.ethz.ch/˜joloef/yalmip.php.
[22] Maciejowski, J.M.: Predictive Control with Constraints. Prentice Hall, 2002.
[23] Mayne, D. Q. and S. Raković: Model predicitive control of constrained piecewise affine discrete-time systems. Int. J.
of Robust and Nonlinear Control, 13(3):261–279, April 2003.
[24] Mayne, D. Q., J.B. Rawlings, C.V. Rao and P.O.M. Scokaert: Constrained model predictive control: Stability and
Optimality. Automatica, 36(6):789–814, June 2000.
[25] Rawlings, J.B. and K.R. Muske: The stability of constrained receding-horizon control. IEEE Trans. Automatic
Control, 38:1512–1516, 1993.
[26] Skogestad, S. and I. Postlethwaite: Multivariable Feedback Control. John Wiley & Sons, 1996.
[27] The MathWorks, Inc.: MATLAB Users Manual. Natick, MA, US, 2003. http://www.mathworks.com.
[28] Tøndel, P., T.A. Johansen and A. Bemporad: An Algorithm for Multi-Parametric Quadratic Programming and
Explicit MPC Solutions. In Proc. 40th IEEE Conf. on Decision and Control, Orlando, Florida, December 2001.
[29] Torrisi, F.D. and A. Bemporad: HYSDEL — A Tool for Generating Computational Hybrid Models. Technical Report
AUT02-03, ETH Zurich, 2002. Submitted for publication on IEEE Trans. on Control Systems Technology.
[30] Ziegler, G. M.: Lectures on Polytopes. Springer, 1994.