

Robust Control
of Constrained Discrete Time Systems:
Characterization and Implementation

Saša V. Raković

Thesis Submitted for the Degree of Doctor of Philosophy

University of London
Imperial College London
Department of Electrical and Electronic Engineering

January 2005
Abstract

This thesis deals with robust constrained optimal control and characterizations of solu-
tions to some optimal control problems. The thesis has three main parts, each of which
is related to a set of important concepts in constrained and robust control of discrete
time systems.
The first part of this thesis is concerned with set invariance and reachability analysis.
A set of novel results that complement and extend existing results in set invariance and
reachability analysis for constrained discrete time systems is reported. These results are:
(i) invariant approximation of the minimal and maximal robust positively invariant set
for linear and/or piecewise affine discrete time systems, (ii) optimized robust control
invariance for a discrete-time, linear, time-invariant system subject to additive state
disturbances, (iii) abstract set invariance – set robust control invariance.
Additionally, a number of relevant reachability problems are addressed. These reach-
ability problems are: (i) reachability analysis for nonlinear, time-invariant, discrete-time
systems subject to mixed constraints on the state and input with a persistent distur-
bance, dependent on the current state and input, (ii) regulation of uncertain discrete
time systems with positive state and control constraints, (iii) robust time optimal obsta-
cle avoidance for discrete time systems, (iv) state estimation for piecewise affine discrete
time systems subject to bounded disturbances.
The second part addresses the issue of robustness of model predictive control. A
particular emphasis is given to feedback model predictive control and stability analysis in
robust model predictive control. A set of efficient and computationally tractable robust
model predictive schemes is devised for constrained linear discrete time systems. The
computational burden is significantly reduced while robustness is improved compared
with standard and existing approaches in the literature.
The third part introduces basic concepts of reverse transformation and parametric
mathematical programming. These techniques are used to obtain characterizations of solutions to a number of important constrained optimal control problems. Some existing results are then improved by exploiting parametric mathematical programming. Applications of parametric mathematical programming to a set of interesting problems are reported.
Contents

Contents i

List of Figures vi

List of Tables viii

1 Introduction 1
1.1 What is model predictive control? . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Brief Historical Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Current Research in RHC and MPC . . . . . . . . . . . . . . . . . . . . . 4
1.4 Outline & Brief Summary of Contributions . . . . . . . . . . . . . . . . . 5
1.5 Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Basic Mathematical Notation, Definitions and Preliminaries . . . . . . . . 10
1.7 Preliminary Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.7.1 Basic Stability Definitions . . . . . . . . . . . . . . . . . . . . . . . 15
1.7.2 Set Invariance and Reachability Analysis . . . . . . . . . . . . . . 17
1.7.3 Optimal Control of constrained discrete time systems . . . . . . . 21
1.7.4 Some Set Theoretic Concepts and Efficient Algorithms . . . . . . . 26

I Advances in Set Invariance and Reachability Analysis 31

2 Invariant Approximations of robust positively invariant sets 32


2.1 Invariant Approximations of RPI sets for Linear Systems . . . . . . . . . 33
2.2 Approximations of the minimal robust positively invariant set . . . . . . . 34
2.2.1 The origin is in the interior of W . . . . . . . . . . . . . . . . . . . 35
2.2.2 The origin is in the relative interior of W . . . . . . . . . . . . . . 40
2.2.3 Computing the reachable set of an RPI set . . . . . . . . . . . . . 42
2.3 The maximal robust positively invariant MRPI set . . . . . . . . . . . . . 44
2.3.1 On the determinedness index of O∞ . . . . . . . . . . . . . . . . . 45
2.3.2 Inner approximation of the MRPI set . . . . . . . . . . . . . . . . 47
2.4 Efficient computations and a priori upper bounds . . . . . . . . . . . . . 48
2.4.1 Efficient Computation if W is a Polytope . . . . . . . . . . . . . . 49
2.4.2 A priori upper bounds if A is diagonizable . . . . . . . . . . . . . 50

2.5 Illustrative Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

3 Optimized Robust Control Invariance 55


3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2 Robust Control Invariance Issue . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2.1 Optimized Robust Control Invariance . . . . . . . . . . . . . . . . 59
3.2.2 Optimized Robust Control Invariance Under Constraints . . . . . . 60
3.2.3 Relaxing Condition Mk ∈ Mk . . . . . . . . . . . . . . . . . . . . . 62
3.3 Comparison with Existing Methods . . . . . . . . . . . . . . . . . . . . . . 63
3.3.1 Comparison – Illustrative Example . . . . . . . . . . . . . . . . . . 64
3.4 Conclusions and Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

4 Abstract Set Invariance – Set Robust Control Invariance 68


4.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2 Set Robust Control Invariance for Linear Systems . . . . . . . . . . . . . . 71
4.3 Special Set Robust Control Invariant Sets . . . . . . . . . . . . . . . . . . 72
4.4 Dynamical Behavior of X ∈ Φ . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.5 Constructive Simplifications . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5.1 Case I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5.2 Case II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5.3 Case III . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.5.4 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

5 Regulation of discrete-time linear systems with positive state and control constraints and bounded disturbances 80
5.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2 Robust Control Invariance Issue – Revisited . . . . . . . . . . . . . . . . . 82
5.2.1 Optimized Robust Controlled Invariance Under Positivity Con-
straints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.3 Robust Time–Optimal Control under positivity constraints . . . . . . . . 86
5.4 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

6 Robust Time Optimal Obstacle Avoidance Problem for discrete–time systems 91
6.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.2 Robust Time Optimal Obstacle Avoidance Problem – General Case . . . . 93
6.3 Robust Time Optimal Obstacle Avoidance Problem – Linear Systems . . 95
6.4 Robust Time Optimal Obstacle Avoidance Problem – Piecewise Affine
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

6.5 An Appropriate Selection of the feedback control laws κ(i,j,k) (·) and κ(i,j,l,k) (·) 98
6.5.1 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

7 Reachability analysis for constrained discrete time systems with state- and input-dependent disturbances 102
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.2 The One-step Robust Controllable set . . . . . . . . . . . . . . . . . . . . 104
7.2.1 General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.2.2 Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
7.2.3 Linear and Piecewise Affine f (·) with Additive State Disturbances 108
7.3 The i-step Set and robust control Invariant Sets . . . . . . . . . . . . . . . 109
7.4 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.4.1 Scalar System with State-dependent Disturbances . . . . . . . . . 111
7.4.2 Second-order LTI Example with Control-dependent Disturbances . 112
7.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

8 State Estimation for piecewise affine discrete time systems subject to bounded disturbances 118
8.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
8.2 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8.2.1 Recursive filtering algorithm . . . . . . . . . . . . . . . . . . . . . 122
8.2.2 Piecewise affine systems . . . . . . . . . . . . . . . . . . . . . . . . 123
8.3 Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8.4 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
8.5 Numerical Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
8.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

II Robust Model Predictive Control 130

9 Tubes and Robust Model Predictive Control of constrained discrete time systems 131
9.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
9.2 Tubes – Basic Idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
9.3 Stabilizing Ingredients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
9.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

10 Robust Model Predictive Control by using Tubes – Linear Systems 144


10.1 Tubes for constrained Linear Systems with additive disturbances . . . . . 144
10.1.1 Tube MPC – Stabilizing Ingredients . . . . . . . . . . . . . . . . . 146
10.1.2 Simple Robust Optimal Control Problem . . . . . . . . . . . . . . 147
10.1.3 Tube model predictive controllers . . . . . . . . . . . . . . . . . . . 150

10.1.4 Receding Horizon Tube controller . . . . . . . . . . . . . . . . . . . 150
10.2 Tube MPC – Simple Robust Control Invariant Tube . . . . . . . . . . . . 151
10.3 Tube MPC – Optimized Robust Control Invariant Tube . . . . . . . . . . 155
10.4 Tube MPC – Method III . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
10.4.1 Numerical Examples for Tube MPC – III method . . . . . . . . . . 163
10.5 Extensions and Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

III Parametric Mathematical Programming in Control Theory 166

11 Parametric Mathematical Programming and Optimal Control 167


11.1 Basic Parametric Mathematical Programs . . . . . . . . . . . . . . . . . . 168
11.2 Constrained Linear Quadratic Control . . . . . . . . . . . . . . . . . . . . 174
11.3 Optimal Control of Constrained Piecewise Affine Systems . . . . . . . . . 178
11.3.1 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . 186
11.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

12 Further Applications of Parametric Mathematical Programming 189


12.1 Robust One – Step Ahead Control of Constrained Discrete Time Systems 190
12.1.1 Robust Time Optimal Control of constrained PWA systems . . . . 194
12.2 Computation of Voronoi Diagrams and Delaunay Triangulations by para-
metric linear programming . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
12.3 A Logarithmic – Time Solution to the point location problem for closed–
form linear MPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
12.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

IV Conclusions 226

13 Conclusion 227
13.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
13.1.1 Contributions to Set Invariance Theory and Reachability Analysis 227
13.1.2 Contributions to Robust Model Predictive Control . . . . . . . . . 228
13.1.3 Contributions to Parametric Mathematical Programming . . . . . 229
13.2 Directions for future research . . . . . . . . . . . . . . . . . . . . . . . . . 229
13.2.1 Extensions of results related to Set Invariance Theory and Reach-
ability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
13.2.2 Extensions of results related to Robust Model Predictive Control . 230
13.2.3 Extensions of results related to Parametric Mathematical Program-
ming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

A Geometric Computations with Polygons 231

B Constrained Control Toolbox 233

Bibliography 235

List of Figures

1.7.1 Graphical Illustration of Proposition 1.7. . . . . . . . . . . . . . . . . . . 30

2.5.1 Invariant Approximations of F∞ – Sets: Fs and F (α∗ , s∗ ) . . . . . . . . 53


2.5.2 Reach sets of Ω , O∞ for third example . . . . . . . . . . . . . . . . . . 54

3.3.1 Invariant Approximations of F∞^{Ki}: Sets F^{Ki}_{(ζKi ,sKi )} , i = 1, 2, 3 . . . . . . . 65
3.3.2 Invariant Sets R_{ki}(M^{0}_{ki}), i = 1, 2, 3 . . . . . . . . . . . . . . . . . . 65
3.3.3 Invariant Approximations of F∞^{Ki}: Sets F^{Ki}_{(ζKi ,sKi )} , i = 4, 5, 6 . . . . . . . 66

4.1.1 Graphical Illustration of Robust Control Invariance Property . . . . . . . 69


4.1.2 Graphical Illustration of Definition 4.1 . . . . . . . . . . . . . . . . . . . 70
4.2.1 Exploiting Linearity – Theorem 4.1 . . . . . . . . . . . . . . . . . . . . . 73
4.4.1 A Graphical Illustration of Convergence Observation . . . . . . . . . . . 75
4.5.1 Sample Set Trajectory for a set X0 ∈ Φ∞ (R) – Ellipsoidal Sets . . . . . . 78
4.5.2 Sample Set Trajectory for a set X0 ∈ Φ∞ (R) . . . . . . . . . . . . . . . . 79

5.0.1 Graphical Illustration of translated RCI set . . . . . . . . . . . . . . . . 81


5.4.1 RCI Set Sequence {Xi }, i ∈ N13 . . . . . . . . . . . . . . . . . . . . . . . 89

6.5.1 Obstacles, State Constraints and Target Set . . . . . . . . . . . . . . . . 100


6.5.2 RCI Set Sequence {Xi }, i ∈ N3 . . . . . . . . . . . . . . . . . . . . . . . 100
6.5.3 CI Set Sequence {Xi }, i ∈ N3 . . . . . . . . . . . . . . . . . . . . . . . . 101

7.2.1 Graphical illustration of Theorem 7.1 . . . . . . . . . . . . . . . . . . . . 106


7.4.1 Graph of W . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.4.2 Sets Σi for i = 1, 2, 3, 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.4.3 Graph of W (top) and the set Σ (bottom) . . . . . . . . . . . . . . . . . 113
7.4.4 Graph of W (top) and the set Σ (bottom) . . . . . . . . . . . . . . . . . 113
7.4.5 Graph of W . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.4.6 Sets Xi for i = 0, 1, . . . , 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

8.2.1 Graphical Illustration of Theorem 8.1 . . . . . . . . . . . . . . . . . . . . 126


8.5.1 Estimated Sets for Example in Section 8.5 . . . . . . . . . . . . . . . . . 129

9.1.1 Comparison of open–loop OC, nominal MPC and feedback MPC . . . . . 132
9.2.1 Graphical illustration of feedback MPC by using tubes . . . . . . . . . . 134

10.2.1 RCI Sets Xi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
10.2.2 Simple RMPC tubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
10.3.1 Controllability Sets Xi , i = 0, 1, . . . , 21 . . . . . . . . . . . . . . . . . . . 158
10.3.2 RMPC Tube Trajectory . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
10.4.1 ‘Tube’ MPC trajectory with q = 10 . . . . . . . . . . . . . . . . . . . . . 164
10.4.2 ‘Tube’ MPC trajectory with q = 1000 . . . . . . . . . . . . . . . . . . . . 164

11.1.1 Graphical Illustration of Solution to pLP . . . . . . . . . . . . . . . . . . 171


11.1.2 Graphical Illustration of Solution to pPAP . . . . . . . . . . . . . . . . . 173
11.2.1 Regions RI for a second order example . . . . . . . . . . . . . . . . . . . 179
11.3.1 Constrained PWA system – Regions Xµ for a second order example . . . 187

12.1.1 Final robust time–optimal controller for Example 1. . . . . . . . . . . . . 206


12.1.2 Final robust time–optimal controller for Example 2. . . . . . . . . . . . . 207
12.2.1 Illustration of a Voronoi diagram and Delaunay triangulation of a random
set of points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
12.2.2 Voronoi Lifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
12.2.3 Calculation of a Delaunay Triangulation . . . . . . . . . . . . . . . . . . 213
12.2.4 Illustration of a Voronoi diagram and Delaunay triangulation for a given
set of points S. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
12.2.5 Illustration of the Voronoi diagram and Delaunay triangulation of a unit-
cube P in R3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
12.3.1 Illustration of the value function V 0 (x) and control law κN (x) for a ran-
domly generated pLP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
12.3.2 Comparison of ANN (Solid lines) to (Borelli et al., 2001) (Dashed lines) . 224

List of Tables

2.1 Data for 2nd order example . . . . . . . . . . . . . . . . . . . . . . . . . . 53

Acknowledgments

First of all, I am sincerely thankful to Professor Richard B. Vinter for giving me a chance
to work on this project. I am truly indebted to Professor Vinter for making it possible
as without his support I would not have been in position to continue and complete this
particular part of my studies.

My sincere appreciation goes to Professor David Q. Mayne for unlimited encourage-


ment during this part of my studies. Being able to work with the real alchemist of MPC
is a unique opportunity and my best hope is that it has been used appropriately. I am
truly thankful to Professor Mayne for making me realize the difference between knowing
the path and walking the path. His immeasurable enthusiasm, creativity and dedication
have set up an appropriate example that I will try to follow. More importantly, at least
for me, my personal life and philosophy have been influenced by an extraordinary scholar
and person and for this I have just unbounded gratitude dedicated to David.

This research has been supported by the Engineering and Physical Sciences Research
Council (EPSRC) UK. Financial support provided by EPSRC is greatly appreciated.

Some time ago I did not believe, but merely hoped, that I would reach this stage.
Now I feel that I am close to completing a significant part of my life’s path. This path
would have been different if I did not have friends who gave me support and trust when
I needed it most. I am most thankful to Lydia Srebernjak, Rade Marić, Miroljub Lukić,
Rade Milićević, Jovan Mitrović and Srboljub Živanović for being honest friends and great
companions.

My work has been influenced by many brilliant young researchers and I would like
to thank them for a set of fruitful discussions that have affected my understanding of
certain subjects of control theory. In particular I would like to thank Dr. Eric C. Kerrigan, Dr. Konstantinos I. Kouramas, Dr. Pascal Grieder, Dr. Rolf Findeisen and Dr. Colin Jones. My interaction with these excellent young researchers has influenced certain results reported in this thesis (and is acknowledged as appropriate). I would also like to thank Professor Manfred Morari and Dr. Pascal Grieder for inviting me to visit the hybrid research group at ETH Zürich. This visit has been a very valuable experience and
I appreciated it very much. My special gratitude goes to Dr. Konstantinos Kouramas,

Dr. Dina Shona Laila and Dr. Stanislav Žaković for provisional proof–reading of certain
parts of this thesis.
I would like to express my thanks to all academic staff, a number of visiting researchers
and colleagues in Control and Power research group at Imperial College for many great
moments, research discussions and social events. I am happy that I have met a number
of extremely dedicated people and had a chance to learn from their dedication. In
particular my time has been enhanced by the friendship of Francoise, Aleksandra, Shirley,
Dimitri, Simos, Dina, Paul .... Well, everyone in the group, please do not get upset if I
have not listed your name.
I am glad that I have had a chance to share countless hours, spent on discussing
control theory and/or various life issues, with Dr. Milan Prodanović, Dr. Eric Kerrigan,
and Dr. Konstantinos Kouramas. The resultant friendship is something that I highly
value and appreciate.
I have managed to enjoy my free time, there was some, playing chess and having long
walks in Holland Park. For long hours of speed chess and numberless conversations I am
thankful to Jure, Stanko, Noel, Hamish, Charles, Jason, Adrian, Hajme, Dan, ....

A special acknowledgment is dedicated to Lady L. who was a butterfly that brought


me a few moments in which I was granted happiness and that unique and special feeling.

Finally, my thoughts were, are and will be with my family. It is impossible to measure
my gratitude dedicated to my brother Branko, for all support and love I have received,
and to memories of my sister Slavica. I dedicate my work to my family for their non–
ordinary sacrifice and immeasurable support and love.

... nothing more can be attempted than to establish the beginning and the direction
of an infinitely long road. The pretension of any systematic and definitive completeness
would be, at least, a self–illusion. Perfection can here be obtained by the individual student
only in the subjective sense that he communicates everything he has been able to see.

– Georg Simmel

Here we go ...

Chapter 1

Introduction

Study me, reader, if you find delight in me, because on very few occasions shall I return
to the world, and because the patience for this profession is found in very few, and only
in those who wish to compose things anew.

– Leonardo da Vinci

This thesis deals with robust constrained optimal control and characterizations of
solutions to some optimal control problems. The thesis has three main parts, and a
supplementary appendix, each of which is related to a set of important concepts in
constrained and robust control of discrete time systems.
The subject of constrained control is perhaps one of the most important topics in
control theory. Control of constrained systems has been a topic of research by many
authors. The most appropriate approach to control of constrained systems is to resort
to optimal control theory. An adequate control strategy that can be employed to control
constrained systems is model predictive control (MPC). A relatively mature theory of stability and robustness of MPC has recently been developed.

1.1 What is model predictive control?


Model predictive control (MPC) computes the current control action by solving, at each
sampling time, a finite horizon, optimal control problem using the current state x of
the process as the initial state; the optimization yields an optimal control sequence, and
the current control action is set equal to the first control u00 (x) in this sequence. The
on-line optimization problem takes account of system dynamics, constraints and control
objectives. MPC is one of the few control techniques that have the ability to handle hard constraints in a systematic way. From a theoretical point of view, MPC is best regarded as a practical implementation of receding horizon control (RHC). More precisely, dynamic
programming is used to obtain a sequence of value functions {Vi0 (·)} and optimal control
laws {κi (·)} , i = 0, 1, . . . N , together with their domains Xi (here the subscript i denotes

Algorithm 1 General MPC algorithm
1: At each sample time nT measure (or estimate) the current value of the state x,
2: Compute an open loop control sequence u(x, iT ), n ≤ i ≤ n + N that drives the
state from x to the desired operating region,
3: Set the current control to the first element of the optimal control sequence, i.e. u =
u00 (x) = κ(x).

time-to-go). Whereas conventional optimal control would use the time-varying control
law u = κN −i (x) at event (x, i) (i.e. at state x, time i), over the time interval 0 to
N , receding horizon control employs the time-invariant control law u = κN (x) that is
neither optimal (for the stated optimal control problem) nor necessarily stabilizing. The
model predictive procedure implicitly defines a time invariant control law u00 (·) that is
identical to the receding horizon control law κN (·) obtained via dynamic programming
(u00 (x) = κN (x) for all x ∈ XN ). In general the implementation of MPC can be realized
by Algorithm 1.
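The receding horizon procedure of Algorithm 1 can be summarized in a few lines of code. The following Python sketch is purely illustrative and is not part of the thesis: the on-line optimization of step 2 is abstracted by a hypothetical routine solve_finite_horizon_ocp (here replaced by a fixed linear feedback repeated over the horizon), and only the first element of the returned sequence is applied at each sampling instant, as in step 3.

import numpy as np

def solve_finite_horizon_ocp(x, N):
    # Hypothetical placeholder for the on-line finite horizon optimal control
    # problem of Algorithm 1; a real implementation would solve a QP or LP.
    K = np.array([[0.6, 1.0]])                  # illustrative gain, not a tuned design
    return [-K @ x for _ in range(N)]           # control sequence {u(0), ..., u(N-1)}

def mpc_closed_loop(f, x0, N, n_steps):
    # Receding horizon implementation: re-solve, apply the first control, repeat.
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        u_sequence = solve_finite_horizon_ocp(x, N)   # step 2 of Algorithm 1
        u = u_sequence[0]                             # step 3: u = u00(x) = kappa_N(x)
        x = f(x, u)                                   # plant update (disturbances ignored here)
        trajectory.append(x)
    return trajectory

# Illustrative double-integrator model x+ = A x + B u.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
traj = mpc_closed_loop(lambda x, u: A @ x + B @ u, np.array([3.0, 0.0]), N=5, n_steps=10)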
Traditional control methods take account of constraints only in an ‘averaged’ sense, so they can lead to conservative designs, requiring operation far from the constraint boundary, since peak violation of the constraints must be avoided. MPC permits operation close to the constraint boundary; hence the resulting gains in profitability and efficiency can be considerable. This illuminates the difference between MPC and any other conventional control technique that uses a pre-computed control law. A key advantage of MPC is that it permits efficient closed loop plant operation in the presence of constraints. MPC is capable of dealing with control variable constraints, such as amplifier saturation and power limits on control actuators, as well as with state/output variable constraints: state variables are excluded from regions of the operations profile which are dangerous or unprofitable (temperature, pressure constraints) or where the underlying model is unreliable.
Recent research has contributed to the underlying theory of stability of MPC schemes,
giving a precise description of systems for which model predictive strategies are stabi-
lizing, and made MPC a preferable technique for industrial ‘real-life’ control problems.
MPC is now the most widely used of all non-traditional control schemes. Take up of
MPC has been most extensive in the process industries. MPC is probably the most
successful of modern control technologies with several thousand applications reported in
recent survey papers [QB97, QB00].
Recent applications have been in areas including:

⊲ Robotics,

⊲ Traction control of vehicles,

⊲ Process control,

⊲ Automated anaesthetic delivery.

1.2 Brief Historical Remarks
The first results related to the development of MPC are certainly results dealing with ex-
istence of solutions of optimal control problems and characterization of optimal solutions
(i.e. necessary and sufficient conditions of optimality), Lyapunov stability of the opti-
mally controlled system and the corresponding algorithms for necessary computations.
Some of the first relevant references are [LM67, FR75]. A variation of the MPC con-
troller for linear systems subject to input constraints based on linear programming was
already reported in [Pro63]. The first proposal to use a variation of the MPC controller
in industry was reported in [RRTP76]. Since these early developments, several generations of industrial MPC (identification and control (IDCOM), dynamic matrix control (DMC), quadratic dynamic matrix control (QDMC), Shell multivariable optimizing control (SMOC)) were developed (see [RRTP76, RRTP78, PG80, CR80, GM86]). A quadratic programming formulation of the open–loop optimal control problem for linearly constrained linear systems with a quadratic performance measure was first reported in [GM86]. These methods were purely industry oriented and were unable to appropriately address the issue of stability. A relevant result concerned with the existence of finite control and cost horizons such that the resultant MPC controller is stabilizing can be found in [GPM89]. A
sequence of relevant papers addressing stability of predictive controllers appeared in the
early 1990s. At this stage researchers realized that stability can be enforced by con-
sidering an appropriate modification of the original optimal control problem. The first ideas were related to the introduction of a terminal constraint set, initially a terminal equality constraint, and a terminal cost function. The relevant results for these ideas were reported in a number of references, some of which are [MLZ90, CS91,
MZ92, KG88, PBG88, MM90, MHER95, MR93, MS97a, SMR99, CLM96]. The first
ideas addressed mainly constrained linear or unconstrained nonlinear systems. A rele-
vant extension of these results to constrained nonlinear systems is reported in important
papers [DMS96, CA98b, CA98a, MM93].
The academic community has consequently recognized MPC as a rare control strategy that can efficiently handle constraints, and this has led to extensive research in the area addressing a whole range of issues in MPC (stability, output feedback MPC, robust MPC, etc.) [BBBG02, KT02, YB02, QDG02, MQV02, LKC02, MDSA03, KM04]. It is
almost impossible to provide an appropriate overview of research done by many authors
in the field in recent years. Several relevant surveys appeared trying to summarize the
crucial steps and the most relevant advances in the development of MPC. The interested
reader is referred, for example, to the following set of excellent survey and overview
papers [ABQ+ 99, BM99, MRRS00, May01, FIAF03] for a more detailed summary and for an additional set of references. As a result of the academic recognition of MPC, a number of relevant PhD theses appeared, some of which are [Fon99, Ker00, Bor02, Kou02, L0̈3b, Gri04, Fin05]. A number of books and collections of papers, ranging from industrial applications to theoretical aspects of MPC, are also available [BGW90, Mos94, Cla94, CB98, AZ99, KC01, Mac02a, Ros03, GSD03].

1.3 Current Research in RHC and MPC


Obviously, the first requirement for model predictive control is an appropriate algorithm
for solving on-line the optimal control problem. The burden of on-line computations is
a shortcoming of some traditional implementations of MPC. An alternative approach
is to obtain an explicit form of the receding horizon control law. Recent advances in
parametric mathematical programming permit a large part of these computations to
be carried out off-line, prior to implementation, in certain important cases. Obtaining
off-line an explicit solution to some constrained optimal control problems and the cor-
responding regions of state space in which this solution is optimal is of great interest.
The computational burden of running the control scheme is correspondingly considerably
reduced. An important observation is that if the system model is linear/affine or piecewise affine, the cost function for on-line optimization is quadratic or linear, and the constraints on controls and states/outputs are linear, then the optimal control law is piecewise affine and it can be computed off-line. The state space may be decomposed into a collection of polytopic cells, in each of which an affine control law is operative. Efficient methods are now available for pre-computing the cells and the associated affine control laws. The corresponding tools are parametric mathematical programming techniques: parametric quadratic programming in the case of a quadratic cost, and parametric linear programming in the case of a linear cost.
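As a concrete illustration of such an explicit (closed-form) control law, the short Python sketch below evaluates a pre-computed piecewise affine law u = Fi x + gi over polytopic cells {x | Hi x ≤ ki}. The sketch and its region data are hypothetical and are not taken from the thesis; efficient point-location schemes are discussed in Chapter 12.

import numpy as np

def evaluate_explicit_mpc(x, regions, tol=1e-9):
    # regions is a list of tuples (H, k, F, g): the cell is {x | H x <= k}
    # and the affine law active in that cell is u = F x + g.
    for H, k, F, g in regions:
        if np.all(H @ x <= k + tol):      # point location: first cell containing x
            return F @ x + g
    raise ValueError("x lies outside the region covered by the explicit solution")

# Hypothetical one-dimensional example: u = -x on [-1, 1], saturated at |u| <= 1 outside.
regions = [
    (np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]), np.array([[-1.0]]), np.array([0.0])),
    (np.array([[-1.0]]),        np.array([-1.0]),     np.array([[0.0]]),  np.array([-1.0])),
    (np.array([[1.0]]),         np.array([-1.0]),     np.array([[0.0]]),  np.array([1.0])),
]
print(evaluate_explicit_mpc(np.array([0.4]), regions))   # -> [-0.4]
print(evaluate_explicit_mpc(np.array([2.0]), regions))   # -> [-1.0]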
Another direction of current research is improving the robustness of MPC. Techniques for reducing the sensitivity of the closed loop response to modeling errors and disturbances (improved robustness) are desired; the goal of control design is to ensure feasibility and stability and to retain optimality, in some sense, when uncertainty is present.
Uncertainty can enter the system via uncertain dynamics, when parameters that describe
system behavior are not known exactly, or via external disturbance that acts on the sys-
tem.
Conventional MPC schemes, under certain conditions, may be robust with respect to
the uncertainty only up to a certain level, mainly due to the fact that they use the solution
of an optimal control problem that does not account for uncertainty. When uncertainty
is present it is desired to compute the control action that would guarantee an acceptable
performance of the controlled system for all possible uncertainty realizations. Open-loop MPC cannot contain the ‘spread’ of predicted trajectories, possibly rendering solutions of the uncertain optimal control problem solved on-line unduly conservative or even infeasible. Feedback MPC is therefore necessary. Both open-loop and feedback MPC provide feedback control but, whereas in open-loop MPC the decision variable in the optimal control problem solved on-line is a sequence of control actions, in feedback MPC it is a policy π, which is a sequence of control laws. Clearly, the control strategy has to have a feedback nature so that it counteracts the uncertainties in an appropriate way; the advantages of feedback MPC are therefore obvious, but the price to be paid is
computational complexity. Recent MPC schemes ensure convergence of state trajectories
to a pre-specified desired region, for systems with additive but bounded disturbances.
An appropriate control action, that ensures that the closed loop state trajectory remains
within a minimal tube around a nominal path whatever the disturbance, is sought. This
is one possible approach to robust model predictive control (RMPC).
A relevant observation is that polytopic computations can be used efficiently in order
to obtain a solution to many problems in constrained control of linear/affine or piecewise
affine systems. Moreover, uncertainty leads to highly complex optimal control problems
and the solutions to these problems exist only in an invariant set. The computation
of the sequence of invariant sets requires set computation tools, which are polytopic computations in the linear/affine or piecewise affine case, and this highlights the overlap of set (polytopic) computations and MPC. Set invariance theory plays a significant role in the
determination of an explicit solution to some constrained optimal control problems and
in model predictive control of discrete time systems subject to constraints, particularly
when the uncertainties are present.
Feasibility of the constrained optimal control problem that is to be solved when implementing MPC is an important issue, as the feasibility region of the optimal control problem is the domain of attraction of the MPC scheme. It is important to obtain qualitative information on the feasibility domains for fixed horizon optimal control problems. For a certain important class of optimal control problems it is possible to compute the feasibility regions. If the constraints are polytopic and the system model is linear/affine or piecewise affine, then the feasibility regions for some optimal control problems can be computed using polytopic algebra. Moreover, in many designs of model predictive control terminal set constraints are imposed; in the case of polytopic constraints and a linear system model the terminal set is usually a maximal output admissible set or, in some cases, a minimal robust positively invariant set, both of which are convex polytopes. Basic tools for the computation of the feasibility regions are Minkowski (set) addition, Pontryagin difference, the projection operation and basic set operations (intersection, union, difference, symmetric set difference, etc.).
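As a minimal, self-contained illustration of the first two operations (written for this overview, not taken from the thesis), the Python sketch below computes Minkowski addition and Pontryagin difference for axis-aligned boxes, for which both operations reduce to componentwise interval arithmetic; general polytopes require the polyhedral machinery described in Appendix A.

import numpy as np

def minkowski_sum_box(x_lo, x_hi, y_lo, y_hi):
    # X ⊕ Y = {x + y | x ∈ X, y ∈ Y}; for boxes this is componentwise interval addition.
    return x_lo + y_lo, x_hi + y_hi

def pontryagin_diff_box(x_lo, x_hi, y_lo, y_hi):
    # X ⊖ Y = {z | z ⊕ Y ⊆ X}; for boxes, shrink X by the extent of Y on each side.
    z_lo, z_hi = x_lo - y_lo, x_hi - y_hi
    if np.any(z_lo > z_hi):
        raise ValueError("Pontryagin difference is empty")
    return z_lo, z_hi

# State constraint box X = [-2, 2]^2 and disturbance box W = [-0.5, 0.5]^2.
x_lo, x_hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
w_lo, w_hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])
print(minkowski_sum_box(x_lo, x_hi, w_lo, w_hi))    # X ⊕ W = [-2.5, 2.5]^2
print(pontryagin_diff_box(x_lo, x_hi, w_lo, w_hi))  # X ⊖ W = [-1.5, 1.5]^2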

1.4 Outline & Brief Summary of Contributions


Necessary background and necessary mathematical preliminaries regarding the most im-
portant concepts used in this thesis are given in Section 1.7.
This thesis is organized as follows.

Part I – Advances in Set Invariance and Reachability Analysis

The second chapter, Invariant Approximations of RPI sets for linear systems, provides a set of approximation techniques that enable the computation of invariant approximations of the minimal and the maximal robust positively invariant sets for linear discrete time systems.
In the third chapter, Optimized Robust Control Invariance, we introduce the concept
of optimized robust control invariance for a discrete-time, linear, time-invariant system
subject to additive state disturbances. Novel procedures for the computation of robust
control invariant sets and corresponding controllers are presented. A novel character-
ization of a family of robust control invariant sets is proposed. These results address
the well known issue of finite time termination of recursive computational schemes in set
invariance theory.
The fourth chapter, Abstract Set Invariance – Set Robust Control Invariance, introduces abstract set invariance by extending the concept of robust control invariance to set robust control invariance. Concepts of set invariance are extended to trajectories of sets of states – tubes. A family of set robust control invariant sets is characterized. Analogously to the concepts of the minimal and the maximal robust positively invariant sets, the concepts of the minimal and the maximal set robust positively invariant sets are established.
In the fifth chapter, Regulation of discrete-time linear systems with positive state and
control constraints and bounded disturbances, the regulation problem for discrete-time
linear systems with positive state and control constraints subject to additive and bounded
disturbances is considered. This problem is relevant for cases in which the controlled system is required to operate as close as possible to, or at, the boundary of the constraint sets, i.e. when any deviation of the control and/or state from its steady state value must be directed to the interior of its constraint set. To address these problems, we extend the results of the third chapter and characterize a novel family of robust control invariant sets for linear systems under positivity constraints. The existence of a constraint admissible
member of this family can be checked by solving a single linear or quadratic programming
problem.
The sixth chapter, Robust Time Optimal Obstacle Avoidance Problem for discrete–
time systems, presents results that use polytopic algebra to address the problem of the
robust time optimal obstacle avoidance for constrained discrete–time systems.
In the seventh chapter, Reachability analysis for constrained discrete time systems
with state- and input-dependent disturbances, we present a solution of the reachability
problem for nonlinear, time-invariant, discrete-time systems subject to mixed constraints
on state and input with a persistent disturbance that takes values in a set that depends
on the current state and input. These are new results that allow one to compute the set
of states which can be robustly steered in a finite number of steps, via state feedback
control, to a given target set. Existing methods fail to address state- and input-dependent
disturbances. Our methods thus improve on previous ones by taking account of these
significant factors.
The eighth chapter, State Estimation for piecewise affine discrete time systems
subject to bounded disturbances, considers the problem of state estimation for piecewise

affine, discrete time systems with bounded disturbances. It is shown that the state lies
in a closed uncertainty set that is determined by the available observations and that
evolves in time. The uncertainty set is characterised and a recursive algorithm for its
computation is presented. Recursive algorithms are proposed for filtering, prediction and smoothing problems.

Part II – Robust Model Predictive Control

The ninth chapter, Tubes and Robust Model Predictive Control of constrained discrete
time systems, discusses the general concept of feedback model predictive control using tubes – sequences of sets of states. Moreover, we discuss robust stability of an adequate set (that plays the role of the origin for the controlled uncertain system). In particular, we discuss the choice of ‘tube’ path cost and terminal cost, as well as the choice of ‘tube terminal set’ and ‘tube cross–section’, in order to ensure adequate stability properties.
In the tenth chapter, Robust Model Predictive Control by using tubes – Linear
Systems, we apply results of the ninth chapter to constrained linear systems and present
a set of relatively simple tube controllers for efficient robust model predictive control
of constrained linear, discrete-time systems in the presence of bounded disturbances.
The computational complexity of the resultant controllers is linear in horizon length.
We show how to obtain a control policy that ensures that controlled trajectories are
confined to a given tube despite uncertainty.

Part III – Parametric Mathematical Programming in Control Theory

The eleventh chapter, Parametric Mathematical Programming, introduces basic concepts of reverse transformation and parametric mathematical programming. These techniques are used to obtain characterizations of solutions to a number of important constrained optimal control problems. Some existing results are then improved by exploiting parametric mathematical programming.
The twelfth chapter, Further Applications of Parametric Mathematical Programming,
presents possible applications of parametric mathematical programming to a set of
interesting problems. Characterizations of the solutions to some constrained optimal control problems enable efficient application of receding horizon control and model predictive control for particular classes of discrete time systems, such as linear/affine or piecewise affine systems; these topics are also discussed.

A summary of the contributions of this thesis, a set of final remarks and possible
directions for future research are given in the last chapter – Conclusions, while the ap-
pendix provides a set of necessary results and algorithms for computational geometry
with collections of polyhedral (polytopic) sets.

1.5 Publications
This thesis is mostly based on the published results and on the results submitted for
publication.
Chapter 2 is based on:

1. [RKKM03] – S. V. Raković, E. C. Kerrigan, K. I. Kouramas and D. Q. Mayne.


Approximation of the minimal robustly positively invariant set for discrete-time
LTI systems with persistent state disturbances. In Proceedings of the 42nd IEEE
Conference on Decision and Control. Maui, Hawaii, USA.

2. [RKKM04b] – S. V. Raković, E. C. Kerrigan, K. I. Kouramas and D. Q. Mayne.


Invariant approximations of the minimal robustly positively invariant sets. IEEE
Trans. Automatic Control. In press.

3. [RKKM04a] – S. V. Raković, E. C. Kerrigan, K. I. Kouramas and D. Q. Mayne.


Invariant approximations of robustly positively invariant sets for constrained lin-
ear discrete-time systems subject to bounded disturbances. Technical Report
CUED/F-INFENG/TR.473, January 2004, Department of Engineering, Univer-
sity of Cambridge, Trumpington Street, CB2 1PZ Cambridge, UK, Downloadable
from http://www-control.eng.cam.ac.uk/eck21.

Chapter 3 is based on:

1. [Rak04] – Saša V. Raković. Optimized robustly controlled invari-


ant sets for constrained linear discrete-time systems. Technical Report
EEE/C&P/SVR/5/2004, May 2004, Imperial College London, Downloadable from
http://www2.ee.ic.ac.uk/cap/cappp/projects/11/reports.htm.

2. [RMKK05] – S. V. Raković, D. Q. Mayne, E. C. Kerrigan and K. I. Kouramas.


Optimized robust control invariant sets for constrained linear discrete-time systems.
Accepted for the 16th IFAC World Congress IFAC 2005.

Some of the results of chapter 4 and chapter 9 are contained in:

1. [RM05b] – S. V. Raković and D. Q. Mayne. A simple tube controller for efficient


robust model predictive control of constrained linear discrete time systems subject
to bounded disturbances. Accepted for the 16th IFAC World Congress IFAC 2005
(invited session).

Chapter 5 is based on:

1. [RM05a] – S. V. Raković and D. Q. Mayne. Regulation of discrete time linear


systems with positive state and control constraints and bounded disturbances. Ac-
cepted for the 16th IFAC World Congress IFAC 2005.

Chapter 7 is based on:

1. [RKM03] – S. V. Raković, E. C. Kerrigan and D. Q. Mayne. Reachability com-
putations for constrained discrete-time systems with state- and input-dependent
disturbance. In Proceedings of the 42nd IEEE Conference on Decision and Con-
trol. Maui, Hawaii, USA.

2. [RKML05] – S. V. Raković, E. C. Kerrigan, D. Q. Mayne and J. Lygeros. Reach-


ability analysis of discrete time systems with disturbances. Submitted to IEEE
Trans. Automatic Control.

3. [RKM04] – S. V. Raković, E. C. Kerrigan and D. Q. Mayne (2004). Optimal


control of constrained piecewise affine systems with state- and input-dependent
disturbances. Sixteenth International Symposium on Mathematical Theory of Net-
works and Systems (MTNS2004). Leuven, Belgium.

Chapter 8 is based on:

1. [RM04b] – S. V. Raković and D. Q. Mayne. State estimation for piecewise affine,


discrete time systems with bounded disturbances. In Proceedings of the 43rd IEEE
Conference on Decision and Control. Paradise Island, Bahamas.

Chapters 9 and 10 are related to some of the results reported in:

1. [LCRM04] – W. Langson, I. Chryssochoos, S. V. Raković and D. Q. Mayne. Robust


model predictive control using tubes. Automatica, vol. 40, p:125–133.

2. [MSR05] – D. Q. Mayne, M. Seron and S. V. Raković. Robust model predictive


control of constrained linear systems with bounded disturbances. Automatica, vol
41, p:219–224.

3. [RM04a] – S. V. Raković and D. Q. Mayne. Robust model predictive control of


constrained, piecewise affine, discrete-time systems. In Proceedings of the 6th IFAC
Symposium on nonlinear control systems – NOLCOS2004. Stuttgart, Germany.

Chapters 11 and 12 present some of the results reported in:

1. [MR02] – D. Q. Mayne and S. V. Raković. Optimal control of constrained piecewise


affine discrete-time systems using reverse transformation. In Proceedings of the
42nd IEEE Conference on Decision and Control. Las Vegas, USA.

2. [MR03b] – D. Q. Mayne and S. V. Raković. Optimal control of constrained piece-


wise affine discrete-time systems. Journal of Computational Optimization and Ap-
plications, vol. 25, p:167-191.

3. [MR03a] – D. Q. Mayne and S. Raković. Model Predictive Control of Constrained


Piecewise Affine Discrete-time Systems. International Journal of Robust and Non-
linear Control, num. 13, p:261–279.

4. [RGK+ 04] – Saša V. Raković, Pascal Grieder, Michail Kvasnica, David Q. Mayne
and Manfred Morari. Computation of invariant sets for piecewise affine discrete
time systems subject to bounded disturbances. In Proceedings of the 43rd IEEE
Conference on Decision and Control. Paradise Island, Bahamas.

5. [JGR05] – C. N. Jones, P. Grieder and S. V. Raković. A logarithmic–time solution


to the point location problem for closed–form linear MPC. Accepted for the 16th
IFAC World Congress IFAC 2005.

6. [RGJ04] – S. Raković, P. Grieder and C. Jones. Computation of Voronoi


Diagrams and Delaunay Triangulation via parametric linear programming.
Technical Report AUT04-03, May 2004, ETHZ Zürich, Downloadable from
http://control.ee.ethz.ch/research/publications/publications.msql?id=1805.

A set of additional results, that is not included in this thesis, can be found in:

1. [MRVK04] – D. Q. Mayne and S. V. Raković and R.B. Vinter and E. C. Kerrigan.


Characterization of the solution to a constrained H-infinity optimal control problem.
Submitted to Automatica.

2. [GRMM05] – Pascal Grieder, Saša V. Raković, Manfred Morari and David Q.


Mayne. Invariant Sets for Switched Discrete Time Systems subject to Bounded
Disturbances. Accepted for the 16th IFAC World Congress IFAC 2005.

3. [RG04] – S. Raković and P. Grieder. Approximations and proper-


ties of the disturbance response set of PWA systems. Technical Re-
port AUT04-02, February 2004, ETHZ Zürich, Downloadable from
http://control.ee.ethz.ch/research/publications/publications.msql?id=1781.

1.6 Basic Mathematical Notation, Definitions and Prelim-


inaries
Basic Notation:
To simplify exposition of the material in this thesis and to keep compactness of
presentation the following notation will be used in the subsequent chapters:

• Sequences1 are denoted by bold letters, i.e. p , {p(0), p(1), ..., p(N − 1)} and p
in algebraic expressions denotes the vector form (p(0)′ , p(1)′ , . . . , p(N − 1)′ )′ of the
sequence. The same convention will apply to other sequences. Typically,

◦ A control sequence: u , {u(0), u(1), ..., u(N − 1)} and in algebraic expressions
(u(0)′ , u(1)′ , . . . , u(N − 1)′ )′ ,
1
Sequences have N terms unless otherwise defined.

◦ A disturbance sequence: w , {w(0), w(1), ..., w(N − 1)} and in algebraic
expressions (w(0)′ , w(1)′ , . . . , w(N − 1)′ )′ ,

• Sequences of k terms are denoted by bold letters and a subscript k, i.e. pk ,


{p(0), p(1), ..., p(k − 1)} and pk in algebraic expressions denotes the vector form
(p(0)′ , p(1)′ , . . . , p(k − 1)′ )′ of the sequence. The same convention will apply to
other sequences. Typically,

◦ A control sequence of k terms: uk , {u(0), u(1), ..., u(k − 1)}; uk in algebraic


expressions denotes (u(0)′ , u(1)′ , . . . , u(k − 1)′ )′ ,
◦ A disturbance sequence of k terms: wk , {w(0), w(1), ..., w(k − 1)}; wk in
algebraic expressions denotes (w(0)′ , w(1)′ , . . . , w(k − 1)′ )′ ,

• An infinite sequence of variables is denoted by v(·) , {v(0), v(1), . . .}, where


v(k), k ∈ N is the k th element in the sequence,

• The set of all infinite sequences MV , {v(·), v(k) ∈ V, k ∈ N} is the set of all
infinite sequences whose elements take values in V ⊆ Rn (equivalently MV is the
set of all maps v : N → V),

• A control policy (sequence of control laws) is denoted by π ,


{µ0 (·), µ1 (·), . . . , µN −1 (·)},

• A control policy over horizon k is denoted by πk , {µ0 (·), µ1 (·), . . . , µk−1 (·)},

• φ(i; x, u) is the solution of the difference equation x+ = f (x, u) at time i if the


initial state is x and the control sequence is u,

• φ(i; x, u, w) denotes the solution of the difference equation x+ = f (x, u, w) at time


i if the initial state is x at time 0, the control sequence is u and the disturbance
sequence is w,

• φ(i; x, π, w) denotes the solution of the difference equation x+ = f (x, u, w) at time


i if the initial state is x at time 0, the control policy is π and the disturbance
sequence is w,

• A unit vector of appropriate length is denoted by 1,

• Identity matrix is denoted by I,

• A zero vector2 of appropriate length is denoted by 0,

• The set of non-negative integers is denoted by N , {0, 1, 2, ...}. We write N+ , {1, 2, ...}. Also N[a,b] , {a, a + 1, . . . , b − 1, b}, 0 ≤ a ≤ b, a, b ∈ N; we will use the following shorthand notation Nq , N[0,q] as well as N+q , N[1,q] .

• The product of the sets A and B is denoted by A × B,


2
Sometimes 0 is used to denote appropriate zero matrix, but this is clear from the context.

• Ak denotes Ak , A × A × ... × A,

• Given a set U ⊆ Rn , 2U denotes the power set (set of all subsets) of U,

• Minkowski set addition is denoted by ⊕, so that X ⊕ Y , {x + y | x ∈ X , y ∈ Y}


for X ⊆ Rn and Y ⊆ Rn ,

• Minkowski set addition of a collection of sets A , {Ai ⊆ Rn , i ∈ N+p } is denoted by ⊕_{i=1}^{p} Ai , A1 ⊕ A2 ⊕ ... ⊕ Ap ,

• Pontryagin set difference is denoted by ⊖, so that X ⊖ Y , {z | z ⊕ Y ⊆ X },

• A closed hyperball in Rn is denoted by Bnp (r) , {x ∈ Rn | |x|p ≤ r} (r > 0),

• A closed hyperball in Rn centered at z ∈ Rn is denoted by Bnp (r, z) , {x ∈ Rn |


|x − z|p ≤ r} (r > 0),

• If A ⊆ Rn is a given set, then interior(A) denotes its interior and closure(A) denotes its closure,

• Given a set of points {vi ∈ Rn | i ∈ Np }, co{vi | i ∈ Np } denotes its convex hull.

Basic Definitions and Mathematical Preliminaries:

The following definitions and mathematical preliminaries are provided here in order
to keep the exposition of material in the sequel simpler to follow.
Given two sets A and B the following are basic set operations [KF57, KF70]:

◦ Set Intersection: A ∩ B , {x | x ∈ A and x ∈ B},

◦ Set Union: A ∪ B , {x | x ∈ A or x ∈ B},

◦ Set Complement: Ac , {x | x ∉ A},

◦ Set Difference: A \ B , {x | x ∈ A and x ∉ B},

◦ Symmetric Set Difference: A △ B , (A \ B) ∪ (B \ A) = {x | (x ∈ A and x ∉ B) or (x ∈ B and x ∉ A)},

These basic set concepts are easy to realize algorithmically for polyhedrons and polytopes.

Definition 1.1 (Polyhedron) A polyhedron is the intersection of a finite number of open


and/or closed half-spaces. (A polyhedron is a convex set).

Definition 1.2 (Polytope) A polytope is a closed and bounded polyhedron.

Remark 1.1 (H representation of a polytope) If Z ⊆ Rn is a polytopic or polyhedral set,


then z ∈ Z ⇔ Cz z ≤ cz for some matrix Cz and a vector cz of appropriate dimensions.

Definition 1.3 (Polygon) A polygon is the union of a finite number of polyhedra. (A polygon is a possibly non-convex set).

Definition 1.4 (Projection) Given a set Ω ⊂ C ×D, the projection of Ω onto C is defined
as ProjC (Ω) , {c ∈ C | ∃d ∈ D such that (c, d) ∈ Ω }.
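For polyhedra given in halfspace form, the projection of Definition 1.4 can be computed by eliminating variables one at a time (Fourier–Motzkin elimination). The minimal Python sketch below, written for this document and not taken from the thesis, eliminates the last coordinate of {(c, d) | A (c, d) ≤ b} and returns a halfspace description of the projection onto the remaining coordinates (redundant inequalities are not removed).

import numpy as np

def fourier_motzkin_step(A, b):
    # Project {x | A x <= b} onto its first n-1 coordinates by eliminating x_n.
    pos = [i for i in range(len(b)) if A[i, -1] > 0]
    neg = [i for i in range(len(b)) if A[i, -1] < 0]
    zero = [i for i in range(len(b)) if A[i, -1] == 0]
    rows, rhs = [A[i, :-1] for i in zero], [b[i] for i in zero]
    for i in pos:            # combine every upper bound on x_n ...
        for j in neg:        # ... with every lower bound on x_n
            rows.append(-A[j, -1] * A[i, :-1] + A[i, -1] * A[j, :-1])
            rhs.append(-A[j, -1] * b[i] + A[i, -1] * b[j])
    return np.array(rows), np.array(rhs)

# Projection of the triangle {(c, d) | d >= 0, c + d <= 1, -c + d <= 1} onto the c-axis.
A = np.array([[0.0, -1.0], [1.0, 1.0], [-1.0, 1.0]])
b = np.array([0.0, 1.0, 1.0])
print(fourier_motzkin_step(A, b))   # two inequalities describing -1 <= c <= 1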
Chapter 2 of this thesis is concerned with finding invariant approximations of the
robust positively invariant sets. An adequate measure for determining whether one set
is a good approximation of another set, is the well-known Hausdorff metric:

Definition 1.5 (Hausdorff metric) If Ω and Φ are two non-empty, compact sets in Rn ,
then the Hausdorff metric is defined as
dpH (Ω, Φ) , max { sup_{ω∈Φ} d(ω, Ω), sup_{φ∈Ω} d(φ, Φ) },

where d(z, Z) , inf_{y∈Z} |z − y|p .

Remark 1.2 (Hausdorff metric fact) For Ω and Φ, both non-empty, compact sets in Rn ,
Ω = Φ if and only if dpH (Ω, Φ) = 0. It is also useful to note that dpH (Ω, Φ) is the size of
the smallest norm-ball that can be added to Ω in order to cover Φ and vice versa, i.e.

dpH (Ω, Φ) = inf { ε ≥ 0 | Φ ⊆ Ω ⊕ Bnp (ε) and Ω ⊆ Φ ⊕ Bnp (ε) }.

Given this last observation and the fact that a family of compact sets in Rn , equipped
with Hausdorff metric, is a complete metric space [Aub77], we will use the Hausdorff
metric to talk about convergence of a sequence of compact sets:
Definition 1.6 (Limit of a sequence of sets) An infinite sequence of non-empty, compact
sets {Ω1 , Ω2 . . .}, where each Ωi ⊂ Rn , is said to converge to a non-empty, compact set
Ω ⊂ Rn if dpH (Ω, Ωi ) → 0 as i → ∞.

Definition 1.7 (Increasing/decreasing sequences of sets) A sequence of non-empty sets


{Ω1 , Ω2 , . . .}, where each Ωi ⊂ Rn , is decreasing if Ωi+1 ⊆ Ωi for all i ∈ N+ . Similarly,
the sequence of sets is increasing if Ωi+1 ⊇ Ωi for all i ∈ N+ .
In the sequel (Chapter 2), we will be generating sequences of outer and inner approx-
imations of the minimal and maximal robust positively invariant sets, respectively. This
motivates the following definition:
Definition 1.8 (ε-approximations) Given a scalar ε ≥ 0, the set Φ ⊂ Rn is said to be
an ε-outer approximation to the set Ω ⊂ Rn if Ω ⊆ Φ ⊆ Ω ⊕ Bnp (ε). The set Ψ ⊂ Rn is
said to be an ε-inner approximation of the set Ω ⊂ Rn if Ψ ⊆ Ω ⊆ Ψ ⊕ Bnp (ε).
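As a purely numerical illustration of the two notions above (it plays no role in the developments that follow), the Hausdorff distance between two compact sets can be approximated by sampling points from each set and combining the two directed distances exactly as in Definition 1.5. The sketch below assumes the SciPy package and uses the Euclidean norm (p = 2); the sampled sets are illustrative.

```python
# A minimal numerical sketch (illustration only): approximating d_H(Omega, Phi)
# for two compact sets by sampling points from each and combining the two
# directed distances, as in Definition 1.5. Assumes SciPy; uses p = 2.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
omega = rng.uniform(-1.0, 1.0, size=(500, 2))        # samples of Omega (a box)
phi = 1.1 * rng.uniform(-1.0, 1.0, size=(500, 2))     # samples of Phi (a slightly larger box)

d1, _, _ = directed_hausdorff(omega, phi)   # sup over points of Omega of d(., Phi)
d2, _, _ = directed_hausdorff(phi, omega)   # sup over points of Phi of d(., Omega)
d_H = max(d1, d2)
# If, in addition, Omega is contained in Phi, then d_H <= eps implies that Phi is
# an eps-outer approximation of Omega in the sense of Definition 1.8.
print(d_H)
```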

We recall the following definition:
Definition 1.9 (Support function) The support function of a set Π ⊂ Rn , evaluated at
z ∈ Rn , is defined as
h(Π, z) , sup_{π∈Π} z^T π.

Remark 1.3 (Support function of a polytope) Clearly, if Π is a polytope (bounded and


closed polyhedron), then h(Π, z) is finite. Furthermore, if Π is described by a finite set of
affine inequality constraints, then h(Π, z) can be computed by solving a linear program
(LP).
Our main interest in the support function is the well-known fact that the support
function of a set allows one to write equivalent conditions for the set to be a subset of
another. In particular:
Proposition 1.1 (Support function and subset test) Let Π be a non-empty set in Rn
and the polyhedron

Ψ = { ψ ∈ Rn | fi^T ψ ≤ gi , i ∈ I },

where fi ∈ Rn , gi ∈ R and I is a finite index set.

(i) Π ⊆ Ψ if and only if h(Π, fi ) ≤ gi for all i ∈ I.

(ii) Π ⊆ interior(Ψ) if and only if h(Π, fi ) < gi for all i ∈ I.
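As an aside, when Π is itself a polytope given in H–representation, each support function value in Proposition 1.1 is obtained from a single LP (cf. Remark 1.3). The following fragment, which assumes SciPy's linprog and a bounded Π, is one possible rendering of the subset test in part (i); it is an illustrative sketch, not the implementation used later in this thesis.

```python
# A hedged sketch of Proposition 1.1(i) for a polytopic Pi = {x | C x <= c}:
# Pi ⊆ Psi = {x | f_i' x <= g_i, i in I} iff h(Pi, f_i) <= g_i for every i,
# and each h(Pi, f_i) is the optimal value of one LP. Assumes SciPy.
import numpy as np
from scipy.optimize import linprog

def support(C, c, z):
    """h(Pi, z) = max { z'x : C x <= c }, assuming Pi is bounded (a polytope)."""
    n = C.shape[1]
    res = linprog(-z, A_ub=C, b_ub=c, bounds=[(None, None)] * n)
    assert res.success
    return -res.fun

def is_subset(C, c, F, g, tol=1e-9):
    """Proposition 1.1(i); requiring strict inequalities instead gives part (ii)."""
    return all(support(C, c, f_i) <= g_i + tol for f_i, g_i in zip(F, g))

# Example: the unit infinity-norm ball is contained in the ball of radius 2.
C = np.vstack([np.eye(2), -np.eye(2)]); c = np.ones(4)
F = np.vstack([np.eye(2), -np.eye(2)]); g = 2.0 * np.ones(4)
print(is_subset(C, c, F, g))   # True
```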

The following result is a standard observation [KM03a, L0̈3b]:


Proposition 1.2 (Maximum of a linear function in the unit hypercube)

max_{x∈Bn∞ (1)} c′ x = |c|1

The following result allows one to compute the support function of a set that
is the Minkowski sum of a finite sequence of linear maps of non-empty, compact
sets [RKKM04a].
Proposition 1.3 (Support function of Minkowski addition of a finite collection of poly-
topes) Let each matrix Lk ∈ Rn×m and each Φk be a non-empty, compact set in Rm for
all k ∈ {1, . . . , K}. If
Π = ⊕_{k=1}^{K} Lk Φk , (1.6.1)
then
h(Π, z) = Σ_{k=1}^{K} max_{φ∈Φk} (z^T Lk )φ. (1.6.2)

Furthermore, if Φk = Bm∞ (1), then

max_{φ∈Φk} (z^T Lk )φ = |Lk^T z|1 . (1.6.3)

Proof: The result follows immediately from the fact that if π , π1 + · · · + πK , where each πk ∈ Lk Φk , then h(Π, z) = max { z^T π | π ∈ Π } = max { z^T (π1 + · · · + πK ) | πk ∈ Lk Φk , k = 1, . . . , K } = Σ_{k=1}^{K} max { z^T πk | πk ∈ Lk Φk }. The last equality follows from the fact that the constraints on πk are independent of the constraints on πl for all k ≠ l. Noting that max { z^T πk | πk ∈ Lk Φk } = max { z^T Lk φk | φk ∈ Φk }, it follows that (1.6.2) holds. The fact that (1.6.3) holds follows from Proposition 1.2 and can be proven in a similar manner [KM03a, Prop. 2].

QeD.

Remark 1.4 (Computational remark regarding support function of Minkowski addition


of a finite collection of polytopes) Clearly, if all the Φk in the above result are polytopes,
then the computation of the value of the support function in (1.6.2) can be done by
solving K LPs. However, it is useful to note that if any Φk is a hypercube (∞-norm
ball), then the value of the support function of Π can be computed faster by evaluating
the explicit expression in (1.6.3).
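To make the remark concrete, the following fragment evaluates (1.6.2) for the special case in which every Φk is a unit hypercube, so that each term reduces to the explicit 1-norm expression (1.6.3) and no LP is needed; for general polytopic Φk each term would instead be one LP, as in the earlier sketch. This is an illustrative fragment only.

```python
# Support function of Pi = L_1 B_inf(1) (+) ... (+) L_K B_inf(1) via (1.6.3):
# each term of (1.6.2) equals |L_k' z|_1, so h(Pi, z) is a plain sum of 1-norms.
import numpy as np

def support_of_sum(L_list, z):
    return sum(np.abs(L.T @ z).sum() for L in L_list)

# Example: Pi = W (+) A W with W the unit hypercube in R^2 (so L_1 = I, L_2 = A).
A = np.array([[0.5, 0.2], [0.0, 0.4]])
print(support_of_sum([np.eye(2), A], np.array([1.0, 0.0])))   # = |z|_1 + |A' z|_1
```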
Our last preliminary definition is:
Definition 1.10 (Smallest/largest hypercube) Let Ψ be a non-empty, compact set in Rn
containing the origin. The size of the largest hypercube in Ψ is defined as

βin (Ψ) , max_r {r ≥ 0 | Bn∞ (r) ⊆ Ψ } (1.6.4)

and the size of the smallest hypercube containing Ψ is defined as

βout (Ψ) , min_r {r ≥ 0 | Ψ ⊆ Bn∞ (r) } . (1.6.5)

1.7 Preliminary Background


Our next step is to provide a set of basic and preliminary results regarding the main
topics of this thesis. These topics have been the subject of study by many authors who have
made important contributions. However, our intention is to provide only the preliminaries
necessary to enable the reader to follow the development of our results in the sequel of
this thesis.

1.7.1 Basic Stability Definitions

The seminal work by A. M. Lyapunov, reported in his PhD thesis and published as
a book [Lya92], established a fundamental theory of stability of motions. Lyapunov
theory has been recognized as a fundamental tool for establishing stability of an equi-
librium for a system controlled by an MPC controller. Consequently, MPC researchers

have adopted the theory of Lyapunov for establishing stabilizing properties of the MPC
schemes. Some of the most influential books on the Lyapunov stability theory are still
classics [Lya66, Hah67, Las76, Lya92]. We recall here a set of the basic definitions needed
in the subsequent chapters of this thesis. In what follows d(x, y) denotes the distance
between two vectors x ∈ Rn and y ∈ Rn and d(x, R) is the distance of a vector x ∈ Rn
from the set R ⊆ Rn .

Definition 1.11 (Stability of the origin) The origin is Lyapunov stable for system x+ =
f (x) if, for all ε > 0, there exists a δ > 0 such that any solution x(·) of x+ = f (x) with
initial state satisfying d(x(0), 0) ≤ δ satisfies d(x(i), 0) ≤ ε for all i ∈ N+ .

Definition 1.12 (Asymptotic (Finite–Time) Attractivity of the origin) The origin is


asymptotically (finite-time) attractive for system x+ = f (x) with domain of attraction
X if, for all x(0) ∈ X , any solution x(·) of x+ = f (x) satisfies d(x(t), 0) → 0 as t → ∞
(x(j) = 0, j ≥ k for some finite k).

Definition 1.13 (Exponential (Finite–Time) Attractivity of the origin) The origin is


exponentially (finite-time) attractive for system x+ = f (x) with domain of attraction X
if there exist two constants c > 1 and γ ∈ (0, 1) such that for all x(0) ∈ X , any solution
x(·) of x+ = f (x) satisfies d(x(t), 0) ≤ cγ t d(x(0), 0), ∀t ∈ N+ .

Definition 1.14 (Asymptotic (Finite–Time) Stability of the origin) The origin is asymp-
totically (finite-time) stable for system x+ = f (x) with a region of attraction X if it is
stable and asymptotically (finite-time) attractive for system x+ = f (x) with domain of
attraction X .

Definition 1.15 (Exponential (Finite–Time) Stability of the origin) The origin is


exponentially(finite-time) stable for system x+ = f (x) with a region of attraction X if it
is stable and exponentially (finite-time) attractive for system x+ = f (x) with domain of
attraction X .
In the case when the additive disturbances are present, convergence to a set (rather
than to the origin) is the best that can be hoped for.

Definition 1.16 (Stability of the set R) A set R is robustly stable for system x+ =
f (x, w), w ∈ W if, for all ε > 0, there exists a δ > 0 such that any solution x(·) of
x+ = f (x, w), w ∈ W with initial state satisfying d(x(0), R) ≤ δ satisfies d(x(i), R) ≤ ε
for all i ∈ N+ and for all admissible disturbance sequences w(·) ∈ MW .

Definition 1.17 (Asymptotic (Finite–Time) Attractivity of the set R) The set R is


asymptotically (finite-time) attractive for system x+ = f (x, w), w ∈ W with domain
of attraction X if, for all x(0) ∈ X , any solution x(·) of x+ = f (x, w), w ∈ W satisfies

d(x(t), R) → 0 as t → ∞ (x(j) ∈ R, j ≥ k for some finite k) for all admissible disturbance
sequences w(·) ∈ MW .

Definition 1.18 (Exponential (Finite–Time) Attractivity of the set R) The set R is


exponentially (finite-time) attractive for system x+ = f (x, w), w ∈ W with domain of
attraction X if there exist two constants c > 1 and γ ∈ (0, 1) such that for all x(0) ∈ X ,
any solution x(·) of x+ = f (x, w), w ∈ W satisfies d(x(t), R) ≤ cγ t d(x(0), R), ∀t ∈ N+
(x(j) ∈ R, j ≥ k for some finite k) for all admissible disturbance sequences w(·) ∈ MW .

Definition 1.19 (Asymptotic (Finite–Time) Stability of the set R) The set R is as-
ymptotically (finite-time) stable for system x+ = f (x, w), w ∈ W with a region
of attraction X if it is stable and asymptotically (finite-time) attractive for system
x+ = f (x, w), w ∈ W with domain of attraction X .

Definition 1.20 (Exponential (Finite–Time) Stability of the set R) The set R is expo-
nentially (finite-time) stable for system x+ = f (x, w), w ∈ W with a region of attraction
X if it is stable and exponentially (finite-time) attractive for system x+ = f (x, w), w ∈ W
with domain of attraction X .
We also clarify our use of the term Lyapunov function. We first recall the definition
of the class-K and class-K∞ functions.
Definition 1.21 ( Class-K and class-K∞ functions) A function α : R+ → R+ is said
to be of class-K if it is continuous, zero at zero and strictly increasing. The function
α : R+ → R+ is said to be of class-K∞ if it is a class-K function and it is unbounded.
Next we give the definition of the Lyapunov function.
Definition 1.22 (A (converse) Lyapunov function) A continuous function V : Rn → R+
is said to be a (converse) Lyapunov function for x+ = f (x) if there exist functions
α1 (·), α2 (·), α3 (·) ∈ K∞ such that for all x ∈ Rn :

α1 (|x|) ≤ V (x) ≤ α2 (|x|)

and
V (f (x)) ≤ V (x) − α3 (|x|).

1.7.2 Set Invariance and Reachability Analysis

The theory of set invariance plays a fundamental role in the control of constrained
systems and it has been a subject of research by many authors – see for exam-
ple [Ber71, Ber72, BR71b, Aub91, De 94, De 97, Tan91, GT91, KG98, Bit88b, Bit88a,
Las87, Las93, Bla94, Bla99, Ker00, Kou02]. Set invariance theory is concerned with the
problems of controllability to a target set and computation of robust control invariant
sets for systems subject to constraints and persistent, unmeasured disturbances. The

interested reader is referred to an excellent and comprehensive survey paper [Bla99] for
an introduction to this field and a set of relevant references. The importance of set invariance
in predictive control has been recognized by the control community; an appropriate illus-
tration of the relevance of set invariance in model predictive control can be found in a
remarkable thesis [Ker00] (see also for instance [Bla99, May01] for additional discussion
of the importance of set invariance in robust control of constrained systems). We will
introduce the crucial concepts of the theory of set invariance since we will present, in the
sequel, a set of novel results that complement and improve upon existing results. We will
not attempt to provide a detailed account of the history of developments and all existing
results in set invariance theory. Instead, we will provide an appropriate comparison and
discussion with respect to our results, as we develop them.

Set Invariance and Reachability Analysis – Basic Background

We consider the following time–invariant, uncertain discrete time system:

x+ = f (x, u, w) (1.7.1)

where x ∈ Rn is the current state (assumed to be measured), x+ is the successor state,


u ∈ Rm is the input and w ∈ Rp is an unmeasured, persistent disturbance. The system
is subject to constraints:
(x, u, w) ∈ X × U × W (1.7.2)

where the sets U and W are compact (i.e. closed and bounded) and X is closed. A
standing assumption is that the system f : Rn × Rm × Rp → Rn is uniquely defined. We
first give the following definition:
Definition 1.23 (Robust Control Invariant Set) A set Ω ⊆ Rn is a robust control invari-
ant (RCI) set for system x+ = f (x, u, w) and constraint set (X, U, W) if Ω ⊆ X and for
every x ∈ Ω there exists a u ∈ U such that f (x, u, w) ∈ Ω, ∀w ∈ W.
If the system does not have input and/or there is no disturbance the concept of RCI
set is replaced by robust positively invariant (RPI) set (system does not have input)
or by control invariant (CI) set (disturbance is absent from the system equation) or,
finally, by positively invariant (PI) set (system does not have input and disturbance is
not present). We provide the corresponding definitions for these cases, but we remind
the reader that we assume in our definitions that the corresponding function defining the
system dynamics is uniquely defined over appropriate domains. The definition of a RPI
set is given next.
Definition 1.24 (Robust Positively Invariant Set) A set Ω ⊆ Rn is a robust positively
invariant (RPI) set for system x+ = f (x, w) and constraint set (X, W) if Ω ⊆ X and
f (x, w) ∈ Ω, ∀w ∈ W for every x ∈ Ω.
A control invariant set is defined as follows:
Definition 1.25 (Control Invariant Set) A set Ω ⊆ Rn is a control invariant (CI) set
for system x+ = f (x, u) and constraint set (X, U) if Ω ⊆ X and for every x ∈ Ω there
exists a u ∈ U such that f (x, u) ∈ Ω.

Finally, the definition of a PI set is:
Definition 1.26 (Positively Invariant Set) A set Ω ⊆ Rn is a positively invariant (PI)
set for system x+ = f (x) and constraint set X if Ω ⊆ X and f (x) ∈ Ω for every x ∈ Ω.

Remark 1.5 (Invariance property and constraints) Note that in our definitions of the
invariant sets we have stressed dependence on the corresponding constraint set. The
main reason for this is to allow for a more natural and simpler development of the results
in the subsequent chapters.
From this point, we will continue to present further preliminaries only for the cases
of the RCI and RPI sets, since the analogous discussion for the CI and PI sets is straight
forward. We recall an important concept in set invariance, the maximal robust control
invariant set contained in a given set Ω ⊆ X:
Definition 1.27 (Maximal Robust Control Invariant Set) A set Φ ⊆ Ω is a maximal ro-
bust control invariant (MRCI) set for system x+ = f (x, u, w) and constraint set (X, U, W)
if Φ is RCI set for system x+ = f (x, u, w) and constraint set (X, U, W) and Φ contains
all RCI sets contained in Ω.
Similarly, maximal robust positively invariant set contained in a given set Ω ⊆ X is
defined:

Definition 1.28 (Maximal Robust Positively Invariant Set) A set Φ ⊆ Ω is a maximal


robust positively invariant (MRPI) set for system x+ = f (x, w) and constraint set (X, W)
if Φ is RPI set for system x+ = f (x, w) and constraint set (X, W) and Φ contains all RPI
sets contained in Ω.
Next concept to be introduced is the minimal robust positively invariant set:

Definition 1.29 (Minimal Robust Positively Invariant Set) A set Φ ⊆ Ω is a minimal


robust positively invariant (mRPI) set for system x+ = f (x, w) and constraint set (X, W)
if and only if Φ is RPI set for system x+ = f (x, w) and constraint set (X, W) and is
contained in all RPI sets contained in Ω.
It is shown in [KG98] that when f (·) is linear and stable, the mRPI set exists and is
unique.

Remark 1.6 (Minimal Robust Positively Invariant Set and Minimal Robust Control
Invariant Set) It is important to observe that defining the mRPI set is relatively simple
while, in contrast, defining the minimal robust control invariant set introduces a number
of subtle technical issues, such as: non–uniqueness, existence and a measure of minimality.
However, it is possible to introduce the concept of the RCI set contained in a minimal p
norm ball.
Before introducing the concept of the N – step predecessor set, we define the set ΠN (x)
of admissible control policies (recall that a control policy is a sequence of control laws

πN = {µ0 (·), µ1 (·), . . . , µN −1 (·)}):

ΠN (x) , {πN | (φ(i; x, πN , wN ), µi (φ(i; x, πN , wN ))) ∈ X × U, i ∈ NN −1 ,


φ(N ; x, πN , wN ) ∈ Ω, ∀wN ∈ WN } (1.7.3)

Definition 1.30 (The N – step (robust) predecessor set) Given the non-empty set Ω ⊂
Rn , the N – step predecessor set PreN (Ω) for system x+ = f (x, u, w) and constraint set
(X, U, W), where N ∈ N+ , is:

PreN (Ω) = {x | ΠN (x) ≠ ∅ } (1.7.4)

where ΠN (x) is defined in (1.7.3).


The predecessor set is defined as Pre(Ω) , Pre1 (Ω) and by convention we define
Pre0 (Ω) , Ω.
Remark 1.7 (The predecessor set) It follows immediately that

Pre(Ω) = {x ∈ X | ∃u ∈ U such that f (x, u, w) ∈ Ω, ∀w ∈ W } (1.7.5)

and that operator Pre satisfies the semi–group property, since:

PreN (Ω) = Pre(PreN −1 (Ω)) (1.7.6)

for all N ∈ N+ .

Definition 1.31 (The N – step (disturbed) reachable set) Given the non-empty set
Ω ⊂ Rn , the N -step reachable set for system x+ = f (x, u, w) and constraint set (X, U, W),
where N ∈ N+ , is defined as

ReachN (Ω) , { φ(N ; x, uN , wN ) | x ∈ Ω, uN ∈ UN , wN ∈ WN } . (1.7.7)

The set of states reachable from Ω in 0 steps is defined as Reach0 (Ω) , Ω and the
reach set is defined by Reach(Ω) , Reach1 (Ω)

Remark 1.8 (The reach set) It is clear that:

Reach(Ω) = {y | y = f (x, u, w), x ∈ Ω, u ∈ U, w ∈ W } . (1.7.8)

and that Reach operator satisfies semi–group property, because:

ReachN (Ω) = Reach(ReachN −1 (Ω)) (1.7.9)

for all N ∈ N+ .
Finally we recall a well-known recursive procedure for computing the maximal RCI
set C∞ , for system x+ = f (x, u, w) and constraint set (X, U, W), contained in a given set
Ω ⊆ Rn [Ber71, Ber72, BR71b, Aub91, Ker00]:

C0 = Ω, Ci = Pre(Ci−1 ), ∀i ∈ N+ (1.7.10a)

The maximal RCI set C∞ is given by

C∞ = ∩_{i=0}^{∞} Ci . (1.7.10b)

The set sequence {Ci } is a monotonically non–increasing set sequence, i.e. Ci+1 ⊆ Ci
for all i ∈ N.

Remark 1.9 (The computation of C∞ ) Clearly, it is difficult to calculate C∞


from (1.7.10b). However, if there exists a finite index t ∈ N such that C∞ = Ct , then C∞
is said to be finitely determined. In fact, it is easy to show the well-known fact that if
Ci+1 = Ci for some finite i ∈ N, then C∞ = Ci .
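The recursion (1.7.10) translates directly into a fixed-point iteration. The sketch below is purely conceptual: the one-step predecessor operation and the set-equality test are placeholders that would, in practice, be supplied by a set-computation library for the system class at hand (e.g. polytopic computations for linear systems).

```python
# A conceptual sketch of (1.7.10): C_0 = Omega, C_i = Pre(C_{i-1}), stopping when
# C_{i+1} = C_i (finite determination, cf. Remark 1.9). The arguments pre_set and
# sets_equal are hypothetical placeholders for library-provided set operations.
def maximal_rci_set(omega, pre_set, sets_equal, max_iter=1000):
    C = omega
    for _ in range(max_iter):
        C_next = pre_set(C)          # one-step robust predecessor set, Pre(C)
        if sets_equal(C_next, C):    # C_{i+1} = C_i  =>  C_infinity = C_i
            return C
        C = C_next
    raise RuntimeError("maximal RCI set not finitely determined within max_iter iterations")
```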
It is also clear that a set of minor modifications in Definition 1.30, Definition 1.31
and the recursive algorithm for computation of MRCI set, is necessary if the system does
not have an input and/or the disturbance is not present. It is hopefully clear how these
modifications can be made.

1.7.3 Optimal Control of constrained discrete time systems

Here we provide a basic mathematical formulation of model predictive control by recalling


the relevant material presented in [May01].
Deterministic Case – Model Predictive Control
Consider the control of nonlinear discrete time systems described by:

x+ = f (x, u) (1.7.11)

where x ∈ Rn is the current state (assumed to be measured), x+ is the successor state


and u ∈ Rm is the input. The system is subject to constraints:

(x, u) ∈ X × U (1.7.12)

The control objectives are asymptotic or exponential stability, minimization of a performance
criterion and satisfaction of the constraints (1.7.12). The model predictive controller employs
the solution of a finite horizon optimal control problem as follows. The cost function is:
VN (x, u) , Σ_{i=0}^{N−1} ℓ(xi , ui ) + Vf (xN ) (1.7.13)

where, for each i ∈ N, xi , φ(i; x, u), u = {u0 , u1 , . . . , uN −1 } is a control sequence and


N is horizon. The optimal control problem incorporates constraints:

(xi , ui ) ∈ X × U, ∀i ∈ NN −1 and xN ∈ Xf (1.7.14)

The terminal cost and terminal constraint set Xf are additional ingredients introduced
in order to ensure closed–loop stability. The constraints (1.7.14) constitute an implicit
constraint on u that is required to lie in the set UN (x) defined by:

UN (x) , {u | (φ(i; x, u), ui ) ∈ X × U, i ∈ NN −1 , φ(N ; x, u) ∈ Xf } (1.7.15)

The set UN (x) is a set of admissible control sequences for a given state x. The resultant
optimal control problem is:

PN (x) : min_u {VN (x, u) | u ∈ UN (x)} (1.7.16)

The solution to PN (x), if it exists, yields the corresponding optimizing control sequence:

u0 (x) = {u00 (x), u01 (x), . . . , u0N −1 (x)} (1.7.17)

and the associated optimal state trajectory is:

x0 (x) , {x00 (x), x01 (x), . . . , x0N −1 (x), x0N (x)} (1.7.18)

where, for each i, x0i (x) , φ(i; x, u0 (x)). The value function for PN (x) is:

VN0 (x) = VN (x, u0 (x)) (1.7.19)

The set of states that are controllable, i.e. the domain of the value function VN0 (·), is:

XN , {x | UN (x) ≠ ∅} (1.7.20)

The model predictive controller requires PN (x) to be solved at each event (x, i) (i.e. state
x at time i) in order to obtain the minimizing control sequence u0 (x); the control applied
to the system is u00 (x). Hence, model predictive control implements an implicit control law
κN (·) defined by:
κN (x) , u00 (x) (1.7.21)

The control law κN (·) is time–invariant if the system controlled is time–invariant.
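For concreteness, the fragment below sketches the receding-horizon scheme just described for a linear system x+ = Ax + Bu with a quadratic stage cost, box constraints and the (conservative) terminal choice Xf = {0}. It assumes the cvxpy package; the system data and horizon are illustrative, and the fragment is only a rendering of the mechanism, not the formulation studied in this thesis.

```python
# A minimal receding-horizon sketch: at each state x, solve P_N(x) and apply the
# first element of the optimizing sequence, kappa_N(x) = u_0^0(x). Assumes cvxpy.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R, N = np.eye(2), np.eye(1), 8
x_max, u_max = 5.0, 1.0

def mpc_control(x0):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x0, x[:, N] == 0]     # X_f = {0} for simplicity
    for i in range(N):
        cost += cp.quad_form(x[:, i], Q) + cp.quad_form(u[:, i], R)
        constr += [x[:, i + 1] == A @ x[:, i] + B @ u[:, i],
                   cp.norm(x[:, i], "inf") <= x_max,
                   cp.norm(u[:, i], "inf") <= u_max]
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u[:, 0].value                                  # kappa_N(x0)

# Closed loop: P_N is re-solved at every state reached (the event (x, i)).
x = np.array([2.0, 0.0])
for _ in range(15):
    x = A @ x + B @ mpc_control(x)
```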


Deterministic Case – Dynamic Programming Solution and Model Predictive
Control
The previous formulation of model predictive control is emphasized by many researchers.
A relevant observation that connects the dynamic programming and model predictive
control, highlighted in [May01], is given next.
Consider the following standard dynamic programming recursion:

Vi0 (x) , min_{u∈U} { ℓ(x, u) + V^0_{i−1}(f (x, u)) | f (x, u) ∈ Xi−1 } (1.7.22a)

κi (x) , arg min_{u∈U} { ℓ(x, u) + V^0_{i−1}(f (x, u)) | f (x, u) ∈ Xi−1 } (1.7.22b)

Xi , {x ∈ X | ∃u ∈ U such that f (x, u) ∈ Xi−1 } (1.7.22c)

with boundary conditions:


V00 (·) = Vf (·), X0 = Xf (1.7.23)

The sets Xi for i ∈ NN are the domains of the value functions Vi0 (x). Conventional
optimal control employs the time–varying control law:

ui = κN −i (xi ) (1.7.24)

for i ∈ NN and the control is undefined for i > N . In contrast to optimal control, model
predictive control employs the time–invariant control law κN (·) defined in (1.7.21); model
predictive control is therefore neither optimal nor necessarily stabilizing.
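A toy numerical rendering of the recursion (1.7.22) is given below for a scalar example with a gridded state space and a finite input set; the dynamics, costs and grid are illustrative assumptions only, and the constraint f (x, u) ∈ Xi−1 is approximated by assigning the value +∞ outside the grid.

```python
# A toy tabular sketch of the dynamic programming recursion (1.7.22):
# V_i(x) = min_u { l(x,u) + V_{i-1}(f(x,u)) : f(x,u) admissible }, V_0 = V_f.
# All data (grid, dynamics, costs) are illustrative assumptions.
import numpy as np

x_grid = np.linspace(-2.0, 2.0, 81)              # gridded state constraint set X
u_set = np.linspace(-1.0, 1.0, 21)               # finite approximation of U
f = lambda x, u: 0.9 * x + u                      # example dynamics
stage = lambda x, u: x**2 + u**2                  # stage cost l(x, u)
V = x_grid**2                                     # V_0 = V_f (terminal cost)

def V_of(xplus):
    # +infinity outside the grid plays the role of the constraint f(x,u) in X_{i-1}
    out = (xplus < x_grid[0]) | (xplus > x_grid[-1])
    return np.inf if out else np.interp(xplus, x_grid, V)

for i in range(10):                               # compute V_i from V_{i-1}
    V = np.array([min(stage(x, u) + V_of(f(x, u)) for u in u_set) for x in x_grid])
# V now approximates V_N^0 on the grid; the minimizing u at each x gives kappa_N(x).
```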
Deterministic Case – Stability of Model Predictive Control
As a result of research by many authors, stability of model predictive control is relatively
well understood. We recall an elementary result given in [May01]. First, recalling a
standard definition of exponential stability (see Definitions 1.11–1.15), we have:
Remark 1.10 (Exponential Stability of the origin) The origin is exponentially stable
(Lyapunov stable and exponentially attractive) for system x+ = f (x) with a region of
attraction XN if there exist two constants c > 0 and γ ∈ (0, 1) such that any solution
x(·) of x+ = f (x) with initial state x(0) ∈ XN satisfies d(x(i), 0) ≤ cγ i d(x(0), 0) for all
i ∈ N+ .
We assume that:

A1 f (·), Vf (·) are continuous, f (0, 0) = 0 and ℓ(x, u) = |x|2Q + |u|2R where Q and R are
positive definite,

A2 X is closed, U and Xf are compact and each set contains the origin in its
interior,

A3 Xf is a control invariant set for the system x+ = f (x, u) and constraint set (X, U),

A4 minu∈U {ℓ(x, u) + Vf (f (x, u)) | f (x, u) ∈ Xf } ≤ Vf (x), ∀x ∈ Xf , i.e. Vf : Xf → R


is a local control Lyapunov function,

A5 Vf (x) ≤ d|x|2 for all x ∈ Xf ,

A6 XN is bounded.

Theorem 1.1. (Exponential Stability Result for MPC) Suppose that A1– A6
are satisfied; then the origin is exponentially stable with domain of attraction XN
(See (1.7.20)).
The proof of this result is standard and it uses the value function VN0 (x) as a candidate
Lyapunov function; the interested reader is referred to [May01] for a more detailed
discussion. It is a relevant observation that if Assumption A3 holds, then:

(i) The set sequence {Xi }, i ∈ NN is a monotonically non–decreasing sequence of


CI sets for systems x+ = f (x, u) and constraint set (X, U), i.e. Xi ⊆ Xi+1 for
all i ∈ NN −1 and each Xi is control invariant set for systems x+ = f (x, u) and
constraint set (X, U).

(ii) The control law κN (·) : XN → U is such that XN is a PI (positively invariant) set
for the system x+ = f (x, κN (x)) and constraint set XκN , {x ∈ XN | κN (x) ∈ U}.

Uncertain Case – Feedback Model Predictive Control


In this case we introduce the robust optimal control problem the solution of which yields
the control action applied to the system. Stability remarks will be postponed, since we
will address this issue in the sequel. We also remark that there are various approaches to
robust model predictive control discussed in an excellent survey paper [MRRS00]. Here
we are concerned with feedback robust model predictive control.
Consider the uncertain nonlinear discrete time systems described by:

x+ = f (x, u, w) (1.7.25)

where w is the disturbance and it models the uncertainty and as before x ∈ Rn is the
current state (assumed to be measured), x+ is the successor state and u ∈ Rm is the
input. The system is subject to constraints:

(x, u, w) ∈ X × U × W (1.7.26)

It is also possible to treat cases when w ∈ W(x, u) as is illustrated in ( [May01]). However,


here we present the case when W(x, u) is constant, i.e. W(x, u) = W. The set X is closed
and the sets U and W are compact, each contains the origin in its interior. The control
objective is asymptotic or exponential stability of a RCI set (that contains the origin
in its interior) while satisfying the constraints (1.7.2). The feedback model predictive
controller employs the solution of a finite horizon robust optimal control problem as
follows. The cost function is:
JN (x, π, w) , Σ_{i=0}^{N−1} ℓ(xi , ui , wi ) + Vf (xN ) (1.7.27)

where, for each i ∈ N, xi , φ(i; x, π, w), ui , µi (φ(i; x, π, w); x), π =


{µ0 (·), µ1 (·), . . . µN −1 (·)} is a control policy (each µi (·) is a control law mapping the
state to control at time i), w = {w0 , w1 , . . . , wN −1 } is a disturbance sequence and N is
horizon. The terminal cost Vf (·) and terminal constraint set Xf are additional ingredi-
ents introduced in order to ensure closed–loop stability; they are discussed in more detail
later. The control policy is required to lie in the set of admissible control policies ΠN (x)
defined by:

ΠN (x) , {π | (φ(i; x, π, w), µi (φ(i; x, π, w); x)) ∈ X × U, i ∈ NN −1 ,


φ(N ; x, π, w) ∈ Xf , ∀w ∈ WN } (1.7.28)

Given a state x, the cost due to policy π is defined to be:

VN (x, π) , sup_w {JN (x, π, w) | w ∈ WN } (1.7.29)

The resultant optimal control problem is:

PR_N (x) : inf_π {VN (x, π) | π ∈ ΠN (x)} (1.7.30)

An equivalent formulation of PR_N (x) is given by the following inf–sup optimal control
problem:

PR_N (x) : inf_{π∈ΠN (x)} sup_{w∈WN} JN (x, π, w). (1.7.31)

The solution to PR_N (x), if it exists, is:

π 0 (x) = {µ00 (·, x), µ01 (·, x), . . . , µ0N −1 (·, x)} (1.7.32)

and the value function for PR_N (x) is:

VN0 (x) = VN (x, π 0 (x)) (1.7.33)

The set of states that are controllable, i.e. the domain of the value function VN0 (·), is:

XN , {x | ΠN (x) ≠ ∅} (1.7.34)

With a slight deviation from the standard approaches in conventional robust MPC
(where the first term in the control policy is a control u0 ), the control action of the robust
model predictive controller is constructed from the solution to PR_N (x) at each event (x, i).
Feedback model predictive control implements an implicit control law κN (·) defined by:

κN (x′ ) , µ00 (x′ , x) (1.7.35)

Uncertain Case – Dynamic Programming Solution and Feedback Model


Predictive Control

The value function VN0 (·) and implicit control law κN (·) can be obtained by dynamic
programming in a similar manner to that for the deterministic model predictive control.
However, in this case an inf–sup dynamic programming recursion is required, due to
the choice of the cost function. The following equations specify the inf–sup dynamic
programming recursion:

Vi0 (x) , inf_{u∈U} sup_{w∈W} { ℓ(x, u, w) + V^0_{i−1}(f (x, u, w)) | f (x, u, W) ⊆ Xi−1 } (1.7.36a)

κi (x) , arg inf_{u∈U} sup_{w∈W} { ℓ(x, u, w) + V^0_{i−1}(f (x, u, w)) | f (x, u, W) ⊆ Xi−1 } (1.7.36b)

Xi , {x ∈ X | ∃u ∈ U such that f (x, u, W) ⊆ Xi−1 } (1.7.36c)

with boundary conditions:


V00 (·) = Vf (·), X0 = Xf (1.7.37)

The notation f (x, u, W) ⊆ Xi−1 means that f (x, u, w) ∈ Xi−1 for all w ∈ W. The sets Xi
for i ∈ NN are the domains of the value functions Vi0 (x). If the set X0 = Xf is a RCI set
then, similarly to the deterministic case, we have:

(i) The set sequence {Xi }, i ∈ NN is a monotonically non–decreasing sequence of RCI
sets for system x+ = f (x, u, w) and constraint set (X, U, W), i.e. Xi ⊆ Xi+1 for all
i ∈ NN −1 and each Xi is a robust control invariant set for system x+ = f (x, u, w) and
constraint set (X, U, W).

(ii) The control law κN (·) : XN → U is such that XN is a RPI (robust positively
invariant) set for the system x+ = f (x, κN (x), w) and constraint set (XκN , W),
where XκN , {x ∈ XN | κN (x) ∈ U}.

1.7.4 Some Set Theoretic Concepts and Efficient Algorithms

Here we provide a set of necessary results for computations with polygons, since the basic
set computations with polyhedra are incorporated and contained in most of the available
computational geometry software such as for example [Ver03, KGBM03].
The first result, which is adapted from [BMDP02, Thm. 3], allows one to compute
the set difference of two polyhedra:
Proposition 1.4 (Set Difference of polyhedra) Let A ⊂ Rn and B , {x ∈ Rn | c′i x ≤ di , i ∈ N+r }
be non-empty polyhedra, where all the ci ∈ Rn and di ∈ R.
If

S1 , { x ∈ A | c′1 x > d1 }, (1.7.38a)

Si , { x ∈ A | c′i x > di , c′j x ≤ dj , ∀j ∈ Ni−1 }, i = 2, . . . , r, (1.7.38b)

then A \ B = ∪_{i∈N+r} Si is a polygon. Furthermore, {Si ≠ ∅ | i ∈ N+r } is a partition of
A \ B.

Proof: See the proof of [BMDP02, Thm. 3].

QeD.

Remark 1.11 (Set Difference of polyhedra – Computational comment) In practice, com-


putation time can be reduced by checking whether A ∩ B is empty or whether A ⊆ B
before actually computing A \ B; if A ∩ B = ∅, then A \ B = A and if A ⊆ B, then
A \ B = ∅. Using an extended version of Farkas’ Lemma [Bla99, Lem. 4.1], checking
whether one polyhedron is contained in another amounts to solving a single LP. Alter-
natively, one can solve a finite number of smaller LPs to check for set inclusion [Ker00,
Prop. 3.4]. Once A \ B has been computed, the memory requirements can be reduced
by removing all empty Si and removing any redundant inequalities describing the non-
empty Si . Checking whether a polyhedron is non-empty can be done by solving a single
linear program (LP). Removing redundant inequalities can be done by solving a finite
number of LPs [Ker00, App. B]. As a result, it is a good idea to determine first whether
or not an Si is non-empty before removing redundant inequalities.
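A possible implementation of Proposition 1.4 along the lines of Remark 1.11 is sketched below; polyhedra are represented by their H-representation (C, d), meaning {x | Cx ≤ d}, emptiness of each piece is checked with a single LP, and the strict inequalities in (1.7.38) are handled with a small tolerance, a common practical relaxation. It assumes SciPy and is an illustration rather than the routine used in this thesis.

```python
# A hedged sketch of Proposition 1.4: A \ B as the union of the pieces S_i
# of (1.7.38). Polyhedra are pairs (C, d) for {x | C x <= d}. Assumes SciPy.
import numpy as np
from scipy.optimize import linprog

def is_empty(C, d):
    """One LP (zero objective) to check whether {x | Cx <= d} is empty."""
    n = C.shape[1]
    res = linprog(np.zeros(n), A_ub=C, b_ub=d, bounds=[(None, None)] * n)
    return not res.success

def poly_set_difference(A, B, eps=1e-9):
    """Non-empty pieces S_i of A \\ B, per (1.7.38)."""
    (CA, dA), (CB, dB) = A, B
    pieces = []
    for i in range(CB.shape[0]):
        # S_i = {x in A | c_i'x > d_i, c_j'x <= d_j for j < i}; the strict
        # inequality is approximated by c_i'x >= d_i + eps.
        C = np.vstack([CA, -CB[i:i + 1], CB[:i]])
        d = np.concatenate([dA, [-dB[i] - eps], dB[:i]])
        if not is_empty(C, d):
            pieces.append((C, d))
    return pieces
```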

The second result allows one to compute the set difference of a polygon and a poly-
hedron:
Proposition 1.5 (Set Difference of a polygon and a polyhedron) Let C , ∪_{j∈N+J} Cj be
a polygon, where all the Cj , j ∈ N+J , are non-empty polyhedra. If A is a non-empty
polyhedron, then

C \ A = ∪_{j∈N+J} (Cj \ A) (1.7.39)

is a polygon.
Proof: This follows trivially from the fact that C \ A = ( ∪_{j∈N+J} Cj ) ∩ Ac = ∪_{j∈N+J} (Cj ∩ Ac ).

QeD.


Remark 1.12 (Structure of Set Difference of a polygon and a polyhedron) If {Cj | j ∈ N+J }
is a partition of C and C \ A ≠ ∅, then {Cj \ A ≠ ∅ | j ∈ N+J } is a partition of C \ A if
Proposition 1.4 is used to compute each polygon Cj \ A, j ∈ N+J .
The following result allows one to compute the set difference of two polygons:
Proposition 1.6 (Set Difference of polygons) Let C , ∪_{j∈N+J} Cj and D , ∪_{k∈N+K} Dk be
polygons, where all the Cj , j ∈ N+J , and Dk , k ∈ N+K , are non-empty polyhedra. If

E0 , C, (1.7.40a)
Ek , Ek−1 \ Dk , ∀k ∈ N+K , (1.7.40b)

then C \ D = EK is a polygon.

Proof: The result follows from noting that

C \ D = C ∩ Dc (1.7.41a)
= C ∩ (∪_{k=1}^{K} Dk )c (1.7.41b)
= C ∩ (∩_{k=1}^{K} Dkc ) (1.7.41c)
= C ∩ D1c ∩ D2c ∩ · · · ∩ DKc (1.7.41d)
= (C ∩ D1c ) ∩ D2c ∩ · · · ∩ DKc (1.7.41e)
= (C \ D1 ) ∩ D2c ∩ · · · ∩ DKc (1.7.41f)
= ((C \ D1 ) \ D2 ) ∩ · · · ∩ DKc (1.7.41g)
= (· · · ((C \ D1 ) \ D2 ) \ · · · ) \ DK (1.7.41h)

and letting E0 , C and Ek , Ek−1 \ Dk , ∀k ∈ N+K , yields the claim.

QeD.

Each polygon Ek−1 \ Dk , k ∈ N+K , can be computed using Proposition 1.5.


Remark 1.13 (Structure of Set Difference of polygons) Note also that if {Cj | j ∈ N+J }
is a partition of C and C \ D ≠ ∅, then the sets which define EK form a partition of C \ D
if Propositions 1.4 and 1.5 were used to compute all the Ek , k ∈ N+K .
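Building on the poly_set_difference sketch given after Remark 1.11, Propositions 1.5 and 1.6 amount to two short loops: the first distributes the difference over the pieces of a polygon, the second applies the recursion (1.7.40) piece by piece. Again, this is only an illustrative sketch.

```python
# Sketches of Propositions 1.5 and 1.6, reusing poly_set_difference() from the
# earlier sketch. A polygon is a list of H-representation polyhedra (C, d).
def polygon_minus_polyhedron(polygon, B):
    # Proposition 1.5: (U_j C_j) \ B = U_j (C_j \ B)
    pieces = []
    for Cj in polygon:
        pieces.extend(poly_set_difference(Cj, B))
    return pieces

def polygon_minus_polygon(C_polygon, D_polygon):
    # Proposition 1.6: E_0 = C and E_k = E_{k-1} \ D_k, so C \ D = E_K
    E = list(C_polygon)
    for Dk in D_polygon:
        E = polygon_minus_polyhedron(E, Dk)
    return E
```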
If C and B are two subsets of Rn it is known that (see for instance [Ser88]) C ⊖ B =
[C c ⊕ (−B)]c . It is important to note that in general C ⊖ B ≠ ∪_{j∈N+J} (Cj ⊖ B), but only
∪_{j∈N+J} (Cj ⊖ B) ⊆ C ⊖ B (set equality holds only in a very limited number of cases).
We propose an alternative result that can be used to implement efficiently the com-
putation of the Pontryagin Difference of a polygon and a polytope.

Proposition 1.7 (Pontryagin Difference between a polygon and a polytope) Let B ,
∪_{j∈N+J} Bj be a polygon, where all the Bj , j ∈ N+J , are non-empty polyhedra and let W
be a polytope. Let C , convh(B), D , C ⊖ W, E , C \ B, F , E ⊕ (−W). Then
G = D \ F = B ⊖ W.

Proof: ‘ D \ F ⊆ B ⊖ W part ’
We begin by noticing that:

D , C ⊖ W = {x | x ⊕ W ⊆ C} = {x | x + w ∈ C, ∀w ∈ W},

and:
E , C \ B = {x | x ∈ C and x ∉ B},

From the definition of the Minkowski sum we write that:

F , E ⊕ (−W) = {z | ∃ x ∈ E, w ∈ (−W) s.t. z = x + w},

We further write that:

F = {z | ∃ x ∈ E, w ∈ W s.t. z = x − w}
= {z | ∃ x ∈ E, w ∈ W s.t. x = z + w}
= {z | ∃ w ∈ W s.t. z + w ∈ E},

therefore we can write that:

F = {x | ∃ w ∈ W s.t. x + w ∈ E},

By definition of set difference we have:

D \ F , {x | x ∈ D and x ∉ F}
= {x ∈ D | ∄ w ∈ W s.t. x + w ∈ E}
= {x ∈ D | x + w ∉ E ∀w ∈ W}.

From the definition of the set D it follows that

D \ F = {x | x + w ∈ C and x + w ∉ E ∀w ∈ W}

But from definition of the set E it follows that:

D \ F = {x | x + w ∈ C and (x + w ∉ C or x + w ∈ B) ∀w ∈ W}
= {x | x + w ∈ C and x + w ∉ C ∀w ∈ W}
∪ {x | x + w ∈ C and x + w ∈ B ∀w ∈ W}
= {x | x + w ∈ B ∀w ∈ W}.

Hence we conclude that x ∈ D \ F ⇒ x ∈ B ⊖ W.


‘ B ⊖ W ⊆ D \ F part ’
Consider now an arbitrary x ∈ B ⊖ W. Notice that this means ∄ w ∈ W such that
x + w ∉ B. Suppose, contrary to what is to be proven, that x ∉ D \ F. Now x ∉ D \ F
means that x ∉ D or x ∈ F. Consider the case when x ∉ D; then there is a w ∈ W such
that x + w ∉ C, implying that x + w ∉ B, which yields a contradiction. Consider the case
when x ∈ F; then there exists a w ∈ W such that x + w ∈ E, implying that x + w ∉ B,
which again reveals a contradiction. Hence we must have x ∈ B ⊖ W ⇒ x ∈ D \ F, which
proves the claim that G = B ⊖ W = D \ F.

QeD.

Remark 1.14 (Structure of Pontryagin Difference between a polygon and a polytope) A


consequence of the previous proposition is that the Pontryagin difference of the union of a
finite set of polytopes and a polytope is the union of a finite set of polytopes, B ⊖ W =
∪_{j∈N+q} ∆j , where the ∆j , j ∈ N+q , are polyhedra, provided it is not an empty set.
An algorithmic implementation of Proposition 1.7 on a sample polygon and polytope
is illustrated in Figure 1.7.1. The proposed method for computation of the Pontryagin
difference is conceptually similar to that proposed in [Ser88]. However, computing the
convex hull in the first step significantly reduces (in general) the number of sets obtained
in step 3, which, in turn, results in fewer Minkowski set additions. Since computation
of Minkowski set addition is expensive, a reasonable runtime improvement is expected.
In principle, computation of the convex hull can be replaced by the computation of any
convex set containing the polytope C. The necessary computations can be efficiently im-
plemented by using standard computational geometry software such as [Ver03, KGBM03].
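The five steps of Proposition 1.7 can be strung together as below. The primitive convex operations (convex hull of a union, Pontryagin difference and Minkowski sum of convex polytopes, reflection through the origin) are assumed to be provided by a polytope library and are passed in under placeholder names; the polygon set difference is that of Propositions 1.4–1.6. This is a conceptual sketch only.

```python
# A conceptual sketch of Proposition 1.7. The `ops` argument is assumed to
# supply library primitives under placeholder names: ops.convex_hull,
# ops.pontryagin_diff, ops.minkowski_sum, ops.negate and ops.polygon_diff
# (set difference of two unions of polytopes, cf. Propositions 1.4-1.6).
def polygon_pontryagin_diff(B_pieces, W, ops):
    C = ops.convex_hull(B_pieces)                      # step 1: C = convh(B)
    D = ops.pontryagin_diff(C, W)                      # step 2: D = C (-) W  (convex case)
    E = ops.polygon_diff([C], B_pieces)                # step 3: E = C \ B    (a polygon)
    F = [ops.minkowski_sum(Ej, ops.negate(W)) for Ej in E]   # step 4: F = E (+) (-W)
    return ops.polygon_diff([D], F)                    # step 5: G = D \ F = B (-) W
```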

[Figure 1.7.1 shows six panels in the (x1 , x2 ) plane illustrating the steps of Proposition 1.7: (a) the polygon ∪j Cj and the polytope B; (b) H = convh(C); (c) D = H ⊖ B; (d) E = H \ C; (e) F = E ⊕ (−B); (f) G = D \ F .]

Figure 1.7.1: Graphical Illustration of Proposition 1.7.

Part I

Advances in Set Invariance and


Reachability Analysis

Chapter 2

Invariant Approximations of
robust positively invariant sets

The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas,
like the colours or the words, must fit together in a harmonious way. Beauty is the first
test: there is no permanent place in the world for ugly mathematics.

– Godfrey Harold Hardy

The motivation for this chapter is that often one would like to determine whether
the state trajectory of the system will be contained in a set X ⊂ Rn , given any allowable
disturbance sequence. It is the main purpose of this chapter to provide methods for com-
putation of invariant approximations of robust positively invariant sets for time invariant
linear discrete time systems subject to bounded disturbances. Particular attention is
given to methods for computation of invariant approximations of the minimal and the
maximal robust positively invariant (RPI) sets.
Finite time computations and explicit characterizations are fundamental problems
related to the computation and characterization of the minimal and the maximal RPI
sets. An appropriate approach to overcome these issues is to attempt to obtain alternative
methods by which appropriate and arbitrarily close robust positively invariant approximations
of these sets can be computed or characterized in finite time. This chapter presents methods
for the computation of a robust positively invariant ε-outer approximation of the minimal
RPI set. Furthermore, a new recursive algorithm that calculates (approximates) the
maximal robust positively invariant set when it is compact (non-compact), is presented.
This is achieved by computing a sequence of robust positively invariant sets. Moreover,
we discuss a number of useful a priori efficient tests and determination of upper bounds
relevant to the proposed algorithms.

2.1 Invariant Approximations of RPI sets for Linear Sys-
tems
We consider the following autonomous discrete-time, linear, time-invariant (DLTI) sys-
tem:
x+ = Ax + w, (2.1.1)

where x ∈ Rn is the current state, x+ is the successor state and w ∈ Rn is an unknown


disturbance. We make the standing assumption that A ∈ Rn×n is a strictly stable matrix
(all the eigenvalues of A are strictly inside the unit disk). The disturbance w is persistent,
but contained in a convex and compact set W ⊂ Rn , which contains the origin.
Remark 2.1 (Closed Loop DLTI System) The system in (2.1.1) represents the closed–
loop dynamics of the standard DLTI system x+ = F x + Gu + w (where the pair (F, G)
is controllable) when the control law is u = Kx so that x+ = Ax + w and A , F + GK.
The existence of RPI sets is important for the satisfaction of constraints. It is well-
known [Bla99] that the solution φ(·) of the system will satisfy φ(k; x, w(·)) ∈ X for all
time k ∈ N and all allowable disturbance sequences w(·) ∈ MW if and only if there exists
a RPI set Ω (for system (2.1.1) and constraint set (X, W)) and the initial state x is in Ω.

Remark 2.2 (Robust Positively Invariant Sets) Recalling Definition 1.24, Definition
1.28 and Definition 1.29 one has:

• A set Ω ⊆ Rn is a robust positively invariant (RPI) set for system x+ = Ax + w


and constraint set (X, W) if Ω ⊆ X and Ax + w ∈ Ω, ∀w ∈ W, ∀x ∈ Ω.

• A set O∞ ⊆ X is the maximal robust positively invariant (MRPI) set for system
x+ = Ax + w and constraint set (X, W) if and only if O∞ is a RPI set for system
x+ = Ax + w and constraint set (X, W) and O∞ contains all RPI sets contained in
X.

• A set F∞ ⊆ Rn is the minimal robust positively invariant (mRPI) set for system
x+ = Ax + w and constraint set (Rn , W) if and only if F∞ is a RPI set for system
x+ = Ax + w and constraint set (Rn , W) and F∞ is contained in all RPI sets
contained in Rn .

An important set in the analysis and synthesis of controllers for constrained systems
is the mRPI set F∞ . The properties of the mRPI set F∞ are well-known. It is possible
to show [KG98, Sect. IV] that the mRPI set F∞ exists, is unique, compact and contains
the origin. It is also easy to show that the zero initial condition response of (2.1.1) is
bounded in F∞ , i.e. φ(k; 0, w(·)) ∈ F∞ for all w(·) ∈ MW and all k ∈ N. It therefore
follows, from the linearity and asymptotic stability of system (2.1.1), that F∞ is the
limit set of all trajectories of (2.1.1). In particular, F∞ is the smallest closed set in Rn
that has the following property: given any r > 0 and ε > 0, there exists a k̄ ∈ N such

that if x ∈ Bnp (r), then the solution of (2.1.1) satisfies φ(k; x, w(·)) ∈ F∞ ⊕ Bnp (ε) for all
w(·) ∈ MW and all k ≥ k̄.
Another important set in the analysis and synthesis of controllers for constrained
systems is the maximal RPI set (MRPI set).
The properties of the MRPI set O∞ are well-known and the reader is referred
to [KG98] for a detailed study of this set. The MRPI set, if it is non-empty, is unique.
If X is compact and convex, then O∞ is also compact and convex.
One of the reasons for our interest in the mRPI set F∞ stems from the following
well-known fact, which relates the mRPI set F∞ to the MRPI set O∞ :
Proposition 2.1 (Existence of the MRPI set) [KG98] The following statements are
equivalent:

• The MRPI set O∞ is non-empty.

• F∞ ⊆ X.

• X ⊖ F∞ contains the origin.

Remark 2.3 (Conditions for existence of the MRPI set) A sufficient condition for check-
ing whether O∞ is non-empty is given in [KG98, Rem. 6.6], where the computation of
an inner approximation of X ⊖ F∞ is proposed; the approximation is then used to check
whether or not the origin lies in its interior. The results in this chapter can also be used to
compute an inner approximation of X ⊖ F∞ . The advantage of the results in this chapter
is that they allow one to specify an a priori level of accuracy for the approximation. As
a consequence, one can directly quantify the level of conservativeness in case the test for
non-emptiness of O∞ fails. This is not possible with the procedure proposed in [KG98,
Rem. 6.6].
The focus of next section is on the minimal robust positively invariant (mRPI) set
F∞ , also often referred to as the 0-reachable set [Gay91], i.e. the set of states that can
be reached from the origin under a bounded state disturbance. The mRPI set plays
an important role in the performance analysis and synthesis of controllers for uncertain
systems [Bla99, Sects. 6.4–6.5] and in computing and understanding the properties of the
maximal robustly positively invariant (MRPI) set [KG98].

2.2 Approximations of the minimal robust positively in-


variant set
We now turn our attention to methods for computing F∞ . If we were to define the
(convex and compact) set sequence {Fs } as
s−1
M
Fs , Ai W, s ∈ N+ , F0 , {0} (2.2.1)
i=0

then it is possible to show [KG98, Sect. IV] that Fs ⊆ F∞ and that Fs → F∞ as s → ∞,
i.e. for every ε > 0, there exists an s ∈ N such that F∞ ⊆ Fs ⊕ Bnp (ε). In fact the set
sequence {Fs } is a Cauchy sequence and since a family of compact sets (where each set is
a non–empty compact subset of Rn ) equipped with Hausdorff metric is a complete metric
space [Aub77, Chapter 4, Section 8], it follows that lims→∞ Fs exists and is unique.
Clearly, F∞ is then given by

F∞ = ⊕_{i=0}^{∞} Ai W. (2.2.2)

Since F∞ is a Minkowski sum of infinitely many terms, it is generally impossible


to obtain an explicit characterization of it. However, as noted in [Las93, Sect. 3.3]
and [KG98, Rem. 4.2], it is possible to show that if there exist an integer s ∈ N+ and
a scalar α ∈ [0, 1) such that As = αI, then F∞ = (1 − α)−1 ⊕_{i=0}^{s−1} Ai W. It therefore
follows [MS97b, Thm. 3] that if A is nilpotent with index s (As = 0), then F∞ =
⊕_{i=0}^{s−1} Ai W.
In this section, we relax the assumption that there exists an s ∈ N+ and a scalar
α ∈ [0, 1) such that As = αI. Since we can no longer compute F∞ exactly, we address
the problem of computing an RPI, outer approximation of the mRPI set F∞ .
Before proceeding, we make a clear distinction between the results reported in [HO03]
and this chapter. By applying the standard algorithm of [KG98], the authors of [HO03]
propose to compute the maximal robust positively invariant set (MRPI) contained in
(1 + ε)Fs (recall that 0 ∈ W ⇒ 0 ∈ Fs ⊆ (1 + ε)Fs ) , for a given ε > 0 and s ∈ N. This
set, if non-empty, is an RPI, outer approximation of the mRPI set F∞ . For a given ε > 0,
the algorithm is based on incrementing the integer s until the MRPI set contained in
(1 + ε)Fs is non-empty. This recursive calculation is necessary, since the authors clearly
state in [HO03, Rem. 6] that they do not have a criterion for the a priori determination
of the integer s such that the MRPI set contained in (1 + ε)Fs is non-empty.
In contrast to this method, we propose to compute an RPI, outer approximation of
the mRPI set F∞ by first computing a sufficiently large s, computing Fs and scaling
the latter by a suitable amount. The proposal in this chapter does not rely on the
computation of the MRPI sets. Thus, the method in this chapter is simpler and likely
to be more efficient than the recursive procedure reported in [HO03].

2.2.1 The origin is in the interior of W

We will first recall a relevant result established in [Kou02], this result allows one to com-
pute an RPI set that contains the mRPI set F∞ . This is achieved by scaling Fs by a
suitable amount. We will exploit this important result to address the problem of how to
compute an RPI, ε-outer approximation of the mRPI set F∞ . Before stating the result
we recall that the standing assumption is that the system transition matrix A is strictly
stable.
Theorem 2.1. (RPI, outer approximation of the mRPI set F∞ ) [Kou02] If 0 ∈

interior(W), then there exists a finite integer s ∈ N+ and a scalar α ∈ [0, 1) that satisfies

As W ⊆ αW. (2.2.3)

Furthermore, if (2.2.3) is satisfied, then

F (α, s) , (1 − α)−1 Fs (2.2.4)

is a convex, compact, RPI set for system (2.1.1) and constraint set (Rn , W). Furthermore,
0 ∈ interior(F (α, s)) and F∞ ⊆ F (α, s).
The proof of this result is given for a sake of completeness.

Proof: Existence of an s ∈ N+ and an α ∈ [0, 1) that satisfies (2.2.3) follows from the
fact that the origin is in the interior of W and that A is strictly stable.
Convexity and compactness of F (α, s) follows directly from the fact that Fs (and
hence F (α, s)) is the Minkowski sum of a finite set of convex and compact sets.
Let G(α, j, k) , (1 − α)−1 ⊕_{i=j}^{k} Ai W. It follows that

AG(α, 0, s − 1) ⊕ W = G(α, 1, s) ⊕ W (2.2.5a)


= (1 − α)−1 As W ⊕ G(α, 1, s − 1) ⊕ W (2.2.5b)
⊆ (1 − α)−1 αW ⊕ W ⊕ G(α, 1, s − 1) (2.2.5c)
= [(1 − α)−1 α + 1]W ⊕ G(α, 1, s − 1) (2.2.5d)
= (1 − α)−1 W ⊕ G(α, 1, s − 1) (2.2.5e)
= G(α, 0, s − 1). (2.2.5f)

In going from (2.2.5b) to (2.2.5c) we have used the fact that P ⊆ Q ⇒ P ⊕ R ⊆ Q ⊕ R


for arbitrary sets P ⊂ Rn , Q ⊂ Rn and R ⊂ Rn .
Since F (α, s) = G(α, 0, s − 1), it follows that AF (α, s) ⊕ W ⊆ F (α, s) holds, hence
F (α, s) is RPI for system (2.1.1) and constraint set (Rn , W). It follows trivially from the
definition of the mRPI set that F (α, s) contains F∞ . Note also that 0 ∈ interior(F∞ ) if
0 ∈ interior(W).

QeD.

Remark 2.4 (The set F (α, s) for constrained case) The set F (α, s) is RPI for sys-
tem (2.1.1) and constraint set (X, W) if and only if F (α, s) ⊆ X.
Note that
F (α0 , s) ⊂ F (α1 , s) ⇔ α0 < α1 . (2.2.6)

Furthermore, if A is not nilpotent, then

F (α, s0 ) ⊂ F (α, s1 ) ⇔ s0 < s1 . (2.2.7)

Clearly, based on these observations, one can obtain a better approximation of the
mRPI set F∞ , given an initial pair (α, s). Let

so (α) , inf { s ∈ N+ | As W ⊆ αW } , (2.2.8a)
αo (s) , inf {α ∈ [0, 1) | As W ⊆ αW } (2.2.8b)

be the smallest values of s and α such that (2.2.3) holds for a given α and s, respectively.

Remark 2.5 (Existence of so (α) & αo (s)) The infimum in (2.2.8a) exists for any choice
of α ∈ (0, 1); so (0) is finite if and only if A is nilpotent. Note that so (α) → ∞ as α ց 0
if and only if A is not nilpotent. The infimum in (2.2.8b) is also guaranteed to exist if s
is sufficiently large. Note that there exists a finite s such that αo (s) = 0 if and only if A
is nilpotent. However, if A is not nilpotent, then αo (s) ց 0 as s → ∞.
By a process of iteration one can use the above definitions and results to compute a
pair (α, s) such that F (α, s) is a sufficiently good RPI, outer approximation of F∞ .
Clearly, F (α, s), as defined above, is an RPI, outer approximation of the mRPI set
F∞ . However, the former could be a very poor approximation of the latter. We therefore
proceed to address the question as to whether, in the limit, F (α, s) tends to the true
mRPI set F∞ if we choose s sufficiently large and/or choose α sufficiently small.
Before proceeding we need the following:
Lemma 2.1 (Hausdorff Distance between Φ and (1−α)−1 Φ) If Φ is a convex and compact
set in Rn containing the origin and α ∈ [0, 1), then dpH (Φ, (1 − α)−1 Φ) ≤ α(1 − α)−1 M ,
where M , supz∈Φ |z|, and dpH (Φ, (1 − α)−1 Φ) → 0 as α ց 0.

Proof: Since α ∈ [0, 1) and 0 ∈ Φ, it follows that Φ ⊆ (1 − α)−1 Φ so that



dpH (Φ, (1 − α)−1 Φ) = sup_x { d(x, Φ) | x ∈ (1 − α)−1 Φ }
= sup_x { inf_{y∈Φ} |y − x| | x ∈ (1 − α)−1 Φ }
= sup_{z∈Φ} inf_{y∈Φ} |y − (1 − α)−1 z|
≤ sup_{z∈Φ} |z − (1 − α)−1 z|
= ((1 − α)−1 − 1)M = α(1 − α)−1 M,

where M , supz∈Φ |z|.


Hence, dpH (Φ, (1 − α)−1 Φ) ≤ α(1 − α)−1 M and therefore dpH (Φ, (1 − α)−1 Φ) → 0 as
α ց 0.

QeD.

We recall that {Fs } is Cauchy [KG98, Sect. IV] so that M∞ , lims→∞ supz∈Fs |z|
is finite. As Fs ⊆ F∞ , ∀s ∈ N we have that M (s) , supz∈Fs |z| ≤ M∞ is finite for all
s ∈ N. This fact and the above Lemma allows one to make a formal statement regarding

the limiting behavior of the approximation:
Theorem 2.2. (Limiting behavior of the RPI approximation) If 0 ∈ interior(W), then

(i) F (αo (s), s) → F∞ as s → ∞ and

(ii) F (α, so (α)) → F∞ as α ց 0.

Proof: (i) It follows from Lemma 2.1 that dpH (Fs , F (αo (s), s)) = dpH (Fs , (1 −
αo (s))−1 Fs ) ≤ αo (s)(1 − αo (s))−1 M (s), where M (s) ≤ M∞ < ∞ for all s ∈ N. Since
αo (s) ց 0 as s → ∞, we get that dpH (Fs , F (αo (s), s)) → 0 as s → ∞. However, since
F (αo (s), s) ⊇ F∞ ⊇ Fs for all s ∈ N and Fs → F∞ as s → ∞, we conclude that
F (αo (s), s) → F∞ as s → ∞.
(ii) It follows from Lemma 2.1 that dpH (Fso (α) , F (α, so (α))) = dpH (Fso (α) , (1 −
α)−1 Fso (α) ) ≤ α(1 − α)−1 M (so (α)), where M (so (α)) ≤ M∞ < ∞ for all α ∈ (0, 1),
hence dpH (Fso (α) , F (α, so (α))) → 0 as α ց 0. Note that so (α) → ∞ as α ց 0. Since
F (α, so (α)) ⊇ F∞ ⊇ Fso (α) for all α ∈ (0, 1) and Fso (α) → F∞ as α ց 0, we conclude
that F (α, so (α)) → F∞ as α ց 0.

QeD.

Remark 2.6 (Nilpotent Case) If A is nilpotent with index s̃ then αo (s) = 0 for all s ≥ s̃.
Since Fs̃ = F∞ it follows that F (αo (s), s) = F∞ for all s ≥ s̃, hence F (αo (s), s) → F∞
as s → ∞. A similar argument shows that F (α, so (α)) → F∞ as α ց 0, since Fs̃ = F∞
and α = 0 for a finite s̃ so that so (0) = s̃.
Clearly, the case when the origin is in the interior of W does not pose any problems
with regard to the existence of an α ∈ [0, 1) and a finite s ∈ N+ that satisfy (2.2.3),
provided one bears in mind whether or not A is nilpotent.
Theorem 2.1 provides a way for the computation of an RPI, outer approximation of
F∞ and Theorem 2.2 establishes the limiting behavior of this approximation. However,
for a given pair (α, s) that satisfies (2.2.3), it is not immediately obvious whether or not
F (α, s) is a good approximation of the mRPI set F∞ .
Given a pair (α, s) satisfying the conditions of Theorem 2.1, it can be shown (along
similar lines as in the proof of Theorem 2.3) that if

ε ≥ α(1 − α)−1 max_{x∈Fs} |x|p = α(1 − α)−1 min_γ { γ | Fs ⊆ Bnp (γ) } (2.2.9)

then F∞ ⊆ F (α, s) ⊆ F∞ ⊕ Bnp (ε). In other words, F (α, s) is an RPI, outer ε-


approximation of F∞ if ε satisfies (2.2.9).
Though this observation allows one to determine a posteriori whether or not F (α, s)
is a good approximation of F∞ , it is perhaps more useful to have a result that allows one
to determine a priori how large s and/or how small α needs to be in order for F (α, s)
to be a sufficiently accurate approximation of F∞ . The following result establishes that
this is possible:

Theorem 2.3. (RPI, ε outer approximation of the mRPI set F∞ ) If 0 ∈ interior(W),
then for all ε > 0, there exist an α ∈ [0, 1) and an associated integer s ∈ N+ such
that (2.2.3) and
α(1 − α)−1 Fs ⊆ Bnp (ε) (2.2.10)

hold. Furthermore, if (2.2.3) and (2.2.10) are satisfied, then F (α, s) is an RPI, outer
ε-approximation of the mRPI set F∞ (for system (2.1.1) and constraint set (Rn , W)).

Proof: First, note from the proof of Theorem 2.2 that M (s) ≤ M∞ ≤ M (α, s) where
M (α, s) , supz∈F (α,s) |z|p , we refer to the proof of Theorem 2.2 for the definition of M∞ .
Let ε > 0 and recall that 0 < M∞ < ∞ and Fs ⊆ F∞ for all s ∈ N. Since Fs and
F∞ are convex and contain the origin, it follows that α(1 − α)−1 Fs ⊆ α(1 − α)−1 F∞
for any s ∈ N and α ∈ [0, 1). Note that the inclusion α(1 − α)−1 F∞ ⊆ Bnp (ε) is true if
α(1 − α)−1 M∞ ≤ ε or, equivalently, if α ≤ ε(ε + M∞ )−1 . Hence, (2.2.10) is true for any
s ∈ N and α ∈ [0, ᾱ], where ᾱ , ε(ε + M∞ )−1 ∈ (0, 1). Clearly, (2.2.3) is also true if we
choose α ∈ (0, ᾱ] and s = so (α). This establishes the existence of a suitable couple (α, s)
such that (2.2.3) and (2.2.10) hold simultaneously.
Let (α, s) be such that (2.2.3) and (2.2.10) are true. Since F (α, s) = (1 − α)−1 Fs
is a convex and compact set that contains the origin, F (α, s) = (1 − α)−1 Fs = (1 +
α(1 − α)−1 )Fs = Fs ⊕ α(1 − α)−1 Fs . Since Fs ⊆ F∞ ⊆ F (α, s) ⊆ Fs ⊕ Bnp (ε) ⊆
F∞ ⊕ Bnp (ε), it follows that F (α, s) is an RPI, outer ε-approximation of the mRPI set
F∞ (for system (2.1.1) and constraint set (Rn , W)).

QeD.

Remark 2.7 (Equivalence of (2.2.9) and (2.2.10)) For computational purposes, it is


useful to note that (2.2.9) and (2.2.10) are equivalent. As will be discussed in Section 2.4,
if W is a polytope and p = ∞, then it is important to note that it is not necessary to
compute Fs in order to check whether (2.2.10) holds.
As with Theorem 2.1, it is straightforward to develop a conceptual algorithm based
on Theorem 2.3. Note that (2.2.3) provides a lower bound on α such that F (α, s) is
guaranteed to be RPI and contain F∞ . In addition, the conditions (2.2.9) and (2.2.10)
give an upper bound on α such that F (α, s) is guaranteed to be an outer ε-approximation
of F∞ . Hence, for a given ε > 0, one can compute an RPI, outer ε-approximation of the
mRPI set F∞ by incrementing s until there exists an α ∈ [0, 1) such that both (2.2.3)
and (2.2.10) hold. Once this pair (α, s) is found, one can compute the RPI, outer ε-
approximation F (α, s) via the Minkowski sum (2.2.1) and scaling (2.2.4). The reader is
referred to Algorithm 2 in Section 2.4 for more details.
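The sketch below renders this procedure for the particular case W = {w : |w|∞ ≤ wmax } with p = ∞, where A^s W ⊆ αW holds exactly when the induced ∞-norm of A^s is at most α, and the support of Fs along the coordinate directions is a running sum of row 1-norms, so neither (2.2.3) nor (2.2.10) requires Fs to be constructed explicitly (cf. Remark 2.7). It is an illustrative sketch, not the thesis' Algorithm 2; general polytopic W would require the LP-based support function evaluations discussed in the preliminaries.

```python
# A hedged sketch for the special case W = {w : |w|_inf <= w_max}, p = infinity.
# Then A^s W ⊆ alpha W  iff  ||A^s||_inf <= alpha, and the support of F_s along
# +/- e_j is a sum of row 1-norms of A^i, so F_s itself never has to be built.
import numpy as np

def rpi_outer_approx_parameters(A, w_max, eps, s_max=500):
    n = A.shape[0]
    A_pow = np.eye(n)                 # A^0
    f_support = np.zeros(n)           # h(F_s, e_j), accumulated term by term
    for s in range(1, s_max + 1):
        f_support += w_max * np.abs(A_pow).sum(axis=1)   # add h(A^{s-1} W, e_j)
        A_pow = A_pow @ A                                 # A^s
        alpha = np.abs(A_pow).sum(axis=1).max()           # smallest alpha with A^s W ⊆ alpha W
        if alpha < 1 and alpha / (1 - alpha) * f_support.max() <= eps:
            return alpha, s    # F(alpha, s) = (1-alpha)^{-1} F_s is an RPI eps-outer approx.
    raise RuntimeError("no admissible (alpha, s) found up to s_max")

# Example with a strictly stable A:
A = np.array([[0.5, 0.6], [0.0, 0.6]])
alpha, s = rpi_outer_approx_parameters(A, w_max=1.0, eps=1e-2)
```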

Remark 2.8 (Complexity of the description of the set F (α, s)) Note that a whole col-
lection of RPI, outer ε-approximations of the mRPI set F∞ can be computed and that

the complexity of the description of F (α, s) is highly dependent on the eigenstructure of
A and the description of W. However, for a given error bound ε, it is usually a good
idea to find the smallest value of the integer s for which there exists an α ∈ [0, 1) such
that (2.2.3) and (2.2.10) hold. This is because, for a given α, a lower value of s generally
results in a lower complexity for the description of F (α, s). In contrast, for a given s, the
value of α does not affect the complexity of F (α, s).
Up to now, we have made the assumption that the origin is in the interior of W;
this does not pose any problems with regards the existence of an α ∈ [0, 1) and a finite
s ∈ N+ that satisfy (2.2.3). However, we proceed to demonstrate that the results in this
section can be extended to the more general case when the interior of W is empty, but
the relative interior of W contains the origin.

2.2.2 The origin is in the relative interior of W

The results in the previous section can be extended to a more general case, when the
interior of W is empty, but the origin is in the relative interior of W.
Let the disturbance set now be given by

W , ED (2.2.11)

where the matrix E ∈ Rn×l and the set D ⊂ Rl is a convex, compact set containing the
origin in its interior. Clearly, W is convex and compact and the origin is in the relative
interior of W. However, if rank(E) < n, then the interior of W is empty.
We will now attempt to calculate an RPI, outer-approximation of the mRPI set F∞
under the above, relaxed assumption:
Theorem 2.4. (RPI, outer approximation of the mRPI set F∞ when the origin is in the
relative interior of W) Let 0 ∈ interior(D) and W , ED, with E ∈ Rn×l . There exist
positive integers p, r and s and a scalar α ∈ [0, 1) such that

As ED ⊆ αFp and Ar Fp ⊆ αFp . (2.2.12)

Furthermore, if (2.2.12) is satisfied, then


F (α, p, r, s) , Fs ⊕ α(1 − α)−1 ⊕_{i=0}^{r−1} Ai Fp (2.2.13)

is a convex, compact, RPI set for system (2.1.1) and constraint set (Rn , W), containing
F∞ .

Proof: It is obvious that there exist integers p and n̄ ≤ n such that for all j ≥ p,
 
rank([E AE . . . Aj−1 E]) = n̄. (2.2.14)

The set
C(A, E) , range([E AE . . . Ap−1 E]) (2.2.15)

is then an n̄-dimensional subspace of Rn spanned by n̄ linearly independent columns of the
matrix [E AE . . . Ap−1 E], which can be chosen arbitrarily. For any j ≥ p and any set of
vectors d0 , . . . , dj−2 , dj−1 ∈ Rl it follows that Edj−1 +AEdj−2 +· · ·+Aj−1 Ed0 ∈ C(A, E).
Clearly, this implies that

F∞ ⊆ C(A, E) and Fj ⊆ C(A, E), ∀j ≥ p. (2.2.16)

Moreover, Ai W = Ai ED ⊆ C(A, E) for all i ∈ N0 . The reader should also note that,
since (2.2.14) holds, F∞ and Fj , with j ≥ p, are bounded, n̄-dimensional sets.
By recalling (2.2.12) and the fact that P ⊆ Q ⇒ P ⊕ R ⊆ Q ⊕ R, it follows that
AF (α, p, r, s) ⊕ ED = A(Fs ⊕ α(1 − α)^{−1} ⊕_{i=0}^{r−1} A^i Fp) ⊕ ED (2.2.17a)
= (⊕_{i=1}^{s} A^i ED) ⊕ α(1 − α)^{−1} (⊕_{i=1}^{r} A^i Fp) ⊕ ED (2.2.17b)
= ED ⊕ (⊕_{i=1}^{s} A^i ED) ⊕ α(1 − α)^{−1} (⊕_{i=1}^{r} A^i Fp) (2.2.17c)
= Fs ⊕ A^s ED ⊕ α(1 − α)^{−1} (⊕_{i=1}^{r−1} A^i Fp) ⊕ α(1 − α)^{−1} A^r Fp (2.2.17d)
= Fs ⊕ A^s ED ⊕ α(1 − α)^{−1} A^r Fp ⊕ α(1 − α)^{−1} (⊕_{i=1}^{r−1} A^i Fp) (2.2.17e)
⊆ Fs ⊕ αFp ⊕ α^2 (1 − α)^{−1} Fp ⊕ α(1 − α)^{−1} (⊕_{i=1}^{r−1} A^i Fp) (2.2.17f)
= Fs ⊕ (α + α^2 (1 − α)^{−1}) Fp ⊕ α(1 − α)^{−1} (⊕_{i=1}^{r−1} A^i Fp) (2.2.17g)
= Fs ⊕ α(1 − α)^{−1} Fp ⊕ α(1 − α)^{−1} (⊕_{i=1}^{r−1} A^i Fp) (2.2.17h)
= Fs ⊕ α(1 − α)^{−1} ⊕_{i=0}^{r−1} A^i Fp. (2.2.17i)

Hence, AF (α, p, r, s) ⊕ ED ⊆ F (α, p, r, s) and the set F (α, p, r, s) is an RPI set for
system (2.1.1) and constraint set (Rn , W).
Convexity and compactness follows immediately from the properties of the Minkowski
sum. Since F (α, p, r, s) is closed and RPI, it follows immediately from the definition that
F∞ ⊆ F (α, p, r, s).

QeD.

Remark 2.9 (Theorem 2.4 – Special cases) If ED contains the origin in its interior, then by letting p = 1, we get that

As ED ⊆ αED and Ar ED ⊆ αED, (2.2.18)

which for s = r becomes condition (2.2.3). The set F (α, 1, r, s) = F (α, s) is then given
by (2.2.4) and the case when W contains the origin in its interior is recovered. Also, if
the couple (E, A) is observable then the set Fn is a full dimensional set so that further
simplification of the results is possible.

Remark 2.10 (Extensions of Theorem 2.4) We also note that Theorems 2.2 and 2.3
can be extended to this case with some additional and relatively simple but tedious
mathematical analysis.
In practice, one often assumes disturbances on each of the states, hence it is quite often
the case that the origin is indeed contained in the interior of W. Because of this and the
fact that testing the conditions in (2.2.12) is a lot more complicated than testing (2.2.3),
we will not consider the case when the interior of W is empty in any further detail.

2.2.3 Computing the reachable set of an RPI set

We also present an alternative way for computing a robust positively invariant ε-outer
approximation of the mRPI set F∞ . The second method is based on the computation of
the reachable set of an RPI set.
Remark 2.11 (Reachable set for autonomous linear discrete time system) Definition
1.31 yields the following definition of the N -step reachable set for system (2.1.1):

• Given the non-empty set Ω ⊂ Rn , the N -step reachable set, where N ∈ N+ , is defined as

ReachN (Ω) , {φ(N ; x, w(·)) | x ∈ Ω, w(·) ∈ MW } . (2.2.19)

The set of states reachable from Ω in 0 steps is defined as Reach0 (Ω) , Ω and the
set of states reachable in 1 step is defined as Reach(Ω) , Reach1 (Ω).

It is easy to show that for system (2.1.1) we have

ReachN (Ω) = A ReachN −1 (Ω) ⊕ W (2.2.20)

and
ReachN (Ω) = AN Ω ⊕ FN (2.2.21)

for all N ∈ N+ .
If Ω is closed, then ReachN (Ω) is also closed because the linear map of a closed set
is a closed set and the Minkowski sum of a finite number of closed sets is a closed set.
Similarly, ReachN (Ω) is bounded (compact) if Ω is bounded (compact).
Recalling that F∞ is the limit set of all trajectories of (2.1.1), it follows that
ReachN (Ω) → F∞ in the Hausdorff metric as N → ∞, i.e. dpH (ReachN (Ω), F∞ ) → 0
as N → ∞, for any non-empty set Ω. In particular:
Lemma 2.2 (ε-outer approximation of F∞ & ReachN (Ω)) If Ω is a compact set in Rn
and ε > 0, then there exists an integer N ∈ N such that

AN Ω ⊆ Bnp (ε). (2.2.22)

If (2.2.22) holds and F∞ ⊆ Ω, then ReachN (Ω) is a compact, ε-outer approximation of


F∞ .

Proof: Existence of an N ∈ N that satisfies (2.2.22) follows from the fact that Ω is
compact and that A is strictly stable. The proof is completed by recalling (2.2.21) and
the fact that FN ⊆ F∞ ⊆ Ω, hence F∞ ⊆ ReachN (Ω) for all N ∈ N+ .

QeD.

Lemma 2.3 (Decreasing sequence of closed, RPI sets) If Ω is a closed, RPI


set, then ReachN (Ω) is a closed, RPI set (for system (2.1.1) and constraint set
(X, W)) and ReachN +1 (Ω) ⊆ ReachN (Ω) for all N ∈ N. In other words,
{Ω, Reach1 (Ω), Reach2 (Ω), . . .} is a decreasing sequence of closed, RPI sets (for sys-
tem (2.1.1) and constraint set (X, W)).

Proof: The proof is by induction. Let ReachN (Ω) be a closed, RPI set (for sys-
tem (2.1.1) and constraint set (X, W)). This implies that

A ReachN (Ω) ⊕ W ⊆ ReachN (Ω). (2.2.23)

The fact that ReachN +1 (Ω) ⊆ ReachN (Ω) follows from (2.2.20).
Note also that

A ReachN +1 (Ω) ⊕ W = A(A ReachN (Ω) ⊕ W) ⊕ W (2.2.24a)


⊆ A ReachN (Ω) ⊕ W (2.2.24b)
= ReachN +1 (Ω). (2.2.24c)

This proves that ReachN +1 (Ω) is RPI (for system (2.1.1) and constraint set (X, W)).
The proof is completed by checking, in a similar fashion as above, that Reach1 (Ω) ⊆ Ω
and that Reach1 (Ω) is RPI (for system (2.1.1) and constraint set (X, W)).

QeD.

We can now state the main result of this section:


Theorem 2.5. (RPI, ε outer approximation of the mRPI set F∞ – II approach) If Ω
is a compact, RPI set (for system (2.1.1) and constraint set (X, W)), then there exists
an N ∈ N that satisfies (2.2.22), in which case ReachN (Ω) is a compact, RPI, ε-outer
approximation of F∞ (for system (2.1.1) and constraint set (X, W)).

Proof: The result follows from Lemmas 2.2 and 2.3 by recalling that F∞ is contained
in all closed, RPI sets (for system (2.1.1) and constraint set (X, W)).

QeD.

Corollary 2.1 (The mRPI set F∞ & ReachN (Ω) – Special Case) If Ω is a closed, RPI
set (for system (2.1.1) and constraint set (X, W)) such that F∞ ⊆ Ω and ReachN (Ω) =
ReachN +1 (Ω) for some N ∈ N, then F∞ = ReachN (Ω).

Proof: Suppose that ReachN (Ω) = ReachN +1 (Ω). It follows from (2.2.20) that
ReachN (Ω) = ReachN +k (Ω) for all k ∈ N. From (2.2.21) it follows that ReachN +k (Ω) =
Ak ReachN (Ω) ⊕ Fk so that ReachN +k (Ω) → F∞ as k → ∞, which proves the claim that
ReachN (Ω) = F∞ if ReachN (Ω) = ReachN +1 (Ω).

QeD.

Remark 2.12 (Initial RPI set Ω) Clearly, any RPI set can be used as an initial RPI set
for the set computations required by the second method. Such a set can be obtained using
the results in [Bla94]; an arbitrary F (α, s), or the O∞ obtained by replacing X with a
sufficiently large, compact subset of X, are also suitable candidates for Ω in Theorem 2.5.
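A minimal sketch of this second method is given below (an illustration, not the thesis' implementation). It assumes that Ω and W are polytopes given by their vertices, propagates ReachN (Ω) through the recursion (2.2.20) and searches for the smallest N satisfying (2.2.22); if Ω is a compact RPI set, the resulting ReachN (Ω) is then the approximation of Theorem 2.5.

```python
import numpy as np
from numpy.linalg import matrix_power
from scipy.spatial import ConvexHull

def minkowski_sum(V1, V2):
    S = (V1[:, None, :] + V2[None, :, :]).reshape(-1, V1.shape[1])
    return S[ConvexHull(S).vertices]

def reach(A, V_omega, V_W, N):
    """Vertices of Reach_N(Omega) via Reach_N(Omega) = A Reach_{N-1}(Omega) + W, cf. (2.2.20)."""
    V = V_omega
    for _ in range(N):
        V = minkowski_sum(V @ A.T, V_W)
    return V

def smallest_N(A, V_omega, eps, N_max=500):
    """Smallest N with A^N Omega contained in the infinity-norm ball of radius eps, cf. (2.2.22)."""
    for N in range(1, N_max + 1):
        if np.max(np.abs(V_omega @ matrix_power(A, N).T)) <= eps:
            return N
    raise ValueError("no N up to N_max satisfies (2.2.22)")
```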

2.3 The maximal robust positively invariant MRPI set


We now turn our attention to the maximal robust positively invariant MRPI set.

Remark 2.13 (Predecessor set for autonomous linear discrete time system) Recall-
ing Definition 1.30, it follows that the N -step predecessor set for system (2.1.1) is:

• Given the non-empty set Ω ⊂ Rn , the N -step predecessor set PreN (Ω), where
N ∈ N+ , is defined as

PreN (Ω) , {x ∈ X| φ(N ; x, w(·)) ∈ Ω, φ(k; x, w(·)) ∈ X,


∀k ∈ N[0,N −1] , ∀w(·) ∈ MW }. (2.3.1)

The predecessor set is defined as Pre(Ω) , Pre1 (Ω) and Pre0 (Ω) , Ω.

It follows immediately that

Pre(Ω) = {x ∈ X | Ax + w ∈ Ω, ∀w ∈ W } = {x ∈ X | Ax ∈ Ω ⊖ W } (2.3.2)

and
PreN (Ω) = Pre(PreN −1 (Ω)) (2.3.3)

for all N ∈ N+ .
It is well-known [Bla99, KG98] that the maximal robust positively invariant (MRPI) set
is the set of all initial states in X for which the evolution of the system remains in X, i.e.

O∞ = {x ∈ X | φ(k; x, w(·)) ∈ X, ∀k ∈ N+ , ∀w(·) ∈ MW } . (2.3.4)

Let Ot be the set of all initial states in X for which the evolution of the system remains
in X for t steps, i.e.

Ot , {x ∈ X | φ(k; x, w(·)) ∈ X, ∀k ∈ Nt , ∀w(·) ∈ MW } (2.3.5a)


= Pret (X). (2.3.5b)

Note that Ot ⊆ Ot−1 for all t ∈ N+ , i.e. {X, O1 , O2 , . . .} is a decreasing sequence of sets.

Remark 2.14 (Subset Observation) Given a non-empty set Ω in Rn , it follows imme-


diately from the definition of Ot that Ω ⊆ Ot if and only if Reachk (Ω) ⊆ X for all
k ∈ Nt .
It is well-known that O∞ can be calculated [Ber72, Aub91, Bla99, KG98] from the
recursion
O0 = X, Ot = Pre(Ot−1 ), ∀t ∈ N+ (2.3.6a)

and that the MRPI set is then given by



O∞ = ⋂_{t=0}^{∞} Ot . (2.3.6b)

Clearly, it is very difficult to calculate O∞ from (2.3.6b). However, if there exists a


finite index t ∈ N such that O∞ = Ot , then O∞ is said to be finitely determined, i.e. it
can be calculated in a finite number of steps.
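When X is a polyhedron in halfspace form and W is a polytope given by its vertices, the recursion (2.3.6a) can be implemented directly. The sketch below (an illustration only, not the thesis' implementation) builds Pre(·) as in (2.3.2) by tightening the constraints of the current set with the support function of W, and stops when Ot = Ot+1 ; the containment test uses one linear program per constraint, and no redundancy removal is performed, so the halfspace descriptions grow with t.

```python
import numpy as np
from scipy.optimize import linprog

def support(V_W, a):
    """Support function of the polytope W, given by its vertices, evaluated at a."""
    return float(np.max(V_W @ a))

def pre(G, g, A, V_W, H, h):
    """Halfspace form of Pre(Omega) = {x in X : Ax in Omega - W}, cf. (2.3.2)."""
    g_shrunk = g - np.array([support(V_W, row) for row in G])    # Pontryagin difference
    return np.vstack([G @ A, H]), np.concatenate([g_shrunk, h])

def is_subset(G_out, g_out, G_in, g_in):
    """True if {G_in x <= g_in} is contained in {G_out x <= g_out} (one LP per row of G_out)."""
    nvar = G_out.shape[1]
    for a, b in zip(G_out, g_out):
        res = linprog(-a, A_ub=G_in, b_ub=g_in, bounds=[(None, None)] * nvar)
        if (not res.success) or (-res.fun > b + 1e-9):
            return False
    return True

def mrpi(A, V_W, H, h, t_max=200):
    """Iterate O_t = Pre(O_{t-1}) of (2.3.6a), starting from O_0 = X, until O_t = O_{t+1}."""
    G, g = np.asarray(H, float), np.asarray(h, float)
    for _ in range(t_max):
        G_next, g_next = pre(G, g, A, V_W, H, h)
        if is_subset(G_next, g_next, G, g):   # O_t inside O_{t+1} implies O_t = O_{t+1} = O_inf
            return G, g
        G, g = G_next, g_next
    raise RuntimeError("O_inf not finitely determined within t_max iterations")
```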

2.3.1 On the determinedness index of O∞

A necessary and sufficient condition for the finite determination of O∞ is that Ot = Ot+1
holds for some finite t ∈ N. The smallest index t such that Ot = Ot+1 is called the
determinedness index, and will be denoted by t∗ . As shown in [KG98], O∞ is finitely
determined if there exists an ℓ ∈ N such that Oℓ is compact. We will present here a result
that allows one to compute an upper bound on the determinedness index t∗ of O∞ .
We present a number of results, which closely follow results reported in [KG98,
Kou02]. However, the emphasis here is different, because we are interested in com-
puting a priori whether or not O∞ is finitely determined and in computing an inner
robust positively invariant approximation of the MRPI set. The results stated in the following two
subsections allow one to do this.

Theorem 2.6. (Finite time determination conditions) Given any Oℓ , ℓ ∈ N, if t ∈ N


satisfies
Reacht+ℓ+1 (Oℓ ) ⊆ Oℓ (2.3.7)

then Ot+ℓ = Ot+ℓ+1 . If Oℓ is compact and F∞ ⊆ interior(Oℓ ), then there exists a finite t
such that (2.3.7) holds.
Alternatively, if Ω is any set such that F∞ ⊆ Ω ⊆ Oℓ and

At+ℓ+1 Oℓ ⊆ Oℓ ⊖ Ω, (2.3.8)

then Ot+ℓ = Ot+ℓ+1 . If Oℓ is compact and Ω ⊆ interior(Oℓ ), then there exists a finite t
such that (2.3.8) holds.
In other words, the determinedness index t∗ of the MRPI set O∞ is less than or equal
to t + ℓ if (2.3.7) or (2.3.8) holds.

Proof: Recalling that Ot+ℓ ⊆ Oℓ , it follows that

At+ℓ+1 Ot+ℓ ⊆ At+ℓ+1 Oℓ , (2.3.9)

hence

At+ℓ+1 Ot+ℓ ⊕ Ft+ℓ+1 ⊆ At+ℓ+1 Oℓ ⊕ Ft+ℓ+1 . (2.3.10)

From (2.2.21) it follows that

Reacht+ℓ+1 (Ot+ℓ ) ⊆ Reacht+ℓ+1 (Oℓ ). (2.3.11)

If (2.3.7) holds, then

Reacht+ℓ+1 (Ot+ℓ ) ⊆ Oℓ ⊆ X. (2.3.12)

Recalling Remark 2.14, this result implies that Ot+ℓ ⊆ Ot+ℓ+1 . However, since Ot+ℓ ⊇
Ot+ℓ+1 is always true, it follows that Ot+ℓ = Ot+ℓ+1 .
The existence of a finite t such that (2.3.7) holds follows from Lemma 2.2.
For the second part of the statement, recall that (P ⊖ Q) ⊕ Q ⊆ P for any two sets
P ⊂ Rn and Q ⊂ Rn . If (2.3.8) is satisfied, then

At+ℓ+1 Oℓ ⊕ Ft+ℓ+1 ⊆ (Oℓ ⊖ Ω) ⊕ Ft+ℓ+1 (2.3.13a)


⊆ (Oℓ ⊖ Ω) ⊕ Ω (2.3.13b)
⊆ Oℓ , (2.3.13c)

hence (2.3.12) is satisfied.


The existence of a finite t such that (2.3.8) holds follows from the first part of
Lemma 2.2. This is because 0 ∈ Ω and Ω ⊆ interior(Oℓ ), hence 0 ∈ interior(Oℓ ⊖ Ω).
This implies that there exists an ε > 0 such that Bnp (ε) ⊆ Oℓ ⊖ Ω.

QeD.

Corollary 2.2 (A Condition for Finite Time Determination) If t ∈ N satisfies

Reacht+1 (X) ⊆ X, (2.3.14)

then Ot = Ot+1 . If, in addition, X is compact and F∞ ⊆ interior(X), then there exists a
finite t such that (2.3.14) holds.
Alternatively, if Ω is any set such that F∞ ⊆ Ω ⊆ X and

At+1 X ⊆ X ⊖ Ω, (2.3.15)

then Ot = Ot+1 . If, in addition, X is compact and Ω ⊆ interior(X), then there exists a
finite t such that (2.3.15) holds.
In other words, the determinedness index t∗ of the MRPI set O∞ is less than or equal
to t if (2.3.14) or (2.3.15) holds.
The results in previous sections can be applied here. For example, let the conditions
in Theorem 2.1 hold. If F (α, s) ⊆ interior(Oℓ ), F (α, s) ⊆ interior(X) or F (α, s) ⊆ Ω,
then F∞ ⊆ interior(Oℓ ), F∞ ⊆ interior(X) or F∞ ⊆ Ω, respectively. Of course, one could
let Ω , F (α, s) if the first two conditions are satisfied.
In many cases, it is not possible to guarantee that the assumptions in this section
hold. It is then important to find an alternative way to compute an RPI approximation
of the set O∞ . This problem is addressed next.

2.3.2 Inner approximation of the MRPI set

We will consider the computation of the predecessor sets of an RPI set. Before proceed-
ing, recall the following result, which is a special case of a procedure suggested in [Ker00,
Sect. 3.2] for improving on an inner approximation of the MRPI set:
Proposition 2.2 (Increasing Sequence of RPI sets) If Ω is a RPI set (for system (2.1.1)
and constraint set (X, W)), then PreN (Ω) is a RPI (for system (2.1.1) and con-
straint set (X, W)) set and PreN +1 (Ω) ⊇ PreN (Ω) for all N ∈ N+ . In other words,
{Ω, Pre1 (Ω), Pre2 (Ω), . . .} is an increasing sequence of RPI sets (for system (2.1.1) and
constraint set (X, W)).
Remark 2.15 (PreN (Ω) & O∞ for a RPI set Ω) Clearly, if Ω is a RPI set (for sys-
tem (2.1.1) and constraint set (X, W)), then PreN (Ω) ⊆ O∞ for all N ∈ N+ .
For the sake of completeness, we also recall the following result, which is a special
case of [Bla94, Prop. 2.1]:
Proposition 2.3 (Dilatation of a RPI set) Let Ω be a convex, RPI set (for system (2.1.1)
and constraint set (X, W)) containing the origin. If the scalar µ ≥ 1, then µΩ is also a
convex, RPI set (for system (2.1.1) and constraint set (X, W)) containing the origin.
We now present the first main result of this subsection:
Theorem 2.7. (Increasing Sequence of RPI sets) If Ω is a convex RPI set containing
the origin (for system (2.1.1) and constraint set (X, W)) and

µo , sup {µ ∈ [1, ∞) | µΩ ⊆ X } , (2.3.16)

then {Ω, µo Ω, Pre1 (µo Ω), Pre2 (µo Ω), . . .} is an increasing sequence of RPI sets (for sys-
tem (2.1.1) and constraint set (X, W)).

Proof: The proof follows immediately from Propositions 2.2 and 2.3.

QeD.
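The scaling factor µo of (2.3.16) is itself easy to compute when X = {x | Hx ≤ h} contains the origin in its interior and Ω is given by its vertices; the few lines below are an illustrative sketch under exactly those assumptions.

```python
import numpy as np

def mu_o(V_omega, H, h, tol=1e-12):
    """mu_o = sup{mu >= 1 : mu*Omega inside X}, X = {x : Hx <= h}, Omega given by its vertices."""
    s = np.max(V_omega @ H.T, axis=0)                 # support of Omega along each row of H
    mu = float(np.min(np.where(s > tol, h / np.maximum(s, tol), np.inf)))
    return mu if mu >= 1.0 else None                  # None: Omega itself is not contained in X
```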

We recall the following definition:


Definition 2.1 (Maximal stabilizable set) Given any RPI set Ω (for system (2.1.1) and
constraint set (X, W)) that contains the mRPI set F∞ in its interior, we define the
maximal stabilizable set S∞ (Ω) as

S∞ (Ω) , ⋃_{M=0}^{∞} PreM (Ω). (2.3.17)

Clearly, since F∞ is the limit set of all trajectories of system (2.1.1), S∞ (Ω) is the set
of initial states in X such that, given any allowable disturbance sequence, the solution of
the system will be in X for all time, enter Ω in some finite time and remain in Ω thereafter,
while converging to F∞ . The proof of the second main result of this subsection follows
immediately from recognizing this fact and is therefore omitted:
Theorem 2.8. (Inner approximation of the MRPI set O∞ ) Let Ω be a RPI set (for
system (2.1.1) and constraint set (X, W)) containing F∞ in its interior.

(i) If there exists an M ∈ N+ such that PreM (Ω) = PreM +1 (Ω), then O∞ = S∞ (Ω) =
PreM (Ω).

(ii) If Oℓ is compact for some ℓ ∈ N, then there exists a finite M ∈ N+ such that
O∞ = S∞ (Ω) = PreM (Ω).

The results in previous sections can be applied in Theorems 2.7 and 2.8.
For example, let the conditions of Theorem 2.1 hold. If F (α, s) ⊆ X and
µo is defined as in (2.3.16) with Ω , F (α, s), then the sequence of sets
{F (α, s), µo F (α, s), Pre1 (µo F (α, s)), Pre2 (µo F (α, s)), . . .} is an increasing sequence of
RPI sets (for system (2.1.1) and constraint set (X, W)). Clearly, any set obtained using
the results in [Bla94] or the O∞ obtained by replacing X with a sufficiently large, compact
subset of X, are also suitable candidates for Ω in Theorems 2.7 and 2.8.

2.4 Efficient computations and a priori upper bounds


This section will present results that allow for the development of efficient
tests and computations of a priori upper bounds for the conditions presented
in (2.2.3), (2.2.10), (2.2.22), (2.3.7), (2.3.8), (2.3.14) and (2.3.15). Results will also
be given that allow for the efficient computation of so (α) and αo (s) in (2.2.8) and µo
in (2.3.16).
Note that if all the sets mentioned in this chapter are polyhedra or polytopes (bounded
polyhedra), then efficient computations are possible. Computations are also much simpler
if the sets contain the origin in their interiors. As such, we will assume throughout this

section that W, X and Ω, where appropriate, are polyhedra that contain the origin in their
interiors.
If X, Ω and W are polyhedra/polytopes, then the computation of the Minkowski
sum, Pontryagin difference, linear maps and inverses of linear maps can be done by using
standard software for manipulating polytopes. These packages therefore allow one to
compute, for example, Fs , F (α, s), ReachN (Ω), PreN (Ω), Ot , O∞ , etc.

2.4.1 Efficient Computation if W is a Polytope

However, often we are not interested in the explicit computation of these sets, but only in
whether the conditions presented in (2.2.3), (2.2.10), (2.2.22), (2.3.7), (2.3.8), (2.3.14)
and (2.3.15) are satisfied or whether F (α, s) ⊆ X, where X is any polyhedron. In partic-
ular, results are given that allow one to test whether or not Fs is contained in a given
polyhedron X without having to compute Fs explicitly. The methods for deriving the
results in this section are well-known in the set invariance literature and we therefore
omit detailed derivations. However, the interested reader is referred to [Bla99, KG98]
and [RKKM04a] for details.
We recall that the support function (See Definition 1.9 or [KG98]) of a set W ⊂ Rm ,
evaluated at a ∈ Rm , is defined as

h(W, a) , sup_{w∈W} aT w. (2.4.1)

Clearly, if W is a polytope (bounded and closed polyhedron), then h(W, a) is finite.


Furthermore, if W is described by a finite set of affine inequality constraints, then h(W, a)
can be computed by solving a linear program (LP). Testing whether (2.2.3) and (2.2.10)
hold can be implemented by evaluating the support function of W at a finite number
of points [KG98], or by solving a single Phase I LP [Bla99, Lem. 4.1]. The set Fs (and
hence F (α, s)) is easily computed using standard computational geometry software for
computing the Minkowski sum of polytopes, such as [Ver03] and [KGBM03].

Remark 2.16 (Value of the support function for the special case) It is important to note
that the computation of the value of the support function is trivial if W is the affine map
of a hypercube in the form W , {Ed + c | |d|∞ ≤ η }, where E ∈ Rn×n and c ∈ Rn . This
is because an LP is no longer necessary to compute h(W, a), since one can write down
an analytical expression for the value of the support function, i.e.

W , {Ed + c | |d|∞ ≤ η } =⇒ h(W, a) = η|E T a|1 + aT c. (2.4.2)

In order to be as general as possible, we will consider the case when W is in the form
W , {w ∈ Rn | fiT w ≤ gi , i ∈ I}, where fi ∈ Rn , gi ∈ R and I is a finite index set (if W
is given as in (2.4.2) and E is invertible, then it is a trivial matter of computing all the
fi and gi ).

Following a standard procedure [KG98] it is possible to show that

As W ⊆ αW ⇐⇒ h(W, (As )T fi ) ≤ αgi , ∀i ∈ I. (2.4.3)

This observation allows for the efficient checking of whether or not (2.2.3) is satisfied.
Hence, it also allows for the efficient computation of so (α) and αo (s). For example, recall
that W contains the origin in its interior if and only if gi > 0 for all i ∈ I. It then follows
that
αo (s) = max_{i∈I} h(W, (As )T fi )/gi . (2.4.4)

It is also possible to check whether the set Fs is contained in a given polyhedron X ,


{x ∈ Rn | cTj x ≤ dj , j ∈ J }, where cj ∈ Rn , dj ∈ R and J is a finite index set, without
having to compute Fs explicitly. As with (2.4.3), it is easy to show that [RKKM04a]:
Fs ⊆ X ⇐⇒ Σ_{i=0}^{s−1} h(W, (Ai )T cj ) ≤ dj , ∀j ∈ J . (2.4.5)

This observation allows one to determine whether F (α, s) (and hence F∞ ) is in a given
set X, i.e. F (α, s) ⊆ X ⇔ Fs ⊆ (1 − α)X ⇔ Σ_{i=0}^{s−1} h(W, (Ai )T cj ) ≤ (1 − α)dj , ∀j ∈ J .
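For W given in the form (2.4.2), the tests above amount to evaluating a handful of 1-norms. The following sketch (illustrative only, not code from the thesis) checks (2.4.5) and the scaled inclusion for F (α, s) without ever constructing Fs .

```python
import numpy as np
from numpy.linalg import matrix_power

def h_W(a, E, c_w, eta):
    """Support function of W = {E d + c_w : |d|_inf <= eta} at a, cf. (2.4.2)."""
    return eta * np.sum(np.abs(E.T @ a)) + float(c_w @ a)

def Fs_in_X(A, s, E, c_w, eta, C_X, d_X):
    """Test F_s inside X = {x : C_X x <= d_X} via (2.4.5), without building F_s."""
    return all(sum(h_W(matrix_power(A, i).T @ cj, E, c_w, eta) for i in range(s)) <= dj
               for cj, dj in zip(C_X, d_X))

def F_alpha_s_in_X(A, s, alpha, E, c_w, eta, C_X, d_X):
    """F(alpha, s) inside X  iff  F_s inside (1 - alpha) X."""
    return Fs_in_X(A, s, E, c_w, eta, C_X, (1.0 - alpha) * np.asarray(d_X))
```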
One can also use the support function to a priori compute an error bound on the
approximation F (α, s) if the ∞-norm is used to define the error bound, i.e. p = ∞
in (2.2.10). Proceeding in a similar fashion as above, it is possible to show that
M (s) , min_γ {γ | Fs ⊆ Bn∞ (γ) } = max_{j∈{1,...,n}} { Σ_{i=0}^{s−1} h(W, (Ai )T ej ), Σ_{i=0}^{s−1} h(W, −(Ai )T ej ) }, (2.4.6)
where ej is the j th standard basis vector in Rn . Note that if α ∈ (0, 1), then (2.2.10) is
equivalent to Fs ⊆ α−1 (1 − α)Bpn (ε). Hence, if p = ∞ in (2.2.10), then a straightforward
algebraic manipulation gives

α(1 − α)−1 Fs ⊆ Bn∞ (ε) ⇐⇒ α ≤ ε/(ε + M (s)). (2.4.7)

Clearly, (2.4.4) is an easily-computed lower bound and (2.4.7) is an easily-computed


upper bound on α such that F (α, s) is an RPI, outer ε-approximation of the mRPI set
F∞ . We are now in a position to put together a prototype algorithm for computing an
RPI, outer ε-approximation of F∞ if the ∞-norm is used to bound the error. These steps
are outlined in Algorithm 2 [RKKM04a].
Remark 2.17 (Comment on the Computations) In order to reduce computational effort,
note that in step 5 of Algorithm 2 it is not necessary to compute Σ_{i=0}^{s−2} h(W, (Ai )T ej )
and Σ_{i=0}^{s−2} h(W, −(Ai )T ej ) at each iteration. These sums would have been computed at
previous iterations. All that is needed is to update these sums by computing and adding
h(W, (A^{s−1})T ej ) and h(W, −(A^{s−1})T ej ), respectively.
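A compact sketch of the resulting procedure, specialized to the hypercube disturbance W = {w | |w|∞ ≤ η} (for which h(W, a) = η|a|1 and the rows fi of its halfspace description are ±ej with gi = η), is given below; it mirrors Algorithm 2 and reuses the running sums discussed above. It is an illustration, not code from the thesis.

```python
import numpy as np

def alpha_s_pair(A, eta, eps, s_max=500):
    """Find (alpha, s) such that F(alpha, s) = (1 - alpha)^(-1) F_s is an RPI,
    eps-outer approximation of F_inf, for W = {w : |w|_inf <= eta} (cf. Algorithm 2)."""
    n = A.shape[0]
    A_s = np.eye(n)                 # holds A^s
    sums = np.zeros(n)              # running sums eta*|(A^i)^T e_j|_1, i = 0, ..., s-1
    for s in range(1, s_max + 1):
        sums += eta * np.sum(np.abs(A_s), axis=1)      # adds the term for i = s - 1
        A_s = A_s @ A
        alpha = np.max(np.sum(np.abs(A_s), axis=1))    # alpha_o(s) for hypercube W, cf. (2.4.4)
        M_s = float(np.max(sums))                      # M(s), cf. (2.4.6)
        if alpha <= eps / (eps + M_s):
            return alpha, s
    raise RuntimeError("no suitable (alpha, s) found up to s_max")
```

Step 7 of Algorithm 2, the explicit Minkowski sum and scaling, can then be carried out with, for instance, a vertex-based routine such as the one sketched earlier in this chapter.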

2.4.2 A priori upper bounds if A is diagonalizable

Many of the conditions in the previous sections, such as (2.2.3), (2.2.22), (2.3.8), and
(2.3.15) have the specific form
Ai Π ⊆ Ψ. (2.4.8)

Algorithm 2 Computation of an RPI, outer ε-approximation of the mRPI set F∞
Require: A, W and ε > 0
Ensure: F (α, s) such that F∞ ⊆ F (α, s) ⊆ F∞ ⊕ Bn∞ (ε)
1: Choose any s ∈ N (ideally, set s = 0).
2: repeat
3: Increment s by one.
4: Compute αo (s) as in (2.4.4) and set α = αo (s).
5: Compute M (s) as in (2.4.6).
6: until α ≤ ε/(ε + M (s))
7: Compute Fs as the Minkowski sum (2.2.2) and scale it to give F (α, s) , (1 − α)−1 Fs .

This section shows how one can efficiently obtain a priori upper bounds on

io (A, Π, Ψ) , inf{ i ∈ N | Ai Π ⊆ Ψ }, (2.4.9)

which is the smallest i such that (2.4.8) holds.


We first present the following result:
Lemma 2.4 (Simple Subset Condition) Let Π and Ψ be two non-empty polytopes in Rn
containing the origin and let L ∈ Rn×n be a given matrix.
Let βin (Ψ) be the size of the largest hypercube in Ψ and βout (Π) be the size of the
smallest hypercube containing Π.

(i) If LBn∞ (βout (Π)) ⊆ Bn∞ (βin (Ψ)), then LΠ ⊆ Ψ.

(ii) If |L|∞ ≤ βin (Ψ)/βout (Π), then LΠ ⊆ Ψ.

Proof:

(i) Note that Π ⊆ Bn∞ (βout (Π)) so that LΠ ⊆ LBn∞ (βout (Π)).

Since Bn∞ (βin (Ψ)) ⊆ Ψ, if LBn∞ (βout (Π)) ⊆ Bn∞ (βin (Ψ)), then LBn∞ (βout (Π)) ⊆ Ψ.

Since LΠ ⊆ LBn∞ (βout (Π)), LΠ ⊆ Ψ as claimed.

(ii) Note that for any x ∈ LBn∞ (βout (Π)) we have |x|∞ ≤ |L|∞ βout (Π) so that
LBn∞ (βout (Π)) ⊆ {x | |x|∞ ≤ |L|∞ βout (Π) }.

If |L|∞ ≤ βin (Ψ)/βout (Π) it follows that LBn∞ (βout (Π)) ⊆ {x | |x|∞ ≤ βin (Ψ) } so
that LBn∞ (βout (Π)) ⊆ Bn∞ (βin (Ψ)), as claimed.

QeD.

The previous result turns out to be very useful in providing an upper bound on
io (A, Π, Ψ):
Proposition 2.4 (Simple Upper Bound) Let Π and Ψ be two non-empty polytopes in
Rn containing the origin.

Let βin (Ψ) be the size of the largest hypercube in Ψ and βout (Π) be the size of the
smallest hypercube containing Π.
Let A be diagonalizable with A = V ΛV −1 , where Λ is a diagonal matrix of the eigen-
values of A and the spectral radius ρ(A) ∈ (0, 1).
It follows that
  
io (A, Π, Ψ) ≤ ⌈ ln( βin (Ψ)/( βout (Π)|V |∞ |V −1 |∞ ) ) / ln ρ(A) ⌉ . (2.4.10)

Proof: From Lemma 2.4 it follows that (2.4.8) is satisfied if

|Ai |∞ ≤ βin (Ψ)/βout (Π). (2.4.11)

From the basic properties of operator norms it follows that

|Ai |∞ = |V Λi V −1 |∞ (2.4.12a)
≤ |V |∞ |Λi |∞ |V −1 |∞ (2.4.12b)
= |V |∞ ρ(A)i |V −1 |∞ (2.4.12c)

The proof is completed by combining (2.4.11) with (2.4.12) and solving for i.

QeD.

The above result shows that the upper bound on io (A, Π, Ψ) depends on the mag-
nitudes of the eigenvalues (in particular, the spectral radius) and the eigenvectors of
A.
Proposition 2.4 is particularly useful in obtaining upper bounds on the integer powers of
A appearing on the left hand side in (2.2.3), (2.2.22), (2.3.8) and (2.3.15). For example, an
upper bound on so (α) is easily obtained. By applying Proposition 2.4 with Π = W and
Ψ = αW, it follows that
  
so (α) ≤ ⌈ ln( αβin (W)/( βout (W)|V |∞ |V −1 |∞ ) ) / ln ρ(A) ⌉ . (2.4.13)

In order to save space, the details for upper bounds on the other conditions are not
given. It is hopefully clear how one could proceed.
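For completeness, the bound (2.4.10) can be evaluated in a few lines; note that the eigenvector matrix returned by a numerical eigendecomposition is only one admissible choice of V , so the value obtained is a valid upper bound but not a canonical one. The sketch below is illustrative.

```python
import numpy as np

def io_upper_bound(A, beta_in, beta_out):
    """Evaluate the right-hand side of (2.4.10) for diagonalizable A with rho(A) in (0, 1)."""
    eigvals, V = np.linalg.eig(A)
    rho = float(np.max(np.abs(eigvals)))
    assert 0.0 < rho < 1.0, "spectral radius must lie in (0, 1)"
    cond = np.linalg.norm(V, np.inf) * np.linalg.norm(np.linalg.inv(V), np.inf)
    return max(0, int(np.ceil(np.log(beta_in / (beta_out * cond)) / np.log(rho))))
```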

2.5 Illustrative Examples


In order to illustrate our results on invariant approximations of the minimal robust
positively invariant set (i.e. F (α, s)), we consider a double integrator:

x+ = [1 1; 0 1] x + [1; 1] u + w (2.5.1)

K             −[0.72 0.98]   −[0.56 1.24]   −[1.17 1.03]   −[0.02 0.28]
s∗                  4              7              4             50
α∗               0.0119         0.0304         0.0261         0.0463
ε(α∗ , s∗ )      0.0244         0.0805         0.0686         2.4049

Table 2.1: Data for 2nd order example

[Figure 2.5.1: Invariant Approximations of F∞ – Sets Fs and F (α∗ , s∗ ). Panel (a): the sets F (α∗i , s∗i ), including F (0.0119, 4) and F (9.3 · 10−10 , 17); panel (b): the sets Fs , s = 1, . . . , 10, together with F (1.9 · 10−5 , 10).]


with the additive disturbance W , {w ∈ R2 | |w|∞ ≤ 1} .
We apply four different state feedback control laws, reported in Table 2.1, and com-
pute the corresponding RPI sets F (α, s) for system (2.1.1) and constraint set (Rn , W) with:

A = [1 1; 0 1] + [1; 1] K .

The particular values of s∗ , so (α) and α∗ , αo (so (α)) are reported in Table 2.1.
Also, the corresponding values of ε(α∗ , s∗ ) , α∗ (1 − α∗ )−1 maxx∈Fs∗ |x|∞ are given in
Table 2.1. The initial value of α was chosen to be 0.05.
The invariant sets F (α∗ , s∗ ) for the third example are shown in Figure 2.5.1 (a) for two
couples (α∗ , s∗ ). The examples illustrate various approximations of the mRPI set F∞ . In
particular, Figure 2.5.1 (a) illustrates the difference between choosing ε = 10−8 a priori
and computing F (0.0119, 4). An RPI, outer ε-approximation (with ε = 10−8 ) is the set
F (9.3 · 10−10 , 17). The corresponding values of ε(α∗ , s∗ ) , α∗ (1 − α∗ )−1 supz∈Fs∗ |z|∞ are
ε(0.0119, 4) = 0.0686, and ε(9.3 · 10−10 , 17) = 2.4 · 10−9 . The sets Fs , for s = 1, 2, . . . , 10,
for the third example are shown in Figure 2.5.1 (b) together with the set F (1.9 · 10−5 , 10)
for which ε(1.9·10−5 , 10) = 5·10−5 ; it is clear that the sequence {Fs } is a monotonically
non-decreasing sequence that converges to F∞ and that F (1.9·10−5 , 10) is a sufficiently good
approximation of F∞ . The reachable sets for the third example are shown in Figure 2.5.2.
The initial invariant set Ω is the maximal robust positively invariant set contained in a
polytope X, i.e. Ω , O∞ , where

X , {x ∈ R2 | −10 ≤ x2 ≤ 10, −0.7506x1 −0.6608x2 ≤ 0.6415, 0.7506x1 +0.6608x2 ≤ 0.6415}.

[Figure 2.5.2: Reach sets of Ω , O∞ for the third example; the set Reach14 (Ω) is highlighted.]

It is possible to say that Reach14 (Ω) ≈ F∞ with an accuracy of 8 · 10−8 , in other


words Reach14 (Ω) is an ε-outer approximation of F∞ , where ε = 8 · 10−8 was computed
by using (2.2.22).

2.6 Summary
This chapter presented new insights regarding the robust positively invariant sets for
linear systems. It was shown how to compute invariant, outer approximations of the
minimal robustly positively invariant sets. An algorithm for the computation of the
maximal robustly invariant set or its approximation was also presented. This algorithm
improves on existing algorithms, since it involves the computation of a sequence of robust
positively invariant sets. Hence, the computational results are useful at any iteration of
the algorithm. Furthermore, a number of useful a-priori bounds and efficient tests were
given. The presented results support the robust control of linear discrete time systems
subject to constraints and additive, but bounded, disturbances.

Chapter 3

Optimized Robust Control


Invariance

Chance favours only the prepared mind.

– Louis Pasteur

In this chapter we introduce the concept of optimized robust control invariance for
a discrete-time, linear, time-invariant system subject to additive state disturbances. A
novel characterization of a family of the polytopic robust control invariant sets for system
x+ = Ax + Bu + w and constraint set (Rn , Rm , W) is given. The existence of a member of
this family that is a RCI set for system x+ = Ax + Bu + w and constraint set (X, U, W)
can be checked by solving a single linear programming problem. The solution of the same
linear programming problem yields the corresponding feedback controller.

3.1 Preliminaries
We consider the following discrete-time linear time-invariant (DLTI) system:

x+ = Ax + Bu + w, (3.1.1)

where x ∈ Rn is the current state, u ∈ Rm is the current control action, x+ is the successor
state, w ∈ Rn is an unknown disturbance and (A, B) ∈ Rn×n × Rn×m . The disturbance
w is persistent, but contained in a convex and compact set W ⊂ Rn that contains the
origin. We make the standing assumption that the couple (A, B) is controllable.
The system (3.1.1) is subject to the following set of hard state and control constraints:

(x, u) ∈ X × U (3.1.2)

where X ⊆ Rn and U ⊆ Rm are polyhedral and polytopic sets respectively and both
contain the origin as an interior point.

Remark 3.1 (Robust Control Invariant Sets) Recalling Definition 1.23 one has:

• A set Ω ⊂ X is a robust control invariant (RCI) set for system (3.1.1) and constraint
set (X, U, W) if for all x ∈ Ω there exists a u ∈ U such that Ax + Bu + w ∈ Ω for
all w ∈ W.

Most of the previous research (see for instance Chapter 2 or [KG98] and references
therein) considered the case u = ν(x) = Kx and the corresponding autonomous DLTI
system:
x+ = AK x + w, AK , (A + BK),

where AK ∈ Rn×n and all the eigenvalues of AK are strictly inside the unit disk. Given
any K ∈ Rm×n let XK , {x | x ∈ X, Kx ∈ U} ⊂ Rn . An RPI set for system
x+ = AK x + w and constraint set (XK , W) exists if and only if the mRPI set F^K_∞ satisfies
F^K_∞ ⊆ XK , where F^K_∞ or F (α, s) can be obtained by the methods of Chapter 2, more
precisely by (2.2.2) and (2.2.4) respectively. The condition F^K_∞ ⊆ XK is not necessarily
satisfied for an arbitrarily selected stabilizing feedback controller K. Moreover, there does
not exist an efficient design procedure for determining a stabilizing feedback controller
K such that the set inclusion F^K_∞ ⊆ XK is guaranteed to hold a priori.
In contrast to the existing methods, which fail to account directly for the geometry of
state and control constraints, we provide a method for checking existence of an RCI set
for system (3.1.1) and constraint set (X, U, W) (a member of a novel family of RCI sets)
as well as the computation of the corresponding controller via an optimization procedure.

3.2 Robust Control Invariance Issue


First, we characterize a family of the polytopic RCI sets for the system (3.1.1) in the
unconstrained case, i.e. for system x+ = Ax + Bu + w and constraint set (Rn , Rm , W).
Let Mi ∈ Rm×n , i ∈ N and for each k ∈ N let Mk , (M0 , M1 , . . . , Mk−2 , Mk−1 ).
An appropriate characterization of a family of RCI sets for (3.1.1) and constraint set
(Rn , Rm , W) is given by the following sets for k ≥ n:
Rk (Mk ) , ⊕_{i=0}^{k−1} Di (Mk )W (3.2.1)

where the matrices Di (Mk ), i ∈ Nk , k ≥ n are defined by:


D0 (Mk ) , I, Di (Mk ) , A^i + Σ_{j=0}^{i−1} A^{i−1−j} BMj , i ≥ 1 (3.2.2)

provided that Mk satisfies:


Dk (Mk ) = 0 (3.2.3)

Since the couple (A, B) is assumed to be controllable, such a choice exists for all k ≥ n.
Let Mk denote the set of all matrices Mk satisfying condition (3.2.3):

Mk , {Mk | Dk (Mk ) = 0} (3.2.4)

Remark 3.2 (Condition Dk (Mk ) = 0) The condition (3.2.3) enables us to synthesize a
controller that rejects completely the ‘current’ disturbance effect in k time steps. This
condition plays a crucial role in the proof of the next result. It is important to observe that
this condition can be relaxed as shown in the sequel of this chapter.
We can state the following relevant result:
Theorem 3.1. (Characterization of a novel family of RCI sets) Given any Mk ∈
Mk , k ≥ n and the corresponding set Rk (Mk ) there exists a control law ν : Rk (Mk ) →
Rm such that Ax + Bν(x) ⊕ W ⊆ Rk (Mk ), ∀x ∈ Rk (Mk ), i.e. the set Rk (Mk ) is RCI
for system (3.1.1) and constraint set (Rn , Rm , W).

Proof: Fix k ≥ n and let Mk ∈ Mk . Let x be an arbitrary element of Rk (Mk ). Since


x ∈ Rk (Mk ) it follows by definition of the set Rk (Mk ):

x = Dk−1 (Mk )w0 + Dk−2 (Mk )w1 + . . . + D1 (Mk )wk−2 + D0 (Mk )wk−1
= (Ak−1 + Ak−2 BM0 + . . . + BMk−2 )w0 + (Ak−2 + Ak−3 BM0 + . . . + BMk−3 )w1
+ . . . + (A + BM0 )wk−2 + wk−1 (3.2.5)

for some wi ∈ W, i ∈ Nk−1 . For each x ∈ Rk (Mk ) let

W(x) , {w | w ∈ Wk , Dw = x} (3.2.6)

where w , {w0 , . . . , wk−1 }, Wk , W × W × . . . × W and D = [Dk−1 (Mk ) . . . D0 (Mk )].


For each x ∈ Rk (Mk ) let w0 (x) be the unique solution of the following quadratic program:

Pw (x) : w0 (x) = arg min_w {|w|2 | w ∈ W(x)} (3.2.7)

Hence, w0 (x) = {w_0^0 (x), w_1^0 (x), . . . , w_{k−1}^0 (x)} and since x ∈ Rk (Mk ) it follows that:

x = Dk−1 (Mk )w_0^0 (x) + Dk−2 (Mk )w_1^0 (x) + · · · + D1 (Mk )w_{k−2}^0 (x) + D0 (Mk )w_{k−1}^0 (x)
= (A^{k−1} + A^{k−2} BM0 + · · · + BMk−2 )w_0^0 (x) + (A^{k−2} + A^{k−3} BM0 + · · · + BMk−3 )w_1^0 (x)
+ · · · + (A + BM0 )w_{k−2}^0 (x) + w_{k−1}^0 (x) (3.2.8)

Let the feedback control law ν : Rk (Mk ) → Rm be defined by:

ν(x) , Mk−1 w_0^0 (x) + Mk−2 w_1^0 (x) + · · · + M1 w_{k−2}^0 (x) + M0 w_{k−1}^0 (x) = Mk w0 (x) (3.2.9)

Hence for all x ∈ Rk (Mk ) and any arbitrary w ∈ W:

x+ = Ax + Bν(x) + w
= (A^k + A^{k−1} BM0 + · · · + ABMk−2 )w_0^0 (x) + (A^{k−1} + A^{k−2} BM0 + · · · + ABMk−3 )w_1^0 (x)
+ · · · + (A^2 + ABM0 )w_{k−2}^0 (x) + Aw_{k−1}^0 (x)
+ BMk−1 w_0^0 (x) + · · · + BM1 w_{k−2}^0 (x) + BM0 w_{k−1}^0 (x) + w
= (A^k + A^{k−1} BM0 + · · · + BMk−1 )w_0^0 (x) + (A^{k−1} + A^{k−2} BM0 + · · · + BMk−2 )w_1^0 (x)
+ · · · + (A + BM0 )w_{k−1}^0 (x) + w (3.2.10)

Hence

x+ = Dk (Mk )w_0^0 (x) + Dk−1 (Mk )w_1^0 (x) + · · · + D2 (Mk )w_{k−2}^0 (x) + D1 (Mk )w_{k−1}^0 (x) + D0 (Mk )w (3.2.11)

where each w_i^0 (x) ∈ W, i ∈ Nk−1 by construction. Since Mk ∈ Mk it follows that
Dk (Mk )w_0^0 (x) = 0, because A^k + A^{k−1} BM0 + · · · + BMk−1 = 0 by (3.2.3), so that:

x+ = Dk−1 (Mk )w_1^0 (x) + · · · + D2 (Mk )w_{k−2}^0 (x) + D1 (Mk )w_{k−1}^0 (x) + D0 (Mk )w (3.2.12)

Hence x+ = Ax + Bν(x) + w ∈ Rk (Mk ) for all w ∈ W. It follows that Ax + Bν(x) ⊕ W ⊆


Rk (Mk ) for all x ∈ Rk (Mk ) with ν(x) defined by (3.2.7) and (3.2.9).

QeD.

Remark 3.3 (Note on the existence of the dead beat control law µ : Rk (Mk ) → Rm )
It can be easily seen from the proof of Theorem 3.1 that any state x ∈ Rk (Mk ) can be
steered to the origin in no more than k time steps if no disturbances were acting on the
system.
An interesting observation is that the feedback control law ν(·) satisfying the condi-
tions of Theorem 3.1 is in fact set valued as we remark next.
Remark 3.4 (Note on the selection of the feedback control law ν(·)) If we define:

U(x) , Mk W(x) (3.2.13)

It follows that the feedback control law ν : Rk (Mk ) → Rm is in fact set valued, i.e. ν(·)
is any control law satisfying:
ν(x) ∈ U(x) (3.2.14)

In other words the feedback control law can be chosen to be:

ν(x) = Mk w(x), w(x) ∈ W(x) (3.2.15)

However, an appropriate selection is given by (3.2.7) and (3.2.9).


The function w0 (·) is piecewise affine, being the solution of a parametric quadratic
programme; it follows that the feedback control law ν : Rk (Mk ) → Rm is piecewise
affine (being a linear map of a piecewise affine function).
Remark 3.5 (Explicit Form of the feedback control law ν(·)) We note that since the
problem Pw (x) defined in (3.2.7) is a quadratic program, it can be solved by employ-
ing parametric mathematical programming techniques (See Chapter 11). The function
w0 (·) is piecewise affine:

w0 (x) = Li x + li , x ∈ Ri , i ∈ N^+_J

where J is a finite integer and the sets Ri , i ∈ N^+_J have mutually disjoint interiors and
their union covers the set Rk (Mk ), i.e. Rk (Mk ) = ⋃_{i∈N^+_J} Ri . It follows that the feedback
control law ν(·) is:

ν(x) = Mk Li x + Mk li , x ∈ Ri , i ∈ N^+_J

Theorem 3.1 states that for any k ≥ n the RCI set Rk (Mk ), finitely determined by
k, is easily computed if W is a polytope (being a Minkowski sum of a finite number
of polytopes). The set Rk (Mk ) and the feedback control law ν(·) are parametrized by
the matrix Mk ; this allows us to formulate an LP that yields the set Rk (Mk ) while
minimizing an appropriate norm, for instance any polytopic (Minkowski) norm, of the
set Rk (Mk ).
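Since D0 (Mk ) = I and Di+1 (Mk ) = ADi (Mk ) + BMi (a direct consequence of (3.2.2)), both the matrices Di (Mk ) and the vertices of Rk (Mk ) are straightforward to compute for a given Mk when W is described by its vertices. The sketch below is illustrative and is not taken from the thesis.

```python
import numpy as np
from scipy.spatial import ConvexHull

def D_matrices(A, B, M_list):
    """D_0, ..., D_k for M_k = (M_0, ..., M_{k-1}) via D_{i+1} = A D_i + B M_i, cf. (3.2.2)."""
    D = [np.eye(A.shape[0])]
    for M in M_list:
        D.append(A @ D[-1] + B @ M)
    return D

def Rk_vertices(A, B, M_list, V_W):
    """Vertices of R_k(M_k) = D_0 W + ... + D_{k-1} W, cf. (3.2.1); requires D_k(M_k) = 0."""
    D = D_matrices(A, B, M_list)
    assert np.allclose(D[-1], 0.0), "M_k must satisfy condition (3.2.3)"
    V = V_W @ D[0].T
    for D_i in D[1:-1]:
        S = (V[:, None, :] + (V_W @ D_i.T)[None, :, :]).reshape(-1, V.shape[1])
        V = S[ConvexHull(S).vertices]
    return V
```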

3.2.1 Optimized Robust Control Invariance

We provide a full exposition for the case when:

W , {Ed + f | |d|∞ ≤ η} (3.2.16)

where d ∈ Rt , E ∈ Rn×t and f ∈ Rn . We are interested in the computation of a RCI set


Rk (Mk ) for system (3.1.1) and constraint set (Rn , Rm , W) contained in a ‘minimal’ p-norm ball, i.e. we
wish to find Rk0 = Rk (M0k ) where:

(M0k , α0 ) = arg min_{Mk ,α} {α | Rk (Mk ) ⊆ Bnp (α), α > 0} (3.2.17)

We show that our problem can be posed as an LP if p = 1, ∞ by considering a more


general problem:
Pk : (M0k , α0 ) = arg min_{Mk ,α} {α | (Mk , α) ∈ Ω} (3.2.18)

where
Ω , {(Mk , α) | Mk ∈ Mk , Rk (Mk ) ⊆ P (α), α > 0}, (3.2.19)

P (α) , {x | Cp x ≤ αcp }, α > 0 with Cp ∈ Rq×n and cp ∈ Rq and P (1) is a polytope


that contains the origin in its interior. Before proceeding we recall a relevant preliminary
result established easily from Proposition 1.2 and Proposition 1.3 in Chapter 1 (see also
[RKKM04b]):

Proposition 3.1 (Row – wise maximum) Let matrices A ∈ Rn×n , C ∈ Rq×n , D ∈ Rn×p
and M ∈ Rp×n and let w ∈ W where W = {w = Ed + f | |d|∞ ≤ η} and E ∈ Rn×t and
f ∈ Rn . Then

max_{w∈W} C(A + DM )w = η abs(C(A + DM )E)1_t + C(A + DM )f (3.2.20)

where the maximization is taken row-wise. Moreover, there exists a matrix L ∈ Rq×t
such that:
−L ≤ C(A + DM )E ≤ L (3.2.21)

where the inequality is element-wise. The solution of (3.2.20) is:

max_{w∈W} C(A + DM )w = ηL1_t + C(A + DM )f (3.2.22)

The set inclusion Rk (Mk ) ⊆ P (α), by Proposition 1.1, is true if and only if:

max_{x∈Rk (Mk )} Cp x ≤ αcp , (3.2.23)

where the maximization is taken row-wise.


Let in the sequel of this chapter, with some abuse of notation, Di , Di (Mk ). It
follows from Proposition 1.3 and Proposition 3.1 that there exist a set of matrices
Λk , {L0 , L1 , . . . , Lk−1 } and Li ∈ Rq×t , i ∈ Nk−1 such that:
k−1
X
max Cp x = (ηLi 1t + Cp Di f ) (3.2.24)
x∈Rk (Mk )
i=0

where each Li satisfies:


−Li ≤ Cp Di E ≤ Li , i ∈ Nk−1 (3.2.25)

Since each Di = Di (Mk ) is affine in Mk it follows by the basic properties of the


Kronecker product (in particular vec(ABC) = (C ′ ⊗ A)vec(B)) that the set inclusion
Rk (Mk ) ⊆ P (α) can be expressed as a set of linear inequalities in (vec(Mk ), vec(Λk ), α).
The condition Mk ∈ Mk is a set of linear equalities in (vec(Mk ), vec(Λk ), α). Since the
cost (of Pk ) is a linear function of (vec(Mk ), vec(Λk ), α) we can state the following:
Proposition 3.2 (Pk is an LP) The minimization problem Pk defined in (3.2.18) is a
linear programming problem.

Remark 3.6 (The RCI set Rk (Mk ) in a ‘minimal’ polytopic norm ball – LP formulation)
An LP formulation of the problem Pk is:

Pk : min_γ {α | γ ∈ Γ} (3.2.26)

where γ , (vec(Mk ), vec(Λk ), α) and:

Γ , {γ | Mk ∈ Mk , Σ_{i=0}^{k−1} (ηLi 1_t + Cp Di f ) ≤ αcp , −Li ≤ Cp Di E ≤ Li , i ∈ Nk−1 , α > 0} (3.2.27)

3.2.2 Optimized Robust Control Invariance Under Constraints

Since the set Rk (Mk ) and the feedback control law ν(·) are parametrized by the matrix
Mk we illustrate that, in the constrained case, one can formulate an LP whose feasibility
establishes existence of a RCI set Rk (Mk ) for system (3.1.1) and constraint set (X, U, W)
(i.e. x ∈ X, ν(x) ∈ U and Ax + Bν(x) ⊕ W ⊆ Rk (Mk ) for all x ∈ Rk (Mk )). The control
law ν(x) satisfies ν(x) ∈ U (Mk ) for all x ∈ Rk (Mk ) where:
U (Mk ) , ⊕_{i=0}^{k−1} Mi W (3.2.28)

The state and control constraints (3.1.2) are satisfied if:

Rk (Mk ) ⊆ αX, U (Mk ) ⊆ βU (3.2.29)

where αX , {x | Cx x ≤ αcx }, βU , {u | Cu u ≤ βcu }, (with Cx ∈ Rqx ×n , cx ∈ Rqx ,


Cu ∈ Rqu ×m , cu ∈ Rqu ) and (α, β) ∈ [0, 1] × [0, 1].
Let now:

Ω̄ , {(Mk , α, β, δ) | Mk ∈ Mk , Rk (Mk ) ⊆ αX, U (Mk ) ⊆ βU,


(α, β) ∈ [0, 1] × [0, 1], qα α + qβ β ≤ δ} (3.2.30)

where Rk (Mk ) is given by (3.2.1) and U (Mk ) by (3.2.28).


Consider the following minimization problem:

P̄k : (M0k , α0 , β 0 , δ 0 ) = arg min_{Mk ,α,β,δ} {δ | (Mk , α, β, δ) ∈ Ω̄} (3.2.31)

It easily follows, from the discussion above, that:


Proposition 3.3 (P̄k is an LP) The minimization problem P̄k is a linear programming
problem.

Remark 3.7 (The RCI set Rk (Mk ) for system (3.1.1) and constraint set (X, U, W) –
LP formulation) The problem P̄k is an LP:

P̄k : min_γ {δ | γ ∈ Γ̄} (3.2.32)

where γ , (vec(Mk ), vec(Λk ), vec(Θk ), α, β, δ) and:

Γ̄ , {γ | Mk ∈ Mk , Σ_{i=0}^{k−1} (ηLi 1_t + Cx Di f ) ≤ αcx , −Li ≤ Cx Di E ≤ Li , i ∈ Nk−1 ,
Σ_{i=0}^{k−1} (ηTi 1_t + Cu Si Mk f ) ≤ βcu , −Ti ≤ Cu Si Mk E ≤ Ti , i ∈ Nk−1 ,
(α, β) ∈ [0, 1] × [0, 1], qα α + qβ β ≤ δ} (3.2.33)

where Θk , {T0 , T1 , . . . , Tk−1 } (each Ti ∈ Rqu ×t ) and Si is a selection matrix of the form
Si = [0 0 . . . I . . . 0 0]. The design variables qα and qβ are weights reflecting a desired
contraction of state and control constraints.
The solution to problem P̄k (which exists if Ω̄ ≠ ∅) yields a set Rk0 , Rk (M0k ) and a
feedback control law ν 0 (x) = M0k w0 (x) satisfying:

Rk0 ⊆ α0 X, ν 0 (x) ∈ U (M0k ) ⊆ β 0 U, ∀x ∈ Rk0 (3.2.34)
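The LP P̄k can be transcribed almost verbatim into a modelling language. The sketch below (an illustration, not the thesis' implementation) uses cvxpy and eliminates the auxiliary matrices Λk and Θk of Remark 3.7 by writing the row-wise bounds of Proposition 3.1 directly with absolute values; the function name and interface are assumptions made for this example.

```python
import numpy as np
import cvxpy as cp

def optimized_rci_lp(A, B, k, E, f, eta, Cx, cx, Cu, cu, q_alpha=1.0, q_beta=1.0):
    """Sketch of P_bar_k in (3.2.31) for W = {E d + f : |d|_inf <= eta},
    X = {x : Cx x <= cx} and U = {u : Cu u <= cu}."""
    n, m = A.shape[0], B.shape[1]
    M = [cp.Variable((m, n)) for _ in range(k)]          # M_0, ..., M_{k-1}
    alpha, beta = cp.Variable(nonneg=True), cp.Variable(nonneg=True)

    # D_0 = I, D_{i+1} = A D_i + B M_i: affine in the decision variables, cf. (3.2.2)
    D = [np.eye(n)]
    for i in range(k):
        D.append(A @ D[-1] + B @ M[i])

    # row-wise support of D_i W and M_i W, cf. Proposition 3.1
    x_sum = sum(eta * cp.sum(cp.abs(Cx @ D[i] @ E), axis=1) + Cx @ D[i] @ f for i in range(k))
    u_sum = sum(eta * cp.sum(cp.abs(Cu @ M[i] @ E), axis=1) + Cu @ M[i] @ f for i in range(k))

    constraints = [D[-1] == 0,                # D_k(M_k) = 0, i.e. M_k in M_k, cf. (3.2.3)
                   x_sum <= alpha * cx,       # R_k(M_k) inside alpha * X
                   u_sum <= beta * cu,        # U(M_k) inside beta * U
                   alpha <= 1, beta <= 1]
    prob = cp.Problem(cp.Minimize(q_alpha * alpha + q_beta * beta), constraints)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None                           # no RCI set of this parametrized form was found
    return [Mi.value for Mi in M], float(alpha.value), float(beta.value)
```

For the example of Section 3.3.1 one would pass the A and B of (3.3.8), E = I2 , f = 0, η = 1 and the halfspace descriptions of X and U in (3.3.10).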

Remark 3.8 (The set Rk0 is a RPI set for system x+ = Ax + Bν 0 (x) + w and (Xν 0 , W))

It follows from Theorem 3.1, Definition 1.24 and the discussion above that the set Rk0 ,
if it exists, is a RPI set for system x+ = Ax + Bν 0 (x) + w and constraint set (Xν 0 , W),
where Xν 0 , α0 X ∩ {x | ν 0 (x) ∈ β 0 U}.

Remark 3.9 (Non–uniqueness of the set Rk0 ) Generally, there might exist more than
one set Rk0 = Rk (M0k ) that yields the optimal cost δ 0 of P̄k . The cost function can be
modified. For instance, an appropriate choice is a positively weighted quadratic norm of
the decision variable γ that yields a unique solution, since in this case the problem becomes
a quadratic programming problem of the form minγ {|γ|2Q | γ ∈ Γ̄}, where Q is positive
definite and it represents the suitable weight.
An important observation is:
Proposition 3.4 (The sets Rk0 as k increases) Suppose that the problem P̄k is feasible
for some k ∈ N and the optimal value of δk is δk0 , then for every integer s ≥ k the problem
P̄s is also feasible and the corresponding optimal value of δs satisfies δs0 ≤ δk0 .

Proof: Let, with some abuse of notation,

(M0k , αk0 , βk0 , δk0 ) , arg min_{Mk ,αk ,βk ,δk} {δk | (Mk , αk , βk , δk ) ∈ Ω̄k }

where Ω̄k is defined in (3.2.30). Define M∗k+1 , {M0k , 0}. It follows from (3.2.1)
and (3.2.28) that (M∗k+1 , αk0 , βk0 , δk0 ) ∈ Ω̄k+1 . Hence, δk+1
0 ≤ δk0 . The proof is completed
by induction.

QeD.

Remark 3.10 (Computational comment on k and the set Rk0 ) The relevant consequence
of Proposition 3.4 is the fact that problem P̄k can, in principle, be solved for sufficiently
large k ∈ N in order to check whether there exists a RCI set Rk (Mk ) for system (3.1.1)
and constraint set (X, U, W).

3.2.3 Relaxing Condition Mk ∈ Mk

If the origin is an interior point of W, the condition (3.2.3) can be replaced by the
following condition:
Mk ∈ M̄k , {Mk | Dk (Mk )W ⊆ ϕW} (3.2.35)

for ϕ ∈ [0, 1) and k ≥ n. A family of the sets R(ϕ,k) (Mk ) defined by:

R(ϕ,k) (Mk ) , (1 − ϕ)−1 Rk (Mk ) (3.2.36)

for couples (ϕ, k) such that (3.2.35) is true, is a family of the polytopic RCI sets:
Theorem 3.2. (Characterization of a novel family of RCI sets – II) Given any couple
(ϕ, Mk ) ∈ [0, 1)× M̄k , k ≥ n and the corresponding set R(ϕ,k) (Mk ), there exists a control

law ν : R(ϕ,k) (Mk ) → Rm such that Ax + Bν(x) ⊕ W ⊆ R(ϕ,k) (Mk ), ∀x ∈ R(ϕ,k) (Mk ),
i.e. the set R(ϕ,k) (Mk ) is RCI for system (3.1.1) and constraint set (Rn , Rm , W).
Proof of this result follows the arguments of the proof of Theorem 3.1, with a set of
minor modifications.

Remark 3.11 (The sets Rk (Mk ) and R(ϕ,k) (Mk )) It is clear that Rk (Mk ) = R(0,k) (Mk ),
however if ϕ 6= 0 the condition (3.2.35) requires that 0 ∈ interior(W).
Without going into too much detail we remark that the discussion following Theorem
3.1, given in Sections 3.2.1 and 3.2.2, can be repeated for this case. This discussion is a
relatively simple extension and is omitted here. Instead of detailed discussion we only
provide a formulation of the resulting optimization problem:

P(ϕ,k) : (M0k , α0 , β 0 , ϕ0 , δ 0 ) = arg min_{Mk ,α,β,ϕ,δ} {δ | (Mk , α, β, ϕ, δ) ∈ Ω(ϕ,k) } (3.2.37)

where the constraint set Ω(ϕ,k) is defined by:

Ω(ϕ,k) , {(Mk , α, β, ϕ, δ) | Mk ∈ M̄k , Rk (Mk ) ⊆ αX, U (Mk ) ⊆ βU,


(α, β, ϕ) ∈ [0, 1] × [0, 1] × [0, 1],
α + ϕ ≤ 1, β + ϕ ≤ 1, qα α + qβ β + qϕ ϕ ≤ δ} (3.2.38)

with Rk (Mk ) defined by (3.2.1), U (Mk ) by (3.2.28) and as before the design variables qα ,
qβ and qϕ are weights reflecting a desired contraction of state, control and disturbance
constraints. Note that δ (a suitable variable for minimization) occurs in the last line of
the definition of the constraint set Ω(ϕ,k) , which is specified by (3.2.38).

3.3 Comparison with Existing Methods


In order to demonstrate the advantages of our method over existing methods, some of
which are given in Chapter 2, we proceed as follows. Let

K , {K ∈ Rm×n | | λmax (A + BK)| < 1} (3.3.1)

where λmax (A) denotes the largest eigenvalue of the matrix A. For each K ∈ K let:
K
F(ζ K ,sK )
, (1 − ζK )−1 FsKK (3.3.2)

where F^K_{sK} is defined by (2.2.1), so that:

F^K_{sK} = ⊕_{i=0}^{sK −1} (A + BK)^i W (3.3.3)

Thus F^K_{(ζK ,sK )} is an RPI, ε-outer approximation (for an a priori specified and arbitrarily small ε > 0)
of the minimal robust positively invariant set F^K_∞ for system x+ =
(A+BK)x+w and constraint set (Rn , W). We remark that the couple (ζK , sK ) ∈ [0, 1)×N
is such that the following set inclusions hold:

(A + BK)^{sK} W ⊆ ζK W, ζK (1 − ζK )−1 F^K_{sK} ⊆ Bnp (ε) (3.3.4)

Let:

K , {K ∈ K | F^K_{(ζK ,sK )} ⊆ αX, KF^K_{(ζK ,sK )} ⊆ βU, (α, β) ∈ (0, 1) × (0, 1)} (3.3.5)

where KF^K_{(ζK ,sK )} , {Kx | x ∈ F^K_{(ζK ,sK )} }.
where KF(ζ K
, {Kx | x ∈ F(ζ }.
K ,sK ) K ,sK )

Before stating our next main result, we need the following simple observation.
Given any s ∈ N let

Ks = [K ′ (K(A + BK))′ . . . (K(A + BK)s−2 )′ (K(A + BK)s−1 )′ ]′ (3.3.6)

and note that for any integer k ≤ s:

(A + BK)k = Ak + [Ak−1 B Ak−2 B . . . AB B 0 . . . 0]Ks (3.3.7)

We can now state the following result:


Proposition 3.5 (Comparison comment – I) Let K ∈ K, where K is defined in (3.3.5),
and the couple (ζK , sK ) ∈ [0, 1) × N satisfies (3.3.4) for an arbitrarily small ε > 0. Then
(KsK , αK , βK , ζK , qα αK + qβ βK + qϕ ζK ) ∈ Ω(ζK ,sK ) , where Ω(ζK ,sK ) is defined in (3.2.38).
The proof of this result follows from a straightforward verification that
(KsK , αK , βK , ζK , qα αK + qβ βK + qϕ ζK ) ∈ Ω(ζK ,sK ) .

Remark 3.12 (Comparison comment – II) Proposition 3.5 implies that for any K ∈ K
the minimization problem P(ϕ,sK ) defined in (3.2.37) yields a δ 0 that is smaller than or
equal to the value of qα αK + qβ βK + qϕ ζK .
In view of the previous remark we conclude that our method does at least as well as
existing methods. However, recalling the fact that the feedback control law of Theorem
3.1 is a piecewise affine function, it is easy to conclude that our method improves upon
existing methods.

3.3.1 Comparison – Illustrative Example

In order to illustrate our results we consider the second order system:

x+ = [1 1; 0 1] x + [1; 1] u + w (3.3.8)

with additive disturbance:



W , {w ∈ R2 | |w|∞ ≤ 1} . (3.3.9)

The following set of hard state and control constraints is required to be satisfied:

X = {x | − 3 ≤ x1 ≤ 1.85, −3 ≤ x2 ≤ 3, x1 + x2 ≥ −2.2}, U = {u | |u| ≤ 2.4} (3.3.10)

where xi is the ith coordinate of a vector x. The state constraint set X is shown in
Figures 3.3.1 – 3.3.3 as a dark shaded set.

In the first attempt we obtain the closed loop dynamics by applying three different
state feedback control laws to the second order double integrator example (3.3.8):

K1 = −[0.72 0.98], K2 = −[0.96 1.24], K3 = −[1 1] (3.3.11)

and compute the corresponding sets F^{Ki}_{(ζKi ,sKi )} . The invariant sets F^{Ki}_{(ζKi ,sKi )} computed by
using the methods of [Kou02, RKKM03, RKKM04a] are shown in Figure 3.3.1.

[Figure 3.3.1: Invariant Approximations of F^{Ki}_∞ – Sets F^{Ki}_{(ζKi ,sKi )}, i = 1, 2, 3, plotted against the state constraint set X; panels (a), (b) and (c) correspond to the controllers K1, K2 and K3 respectively.]

All of the computed sets violate the state constraints as illustrated in Figure 3.3.1. We
also report that for these state feedback controllers the corresponding control polytopes
are:

U (K1 ) = {u | |u| ≤ 2.4680}, U (K2 ) = {u | |u| ≤ 6.4578}, U (K3 ) = {u | |u| ≤ 3} (3.3.12)

where U (K) , KD(ζK ,sK ) (K), so that the control constraints are also violated.
By solving the optimization problem P̄k defined in (3.2.31) we computed the robust
control invariant sets R_{ki}(M^0_{ki} ), i = 1, 2, 3; they are shown in Figure 3.3.2.

[Figure 3.3.2: Invariant Sets R_{ki}(M^0_{ki} ), i = 1, 2, 3, plotted against the state constraint set X; panels (a), (b) and (c) correspond to the controllers M^0_{k1}, M^0_{k2} and M^0_{k3} respectively.]

The optimization problem P̄k was posed with the following design parameters:

(k, qα , qβ )1 = (5, 1, 1), (k, qα , qβ )2 = (5, 0, 1), (k, qα , qβ )3 = (5, 1, 0) (3.3.13)

The optimization problem P̄k yielded the following matrices M^0_{ki} , i = 1, 2, 3:

M^0_{k1} = [−0.5 −1; 0.2378 0; 0.1139 0; 0.0590 0; 0.0894 0],
M^0_{k2} = [−0.4875 −1; 0.2199 0; 0.1154 0; 0.0596 0; 0.0926 0],
M^0_{k3} = [−0.5038 −1; 0.2456 0; 0.1132 0; 0.0521 0; 0.0930 0] (3.3.14)
and the corresponding control polytopes are:

U (M^0_{k1} ) = {u | |u| ≤ 2}, U (M^0_{k2} ) = {u | |u| ≤ 1.9750}, U (M^0_{k3} ) = {u | |u| ≤ 1.9750} (3.3.15)
All the sets constructed from the solution of the optimization problem P̄k satisfy the state
and control constraints, as can be seen from Figure 3.3.2 and (3.3.15). Note that if k
were increased, there is a possibility that better results would be obtained.
To make the comparison in this simple example as fair as possible, we also consider the
following three state feedback control laws constructed from the first rows of the optimized
matrices M^0_{ki} :

K4 = −[0.5 1], K5 = −[0.4875 1], K6 = −[0.5038 1] (3.3.16)

The corresponding sets F^{Ki}_{(ζKi ,sKi )} are shown in Figure 3.3.3. The corresponding control
polytopes are:

U (K4 ) = {u | |u| ≤ 2}, U (K5 ) = {u | |u| ≤ 1.975}, U (K6 ) = {u | |u| ≤ 2.076} (3.3.17)

so that the control constraints are satisfied, but unfortunately all of the computed sets
violate the state constraints.
[Figure 3.3.3: Invariant Approximations of F^{Ki}_∞ – Sets F^{Ki}_{(ζKi ,sKi )}, i = 4, 5, 6, plotted against the state constraint set X; panels (a), (b) and (c) correspond to the controllers K4, K5 and K6 respectively.]

This simple example and Proposition 3.5 indicate the clear superiority of our method when
the system is subject to state and control constraints. The crucial advantages of our method
lie in the facts that: (i) hard state and control constraints are incorporated directly into
the optimization problem and (ii) the feedback control law ν : Rk (Mk ) → U is a piecewise
affine function of x ∈ Rk (Mk ).

3.4 Conclusions and Summary
The results of this chapter can be used in the design of robust reference governors,
predictive controllers and time-optimal controllers for constrained, linear discrete time
systems subject to additive, but bounded disturbances. This fact will be illustrated
in the subsequent chapters. An obvious extension of the reported results allows for the
design of dead-beat controllers for constrained linear discrete time systems (disturbance
free case). A set of simple and rather straightforward modifications of the presented
procedures allows for the construction of set induced Lyapunov functions.
The main contribution of this chapter is a novel characterization of a family of the
polytopic RCI sets for which the corresponding control law is non-linear (piecewise affine)
enabling better results to be obtained compared with existing methods where the control
law is linear. Construction of a member of this family contained in the minimal p norm
ball or reference polytopic set can be obtained from the solution of an appropriately
specified LP. The optimized robust control invariance algorithms were illustrated by an
example, in which superiority over existing methods was illustrated.
The procedure presented here has been extended to enable the computation of a
polytopic RCI set that contains the maximal p norm ball or reference polytopic set. This
extension allows for finite time computation of a robust control invariant approximation of
the maximal robust control invariant set. It can be shown that the resulting optimization
problem is the minimization of an upper bound on the Hausdorff distance between the robust
control invariant approximation Rk (Mk ) and the maximal robust control invariant set.
The results can be extended to the case when the disturbance belongs to an arbitrary
polytope. Moreover, it is also possible to extend the results to the case when the system
dynamics are parametrically uncertain. These relevant extensions will be presented
elsewhere.

Chapter 4

Abstract Set Invariance – Set


Robust Control Invariance

[Truth is] the offspring of silence and unbroken meditation.

– Sir Isaac Newton

In this chapter we introduce the concept of set robust control invariance. The family
of set robust control invariant sets is characterized and the most important members of
this family, the minimal and the maximal set robust control invariant sets, are identified.
This concept generalizes the standard concept of robust control invariance for trajectories
starting from a single state belonging to a set of states to trajectories of a sequence of
sets of states starting from a set of states. Set robust control invariance for discrete
time linear time invariant systems is studied in detail and the connection between the
standard concepts of set invariance (control invariance and robust control invariance)
and set robust control invariance is established.
The main motivation for this generalization lies in the fact that when uncertainty is
present in the controlled system we are forced to consider a tube of trajectories instead of
a single isolated trajectory. A tube trajectory resulting from the uncertainty is a sequence
of a set of states indicating a need for generalization of standard and well established
concepts in the set invariance theory.

4.1 Preliminaries
We consider the following discrete-time, time-invariant system:

x+ = f (x, u, w) (4.1.1)

where x ∈ Rn is the current state, u ∈ Rm is the current control input and x+ is the
successor state; the bounded disturbance w is known only to the extent that it belongs to

the compact (i.e. closed and bounded) set W ⊂ Rp . The function f : Rn ×Rm ×Rp → Rn
is assumed to be continuous.
The system is subject to hard state and input constraints:

(x, u) ∈ X × U (4.1.2)

where X and U are closed and compact sets respectively, each containing the origin in its
interior.
To motivate the introduction of set robust control invariance we briefly recall some basic
properties of the RCI sets. A relevant property of a given robust control invariant set
Ω for system (4.1.1) and constraint set (X, U, W) is that, for any state x ∈ Ω ⊆ X,
there exists a control input u ∈ U such that {f (x, u, w) | w ∈ W} ⊆ Ω. This important
property is illustrated in Figure 4.1.1, where it is also shown that a suitable choice of a
control u ∈ U has to be made.
[Figure 4.1.1: Graphical Illustration of the Robust Control Invariance Property: for x ∈ Ω ⊆ X, a control u1 ∈ U with {f (x, u1 , w) | w ∈ W} ⊆ Ω exists, while another choice u2 ∈ U gives {f (x, u2 , w) | w ∈ W} ⊄ Ω.]

If a given set Ω is a RCI set for system (4.1.1) and constraint set (X, U, W) then
there exists a control law ν : Ω → U such that the set Ω is a RPI set for system
x+ = f (x, ν(x), w) and constraint set (Xν , W) where Xν , X ∩ {x | ν(x) ∈ U}. Given an
initial state x ∈ Ω, a set of possible state trajectories due to the uncertainty is precisely
described by the following set recursion:

Xi (x) , {f (y, ν(y), w) | y ∈ Xi−1 (x), w ∈ W}, i ∈ N+ , X0 (x) , {x} (4.1.3)

The set sequence {Xi (x)} is the exact ‘tube’ containing all the possible state trajec-
tory realizations due to the uncertainty and it contains the actual state trajectory cor-
responding to a particular uncertainty realization. The sets Xi (x), i ∈ N, x ∈ Ω satisfy
that Xi (x) ⊆ Ω, ∀i ∈ N because x ∈ Ω and Ω is a RPI set for system x+ = f (x, ν(x), w)
and constraint set (Xν , W) where Xν , X ∩ {x | ν(x) ∈ U}. It is important, therefore,
to observe that the shapes of the sets Xi (x), i ∈ N change with time i and that they are, in general, complex geometrical objects (depending on the properties of f (·) and the geometry of the constraint set (X, U, W)). Thus, the uncertainty generates a whole set of possible trajectories despite the fact that the initial state is a singleton. This is one of the main reasons for generalizing the standard and well established concepts of set invariance. We demonstrate in the sequel of this chapter that, for certain classes of discrete time systems, it is possible to generate a tube (a sequence of sets of states) with a fixed cross–section that outer–bounds the true tube, has a relatively simple shape and possesses certain robust control invariance properties.
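To make the growth of the exact tube concrete, the following sketch (an illustration only, assuming a linear closed loop x+ = (A + BK)x + w and a disturbance polytope given by its vertices; the numerical data are placeholders) propagates the recursion (4.1.3) by enumerating disturbance vertices. The number of generating points grows geometrically with i, which is precisely why a fixed cross–section outer–bounding tube is attractive.

import numpy as np
from itertools import product

# Placeholder data: a stable closed loop A + BK and the vertices of a box disturbance set W.
A_cl = np.array([[0.9, 0.2], [0.0, 0.8]])
W_vertices = [np.array(v) for v in product([-0.1, 0.1], repeat=2)]

def exact_tube(x0, steps):
    """Propagate the set recursion X_i(x) = {A_cl y + w : y in X_{i-1}(x), w in W}.

    For a linear map and a polytopic W it suffices to propagate vertices; the
    returned lists are (possibly redundant) generating points of the exact cross-sections."""
    cross_sections = [[np.asarray(x0, dtype=float)]]
    for _ in range(steps):
        previous = cross_sections[-1]
        cross_sections.append([A_cl @ y + w for y in previous for w in W_vertices])
    return cross_sections

tube = exact_tube(x0=[1.0, -1.0], steps=4)
for i, xs in enumerate(tube):
    print(f"step {i}: {len(xs)} generating points")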
First, we introduce the concept of set robust control invariance:
Definition 4.1 (Set Robust Control Invariant Set) A set of sets Φ is set robust control
invariant (SRCI) for system x+ = f (x, u, w) and constraint set (X, U, W) if for any set
X ∈ Φ: (i) X ⊆ X and, (ii) there exists a (single) set Y ∈ Φ such that for all x ∈ X,
there exists a u ∈ U such that f (x, u, W) ⊆ Y .
This concept is graphically illustrated in Figure 4.1.2 for the case when f (·) is linear (f (x, u, w) = Ax + Bu + w) and the set of sets Φ has the specific form Φ , {z ⊕ R | z ∈ Z}, i.e. Φ consists of sets of the form z ⊕ R, z ∈ Z, where R is a given set.
[Figure 4.1.2: Graphical Illustration of Definition 4.1. For X ∈ Φ , {z ⊕ R | z ∈ Z} with X ⊆ X, every x ∈ X admits a u ∈ U such that {Ax + Bu + w | w ∈ W} ⊆ Y for a single Y ∈ Φ; every member X of Φ satisfies X ⊂ ΦX , Z ⊕ R.]

By inspection of Definition 4.1 and Definition 1.23 it is clear that by letting each
X ∈ Φ to be a single state, i.e. X = {x} where x ∈ Ω, in Definition 4.1 we obtain
the standard concept of a robust control invariant set for system x+ = f (x, u, w) and
constraint set (X, U, W).
To clarify this concept, we are interested in characterizing a family (or a set) of sets
Φ such that for any member X ⊆ X of the family Φ and any state x ∈ X, there exists an
admissible control u ∈ U such that f (x, u, W ) = {f (x, u, w) | w ∈ W} is a subset of some
set Y and Y is itself a member of Φ. Note that, for a given set X ∈ Φ, we require that
we are able to find a single set Y with the properties discussed above. Before proceeding
to characterize a set of sets Φ for the case when system considered is linear we provide
the definition of set robust positive invariance:
Definition 4.2 (Set Robust Positively Invariant Set) A set of sets Φ is set robust
positively invariant (SRPI) for system x+ = g(x, w) and constraint set (X, W) if any set

X ∈ Φ satisfies: (i) X ⊆ X and, (ii) g(X, W) ⊆ Y for some Y ∈ Φ, where g(X, W) =
{g(x, w) | x ∈ X, w ∈ W}.

4.2 Set Robust Control Invariance for Linear Systems


We study in more detail the relevant case when f (x, u, w) = Ax + Bu + w and the couple
(A, B) ∈ Rn×n × Rn×m is assumed to be controllable. Hence, we consider the system:

x+ = Ax + Bu + w (4.2.1)

subject to hard constraints:


(x, u, w) ∈ X × U × W (4.2.2)

The sets U and W are compact, the set X is closed; each contains the origin as an interior
point. We also define the corresponding nominal system:

z + = Az + Bv, (4.2.3)

where z ∈ Rn is the current state, v ∈ Rm is the current control action and z + is the successor
state of the nominal system.
We will consider a set of sets characterized as follows:

Φ , {z ⊕ R | z ∈ Z} (4.2.4)

where R ⊂ Rn and Z ⊂ Rn are sets. We are interested in characterizing all those Φ that
are set robust control invariant.
Remark 4.1 (Set Robust Control Invariance for Linear Systems) Definition 4.1 yields
the following:

• A set of sets Φ is set robust control invariant (SRCI) for system x+ = Ax + Bu + w


and constraint set (X, U, W) if any set X ∈ Φ satisfies: (i) X ⊆ X and, (ii) there
exists a (single) set Y ∈ Φ such that for all x ∈ X, there exists a u ∈ U such that
Ax + Bu ⊕ W ⊆ Y .

If the set of sets Φ is characterized by (4.2.4) it follows that the sets X and Y in
Definition 4.1 (and the previous remark) should have the following form X = z1 ⊕ R and
Y = z2 ⊕ R with z1 , z2 ∈ Z.
We assume that:
A1(i): The set R is a RCI set for system (4.2.1) and constraint set (αX, βU, W) where
(α, β) ∈ [0, 1) × [0, 1)
A1(ii): The control law ν : R → βU is such that R is RPI for system x+ = Ax +
Bν(x) + w and constraint set (Xν , W), where Xν , αX ∩ {x | ν(x) ∈ βU}.
The control law ν(·) in A1(ii) exists by A1(i).
Let Uν be defined by:
Uν , {ν(x) | x ∈ R}. (4.2.5)

and let
Z , X ⊖ R, V , U ⊖ Uν (4.2.6)

We also assume:
A2(i): The set Z is a CI set for the nominal system (4.2.3) and constraint set (Z, V),
A2(ii): The control law ϕ : Z → V is such that Z is PI for system z + = Az + Bϕ(z)
and constraint set Zϕ , where Zϕ , Z ∩ {z | ϕ(z) ∈ V}.
Existence of the control law ϕ(·) in A2(ii) is guaranteed by A2(i).
We can now establish the following relevant result:
Theorem 4.1. (Characterization of a Family of Set Robust Control Invariant Sets)
Suppose that A1 and A2 are satisfied. Then Φ , {z ⊕ R | z ∈ Z} is set robust control
invariant for system x+ = Ax + Bu + w and constraint set (X, U, W).

Proof: Let X ∈ Φ, then X = z ⊕ R for some z ∈ Z. For every x ∈ X we have


x = z + y, where y , x − z ∈ R. Let the control law θ : Z ⊕ R → U be defined by
θ(x) , ϕ(z) + ν(y) and let u = θ(x). Then x+ = Ax + Bθ(x) + w = A(z + y) + B(ϕ(z) +
ν(y)) + w = Az + Bϕ(z) + Ay + Bν(y) + w. It follows from A2 that Az + Bϕ(z) ∈ Z
and, by A1, we have Ay + Bν(y) + w ∈ R, ∀w ∈ W, ∀y ∈ R. Hence we conclude that
Ax + Bθ(x) + w ∈ Y, ∀w ∈ W, ∀x ∈ X where Y , z + ⊕ R ∈ Φ. The fact that X ⊆ X
follows from A1 and A2 because Z ⊕ R ⊆ X. The fact that u = θ(x) ∈ U for all x ∈ X
and every X ∈ Φ follows from A1 and A2 since ϕ(z) ∈ U ⊖ Uν ⊆ (1 − β)U, ∀z ∈ Z and
ν(y) ∈ βU, ∀y ∈ R implying that u = θ(x) ∈ U, ∀x ∈ X and every X ∈ Φ.

QED.

It follows from Theorem 4.1 and Definition 4.2 that the set Φ = {z ⊕ R | z ∈ Z}
where the sets R and Z satisfy A1 and A2 is set robust positively invariant for system
x+ = Ax + Bθ(x) + w and constraint set (Xθ , W). The control law θ : Z ⊕ R → U is
defined as in the proof of Theorem 4.1:

θ(x) = ϕ(z) + ν(y) (4.2.7)

with x = z + y, x ∈ X, X = z ⊕ R ∈ Φ. The set Xθ is given by Xθ = X ∩ {x | θ(x) ∈ U}.
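As a concrete illustration of the composite law (4.2.7), the following sketch keeps a copy of the nominal state z, applies u = ϕ(z) + ν(x − z) and updates z with the nominal dynamics (4.2.3), so that the realized state stays in the moving cross–section z ⊕ R. It is a minimal sketch only: the invariant cross–section R itself is not constructed, and for concreteness it uses the system and gain of the numerical example in Section 4.5.4 with the simplest choice ϕ(z) = Kz, ν(y) = Ky (Case III of Section 4.5).

import numpy as np

# Data of the numerical example in Section 4.5.4.
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.4, -1.4]])             # nominal feedback phi(z) = K z

def nu(y):
    """Ancillary law keeping y = x - z inside R; here simply linear (Case III of Section 4.5)."""
    return K @ y

def tube_controller_step(x, z, w):
    """One step of the composite controller theta(x) = phi(z) + nu(y) of Theorem 4.1."""
    y = x - z                            # deviation from the nominal state
    u = K @ z + nu(y)                    # theta(x), cf. (4.2.7)
    x_next = A @ x + B @ u + w           # true successor state
    z_next = A @ z + B @ (K @ z)         # nominal successor state, cf. (4.2.3)
    return x_next, z_next, u

rng = np.random.default_rng(0)
x, z = np.array([1.0, -0.5]), np.array([1.0, -0.5])
for _ in range(20):
    w = rng.uniform(-0.1, 0.1, size=2)   # a sample of the bounded disturbance of (4.5.10)
    x, z, _ = tube_controller_step(x, z, w)
print("final deviation y = x - z:", x - z)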


Remark 4.2 (Exploiting Linearity) Theorem 4.1 exploits linearity as illustrated in
Figure 4.2.1.

4.3 Special Set Robust Control Invariant Sets


We observe that generally there exists an infinite number of set robust control invariant
sets Φ, since given a set R satisfying A1 there exists, in general, an infinite number of
CI sets Z satisfying A2. Our attention is therefore restricted to some important cases
such as the minimal and the maximal set robust control invariant set for a given set R
satisfying A1(i). We shall define the minimal and maximal set robust control invariant

x

y
X + = z + ⊕ R, z ∈ Z
z

z+

X = z ⊕ R, z ∈ Z
y+

x+

Figure 4.2.1: Exploiting Linearity – Theorem 4.1

sets for a given robust control invariant set R, i.e. for a given set R satisfying A1(i).
Before proceeding we need the following definitions:

Definition 4.3 (Maximal Set Robust Control Invariant Set for a given RCI set R) Given
a RCI set R satisfying A1(i), a set Φ∞ (R) = {z ⊕ R | z ∈ Z} is a maximal set robust
control invariant (MSRCI) set for system x+ = Ax+Bu+w and constraint set (X, U, W)
if the set Z is the maximal control invariant set satisfying A2(i).
Let:
ρ(z) , supz∈Z |z|p    (4.3.1)

Definition 4.4 (Minimal Set Robust Control Invariant Set for a given RCI set R) Given
a RCI set R satisfying A1(i), a set Φ0 (R) = {z ⊕ R | z ∈ Z} is a minimal set robust
control invariant (mSRCI) set for system x+ = Ax + Bu + w and constraint set (X, U, W)
if the set Z is a control invariant set satisfying A2(i) and Z minimizes ρ(z) over all control
invariant sets satisfying A2(i) (i.e. Z is contained in the minimal p norm ball).
It is important to note that Definitions 4.3 and 4.4 provide a way for direct char-
acterization of the minimal and the maximal set robust control invariant set as we will
establish shortly. However, these definitions are given for the case when the set R sat-
isfying A1 is specified a–priori. If one attempts to make a complete generalization of
these concepts, various technical issues (such as for instance non–uniqueness) will occur.
In order to have a well defined concept we consider the relevant case when R is an a–priori
fixed set satisfying A1.
The following observation is a direct consequence of Definitions 4.3 and 4.4.
Proposition 4.1 (The minimal and the maximal SRCI sets for a given R) Let a set R
satisfying A1 be given. Then:

(i) The minimal set robust control invariant set Φ0 (R) is Φ0 (R) = {z ⊕ R | z ∈ {0}},

(ii) The maximal set robust control invariant set Φ∞ (R) is Φ∞ (R) = {z ⊕ R | z ∈ Z∞ },
where Z∞ is the maximal control invariant set satisfying A2.

The proof of this observation follows directly from Definitions 4.3 and 4.4, the fact that {0} is a control invariant set satisfying A2 with ρ(z) = 0, and the fact that Z∞ , the maximal control invariant set satisfying A2, exists and is unique [Aub77, Aub91].
Note that if W = {0} we can choose the set R = {0}, since R = {0} satisfies A1. For
this deterministic case we observe the following:
Remark 4.3 (Uncertainty Free Case) If W = {0} the sets Φ0 (R) and Φ∞ (R), respec-
tively, tend (in the Hausdorff metric) to the origin and the maximal control invariant set
for system x+ = Ax + Bu and constraint set (X, U).

4.4 Dynamical Behavior of X ∈ Φ


We will now consider the set trajectory with initial set X0 ∈ Φ. This set trajectory is
defined as a sequence of the sets:

Yi , {y | y = Ax + Bθ(x) + w, x ∈ Yi−1 , w ∈ W}, i ∈ N, Y0 , X0 , (4.4.1)

The set sequence {Yi }, i ∈ N is the set trajectory starting from the set X0 and can be
considered as a forward reachable tube in which any individual trajectory of the system
x+ = Ax + Bθ(x) + w, w ∈ W with initial condition x0 ∈ X0 is contained. An interesting
observation can be established under the following assumption:
A3: There exists a Lyapunov function V : Z∞ → R (see Definition 1.22, and note that
one of the properties of V is V (Az + Bϕ(z)) − V (z) < 0, ∀z ≠ 0, z ∈ Z∞ ).
A relevant observation is:
Proposition 4.2 (Convergence of the outer – bounding tube) Suppose that A1, A2 and
A3 hold and that the sets R and Z∞ are compact. Let X0 ∈ Φ∞ (R) and let the set
sequence {Yi }, i ∈ N be defined as in (4.4.1), then there exists a set sequence {Xi }, i ∈ N
such that Xi ∈ Φ∞ (R), ∀i ∈ N (Xi = zi ⊕ R with zi ∈ Z∞ for all i ∈ N) and Yi ⊆ Xi for
all i ∈ N; moreover Xi → R ∈ Φ0 (R) ⊆ Φ∞ (R) in the Hausdorff metric as i → ∞.

Proof: If Assumptions A1, A2 hold we can conclude that Φ∞ (R) 6= ∅. Let X0 ∈


Φ∞ (R) so that X0 = z0 ⊕ R for some z0 ∈ Z∞ . We define the set sequence {Xi } such
that Xi ∈ Φ(R), ∀i as follows:

Xi = zi ⊕ R, zi = Azi−1 + Bϕ(zi−1 ), i ∈ N+ , X0 = z0 ⊕ R.

The proof of the assertion that Yi ⊆ Xi for some Xi ∈ Φ∞ (R) for all i ∈ N is by
induction. Suppose that Yk ⊆ Xk with Xk ∈ Φ∞ (R) for some finite integer k ∈ N+ .
Since Xk ∈ Φ∞ (R) we have that Xk , zk ⊕ R for some zk ∈ Z∞ , by Theorem 4.1 we
have:
{Ax + Bθ(x) + w | x ∈ Xk , w ∈ W} ⊆ Xk+1 , zk+1 ⊕ R,

where zk+1 = Azk + Bϕ(zk ) ∈ Z∞ so that Xk+1 ∈ Φ∞ (R). Since Yk ⊆ Xk , it follows that

Yk+1 = {Ax + Bθ(x) + w | x ∈ Yk , w ∈ W} ⊆ {Ax + Bθ(x) + w | x ∈ Xk , w ∈ W}.

Hence Yk+1 ⊆ Xk+1 and Xk+1 ∈ Φ∞ (R). Since Y0 = X0 and since for any k ∈ N+ the
set inclusion Yk ⊆ Xk implies that Yk+1 ⊆ Xk+1 we conclude that Yk ⊆ Xk , ∀k ∈ N.
Since Xk = zk ⊕ R, k ∈ N and zk = Azk−1 + Bϕ(zk−1 ), it follows by A3 that zk → 0 as k → ∞ for all z0 ∈ Z∞ , so that Xk = zk ⊕ R → R (in the Hausdorff metric) as k → ∞, because dpH (Xk , R) ≤ d(zk , 0) → 0 as k → ∞ (where d(zk , 0) , |zk − 0|p ).

QED.

An appropriate graphical illustration of Proposition 4.2 is given in Figure 4.4.1.


[Figure 4.4.1: A Graphical Illustration of the Convergence Observation. Starting from X0 , z0 ⊕ R with z0 ∈ Z∞ , the exact reachable sets Yi remain inside the sets Xi = zi ⊕ R with zi = Azi−1 + Bϕ(zi−1 ), and Xi → R as i → ∞.]

Corollary 4.1 (Exponential Convergence) If the Lyapunov function V (·) of A3 satisfies


that V (Az + Bϕ(z)) − V (z) < −α|z|, ∀z ≠ 0, z ∈ Z∞ (with α > 0), then the set sequence
{Xi } (of Proposition 4.2) converges exponentially to the set R in the Hausdorff metric.

Proof: The proof of this result follows from Proposition 4.2 and exponential conver-
gence of the sequence {zi } to the origin. More detailed analysis is given in the proof
of Theorem 9.1.

QED.

4.5 Constructive Simplifications


Here we provide a possible choice for the sets R, Z and the corresponding Lyapunov
function V (·) such that satisfaction of Assumptions A1–A3 is easy to verify.

4.5.1 Case I

The set R and the corresponding feedback controller ν(·) satisfying A1 can be constructed
by the methods considered in Chapter 3. The set Z and the corresponding control law ϕ(·)
satisfying A2 can be chosen as follows. The control law ϕ(·) can be chosen to be any

stabilizing linear state feedback control law for system z + = Az + Bv, i.e. ϕ(z) = Kz, and a suitable choice for the set Z is any positively invariant set for system z + = (A + BK)z and the tighter constraints specified in A2. In this case an appropriate
choice for Lyapunov function V (·) satisfying A3 is any solution to the Lyapunov discrete
time equation, i.e.
(A + BK)′ P (A + BK) − P < 0, P > 0.

Alternatively one can also use the solution of the discrete time algebraic Riccati equation.
In this case, A2 and A3 will be trivially satisfied providing that A1 is satisfied. Sat-
isfaction of A1 is easily verified by solving the optimization problem specified in (3.2.31)
or (3.2.37).
The corresponding control law θ(·) is then given by:

θ(x) = Kz + ν(y), x ∈ X = z ⊕ R (4.5.1)

with y = x − z and X ∈ Φ. It is also clear that in this case one has that the minimal
and the maximal set robust positively invariant sets for a given set R are given by:

Φ0 (R) = {z ⊕ R |z ∈ {0}}, Φ∞ (R) = {z ⊕ R |z ∈ Z∞ } (4.5.2)

where Z∞ is the maximal robust positively invariant set for system z + = (A + BK)z and
constraint set (Z, V) defined in (4.2.6).
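For Case I, the Lyapunov function of A3 can be obtained directly from the discrete time Lyapunov equation. The sketch below is illustrative only: it uses the gain of the numerical example in Section 4.5.4 and an arbitrary weight Q, and it only computes P and checks the decrease condition, while the set Z∞ itself would be computed by the standard maximal positively invariant set algorithms referred to above.

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.4, -1.4]])             # any stabilizing gain, phi(z) = K z
Q = np.eye(2)                            # any positive definite weight (assumed)

A_cl = A + B @ K
# Solve (A+BK)' P (A+BK) - P = -Q, i.e. P = (A+BK)' P (A+BK) + Q.
P = solve_discrete_lyapunov(A_cl.T, Q)

# Sanity check of the decrease condition V(Az + B*phi(z)) - V(z) < 0 for z != 0.
z = np.array([1.0, -2.0])
decrease = (A_cl @ z) @ P @ (A_cl @ z) - z @ P @ z
print("P =\n", P)
print("V(z+) - V(z) =", decrease, "(should be negative)")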

4.5.2 Case II

Further simplification is obtained if the set R is chosen to be a robust positively invariant


set for the system y + = (A + BK1 )y + w, i.e. the corresponding feedback controller ν(·)
is restricted to be linear. The set R can be constructed by the methods considered in
Chapter 2. As in the first simplifying case the set Z and the corresponding control law
ϕ(·) satisfying A2 can be chosen as follows. The control law ϕ(z) = K2 z , where K2 is any
stabilizing linear state feedback control law for system z + = Az + Bv, consequently the
suitable choice for the set Z is any positively invariant set Z for system z + = (A + BK2 )z
and tighter constraints specified in A2. In this case an appropriate choice for Lyapunov
function V (·) satisfying A3 is any solution to the Lyapunov discrete time equation, i.e.

(A + BK2 )′ P (A + BK2 ) − P < 0, P > 0.

The solution of the discrete time algebraic Riccati equation can also be used.
If the set F∞^K1 defined in (2.2.2), the minimal RPI set for system y + = (A + BK1 )y + w, satisfying A1 exists (which practically means that the set F K1 (ζ, s) of Theorem 2.1 or Theorem 2.3 satisfies Assumption A1(i) with ν(y) = K1 y), then Assumptions A2 and A3 will
be satisfied. The corresponding control law θ(·) is then given by:

θ(x) = K2 z + K1 y, x ∈ X = z ⊕ R (4.5.3)

with y = x − z and X ∈ Φ. In this case one has that the minimal and the maximal set
robust positively invariant sets for a given set R = F K1 (ζ, s) are given by:

Φ0 (R) = {z ⊕ R |z ∈ {0}}, Φ∞ (R) = {z ⊕ R |z ∈ Z∞ } (4.5.4)

where Z∞ is the maximal robust positively invariant set for system z + = (A + BK2 )z
and constraint set (Z, V) defined in (4.2.6).

4.5.3 Case III

The simplest but most conservative case is when the control laws ν(·) and ϕ(·) are chosen
to be the same linear state feedback control law. In this case the set R can be constructed
by the methods considered in Chapter 2 (see also brief discussion for case II) and the set
Z is any positively invariant set Z for system z + = (A + BK)z and tighter constraints
specified in A2. In this case an appropriate choice for Lyapunov function V (·) satisfying
A3 is as in simplifying case II, i.e. any solution to the Lyapunov discrete time equation
or the solution of the discrete time algebraic Riccati equation.
Assumptions A2 and A3 are in this case satisfied if A1 is satisfied. The corresponding
control law θ(·) is then given by:

θ(x) = Kx = K(z + y), x ∈ X = z ⊕ R (4.5.5)

with y = x − z and X ∈ Φ = {z ⊕ R | z ∈ Z}. Similarly to the case II, the minimal and
the maximal set robust positively invariant sets for a given set R = F K (ζ, s) are:

Φ0 (R) = {z ⊕ R |z ∈ {0}}, Φ∞ (R) = {z ⊕ R |z ∈ Z∞ } (4.5.6)

where Z∞ is the maximal robust positively invariant set for system z + = (A + BK)z and
constraint set (Z, V) defined in (4.2.6).

Remark 4.4 (Ellipsoidal Sets) It is important to observe that our concept does not
require any particular shape of the sets R and Z but merely certain properties of these
sets as illustrated in Figure 4.5.1 by using ellipsoidal sets R and Z satisfying A1 and
A2.

4.5.4 Numerical Example

In order to illustrate our results we consider a simple, second order, linear discrete time
system defined by:

x+ = [ 1 0.2 ; 0 1 ] x + [ 0 ; 1 ] u + w,    (4.5.7)
which is subject to the following set of control constraints:

u ∈ U , { u ∈ R | − 2 ≤ u ≤ 2} (4.5.8)

and state constraints:

x ∈ X , { x = (x1 , x2 )′ ∈ R2 | − 10 ≤ x1 ≤ 1, −10 ≤ x2 ≤ 10} (4.5.9)

[Figure 4.5.1: Sample Set Trajectory for a set X0 ∈ Φ∞ (R), with ellipsoidal sets R and Z∞ : the set trajectory starts at X0 ∈ Φ∞ (R) and converges to X∞ ∈ Φ0 (R) = R inside Φ∞ (R) (axes x1 , x2 ).]

while the disturbance is bounded:

w ∈ W , { w ∈ R2 | |w|∞ ≤ 0.1}. (4.5.10)

We illustrate our results by considering the simplest case, case III. The local linear state feedback control law is
u = −[2.4 1.4]x (4.5.11)

and it places the eigenvalues of the closed loop system at 0.2 and 0.4. The invariant set R is computed by using the methods of Chapter 2 and [RKKM03]. In Figure 4.5.2 we show the trajectory of a set starting from the set X0 ∈ Φ∞ (R), where X0 = z0 ⊕ R, z0 ∈ Z∞ and z0 is one of the vertices of Z∞ . As can be seen, the set trajectory converges to X∞ ∈ Φ0 (R) = R.

4.6 Summary
In this chapter we have introduced the concept of set robust control invariance. This
concept is a generalization of the standard concepts in the set invariance theory. A novel
family of set robust control invariant sets has been characterized and the most impor-
tant members of this family, the minimal and the maximal set robust control invariant
sets, have been identified. The concept has been discussed in more detail for a relevant
class of the discrete time systems – linear systems. A set of constructive simplifications
and methods has also been provided. These simplifications allow for devising a set of
efficient algorithms based on the standard algorithms of set invariance and the results
given in Chapters 2 and 3. These results are very useful in the design of robust model
predictive controllers as will be illustrated in the sequel of this thesis. The proposed concept has been illustrated, for one of the simplifying cases, by an appropriate numerical example. An extension of the presented results to the imperfect state information case has been considered and a set of preliminary results has been established, but it will be presented elsewhere. This line of research enables an appropriate treatment of set invariance for the output feedback case.

[Figure 4.5.2: Sample Set Trajectory for a set X0 ∈ Φ∞ (R): the set trajectory starts at X0 = z0 ⊕ R ∈ Φ∞ (R) and converges to X∞ ∈ Φ0 (R) = R; the sets Φ∞ (R) and Z∞ are also shown (axes x1 , x2 ).]

Chapter 5

Regulation of discrete-time linear systems with positive state and control constraints and bounded disturbances

The gift of mental power comes from God, Divine Being, and if we concentrate our minds
on that truth, we become in tune with this great power.

– Nikola Tesla

Most studies of constrained control include the assumption that the origin is in the
interior of constraint sets, see for example [MRRS00, BM99] and references therein. This
assumption is not always satisfied in practice. In some practical problems the controlled
system is required to operate as close as possible to, or at, the boundary of the constraint sets.
This issue has been discussed in [RR99, PWR03]. In fact, a variety of control problems
require the control action and/or state to be positive. Typical applications include situ-
ations where the operating point maximizes (steady state) efficiency so that the steady
state control and/or the steady state itself lie on the boundaries of their respective con-
straint sets. Any deviation of the control and/or state from its steady state value must
therefore be directed to the interior of its constraint set.
Here we consider a more general problem – the regulation problem for discrete-time
linear systems with positive state and control constraints subject to additive and bounded
disturbances. Control under positivity constraints raises interesting problems that are
amplified if the system is subject to additive and bounded disturbances.
Instead of controlling the system to the desired reference couple (x̂, û) that lies on
the boundary of the respective constraint sets, we control the system to a robust control
invariant set centered at an equilibrium point (x̄, ū) while minimizing an appropriate dis-

tance from the reference couple (x̂, û). The basic idea, used in this chapter, is graphically
illustrated in Figure 5.0.1.
[Figure 5.0.1: Graphical Illustration of the translated RCI set. The reference couple (x̂, û) lies on the boundary of X × U ⊆ Rn+ × Rm+ ; the set Z , {(x, u) ∈ X × U | Ax + Bu ⊕ W ⊆ ProjX (Z)} is centered at an equilibrium couple (x̄, ū) with x̄ = Ax̄ + B ū.]

The first subproblem is the construction of a suitable target set, that is an appropri-
ately computed robust control invariant set centered at a suitable equilibrium (x̄, ū). This
set is then used to implement a standard robust time-optimal control scheme. We also re-
mark that the recent methodology of parametric programming [BMDP02, DG00, MR03b]
can be used to obtain controllers of low/moderate complexity [GPM03] that ensure ro-
bust constraint satisfaction as well as robust time–optimal convergence to the target set.
The local control law then keeps the state trajectory inside of this set while satisfying
the positive control and state constraints despite the disturbances.
Our first step is to extend the results of Chapter 3 and characterize a novel family of
the polytopic robust control invariant sets for linear systems under positivity constraints.
The existence of a member of this family, that is a RCI set for system x+ = Ax + Bu + w
and constraints set (X, U, W), can be checked by solving a single linear or quadratic
programming problem. The solution of this optimization problem yields the corresponding
controller. Our second step is to exploit these results and standard ideas in robust time-
optimal control [BR71b, Bla92, MS97b] to devise an efficient control algorithm – a robust
time–optimal control scheme for regulation of uncertain linear systems under positivity
constraints. The resultant computational scheme has certain stability properties and we
established robust finite–time attractivity of an appropriate RCI set S – the ‘translated
origin’ for the uncertain system.

5.1 Preliminaries
We consider the following discrete-time linear time-invariant (DLTI) system:

x+ = Ax + Bu + w, (5.1.1)

where x ∈ Rn is the current state, u ∈ Rm is the current control action and x+ is the successor
state, w ∈ Rn is an unknown disturbance and (A, B) ∈ Rn×n × Rn×m . The disturbance
w is persistent, but contained in a convex and compact (i.e. closed and bounded) set
W ⊂ Rn that contains the origin. We make the standing assumption that the couple
(A, B) is controllable. The system (5.1.1) is subject to the following set of hard state
and control constraints:
(x, u) ∈ X × U (5.1.2)

where X ⊆ Rn+ and U ⊆ Rm+ are polyhedral and polytopic sets, respectively, having non–empty interiors.
Let the set F denote the set of equilibrium points for the nominal system x+ =
Ax + Bu:
F , {(x̄, ū) | (A − I)x̄ + B ū = 0} (5.1.3)
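For a concrete way of picking a candidate equilibrium couple, the sketch below (an illustration only; it ignores the constraint sets X × U, which the optimization problems formulated later in this chapter handle explicitly, and the reference values are placeholders) projects a desired reference couple onto the subspace F using a null–space basis.

import numpy as np
from scipy.linalg import null_space

def nearest_equilibrium(A, B, x_ref, u_ref):
    """Project (x_ref, u_ref) onto F = {(x, u) : (A - I) x + B u = 0}.

    Constraint sets are ignored here; this is only the unconstrained Euclidean projection."""
    n, m = B.shape
    M = np.hstack([A - np.eye(n), B])          # (A - I) x + B u = 0  <=>  M [x; u] = 0
    N = null_space(M)                          # orthonormal basis of the null space of M
    xi_ref = np.concatenate([x_ref, u_ref])
    xi_star = N @ (N.T @ xi_ref)               # orthogonal projection onto null(M)
    return xi_star[:n], xi_star[n:]

A = np.array([[0.9625, -0.1837], [0.3633, 0.8289]])
B = np.array([[0.0618], [-0.5990]])
x_bar, u_bar = nearest_equilibrium(A, B, x_ref=np.array([1.0, 0.5]), u_ref=np.array([0.2]))
print("equilibrium state:", x_bar, "equilibrium input:", u_bar)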

We proceed by addressing the first subproblem, that is the existence and construction
of the target set, that is an appropriately computed robust control invariant set for
system (5.1.1) and constraint set (X, U, W) centered at a suitable equilibrium (x̄, ū).

5.2 Robust Control Invariance Issue – Revisited


First, we characterize a family of the polytopic RCI sets for the system (5.1.1) and
constraint set (Rn , Rm , W) case by extending the relevant results established in Chapter 3.
We shall repeat some necessary discussion of Chapter 3 in order to enable the reader to
follow arguments of this chapter as easily as possible. Let Mi ∈ Rm×n , i ∈ N and for each
k ∈ N let Mk , (M0 , M1 , . . . , Mk−2 , Mk−1 ). It is shown in Chapter 3 that an appropriate
characterization of a family of RCI sets for system (5.1.1) and constraint set (Rn , Rm , W)
is given by the following sets for k ≥ n:
Rk (Mk ) , D0 (Mk )W ⊕ D1 (Mk )W ⊕ · · · ⊕ Dk−1 (Mk )W    (5.2.1)

where the matrices Di (Mk ), i ∈ Nk , k ≥ n are defined by:


D0 (Mk ) , I,   Di (Mk ) , A^i + Σ_{j=0}^{i−1} A^{i−1−j} BMj ,  i ≥ 1    (5.2.2)

providing that Mk satisfies:


Dk (Mk ) = 0. (5.2.3)

It was remarked in Chapter 3 that since the couple (A, B) is assumed to be controllable
such a choice exists for all k ≥ n. Let Mk denote the set of all matrices Mk satisfying
condition (5.2.3):
Mk , {Mk | Dk (Mk ) = 0} (5.2.4)
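As a quick computational check of this parametrization, the sketch below (illustrative only; the double–integrator matrices A, B and the particular dead–beat choice of Mk with k = n = 2 are placeholder assumptions, not thesis data) builds the matrices Di (Mk ) of (5.2.2) and verifies the condition Dk (Mk ) = 0 of (5.2.3).

import numpy as np

def D_matrices(A, B, M_list):
    """Return [D_0, ..., D_k] of (5.2.2), with k = len(M_list)."""
    n = A.shape[0]
    D = [np.eye(n)]
    for i in range(1, len(M_list) + 1):
        # D_i = A^i + sum_{j=0}^{i-1} A^(i-1-j) B M_j
        Di = np.linalg.matrix_power(A, i)
        for j in range(i):
            Di = Di + np.linalg.matrix_power(A, i - 1 - j) @ B @ M_list[j]
        D.append(Di)
    return D

# Placeholder data: double integrator and a k = 2 dead-beat choice (M_0, M_1).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
M_list = [np.array([[-1.0, -2.0]]), np.array([[1.0, 1.0]])]

D = D_matrices(A, B, M_list)
print("D_k(M_k) =\n", D[-1], "\n(zero matrix, so this M_k belongs to the set M_k)")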

It is established in Chapter 3 in Theorem 3.1 that the family of sets (5.2.1) is a family
of polytopic RCI sets for system (5.1.1) and constraint set (Rn , Rm , W). However, the

family of RCI sets (5.2.1) is merely a subset of a richer family of RCI sets for system (5.1.1)
and constraint set (Rn , Rm , W) defined by the following sets for k ≥ n and any triplet
(x̄, ū, Mk ) ∈ F × Mk :
Sk (x̄, ū, Mk ) , x̄ ⊕ Rk (Mk ) (5.2.5)

We can now present the next result:


Theorem 5.1. (Characterization of a novel family of RCI sets) Given any triple
(x̄, ū, Mk ) ∈ F × Mk , k ≥ n and the corresponding set Sk (x̄, ū, Mk ) there exists a control
law µ : Sk (x̄, ū, Mk ) → Rm such that Ax+Bµ(x)⊕W ⊆ Sk (x̄, ū, Mk ), ∀x ∈ Sk (x̄, ū, Mk ),
i.e. the set Sk (x̄, ū, Mk ) is RCI for system (5.1.1) and constraint set (Rn , Rm , W).

Proof: Let x ∈ Sk (x̄, ū, Mk ) so that x = x̄ + y for some (x̄, ū, y) ∈ F × Rk (Mk ). Let
µ(x) = ū + ν(y), where ν(·) is the control law of Theorem 3.1 so that x+ ∈ A(x̄ + y) +
B(ū + ν(y)) ⊕ W. Since (x̄, ū) ∈ F, x̄ = Ax̄ + B ū. Also Ay + Bν(y) ⊕ W ⊆ Rk (Mk ), ∀y ∈
Rk (Mk ) by Theorem 3.1. Hence, Ax + Bµ(x) ⊕ W = Ax̄ + B ū + Ay + Bν(y) ⊕ W =
x̄ + Ay + Bν(y) ⊕ W ⊆ x̄ ⊕ Rk (Mk ) = Sk (x̄, ū, Mk ), ∀x ∈ Sk (x̄, ū, Mk ).

QED.

Similarly to remark following Theorem 3.1 in Chapter 3 we make a relevant remark


on the selection of the feedback control law µ(·) of Theorem 5.1. Before proceeding we
define, for each x ∈ Sk (x̄, ū, Mk ):

W(x) , {w | w ∈ Wk , x̄ + Dw = x} (5.2.6)

where w , {w0 , . . . , wk−1 }, Wk , W × W × . . . × W and D = [Dk−1 (Mk ) . . . D0 (Mk )].

Remark 5.1 (Note on the selection of the feedback control law µ(·)) If we define:

U(x) , ū + Mk W(x) (5.2.7)

where (x̄, ū) ∈ F. It follows that the feedback control law µ : Sk (x̄, ū, Mk ) → Rm is in fact
set valued, i.e. µ(·) is any control law satisfying:

µ(x) ∈ U(x) (5.2.8)

In other words the feedback control law µ(·) can be chosen to be:

µ(x) = ū + Mk w(x), w(x) ∈ W(x) (5.2.9)

with (x̄, ū) ∈ F.


We proceed to characterize an appropriate selection of the feedback control law µ(·).
The feedback control law µ : Sk (x̄, ū, Mk ) → Rm can be defined by:

µ(x) , ū + Mk w0 (x) (5.2.10)

where (x̄, ū, Mk ) ∈ F ×Mk and for each x ∈ Sk (x̄, ū, Mk ) the optimal disturbance sequence
w0 (x) = {w0^0 (x), w1^0 (x), . . . , wk−1^0 (x)} is the unique solution of the quadratic program Pw (x):

w0 (x) , arg minw { |w|2 | w ∈ W(x) },    (5.2.11)

It is important to observe that since the feedback control law µ(x) = ū + Mk w0 (x) and
since (5.2.11) defines a piecewise affine function w0 (·) of the state due to the constraint
w ∈ Wk , it follows that µ : Sk (x̄, ū, Mk ) → Rm is a piecewise affine function of state
x ∈ Sk (x̄, ū, Mk ) because it is an affine map of a piecewise affine function. Implementa-
tion of µ(·) can be simplified by noticing that w0 (x) in (5.2.10) can be replaced by any
disturbance sequence w , {w0 , w1 , . . . , wk−1 } ∈ W(x) as already remarked.
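A minimal online implementation of this simplification is sketched below. It is illustrative only: it assumes W is a box {w : |w|∞ ≤ η}, that the blocks of D and of the stacked Mk are ordered conformably, and it uses a feasibility LP to pick some w ∈ W(x) rather than the optimal w0 (x), which, as just remarked, is admissible.

import numpy as np
from scipy.optimize import linprog

def mu(x, x_bar, u_bar, D_blocks, M_blocks, eta):
    """Evaluate a control u in U(x) = u_bar + M_k W(x), cf. (5.2.7)-(5.2.9).

    Assumption: D_blocks and M_blocks are ordered so that
    x - x_bar = sum_i D_blocks[i] @ w_i and u = u_bar + sum_i M_blocks[i] @ w_i,
    with each w_i in the box W = {w : |w|_inf <= eta}."""
    n = x.size
    k = len(D_blocks)
    D = np.hstack(D_blocks)                      # maps the stacked w to the state deviation
    # Feasibility LP: find w with D w = x - x_bar and -eta <= w <= eta (zero cost).
    res = linprog(c=np.zeros(k * n), A_eq=D, b_eq=x - x_bar,
                  bounds=[(-eta, eta)] * (k * n), method="highs")
    if not res.success:
        raise ValueError("x is not in S_k(x_bar, u_bar, M_k)")
    w = res.x.reshape(k, n)
    return u_bar + sum(M_blocks[i] @ w[i] for i in range(k))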
Remark 5.2 (Dead beat nature and explicit form of the feedback control law µ(·)) We
notice that Remarks 3.3 and 3.5 hold for this case with appropriate and obvious modifi-
cations.
Theorem 5.1 states that for any k ≥ n the RCI set Sk (x̄, ū, Mk ) , finitely determined
by k, is easily computed if W is a polytope. More importantly, the set Sk (x̄, ū, Mk ) and
the feedback control law µ(·) are parametrized by the couple (x̄, ū) and the matrix Mk .
This relevant consequence of Theorem 5.1 allows us to formulate an LP or QP that yields
the set Sk (x̄, ū, Mk ) while minimizing an appropriate norm of the set Sk (x̄, ū, Mk ) or the
standard Euclidean distance of the couple (x̄, ū) from desired reference couple (x̂, û) in
the case of hard positive state and control constraints.

5.2.1 Optimized Robust Controlled Invariance Under Positivity Con-


straints

We will provide more detailed analysis for the following relevant case frequently encoun-
tered in practice:
W , {Ed + f | |d|∞ ≤ η} (5.2.12)

where d ∈ Rt , E ∈ Rn×t and f ∈ Rn and:



F ≠ ∅ (5.2.13)

where F , F ∩ (interior(X) × interior(U)). We illustrate that in this case, one can for-
mulate an LP or QP, whose feasibility establishes existence of a RCI set Sk (x̄, ū, Mk ) for
system (5.1.1) and constraint set (X, U, W), i.e. x ∈ X, µ(x) ∈ U and Ax + Bµ(x) ⊕ W ⊆
Sk (x̄, ū, Mk ) for all x ∈ Sk (x̄, ū, Mk ). The control law µ(x) satisfies µ(x) ∈ U (x̄, ū, Mk )
for all x ∈ Sk (x̄, ū, Mk ) where:
k−1
M
U (x̄, ū, Mk ) , ū ⊕ Mi W (5.2.14)
i=0

The constraints (5.1.2) are satisfied if:

Sk (x̄, ū, Mk ) ⊆ Xεx , U (x̄, ū, Mk ) ⊆ Uεu (5.2.15)

where Xεx , {x | Cx x ≤ cx − εx }, Uεu , {u | Cu u ≤ cu − εu } (with Cx ∈ Rqx ×n , cx ∈ Rqx , Cu ∈ Rqu ×m , cu ∈ Rqu ) and (εx , εu ) ≥ 0.
Let γ , (x̄, ū, Mk , εx , εu , α, β) and

Γ , {γ | (x̄, ū, Mk ) ∈ F × Mk ,
Sk (x̄, ū, Mk ) ⊆ Xεx ∩ Bnp (x̂, α),
U (x̄, ū, Mk ) ⊆ Uεu ∩ Bmp (û, β),
(εx , εu , α, β) ≥ 0}    (5.2.16)

where Sk (x̄, ū, Mk ) is given by (5.2.5) and U (x̄, ū, Mk ) by (5.2.14) and (x̂, û) is the desired
reference couple.
Let

d1 (γ) , qα α + qβ β
d2 (γ) , |x̄ − x̂|2Q + |ū − û|2R (5.2.17)

where the couple (qα , qβ ) and the positive definite weighting matrices Q and R are design
variables. Consider the following minimization problems:

Pik :  γ 0 = arg minγ {di (γ) | γ ∈ Γ},  i = 1, 2    (5.2.18)

where γ 0 , (x̄, ū, Mk , εx , εu , α, β)0 . If the corresponding norm in (5.2.16) is polytopic (for
instance p = 1, ∞) an appropriate but simple modification of the discussion of Proposition
3.2 allows one to establish that:
Proposition 5.1 (Mathematical programming problems Pik , i = 1, 2 are LP and QP
respectively) The problem P1k is a linear programming problem and the problem P2k is a
quadratic programming problem.
If the set Γ ≠ ∅, there exists an RCI set Sk = Sk (x̄, ū, Mk ) for system (5.1.1) and
constraint set (X, U, W). The solution to Pik , i = 1, 2 (which exists if Γ ≠ ∅) yields a
set Sk0 = Sk (x̄0 , ū0 , M0k ) and the corresponding control law µ(·) defined by (5.2.10) and
(5.2.11) with (x̄, ū, Mk ) = (x̄0 , ū0 , M0k ).
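The set inclusions that define Γ reduce to finitely many linear inequalities once W is given by its vertices, since the support of a Minkowski sum is the sum of the supports. The sketch below (illustrative only; it merely verifies a given candidate (x̄, ū, Mk ) against polyhedral sets Xεx and Uεu rather than optimizing over Γ) shows this reduction.

import numpy as np

def support(vertices, a):
    """Support function of a vertex-represented polytope in direction a."""
    return max(float(a @ v) for v in vertices)

def candidate_is_admissible(x_bar, u_bar, D_blocks, M_blocks, W_vertices,
                            Cx, cx, Cu, cu):
    """Check S_k(x_bar, u_bar, M_k) in {x : Cx x <= cx} and
    U(x_bar, u_bar, M_k) in {u : Cu u <= cu} via support functions, where
    S_k = x_bar + D_0 W + ... + D_{k-1} W and U(.) = u_bar + M_0 W + ... + M_{k-1} W."""
    for a, b in zip(Cx, cx):                     # state constraint rows
        lhs = a @ x_bar + sum(support(W_vertices, Di.T @ a) for Di in D_blocks)
        if lhs > b:
            return False
    for a, b in zip(Cu, cu):                     # input constraint rows
        lhs = a @ u_bar + sum(support(W_vertices, Mi.T @ a) for Mi in M_blocks)
        if lhs > b:
            return False
    return True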

Remark 5.3 (The set Sk0 is a RPI set for system x+ = Ax + Bµ0 (x) + w and (Xµ0 , W))
It follows from Theorem 5.1, Definition 1.24 and the discussion above that the set Sk0 ,
if it exists, is a RPI set for system x+ = Ax + Bµ0 (x) + w and constraint set (Xµ0 , W),
where Xµ0 , Xε0x ∩ {x | µ0 (x) ∈ Uε0u }.
Remark 5.4 (Non–uniqueness of the set Sk0 ) A relevant observation is that if the set
Γ ≠ ∅, there might exist more than one set Sk (x̄, ū, Mk ) that yields the optimal cost of Pik , i = 1, 2. The cost function might be modified; for instance, an appropriate choice for the cost function is a positively weighted quadratic norm of the decision variable γ, which yields a unique solution, since in this case the problem becomes a quadratic programming problem of the form minγ {|γ|2P | γ ∈ Γ}, where P is positive definite and represents a
suitable weight.

Similarly to Proposition 3.4 we can establish the following relevant observation:
Proposition 5.2 (The sets Sk0 as k increases) Suppose that the problem P1k (P2k ) is
feasible for some k ∈ N and that the corresponding optimal value of d1 (d2 ) is d1k^0 (d2k^0 ); then for every integer s ≥ k the problem P1s (P2s ) is also feasible and the corresponding optimal value d1s^0 (d2s^0 ) satisfies d1s^0 ≤ d1k^0 (d2s^0 ≤ d2k^0 ).
A minor modification of the arguments of the proof of Proposition 3.4 yields the proof
of Proposition 5.2.

Remark 5.5 (An obvious extension of Theorem 3.2) We observe that it is an easy
exercise to extend the results of Theorem 3.2 to this case. Hence the detailed discussion
is omitted.
Thus we conclude that the first subproblem (checking the existence and the construc-
tion of a suitable target set that is an appropriately computed robust control invariant set
and the computation of the corresponding feedback controller) can be efficiently realized
by solving a single LP or QP (if necessary for sufficiently large k ∈ N). Recalling the
discussion of Section 3.3 we observe that the crucial advantage of the proposed method lies in
the fact that the hard positive state and control constraints are incorporated directly into
the optimization problem allowing for construction of an appropriate RCI set (target set)
with a local piecewise affine feedback control law µ : Sk (x̄, ū, Mk ) → U for system (5.1.1)
and constraint set (X, U, W). These results can be used in the synthesis of the robust
time–optimal controller as we illustrate next.

5.3 Robust Time–Optimal Control under positivity con-


straints
In this section we assume that there exists an RCI set T , Sk (x̄0 , ū0 , M0k ) for sys-
tem (5.1.1) and constraint set (X, U, W) obtained by solving the problem P1k or (P2k ) for
some k ∈ N. The set T is compact, RCI and contains the point x̄0 in its interior; this set
is a suitable target set.
The robust time–optimal control problem P(x) is defined, as usual, by:

N 0 (x) , inf π,N {N | (π, N ) ∈ ΠN (x) × NNmax },    (5.3.1)

where Nmax ∈ N is an upper bound on the horizon and ΠN (x) is defined as follows:

ΠN (x) , {π | (xi , ui ) ∈ X × U, ∀i ∈ NN −1 , xN ∈ T, ∀w(·)} (5.3.2)

where for each i ∈ N, xi , φ(i; x, π, w(·)) and ui , µi (φ(i; x, π, w(·))). It should be


observed that the solution is sought in the class of the state feedback control laws because
of the additive disturbances, i.e. π is a control policy (π = {µi (·), i ∈ NN −1 }, where for
each i ∈ NN −1 , µi (·) : X → U ). The solution to P(x) is

(π 0 (x), N 0 (x)) , arg inf π,N {N | (π, N ) ∈ ΠN (x) × NNmax }.    (5.3.3)

Note that, the value function of the problem P(x) satisfies N 0 (x) ∈ NNmax and for
any integer i, the robustly controllable set Xi , {x | N 0 (x) ≤ i} is the set of initial states
that can be robustly steered (steered for all w(·)) to the target set T , in i steps or less
while satisfying all state and control constraints for all admissible disturbance sequences.
Hence N 0 (x) = i for all x ∈ Xi \ Xi−1 .
The robustly controllable sets {Xi } and the associated robust time-optimal control
laws κi : Xi → 2U can be computed by the following standard recursion:

Xi , {x ∈ X | ∃u ∈ U s.t. Ax + Bu ⊕ W ⊆ Xi−1 } (5.3.4)


κi (x) , {u ∈ U | Ax + Bu ⊕ W ⊆ Xi−1 }, ∀x ∈ Xi (5.3.5)

for i ∈ NNmax with the boundary condition X0 = T = Sk (x̄0 , ū0 , M0k ).
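The set recursion itself calls for polyhedral projection software, but membership of a given state in Xi , together with a corresponding admissible input, can be checked with a single feasibility LP once Xi−1 is available in halfspace form. A minimal sketch, assuming polytopic sets {x : Hx ≤ h}, {u : Gu ≤ g} and a vertex-represented W, is given below.

import numpy as np
from scipy.optimize import linprog

def erode(H, h, W_vertices):
    """Halfspace form of P - W (Pontryagin difference) for P = {x : Hx <= h}:
    tighten each row by sup over W of H_i w."""
    tight = np.array([max(float(Hi @ w) for w in W_vertices) for Hi in H])
    return H, h - tight

def one_step_admissible_input(x, A, B, H_prev, h_prev, G_u, g_u, W_vertices):
    """Return some u in U with Ax + Bu + W contained in X_{i-1}, or None if no such u exists.

    Membership of x in the state constraint set X itself should be checked separately."""
    He, he = erode(H_prev, h_prev, W_vertices)
    m = B.shape[1]
    # Feasibility LP in u: He (A x + B u) <= he  and  G_u u <= g_u.
    A_ub = np.vstack([He @ B, G_u])
    b_ub = np.concatenate([he - He @ (A @ x), g_u])
    res = linprog(c=np.zeros(m), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m, method="highs")
    return res.x if res.success else None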


In view of our assumption that there exists a RCI set T , Sk (x̄0 , ū0 , M0k ) for
system (5.1.1) and constraint set (X, U, W) an appropriate choice for the control law
κ0 : T → 2U is:
κ0 (x) , µ(x) (5.3.6)

where µ(·) is defined by (5.2.10)– (5.2.11) with (x̄, ū, Mk ) = (x̄0 , ū0 , M0k ).
The time-invariant control law κ0 : XNmax → 2U defined, for all i ∈ NNmax , by

κ0 (x) , { κi (x), ∀x ∈ Xi \ Xi−1 , i ≥ 1;  κ0 (x), ∀x ∈ X0 }    (5.3.7)

robustly steers any x ∈ Xi to X0 in i steps or less, while satisfying state and


control constraints, and thereafter maintains the state in X0 . We now recall a standard
result in robust time–optimal control:
Proposition 5.3 (A sequence of RCI sets) Suppose X0 = Sk (x̄0 , ū0 , M0k ) ≠ ∅, then the
set sequence {Xi } computed using the recursion (5.3.4) is a non-decreasing sequence of
RCI sets for system (5.1.1) and constraint set (X, U, W), i.e. Xi ⊆ Xi+1 ⊆ X for all
i ∈ NNmax ; moreover for each i ∈ NNmax , Xi contains the point x̄0 in its interior.
Note that if X is compact or A is invertible then the sequence {Xi } is a sequence of
compact sets. The following property of the set-valued control law κ0 (·) defined in (5.3.7)
follows directly from the construction of κ0 (·):
Theorem 5.2. (Robust – Finite Time Attractivity of X0 ) The target set X0 is robustly
finite-time attractive for the closed-loop system x+ ∈ Ax + Bκ0 (x) ⊕ W with a region of
attraction XNmax .
We observe that for any i ∈ NNmax an appropriate selection of the control law κi (x) for
all x ∈ Xi \ Xi−1 can be obtained by employing parametric mathematical programming,
as we briefly demonstrate next. For each i ≥ 1, i ∈ NNmax let:

Zi , {(x, u) ∈ X × U | Ax + Bu ∈ Xi−1 ⊖ W} (5.3.8)

and let Vi (x, u) be any linear or quadratic (strictly convex) function in (x, u) , for instance:

Vi (x, u) , |Ax + Bu|2Q (5.3.9)

Since Zi is a polyhedral set and since Vi (x, u) is a linear or a quadratic (strictly convex)
function it follows that for each i ≥ 1, i ∈ NNmax the optimization problem Pi (x):

θi0 (x) , arg min{Vi (x, u) | (x, u) ∈ Zi } (5.3.10)


u

is a parametric linear/quadratic problem. As is well known [BMDP02, DG00, MR03b,


BBM03a], the solution takes the form of a piecewise affine function of state x ∈ Xi :

θi0 (x) = Si,j x + si,j , x ∈ Ri,j , j ∈ Nli (5.3.11)

where li is a finite integer and the union of the polyhedral sets Ri,j partitions the set Xi , i.e. Xi = ∪ j∈Nli Ri,j .
If we let:
i0 (x) , arg mini {i ∈ NNmax | x ∈ Xi }    (5.3.12)

it follows that θ^0_{i0 (x)} (x) ∈ κi (x) for all i ≥ 1, i ∈ NNmax .
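Once the explicit solutions (5.3.11) have been computed offline, the online controller reduces to locating the region that contains the current state and evaluating an affine law. The sketch below is illustrative only: regions are assumed to be stored in halfspace form and ties on shared facets are broken arbitrarily.

import numpy as np

class Region:
    """One polyhedral piece R = {x : Hx <= h} with its affine law u = Sx + s."""
    def __init__(self, H, h, S, s):
        self.H, self.h, self.S, self.s = H, h, S, s

    def contains(self, x, tol=1e-9):
        return bool(np.all(self.H @ x <= self.h + tol))

def evaluate_explicit_law(x, partitions):
    """partitions[i] is the list of regions associated with X_i; since the sets X_i are
    nested, scanning i upwards and returning the first hit realizes the index i0(x)."""
    for i, regions in enumerate(partitions):
        for reg in regions:
            if reg.contains(x):
                return i, reg.S @ x + reg.s          # time-optimal index and control
    raise ValueError("x is outside X_Nmax")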

Remark 5.6 (Applicability of the proposed method) Our final remark is that the presented
results are also applicable, with a minor set of appropriate modifications, when the hard
control and state constraints are arbitrary polytopes not necessarily satisfying X × U ⊆
Rn+ × Rm+ .

5.4 Illustrative Example


Our numerical example is the second order unstable system that is a linearized model of
a flight vehicle sampled every 0.2 s:

x+ = [ 0.9625 −0.1837 ; 0.3633 0.8289 ] x + [ 0.0618 ; −0.5990 ] u + w    (5.4.1)

where

w ∈ W , { w ∈ R2 | |w|∞ ≤ 0.05 }.

The following set of hard semi–positive state and positive control constraints is required
to be satisfied:

X ={x | 0 ≤ x1 ≤ 10, −0.5 ≤ x2 ≤ 10},


U ={u | 0 ≤ u ≤ 1} (5.4.2)

where xi is the ith coordinate of a vector x. The control objective is to bring the system
as close as possible to the origin, i.e. (x̂, û) = (0, 0) that is a point on the boundary of
the constraint sets. The appropriate target set is constructed from the solution of the
modified version of the problem P1k , in which p = ∞ and (εx , εu ) was set to 0 and with
the following design parameters:

(k, qα , qβ ) = (9, 1, 1), (5.4.3)

The optimal values of the triple (x̄0 , ū0 , M0k ) are as follows: x̄0 = (0.2421, −0.0000)′ , ū0 = 0.1468 and

M0k = [ −0.0627   1.4081
        −0.0000   0.0000
        −0.0000   0.0000
        −0.0000   0.0000
         0.0000  −0.0000
         0.0000  −0.0000
         1.1753  −0.0212
         0.1611  −0.1083
         0.0000   0.0000 ]    (5.4.4)

The RCI set X0 = Sk (x̄0 , ū0 , M0k ) is shown together with the RCI set sequence
{Xi }, i ∈ N13 computed by (5.3.4) in Figure 5.4.1.
[Figure 5.4.1: RCI Set Sequence {Xi }, i ∈ N13 , shown together with the target set X0 = Sk (x̄0 , ū0 , M0k ) and the set X9 in the (x1 , x2 ) plane.]

5.5 Summary
The main contribution of this chapter is a novel characterization of a family of polytopic
RCI sets for which the corresponding control law is non-linear (piecewise affine) enabling
better results to be obtained compared with existing methods where the control law is
linear. Construction of a member of this family that is constraint admissible can be
obtained from the solution of an appropriately specified LP or QP. The optimized robust
controlled invariance algorithms were employed to devise a robust time–optimal controller, which is illustrated by an example.
The results can be extended to the case when disturbance belongs to an arbitrary
polytope. An appropriate and relatively simple extension of the presented results allows
for efficient robust model predictive control of linear discrete time systems subject to

89
positive state and control constraints and additive, but bounded disturbances. The
detailed analysis will be presented elsewhere.

Chapter 6

Robust Time Optimal Obstacle Avoidance Problem for discrete–time systems

Let no man who is not a mathematician read my work.

– Leonardo da Vinci

This chapter presents results that allow one to compute the set of states which can
be robustly steered in a finite number of steps, via state feedback control, to a given
target set while avoiding pre–specified zones or obstacles. A general procedure is given
for the case when the system is discrete-time, nonlinear and time-invariant and subject to
constraints on the state and input. Furthermore, we provide a set of specific results, which
allow one to perform the set computations using polyhedral algebra, linear programming
and computational geometry software, when the system is piecewise affine or linear with
additive state disturbances.
The importance of the obstacle avoidance problem is stressed in a seminal plenary lec-
ture by A.B. Kurzhanski [Kur04], while a more detailed discussion is given in [KMV04].
In these important papers, the obstacle avoidance problem is considered in a continu-
ous time framework and when the system is deterministic (disturbance free case). The
solution to this reachability problem is obtained by specifying an equivalent dynamic
optimization problem. This conversion (of the reachability problem into an optimization
problem) is achieved by introducing an appropriate value function. The value function is
the solution of a standard Hamilton–Jacobi–Bellman (HJB) equation. The set of states
that can be steered to a given target set, while satisfying state and control constraints
and avoiding obstacles, is characterized as the set of states belonging to the ‘zero’ level
set of the value function.
The main purpose of this chapter is to demonstrate that the obstacle avoidance prob-
lem in the discrete time setup has considerable structure, even when the disturbances are

present, that allows one to devise an efficient algorithm based on basic set computations
and polyhedral algebra in some relevant and important cases.

6.1 Preliminaries
We consider the following discrete-time, time-invariant system:

x+ = f (x, u, w) (6.1.1)

where x ∈ Rn is the current state, u ∈ Rm is the current control input and x+ is the
successor state; the bounded disturbance w is known only to that extent that it belongs to
the compact (i.e. closed and bounded) set W ⊂ Rp . The function f : Rn ×Rm ×Rp → Rn
is assumed to be continuous.
The system is subject to hard state and input constraints:

(x, u) ∈ X × U (6.1.2)

where X and U are closed and compact sets respectively, each containing the origin in
its interior. Additionally it is required that the state trajectories avoid a predefined open
set Z.
x ∉ Z    (6.1.3)

The set Z is in general specified as the union of a finite number of open sets:
Z , ∪ j∈Nq Zj ,    (6.1.4)

where q ∈ N is a finite integer.

Remark 6.1 (Time Varying Obstacles) An interesting case is when the set Z is a time–varying set. An extension of the results in this chapter to this case will be presented
elsewhere.
The problems considered in this chapter are: (i) Characterization of the set of states
that can be robustly steered to a given compact target set T in minimal time while satis-
fying the state and control constraints (6.1.2) and (6.1.3), for all admissible disturbance
sequences, and (ii) Synthesis of a robust time – optimal control strategy.
We first treat the general case in Section 6.2 and then provide a detailed analysis for
the case when the system being controlled is piecewise affine or linear, the corresponding
constraint sets X, U in (6.1.2) are, respectively, polyhedral and polytopic sets, and Z in
(6.1.3) is a polygon (the union of a finite number of open polyhedra).

6.2 Robust Time Optimal Obstacle Avoidance Problem –
General Case
The state constraints specified in (6.1.2) – (6.1.4) may be converted into a single state constraint x ∈ XZ where:

XZ , X \ Z = X \ ∪ j∈Nq Zj    (6.2.1)

Remark 6.2 (Properties & Structure of the set XZ ) If Z ⊆ interior(X), XZ is a non–


empty and closed set. Additionally, if the set Z is an open polygon and X is a closed
polyhedral set then the set XZ is a closed polygon. In this case by Proposition 1.6 the
set XZ is given by:

XZ , ∪ j∈Nr XZj ,    (6.2.2)

where r is a finite integer.
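For a single open polyhedral obstacle Z = {x : Hx < h}, the complement within X can always be written as a finite union of closed polyhedra obtained by reversing one inequality at a time; intersecting such unions handles several obstacles. A minimal sketch of this decomposition (an illustration of the idea only, not the construction of Proposition 1.6; it does not remove redundant or empty pieces, and the example data are placeholders) is given below.

import numpy as np

def complement_pieces(X_halfspaces, H, h):
    """Pieces of X \\ {x : Hx < h} as a list of halfspace systems (A, b) meaning Ax <= b.

    For each row i, the piece {x in X : H_i x >= h_i} is returned; their union is X \\ Z.
    Empty pieces are not filtered out here."""
    A_X, b_X = X_halfspaces
    pieces = []
    for Hi, hi in zip(H, h):
        A_piece = np.vstack([A_X, -Hi[np.newaxis, :]])   # -H_i x <= -h_i, i.e. H_i x >= h_i
        b_piece = np.concatenate([b_X, [-hi]])
        pieces.append((A_piece, b_piece))
    return pieces

# Example: X = {|x|_inf <= 15}, one box obstacle Z = {2 < x1 < 4, -1 < x2 < 1}.
A_X = np.vstack([np.eye(2), -np.eye(2)])
b_X = 15.0 * np.ones(4)
H = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
h = np.array([4.0, -2.0, 1.0, 1.0])
pieces = complement_pieces((A_X, b_X), H, h)
print(len(pieces), "closed polyhedral pieces cover X \\ Z")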


In order to have a well–defined problem we make the following standing assumption:
Assumption 6.1 The sets X, T, Z satisfy that (i) Z ⊆ interior(X) and (ii) T ⊆ XZ =
X \ Z.
The robust time–optimal obstacle avoidance problem P(x) is defined, as usual in
robust time–optimal control problems (See Section 5.3 of Chapter 5), by:

N 0 (x) , inf π,N {N | (π, N ) ∈ ΠN (x) × NNmax },    (6.2.3)

where Nmax ∈ N is an upper bound on the horizon and ΠN (x) is defined as follows:

ΠN (x) , {π | (xi , ui ) ∈ XZ × U, ∀i ∈ NN −1 , xN ∈ T, ∀w(·)} (6.2.4)

where for each i ∈ N, xi , φ(i; x, π, w(·)) and ui , µi (φ(i; x, π, w(·))). The solution is
sought in the class of the state feedback control laws because of the additive disturbances,
i.e. π is a control policy (π = {µi (·), i ∈ NN −1 }, where for each i ∈ NN −1 , µi : XZ → U).
The solution to P(x) is

(π 0 (x), N 0 (x)) , arg inf π,N {N | (π, N ) ∈ ΠN (x) × NNmax }.    (6.2.5)

Note that, the value function of the problem P(x) satisfies N 0 (x) ∈ NNmax and for
any integer i, the robustly controllable set Xi , {x | N 0 (x) ≤ i} is the set of initial
states that can be robustly steered (steered for all w(·)) to the target set T, in i steps
or less while satisfying all state and control constraints and avoiding the obstacles for all
admissible disturbance sequences. Hence N 0 (x) = i for all x ∈ Xi \ Xi−1 .
The robust controllable sets {Xi } and the associated robust time-optimal control laws
κi : Xi → 2U can be computed by the following standard recursion:

Xi , {x ∈ XZ | ∃u ∈ U s.t. f (x, u, W) ⊆ Xi−1 } (6.2.6)


κi (x) , {u ∈ U | f (x, u, W) ⊆ Xi−1 }, ∀x ∈ Xi (6.2.7)

for i ∈ NNmax with the boundary condition X0 = T and where f (x, u, W) =
{f (x, u, w) | w ∈ W}.
We now introduce the following assumption:
Assumption 6.2 (i) The set T is a robust control invariant set for system (6.1.1) and
constraint set (XZ , U, W).
(ii) The control law ν : XZ → U is such that T is RPI for system (6.1.1) and constraint
set (Xν , W), where Xν , XZ ∩ Xν and Xν is defined by:

Xν , {x | ν(x) ∈ U}. (6.2.8)

The control law ν(·) in Assumption 6.2(ii) exists by Assumption 6.2(i).


In view of Assumption 6.2 X0 = T is a RCI set for system (6.1.1) and constraint
set (XZ , U, W). An appropriate choice for the control law κ0 : T → 2U is:

κ0 (x) , ν(x) (6.2.9)

The time-invariant control law κ0 : XNmax → 2U defined, for all i ∈ NNmax , by



κ0 (x) , { κi (x), ∀x ∈ Xi \ Xi−1 , i ≥ 1;  κ0 (x), ∀x ∈ X0 }    (6.2.10)

robustly steers any x ∈ Xi to X0 in i steps or less, while satisfying state and


control constraints and avoiding the obstacles, and thereafter maintains the state in X0 .
We now recall a standard result in robust time–optimal control:
Proposition 6.1 (RCI property of set sequence {Xi }) Suppose that Assumption 6.2
holds and let X0 , T where T satisfies Assumption 6.2(i), then the set sequence
{Xi } computed using the recursion (6.2.6) is a non-decreasing sequence of RCI sets for
system (6.1.1) and constraint set (XZ , U, W), i.e. Xi ⊆ Xi+1 ⊆ XZ for all i ∈ NNmax .
The following property of the set-valued control law κ0 (·) defined in (6.2.10) follows
directly from the construction of κ0 (·):
Theorem 6.1. (Robust Finite Time Attractivity of X0 = T) Suppose that Assumption
6.2 holds and let X0 , T where T satisfies Assumption 6.2(i). The target set X0 , T
is robustly finite-time attractive for the closed-loop system x+ ∈ f (x, κ0 (x), W) with a
region of attraction XNmax .
It is clear that the solution of the robust time optimal obstacle avoidance problem
requires a set of efficient computational algorithms for performing the set operations in (6.2.1), (6.2.6) and (6.2.7). Our next step is to demonstrate that in certain relevant and important cases it is possible to employ standard computational geometry software (polyhedral algebra) in order to characterize the set sequence {Xi } and the correspond-
ing set valued control laws {κi (·)}.

6.3 Robust Time Optimal Obstacle Avoidance Problem –
Linear Systems
Consider the relevant case when the system defined in (6.1.1) is linear:

x+ = Ax + Bu + w (6.3.1)

where the couple (A, B) ∈ Rn×n × Rn×m is assumed to be controllable. Regarding the hard state and input constraints (6.1.2), the sets X and U are a closed polyhedron and a polytope (a bounded and closed polyhedron), respectively, and the disturbance set W is a polytope; each of these sets contains the origin as an interior point. The set Z is an open polygon
(the union of a finite number of open polyhedra).
The standard recursion for the computation of the robust controllable sets {Xi } and
the associated robust time-optimal control laws κi : Xi → 2U (6.2.6) and (6.2.7) is:

Xi , {x ∈ XZ | ∃u ∈ U s.t. Ax + Bu ⊕ W ⊆ Xi−1 } (6.3.2)


κi (x) , {u ∈ U | Ax + Bu ⊕ W ⊆ Xi−1 }, ∀x ∈ Xi (6.3.3)

for i ∈ NNmax with the boundary condition X0 = T.


Our next step is to provide a detailed characterization of the sets {Xi } under the assumption
that the set X0 = T is a polygon (the set T is generally a polytope and hence a polygon).
We proceed as follows:

Xi , {x ∈ XZ | ∃u ∈ U s.t. Ax + Bu ⊕ W ⊆ Xi−1 }
   = {x ∈ XZ | ∃u ∈ U s.t. Ax + Bu ∈ Xi−1 ⊖ W}
   = {x ∈ ∪ j∈Nr XZj | ∃u ∈ U s.t. Ax + Bu ∈ Xi−1 ⊖ W}
   = ∪ j∈Nr {x ∈ XZj | ∃u ∈ U s.t. Ax + Bu ∈ Xi−1 ⊖ W}    (6.3.4)

Since the sets Xi , i ∈ NNmax are generally polygons, the set Xi−1 ⊖ W is in general a
polygon for every i ∈ NNmax . See Proposition 1.7 for more detail and the computation
of the sets Xi−1 ⊖ W, i ∈ NNmax . It follows from Proposition 1.7 that the sets:

Yi , Xi−1 ⊖ W (6.3.5)
for i ∈ NNmax are also polygons (Yi = ∪ k∈Nqi Y(i,k) where qi is a finite integer). It follows
from (6.3.4) – (6.3.5) that:
Xi = ∪ j∈Nr {x ∈ XZj | ∃u ∈ U s.t. Ax + Bu ∈ Yi }
   = ∪ j∈Nr ∪ k∈Nqi {x ∈ XZj | ∃u ∈ U s.t. Ax + Bu ∈ Y(i,k) }
   = ∪ (j,k)∈Nr ×Nqi {x ∈ XZj | ∃u ∈ U s.t. Ax + Bu ∈ Y(i,k) }
   = ∪ (j,k)∈Nr ×Nqi X(i,j,k) ,   X(i,j,k) , {x ∈ XZj | ∃u ∈ U s.t. Ax + Bu ∈ Y(i,k) }    (6.3.6)

A similar argument shows that for all (i, j, k) ∈ NNmax × Nr × Nqi :

κ(i,j,k) (x) ⊆ κi (x), ∀x ∈ X(i,j,k) , (6.3.7)

where
κ(i,j,k) (x) , {u ∈ U | Ax + Bu ∈ Y(i,k) }, ∀x ∈ X(i,j,k) , (6.3.8)

with X(i,j,k) defined in (6.3.6). If we let for every x ∈ Xi :

Ni (x) , {(j, k) ∈ Nr × Nqi | x ∈ X(i,j,k) }, (6.3.9)

it follows that:
κi (x) = ∪ (j,k)∈Ni (x) κ(i,j,k) (x), ∀x ∈ Xi .    (6.3.10)

Remark 6.3 (Comments on the computation of X(i,j,k) ) It is necessary to consider those


integer couples (j, k) ∈ Nr × Nqi for which X(i,j,k) ≠ ∅.
The set X(i,j,k) , {x ∈ XZj | ∃u ∈ U s.t. Ax + Bu ∈ Y(i,k) } is easily computed by the
standard computational software, since:

X(i,j,k) = ProjX Z(i,j,k) , Z(i,j,k) , {(x, u) ∈ XZj × U | Ax + Bu ∈ Y(i,k) } (6.3.11)

Further simplification is obtained in the case when the system transition matrix A is
invertible, in which case:
X(i,j,k) = (A−1 Y(i,k) ⊕ (−A−1 BU)) ∩ XZj    (6.3.12)
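When the pieces XZj and Y(i,k) are stored in halfspace form, membership of a given state in X(i,j,k) , and hence in Xi by (6.3.6), reduces to feasibility LPs in the input, as sketched below (illustrative only; invertibility of A is not needed for this point-wise test, and the function names are placeholders).

import numpy as np
from scipy.optimize import linprog

def in_X_ijk(x, A, B, XZj, U_set, Yik):
    """Test x in X_(i,j,k) = {x in XZ_j : exists u in U with Ax + Bu in Y_(i,k)}.

    Each set is a halfspace pair (H, h) meaning {v : Hv <= h}."""
    H_x, h_x = XZj
    if np.any(H_x @ x > h_x):                    # x must lie in the obstacle-free piece
        return False
    H_u, h_u = U_set
    H_y, h_y = Yik
    m = B.shape[1]
    A_ub = np.vstack([H_y @ B, H_u])             # constraints on u only
    b_ub = np.concatenate([h_y - H_y @ (A @ x), h_u])
    res = linprog(c=np.zeros(m), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * m, method="highs")
    return bool(res.success)

def in_Xi(x, A, B, XZ_pieces, U_set, Y_pieces):
    """Test x in X_i by scanning the pieces of the unions in (6.3.6) and (6.3.9)."""
    return any(in_X_ijk(x, A, B, XZj, U_set, Yik)
               for XZj in XZ_pieces for Yik in Y_pieces)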

Construction of the set T satisfying Assumption 6.2(i) can be obtained by exploiting


results presented in Chapters 2 – 5. In principle, one can construct a set T with the
methods of Chapter 3 and then check if T is RCI for system (6.3.1) and constraint set
(XZ , U, W). However, the constraint set XZ is generally non–convex complicating the
problem of the computation of a RCI set for system x+ = Ax + Bu + w and constraint
set (XZ , U, W). This issue is under current investigation.

Remark 6.4 (RCI property of set sequence {Xi } and Robust Finite Time Attractivity
of X0 = T) If Assumption 6.2 (with f (x, u, w) = Ax + Bu + w) holds, the results
of Proposition 6.1 and Theorem 6.1 are directly applicable to this relevant case. Finally,
we have from the discussion above that if the target set T is a RCI polygon, the set
sequence {Xi } is also a RCI sequence of polygons.

6.4 Robust Time Optimal Obstacle Avoidance Problem –


Piecewise Affine Systems
In this section we treat another important class of discrete time systems, a relevant case
when the system defined in (6.1.1) is piecewise affine:

x+ = f (x, u, w) = fl (x, u, w), ∀(x, u) ∈ Pl ,


fl (x, u, w) , Al x + Bl u + cl + w, ∀l ∈ N+t    (6.4.1)

The function f (·) is assumed to be continuous and the polytopes Pl , l ∈ N+t , have disjoint interiors and cover the region Y , X × U of the state/control space of interest, so that ∪ k∈N+t Pk = Y ⊆ Rn+m and interior(Pk ) ∩ interior(Pj ) = ∅ for all k ≠ j, k, j ∈ N+t . The set of sets {Pk | k ∈ N+t } is a polytopic partition of Y.
Our assumptions on the constraint sets are the same as for the linear case. Thus, the sets X and U are polyhedral and polytopic, respectively, and the disturbance set
W is polytopic; each of the sets contains the origin as an interior point. The set Z is an
open polygon.
In this case, the standard recursion for the computation of the robustly controllable
sets {Xi } and the associated robust time-optimal control laws κi : Xi → 2U (6.2.6)
and (6.2.7) is:

Xi , {x ∈ XZ | ∃u ∈ U s.t. f (x, u, W) ⊆ Xi−1 } (6.4.2)


κi (x) , {u ∈ U | f (x, u, W) ⊆ Xi−1 }, ∀x ∈ Xi (6.4.3)

for each i ∈ NNmax with the boundary condition X0 = T.


Our next step is to provide a detailed characterization of the sets {Xi } under the assumption that the set X0 = T is a polygon:

Xi , {x ∈ XZ | ∃u ∈ U s.t. f (x, u, W) ⊆ Xi−1 } (6.4.4)


= {x ∈ XZ | ∃u ∈ U s.t. f (x, u, 0) ∈ Xi−1 ⊖ W} (6.4.5)

In going from (6.4.4) to (6.4.5) we have used the fact that f (x, u, w) = f (x, u, 0) + w for
the system defined in (6.4.1). We proceed by exploiting the definition of f (·):
Xi = ∪ l∈N+t {x ∈ XZ | ∃u ∈ U s.t. (x, u) ∈ Pl , fl (x, u, 0) ∈ Xi−1 ⊖ W}
   = ∪ l∈N+t ∪ j∈Nr {x ∈ XZj | ∃u ∈ U s.t. (x, u) ∈ Pl , Al x + Bl u + cl ∈ Xi−1 ⊖ W}
   = ∪ (j,l)∈Nr ×N+t {x ∈ XZj | ∃u ∈ U s.t. (x, u) ∈ Pl , Al x + Bl u + cl ∈ Xi−1 ⊖ W}    (6.4.6)
It follows from (6.4.6) and by recalling (6.3.5) (Yi , Xi−1 ⊖ W = ∪ k∈Nqi Y(i,k) where qi is a finite integer) that:

Xi = ∪ (j,l)∈Nr ×N+t {x ∈ XZj | ∃u ∈ U s.t. (x, u) ∈ Pl , Al x + Bl u + cl ∈ Yi }
   = ∪ (j,l)∈Nr ×N+t ∪ k∈Nqi {x ∈ XZj | ∃u ∈ U s.t. (x, u) ∈ Pl , Al x + Bl u + cl ∈ Y(i,k) }
   = ∪ (j,l,k)∈Nr ×N+t ×Nqi {x ∈ XZj | ∃u ∈ U s.t. (x, u) ∈ Pl , Al x + Bl u + cl ∈ Y(i,k) }
   = ∪ (j,l,k)∈Nr ×N+t ×Nqi X(i,j,l,k) ,

X(i,j,l,k) , {x ∈ XZj | ∃u ∈ U s.t. (x, u) ∈ Pl , Al x + Bl u + cl ∈ Y(i,k) }    (6.4.7)

A similar argument shows that for all (i, j, l, k) ∈ NNmax × Nr × N+t × Nqi :

κ(i,j,l,k) (x) ⊆ κi (x), ∀x ∈ X(i,j,l,k) , (6.4.8)

where

κ(i,j,l,k) (x) , {u ∈ U | (x, u) ∈ Pl , Al x + Bl u + cl ∈ Y(i,k) }, ∀x ∈ X(i,j,l,k) , (6.4.9)

with X(i,j,l,k) defined in (6.4.7). For every x ∈ Xi let:

Ni (x) , {(j, l, k) ∈ Nr × N+t × Nqi | x ∈ X(i,j,l,k) },    (6.4.10)

so that:
κi (x) = ∪ (j,l,k)∈Ni (x) κ(i,j,l,k) (x), ∀x ∈ Xi .    (6.4.11)

Remark 6.5 (Comments on the computation of X(i,j,l,k) ) As already observed for the
linear case, it is necessary to consider those integer triplets (j, l, k) ∈ Nr × N+t × Nqi for which X(i,j,l,k) ≠ ∅.
The set X(i,j,l,k) , {x ∈ XZj | ∃u ∈ U s.t. (x, u) ∈ Pl, Al x + Bl u + cl ∈ Y(i,k)} is easily computed by standard computational software since, as observed in the linear case:

X(i,j,l,k) = ProjX Z(i,j,l,k),

Z(i,j,l,k) , {(x, u) ∈ (XZj × U) ∩ Pl | Al x + Bl u + cl ∈ Y(i,k)}   (6.4.12)
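As an aside on implementation: when Z(i,j,l,k) is bounded and stored in vertex representation, the projection in (6.4.12) reduces to projecting the vertices and taking their convex hull, since the linear image of a polytope is the convex hull of the images of its vertices. The sketch below is only an illustration of this single step with generic scientific Python tools, not the software used for the results in this thesis; the vertex array and the state dimension n are assumed to be given.

import numpy as np
from scipy.spatial import ConvexHull

def project_polytope_vertices(Z_vertices, n):
    """Project a polytope, given by its vertices in (x, u)-space, onto the
    first n coordinates (the state space); returns the projected vertices."""
    projected = np.asarray(Z_vertices, dtype=float)[:, :n]
    # ConvexHull removes the points that become redundant after projection.
    hull = ConvexHull(projected)
    return projected[hull.vertices]

# Hypothetical usage: a box in R^2 x R (state x in R^2, scalar input u).
Z_vertices = [(x1, x2, u) for x1 in (-1.0, 1.0)
                          for x2 in (-1.0, 1.0)
                          for u in (-0.5, 0.5)]
print(project_polytope_vertices(Z_vertices, n=2))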

Remark 6.6 (RCI property of the set sequence {Xi} and Robust Finite Time Attractivity of X0 = T – Piecewise Affine Systems) If Assumption 6.2 (with f(·) defined in (6.4.1)) holds, the results of Proposition 6.1 and Theorem 6.1 are directly applicable to this relevant case. A final and relevant conclusion, for the case when the considered system is piecewise affine, is that if the target set T is a RCI polygon, the set sequence {Xi} is also a RCI sequence of polygons.

6.5 An Appropriate Selection of the Feedback Control Laws κ(i,j,k)(·) and κ(i,j,l,k)(·)
As already observed in Section 5.3 of Chapter 5, we remark that for any i ∈ NNmax an appropriate selection of the control laws κ(i,j,k)(·) and κ(i,j,l,k)(·) can be obtained by employing parametric mathematical programming, as we briefly demonstrate next. For each i ≥ 1, i ∈ NNmax, let V^l_i(x, u) and V^(p,l)_i(x, u) be any linear or quadratic (strictly convex) function of (x, u), for instance:

V^l_i(x, u) , |Ax + Bu|²_Q   (6.5.1)

V^(p,l)_i(x, u) , |Al x + Bl u + cl|²_Q   (6.5.2)

These functions are defined for the linear and the piecewise affine case, respectively.
Consider the linear case and an appropriate way of selecting the feedback control law κ(i,j,k)(·). Since Z(i,j,k) defined in (6.3.11) is a polyhedral set and since V^l_i(x, u) is a linear or a quadratic (strictly convex) function, it follows that for each i ≥ 1, i ∈ NNmax, the optimization problem P^l_i(x):

θ⁰_(i,j,k)(x) , arg min_u {V^l_i(x, u) | (x, u) ∈ Z(i,j,k)}   (6.5.3)

is a parametric linear/quadratic problem. As is well known [BMDP02, DG00, MR03b, BBM03a], the solution takes the form of a piecewise affine function of the state x ∈ X(i,j,k) = ProjX Z(i,j,k):

θ⁰_(i,j,k)(x) = S(i,j,k,h) x + s(i,j,k,h),   x ∈ R(i,j,k,h), h ∈ Nli   (6.5.4)

where li is a finite integer and the polyhedral sets R(i,j,k,h), h ∈ Nli, partition the set X(i,j,k), i.e. X(i,j,k) = ∪_{h∈Nli} R(i,j,k,h).
If we let:

(i, j, k)⁰(x) , arg min_(i,j,k) {i | x ∈ X(i,j,k), (i, j, k) ∈ NNmax × Nr × Nqi}   (6.5.5)

it follows that

θ⁰_(i⁰(x),j⁰(x),k⁰(x))(x) ∈ κ(i,j,k)(x) ⊆ κi(x), ∀x ∈ X(i,j,k)   (6.5.6)

for all i ≥ 1 and all triples (i, j, k) ∈ NNmax × Nr × Nqi.


The corresponding optimization problem for the piecewise affine case is:

θ⁰_(i,j,l,k)(x) , arg min_u {V^(p,l)_i(x, u) | (x, u) ∈ Z(i,j,l,k)}   (6.5.7)

where the set Z(i,j,l,k) defined in (6.4.12) is a polyhedral set.
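For completeness, the following sketch shows how a control in κ(i,j,l,k)(x) can be selected pointwise, on-line, instead of pre-computing the explicit piecewise affine solution. It assumes (as an illustration only) that Z(i,j,l,k) is available in the halfspace form {(x, u) | F x + G u ≤ g} and replaces the quadratic cost by the piecewise linear cost |Al x + Bl u + cl|∞, so that a single linear program suffices; the names F, G, g, Al, Bl, cl are placeholders for data produced by the set computations above.

import numpy as np
from scipy.optimize import linprog

def select_control(x, F, G, g, Al, Bl, cl):
    """Pick some u in kappa_(i,j,l,k)(x) by minimising |Al x + Bl u + cl|_inf
    over {u | F x + G u <= g}; returns None if x is not in X_(i,j,l,k)."""
    m = G.shape[1]
    r = Al @ x + cl
    # Decision variables (u, t); t bounds the infinity norm from above.
    c = np.concatenate([np.zeros(m), [1.0]])
    ones = np.ones((Bl.shape[0], 1))
    A_ub = np.vstack([
        np.hstack([Bl, -ones]),                      #  Bl u - t <= -r
        np.hstack([-Bl, -ones]),                     # -Bl u - t <=  r
        np.hstack([G, np.zeros((G.shape[0], 1))]),   #  G u      <=  g - F x
    ])
    b_ub = np.concatenate([-r, r, g - F @ x])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (m + 1), method="highs")
    return res.x[:m] if res.success else None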

Remark 6.7 (Disturbance Free Case & Algorithmic Implementation) The results reported in this chapter are directly applicable to the case when W = {0}. We also remark that the proposed set recursions admit minor and obvious modifications when implemented algorithmically.

6.5.1 Numerical Example

Our illustrative example is the second order unstable system:

x+ = [1 0; 1 1] x + [1; 1] u + w   (6.5.8)

where

w ∈ W , {w ∈ R2 | |w|∞ ≤ 1}.

The following set of ‘standard’ state and control constraints is required to be satisfied:

X = {x | |x|∞ ≤ 15},   U = {u | |u| ≤ 4}   (6.5.9)

Figure 6.5.1: Obstacles, State Constraints and Target Set – the obstacles Z1–Z11, the state constraint set X and the target set T.

The obstacle configuration, state constraints and target set are shown in Figure 6.5.1. The target set is robust control invariant and is computed by the method of Chapter 3. The set X0 is shown together with the RCI set sequence {Xi}, i ∈ N3, computed by (6.3.2), in Figure 6.5.2. In Figure 6.5.3 we show the sets {Xi}, i ∈ N3, for the case when W = {0}.

Figure 6.5.2: RCI Set Sequence {Xi}, i ∈ N3 – the target set X0 and the sets X1, X2, X3 within the state constraint set X.

Figure 6.5.3: CI Set Sequence {Xi}, i ∈ N3 – the sets X0–X3 and the state constraint set X for the disturbance-free case W = {0}.

6.6 Summary

Our results provide an exact solution of the robust obstacle avoidance problem for uncertain discrete time systems. A complete characterization of the solution is given for linear and piecewise affine discrete time systems. The basic set structure employed is a polygon. The complexity of the solution may be considerable, but the main advantage is that the exact solution is provided and the resultant computations can be performed using polyhedral algebra. The proposed algorithms can be implemented by using standard computational geometry software [Ver03, KGBM03].
The results can be extended to address the optimal control obstacle avoidance problem with a linear performance index. It is also possible to address the case when the obstacles are given as time-varying sets. These relevant extensions will be presented elsewhere.
In conclusion, the robust time optimal obstacle avoidance problem is addressed and a set of computational procedures is derived for the relevant cases when the system being controlled is linear or piecewise affine. The method was illustrated by a numerical example.

Chapter 7

Reachability analysis for constrained discrete time systems with state- and input-dependent disturbances

Once upon a time, when I had begun to think about the things that are, and my thoughts
had soared high aloft, while my bodily senses had been put under restraint by sleep – yet
not such sleep as that of men weighed down by fullness of food or by bodily weariness –
I thought there came to me a being of vast and boundless magnitude, who called me by
name, and said to me, ‘What do you wish to hear and see, and to learn and to come
to know by thought?’ ‘Who are you?’ I said. ‘I’ said he, ‘am Poimandres, the Mind
of Sovereignty.’ ‘I would fain learn,’ said I, ‘the things that are, and understand their
nature and get knowledge of God.’

– Hermes Trismegistus

This chapter presents new results that allow one to compute the set of states which can
be robustly steered in a finite number of steps, via state feedback control, to a given target
set. The assumptions that are made in this chapter are that the system is discrete-time,
nonlinear and time-invariant and subject to mixed constraints on the state and input.
A persistent disturbance, dependent on the current state and input, acts on the system.
Existing results are not able to address state- and input-dependent disturbances and the
results in this chapter are therefore a generalization of previously-published results. The
application of the results to the computation of the maximal robust control invariant set
is also briefly discussed. Specific results, which allow one to perform the set computations
using polyhedral algebra, linear programming and computational geometry software, are
presented for linear and piecewise affine systems with additive state disturbances. Some simple examples are given which show that, even if all the relevant sets are convex and
the system is linear, convexity of the robustly controllable sets cannot be guaranteed.

7.1 Introduction
The problems of controllability to a target set and computation of robust control invariant
sets for systems subject to constraints and persistent, unmeasured disturbances have been
the subject of study for many authors [Ber72, BR71b, Bla99, De 98, KG87, KLM02,
KM02a, May01, VSS+ 01]. Though many papers have fairly general results that can be
applied to a large class of nonlinear discrete-time systems, most authors assume that the
disturbance is not dependent on the state and input. The only paper which appears to
address state-dependent disturbances directly is [De 98]. In [KLM02] a general framework
is introduced for systems with mixed state and input constraints subject to state- and
input-dependent disturbances, but the only specific results, which allow one to compute
the set of states from which the system can be controlled to a target set, are given for
disturbances which are independent of the state and input. This chapter therefore extends
the results of [De 98, KLM02, KM02a] to the case where the disturbance is dependent
on the state and input. Furthermore, results are given for linear and piecewise affine
systems which allow the use of polyhedral algebra, linear programming and computational
geometry software to perform the set computations.
The need for a framework which can deal with state- and input-dependent distur-
bances was briefly motivated in [KLM02]. Disturbances that are dependent on the state
and/or input frequently arise in practice when trying to model systems with physical
constraints. For example, consider the nonlinear (piecewise affine) system

x+ = Ax + Bsatu (u + Eu w) + Ex w (7.1.1)

which is subject to a bounded disturbance w ∈ W. The function satu (·) models physical
saturation limits on the input. Assuming that these saturation limits are symmetric and
have unit magnitude, an equivalent way of modelling (7.1.1) is to treat it as linear system
with input-dependent disturbances, i.e. letting

x+ = Ax + Bu + BEu w + Ex w, (7.1.2)

where the control is constrained to

U , {u | |u|∞ ≤ 1 } (7.1.3)

and the input-dependent disturbance w ∈ W(u) satisfies

W(u) , {w | |u + Eu w|∞ ≤ 1 and w ∈ W } . (7.1.4)

Another common reason why state- and input-dependent disturbances arise in prac-
tice is when it is known that the uncertainty of a model is greater in certain regions
of the state-input space than in other regions. For example, when a nonlinear model is linearized, the uncertainty gets larger the further one gets from the point of linearization.
This uncertainty can be modelled as a state- and input-dependent disturbance, where
the size of the disturbance decreases the closer one gets to the point of linearization.
A state- and input-dependent disturbance model will therefore allow one to obtain less
conservative results than if one were to assume that the disturbance is independent of
the state and input.
Another example when one can model uncertainty as a state- and input-dependent
disturbance is when there is parametric uncertainty present in the model. For example, if
there is uncertainty in the pair (A, B) in (7.1.2), then one can think of the uncertainty as
an additional state- and input-dependent disturbance. The reader is referred to [Bla94] to
see how reachability computations can be carried out for this specific class of uncertainty
when the system is linear. The results in this chapter can, with some effort, be used
to extend the results in [Bla94] to the class of piecewise affine systems with parametric
uncertainty.

7.2 The One-step Robust Controllable set


Section 7.2.1 gives the main results of the chapter which are then specialized in Sec-
tion 7.2.2 for the case when the disturbance is dependent only on the state or input
or when the system does not have a control input. Section 7.2.3 shows that the set of
states robustly controllable to the target set is a polygon if the system is linear/affine or
piecewise affine, the target set is a polygon and all relevant constraint sets are polygons.

7.2.1 General Case

Consider the time-invariant discrete-time system

x+ = f (x, u, w), (7.2.1)

where x is the current state (assumed to be measured), x+ is the successor state, u is the
input, and w is an unmeasured, persistent disturbance that is dependent on the current
state and input:
w ∈ W(x, u) ⊂ W, (7.2.2)

where W = Rp denotes the disturbance space. The state and input are required to satisfy
the constraints
(x, u) ∈ Y ⊂ X × U, (7.2.3)

where X = Rn is the state space and U = Rm is the input space. The constraint
(x, u) ∈ Y defines the state-dependent set of admissible inputs

U(x) , {u | (x, u) ∈ Y } (7.2.4)

as well as the set of admissible states

X , {x | ∃u such that (x, u) ∈ Y} = {x | U(x) ≠ ∅}.   (7.2.5)

In order to have a well-defined problem, we assume the following:
Assumption 7.1 W(x, u) ≠ ∅ for all (x, u) ∈ Y and W(·) is bounded on bounded sets.
Given a set Ω ⊆ X, this section shows how the one-step robust controllable set Pre(Ω) – the set of states for which there exists an admissible input such that, for all allowable disturbances, the successor state is in Ω – may be computed. The set Pre(Ω) is defined by

Pre(Ω) , {x | ∃u ∈ U(x) such that f (x, u, w) ∈ Ω for all w ∈ W(x, u) } . (7.2.6)

Remark 7.1 (Constraints Structure) If the constraints on the state and input are inde-
pendent, i.e. Y = X × U, then

Pre(Ω) = {x ∈ X | ∃u ∈ U such that f (x, u, W(x, u)) ⊆ Ω } . (7.2.7)

We now present the main result of this chapter:


Theorem 7.1. (Characterization & Computation of Pre(Ω)) Let

Σ , {(x, u) ∈ Y | f (x, u, w) ∈ Ω for all w ∈ W(x, u) } (7.2.8)

and
Π , {(x, u, w) | (x, u) ∈ Y and w ∈ W(x, u) } . (7.2.9)

If
Φ , f −1 (Ω) , {(x, u, w) | f (x, u, w) ∈ Ω } , (7.2.10)

then the set of states that are robust controllable to Ω is given by

Pre(Ω) = ProjX (Σ) , (7.2.11)

where
Σ = ProjX×U (Π) \ ProjX×U (Π \ Φ) . (7.2.12)

Proof: A graphical interpretation of the proof is given in Figure 7.2.1.


From the definition of the set difference,

Π \ Φ = {(x, u, w) | (x, u) ∈ Y, w ∈ W(x, u) and f(x, u, w) ∉ Ω}   (7.2.13)

so that

ProjX×U(Π \ Φ) = {(x, u) ∈ Y | ∃w ∈ W(x, u) such that f(x, u, w) ∉ Ω}   (7.2.14)

and

Y \ ProjX×U (Π \ Φ) = {(x, u) ∈ Y | f (x, u, w) ∈ Ω for all w ∈ W(x, u) } . (7.2.15)

It follows from Assumption 7.1 and (7.2.9) that

ProjX×U (Π) = Y. (7.2.16)

Figure 7.2.1: Graphical illustration of Theorem 7.1 – the sets ∆ , Π \ Φ = ∪i ∆i, Ψ = ∪i Ψi with Ψi = ProjX×U ∆i, Θ = ProjX×U Π = Y and Σ = Θ \ Ψ, drawn in (x, u, w)-space and projected onto (x, u)-space.

Hence

ProjX×U (Π) \ ProjX×U (Π \ Φ) = Y \ ProjX×U (Π \ Φ) (7.2.17a)


= {(x, u) ∈ Y | f (x, u, w) ∈ Ω for all w ∈ W(x, u) }
(7.2.17b)
=Σ (7.2.17c)

so that (7.2.12) is true.


The proof is completed by noting that

ProjX (Σ) = {x | ∃u such that (x, u) ∈ Y and f (x, u, w) ∈ Ω for all w ∈ W(x, u) }
(7.2.18a)
= {x | ∃u ∈ U(x) such that f (x, u, w) ∈ Ω for all w ∈ W(x, u) } (7.2.18b)
= Pre(Ω). (7.2.18c)

QeD.

Remark 7.2 (The set Σ) Note that the set Σ defined in (7.2.8) is equal to ProjX×U (Π)\
ProjX×U (Π \ Φ), as stated in (7.2.12).
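Before specialising to particular system classes, it is sometimes useful to sanity-check a computed Pre(Ω) against a naive sampled test. The sketch below is such a brute-force membership test: it grids the candidate inputs and samples the disturbance set, so it is only an approximation of the exact projection-based construction of Theorem 7.1, and all of the callables it takes are assumed to be supplied by the user.

import numpy as np

def pre_membership(x, f, in_Y, in_Omega, W_samples, U_candidates):
    """Approximate test of whether x belongs to Pre(Omega).

    f            : f(x, u, w) -> successor state
    in_Y         : predicate, True if (x, u) satisfies the mixed constraint
    in_Omega     : predicate on the successor state
    W_samples    : W_samples(x, u) -> finite collection of points in W(x, u)
    U_candidates : finite grid of candidate inputs

    True is returned if some admissible candidate input keeps every sampled
    successor inside Omega; because w is only sampled, this is a heuristic
    check, not the exact computation of Theorem 7.1."""
    for u in U_candidates:
        if in_Y(x, u) and all(in_Omega(f(x, u, w)) for w in W_samples(x, u)):
            return True
    return False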

A relevant result establishing when the set Pre(Ω) is closed is given next:
Theorem 7.2. (Closedness of Pre(Ω)) Suppose f : Rn × Rm × Rp → Rn is continuous, W : Rr → 2^(Rp), r , n + m, is continuous and bounded on bounded sets. If Ω is closed, then Pre(Ω) is closed.

Proof: Let the set-valued map F : Rr → 2^(Rn) be defined as follows:

F (z) , {f (z, w) | w ∈ W(z)}, z , (x, u). (7.2.19)

By Proposition 7.1 in the Appendix to this chapter, the set-valued function F is continuous.


The set Σ, defined in (7.2.12), is given by

Σ , {z | F (z) ⊆ Ω} = F † (Ω). (7.2.20)

Since F is continuous and Ω is closed, it follows from Proposition 7.2 in the Appendix to this chapter that Σ is closed. Since Pre(Ω) = ProjX Σ, it follows that Pre(Ω) is closed.

QeD.

7.2.2 Special Cases

Consider first the simpler case when the disturbance constraint set is a function of x
only, i.e. the disturbance w satisfies w ∈ W(x). The definitions of Σ and Π in (7.2.8) and
(7.2.9), respectively, and Pre(Ω) become

Σ , {(x, u) ∈ Y | f (x, u, w) ∈ Ω for all w ∈ W(x)}, (7.2.21)

Π , {(x, u, w) | (x, u) ∈ Y and w ∈ W(x) } (7.2.22)

and

Pre(Ω) , {x | ∃u ∈ U(x) such that f (x, u, w) ∈ Ω for all w ∈ W(x) } . (7.2.23)

Theorem 7.1 remains true with these changes. A similar modification is needed if the
disturbance constraint set is a function of u only, i.e. the disturbance w satisfies w ∈
W(u). For the case when the disturbance is independent of the state and input, see for
instance [Ker00, KLM02, KM02a].

Remark 7.3 (State- and input-independent disturbance) If the disturbance is independent


of the state and input, Theorem 7.1 provides a method for computing the one-step robust
controllable set and is an alternative to the method in [KLM02, KM02a, Ker00], where
it is proposed to compute the so-called Pontryagin difference. Obviously, both methods
will result in the same set. The difference between the two methods is that Theorem 7.1
relies on projection whereas the method in [KLM02, KM02a, Ker00] does not. It is not
easy to determine a priori which method would be more efficient. The computational requirements depend very much on the specifics of the problem and the computational
tools that are available.
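For the reader's convenience, a minimal sketch of the erosion (Pontryagin difference) step mentioned in the remark is given below for the polyhedral case: if Ω = {x | Hx ≤ k} and W is a polytope given by its vertices, then Ω ⊖ W = {x | Hx ≤ k − h} with h_i = max_{w∈W} H_i w. The data layout below is an illustrative assumption, not the implementation of the cited references.

import numpy as np

def pontryagin_difference(H, k, W_vertices):
    """Erode the polyhedron {x | H x <= k} by the polytope conv(W_vertices):
    returns (H, k_new) with k_new[i] = k[i] - max_{w in W} H[i] @ w."""
    W_vertices = np.asarray(W_vertices, dtype=float)
    # The support function of a polytope is attained at a vertex, so a
    # row-wise maximum over the vertex list gives max_{w in W} H[i] @ w.
    support = (H @ W_vertices.T).max(axis=1)
    return H, k - support

# Hypothetical usage: erode the box |x|_inf <= 1 by the box |w|_inf <= 0.1.
H = np.vstack([np.eye(2), -np.eye(2)])
k = np.ones(4)
W_vertices = [(s1 * 0.1, s2 * 0.1) for s1 in (-1, 1) for s2 in (-1, 1)]
print(pontryagin_difference(H, k, W_vertices)[1])   # -> [0.9 0.9 0.9 0.9]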
Next, consider the case when f is a function of (x, w) only, i.e. the system has no input
u and x+ = f (x, w). In this case, the constraint (x, u) ∈ Y is replaced by x ∈ X ⊂ X
and Assumption 7.1 is replaced by:
Assumption 7.2 W(x) ≠ ∅ for all x ∈ X and W(·) is bounded on bounded sets.
Also, in this case the definitions of Σ, Π and Φ in Theorem 7.1, and Pre(Ω) are
replaced by
Σ , {x ∈ X | f (x, w) ∈ Ω for all w ∈ W(x) } , (7.2.24)

Π , {(x, w) | x ∈ X and w ∈ W(x) } , (7.2.25)

Φ , f −1 (Ω) , {(x, w) | f (x, w) ∈ Ω } , (7.2.26)

and
Pre(Ω) , {x ∈ X | f (x, w) ∈ Ω for all w ∈ W(x)}. (7.2.27)

In other words, Pre(Ω) is now the set of admissible states such that the successor state
lies in Ω for all w ∈ W(x). In this case, the conclusion of Theorem 7.1 becomes

Pre(Ω) = Σ = ProjX (Π) \ ProjX (Π \ Φ) . (7.2.28)

As can be seen, this special case results in less computational effort, since operations are
performed in lower-dimensional spaces and only two projection operations are needed.

7.2.3 Linear and Piecewise Affine f (·) with Additive State Disturbances

Consider the system defined in (7.2.1) with

f (x, u, w) , Aq x + Bq u + Eq w + cq if (x, u, w) ∈ Pq . (7.2.29)

The sets {Pq | q ∈ Q}, where Q has finite cardinality, are polyhedra and constitute a polyhedral partition of Π, i.e. Π , ∪q∈Q Pq and the sets Pq have non-intersecting interiors. For all q ∈ Q, the matrices Aq ∈ Rn×n, Bq ∈ Rn×m, Eq ∈ Rn×p and the vector cq ∈ Rn.

Theorem 7.3. (Special Case – Piecewise affine systems) If the system is given
by (7.2.29) and Π and Ω are the unions of finite sets of polyhedra, then the robust
controllable set Pre(Ω), as given in (7.2.6) and (7.2.11), is the union of a finite set of
polyhedra.

Proof: Let

Ω , ∪j∈J Ωj,   (7.2.30)

where {Ωj | j ∈ J} is a finite set of polyhedra. First, note that

Φ = ∪j∈J {(x, u, w) | f(x, u, w) ∈ Ωj}   (7.2.31a)
  = ∪(j,q)∈J×Q {(x, u, w) ∈ Pq | Aq x + Bq u + Eq w + cq ∈ Ωj}.   (7.2.31b)

Since {(x, u, w) ∈ Pq | Aq x + Bq u + Eq w + cq ∈ Ωj } is a polyhedron, it follows that


Φ is the union of a finite set of polyhedra (i.e. a polygon).
As shown in Proposition 1.6, the set difference between two polygons is a polygon.
The proof is completed by recalling that the projection of the union of a finite number
of sets is the union of the projections of the individual sets, hence the projection of a
polygon is a polygon.

QeD.

Remark 7.4 (Comment on the class of the systems) Clearly, Theorem 7.3 holds if the
system is linear or affine (i.e. Q has cardinality 1). It is interesting to observe that, even
if Ω and Π are both convex sets and f (·) is linear, there is no guarantee that Pre(Ω) is
convex. This is demonstrated in Section 7.4.1 via a numerical example.

Remark 7.5 (Necessary computational tools) See Proposition 1.6 for new results that
allow one to compute the set difference between two (possibly non-convex) polygons.
The projection of the set difference is then equal to the union of the projections of the
individual polyhedra that constitute the set difference. The projection of each individual
polyhedron can be computed, for example, via Fourier-Motzkin elimination [KG87] or via
enumeration and projection of its vertices, followed by a convex hull computation [Ver03];
see also [D’A97, DMD89] for alternative projection methods.

7.3 The i-step Set and Robust Control Invariant Sets


Consider the general case (Section 7.2.1). For any integer i, let Xi denote the i-step
(robust controllable) set to Ω, i.e. Xi is the set of states that can be steered, by a
time-varying state feedback control law, to the target set Ω in i steps, for all allowable
disturbance sequences while satisfying, at all times, the constraint (x, u) ∈ Y. As is well-
known [BR71b, KLM02, KM02a, May01, VSS+01], the sequence of sets {Xi}∞_i=0 may be calculated recursively as follows:

Xi+1 = Pre(Xi ), (7.3.1a)


X0 = Ω. (7.3.1b)
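In code this recursion is simply a fixed-point iteration around whatever Pre(·) routine is available. The outline below assumes user-supplied pre and sets_equal callables (both hypothetical placeholders for the polyhedral machinery discussed above) and stops early when the sequence of sets stops changing, which, by the results quoted below, certifies that the last set is robust control invariant.

def i_step_sets(pre, target, i_max, sets_equal):
    """Compute X_0 = target and X_{i+1} = Pre(X_i) for i = 0, ..., i_max - 1.

    pre        : one-step robust controllable set operator
    target     : representation of the target set Omega
    sets_equal : test for equality of two set representations

    Returns [X_0, X_1, ...]; iteration stops early once X_{i+1} = X_i."""
    sets = [target]
    for _ in range(i_max):
        nxt = pre(sets[-1])
        sets.append(nxt)
        if sets_equal(nxt, sets[-2]):
            break
    return sets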

Before giving the next result, recall that a set S is robust control invariant if and only
if for any x ∈ S, there exists a u ∈ U(x) such that f(x, u, w) ∈ S for all w ∈ W(x, u), i.e. S is robust control invariant if and only if S ⊆ Pre(S) [Bla99, Ker00]. Recall also
that the maximal robust control invariant set C∞ in X is equal to the union of all robust
control invariant sets contained in X .

Theorem 7.4. (Invariant Sets – Standard Results) Suppose Assumption 7.1 holds:

(i) If the system is piecewise affine (defined by (7.2.29)) and if the sets Ω and Π are
the unions of finite sets of polyhedra, then each i-step set Xi , i ∈ {0, 1, . . .}, is the
union of a finite set of polyhedra.

(ii) If Xj ⊆ Xj+1 for some j ∈ {0, 1, . . .}, then each set Xi , i ∈ {j, j + 1, . . .}, is robust
control invariant.

(iii) If the set Ω is robust control invariant, then each set Xi , i ∈ {0, 1, . . .}, is robust
control invariant.

(iv) If Ω , X and Xj = Xj+1 for some j ∈ {0, 1, . . .}, then each set Xi , i ∈ {j, j +1, . . .},
is equal to the maximal robust control invariant set C∞ contained in X .

Proof: The method of proof is standard and the reader is therefore referred to [Bla99,
Ker00].

QeD.

Remark 7.6 (Comment on the maximal robust control invariant set) Note that, if Ω ≠ X and Ω is robust control invariant, then the maximal robust controllable set X∞ to Ω (X∞ = ∪∞_i=0 Xi, where X0 = Ω) is, in general, not equal to the maximal robust control invariant set C∞ in X (C∞ = ∩∞_i=0 Xi, where X0 = X).

Remark 7.7 (Technical Issues regarding the maximal robust control invariant set) It is important to note that, without any additional assumptions on the system or sets, it is possible to find examples for which C∞ ≠ ∩_{i∈N} Xi when X0 = X [Ber72].

Remark 7.8 (Special Case – system has no input) As in Section 7.2.2, if the system has
no input u, i.e. if f is a function only of (x, w), then with the appropriate modifications
to definitions, Theorem 7.4 still holds, but with ‘robust control invariant’ replaced with
‘robust positively invariant’.

7.4 Numerical Examples


In order to illustrate our results we consider two simple examples. In the first, the system
is scalar and the disturbance state-dependent (w ∈ W(x)); in the second, the system is
second-order and the disturbance control-dependent (w ∈ W(u)).

Figure 7.4.1: Graph of W – the set ∆ in (x, w)-space, with the set W(xa) indicated for a particular state xa.

7.4.1 Scalar System with State-dependent Disturbances

We consider the following scalar system:

x+ = x + u + w (7.4.1)

which is subject to the constraints

(x, u) ∈ X × U, X , {x| − 5 ≤ x ≤ 20} and U , {u| − 2 ≤ u ≤ 2}. (7.4.2)

The state-dependent disturbance satisfies:

w ∈ W(x) ⇔ (x, w) ∈ ∆ , ∆1 ∪ ∆2 , (7.4.3)

where ∆1 = convh {(0, 0.25), (0, −0.25), (2, 1.25), (2, −1.25), (20, 2.25), (20, −2.25)}
and
∆2 = convh {(0, 0.25), (0, −0.25), (−2, 1.25), (−2, −1.25), (−20, 2.25), (−20, −2.25)}.
The set ∆ is shown in Figure 7.4.1. The robust control invariant target set is X0 = Ω =
{x| − 0.6 ≤ x ≤ 0.6}.
The sequence of i-step sets is computed by using the results of Theorem 7.1 and some
of the sets are: X1 = {x| − 0.7 ≤ x ≤ 0.7}, X2 = {x| − 0.9 ≤ x ≤ 0.9}, X3 = {x| − 1.3 ≤
x ≤ 1.3}, X4 = {x| − 2.0468 ≤ x ≤ 2.0468}, . . . , X8 = {x| − 4.5793 ≤ x ≤ 4.5793},
X9 = {x| − 5 ≤ x ≤ 5.1131}, X10 = {x| − 5 ≤ x ≤ 5.6123}, . . . , X49 = {x| − 5 ≤ x ≤
12.2759}, X50 = {x| − 5 ≤ x ≤ 12.3099}. The set X∞ of all states that can be steered to
the target set, while satisfying state and control constraints, for all allowable disturbance
sequences, is: X∞ = {x| − 5 ≤ x ≤ 12.7999}. The sets Σi for i = 1, 2, 3, 4 are also shown
in Figure 7.4.2.

Figure 7.4.2: Sets Σi for i = 1, 2, 3, 4, shown in (x, u)-space.

In order to illustrate the fact that the i-step sets can be non-convex even if X, U, Ω
and the graph of W(x) are convex, consider the same example. This time the state-
dependent disturbance satisfies:

w ∈ W(x) ⇔ (x, w) ∈ ∆ , convh{(−5, 0), (0, −3), (5, 0), (0, 3)}. (7.4.4)

If the target set is X0 = Ω = {x | −2.5 ≤ x ≤ 2.5}, the one-step set is X1 = {x | −3.75 ≤


x ≤ −0.8333} ∪ {x | 0.8333 ≤ x ≤ 3.75}. The sets ∆ and Σ are shown in Figure 7.4.3.
Even if Ω is a robust control invariant set, the convexity of each i-step set
still cannot be guaranteed. This is easily illustrated by considering the same ex-
ample with X = {x | −5 ≤ x ≤ 4}, w ∈ W(x) ⇔ (x, w) ∈ ∆ ,
convh{(−5, 0.5), (−5, −0.5), (3, −2.1), (4, 0), (3, 2.1)} and the robust control invariant tar-
get set X0 = Ω = {x | −2.5 ≤ x ≤ 2.5}. In this case, the one-step robust control invariant
set is X1 = {x | −3.75 ≤ x ≤ 2.5} ∪ {x | 3.5455 ≤ x ≤ 4}. The sets ∆ and Σ are shown
in Figure 7.4.4.

7.4.2 Second-order LTI Example with Control-dependent Disturbances

The discrete-time linear time-invariant system

x+ = [0.7969 −0.2247; 0.1798 0.9767] x + [0.1271; 0.0132] u + w   (7.4.5)
is subject to the state and control constraints

(x, u) ∈ X × U, X , {x | |x|∞ ≤ 10, [−1 1]x ≤ 12 } , U , {u | −3 ≤ u ≤ 3 } . (7.4.6)

The control-dependent disturbance satisfies:

w ∈ W(u) ⇔ (u, w) ∈ ∆ , ∆1 ∪ ∆2 , (7.4.7)

Figure 7.4.3: Graph of W (top, in (x, w)-space) and the set Σ (bottom, in (x, u)-space)

Figure 7.4.4: Graph of W (top, in (x, w)-space) and the set Σ (bottom, in (x, u)-space)

Figure 7.4.5: Graph of W – projections of the set ∆ onto the 1–2, 1–3 and 2–3 coordinate axes.

where ∆1 and ∆2 are given by:

∆1 = { (u, w) | [−0.008 0 −1; −1 0 0; −0.008 1 0; −0.008 −1 0; −0.008 0 1] [u; w] ≤ [0.01; 0; 0.01; 0.01; 0.01] },   (7.4.8)

and

∆2 = { (u, w) | [0.008 0 1; 1 0 0; 0.008 −1 0; 0.008 1 0; 0.008 0 −1] [u; w] ≤ [0.01; 0; 0.01; 0.01; 0.01] }.   (7.4.9)

(Equivalently, ∆1 = {(u, w) | u ≥ 0, |w|∞ ≤ 0.01 + 0.008u} and ∆2 = {(u, w) | u ≤ 0, |w|∞ ≤ 0.01 − 0.008u}.)
The robust control invariant target set is X0 = convh{(−0.2035, 0.0482),
(0.2035, −0.0482), (−0.2035, −0.0148), (−0.1405, 0.0482), (0.2035, 0.0148), (0.1405, −0.0482)}.
The projections of the set ∆ onto two-dimensional subspaces are shown in Figure 7.4.5.
Some of the i-step sets, computed using Theorem 7.1, are shown in Figure 7.4.6.

Figure 7.4.6: Sets Xi for i = 0, 1, . . . , 7, shown in the (x1, x2) state space.

7.5 Summary

The main result of this chapter (Theorem 7.1) showed how one can obtain Pre(Ω), the set of states that can be robustly steered to Ω, via the computation of a sequence of set differences and projections. It was then shown in Theorem 7.3 that if Ω and the relevant constraint sets are polygons (i.e. they are given by unions of finite sets of convex polyhedra) and the system is linear or piecewise affine, then Pre(Ω) is also a polygon and can be computed using standard computational geometry software. It was then shown
in Section 7.3 how Pre(·) can be used to recursively compute the i-step set, i.e. the set
of states which can be robustly steered to a given target set in i steps, as well as how
Pre(·) can be used to compute the maximal robust control invariant set. Finally, some
simple examples were given which show that, even if the system is linear, the respective
constraint sets are convex and the target set is robust control invariant, convexity of the
i-step sets cannot be guaranteed.

Appendix to Chapter 7 – Results on Set-valued Functions


The definitions of inner and outer semi-continuity employed below are due to Rockafellar
and Wets [RW98]; for Definitions 7.1–7.4 and Theorem 7.5, see [Pol97]; Professor Elijah L. Polak also provided the proof of Proposition 7.1 (private communication). In what follows, B(z, ρ) , {y | |y − z| ≤ ρ} and d(a, A) , inf_{b∈A} |a − b|.

Definition 7.1 (Outer semi-continuity of set valued maps) A set-valued map F : Rr → 2^(Rn) is outer semi-continuous (o.s.c.) at ẑ if F(ẑ) is closed and, for every compact set S such that F(ẑ) ∩ S = ∅, there exists a ρ > 0 such that F(z) ∩ S = ∅ for all z ∈ B(ẑ, ρ). A set-valued map F : Rr → 2^(Rn) is o.s.c. if it is o.s.c. at every z ∈ Rr.

Definition 7.2 (Inner semi-continuity of set valued maps) A set-valued map F : Rr → 2^(Rn) is inner semi-continuous (i.s.c.) at ẑ if F(ẑ) is closed and, for every open set S such that F(ẑ) ∩ S ≠ ∅, there exists a ρ > 0 such that F(z) ∩ S ≠ ∅ for all z ∈ B(ẑ, ρ). A set-valued map F : Rr → 2^(Rn) is i.s.c. if it is i.s.c. at every z ∈ Rr.

Definition 7.3 (Continuity of set valued maps) A set-valued map F : Rr → 2^(Rn) is continuous if it is both o.s.c. and i.s.c.

Definition 7.4 (Convergence of set sequences) A point â is a limit point of the infinite
sequence of sets {Ai } if d(â, Ai ) → 0. A point â is a cluster point if there exists a
subsequence I ⊂ N such that d(â, Ai ) → 0 as i → ∞, i ∈ I. The set lim sup Ai is the set
of cluster points of {Ai } and lim inf Ai is the set of limit points of {Ai }, i.e. lim sup Ai is
the set of cluster points of sequences {ai } such that ai ∈ Ai for all i ∈ N and lim inf Ai is
the set of limits of sequences {ai } such that ai ∈ Ai for all i ∈ N. The sets Ai converge
to the set A (Ai → A or lim Ai = A) if lim sup Ai = lim inf Ai = A.
The following result appears as Theorem 5.3.7 in [Pol97].
Theorem 7.5. (Theorem on the continuity of set valued maps) (i) A function F : Rr → 2^(Rn) is o.s.c. at ẑ if and only if for any sequence {zi} such that zi → ẑ, lim sup F(zi) ⊆ F(ẑ). Also, F is o.s.c. if and only if its graph G , {(z, y) | y ∈ F(z)} is closed.
(ii) A function F : Rr → 2^(Rn) is i.s.c. at ẑ if and only if for any sequence {zi} such that zi → ẑ, lim inf F(zi) ⊇ F(ẑ).
(iii) Suppose F : Rr → 2^(Rn) is such that F(z) is compact for all z ∈ Rr and bounded on bounded sets. Then F is o.s.c. at ẑ if and only if, for every open set S such that

Proposition 7.1 (Result on the continuity of set valued maps) Suppose that f : Rr × Rp → Rn is continuous and that W : Rr → 2^(Rp) is continuous and bounded on bounded sets. Then the set-valued function F : Rr → 2^(Rn) defined by F(z) , {f(z, w) | w ∈ W(z)} is continuous.

Proof: (i) (F is o.s.c.) Let {zi} be any infinite sequence such that zi → ẑ and let {fi} be any infinite sequence such that fi ∈ F(zi) for all i ∈ N and fi → f̂. Then, for all i, fi = f(zi, wi) with wi ∈ W(zi). Since {zi} lies in a compact set and W : Rr → 2^(Rp) is bounded on bounded sets, there exists a subsequence of {wi} such that wi → ŵ as i → ∞, i ∈ I ⊂ N. Since W is continuous, ŵ ∈ W(ẑ). Hence

f̂ = lim_{i∈I} f(zi, wi) = f(ẑ, ŵ) ∈ F(ẑ).

This implies that F is o.s.c.


(ii) (F is i.s.c.) Let {zi } be any infinite sequence such that zi → ẑ and let fˆ be an
arbitrary point in F (ẑ). Then fˆ = f (ẑ, ŵ) for some ŵ ∈ W(ẑ). Since W is continuous,
there exists an infinite sequence {wi } such that wi ∈ W(zi ) and wi → ŵ. Then fi ,
f (zi , wi ) ∈ F (zi ) for all i ∈ N and

lim fi = lim f (zi , wi ) = f (ẑ, ŵ) = fˆ ∈ F (ẑ)

This implies that F is i.s.c.

QeD.

Proposition 7.2 (Result on the closedness) Suppose F : Rr → 2^(Rn) is continuous and
that Ω ⊆ Rn is closed. Then the (outer) inverse set F † (Ω) , {z | F (z) ⊆ Ω} is closed.

Proof: Suppose {zi } is an arbitrary infinite sequence in F † (Ω) (F (zi ) ⊆ Ω for all i ∈ N)
such that zi → ẑ. Since F is continuous, lim F (zi ) = F (ẑ). Because Ω is closed, F (zi ) ⊆ Ω
for all i ∈ N implies F (ẑ) ⊆ Ω. Hence ẑ ∈ F † (Ω) so that F † (Ω) is closed.

QeD.

Chapter 8

State Estimation for piecewise affine discrete time systems subject to bounded disturbances

Let no man enter who knows no geometry.

– Plato

The problem of state estimation for piecewise affine, discrete time systems with
bounded disturbances is considered in this chapter. It is shown that the state lies in
a closed uncertainty set that is determined by the available observations and that evolves
in time. The uncertainty set is characterised and a recursive algorithm for its computation
is presented. Recursive algorithms are proposed for filtering, prediction and smoothing problems.
State estimation is usually addressed using min–max, set–membership or stochastic
approaches. In the stochastic approach [Jaz70, May79], the (approximate) a posteriori
distribution of the state is recursively computed and the conditional mean determined.
In the min-max approach, the worst case error is minimized to yield the state estimate
[NK91, Bas91].
The set membership approach deals with the case when the disturbances are unknown
but bounded. A sequence of compact sets that are consistent with observed measurements
as well as with the initial uncertainty is computed. This approach was first considered,
for linear time invariant discrete time systems, by Witsenhausen [Wit68] (related results
can be also found in for instance [Sch68, BR71a, Ber71, Sch73, Che88, Che94, Kur77]).
In these papers, state estimation is achieved by using either ellipsoidal or polyhedral sets.
An advantage of ellipsoidal sets is their simplicity, but a major disadvantage is the fact
that the sums and intersections of ellipsoids have to be approximated by an ellipsoid,
rendering the use of ellipsoidal sets in state estimation somewhat conservative. The key
advantage of polyhedral sets is that the accuracy of the estimated sets of possible states is improved; the price to be paid is their complexity. Complexity issues of polyhedral
sets in set membership state estimation have been tackled in [CGZ96] by the use of
minimum volume parallelotopes. The use of minimum volume zonotopes in a fairly
general setup has been recently proposed and analysed in [ABC03]. Further results
on approximation methods are given in [VM91], where the authors provide procedures for the computation of simple shaped sets, namely norm balls (ellipsoids, boxes or diamonds), that are optimal approximations of the true uncertainty sets. An interesting discussion on the
choice of the criteria for optimality of external ellipsoidal approximations (the volume,
the sum of squared semi–axes and the volume of the projection of the external ellipsoidal
approximations onto a particular subspace) is reported in [Che02]. A comprehensive
account of the results in set membership estimation theory and a number of relevant
references can be found in [MV91].
Relevant to the set-membership approach to state estimation are set-valued analysis and viability theory [AF90, Aub91, KV97, KF93]. In particular, a comprehensive
theoretical exposition of ellipsoidal calculus and its application to viability and state
estimation problems for linear continuous time systems is presented in [KV97].
A recent extension of the ellipsoidal techniques for reachability analysis for disturbance
free hybrid systems is reported in [KV04]. Moving horizon estimation for hybrid systems
is considered in [FTMM02].
This chapter addresses the problem of set-membership estimation for discrete time
piecewise affine systems subject to additive but bounded disturbances; to the author's knowledge this problem has not been specifically addressed in the literature. This chapter
complements existing results and provides a recursive filtering algorithm for piecewise
affine systems. This extension is not trivial since the dynamical behavior of piecewise
affine systems is significantly more complicated than that of linear systems for which the
set-membership based estimation is fairly well understood.

8.1 Preliminaries
Consider a perturbed, autonomous, piecewise affine, discrete time system, Sa , defined
by:

x+ = f (x, w), (8.1.1)


y = g(x, v), (8.1.2)

where x ∈ Rn denotes the current state, x+ the successor state, y ∈ Rp the output, w ∈
W ⊂ Rn the current input disturbance and v ∈ V ⊂ Rp the measurement disturbance.
The (unknown) disturbances w and v may take values, respectively, anywhere in the
bounded, convex sets W and V. The functions f (·) and g(·) are piecewise affine, being
defined by
f(x, w) , Ak x + ck + w,   g(x, v) , Dk x + fk + v,   ∀x ∈ Rk, ∀k ∈ N+q   (8.1.3)

where, for any integer i, the set N+i is defined by N+i , {1, 2, . . . , i}; the sets Rk, k ∈ N+q, are polytopes, have disjoint interiors and cover the region of state space of interest, so that ∪_{k∈N+q} Rk = X ⊆ Rn and interior(Rk) ∩ interior(Rj) = ∅ for all k ≠ j, k, j ∈ N+q, where interior(A) denotes the interior of the set A. The set of sets {Rk | k ∈ N+q} is called a polytopic partition of X.
Consider, also, the piecewise affine, discrete time system, S, defined by:

x+ = f (x, u, w), (8.1.4)


y = g(x, u, v), (8.1.5)

where the piecewise affine functions f (·) and g(·) are defined by
f(x, u, w) , Ak x + Bk u + ck + w,   g(x, u, v) , Dk x + Ek u + fk + v,   ∀(x, u) ∈ Pk, ∀k ∈ N+q   (8.1.6)

where x, x+, y, w and v are defined as above and u ∈ Rm denotes the current input (control). The polytopes Pk, k ∈ N+q, have disjoint interiors and cover the region Z , X × U of state/control space of interest, so that ∪_{k∈N+q} Pk = Z ⊆ Rn+m and interior(Pk) ∩ interior(Pj) = ∅ for all k ≠ j, k, j ∈ N+q. The set of sets {Pk | k ∈ N+q} is a polytopic partition of Z.
The problem considered in this chapter is: given that the initial state x0 lies in
the initial uncertainty set X0 and given, at each time i, the observation sequence
{y(1), y(2), . . . , y(i)}, determine the uncertainty set X (i) in which the true (but un-
known) state x(i) lies.
Remark 8.1 (Basic Notation) We remind the reader that the following notation is used
in this chapter. For any integer i, wi denotes the sequence {w(0), w(1), . . . w(i − 1)},
and φ(i; (x0 , i0 ), wi ) denotes the solution of x+ = f (x, w) at time i if the ini-
tial state is x0 at time i0 and the disturbance sequence is sequence wi . Similarly,
φ(i; (x0 , i0 ), ui , wi ) is the solution of x+ = f (x, u, w) at time i if the initial state is
x0 at time i0 , the input sequence is ui , {u(0), u(1), . . . , u(i − 1)}, and the distur-
bance sequence is sequence wi ; φ(i; (x0 , i0 ), u, w), where u = {u(0), u(1), . . . , u(N − 1)}
and w = {w(0), w(1), . . . , w(N − 1)} are input and disturbance sequences, respectively,
with N > i, denotes φ(i; (x0 , i0 ), ui , wi } where ui = {u(0), u(1), . . . , u(i − 1)} and
wi = {w(0), w(1), . . . , w(i − 1)} are sequences consisting of the first i elements of u
and w respectively. Event (x, i) denotes state x at time i.

Definition 8.1 (Set of states consistent with observation) A state x is said to be consistent with the observation y = g(x, v) if y ∈ g(x, V) , {g(x, v) | v ∈ V}. An event (x, i) is consistent (for system Sa) with an initial uncertainty set X0 and observations yi = {y(j) | j ∈ N+i} if there exists an initial state x0 ∈ X0 and an admissible disturbance sequence wi (wi ∈ Wi) such that y(j) ∈ g(φ(j; (x0, 0), wj), V) for all j ∈ N+i. The set of states x at time i consistent with an initial uncertainty set and observations yi is:

X(i) = {φ(i; (x0, 0), w) | x0 ∈ X0, w ∈ Wi, y(j) ∈ g(φ(j; (x0, 0), w), V) ∀ j ∈ N+i}   (8.1.7)

Similarly, an event (x, i) is consistent (for system S) with an initial uncertainty set X0, an input sequence ui+1 , {u(0), u(1), . . . , u(i)} and observations yi = {y(j) | j ∈ N+i} if there exists an initial state x0 ∈ X0 and an admissible disturbance sequence wi such that y(j) ∈ g(φ(j; (x0, 0), uj, wj), V) for all j ∈ N+i. The set X(i) of states, at time i, consistent with an initial uncertainty set X0, input sequence ui+1 = (ui, u(i)) and observations yi is:

X(i) = {φ(i; (x0, 0), ui, w) | x0 ∈ X0, w ∈ Wi, y(j) ∈ g(φ(j; (x0, 0), uj, w), u(j), V) ∀ j ∈ N+i}.   (8.1.8)

where uj is the sequence consisting of the first j elements of ui .

Clearly, the set of states x consistent with the observation y = g(x, v) is the set
{x | y ∈ g(x, V)}. When g(x, v) = h(x) + v, which is the case for Sa (and g(x, u, v) =
h(x, u) + v for S), the set of states consistent with observation y = h(x) + v is the set

C(y) = {x | h(x) ∈ y ⊕ (−V)} (8.1.9)

for system Sa , since {x | y ∈ h(x) + V} = {x | ∃v ∈ V s.t. y = h(x) + v} = {x | ∃v ∈


V s.t. h(x) = y − v} = {x | h(x) ∈ y ⊕ (−V)}. For system S, by similar arguments, the
set of states consistent with observation y = h(x, u) + v is the set

C(y, u) = {x | h(x, u) ∈ y ⊕ (−V)} (8.1.10)

We assume that the initial state x0 belongs to the initial uncertainty set X0 a polytope
or a polygon. We consider the two related problems Pa and P.

Problem Pa: Given system Sa, an integer i, and an initial uncertainty set X0, determine, for each j ∈ N+i, the uncertainty set X(j) of states at time j that is consistent with the observations yi , {y(j) | j ∈ N+i} and the initial uncertainty set X0 (i.e. with each initial state x0 ∈ X0).

Problem P: Given system S, an integer i, an initial uncertainty set X0, and the input sequence ui+1 = {u0, u1, . . . , ui}, determine, for each j ∈ N+i, the uncertainty set X(j) of states that is consistent with the observations yi , {y(j) | j ∈ N+i} and the initial uncertainty set X0.

8.2 Filtering
The filtering problem is the determination, at each time i, of X (i), the set of states
consistent with the initial uncertainty set X0 and the observations yi. The solution to this problem is given by (8.1.7) or (8.1.8). In Section 8.2.1 we restate recursive versions
of (8.1.7) or (8.1.8) and then, in Section 8.2.2, specialize our results to the case when
f (·) and g(·) are piecewise affine.

8.2.1 Recursive filtering algorithm

The following result appears in a similar form in the literature, see for example [Wit68,
Sch68, Sch73], and is given here for the sake of completeness.
Proposition 8.1 (Recursive State Filtering) The uncertainty sets X (i), i ∈ N are given,
for system Sa , by the recursive relations,

X (i + 1) = X̂ (i + 1|i) ∩ {x | y(i + 1) ∈ g(x, V)} (8.2.1)

where X̂ (i + 1|i) is defined by

X̂ (i + 1|i) , {f (x, w) | x ∈ X (i), w ∈ W} (8.2.2)

and for system S by

X (i + 1) = X̂ (i + 1|i) ∩ {x | y(i + 1) ∈ g(x, u(i + 1), V)} (8.2.3)

where
X̂ (i + 1|i) , {f (x, u(i), w) | x ∈ X (i), w ∈ W} (8.2.4)

The set X̂ (i+1|i) is the uncertainty set at time i+1 consistent with the initial uncertainty
set X0 and the observations yi , i.e. prior to the observation y(i + 1); it is the one-step
ahead prediction of the uncertainty set X (i). The set {x | y(i + 1) ∈ g(x, V)} for system
Sa ({x | y(i + 1) ∈ g(x, u(i + 1), V)} for system S) is the set of states, at time i + 1,
consistent with the observation y(i + 1).

Proof: By definition, for system Sa ,

X(i + 1) = {x | x = φ(i + 1; (x0, 0), w), x0 ∈ X0, w ∈ Wi+1, y(j) ∈ g(φ(j; (x0, 0), w), V) ∀ j ∈ N+i+1}
= {x | x = f(z, w), z = φ(i; (x0, 0), wi), x0 ∈ X0, wi ∈ Wi, w ∈ W, y(j) ∈ g(φ(j; (x0, 0), wi), V) ∀ j ∈ N+i, y(i + 1) ∈ g(x, V)}
= {x | x = f(z, w), z ∈ X(i), w ∈ W} ∩ {x | y(i + 1) ∈ g(x, V)}
= X̂(i + 1|i) ∩ {x | y(i + 1) ∈ g(x, V)}

The proof for system S is almost identical.

QeD.
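Structurally, the filter of Proposition 8.1 is just a predict–intersect loop around three set primitives. The sketch below records this structure for system Sa; the callables predict, consistent and intersect are placeholders for whatever set representation (for example the polygons of the next subsection) is used, and no particular library is assumed.

def set_membership_filter(X0, observations, predict, consistent, intersect):
    """Generate X(1), X(2), ... from X(0) = X0 and the observations, where
    predict(X)      computes {f(x, w) | x in X, w in W}   (eq. (8.2.2)),
    consistent(y)   computes {x | y in g(x, V)},
    intersect(A, B) intersects two set representations    (eq. (8.2.1))."""
    X = X0
    for y in observations:
        X = intersect(predict(X), consistent(y))
        yield X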

8.2.2 Piecewise affine systems

As shown in the sequel of this chapter, the basic data structure employed in the solution of the filtering problem when the system is piecewise affine is a polygon. In particular, the uncertainty sets X(i), i ∈ N, are polygons. Hence our first problem is the determination of the one-step ahead prediction set X̂+ when the current uncertainty set X is a polygon, and our second problem is the determination of the updated uncertainty set X+ , X̂+ ∩ {x | y+ ∈ g(x, V)} for system Sa and X+ , X̂+ ∩ {x | y+ ∈ g(x, u+, V)} for system S. These problems are resolved in Lemmas 8.1 and 8.2.

Lemma 8.1 (The one-step ahead prediction set X̂ + ) Given the polygon X = ∪j∈J Xj
where each set Xj is a polyhedron, then the one-step ahead prediction set X̂ + is also a
polygon.

Proof: Consider first the case when the system is autonomous (system Sa ). Then, for
all j ∈ J,
f(Xj ∩ Rk, W) = ck ⊕ Ak (Xj ∩ Rk) ⊕ W, ∀k ∈ N+q

so that
[
X̂ + , f (X , W) = (ck ⊕ Ak Xj,k ⊕ W) (8.2.5)
(j,k)∈J×N+
q

where
Xj,k , Xj ∩ Rk (8.2.6)

Since each Xj is a polyhedron and each Rk a polytope, X̂ + is a polygon. For the non-
autonomous system S, the corresponding expression for X̂ + is
[ 
X̂ + , f (X , u, W) = u
(ck + Bk u) ⊕ Ak Xj,k ⊕W (8.2.7)
(j,k)∈J×N+
q

where, now,
u
Xj,k , {x ∈ Xj | (x, u) ∈ Pk } (8.2.8)

Thus, the updated set X̂ + is a polygon.

QeD.
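When the pieces Xj,k and the disturbance set W are stored by their vertices, each term ck ⊕ Ak Xj,k ⊕ W in (8.2.5) can be obtained by mapping the vertices of Xj,k, forming all pairwise sums with the vertices of W and taking a convex hull, since both the affine image and the Minkowski sum of polytopes are generated by their vertices. The sketch below is only an illustration of this step (it assumes the resulting point set is full-dimensional) and is not the computational geometry software cited later.

import numpy as np
from scipy.spatial import ConvexHull

def affine_image_plus_disturbance(A, c, X_vertices, W_vertices):
    """Vertices of c + A*X (+) W for polytopes X and W given by their vertices."""
    Xv = np.asarray(X_vertices, dtype=float)
    Wv = np.asarray(W_vertices, dtype=float)
    mapped = Xv @ A.T + c                      # vertices of c + A*X
    # Candidate vertices of the Minkowski sum: all pairwise sums.
    sums = (mapped[:, None, :] + Wv[None, :, :]).reshape(-1, mapped.shape[1])
    hull = ConvexHull(sums)                    # drop redundant points
    return sums[hull.vertices]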

Lemma 8.2 (The updated uncertainty set X + ) Given the polygon X = ∪j∈J Xj where
each set Xj is a polyhedron, then X + , the one-step ahead prediction set, updated by the
successor observation y + , is also a polygon.

Proof: (i) For system Sa , X + , the one-step ahead prediction set updated by the obser-
vation y + , is shown in Proposition 8.1 (equation (8.2.1)) to be

X + = X̂ + ∩ C(y + ) (8.2.9)

where, from (8.1.9),

C(y + ) = {x | y + ∈ g(x, V)} = {x | h(x) ∈ y + ⊕ (−V)} (8.2.10)


h(x) , Dk x + fk, ∀x ∈ Rk, ∀k ∈ N+q   (8.2.11)

Using (8.2.5) and (8.2.9)–(8.2.11), we obtain

X+ = [ ∪_{(j,k)∈J×N+q} (ck ⊕ Ak Xj,k ⊕ W) ] ∩ [ ∪_{k∈N+q} Ck(y+) ]   (8.2.12)

where, for all k ∈ N+q and all y+ ∈ Rp,

Ck(y+) , {x ∈ Rk | Dk x + fk ∈ y+ ⊕ (−V)}.   (8.2.13)

Since, for all k ∈ N+q and all y ∈ Rp, Ck(y) is a polyhedron, it follows that X+ is a polygon.

(ii) For system S, using similar reasoning, we obtain

X + = X̂ + ∩ C(y + , u+ ) (8.2.14)

where, now, X̂ + is given by (8.2.7) and C(y + , u+ ) is given by:

C(y + , u+ ) = {x | y + ∈ g(x, u+ , V)} = {x | h(x, u+ ) ∈ y + ⊕ (−V)} (8.2.15)


h(x, u+) , Dk x + Ek u+ + fk, ∀(x, u+) ∈ Pk, ∀k ∈ N+q   (8.2.16)

Using (8.2.7) and (8.2.14)–(8.2.16), we obtain

X+ = [ ∪_{(j,k)∈J×N+q} ((ck + Bk u) ⊕ Ak X^u_j,k ⊕ W) ] ∩ [ ∪_{k∈N+q} Ck(y+, u+) ]   (8.2.17)

where, for all k ∈ N+q and all (y+, u+) ∈ Rp × Rm,

Ck(y+, u+) , {x | (x, u+) ∈ Pk, Dk x + Ek u+ + fk ∈ y+ ⊕ (−V)}.   (8.2.18)

Since, for all k ∈ N+q and all (y+, u+) ∈ Rp × Rm, Ck(y+, u+) is a polyhedron, it follows that X+ is a polygon.

QeD.

Theorem 8.1. (Recursive State Filtering for piecewise affine systems) (i) The recursive
solution of the filtering problem for the autonomous system Sa is given by:
   
X(i + 1) = [ ∪_{(j,k)∈Ji×N+q} (ck ⊕ Ak Xj,k(i) ⊕ W) ] ∩ [ ∪_{k∈N+q} Ck(y(i + 1)) ]   (8.2.19)

X (0) = X0 (8.2.20)

where X0 is the a-priori uncertainty set at time 0 and where, for each time i ≥ 1, the
sets X (i), Xj,k (i) and Ck (y(i + 1)) are defined by

X (i) = ∪j∈Ji Xj (i) (8.2.21)


Xj,k (i) , Xj (i) ∩ Rk (8.2.22)
Ck (y(i + 1)) , {x ∈ Rk | Dk x + fk ∈ y(i + 1) ⊕ (−V)}. (8.2.23)

The sets X(i), i ∈ N, are polygons (and the sets Xj(i), j ∈ Ji and i ∈ N, are polyhedra).
(ii) The recursive solution of the filtering problem for the non-autonomous system S is
given by:
 
X(i + 1) = [ ∪_{(j,k)∈Ji×N+q} ((ck + Bk u(i)) ⊕ Ak Xj,k(i) ⊕ W) ] ∩ [ ∪_{k∈N+q} Ck(y(i + 1), u(i + 1)) ]   (8.2.24)

X (0) = X0 (8.2.25)

where X0 is the a-priori uncertainty set at time 0 and where, for each time i ≥ 1, the
sets X (i), Xj,k (i) and Ck (y(i + 1), u(i + 1)) are defined by

X (i) = ∪j∈Ji Xj (i) (8.2.26)


Xj,k (i) , {x ∈ Xj (i) | (x, u(i)) ∈ Pk } (8.2.27)
Ck (y(i + 1), u(i + 1)) , {x | (x, u(i + 1)) ∈ Pk ,
Dk x + Ek u(i + 1) + fk ∈ y(i + 1) ⊕ (−V)}. (8.2.28)

The sets X (i), i ∈ N are polygons (and the sets Xj (i), j ∈ Ji and i ∈ N, are polyhedra).
Theorem 8.1 follows directly from Lemma 8.2. A graphical illustration of Theorem 8.1 is given in Figure 8.2.1.

8.3 Prediction
Consider the problem of finding the uncertainty set X̂(ℓ|i) at time ℓ, given observations up to time i < ℓ, i.e. given yi. For system Sa this is the set

X̂(ℓ|i) = {φ(ℓ; (x0, 0), w) | x0 ∈ X0, w ∈ Wℓ, y(j) ∈ g(φ(j; (x0, 0), wj), V) ∀ j ∈ N+i}   (8.3.1)

so that
X̂ (ℓ|i) = {φ(ℓ; (x, i), wi,ℓ ) | x ∈ X (i), wi,ℓ ∈ Wℓ−i } (8.3.2)

where X (i) = X̂ (i|i) is the uncertainty set at time i given X0 and the observation sequence
yi (the solution at time i to the filtering problem); wi,ℓ denotes the sequence {w(i), w(i +
1), . . . , w(ℓ − i − 1)}. Since no observations are available in the interval i + 1 to ℓ, the

Figure 8.2.1: Graphical Illustration of Theorem 8.1 – the regions R1 and R2, the one-step ahead prediction sets X̂1(i + 1|i), X̂2(i + 1|i), the measurement-consistent sets C1(y(i + 1)), C2(y(i + 1)) and the updated pieces Xk,l(i + 1) = X̂k(i + 1|i) ∩ Cl(y(i + 1)), k, l = 1, 2.

Since no observations are available in the interval i + 1 to ℓ, the solution to the prediction problem may be obtained from the solution to the filtering problem by omitting the update step in (8.2.19), yielding:
Corollary 8.1 (Predicted Uncertainty Sets) The uncertainty sets at times greater than
time i, given observations up to time i, are given by the recursion:
X̂(ℓ + 1|i) = ∪_{(j,k)∈Jℓ×N+q} [ck ⊕ Ak X̂j,k(ℓ|i) ⊕ W], ℓ ≥ i   (8.3.3)

X̂ (i|i) = X (i), (8.3.4)

for system Sa , where the sets X̂ (ℓ|i) and X̂j,k (ℓ|i) are defined by

X̂ (ℓ|i) = ∪j∈Jℓ X̂j (ℓ|i), X̂j,k (ℓ|i) , X̂j (ℓ|i) ∩ Rk (8.3.5)

and by
X̂(ℓ + 1|i) = ∪_{(j,k)∈Jℓ×N+q} [(ck + Bk u(ℓ)) ⊕ Ak X̂j,k(ℓ|i) ⊕ W], ℓ ≥ i   (8.3.6)

X̂ (i|i) = X (i). (8.3.7)

where

X̂ (ℓ|i) = ∪j∈Jℓ X̂j (ℓ|i), X̂j,k (ℓ|i) , {x ∈ X̂j (ℓ|i) | (x, u(ℓ)) ∈ Pk } (8.3.8)

for system S. The sets X̂(ℓ|i), ℓ ≥ i, are polygons.

8.4 Smoothing
Consider now the problem of finding the uncertainty set X̂ (i|N ) at time i given observa-
tions up to time N > i, i.e. given yN . For system Sa this is the set

X̂(i|N) = {φ(i; (x0, 0), w) | x0 ∈ X0, w ∈ WN, y(j) ∈ g(φ(j; (x0, 0), w), V) ∀ j ∈ N+N}   (8.4.1)

so that

X̂(i|N) = X̂(i|i) ∩ {x | ∃w ∈ WN−i s.t. y(j) ∈ g(φ(j; (x, i), w), V) ∀ j ∈ {i + 1, . . . , N}}   (8.4.2)

The set X(i) = X̂(i|i) is the solution to the filtering problem, already discussed. Hence we consider the determination of the set

Z(i) , {x | ∃w ∈ WN−i s.t. y(j) ∈ g(φ(j; (x, i), w), V) ∀ j ∈ {i + 1, . . . , N}}   (8.4.3)

For each time i, Z(i) is a controllability set, the set of initial states x for which there
exists a disturbance sequence w such that the resultant state trajectory satisfies the
constraints y(j) ∈ g(φ(j; (x, i), w), V), w ∈ Wj−i for all j ∈ {i + 1, . . . , N }. The solution
to this problem is well known; the sets Z(j), j ∈ {i, . . . , N } for any i, for system Sa , are
polygons and are computed in reverse time, j = N − 1, N − 2, . . . , i, as follows:

Z(j) = ∪_{k∈N+q} {x ∈ Rk | Ak x + ck ∈ Z∗(j + 1) ⊕ (−W)}   (8.4.4)

Z(N ) = X. (8.4.5)

where

Z∗(j + 1) = Z(j + 1) ∩ [ ∪_{k∈N+q} Ck(y(j + 1)) ]

is a polygon. By similar arguments (and appropriate modifications) as above we focus


on the determination of the set Z(i) for system S. The corresponding equations for
computing the sets Z(j) for system S are

Z(j) = ∪_{k∈N+q} {x | (x, u(j)) ∈ Pk, Ak x + Bk u(j) + ck ∈ Z∗(j + 1) ⊕ (−W)}   (8.4.6)

Z(N) = X.   (8.4.7)

where

Z∗(j + 1) = Z(j + 1) ∩ [ ∪_{k∈N+q} Ck(y(j + 1), u(j + 1)) ]

is a polygon. As before, each set Z(j) is a polygon. That the sets Z(j) are yielded by
the recursion (8.4.4) and (8.4.5) for system Sa (and by (8.4.6) and (8.4.7) for system S )
follows from well known results on controllability sets, see for example [Ker00, RKM03]
and references therein.

Theorem 8.2. (Smoothing for piecewise affine systems) The uncertainty set X̂ (i|N ) at
time i < N (the set of states at time i consistent with the initial uncertainty set X0 and
observation sequence yN ) is a polygon and is given by

X̂ (i|N ) = X (i) ∩ Z(i) (8.4.8)

where X (i) is the uncertainty set at time i given the initial uncertainty set X0 and obser-
vations y(1), y(2), . . . , y(i) (and the inputs u(0), u(1), . . . , u(i)) and Z(i), which depends
on observations y(i+1), . . . , y(N ) (and on the inputs u(i+1), u(i+2), . . . , u(N )), is given
by (8.4.4) and (8.4.5) for system Sa and by (8.4.6) and (8.4.7) for system S.
Theorem 8.2 follows from (8.4.2) and (8.4.3) for system Sa and from appropriate
modifications of (8.4.2) and (8.4.3) for system S.
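To make the structure of these computations concrete, the following sketch outlines the backward recursion (8.4.4)–(8.4.5) and its use in Theorem 8.2 for system Sa. It is an illustrative outline only (written in Python): the polytope helpers intersect, minkowski_sum, affine_preimage and is_empty, the measurement-consistent sets C(k, y) and the representation of a polygon as a list of convex polytopes are placeholders assumed to be supplied by standard computational geometry software [Ver03, KGBM03].

def smoothing_set(X, W_minus, regions, C, y, i, N):
    # Backward recursion (8.4.4)-(8.4.5) for the autonomous PWA system Sa.
    # A 'polygon' is represented as a list of convex polytopes.
    # regions : list of triples (A_k, c_k, R_k) describing the PWA dynamics.
    # C(k, y) : the set C_k(y) of states consistent with output y in region k.
    # W_minus : the reflected disturbance set -W.
    Z = [X]                                          # Z(N) = X
    for j in range(N - 1, i - 1, -1):                # reverse time j = N-1, ..., i
        # Z*(j+1) = Z(j+1) intersected with the union of the sets C_k(y(j+1))
        Z_star = [intersect(P, C(k, y[j + 1]))
                  for P in Z for k in range(len(regions))]
        Z_star = [P for P in Z_star if not is_empty(P)]
        # target sets Z*(j+1) (+) (-W)
        targets = [minkowski_sum(P, W_minus) for P in Z_star]
        # Z(j) = union over k of { x in R_k | A_k x + c_k in target }
        Z = [intersect(R_k, affine_preimage(A_k, c_k, T))
             for (A_k, c_k, R_k) in regions for T in targets]
        Z = [P for P in Z if not is_empty(P)]
    return Z

# The smoothed uncertainty set of Theorem 8.2 is then X_hat(i|N) = X(i) ∩ Z(i),
# computed polytope by polytope from the filtered polygon X(i) and the polygon Z(i).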

8.5 Numerical Example


In order to illustrate our results we employed a simple second order autonomous piecewise
affine system defined by:

x+ = [0.7969 −0.2247; 0.1798 0.9767] x + [0; 0] + w,   x1 ≤ 1,
x+ = [0.4969 −0.2247; 0.0798 0.9767] x + [0.3; 0.1] + w,   x1 ≥ 1   (8.5.1)

and

y = [1 0] x + 0 + v,   x1 ≤ 1,
y = [0.5 0] x + 0.5 + v,   x1 ≥ 1   (8.5.2)
where it is known that x0 ∈ X0 , v ∈ V and w ∈ W; the set of possible initial states X0 is

X0 , convh{(−4, −17)′ , (4.2, 10.1)′ , (13.6, −1)′ , (5.4, −7.9)′ }

and the uncertainty sets are described by

W , {w ∈ R2 | |w|∞ ≤ 0.05}

and
V , {v ∈ R | − 0.1 ≤ v ≤ 0.1}

We took the initial state to be x0 = (4.8, −9)′ , applied admissible random disturbance
sequences of length N = 37, and recorded the corresponding output sequence. Our
algorithm was applied with results shown in Figure 8.5.1, in which the light shaded sets
are estimated sets consistent with the observed output and the single trajectory is the
actual trajectory. As expected, the sets X (i) of possible states contained the actual
trajectory.

Figure 8.5.1: Estimated Sets for Example in Section 8.5 (the estimated sets X (i), the initial set X0 , the regions R1 and R2 and the actual state trajectory in the (x1 , x2 ) plane)

8.6 Summary
Our results provide an exact method for set-membership based state estimation for piece-
wise affine discrete time systems. The basic set structure employed in our procedures is a
polygon. Complexity of the solution may be considerable but the main advantage is that,
at any given time i, the exact uncertainty set X (i), in which the true state x(i) lies, is com-
puted. The proposed algorithms can be implemented by using standard computational
geometry software [Ver03, KGBM03]. Computational complexity can be reduced by the
use of the appropriate approximations employing results reported in [VM91, CGZ96].
In conclusion, a set theoretic approach was applied to the problem of state estima-
tion for piecewise affine discrete time systems. Recursive algorithms were proposed for
filtering, prediction and smoothing problems. The method was illustrated by a numerical
example.

Part II

Robust Model Predictive Control

Chapter 9

Tubes and Robust Model Predictive Control of constrained discrete time systems

Mathematics possesses not only truth, but supreme beauty – a beauty cold and austere,
like that of sculpture.

– Bertrand Arthur William Russell, 3rd Earl Russell

In this chapter we discuss a form of feedback model predictive control by using tubes.
In certain relevant cases, such as linear systems, this approach has manageable computational complexity and overcomes the disadvantages of conventional model predictive control when uncertainty is present. The optimal control problem, solved on-line, yields a ‘tube’ and
an associated policy that maintains the controlled trajectories in the tube despite un-
certainty; computational complexity is linear in horizon length, when the system being
controlled is linear. A set of ingredients ensuring stability is identified.

9.1 Preliminaries
The problem of robust model predictive control may be tackled in several ways, briefly
reviewed in [MRRS00], §4. The first and the most obvious method is to rely on
the inherent robustness of deterministic model predictive control (that ignores distur-
bances) [SR95, MAC02b].
A second approach, called open-loop model predictive control in [MRRS00], is to
determine the current control action by solving on-line an optimal control problem in
which the uncertainty is taken into account (both in cost minimization and constraint
satisfaction) and the decision variable (like that in the first approach) is a sequence of
control actions. Some of the earliest analysis of robust model predictive control, for ex-
ample [ZM93], used this approach; this method cannot contain the ‘spread’ of predicted

trajectories resulting from disturbances, making solutions of the uncertain optimal control problem unduly conservative or even infeasible. To overcome these disadvantages,
feedback model predictive control is necessary. Both open–loop and feedback model pre-
dictive controllers generate a tube of trajectories when uncertainty is present. A crucial
advantage of feedback model predictive control is the fact that it reduces the spread of
predicted trajectories resulting from uncertainty. Both open-loop and feedback model
predictive control provide feedback control but, whereas in open-loop model predictive
control the decision variable in the optimal control problem solved on-line is a sequence
{u0 , u1 , . . . , uN −1 } of control actions, in feedback model predictive control it is a policy
π which is a sequence {µ0 (·), µ1 (·), . . . , µN −1 (·)} of control laws.
An appropriate graphical illustration of the main differences between open–loop, de-
terministic and feedback model predictive control (for the case when the system being con-
trolled is linear) is given in Figure 9.1.1. In Figure 9.1.1a we show predicted tubes of
trajectories for (i) feedback MPC and (ii) open-loop OC (optimal control); as expected, and as briefly discussed above, the spread of trajectories is much larger for the latter; the state constraint is not satisfied for all predicted trajectories for open-loop OC, indicating that the open-loop optimal control problem is infeasible. Figure 9.1.1b shows the
spread of actual trajectories for feedback MPC (i) and conventional nominal MPC (ii)
that ignores uncertainty in the optimal control problem solved on-line. The performance
of the feedback model predictive controller is again superior; deterministic MPC has a
larger spread of trajectories and the state constraint is transgressed for some disturbance
sequences.

(a) Tubes of predicted trajectories: (i) feedback; (ii) open-loop OC. (b) Tubes of actual trajectories: (i) feedback; (ii) nominal MPC.

Figure 9.1.1: Comparison of open–loop OC, nominal MPC and feedback MPC

The main drawback of feedback model predictive control is that determination of a control policy is usually prohibitively difficult. To overcome the complexity of the feedback
optimal control problem, research has focused on various approximations and simplifica-
tions of the resultant optimal control problem. Relevant results and ideas on feedback
model predictive control and corresponding simplifications can be found, for instance,
in [May95, KBM96, May97, LY97, SM98, DMS00, MNv01, MDSA03, KRS00, SR00,

LK00, RKR98, KM02b, KM03b, CRZ01, ML01, Lö03b, Lö03a, vHB03, KA04, Smi04].
Some of these results are briefly discussed in [LCRM04], where recent results on feedback
model predictive control using ‘tubes’ (‘tube’ model predictive control) are presented.
In this chapter we discuss a strategy for achieving robust model predictive control
of constrained uncertain discrete time systems using tubes. Tubes have been exten-
sively studied by many authors, for instance: Aubin and Frankowska [Aub91, AF90];
Kurzhanski, Vályi and Filippova [KV88, KV97, KF93] and Quincampoix and Veliov
[QV02, MQV02], mainly in the context of continuous-time systems.

9.2 Tubes – Basic Idea


The problem that we consider is model predictive control of the system

x+ = f (x, u, w) (9.2.1)

where x, u and w are, respectively, the current state, control and disturbance (of dimen-
sion n, m and p respectively) and x+ is the successor state; the disturbance w is known
only to the extent that it belongs to the set W ⊂ Rp . The function f (·) is assumed to be
continuous. Control, state and disturbance are subject to the hard constraints

(x, u, w) ∈ X × U × W (9.2.2)

where U and W are (convex, compact) polytopes and X is a (convex) closed polyhedron
and the sets U, W and X contain the origin in their interiors.
Model predictive control is defined, as usual, by specifying a finite-horizon opti-
mal control problem that is solved on-line. In the approach adopted in this chap-
ter, the optimal control problem at state x is the determination of a tube defined
as a sequence X , {X0 , X1 , . . . , XN } of sets of states and an associated policy π =
{µ0 (·), µ1 (·), . . . , µN −1 (·)} satisfying:

x ∈ X0 (9.2.3)
Xi ⊆ X, ∀i ∈ NN −1 (9.2.4)
XN ⊆ Xf ⊆ X (9.2.5)
µi (z) ∈ U, ∀z ∈ Xi , ∀i ∈ NN −1 (9.2.6)
f (z, µi (z), w) ∈ Xi+1 , ∀z ∈ Xi , ∀w ∈ W, ∀i ∈ NN −1 (9.2.7)

where Xf is a terminal constraint set. Note that the constraint (9.2.3) is quite natural and was first introduced in [MSR05]. The optimal tube minimizes
an appropriate cost function defined below. The inclusion (9.2.7), which replaces the
difference equation (9.2.1), may be written in the form

f (Xi , µi (·), W) ⊆ Xi+1 (9.2.8)

where f (Xi , µi (·), W) , {f (z, µi (z), w) | z ∈ Xi , w ∈ W}.

A graphical illustration of the considered approach is given in Figure 9.2.1. One of the main problems when considering general discrete time systems lies in the fact that even if the set Xi is convex, the set f (Xi , µi (·), W) , {f (z, µi (z), w) | z ∈ Xi , w ∈ W} is, in general, a non–convex set. In fact, it is possible to construct a simple example demonstrating this observation even when the system being considered is linear. Thus a relevant problem is the characterization of an appropriate collection of sets within which the ‘tube cross–section’ should be sought. We will demonstrate in the subsequent chapter that it is possible to characterize an appropriate collection of sets for certain important classes of discrete time systems.
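As a simple illustration (one of many possible examples), consider the linear dynamics f (x, u, w) = x + (0, u)′ + w with n = 2, W = {0}, and suppose that [0, 1] ⊆ U. Take the convex cross–section Xi = [−1, 1] × {0} and the admissible, continuous but nonlinear control law µi (z) = z1² . Then f (Xi , µi (·), W) = {(z1 , z1² ) | z1 ∈ [−1, 1]} is a segment of a parabola and hence non–convex, even though the dynamics are linear and Xi is convex; restricting the tube cross–sections to a convex family is therefore a genuine restriction on the tube–policy parametrization.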

f (Xi , µi (·), W) ⊆ Xi+1 , where f (Xi , µi (·), W) , {f (z, µi (z), w) | z ∈ Xi , w ∈ W} and µi (z) ∈ U, ∀z ∈ Xi

Figure 9.2.1: Graphical illustration of feedback MPC by using tubes

Let θ , (X, π) and let Θ(x) be defined as the set of θ satisfying (9.2.3)– (9.2.7):

Θ(x) , {(X, π) | x ∈ X0 , Xi ⊆ X, µi (z) ∈ U, ∀(z, i) ∈ Xi × NN −1


f (Xi , µi (·), W) ⊆ Xi+1 , ∀i ∈ NN −1 , XN ⊆ Xf ⊆ X} (9.2.9)

The set Θ(x) depends on x because of constraint (9.2.3).


An important consequence of satisfaction of these constraints is that trajectories of
the actual system (9.2.1) are constrained to lie in the tube X and therefore satisfy all
constraints. This result can be established by a minor modification of the proof of Propo-
sition 1 in [LCRM04]:
Proposition 9.1 (Tube and Robust Constraint Satisfaction) Suppose that the tube
X and the associated policy π satisfy the constraints (9.2.3)– (9.2.7). Then the state
φ(i; x, π, w) ∈ Xi ⊆ X for all i ∈ NN −1 , the control µi (φ(i; x, π, w)) ∈ U for all i ∈ NN −1 ,
and φ(N ; x, π, w) ∈ Xf ⊆ X for every initial state x ∈ X0 and every admissible distur-
bance sequence w ∈ WN (policy π steers any initial state x ∈ X0 to Xf along a trajectory
lying in the tube X, and satisfying, for each i, state and control constraints for every
admissible disturbance sequence).

The optimal control problem PN (x) that is solved online when the current state is
x is minimization of a cost VN (θ) with respect to the decision variable θ subject to θ ∈ Θ(x) (i.e. subject to constraints (9.2.3)– (9.2.7)) where VN (·) is defined by:

VN (θ) , Σ_{i=0}^{N−1} ℓ(Xi , µi (·)) + Vf (XN )   (9.2.10)

The optimal control problem PN (x) is, therefore, defined by

PN (x) : VN0 (x) = inf_{θ} {VN (θ) | θ ∈ Θ(x)}   (9.2.11)

where VN0 (·) is the value function for the problem. The solution of the prob-
lem, if it exists, yields θ0 (x) = (X0 (x), π 0 (x)), i.e. it yields an optimal
tube X0 (x) = {X00 (x), X10 (x), . . . , XN0 (x)} and an associated policy π 0 (x) =
{µ00 (·; x), µ01 (·; x), . . . , µ0N −1 (·; x)}. If x is the current state of the system being controlled,
the control applied to the plant is:

κN (x′ ) , µ00 (x′ ; x). (9.2.12)

The domain of the value function VN0 (·), the controllability set, is:

XN = {x | Θ(x) ≠ ∅}   (9.2.13)

The following result is a consequence of definition (9.2.9):


Proposition 9.2 (Tubes &amp; Optimal Cost) Suppose that XN ≠ ∅, then for all x ∈ XN ,

VN0 (x′ ) ≤ VN0 (x), ∀ x′ ∈ X00 (x) (9.2.14)

if the minimum in (9.2.11) exists.

Proof: Claim follows immediately, since θ0 (x) ∈ Θ(x′ ) for all x′ ∈ X00 (x).

QeD.

We observe a relevant property that relates the tube approach and the standard
dynamic programming solution of feedback model predictive control:

Remark 9.1 (Link between Tubes and Dynamic programming solution of feedback model
predictive control) Recalling the dynamic programming recursion (1.7.36a) – (1.7.36c)
with boundary conditions specified in (1.7.37) it can be easily deduced that the tube –
control policy couple (X, π) with Xi , XN −i and µi (·) , κN −i (·) for all i ∈ NN yields the
time – varying tube and control policy that recovers a dynamic programming solution of
the feedback model predictive control problem (1.7.36a) – (1.7.36c).
However, the emphasis here is different; the main purpose here is to find a simpler
characterization of the tube – control policy couple in order to approximate the standard
dynamic programming solution and reduce the corresponding computational burden. A

set of appropriate simplifications for certain classes of discrete time systems is presented
in Chapter 10.
The first assumption that we impose is that:
Assumption 9.1 (Assumption on XN and PN (x)) The set XN defined in (9.2.13) is a non-empty and bounded set and the minimum in (9.2.11) exists for all x ∈ XN .

9.3 Stabilizing Ingredients


The control law κN (·) defined in (9.2.12) is not necessarily stabilizing because the optimal
control problem PN (x) is, inter alia, defined over a finite horizon. However it is possible
to stipulate conditions on the terminal cost Vf (·) and constraint set Xf , similar to those
presented in the literature and reviewed in [MRRS00] for the nominal problem, that
ensure asymptotic/exponential stability.
Because a state trajectory in the nominal (deterministic) problem is replaced by a
tube when uncertainty is present, it is necessary to generalize the concepts of robust
control and control invariance (a property that the terminal constraint set Xf is required
to have). This generalization is discussed in Chapter 4.

Remark 9.2 (Robust Control Invariant Set and Set Robust Control Invariant Set) We
refer the reader to Definition 1.23 for definition of RCI set and to Definition 4.1 for
definition of set robust control invariance.
If S is RCI for x+ = f (x, u, w) and constraint set (X, U, W), then there exists a control
law µS : S → U such that S is RPI for system x+ = f (x, µS (x), w) and constraint set
(XS , W) where XS , X ∩ {x | µS (x) ∈ U}. Hence, f (X, µS (·), W) ⊂ S for all X ⊂ S.
Given a feedback control law µf (·), we require in the sequel that for every set X =
z ⊕ S with X ⊆ Xf where z ∈ Z and S is a set, f (X, µf (·), W) satisfies f (X, µf (·), W ) ⊆
X + = z + ⊕ S for some z + ∈ Z and where X + ⊆ Xf . We therefore define a set of sets S
as follows:
S , {z ⊕ S | z ∈ Z} (9.3.1)

where S and Z are compact sets.


Remark 9.3 (More general form of S) It is possible to consider a more general and slightly more complicated form of a set of sets S, as we briefly remark next. Let the sets Z and S be
two finite collections of compact sets:

Z , {Zi | i ∈ Np }

and
S , {Si | i ∈ Nq }

where p and q are two finite integers, the set Zi is compact for every i ∈ Np and the set
Si is compact for every i ∈ Nq .
In this case we would have to require, for a given feedback control law µf (·), that
for every set X = z ⊕ S, where z ∈ Z, Z ∈ Z and S ∈ S, f (X, µf (·), W) satisfies

f (X, µf (·), W ) ⊆ z + ⊕ S + for some z + ∈ Z + , Z + ∈ Z and S + ∈ S. A more general
characterization of a set of sets S is as follows:

S , {z ⊕ S | z ∈ Z, Z ∈ Z, S ∈ S}

In order to keep the presentation as simple as possible we will consider a set of sets S characterized by (9.3.1). However, we remark that it is a simple (perhaps notationally tedious) exercise to repeat the arguments in the sequel of this chapter for the case when S has this more general form.

Remark 9.4 (Set Robust Positively Invariant Set) The set of sets S defined in (9.3.1)
is set robust positively invariant for the system x+ = f (x, µf (·), w) and constraint
set (Xf , W) with Xf , X ∩ {x | µf (x) ∈ U} if every X ∈ S satisfies X ⊆ Xf and
f (X, µf (·), W) ⊆ X + for some X + ∈ S for every X ∈ S.
Stability will be established by using, as is customary, the value function as a Lya-
punov function. Our main task is to find conditions on Vf (·) and Xf that ensure robust
exponential stability of a set S that serves as the ‘origin’ for the controlled uncertain
system x+ = f (x, κN (x), w). The further condition that we impose is:
Assumption 9.2 (Conditions on S, Xf , ℓ(·) and Vf (·))

(i) There exists a set S and an associated control law µS : S → U such that S is
robust positively invariant for the system x+ = f (x, µS (x), w) and constraint set
(XS , W) where XS , X ∩ {x | µS (x) ∈ U}.

(ii) S ⊆ interior(Xf ), Vf (·) and ℓ(·) satisfy Vf (S) = 0, ℓ(S, µS (·)) = 0, Vf (X) > 0, ℓ(X, µX (·)) > 0, ∀ X ⊈ S.

(iii) There exists a control law µf (·) such that Xf is set robustly positively invariant
set for the system x+ = f (x, µf (·), w) and constraint set (Xf , W) with Xf , X ∩
{x | µf (x) ∈ U} (every X ⊆ Xf satisfies X ⊆ Xf and f (X, µf (·), W) ⊆ X + for
some X + ⊆ Xf for every X ⊆ Xf with X, X + ∈ S) and

Vf (X + ) + ℓ(X, µf (·)) ≤ Vf (X). (9.3.2)

where X + satisfies f (X, µf (·), W) ⊆ X + with X, X + ∈ S.

Proposition 9.3 (Decrease of the value function) Suppose Assumption 9.1 and As-
sumption 9.2 are satisfied. Then

VN0 (f (x, κN (x), w)) + ℓ(X00 (x), κN (·)) ≤ VN0 (x) (9.3.3)

for all w ∈ W, all x ∈ XN , the domain of VN0 (·).

Proof: Let x ∈ XN and let θ0 (x) be the optimal solution of PN (x) so that

θ0 (x) = {X0 (x), π 0 (x)}

where
X0 (x) = {X00 (x), X10 (x), . . . , XN0 (x)}

and
π 0 (x) = {µ00 (·; x), µ01 (·; x), . . . , µ0N −1 (·; x)}.

the minimizer of PN (x); θ0 (x) exists by Assumption 9.1. Let

θ∗ (x) , {X∗ (x), π ∗ (x)}

where
X∗ (x) , {X10 (x), X20 (x), . . . , XN0 (x), XN∗ (x)}
where XN∗ (x) is such that f (XN0 (x), µf (·), W) ⊆ XN∗ (x) ⊆ Xf (XN∗ (x) exists by Assumption 9.2) and
π ∗ (x) , {µ01 (·; x), µ02 (·; x), . . . , µ0N −1 (·; x), µf (·)}.

Now, x+ ∈ f (x′ , κN (x), W) ⊆ X10 (x) for all x′ ∈ X00 (x) with κN (x′ ) = µ00 (x′ ; x). It
follows from (9.2.3)– (9.2.7) that θ∗ (x) ∈ Θ(x+ ) for all x+ ∈ X10 (x).
By Assumption 9.2 for all x+ ∈ f (x′ , κN (x), W) ⊆ X10 (x):
VN (θ∗ (x)) = Σ_{i=0}^{N−1} ℓ(Xi∗ (x), µ∗i (·; x)) + Vf (XN∗ (x))
= Σ_{i=1}^{N−1} ℓ(Xi0 (x), µ0i (·; x)) + ℓ(X∗N−1 (x), µf (·)) + Vf (XN∗ (x))
= VN0 (x) − ℓ(X00 (x), µ00 (·; x)) − Vf (XN0 (x)) + ℓ(XN0 (x), µf (·)) + Vf (XN∗ (x))
≤ VN0 (x) − ℓ(X00 (x), µ00 (·; x))

Since θ∗ (x) ∈ Θ(x+ ), VN0 (x+ ) ≤ VN (θ∗ (x)) so that for all x+ ∈ f (x′ , κN (x), W) ⊆ X10 (x):

VN0 (f (x, κN (x), w)) ≤ VN0 (x) − ℓ(X00 (x), κN (·))

with κN (·) = µ00 (·; x) and x+ = f (x, κN (x), w) for all w ∈ W, all x ∈ XN .

QeD.

Let the Hausdorff semi-distance be denoted by d(X, S) , maxx∈X d(x, S), with
d(x, S) , miny∈S |x − y|p .
Remark 9.5 (Comment on Hausdorff metric) For the development of the results in the
sequel of this chapter we will use the Hausdorff semi-distance; however all the results
may be obtained if the Hausdorff metric is used.

To use Proposition 9.3 to establish robust exponential stability of S (the origin) for the uncertain system x+ = f (x, κN (x), w), we require some further assumptions.

Assumption 9.3 (Additional Conditions on XN , S, Xf , ℓ(·) and Vf (·)) There exist


constants c3 ≥ c2 > c1 > 0 such that:

(i) ℓ(X, µ(·)) ≥ c1 d(X, S) for all X ⊂ XN ,

(ii) ℓ(X, µ(·)) ≤ c3 d(X, S) for all X ⊂ XN ,

(iii) Vf (X) ≤ c2 d(X, S) for all X ⊂ Xf ,

(iv) XN is bounded.

We assume in the sequel of this section that Assumption 9.1, Assumption 9.2
and Assumption 9.3 are satisfied.

Proposition 9.4 (Properties of the value function – I) For all x ∈ XN , VN0 (x) ≥
c1 d(X00 (x), S).

Proof: VN0 (x) ≥ ℓ(X00 (x), µ00 (·; x)) ≥ c1 d(X00 (x), S).

QeD.

Let L(η) , {x | VN0 (x) ≤ η} and Ψ(η) , {η | L(η) ⊆ Xf }. We define

ηf , sup_{η} {η | η ∈ Ψ(η)}

Thus, L(ηf ) is the largest level set of function VN0 (·) contained in Xf . The following
result is a consequence of Proposition 9.3:
Proposition 9.5 (Properties of the value function – II) For all x ∈ L(ηf ), Xi0 (x) ⊆
L(ηf ), i ∈ NN and VN0 (x) ≤ c2 d(X00 (x), S).

Proof: The proof of this result follows similar arguments as the proof of Proposition
9.3 with a set of appropriate modifications.
Let x ∈ L(ηf ) and let θ0 (x) be the optimal solution of PN (x) so that

θ0 (x) = {X0 (x), π 0 (x)}

where
X0 (x) = {X00 (x), X10 (x), . . . , XN0 (x)}

and
π 0 (x) = {µ00 (·; x), µ01 (·; x), . . . , µ0N −1 (·; x)}.

the minimizer of PN (x); θ0 (x) exists by Assumption 9.1. For each i ∈ NN and any
arbitrary y ∈ Xi0 (x) let
θ∗ (x) , {X∗ (x), π ∗ (x)}

where

X∗ (x) , {Xi0 (x), X0i+1 (x), . . . , XN0 (x), XN∗ (x), X∗N+1 (x), . . . , X∗N+i−1 (x), X∗N+i (x)}

where, for each j ∈ N+i , X∗N+j (x) is such that f (X∗N+j−1 (x), µf (·), W) ⊆ X∗N+j (x) ⊆ Xf , and XN∗ (x) is such that f (XN0 (x), µf (·), W) ⊆ XN∗ (x) ⊆ Xf (these sets exist by Assumption 9.2); and

π ∗ (x) , {µ0i (·; x), µ0i+1 (·; x), . . . , µ0N −1 (·; x), µf (·), µf (·), . . . µf (·)}.

and for i = 0, π ∗ (x) = π 0 (x).


It follows from (9.2.3)– (9.2.7) that θ∗ (x) ∈ Θ(y) for all y ∈ Xi0 (x) and for each
i ∈ NN .
By Assumption 9.2, for all y ∈ Xi0 (x) and for each i ∈ N+N :

VN (θ∗ (x)) = Σ_{j=i}^{N+i−1} ℓ(Xj∗ (x), µ∗j (·; x)) + Vf (X∗N+i (x))
= Σ_{j=i}^{N−1} ℓ(Xj0 (x), µ0j (·; x)) + Σ_{j=N}^{N+i−1} ℓ(Xj∗ (x), µ∗j (·; x)) + Vf (X∗N+i (x))
= VN0 (x) − Σ_{j=0}^{i−1} ℓ(Xj0 (x), µ0j (·; x)) − Vf (XN0 (x)) + ℓ(XN0 (x), µf (·)) + Vf (XN∗ (x))
≤ VN0 (x) − Σ_{j=0}^{i−1} ℓ(Xj0 (x), µ0j (·; x))

and by Proposition 9.2, VN0 (y) ≤ VN0 (x) for all y ∈ X00 (x). Note that we used the fact that, for each i ∈ N+N , Σ_{j=N}^{N+i−1} ℓ(Xj∗ (x), µ∗j (·; x)) + Vf (X∗N+i (x)) ≤ Vf (XN∗ (x)) by iterative application of Assumption 9.2. Since θ∗ (x) ∈ Θ(y) for all y ∈ Xi0 (x) and for
each i ∈ NN , VN0 (y) ≤ VN (θ∗ (x)) so that y ∈ L(ηf ) for all y ∈ Xi0 (x) and for each i ∈ NN
so that Xi0 (x) ⊆ L(ηf ), i ∈ NN .
The fact that VN0 (x) ≤ c2 d(X00 (x), S) follows from Assumption 9.3 since L(ηf ) ⊆
Xf .

QeD.

Proposition 9.6 (Properties of the value function – III) For all x ∈ XN , VN0 (x) ≤
c4 d(X00 (x), S) with c4 > c3 .

Proof: Firstly, from Proposition 9.5 we have that VN0 (x) ≤ c2 d(X00 (x), S) for all x ∈
L(ηf ). Now, let x ∈ XN \ L(ηf ) and let

θ0 (x) ∈ arg min_{θ} {VN (θ) | θ ∈ Θ(x)}

so that
X0 (x) = {X00 (x), X10 (x), . . . , XN0 (x)}

and
π 0 (x) = {µ00 (·; x), µ01 (·; x), . . . , µ0N −1 (·; x)}.

Since
VN (θ0 (x)) = Σ_{i=0}^{N−1} ℓ(Xi0 (x), µ0i (·; x)) + Vf (XN0 (x))
and since
d(Xi0 (x), S) ≤ d(Xi0 (x), X00 (x)) + d(X00 (x), S)

it follows that

VN (θ0 (x)) ≤ Σ_{i=0}^{N−1} c3 ( d(Xi0 (x), X00 (x)) + d(X00 (x), S) ) + c2 ( d(XN0 (x), X00 (x)) + d(X00 (x), S) )
= Σ_{i=0}^{N−1} c3 d(Xi0 (x), X00 (x)) + c2 d(XN0 (x), X00 (x)) + Σ_{i=0}^{N−1} c3 d(X00 (x), S) + c2 d(X00 (x), S)
≤ c′3 d(X00 (x), S) + c′′3 d(X00 (x), S)
= c4 d(X00 (x), S)

where the existence of c′3 such that c′3 d(X00 (x), S) ≥ Σ_{i=0}^{N−1} c3 d(Xi0 (x), X00 (x)) + c2 d(XN0 (x), X00 (x)) follows from the fact that XN is bounded and each Xi0 (x) ⊆ XN for any x ∈ XN , so that:

Σ_{i=0}^{N−1} c3 d(Xi0 (x), X00 (x)) + c2 d(XN0 (x), X00 (x)) ≤ Σ_{i=0}^{N−1} c3 d(XN , X00 (x)) + c2 d(XN , X00 (x)) = (N c3 + c2 ) d(XN , X00 (x)) ≤ d

where d , (N c3 + c2 ) maxx∈closure(XN ) d(XN , X00 (x)). Hence,


Σ_{i=0}^{N−1} c3 d(Xi0 (x), X00 (x)) + c2 d(XN0 (x), X00 (x)) ≤ d

Let c′3 = d/b where b , minx∈closure(closure(XN )\L(ηf )) d(X00 (x), S). Hence for all x ∈
XN \ L(ηf ) we have:
c′3 d(X00 (x), S) ≥ Σ_{i=0}^{N−1} c3 d(Xi0 (x), X00 (x)) + c2 d(XN0 (x), X00 (x))

Also, c′′3 = N c3 + c2 so that c4 , c′3 + c′′3 ≥ c3 . Thus VN0 (x) = VN (θ0 (x)) ≤ c4 d(X00 (x), S)
for all x ∈ XN \ L(ηf ). Since VN0 (x) ≤ c2 d(X00 (x), S) for all x ∈ L(ηf ) and VN0 (x) ≤
c4 d(X00 (x), S) for all x ∈ XN \ L(ηf ) it follows that there exists c4 > c3 such that
VN0 (x) ≤ c4 d(X00 (x), S) for all x ∈ XN .

QeD.

Theorem 9.1. (Convergence of the set sequence {X00 (xi )}) Let {xi } be any sequence
generated by x+ = f (x, κN (x), w) with x0 ∈ XN for an admissible disturbance se-
quence {wi } and consider the set sequence {X00 (xi )}. Then (i) xi ∈ X00 (xi ), ∀i and
(ii) d(X00 (xi ), S) → 0 exponentially as i → ∞.

Proof: Part (i) follows directly by construction. (ii) From Proposition 9.3, Proposition
9.4 and Proposition 9.6 and Assumption 9.2:

VN0 (x) ≥ c1 d(X00 (x), S) ∀x ∈ XN


VN0 (x) ≤ c4 d(X00 (x), S) ∀x ∈ XN
VN0 (x+ ) ≤ VN0 (x) − c1 d(X00 (x), S) ∀x ∈ XN , ∀x+ ∈ f (x, κN (x), W )

Hence, for all x ∈ XN :

VN0 (x+ ) ≤ (1 − c1 /c4 )VN0 (x) = αVN0 (x)

for all x+ ∈ f (x, κN (x), W ) where α , (1 − c1 /c4 ) ∈ (0, 1), so that

VN0 (xi ) ≤ αi VN0 (x0 )

where {xi } is any sequence generated by x+ = f (x, κN (x), w) with x0 ∈ XN and {wi } an
admissible disturbance sequence. Then

d(X00 (xi ), S) ≤ (1/c1 )VN0 (xi ) ≤ (1/c1 )αi VN0 (x0 ) ≤ (c4 /c1 )αi d(X00 (x0 ), S)

so that
d(X00 (xi ), S) ≤ (c4 /c1 )d(X00 (x0 ), S)

for all x0 ∈ XN for all i ≥ 0 and every admissible disturbance sequence. The ith term of
the sequence {d(X00 (xi ), S)} satisfies:

d(X00 (xi ), S) ≤ (c4 /c1 )αi d(X00 (x0 ), S)

so that the sequence {d(X00 (xi ), S)} converges to zero exponentially as i → ∞, i.e. limi→∞ d(X00 (xi ), S) ≤ (c4 /c1 )d(X00 (x0 ), S) limi→∞ αi = 0 for all x0 ∈ XN .

QeD.

A relevant and direct consequence of Theorem 9.1 is given next:


Theorem 9.2. (Robust Exponential Stability of S) Suppose Assumption 9.1, As-
sumption 9.2 and Assumption 9.3 are satisfied, then S is robustly exponentially
stable for the controlled uncertain system x+ = f (x, κN (x), w). The region of attraction
is XN .

Proof: This result follows directly from the facts that (established in Theorem 9.1)
xi ∈ X00 (xi ), ∀i and d(X00 (xi ), S) → 0 as i → ∞ so that d(xi , S) → 0 exponentially as
i → ∞ where {xi } is any sequence generated by x+ = f (x, κN (x), w) with x0 ∈ XN and
for any admissible disturbance sequence {wi }.

QeD.

9.4 Summary
In this chapter feedback model predictive control using tubes is introduced. Relevant
properties such as robust constraint satisfaction and robust exponential stability of an
appropriate robust control invariant set for constrained uncertain discrete time systems
are established. The proposed approach introduces tractable simplifications of the highly
complex, uncertain, optimal control problem (needed for feedback model predictive con-
trol) and allows for development of computationally tractable and efficient algorithms.
The proposed approach simplifies a standard dynamic programming solution of feedback
model predictive control problem. Further development of the method will be considered
in more detail for certain relevant classes of discrete time systems (such as linear systems)
in the subsequent chapter.

Chapter 10

Robust Model Predictive Control by using Tubes – Linear Systems

In every piece there is a number – maybe several numbers, but if so there is also a base–
number, and that is the true one. That is something that affects us all, and links us all
together.

– Arvo Pärt

In this chapter we provide a more detailed analysis of the feedback model predictive
control by using tubes for constrained and uncertain linear discrete time systems.
A number of methods for achieving robust model predictive control for linear discrete
time systems based on the results of the previous chapter are briefly discussed and more
discussion is devoted to a simple tube controller for efficient robust model predictive con-
trol of constrained linear, discrete-time systems in the presence of bounded disturbances.
As already considered in Chapter 9, we identify the couple tube – control policy ensuring
that controlled trajectories are confined to a designed tube despite uncertainty. The resul-
tant robust optimal control problem that is solved on–line is a standard quadratic/linear
programming problem of marginally increased complexity compared with that required
for model predictive control in the deterministic case. We exploit the results of Chap-
ters 3 – 4 to optimize the tube cross section, and to construct an adequate tube terminal
set and we establish robust exponential stability of a suitable robustly controlled invari-
ant set (the ‘origin’ for uncertain system) with enlarged domain of attraction. Moreover,
a set of possible controller implementations is also discussed.

10.1 Tubes for constrained Linear Systems with additive disturbances
We consider the following discrete-time linear time-invariant (DLTI) system:

x+ = Ax + Bu + w, (10.1.1)

where x ∈ Rn is the current state, u ∈ Rm is the current control action, x+ is the successor
state, w ∈ Rn is an unknown disturbance and (A, B) ∈ Rn×n × Rn×m . The disturbance
w is persistent, but contained in a convex and compact (i.e. closed and bounded) set
W ⊂ Rn that contains the origin. We make the standing assumption that the couple
(A, B) is controllable. We also define the corresponding nominal system:

z + = Az + Bv, (10.1.2)

where z ∈ Rn is the current state, v ∈ Rm is the current control action, z + is the successor
state of the nominal system. The system (10.1.1) is subject to the following set of hard
state and control constraints:
(x, u) ∈ X × U (10.1.3)

where X ⊆ Rn and U ⊆ Rm are polyhedral and polytopic sets respectively and both
contain the origin as an interior point.

Remark 10.1 (Notation Remark) In this chapter, with slight deviation from the stan-
dard notation introduced in Chapter 1, the following notation is used. W , WN
denotes the class of admissible disturbance sequences w , {w(i) ∈ W | i ∈ NN −1 }.
φ(i; x, π, w) denotes the solution at time i of (10.1.1) when the control policy is
π , {µ0 (·), µ1 (·), . . . , µN −1 (·)}, where µi (·) is the control law (mapping state to con-
trol) at time i, the disturbance sequence is w and the initial state is x at time 0. If the
initial state of nominal model is z at time 0 then we denote by φ̄(k; z, v) the solution
to (10.1.2) at time instant k, given the control sequence v , {v0 , v1 . . . vN −1 }.
Robust model predictive control is defined, as usual, by specifying a finite-horizon
robust optimal control problem that is solved on-line. In this chapter following the ap-
proach considered in Chapter 9, the robust optimal control problem is the determination
of a simple tube, defined as a sequence X , {X0 , X1 , . . . , XN } of sets of states, and
an associated control policy π that minimize an appropriately chosen cost function and
satisfy the following set of constraints (see (9.2.3) – (9.2.7)), for a given initial condition
x ∈ X:

x ∈ X0 , (10.1.4)
Xi ⊆ X, ∀i ∈ NN −1 (10.1.5)
XN ⊆ Xf ⊆ X, (10.1.6)
µi (y) ∈ U, ∀y ∈ Xi , ∀i ∈ NN −1 (10.1.7)
Ay + Bµi (y) ⊕ W ⊆ Xi+1 , ∀y ∈ Xi , ∀i ∈ NN −1 (10.1.8)

where Xf is a terminal constraint set.


In order to exploit linearity and convexity of the problem, we recall and generalize
Proposition 1 of [ML01]:
Proposition 10.1 (Forward propagation of a RCI set) Let Ω be a RCI set for (10.1.1)
and constraint set (X, U, W), and let ν : Ω → U be a control law such that Ω is a RPI set

for system x+ = Ax+Bν(x)+w and constraint set (Xν , W) with Xν , X∩{x | ν(x) ∈ U}.
Let also x ∈ z ⊕ Ω and u = v + ν(x − z). Then for any v ∈ Rm , x+ ∈ z + ⊕ Ω where
x+ , Ax + Bu + w, w ∈ W and z + , Az + Bv.

Proof: Since x ∈ z ⊕ Ω we have x = z + y for some y ∈ Ω. Since u = v + ν(x − z) it


follows that x+ ∈ A(z + y) + B(v + ν(x − z)) ⊕ W = Az + Bv + Ay + Bν(y) ⊕ W. But
z + = Az + Bv and Ay + Bν(y) ⊕ W ⊆ Ω, ∀y ∈ Ω so that x+ ∈ z + ⊕ Ω.

QeD.

Note that the previous result holds for a RCI set Ω for (10.1.1) and any arbitrary con-
straint set (X, U, W). This result allows us to exploit a simple parameterization of the
tube-policy pair (X, π) as follows. The state tube X = {X0 , X1 , . . . , XN } is parametrized
by {zi } and R as follows:
Xi , zi ⊕ R, i ∈ NN (10.1.9)

where zi is the tube cross–section center at time i and R is a set. The control laws µi (·)
defining the control policy π = {µ0 (·), µ1 (·), . . . , µN −1 (·)} are parametrized by {zi } and
{vi } as follows:
µi (y) , vi + ν(y − zi ), y ∈ Xi , (10.1.10)

for all i ∈ NN −1 , where vi is the feedforward component of the control law and ν(y − zi )
is feedback component of the control law µi (·).
Our next step is to discuss an adequate tube cross–section and tube terminal set that enable a formulation of a simple robust optimal control problem; the solution to this
robust optimal control problem allows for receding horizon implementation of the tube
controller ensuring robust exponential stability of an appropriate RCI set.
A suitable choice for the ‘tube cross–section’ R is any RCI set with a ν : R → U
such that R is RPI for system Ax + Bν(x) + w and constraint set (Xν , W) with Xν ,
X ∩ {x | ν(x) ∈ U}. The sequence {zi } is the tube centers sequence and is required to
satisfy (10.1.2), subject to tighter constraints as discussed in the sequel. We will discuss
in more detail some of the possible choices for the ‘tube cross–section’ R in the sequel. We
will provide more detailed discussion of the two methods based on [MSR05] and [RM05b].
However, we first present a general discussion before specializing our results as in [MSR05]
and [RM05b].

10.1.1 Tube MPC – Stabilizing Ingredients

The parametrization for the state tube X motivates the introduction of a set of sets of
the form Φ , {z ⊕ R | z ∈ Zf } (Φ is a set of sets, each of the form z ⊕ R where R
is a set) that is set robust control invariant as already discussed in Chapter 9 (See also
Chapter 4 for definition of set robust control invariant set and additional discussion).
We introduce the following assumption:
Assumption 10.1 ( Existence of a RCI set for system (10.1.1) and constraint set

(αX, βU, W)) A1: (i) The set R is a RCI set for system (10.1.1) and constraint set
(αX, βU, W) where (α, β) ∈ [0, 1) × [0, 1), (ii) The control law ν : R → βU is such that R
is RPI for system x+ = Ax + Bν(x) + w and constraint set (Xν , W), where Xν , αX ∩ Xν
and Xν is defined by:
Xν , {x | ν(x) ∈ U}. (10.1.11)

(ν(·) exists by A1 (i)). Let


Uν , {ν(x) | x ∈ R}. (10.1.12)

and
Z , X ⊖ R, V , U ⊖ Uν (10.1.13)

We also assume:
Assumption 10.2 ( Existence of a CI set for system (10.1.2) and tighter constraint
set (Z, V)) (i) The set Zf is a CI set for the nominal system (10.1.2) and constraint set
(Z, V), (ii) The control law ϕ : Zf → V is such that Zf is PI for system z + = Az +Bϕ(z)
and constraint set Zϕ , where Zϕ , Z ∩ {z | ϕ(z) ∈ V}. (ϕ(·) exists by A2 (i)).
If Assumption 10.1 is satisfied it is easy to show that Assumption 10.2 is also
satisfied; moreover the set Φ , {z ⊕ R | z ∈ Zf } is a set robust control invariant for
system x+ = Ax + Bu + w and constraint set (X, U, W) by Theorem 4.1 of Chapter 4.
We assume in the sequel that Assumption 10.1 and Assumption 10.2 hold so that
the terminal set Zf for the nominal model can be any CI set satisfying Assumption
10.2.
Theorem 4.1 of Chapter 4 suggests that an appropriate choice for the terminal set
Xf in (10.1.6) is given by:
Xf , Zf ⊕ R (10.1.14)

where the sets Zf and R satisfy Assumption 10.1 and Assumption 10.2. With this
choice for the terminal set Xf the domain of attraction is enlarged (compared to the case
when Xf = R).

10.1.2 Simple Robust Optimal Control Problem

We are now ready to propose a robust optimal control problem, whose solution yields
the tube and the corresponding control policy satisfying the set of constraints specified
in (10.1.4) – (10.1.8) (providing that Assumption 10.1 and Assumption 10.2 hold).
In order to ensure satisfaction of (10.1.4) – (10.1.8) and use of the simple tube–policy
parametrization (10.1.9) – (10.1.10) we require that the trajectory of the nominal model
(the sequence of tube centers) satisfy the tighter constraints (10.1.13).
Let the set VN (x) of admissible control–state pairs for the nominal system at state x be
defined as follows:

VN (x) , {(z, v) | (φ̄(k; z, v), vk ) ∈ Z × V, ∀k ∈ NN −1 , φ̄(N ; z, v) ∈ Zf , x ∈ z ⊕ R}


(10.1.15)

where φ̄(k; z, v) is the solution to (10.1.2) at time instant k, given that the initial state
of nominal model is z at time 0 and the control sequence is v , {v0 , v1 . . . vN −1 }.
It is clear that the set VN (x) is a polyhedral set provided that R and Zf are poly-
hedral. An appropriate cost function can be defined as follows:
VN (z, v) , Σ_{i=0}^{N−1} ℓ(zi , vi ) + Vf (zN ),   (10.1.16)

where for all i, zi , φ̄(i; z, v) and ℓ(·) is the stage cost and Vf (·) is the terminal cost,
which can be chosen to be:

ℓ(x, u) , |Qx|p + |Ru|p , p = 1, ∞ or ℓ(x, u) , |x|2Q + |u|2R (10.1.17a)


Vf (x) , |P x|p , p = 1, ∞ or Vf (x) , |x|2P (10.1.17b)

where P , Q and R are matrices of suitable dimensions. We assume additionally, as is


standard [MRRS00], that:
Assumption 10.3 The terminal cost satisfies Vf (Az + Bϕ(z)) + ℓ(z, ϕ(z)) ≤ Vf (z) for
all z ∈ Zf .

Remark 10.2 (Suitable choice for control law ϕ(·), set Zf and terminal cost Vf (·)) When
ℓ(·) is (positive definite) quadratic, as is well known, a suitable choice for the control law
ϕ(·) and the corresponding terminal cost Vf (·) are, respectively, any stabilizing linear
state feedback control law, i.e.
ϕ(z) = Kz

and the weight for the terminal cost can be any matrix P = P ′ > 0 satisfying:

(A + BK)′ P (A + BK) + Q + K ′ RK − P < 0.

In this case an appropriate choice for the set Zf is any positively invariant set for system
z + = (A + BK)z and constraint set ZK , where:

ZK , {z | z ∈ Z, Kz ∈ V}.

It is worth pointing out that the preferred values for K, Vf (·), and Zf are, respectively,
the unconstrained DLQR controller for (A, B, Q, R), the value function for the optimal
(infinite time) unconstrained problem for (A, B, Q, R) and the set Zf is the maximal
positively invariant set for z + = (A + BK)z and constraint set {z | z ∈ Z, Kz ∈ V}.
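As an illustration of this preferred choice, a gain K and terminal weight P satisfying the above can be obtained from the discrete algebraic Riccati equation, as in the following sketch; the matrices A, B, Q and R used here are merely placeholders (those of the numerical example in Section 10.2).

import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # placeholder data (example of Section 10.2)
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[0.01]])

P = solve_discrete_are(A, B, Q, R)                   # terminal weight, Vf(z) = z' P z
K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = K z, closed loop A + B K

# For this choice the Riccati equation gives
#   (A + BK)' P (A + BK) + Q + K' R K = P,
# so Vf(A z + B K z) + l(z, K z) <= Vf(z), i.e. Assumption 10.3 holds.
Acl = A + B @ K
assert np.allclose(Acl.T @ P @ Acl + Q + K.T @ R @ K, P, atol=1e-8)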
If R and Zf are polyhedral, ℓ(·) and Vf (·) are quadratic with Q = Q′ > 0, P =
P ′ > 0 and R = R′ > 0 the resultant optimal control problem [MSR05] is a quadratic
programme, since the set VN (x) (10.1.15) is polyhedral, defined by :

PN (x) : VN0 (x) , min_{z,v} {VN (z, v) | (z, v) ∈ VN (x)}   (10.1.18)

and its unique minimizer is:

(z 0 (x), v0 (x)) , arg min_{z,v} {VN (z, v) | (z, v) ∈ VN (x)}   (10.1.19)

Remark 10.3 (Alternative Case – R and Zf are ellipsoidal sets) We observe that our results are easily extended to the case when the sets R and Zf are ellipsoidal. In this case a minor modification of the results reported in [Smi04, Lö03b, CE04] allows for a convex optimization formulation of PN (x). We note that the arguments presented in this section can be repeated for this relevant case when R and Zf are ellipsoidal sets. However, our aim is to obtain as simple a formulation of PN (x) as possible.
The domain of the value function VN0 (·), the controllability set, is:

XN , {x | VN (x) ≠ ∅}   (10.1.20)

For each i let Vi (x) and Xi be defined, respectively, by (10.1.15) and (10.1.20) with i
replacing N . The sequence {Xi } is a monotonically non-decreasing set sequence, i.e.
Xi ⊆ Xi+1 for all i. Given any x ∈ XN the solution to PN (x) defines the corresponding
simple optimal RCI tube:

X0 (x) = {Xi0 (x)}, Xi0 (x) = zi0 (x) ⊕ R, (10.1.21)

for i ∈ NN , and the corresponding control policy π 0 (x) = {µ0i (·) | i ∈ NN −1 } with

µ0i (y; x) = vi0 (x) + ν(y − zi0 (x)), y ∈ Xi0 (x) (10.1.22)

where, for each i, zi0 (x) = φ̄(i; z 0 (x), v0 (x)).
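The following sketch (illustrative only, written with the cvxpy modelling package) indicates how the quadratic programme PN (x) in (10.1.18) can be assembled; the tightened sets Z, V, the terminal set Zf and the cross–section R are assumed to be available in halfspace form {x | F x ≤ g}, and their construction (set subtraction and invariant set computations) is not part of the snippet.

import cvxpy as cp
import numpy as np

def tube_mpc_step(x, A, B, Q, R_cost, P, N, FZ, gZ, FV, gV, Ff, gf, FR, gR):
    # Decision variables: nominal tube centres z_0,...,z_N and inputs v_0,...,v_{N-1}.
    n, m = B.shape
    z = cp.Variable((n, N + 1))
    v = cp.Variable((m, N))
    cost = 0
    cons = [FR @ (x - z[:, 0]) <= gR]                       # x in z_0 (+) R
    for i in range(N):
        cost += cp.quad_form(z[:, i], Q) + cp.quad_form(v[:, i], R_cost)
        cons += [z[:, i + 1] == A @ z[:, i] + B @ v[:, i],  # nominal dynamics (10.1.2)
                 FZ @ z[:, i] <= gZ,                        # z_i in Z = X (-) R
                 FV @ v[:, i] <= gV]                        # v_i in V = U (-) U_nu
    cost += cp.quad_form(z[:, N], P)                        # terminal cost Vf(z_N)
    cons += [Ff @ z[:, N] <= gf]                            # z_N in Z_f
    cp.Problem(cp.Minimize(cost), cons).solve()
    return z.value, v.value                                 # (z0(x), v0(x)) if solved

Apart from the additional constraint coupling x to the initial tube centre z0 , the programme has the structure and dimension of an ordinary nominal model predictive control problem, which is why the on–line complexity is only marginally larger than in the deterministic case.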


We observe that Proposition 9.1 (see also Proposition 1 in [LCRM04]) holds from the
construction of the simple optimal RCI tube. In fact, a result analogous to Proposition
9.1 holds for any arbitrary couple (z, v) ∈ VN (x).
Let:
ZN , {z | z ⊕ R ⊆ XN } = XN ⊖ R (10.1.23)

and
ΦN , {z ⊕ R | z ∈ ZN } (10.1.24)

We can establish the following result:


Proposition 10.2 (Set RCI property of ΦN ) The set ΦN defined by (10.1.24) is a set
robust control invariant set for system x+ = Ax + Bu + w and constraint set (X, U, W).

Proof: This result follows from the discussion above, definitions of the sets ZN , ΦN and
the fact that the control law κ0N (·) , µ00 (·) satisfies Ax+Bκ0N (x)⊕W ⊆ X10 (x) , z10 (x)⊕R
for any arbitrary set X00 (x) = z00 (x) ⊕ R.

QeD.

10.1.3 Tube model predictive controllers

The solution of PN (x) allows for a variety of controller implementations. A set of possible
controller implementations is:

(i) Single Policy Optimized Robust Control Invariant Tube Controller.

(ii) Decreasing Horizon Tube controller

(iii) Variable Horizon Tube controller

(iv) Receding Horizon Tube controller

For a more detailed discussion of the first three controller implementations (i)–(iii) we refer to [LCRM04]. We remark that the robust optimal control problem formulation differs from the ones considered in [LCRM04] due to the introduction of the constraint x ∈ X0 = z ⊕ R [MSR05]. This relevant modification allows for establishing stronger stability
results as we will illustrate by considering the most preferable controller implementation
– Receding Horizon Tube controller.

10.1.4 Receding Horizon Tube controller

Here we follow a useful proposal recently made in [MSR05] and consider the following
implicit robust model predictive control law κ0N (·) yielded by the solution of PN (x):

κ0N (x) , v00 (x) + ν(x − z 0 (x)) (10.1.25)

where ν(·) is defined in (10.3.6) – (10.3.7). We establish some relevant properties of the
proposed controller κ0N (·) by exploiting the results reported in [MSR05].
First we recall that, from the set of standard definitions of exponential stability (see
Definitions 1.16– 1.20 in Chapter 1) we have:
Remark 10.4 (Robustly Exponentially Stable Set) A set R is robustly exponentially
stable (Lyapunov stable and exponentially attractive) for x+ = Ax + Bκ(x) + w, w ∈ W ,
with a region of attraction XN if there exists a c > 0 and a γ ∈ (0, 1) such that any
solution x(·) of x+ = Ax + Bκ(x) + w with initial state x(0) ∈ XN , and admissible
disturbance sequence w(·) (w(i) ∈ W for all i ≥ 0) satisfies d(x(i), R) ≤ cγ i d(x(0), R)
for all i ≥ 0.

Proposition 10.3 (Properties of the value function) (i) For all x ∈ R, VN0 (x) = 0,
z 0 (x) = 0, v0 (x) = {0, 0, . . . , 0} and κ0N (x) = ν(x). (ii) Let x ∈ XN and let (z 0 (x), v0 (x))
be defined by (10.1.19), then for all x+ ∈ Ax+Bκ0N (x)⊕W there exists (v(x+ ), z(x+ )) ∈
VN (x+ ) and
VN0 (x+ ) ≤ VN0 (x) − ℓ(z 0 (x), v00 (x)). (10.1.26)

Proof: (i) Since ({0, 0, . . . , 0}, 0) ∈ VN (x) and VN ({0, 0, . . . , 0}, 0) = 0 for
all x ∈ R we have proven the first assertion. (ii) The couple v(x+ ) ,
{v10 (x), . . . , v0N−1 (x), ϕ(φ̄(N ; z 0 (x), v0 (x)))} and z(x+ ) , z10 (x) satisfies (v(x+ ), z(x+ )) ∈
VN (x+ ). Also, because VN (v(x+ ), z(x+ )) ≤ VN0 (x) − ℓ(z 0 (x), v00 (x)) by standard ar-
guments that use A3 – A4 [MRRS00], we have VN0 (x+ ) ≤ VN (v(x+ ), z(x+ )) ≤
VN0 (x) − ℓ(z 0 (x), v00 (x)).

QeD.

The main stability result follows (see Theorem 1 and the proof of Theorem 1 in [MSR05]):
Theorem 10.1. (Robust Exponential Stability of R) Suppose that XN is bounded,
then the set R is robustly exponentially stable for controlled uncertain system x+ =
Ax + Bκ0N (x) + w, w ∈ W . The region of attraction is XN .
The proof of this result is given in [MSR05]. We shall demonstrate that the proposed
method satisfies Assumptions 9.1 – 9.3. Firstly, from discussion above and Proposition
10.3, Assumptions 9.1 and 9.2 are satisfied provided that there exists a RCI set R (which
we have assumed). In order to show that Assumption 9.3 holds we first observe that
given any compact set S that contains the origin as an interior point we have:

d(z ⊕ S, S) ≤ d(z, 0) + d(S, S) = d(z, 0) ⇒ d(z ⊕ S, S) ≤ d(z, 0) (10.1.27)

Hence, it follows from definition of the path and terminal cost with p = 2 and (10.1.17)
that Assumption 9.3 is satisfied. Since Assumptions 9.1 – 9.3 are satisfied we can use
the results of Theorem 9.1 and Theorem 9.2 to establish Theorem 10.1.
The proposed controller κ0N (·) results in a set sequence {X00 (x(i))}, where:

X00 (x(i)) = z 0 (x(i)) ⊕ R, i ∈ N (10.1.28)

and z 0 (x(i)) → 0 exponentially as i → ∞. The actual trajectory x(·) , {x(i)}, where


x(i) is the solution of x+ = Ax + Bκ0N (x) + w at time i ≥ 0, corresponding to a particular
realization of an infinite admissible disturbance sequence w(·) , {wi }, where wi ∈ W for
all i ∈ N, satisfies x(i) ∈ X00 (x(i)), ∀i ∈ N. Theorem 10.1 implies that X00 (x(i)) → R
as i → ∞ exponentially in the Hausdorff metric so that d(x(i), R) → 0 as i → ∞
exponentially.

10.2 Tube MPC – Simple Robust Control Invariant Tube


In this section we briefly discuss an appropriate way of constructing a robust control invari-
ant tube originally proposed in [MSR05]. This method exploits the tube–control policy
parametrization in (10.1.9) and (10.1.10). In this section we will propose an improve-
ment over the results reported in [MSR05] by relaxing the assumptions on the terminal
set. We will demonstrate that the tube cross–section and the terminal constraint set can
be constructed by using two different state feedback controllers. This relaxation allows
for reducing conservativeness and a moderate improvement over the original proposal
in [MSR05].

Simple Robust Control Invariant Tube

The first proposal [MSR05], enabling robust exponential stability of an appropriate set to be established, used a stabilizing linear state feedback control law for the feedback component of the policy, i.e. ν(y) = Ky, and the set R was the corresponding minimal robust
positively invariant set for system x+ = (A+BK)x+w and constraint set (XK , W) where
XK , {x ∈ X | Kx ∈ U}. The minimal RPI set or its ε approximation can be computed
by using the results of Chapter 2. The terminal constraint set was required to be the
maximal or any positively invariant set for system x+ = (A + BK)x and constraint set
XK ⊖ R.

Simple Robust Control Invariant Tube Ingredients

Following the original proposal [MSR05], we also use a stabilizing linear state feedback control law for the local feedback component of the policy, i.e. ν(y) , K1 y, and the
tube cross-section is chosen to be the minimal RPI set for system x+ = (A + BK1 )x + w
and constraint set (XK1 , W) where:

XK1 , {x ∈ X | K1 x ∈ U} (10.2.1)

Thus the set R, the ε (ε > 0) RPI approximation to the mRPI set, has the following
property (see Chapter 2 for a more detailed discussion and an efficient algorithm for the
computation of the set R):
(A + BK1 )R ⊕ W ⊆ R (10.2.2)

and is given (for the case when the origin is in interior(W)) by:
R = (1 − α)−1 ⊕_{i=0}^{s−1} (A + BK1 )i W   (10.2.3)

where the couple (α, s) ∈ [0, 1) × N is such that

(A + BK1 )s W ⊆ αW

and
α(1 − α)−1 ⊕_{i=0}^{s−1} (A + BK1 )i W ⊆ Bnp (ε).

If the set R satisfies:

R ⊆ interior(X) and K1 R ⊆ interior(U), (10.2.4)

then it is an RPI set for system x+ = (A + BK1 )x + w and constraint set (XK1 , W). The
existence of the set R satisfying conditions above is assumed in [MSR05] and we also
make this assumption in the sequel.
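For concreteness, the following illustrative sketch indicates how the pair (α, s) and the set R of (10.2.3) can be handled numerically in the special case where W is the box {w | |w|∞ ≤ wmax }; in this case the containment (A + BK1 )s W ⊆ αW and the support function of R have simple closed forms, and the inclusions (10.2.4) can then be checked facet by facet.

import numpy as np

def mrpi_support(A_K, w_max, alpha_bar=0.05, s_max=200):
    # Find (alpha, s) with (A+BK1)^s W contained in alpha*W for the box W,
    # and return the support function of R in (10.2.3).
    s = 0
    alpha = np.inf
    while alpha > alpha_bar and s < s_max:
        s += 1
        As = np.linalg.matrix_power(A_K, s)
        # containment in the box alpha*W: alpha = max over rows of the 1-norm of A_K^s
        alpha = np.max(np.abs(As).sum(axis=1))
    powers = [np.linalg.matrix_power(A_K, i) for i in range(s)]

    def h_R(a):
        # support function of R in direction a, using h_W(a) = w_max * ||a||_1
        return (1.0 / (1.0 - alpha)) * sum(w_max * np.abs(Ai.T @ a).sum() for Ai in powers)

    return alpha, s, h_R

# Condition (10.2.4) can then be checked facet by facet: for a state constraint
# f' x <= g one verifies h_R(f) < g, and similarly for K1 R contained in U.

For a general polytopic W the same computations apply, with the support function of W evaluated over its vertices.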
In order to avoid repetition of the arguments of Chapter 4, we provide only necessary
comments for a suitable choice for the terminal constraint set. In this section we allow

the terminal constraint set to be any control invariant set for system z + = Az + Bv and
constraint set (Z, V) where:

Z , X ⊖ R and V , U ⊖ K1 R (10.2.5)

Clearly, a modest improvement over the original proposal in [MSR05] is obtained since
the terminal constraint set can be chosen to be the maximal or any positively invariant
set for system z + = (A + BK2 )z and constraint set ZK2 where:

ZK2 , {z ∈ Z | K2 z ∈ V} (10.2.6)

and Z and V are defined in (10.2.5) (so that ϕ(z) , K2 z). Additional flexibility is gained
by allowing that K1 ≠ K2 ; however one can choose K1 = K2 and the results of this
section still hold.
Let Zf be a RPI set for system z + = (A + BK2 )z and constraint set ZK2 . The set
Zf satisfies:
(A + BK2 )Zf ⊆ Zf and Zf ⊆ ZK2 (10.2.7)

We define the terminal set, as in previous section (10.1.14), by:

Xf , Zf ⊕ R (10.2.8)

The cost function is defined by (10.1.16) and (10.1.17) with quadratic path and termi-
nal cost functions and we assume that the terminal cost satisfies Assumption 10.3. The
resultant optimal control problem is exactly the same optimal control problem defined
in (10.1.18). The only difference is in the ingredients for this optimal control problem:
tube cross–section – the set R (defined by (10.2.3)), tube terminal set – the set Xf (de-
fined by (10.2.8)) and tighter constraints Z and V. The tighter constraints are in this
case defined in (10.2.5). To summarize the resultant robust optimal control problem is
defined by (10.1.18) with the cost function defined by (10.1.16) and (10.1.17) and the set
VN (x) defined by (10.1.15).

Receding Horizon Simple Tube Controller & Illustrative example

As originally proposed in [MSR05], the solution to PN (x) (10.1.18) allows for implemen-
tation of the following implicit robust model predictive control law κ0N (·):

κ0N (x) , v00 (x) + K1 (x − z 0 (x)) (10.2.9)

By Proposition 9.1 and Proposition 1 in [LCRM04] the controller κ0N (·) ensures ro-
bust constraint satisfaction (i.e. constraint satisfaction for any admissible disturbance
sequence) and it has the properties established in Proposition 10.3 and Theorem 10.1.
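The receding horizon implementation can be summarised by the following closed–loop sketch (illustrative only); here solve_qp stands for any solver of PN (x), for example the cvxpy sketch of Section 10.1.2, and sample_W for a generator of admissible disturbances; both are placeholders rather than functions defined elsewhere in this thesis.

import numpy as np

def closed_loop(x0, A, B, K1, n_steps, solve_qp, sample_W):
    # Simulate x+ = A x + B u + w under the tube controller (10.2.9).
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        z, v = solve_qp(x)                  # optimal nominal tube centres and inputs
        u = v[:, 0] + K1 @ (x - z[:, 0])    # kappa_N(x) = v_0(x) + K1 (x - z(x))
        x = A @ x + B @ u + sample_W()      # true uncertain dynamics
        traj.append(x.copy())
    return np.array(traj)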
The numerical example is control of a constrained, open-loop unstable, second order
system (sampled double integrator) defined by:

x+ = [1 1; 0 1] x + [0.5; 1] u + w   (10.2.10)

The state constraints are
x ∈ X , {x | [0 1]x ≤ 2}

the control constraint is


u ∈ U , {u | |u| ≤ 1}

and the disturbance is bounded:

w ∈ W , {w | |w|∞ ≤ 0.1}.

The cost function is defined by (10.1.16) and (10.1.17) with Q = I, R = 0.01; the terminal
cost Vf (x) is the value function (1/2)x′ Pf x for the optimal unconstrained problem for
the nominal system so that

Pf = [2.0066 0.5099; 0.5099 1.2682]

and u = Kx is the optimal unconstrained controller for (A, B, Q, R). In this example we
choose K1 = K2 = K. The set R is computed as a polytopic, ε RPI approximation of
the minimal RPI set for the system x+ = (A + BK)x + w and constraint set ({x ∈ X | Kx ∈
U}, W) by exploiting the results of Chapter 2 and the terminal constraint set Xf = Zf ⊕R
is constructed according to the discussion above. The horizon length is N = 9. The sets
X9 and Z9 , {z | z ⊕ R ⊆ X9 } are shown in Figure 10.2.1. In Figure 10.2.2, a state
Figure 10.2.1: RCI Sets Xi (the sets X9 and Z9 shown in the (x1 , x2 ) plane)

trajectory for initial condition x0 = (−5, −2)′ is shown; the dash-dot line is the actual
trajectory {x(i)} for a sequence of random, but extreme, disturbances while the solid
line is the sequence {z00 (x(i))} of optimal initial states. The sets z00 (x(i)) ⊕ R are shown
shaded in Figure 10.2.2.
Also shown in Figure 10.2.2 are the sets Zf and the set Xf = Zf ⊕ R which is the
effective terminal set for the ‘tube’ of trajectories illustrating that Xf is in general much
larger than R.

Figure 10.2.2: Simple RMPC tubes (the sets X00 (x(i)) = z00 (x(i)) ⊕ R, the actual trajectory x(i), and the terminal sets Zf and Xf = Zf ⊕ R in the (x1 , x2 ) plane)

10.3 Tube MPC – Optimized Robust Control Invariant Tube
Here we discuss an improved method for the construction of an optimized robust con-
trol invariant tube. The method is based on results reported in [RM05b]. The method
presented in the previous section is simple and easy to implement. However, the main disadvantage is the fact that, given any arbitrary stabilizing linear state feedback control law ν(y) = Ky, the corresponding minimal robustly positively invariant set R for system x+ = (A + BK)x + w and constraint set (Rn , W) specified in (10.2.3) does not necessarily satisfy the constraints (10.2.4) (see also Chapter 3 for a more detailed discussion). In
order to overcome this disadvantage we illustrate how to exploit results of Chapter 3 and
the tube–control policy parametrization in (10.1.9) and (10.1.10). The first step is to
construct an appropriate RCI set R for system x+ = Ax + Bu + w and constraint set
(X, U, W).

Optimized Robust Control Invariant Tube Ingredients

To reduce conservativeness we minimize an appropriate norm of the set R by exploiting


a relevant result established in Theorem 3.1. We briefly repeat some discussion from
Chapter 3 for the sake of completeness.
Let Mi ∈ Rm×n , i ∈ N and for each k ∈ N let Mk , (M0 , M1 , . . . , Mk−2 , Mk−1 ).
An appropriate characterization of a family of RCI sets for (10.1.1) and constraint set
(Rn , Rm , W) is given by the following sets for k ≥ n:
Rk (Mk ) , ⊕_{i=0}^{k−1} Di (Mk )W   (10.3.1)

where the matrices Di (Mk ), i ∈ Nk , k ≥ n are defined by:


D0 (Mk ) , I, Di (Mk ) , Ai + Σ_{j=0}^{i−1} Ai−1−j BMj , i ≥ 1   (10.3.2)

where Mk satisfies:
Dk (Mk ) = 0 (10.3.3)

Let Mk denote the set of all matrices Mk satisfying condition (10.3.3):

Mk , {Mk | Dk (Mk ) = 0} (10.3.4)

It is established in Theorem 3.1 that given any Mk ∈ Mk the set Rk (Mk ) is RCI for
system (10.1.1) and constraint set (Rn , Rm , W).
The feedback control law ν : Rk (Mk ) → Rm in Theorem 3.1 is a selection from the
set valued map:
U(x) , Mk W(x) (10.3.5)

where Mk ∈ Mk and the set of disturbance sequences W(x) is defined for each x ∈
Rk (Mk ) by:
W(x) , {w | w ∈ Wk , Dw = x}, (10.3.6)

where Wk , W × W × . . . × W and D = [Dk−1 (Mk ) . . . D0 (Mk )]. It is remarked in


Chapter 3 that a ν(·) satisfying Theorem 3.1 can be defined, for instance, as
follows:

ν(x) , Mk w0 (x) (10.3.7a)


w0 (x) , arg min_{w} {|w|2 | w ∈ W(x)}   (10.3.7b)

As already observed in Chapter 3 the function w0 (·) is piecewise affine, being the so-
lution of a parametric quadratic programme; it follows that the feedback control law
ν : Rk (Mk ) → Rm is piecewise affine (being a linear map of a piecewise affine function).
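A direct, if not particularly efficient, way of evaluating ν(·) on–line is simply to solve the small quadratic programme (10.3.7b) at each sample, as in the following illustrative sketch (using cvxpy); W is assumed polytopic, {w | Fw w ≤ gw }, the block matrices D and M are assumed to be assembled from Dk−1 (Mk ), . . . , D0 (Mk ) and the corresponding Mi blocks in a consistent order, and no attempt is made to exploit the parametric, piecewise affine structure of the solution.

import cvxpy as cp
import numpy as np

def nu_of_x(x, D, M, Fw, gw, k, p):
    # w0(x) = argmin{ |w|^2 : w in W^k, D w = x }, then nu(x) = M w0(x).
    # p is the dimension of a single disturbance w(i).
    w = cp.Variable(k * p)
    cons = [D @ w == x]
    cons += [Fw @ w[i * p:(i + 1) * p] <= gw for i in range(k)]   # each block in W
    cp.Problem(cp.Minimize(cp.sum_squares(w)), cons).solve()
    return M @ w.value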
It is shown in Chapter 3 that a suitable Mk can be obtained by solving the following
optimization problem:

P̄k : (M0k , α0 , β 0 , δ 0 ) = arg min_{Mk ,α,β,δ} {δ | (Mk , α, β, δ) ∈ Ω̄}   (10.3.8)

where the constraint set Ω̄ is defined by:

Ω̄ , {(Mk , α, β, δ) | Mk ∈ Mk , Rk (Mk ) ⊆ αX, U (Mk ) ⊆ βU,


(α, β) ∈ [0, 1] × [0, 1], qα α + qβ β ≤ δ} (10.3.9)

with Rk (Mk ) defined by (10.3.1), U (Mk ) defined by:


U (Mk ) , ⊕_{i=0}^{k−1} Mi W   (10.3.10)

and qα and qβ weights reflecting a desired contraction of state and control constraints.
The solution M0k to problem P̄k (which exists if Ω̄ ≠ ∅) yields a set

Rk0 , Rk (M0k )

and feedback control law


ν 0 (x) , M0k w0 (x)

satisfying:
Rk0 ⊆ α0 X, ν 0 (x) ∈ U (Mk ) ⊆ β 0 U, (10.3.11)

for all x ∈ Rk0 . In the sequel of this section we assume that there exists a RCI set
R , Rk0 , Rk (M0k ) for system x+ = Ax + Bu + w and constraint set (X, U, W).
The terminal constraint set can be any control invariant set for system z + = Az + Bv
and constraint set (Z, V) where the tighter constrain set (Z, V) is defined by:

Z , X ⊖ Rk (M0k ) and V , U ⊖ U (M0k ) (10.3.12)

Let for instance Zf be a RPI set for system z + = (A + BK)z and constraint set ZK .
The set Zf satisfies:
(A + BK)Zf ⊆ Zf and Zf ⊆ ZK (10.3.13)

where ZK , {z ∈ Z | Kz ∈ V}. We define the terminal set, as in the previous section (see (10.1.14)), by:
Xf , Zf ⊕ R (10.3.14)

The cost function is defined by (10.1.16) and (10.1.17) and we assume that the ter-
minal cost satisfies Assumption 10.3. The resultant optimal control problem is exactly
the same optimal control problem defined in (10.1.18).

Receding Horizon Optimized Tube Controller & Illustrative Example

The solution to PN (x) (10.1.18) allows for implementation of the following implicit robust
model predictive control law κ0N (·):

κ0N (x) , v00 (x) + ν 0 (x − z 0 (x)) (10.3.15)

By Proposition 9.1 and Proposition 1 in [LCRM04] the controller κ0N (·) ensures robust constraint satisfaction for any admissible disturbance sequence and it has the
properties established in Proposition 10.3 and Theorem 10.1.
Our illustrative example is a double integrator:

x+ = [1 1; 0 1] x + [1; 1] u + w   (10.3.16)

with

w ∈ W , {w ∈ R2 | |w|∞ ≤ 0.5},

x ∈ X = {x ∈ R2 | x1 ≤ 1.85, x2 ≤ 2}

and
u ∈ U = {u | |u| ≤ 2.4},

where xi is the ith coordinate of a vector x. The cost function is defined by (10.1.17)
with Q = 100I, R = 100; the terminal cost Vf (x) is the value function (1/2)x′ Pf x for
the optimal unconstrained problem for the nominal system. The horizon is N = 21. The
design parameters for the minimization problem P̄k (see (10.3.8)) defining the components

of feedback actions of control policy are k = 5, qα = qβ = 1. The optimization problem
P̄k , which in this case is a linear program, yielded the following matrix M0k :

M0k = [−0.3833 0 0.15 0.233 0; −1 0 0 0 0]′   (10.3.17)

The tube cross-section is constructed by using the set R = Rk (M0k ). The sequence
of the sets Xi , i = 0, 1, . . . , 21, where Xi is the domain of Vi0 (·) and the terminal set
Xf = Zf ⊕R where Zf is the maximal positively invariant set for system z + = (A+BK)z
under the tighter constraints Z = X ⊖ R and V = U ⊖ U (M0k ) where K is unconstrained
DLQR controller for (A, B, Q, R), is shown in Figure 10.3.1.

Figure 10.3.1: Controllability Sets Xi , i = 0, 1, . . . , 21

An RMPC tube {X00 (x(i)) = z00 (x(i)) ⊕ R} for initial state x0 = (0.5, −8.5)′ is shown in
Figure 10.3.2 for a sequence
of random admissible disturbances. The dash-dot line is the actual trajectory {x(i)} due
to the disturbance realization while the dotted line is the sequence {z00 (x(i))} of optimal
initial states for the corresponding nominal system.
Figure 10.3.2: RMPC Tube Trajectory

10.4 Tube MPC – Method III
Here we discuss a different tube and policy parametrization by exploiting the results
reported in [LCRM04]. The tube X = {X0 , X1 , . . . , XN } now has the more general form

Xi = zi ⊕ αi R, ∀i ∈ NN (10.4.1)

where the sequences {zi } and {αi } can be freely chosen – the sequence {zi } is no longer
required to satisfy the nominal difference equation (10.1.2). The sequence {αi } permits
the size of Xi to vary. We refer to each element zi of the sequence {zi } as the center of
Xi . The set R is a polytope and is not necessarily RCI:

R = convh{r1 , . . . , rp } (10.4.2)

For each i, we define


Xi , convh{x1i , . . . , xpi } (10.4.3)

where, for each j:


xji , zi + αi rj . (10.4.4)

With each tube X is associated a tube control sequence U , {U0 , U1 , . . . , UN −1 } where


for each i ∈ NN −1 :
Ui , {u1i , . . . , upi } (10.4.5)

where for each j, control uji is associated with vertex xji .


For any (X, U ) with X , convh{x1 , . . . xp } and U , {u1 , . . . , up }, let the function
(control law) µX,U : X → convh{U } be defined as follows:
µX,U (y) , Σ_{j=1}^{p} λj (y)uj , y ∈ X (10.4.6)

λ(y) , arg min_λ {|λ|2 | Σ_{j=1}^{p} λj xj = y, λ ∈ Λ} (10.4.7)

Λ , {λ | λj ≥ 0, j ∈ N+_p , Σ_{j=1}^{p} λj = 1}. (10.4.8)

The function µX,U : X → U is piecewise affine (affine if R is a simplex) [LCRM04]. The


(time-varying piecewise affine) policy associated with the state – control tube pair (X, U)
is
π(X, U) , {µX0 ,U0 (·), . . . , µXN −1 ,UN −1 (·)} (10.4.9)
If uji ∈ U for all j ∈ N+_p , then µi (y; x) = µXi ,Ui (y) , Σ_{j=1}^{p} λj (y)uji ∈ U for all y ∈ Xi
since U is convex and λ(y) ∈ Λ; hence (10.1.7) is satisfied. Finally, if (Xi , Ui ) satisfy

Axji + Buji ∈ Xi+1 ⊖ W, ∀j ∈ N+_p (10.4.10)

for all i ∈ NN −1 , then (10.1.8) holds.
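For illustration, a pointwise evaluation of the vertex-interpolation control law (10.4.6) – (10.4.8) can be sketched as below; the minimum-norm multiplier λ(y) is obtained from a small quadratic programme. The function and argument names are illustrative assumptions, not part of the development above.

    import cvxpy as cp

    def mu_XU(y, X_vertices, U_vertices):
        """Evaluate mu_{X,U}(y) of (10.4.6): X_vertices is a (p, n) array whose rows
        are the vertices x^j of the cross-section X, U_vertices a (p, m) array of the
        associated vertex controls u^j. The multiplier lambda(y) is the minimum-norm
        element of the simplex Lambda reproducing y, as in (10.4.7)-(10.4.8)."""
        p = X_vertices.shape[0]
        lam = cp.Variable(p, nonneg=True)
        constraints = [X_vertices.T @ lam == y, cp.sum(lam) == 1]
        cp.Problem(cp.Minimize(cp.sum_squares(lam)), constraints).solve()
        if lam.value is None:
            raise ValueError("y does not lie in convh{x^1, ..., x^p}")
        return U_vertices.T @ lam.value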

The decision variable for the optimal control problem is θ ∈ R(N +1)(n+1)+N mp defined
by
θ , {a, z, U} (10.4.11)

where a , {α0 , α1 , . . . , αN }, z , {z0 , z1 , . . . , zN } and U , {U0 , U1 , . . . , UN −1 }. Clearly


the initial state x and the decision variable θ (i.e. a, z and U) specify the tube X (since
R is fixed). For each x, let ΘN (x) denote the set of θ satisfying the constraints discussed
in (10.1.4) – (10.1.8):

ΘN (x) = {θ | a ≥ 0, Xi ⊆ X, Ui ⊆ Up , XN ⊆ Xf ⊆ X,
Axji + Buji ∈ Xi+1 ⊖ W ∀(i, j) ∈ NN −1 × N+_p } (10.4.12)

where a ≥ 0 implies αi ≥ 0 for all i ∈ N+_N . Let

XN , {x | ΘN (x) 6= ∅}. (10.4.13)

For given x, the constraints in (10.4.12) are affine in a, z and U; for instance, the
constraint Xi ⊆ X is equivalent to zi + αi rj ∈ X for all j ∈ N+_p , where X is a polytope.
The set ΘN (x) is therefore a polyhedron for each x ∈ XN . Note that satisfaction of a
difference equation is not required; the difference equation is replaced by the difference
inclusion (10.4.12).
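To make the affine reformulation concrete, the containment Xi ⊆ X with X = {x | Hx x ≤ hx } reduces to the finitely many linear inequalities Hx (zi + αi rj ) ≤ hx , j ∈ N+_p . A minimal sketch of this constraint construction (with hypothetical names, using a generic convex-optimization modelling layer) is:

    import cvxpy as cp

    def cross_section_in_X(z_i, alpha_i, R_vertices, Hx, hx):
        """Linear constraints encoding X_i = z_i + alpha_i * R ⊆ X = {x : Hx x <= hx}.
        z_i is an n-dimensional decision variable, alpha_i a scalar decision variable
        and R_vertices a (p, n) array whose rows are the vertices r^j of R.
        A sketch of the vertex-wise reformulation underlying (10.4.12)."""
        return [Hx @ (z_i + alpha_i * r_j) <= hx for r_j in R_vertices]

    # usage sketch: z = cp.Variable(n), a = cp.Variable(nonneg=True), then
    # constraints += cross_section_in_X(z, a, R_vertices, Hx, hx)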
Remark 10.5 (Comment on X0 and U0 ) We remark that in [LCRM04] the state and
control tubes were required to satisfy X0 = {x} and U0 = {u0 }. This is equivalent to
imposing the additional constraints α0 = 0 and z0 = x; in this section we consider a slightly
modified parametrization in which we allow X0 = z0 ⊕ α0 R and U0 = {u10 , . . . , up0 }.
The following observation is a simple extension of Proposition 4 in [LCRM04] and is
along the lines of Proposition 9.1:
Proposition 10.4 (Tube Robust Constraint Satisfaction) Suppose x ∈ XN and θ ∈
ΘN (x). Let π denote the associated policy defined by (10.4.9). Then φ(i; x, π, w) ∈ Xi ⊆
X for all i ∈ NN −1 , µi (φ(i; x, π, w)) ∈ U for all i ∈ NN −1 , and φ(N ; x, π, w) ∈ Xf ⊆ X for
every initial state x ∈ X0 and every admissible disturbance sequence w (policy π steers
any initial state x ∈ X0 to Xf along a trajectory lying in the tube X, and satisfying, for
each i, state and control constraints for every admissible disturbance sequence).

Proof: Suppose xi ∈ Xi ; then xi = Σ_{j=1}^{p} λji (xi )xji and µi (xi ) = Σ_{j=1}^{p} λji (xi )uji . Hence
Axi + Bµi (xi ) = Σ_{j=1}^{p} λji (Axji + Buji ). But since Axji + Buji ∈ Xi+1 ⊖ W, λji ≥ 0 and
Σ_{j=1}^{p} λji = 1, it follows that Axi + Bµi (xi ) ∈ Xi+1 ⊖ W or xi+1 = Axi + Bµi (xi ) + w ∈ Xi+1
for all w ∈ W. But x0 , x ∈ X0 . By induction, xi = φ(i; x, π, w) ∈ Xi ⊆ X for all i ∈ NN
for every admissible disturbance sequence w ∈ WN . Since uji ∈ U for all i ∈ NN −1 ,
j ∈ N+_p , µi (xi ) ∈ U for all i ∈ NN −1 . The remaining assertions follow.

QeD.

In [LCRM04] the cost VN (x, θ) associated with a particular tube was defined by:
VN (x, θ) , Σ_{i=0}^{N −1} ℓ(Xi , Ui ) + Vf (XN ) (10.4.14)

where, with some abuse of notation, ℓ(X, U ) , Σ_{j=1}^{p} ℓ(xj , uj ) and Vf (X) , Σ_{j=1}^{p} Vf (xj )
where xj is, as above, the j th vertex of the polytope X and uj the j th element of U (uj
is the control associated with vertex xj ). In [LCRM04] it was assumed that ℓ(x, u) =
(1/2)[|x|2Q + |u|2R ] and that Vf (x) = (1/2)|x|2P where Q, R and P are all positive definite.
These ingredients yield the following tube optimal control problem PN (x):

PN (x) : min_θ {VN (x, θ) | θ ∈ ΘN (x)} (10.4.15)

Because VN (x, ·) is quadratic and ΘN (x) is polyhedral, PN (x) is a quadratic program.


The solution to PN (x) allowed for implementation of a set of tube controllers and a more
detailed discussion is given in [LCRM04].
Here, we will consider an interesting and special case that allows us to establish
satisfaction of Assumptions 9.1 – 9.3 and hence to establish relevant stability properties.
More precisely we will discuss the choice of the set R, the terminal set Xf and the cost
function VN (x, θ). We consider the cost function defined by:
VNs (x, θ) , Σ_{i=0}^{N −1} ℓ(Xi , Ui ) + Vf (XN ) (10.4.16)

with ℓ(X, U ) defined by

ℓ(X, U ) = (1/2)z ′ Qz + (1/2)ū′ Rū + (1/2)q(α − 1)2 (10.4.17)


where ū , (1/p) Σ_{j=1}^{p} uj and the ‘tube’ terminal cost:

Vf (X) = (1/2)z ′ P z + (1/2)q(α − 1)2 (10.4.18)

where Q, R and P are positive definite and q is a positive scalar.


First we assume that the set R is given by (10.2.3) and it satisfies (10.2.2) and (10.2.4):

(A + BK)R ⊕ W ⊆ R, R ⊆ interior(X) and KR ⊆ interior(U),

where u = Kx is a stabilizing linear state feedback control law. We again let Z , X ⊖ R


and V , U ⊖ KR. It follows from the definition of ℓ(·) that, with p = 2, there exist
c1 > 0 and c3 > c1 > 0 such that

c3 d(X, R) ≥ ℓ(X, U ) ≥ c1 d(X, R) (10.4.19)

Similarly, there exists a c2 > 0 such that c3 ≥ c2 > c1 > 0 and

Vf (X) ≤ c2 d(X, R) (10.4.20)

so that Assumption 9.3(i)–(iii) is satisfied. Now, the terminal set is defined as in the
previous two sections:
Xf , Zf ⊕ R (10.4.21)

where Zf satisfies:

(A + BK)Zf ⊆ Zf , Zf ⊆ Z and KZf ⊆ V. (10.4.22)

It follows that the set Φ , {z ⊕ R | z ∈ Zf } is set RCI for system x+ = Ax + Bu + w
and constraint set (X, U, W).
Suppose X = 0 ⊕ R so that X ⊆ Xf and z = 0. For any X = z ⊕ R ⊆ Xf (where
z ∈ Zf ), U = {u1 , . . . , up } is constructed as follows:

uj , Kxj , j ∈ N+_p . (10.4.23)

so that
µX,U (x) = µf (x) , Kx (10.4.24)
if X ⊆ Xf . Additionally, if R has the symmetry property that Σ_{j=1}^{p} rj = 0, then
ū = (1/p) Σ_{j=1}^{p} uj = Kz and

ℓ(X, U ) = (1/2)z ′ (Q + K ′ RK)z (10.4.25)

so that ℓ(X, U ) = 0 if X = S (which implies z = 0). Similarly Vf (X) = 0 if X = S


since then z = 0 and α = 1. Hence Assumption 9.3(i) –(ii) are satisfied. Since the
set Φ , {z ⊕ R | z ∈ Zf } is set RCI for system x+ = Ax + Bu + w and constraint set
(X, U, W), in order to establish satisfaction of Assumption 9.3(iii) it suffices to show
that:
Vf (X + ) + ℓ(X, U ) ≤ Vf (X)

where X + = (A + BK)z ⊕ R if X = z ⊕ R. But this follows, since we can choose P


in (10.4.18) to satisfy:

(A + BK)′ P (A + BK) + Q + K ′ RK ≤ P

and obtain

Vf (X + ) = (1/2)z ′ (A + BK)′ P (A + BK)z ≤ (1/2)z ′ (P − Q − K ′ RK)z.

Hence
Vf (X + ) ≤ Vf (X) − ℓ(X, U ).

where X + = (A + BK)z ⊕ R if X = z ⊕ R. Note that (A + BK)(z ⊕ R) ⊕ W ⊆


(A + BK)z ⊕ R, since (A + BK)R ⊕ W ⊆ R. For each x, let ΘsN (x) denote the set of θ satisfying
the constraints discussed in (10.1.4) – (10.1.8) and additionally that αN = 1:

ΘsN (x) = {θ | a ≥ 0, αN = 1, Xi ⊆ X, Ui ⊆ Up , XN ⊆ Xf ⊆ X,
Axji + Buji ∈ Xi+1 ⊖ W ∀(i, j) ∈ NN −1 × N+_p } (10.4.26)

Let
XNs , {x | ΘsN (x) 6= ∅}. (10.4.27)

and let

VNs0 (x) = min_θ {VN (x, θ) | θ ∈ ΘsN (x)}, (10.4.28)

θ0 (x) ∈ arg min_θ {VN (x, θ) | θ ∈ ΘsN (x)} (10.4.29)

so that X0 (x) = {X00 (x), X10 (x), . . . , XN0 (x)} where each Xi0 (x) = zi0 (x) ⊕ αi0 (x)R, i ∈ NN ,
and U0 (x) = {U00 (x), U10 (x), . . . , UN0 −1 (x)}. The corresponding policy π 0 (x) ,
{µ00 (·; x), µ01 (·; x), . . . , µ0N −1 (·; x)} where each µ0i (·; x) , µXi0 (x),Ui0 (x) (·) and µX,U (·) is de-
fined by (10.4.6) – (10.4.8). The resultant optimal control problem is a quadratic program,
hence Assumption 9.1 is satisfied. The implicit robust model predictive control law is
defined by:
κ0N (x) , µ00 (x; x) = µX00 (x),U00 (x) (x) (10.4.30)

Since Assumptions 9.1 – 9.3 are satisfied provided that XNs is bounded, we can state
the following stabilizing properties of the controller (10.4.30) by exploiting Theorem 9.1
and Theorem 9.2:
Theorem 10.2. (Robust Exponential Stability of R) Suppose that XNs is bounded,
then the set R is robustly exponentially stable for the controlled uncertain system x+ =
Ax + Bκ0N (x) + w, w ∈ W . The region of attraction is XNs .

10.4.1 Numerical Examples for Tube MPC – Method III

The numerical example is control of a constrained second order system defined by:
" # " #
1 1 0.5
x+ = x+ u+w (10.4.31)
0 1 1
The state constraints are
x ∈ X , {x | [0 1]x ≤ 1};
the control constraint is
u ∈ U , {u | |u| ≤ 1}
and the disturbance is bounded:

w ∈ W , {w | |w|∞ ≤ 0.1}.

The cost weights are Q = 100I, R = 1 and u = Kx is the ‘dead-beat’ controller for
(A, B). The terminal cost is (1/2)x′ P x where the P satisfying 9.2(iii) for the given Q, R and K
is:
P = [ 454 129 ; 129 267.5 ]
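One way to obtain a terminal weight satisfying the decrease condition P ≥ (A + BK)′ P (A + BK) + Q + K ′ RK used in the previous sections is to solve the corresponding discrete Lyapunov equation. The sketch below does this for the example data; the dead-beat gain is recovered via Ackermann's formula, which is an assumption about how K was chosen, and the value of P quoted above may correspond to a different scaling convention.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.5], [1.0]])
    Q = 100.0 * np.eye(2)
    R = np.array([[1.0]])

    ctrb = np.hstack([B, A @ B])                                   # controllability matrix
    K = -(np.array([[0.0, 1.0]]) @ np.linalg.inv(ctrb) @ np.linalg.matrix_power(A, 2))
    Acl = A + B @ K                                                # nilpotent closed loop (u = Kx convention)
    # P satisfies P = Acl' P Acl + Q + K' R K, hence the decrease condition with equality
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)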
The set R is the minimal robust positively invariant set (which is polytopic here) and the terminal
constraint set Xf satisfies 9.2(iii). The horizon length is N = 11. In Figure 10.4.1, a
‘tube’ state trajectory for tube contraction weight q = 10 and initial condition x0 = (7, 1)′
is shown; the dash-dot line is the actual trajectory {x(i)} for a sequence of random, but
extreme, disturbances while the solid line is the sequence {z00 (x(i))} of optimal initial tube
centers. The uncertain trajectory lies in the ‘tube’ {z00 (x(i)) ⊕ α00 (x(i))R}. In Figure 10.4.2,
a ‘tube’ state trajectory for tube contraction weight q = 1000 and initial condition x0 =
(7, 1)′ is shown; as before the dash-dot line is the actual trajectory {x(i)} for a sequence
of random, but extreme, disturbances while the solid line is the sequence {z00 (x(i))} of
optimal initial tube centers. The uncertain trajectory lies in the ‘tube’ {z00 (x(i)) ⊕ α00 (x(i))R}.

Figure 10.4.1: ‘Tube’ MPC trajectory with q = 10

Figure 10.4.2: ‘Tube’ MPC trajectory with q = 1000

10.5 Extensions and Summary


The tube model predictive controller presented in Section 10.4 can be extended to deal
with parametric uncertainty and bounded disturbances. This extension is briefly dis-
cussed in Section 5 of [LCRM04]. It is also possible to design the tube model predictive
controller for the case when the system being controlled is piecewise affine; this extension
is non-trivial and a more detailed discussion can be found in [RM04a]. Stability proper-
ties of the tube model predictive controller for these relevant extensions can be further
improved and the details will be presented elsewhere.
In the first section we have analysed a robust model predictive controller for con-
strained linear systems with bounded disturbances based on the recent and relevant
proposals [MSR05, RM05b]. A novel feature of the robust model predictive controller is
the fact that the decision variable in the (nominal) optimal control problem solved online
incorporates the initial state of the model as well as the control sequence. The resultant
value function is zero in the RPI set R permitting robust exponential stability of R to be
established. The controller is relatively simple, merely requiring, in the optimal control

problem solved online, optimization over the initial state z0 of the model and a sequence
of control actions subject to satisfaction of a tighter set of constraints than in the original
problem; the optimal control problem is a standard quadratic program of approximately
the same complexity as that required for conventional model predictive control (the di-
mension of the decision variable is increased by n). A modest improvement has been
proposed and an appropriate numerical example was given.
A set of necessary ingredients for implementation of the method was discussed in
the second and the third sections. A moderate improvement of the method reported
in [MSR05] has been discussed in the second section. The main contribution of the
third section of this chapter is a simple tube controller that ensures robust exponential
stability of R, a RCI set – the ‘origin’ for the uncertain system. The complexity of the
corresponding robust optimal control problem is marginally increased compared with
that for conventional model predictive control. A set of necessary ingredients ensuring
robust exponential stability has been identified. The proposed scheme is computationally
simpler than the schemes proposed in [Lö03a, vHB03, KA04, LCRM04] and it has an
advantage over schemes proposed in [KRS00, CRZ01, ML01, Smi04, MSR05] because
the feedback component of the control policy, being piecewise affine, results in a smaller
tube cross-section.
In the fourth section a method for achieving robust model predictive control using
a more general parametrization of tubes has been presented and analyzed. The method
achieves a modest improvement over the disturbance invariant controller when the system
being controlled is linear and time-invariant but can also be used, unlike the disturbance
invariant controller, when the system is time-varying or subject to parameter uncertainty
(and bounded disturbances).

Part III

Parametric Mathematical
Programming in Control Theory

Chapter 11

Parametric Mathematical
Programming and Optimal
Control

It has long been an axiom of mine that the little things are infinitely the most important.

– Sir Arthur Conan Doyle

Recent advances in parametric mathematical programming (pMP) have enabled char-


acterizations of solutions to a wide range of optimal control problems. While the standard
sensitivity analysis is concerned with variation of the solution to a considered optimization
problem with respect to perturbations of the parameters, the parametric mathematical
programming is concerned with characterization of the solution for a set of the possible
values of parameters. In this section we refer to a particular optimization problem as
a parametric mathematical programming problem when characterization of the solution
depends on the certain vector of parameters. Some of the authors refer to these op-
timization problems as multi – parametric mathematical programming problems if the
parameter is not a scalar but a vector.
The first results on parametric programming appear to be reported in work by Gass
and Saaty [GS55b, GS55a]. Since this early work many authors have treated a variety
of the problems related to parametric programming. Relevant ideas can be found, for
instance, in work by Propoi and Yadukin [PY78a, PY78b] and Schechter [Sch87]. The
first book [Gal79] appeared in 1979 and is still widely quoted. A number of books on the
subject exist [BGD+ 83, Lev94]; in particular, [BGD+ 83] contains a comprehensive set
of results for parametric non-linear optimization, while in [Lev94] perturbation theory
in mathematical programming is elaborated.
Most of the first results appear to be concerned more with theoretical aspects, and the
algorithmic side was not developed; nevertheless these ideas have influenced recent
studies and stimulated the development of efficient computational procedures for

parametric mathematical programming problems. These procedures are based on com-
putational geometry and polyhedral algebra. Recently many results have appeared, for
example [PGM98, DP00, DBP02], extending earlier results to mixed integer linear and
quadratic, parametric programming and on the application of these new results to char-
acterization of the solution of a variety of optimal control problems involving linear and
hybrid dynamic systems [SDG00, SGD00, May01, BMDP02, SDPP02, MR02, KM02a,
MR03b, BBM00b, BBM00a, BBM03a]; it is the consideration of mixed integer problems
that permits the extension to hybrid systems.
In this chapter we will recall basic results related to parametric linear programs
(pLP’s), parametric quadratic programs (pQP’s), parametric piecewise affine programs
(pPAP’s) and parametric piecewise quadratic programs (pPQP’s). The reverse transfor-
mation procedure is then described and applied to constrained linear quadratic control and
optimal control of constrained piecewise affine discrete time systems1 .

11.1 Basic Parametric Mathematical Programs


This section is concerned with a generic parametric programming problem and introduces
a prototype algorithm that can be employed for solving this parametric programming
problem. A set of elementary results for pLP’s, pQP’s, pPAP’s and pPQP’s is also
provided.
We consider the parametric programming problem defined by:

P(θ) : Ψ0 (θ) , inf_y {Ψ(θ, y) | (θ, y) ∈ C } (11.1.1)

where y ∈ Rny is the decision variable, θ ∈ Rnθ the parameter, and C a given non–empty
subset of Rny × Rnθ . Let y 0 (θ) denote the set of minimizers, provided that the minimizer
for P(θ) exists, i.e.

y 0 (θ) , arg inf_y {Ψ(θ, y) | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.2)

where
Θ , {θ | ∃y such that (θ, y) ∈ C } (11.1.3)

We are interested in the characterization of the minimizer y 0 (θ), provided it exists, and
of the value function Ψ0 (θ) = Ψ(θ, y 0 (θ)), over the set of parameters θ for which there exists
a y such that (θ, y) ∈ C, i.e. the set Θ in (11.1.3).
optimal solution that, if known a priori to be satisfied, enables a simple solution to the
optimization problem to be obtained. An example of such a condition is that certain
constraints are active. Algorithm 3 is a prototype algorithm for solving the parametric
program P(θ).
We now recall a set of basic results for the cases when objective function is lin-
ear, quadratic, piecewise affine and piecewise quadratic. We do not discuss in detail
1 Sections 11.2 and 11.3 are based on [MR02, MR03b] and the primary contributor is Professor David Q.
Mayne. An extension of the original results is presented in Section 11.3.

Algorithm 3 Basic Algorithm for solving parametric programs
Require: Ψ(θ, y), C, Θ
Ensure: Set of Ri , yi0 (θ) and Ψ0i (θ), such that y 0 (θ) = yi0 (θ) and Ψ0 (θ) = Ψ0i (θ) for all
θ ∈ Ri and Θ = ∪i Ri
1: Set i = 1, pick a value for the parameter θ1 ∈ Θ and set S0 = ∅
2: repeat
3: Solve P(θ) for the given value θi of the parameter θ using a standard (non-parametric)
algorithm.
4: Identify a condition that holds at the optimal solution y 0 (θi ) to P(θi ). Then obtain
the solution to P(θ), for arbitrary θ, under the assumption that the condition (that
is known to be satisfied at (y 0 (θi ), θi )) is satisfied at (y 0 (θ), θ) (note that y 0 (θ) can
be set–valued).
5: Obtain the set Ri of parameter values θ in which the solution to the simplified
problem is optimal for the original problem P(θ). Form the union Si of Ri with
regions previously obtained (Si = Ri ∪ Si−1 ).
6: Pick a new value θi+1 6∈ Si of the parameter and increment i by one (i = i + 1).
7: until Θ \ Si = ∅
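A minimal Python skeleton of Algorithm 3 is sketched below; the problem-specific steps (Steps 3 – 6) are supplied as callables with hypothetical names, since their concrete form depends on the class of parametric program being solved.

    def solve_parametric_program(initial_theta, solve_at, condition_of,
                                 solve_under_condition, region_of, next_theta):
        """A minimal skeleton of Algorithm 3 (illustrative only):
          solve_at(theta)             -- solve P(theta) numerically            (Step 3)
          condition_of(solution)      -- e.g. the active constraint set        (Step 4)
          solve_under_condition(cond) -- simplified parametric solution
                                         (y_i(.), Psi_i(.))                    (Step 4)
          region_of(cond, y_fun)      -- region R_i where that solution is
                                         optimal for P(theta)                  (Step 5)
          next_theta(explored)        -- a parameter value not yet covered,
                                         or None once Theta is covered         (Steps 6-7)
        """
        pieces, explored = [], []
        theta = initial_theta
        while theta is not None:
            solution = solve_at(theta)
            condition = condition_of(solution)
            y_fun, value_fun = solve_under_condition(condition)
            region = region_of(condition, y_fun)
            pieces.append((region, y_fun, value_fun))
            explored.append(region)
            theta = next_theta(explored)
        return pieces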

specialisation of Algorithm 3 to these cases, since several obvious procedures have
been proposed in the literature, see for instance [PGM98, BMDP02, DBP02, SDG00,
KM02a, MR03b, BBM03b]. Instead we will provide additional discussion, in
Sections 11.2 and 11.3, for Steps 4 and 5 of the basic Algorithm 3.
First we introduce the following definitions:

Definition 11.1 (Cover and partition of a given set) A family of sets P , {Pi | i ∈ I } is
a (closed) cover of a (closed) set X ⊆ Rn if the index set I is finite, each Pi is a (closed)
set and X = ∪i∈I Pi . A family of sets P , {Pi | i ∈ I } is a (closed) partition of a (closed)
set X ⊆ Rn if the index set I is finite, each Pi is a (closed) set and X = ∪i∈I Pi and for
every i 6= j, i, j ∈ I it holds that interior(Pi ) ∩ interior(Pj ) = ∅.

Definition 11.2 (Polyhedral cover and partition of a polygon) A family of sets P ,


{Pi | i ∈ I } is a (closed) polyhedral cover of a (closed) polygon X ⊆ Rn if it is cover of
X and each Pi is a (closed) polyhedron. A family of sets P , {Pi | i ∈ I } is a (closed)
polyhedral partition of a (closed) polygon X ⊆ Rn if it is partition of X and each Pi is a
(closed) polyhedron.
In this section we slightly deviate from the standard notation and we use P X , I X
and PiX to denote, respectively, a polyhedral cover of X , its associated index set and the
ith polyhedron in the cover.

Remark 11.1 (Comment on Definition 11.2) Note that the definition of a polyhedral
cover given here does not require that each Pi have a non-empty interior, nor does it

require that P , {Pi | i ∈ I } be a partition of X . Note also that our use of the term
cover is stronger than the commonly-used definition, where a cover is a collection of
sets P , {Pi | i ∈ I } such that X ⊆ ∪i∈I Pi — we require equality and not the weaker
condition of inclusion.

Definition 11.3 (Piecewise Affine and Quadratic Function on a cover)

• A function ψ : X → Rn is said to be piecewise affine on a cover P , {Pi | i ∈ I }


of X ⊆ Rm if it satisfies

ψ(x) = Ki x + ki , ∀x ∈ Pi , i ∈ I,

for some Ki , ki , i ∈ I.

• A function ψ : X → Rn is said to be piecewise quadratic on a cover P , {Pi | i ∈ I }


of X ⊆ Rm if it satisfies

ψ(x) = x′ Qi x + li x + mi , ∀x ∈ Pi , i ∈ I,

for some Qi , li , mi i ∈ I.

Now, we recall a basic result on the nature of the solution to a pLP [Gal95, BBM03b,
RKM04], where the cost is a linear/affine function of the decision variable y and pa-
rameter θ and the constraints on the decision variables and parameters are given by a
non – empty polytope. The reader is referred, for instance, to [BBM03b] for details of a
geometric algorithm for computing the solution to a pLP.
Theorem 11.1. (Solution to a pLP) If

Ψ0 (θ) , inf_y {l′ θ + m′ y + n | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.4a)
y 0 (θ) , arg inf_y {l′ θ + m′ y + n | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.4b)

where (l, m, n) ∈ Rnθ × Rny × R, C is a (closed) polyhedron and the (closed) polyhedron

Θ , {θ | ∃y such that (θ, y) ∈ C } , (11.1.5)

then Ψ0 : Θ → R is a continuous (convex), piecewise affine function on a (closed) poly-


hedral cover of Θ. Furthermore, provided y 0 (θ) exists for all θ ∈ Θ, then there exists a
continuous, piecewise affine function2 υ : Θ → Rny on a (closed) polyhedral cover of Θ
such that υ(θ) ∈ y 0 (θ) for all θ ∈ Θ.
A graphical illustration of the solution to pLP is given in Figure 11.1.1.
Next, we recall a basic result on the nature of the solution to a strictly convex pQP,
where the cost is a quadratic function of the decision variable y and parameter θ and
the constraints on the decision variables and parameters are given by a polytope. The
following result can be found in, for example, [BMDP02, SDG00, MR03b].
2 Note that, in general, y 0 (θ) is set-valued for all θ ∈ Θ.

Figure 11.1.1: Graphical Illustration of Solution to pLP

Theorem 11.2. (Solution to a strictly convex pQP) If



Ψ0 (θ) , inf_y {θ′ Qθθ θ + θ′ Qθy y + y ′ Qyy y + l′ θ + m′ y + n | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.6a)
y 0 (θ) , arg inf_y {θ′ Qθθ θ + θ′ Qθy y + y ′ Qyy y + l′ θ + m′ y + n | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.6b)

where (Qθθ , Qθy , Qyy , l, m, n) ∈ Rnθ ×nθ × Rnθ ×ny × Rny ×ny × Rnθ × Rny × R, Qyy > 03
and C is a (closed) polyhedron and the (closed) polyhedron

Θ , {θ | ∃y such that (θ, y) ∈ C } , (11.1.7)

then Ψ0 : Θ → R is a continuous (convex), piecewise quadratic function on a (closed)


polyhedral partition of Θ. Furthermore, provided y 0 (θ) exists for all θ ∈ Θ, then y 0 :
Θ → Rny is a continuous, piecewise affine function on a (closed) polyhedral partition of
Θ.

Remark 11.2 (Solution to a convex pQP) Theorem 11.2 is easily extended to the case
of convex pQP’s.
We now recall the following result [KM02a], which characterizes the solution to a
pPAP, where the cost is a piecewise affine function of the decision variables y and pa-
rameters θ and polyhedral covers are given for the constraints on the decision variables
and parameters. Since the proof is constructive, it is also recalled below.
3 Qyy > 0 ⇔ y ′ Qyy y > 0, ∀y 6= 0.

Theorem 11.3. (Solution to a pPAP [KM02a]) Let Ψ : D → R, where D is a (closed)
polygon, be a piecewise affine function of the form

Ψ(θ, y) , li′ θ + m′i y + ni , ∀(θ, y) ∈ PiD , i ∈ I D , (11.1.8)



where P D , {PiD | i ∈ I D } is a (closed) polyhedral cover of D and (li , mi , ni ) ∈ Rnθ ×
Rny × R for all i ∈ I D .

If P C , {PiC | i ∈ I C } is a (closed) polyhedral cover of the (closed) polygon C,

Ψ0 (θ) , inf_y {Ψ(θ, y) | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.9a)

y 0 (θ) , arg inf_y {Ψ(θ, y) | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.9b)

where
Θ , {θ | ∃y such that (θ, y) ∈ C ∩ D } , (11.1.10)
then Θ is a (closed) polygon and Ψ0 : Θ → R is piecewise affine on a polyhedral cover
of Θ. Furthermore, provided y 0 (θ) exists for all θ ∈ Θ, then there exists a function
υ : Θ → Rny that is piecewise affine on a polyhedral cover of Θ and satisfies υ(θ) ∈ y 0 (θ)
for all θ ∈ Θ.

Proof: For each (i, j) ∈ I C × I D , let Θi,j be the orthogonal projection of the (closed)
polyhedron PiC ∩ PjD onto the θ-space, i.e.

Θi,j , {θ | ∃y such that (θ, y) ∈ PiC ∩ PjD }, ∀(i, j) ∈ I C × I D . (11.1.11)

If PiC ∩ PjD is non-empty, then Θi,j is a (closed) polyhedron, hence Θ = ∪i,j Θi,j is a
(closed) polygon.
From Theorem 11.1 it follows that the function Ψ0i,j : Θi,j → R, defined as

Ψ0i,j (θ) , inf_y {lj′ θ + m′j y + nj | (θ, y) ∈ PiC ∩ PjD } , ∀θ ∈ Θi,j (11.1.12a)

is a convex, piecewise affine function on a (closed) polyhedral cover of Θi,j .


Consider now the index set

K(θ) , {(i, j) ∈ I C × I D | θ ∈ Θi,j } (11.1.13)

and note that for all θ ∈ Θ,



Ψ0 (θ) = inf_{y,i} {Ψ(θ, y) | (θ, y) ∈ PiC ∩ D, i ∈ I C } (11.1.14a)
= inf_{y,i,j} {lj′ θ + m′j y + nj | (θ, y) ∈ PiC ∩ PjD , (i, j) ∈ I C × I D } (11.1.14b)
= inf_{i,j} {Ψ0i,j (θ) | (i, j) ∈ K(θ)} . (11.1.14c)

Since Ψ0 (·) is the pointwise-infimum of a finite set of functions {Ψ0i,j (·)}, where each
Ψ0i,j (·) is piecewise affine over a polyhedral cover of its domain Θi,j , it follows that Ψ0 (·)
is piecewise affine on a polyhedral cover of Θ.
The claim that there exists a piecewise affine function υ : Θ → Rny such that υ(θ) ∈
y 0 (θ) for all θ ∈ Θ, follows from Theorem 11.1 and the above.

QeD.

A graphical illustration of the solution to pPAP is given in Figure 11.1.2. The so-
lution to the illustrative pPAP , obtained by Theorem 11.3, is defined over partitions
T1 , T2 , T3 and T4 . The minimizer defined over partitions T1 and T2 is shown by bold
solid line, while the bold dotted line illustrates the minimizer defined over partitions
T3 and T4 . The minimizer in partition T1 is equal to the minimizer of a problem
Ψ01 (θ) , inf y {l1′ θ + m′1 y + n1 | (θ, y) ∈ D1 }. The minimizer in partition T4 is equal to the
minimizer of a problem Ψ02 (θ) , inf y {l2′ θ + m′2 y + n2 | (θ, y) ∈ D2 }, while the minimizer
in partitions T2 and T3 is characterized according to Theorem 11.3. In particular, the min-
imizer in partition T2 is equal to minimizer of Ψ01 (θ) , inf y {l1′ θ + m′1 y + n1 | (θ, y) ∈ D1 }
in the set R21 \ R12 . Finally, the minimizer in partition T3 is equal to minimizer of
Ψ02 (θ) , inf y {l2′ θ + m′2 y + n2 | (θ, y) ∈ D2 } in the set R12 .

Figure 11.1.2: Graphical Illustration of Solution to pPAP

We finally recall the following result, which characterizes the solution to a strictly
convex pPQP, where the cost is a strictly convex piecewise quadratic function of the
decision variables y and parameters θ and polyhedral covers are given for the constraints
on the decision variables and parameters. This result is an appropriate but obvious
extension of Theorem 11.3.

Theorem 11.4. (Solution to a strictly convex pPQP) Let Ψ : D → R, where D is a


(closed) polygon, be strictly piecewise quadratic of the form

Ψ(θ, y) , θ′ Qθθ i θ + θ′ Qθy i y + y ′ Qyy i y + li′ θ + m′i y + ni , ∀(θ, y) ∈ PiD , i ∈ I D , (11.1.15)


where P D , {PiD | i ∈ I D } is a (closed) polyhedral cover of D and
(Qθθ i , Qθy i , Qyy i , li , mi , ni ) ∈ Rnθ ×nθ × Rnθ ×ny × Rny ×ny × Rnθ × Rny × R, Qyy i > 0
for all i ∈ I D .


If P C , {PiC | i ∈ I C } is a (closed) polyhedral cover of the (closed) polygon C,

Ψ0 (θ) , inf_y {Ψ(θ, y) | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.16a)

y 0 (θ) , arg inf_y {Ψ(θ, y) | (θ, y) ∈ C } , ∀θ ∈ Θ (11.1.16b)

where
Θ , {θ | ∃y such that (θ, y) ∈ C ∩ D } , (11.1.17)

then Θ is a (closed) polygon and Ψ0 : Θ → R is piecewise quadratic on a cover of Θ.


Furthermore, provided y 0 (θ) exists for all θ ∈ Θ, then there exists a function υ : Θ → Rny
that is piecewise affine on a cover of Θ such that υ(θ) ∈ y 0 (θ) for all θ ∈ Θ.

Remark 11.3 (Cover in Theorem 11.4 is not, in general, a polyhedral cover) Note that
in Theorem 11.4 we have used a cover because some of the sets defining the
cover are not, in general, polyhedral: the comparison of two quadratics
leads to elliptical boundaries.

Remark 11.4 (Solution to a convex pPQP) Theorem 11.4 can be extended to convex
pPQP’s.

11.2 Constrained Linear Quadratic Control


The problem of determining an optimal state feedback controller for linear systems with
quadratic cost and polyhedral state and control constraints (CLQR) was recently consid-
ered a very difficult problem. Determination of an optimal control law is usually achieved
by employing dynamic programming (DP) which is very difficult unless the value function
can be simply parameterized. The solutions to many constrained optimal control prob-
lems for linear systems subject to polyhedral state and control constraints with linear or
quadratic cost have recently been obtained by a relatively simple reformulation and by
using the concept of pMP. The structure of these problems was exploited by using a
transformation procedure. The basic idea of the transformation procedure is as follows. A set of condi-
tions is formulated, each potentially satisfied by the optimal solution. Each condition, if
known a priori, enables a simple solution to the optimal control problem to be obtained.
The transformation procedure therefore assumes a particular condition is satisfied, solves
the optimal control problem under this assumption, and then, as a final step, determines
the set of initial states for which this condition is satisfied by the optimal controller. The
state feedback control is then defined on this set. By repeating this procedure for every
other condition, not lying in the already determined sets, the state feedback optimal
control can be determined for a specified region of the state space. Instead of finding the
optimal control for a given initial state, the reverse transformation method determines
the set of states for which the optimal control satisfies a certain condition. This approach
is powerful because the condition to be satisfied can often be chosen to make determi-

nation of the optimal control simple and yields the optimal control law rather than an
optimal control sequence.
To make reverse transformation more concrete, consider the constrained linear
quadratic control problem for discrete time systems. Consider the linear discrete time
invariant system defined by:

x+ = Ax + Bu, y = Cx + Du (11.2.1)

where x ∈ Rn is the current state (assumed known), u ∈ Rm the current control, y ∈ Rp


the constrained output, and x+ ∈ Rn the successor state. The system is subject to the
‘hard’ constraint
y(i) ∈ Y, i ∈ NN −1 , (11.2.2)

where for each i y(i) = Cφ(i; x, u) + Du(i), and the terminal constraint set is

x(N ) ∈ Xf (11.2.3)

where x(N ) = φ(N ; x, u). The sets Y and Xf are polyhedral and polytopic, respectively,
each of them containing the origin in its interior. The cost function is defined by:
V (x, u) , Σ_{i=0}^{N −1} ℓ(x(i), u(i)) + Vf (x(N )) (11.2.4)

where x(i) = φ(i; x, u) (is the solution of difference equation (11.2.1) at time i if the
initial state is x and the control sequence is u). The functions ℓ(·) and Vf (·) are the path
cost and the terminal cost and are quadratic and positive definite:

ℓ(x, u) , (1/2)|x|2Q + x′ Su + (1/2)|u|2R , Vf (x) , (1/2)|x|2P (11.2.5)

so that [ Q S ; S′ R ] > 0
and Pf > 0. The constraints y(i) ∈ Y, i ∈ NN −1 and x(N ) ∈ Xf constitute an implicit
constraint on the control sequence u:

u ∈ U(x) (11.2.6)

where

U(x) , {u | Cφ(i; x, u) + Du(i) ∈ Y, i ∈ NN −1 , φ(N ; x, u) ∈ Xf } (11.2.7)

The optimal control problem is easily formulated as the parametric quadratic program
P(x) (in which x is the parameter and u the decision variable):

P(x) : min_u {V (x, u) | u ∈ U(x)} (11.2.8)

where
V (x, u) = (1/2)|x|2Wxx + x′ Wxu u + (1/2)|u|2Wuu (11.2.9)

and
U(x) = {u | M u ≤ N x + p} (11.2.10)

for some Wxx , Wxu , Wuu , M, N and p. Recall that (in order to simplify notation), u,
wherever it occurs in algebraic expressions (such as u′ Wuu u or M u above) denotes the
vector form (u(0)′ , u(1)′ , . . . , u(N − 1)′ )′ of the sequence. The same convention applies
to other sequences.
Let Nu denote the number of rows of M . The optimal value function V 0 (·) and the
optimal control u0 (·) are functions of the parameter x; the objective is determination of
these functions rather than their values at a given initial state x. We refer, somewhat
unconventionally, to u0 (·) as a control law. Usually control law refers to the map x 7→
u(x) from current state to current control action; here control law refers to the map
x 7→ u0 (x) from state to control sequence.
The gradient of V (·) with respect to u is:

∇u V (x, u) = Wuu u + Wux x (11.2.11)

The domain of V 0 (·) (and u0 (·)) is:

XN , {x | U(x) 6= ∅} (11.2.12)

The set XN can be efficiently computed in a number of ways, since XN is given by:

XN = ProjX {(x, u) | − N x + M u ≤ p} (11.2.13)

Another alternative is to employ the following standard set recursion:

Xi , {x | ∃u s.t. Cx + Du ∈ Y, Ax + Bu ∈ Xi−1 }, i ∈ N+_N , X0 , Xf (11.2.14)

in which case

Xi = ProjX {(x, u) | Cx + Du ∈ Y, Ax + Bu ∈ Xi−1 }, i ∈ N+_N , X0 , Xf (11.2.15)

Remark 11.5 (CI property of {Xi }) If the set X0 = Xf is a CI set for the system x+ =
Ax + Bu and constraint set Y, then the set sequence {Xi } is a monotonically non–decreasing
sequence of CI sets, i.e. Xi−1 ⊆ Xi for each i ∈ NN .
We assume that Nu ≥ N m (the dimension of each u(i) is m) and that U(x) is a (compact)
polytope at each x in XN ; this is the case, for example, if y = u and Xf = Rn so that
U(x) = YN . Let:
Z , {(x, u) | − N x + M u ≤ p} (11.2.16)

so that XN = ProjX Z and U(x) = {u | (x, u) ∈ Z}.


The following result is a consequence of Theorem 11.2. The proof of this result can
be found in [MR03b, MRVK04].
Proposition 11.1 (Properties of the value function V 0 (·) and the optimal control law
u0 (·)) Suppose that V : Z → R is a strictly convex, continuous function and that Z

defined in (11.2.16) is a polytope (compact polyhedron). Then for all x ∈ XN = ProjX Z
the solution to P(x) exists and is unique. The value function V 0 : XN → R is continuous
with domain XN , and the optimal control law u0 : XN → RN m is continuous on XN .
We proceed by discussing in more detail Steps 4 and 5 of Algorithm 3. Implicit in [DG99,
SGD00, DBP02, BMDP02], in which the solution of the constrained linear quadratic
problem is obtained, is the following transformation that provides a systematic procedure
for determining V 0 (·) and u0 (·).
For each x let I 0 (x) denote the set of active constraints (i ∈ I 0 (x) if and only if
Mi u0 (x) = Ni x + pi where the subscript i is used to indicate the ith row). For each
I ⊂ NNu , there exists a region RI ⊂ Rn , possibly empty, such that I 0 (x) = I for all
x ∈ RI . This region is simply determined. The equality constrained quadratic optimal
control problem

PI (x) : min_u {V (x, u) | Mi u = Ni x + pi , i ∈ I} (11.2.17)

is solved to obtain the value function VI0 (·) and the optimal control u0I (·) :

VI0 (x) = (1/2)x′ PI x + p′I x + qI (11.2.18)


u0I (x) = KI x + kI (11.2.19)

For each I ⊆ NNu , let I c denote the complement of I, MI the matrix whose rows are
Mi , i ∈ I and LI a matrix such that {y | LI y ≤ 0} = {MI′ ν | ν ≥ 0}, the polar cone of
the cone F(x) , {h | MI h ≤ 0} of feasible directions for P(x) at u0I (x) .

Theorem 11.5. (Constrained Linear Quadratic Regulator – Characterization of the


solution) The control law u0I (x) = KI x + kI is feasible for the original problem P(x) at
those states x satisfying

Mi (KI x + kI ) ≤ Ni x + pi , i ∈ I c (11.2.20)

and is optimal for problem P(x) at those x ∈ RI defined by


RI , {x | Mi (KI x + kI ) ≤ Ni x + pi , i ∈ I c , LI ∇u V (x, u0I (x)) ≥ 0} (11.2.21)

Proof: (i) The control law satisfies Mi (KI x + kI ) = Ni x + pi , i ∈ I for all x by


construction. Hence the control constraint u0I (x) ∈ U(x) is satisfied at all x where
Mi (KI x + kI ) ≤ Ni x + pi , i ∈ I c . (ii) By (i), the control u0I (x) is feasible at every
point x in RI . At any x ∈ RI , a feasible direction h ∈ RN m for P (x) satisfies Mi h ≤ 0
for all i ∈ I (since I 0 (x) = I for all x ∈ RI ). Let g , ∇u V (x, u0I (x)); the condition
LI g = LI [Wuu (KI x + kI ) + Wux x] ≥ 0 ensures −g = MI′ ν for some ν ≥ 0. Hence at any
x ∈ RI , the directional derivative du V (x, u0I (x); h) = g ′ h = −ν ′ MI h ≥ 0 for any feasible
direction h (MI h ≤ 0). Since u → V (x, u) is convex, the optimality of u0I (x) at every
x ∈ RI follows.

QeD.

The proof is direct and simple and avoids technical conditions required in previous proofs
that use multipliers. The control law u0I (·) is optimal, for P(x), in the (possibly empty)
set RI .

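For a fixed active set I, obtaining (KI , kI ) in (11.2.19) amounts to a single linear solve of the KKT system of the equality constrained problem (11.2.17). A minimal sketch is given below, assuming Wuu positive definite and MI of full row rank; the function name is illustrative.

    import numpy as np

    def equality_constrained_law(Wuu, Wux, MI, NI, pI):
        """Compute (KI, kI) of (11.2.19) by solving the KKT system of (11.2.17):
            [Wuu  MI'] [u ]   [-Wux x   ]
            [MI    0 ] [nu] = [NI x + pI].
        The solution u is affine in x, i.e. u0_I(x) = KI x + kI."""
        Nu, nI = Wuu.shape[0], MI.shape[0]
        KKT = np.block([[Wuu, MI.T], [MI, np.zeros((nI, nI))]])
        rhs_x = np.vstack([-Wux, NI])                   # part of the RHS multiplying x
        rhs_0 = np.concatenate([np.zeros(Nu), pI])      # constant part of the RHS
        KI = np.linalg.solve(KKT, rhs_x)[:Nu, :]
        kI = np.linalg.solve(KKT, rhs_0)[:Nu]
        return KI, kI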
Corollary 11.1 (Solution Structure) The value function V 0 (·) for the constrained linear
quadratic optimal control problem is piecewise quadratic and continuous, and the optimal
control law u0 (·) is piecewise affine and continuous. The value function and optimal
control law are defined by

V 0 (x) , VI0 (x), x ∈ RI


u0 (x) , u0I (x), x ∈ RI

for every subset I of NNu . The sets RI are polytopes, and the set of sets R , {RI , RI 6=
∅, I ⊆ NNu } is a polyhedral partition of XN , the domain of V 0 (·) (and u0 (·)).
By choosing all possible subsets I of NNu , the value function V 0 (·) and the optimal
control law u0 (·) can be determined, as well as their domain XN . A more satisfactory
procedure is to employ the active constraint sets I 0 (x) at suitably selected values of
the state x; each x so selected lies in the polytope RI 0 (x) . This fact may be used to
determine a suitable value for the next initial state x. The value function is piecewise
quadratic, and the optimal control law piecewise affine, possessing these properties on
the polyhedral partitions RI of XN , which is also polyhedral. Of course, many of the sets
RI will be empty; the choice of I can be facilitated by choosing I to be I 0 (x) for given
x, computing RI , selecting a new x close to, but not in RI , and repeating the process.
An illustrative example [CKR01] is shown in Figure 11.2.1. The system parameters are
C = 0, D = 1, R = 0.1, Y = {y | |y| ≤ 1}, N = 5 and
" # " # " #
1 0.1 0 1 0
A= , B= , Q= ,
0 1 0.0787 0 0

The terminal set Xf is the maximal positively invariant set for the system x+ = (A + BK)x
and constraint set {x | (C + DK)x ∈ Y} where K is the optimal unconstrained DLQR controller
for (A, B, Q, R) and the terminal cost is the associated solution of the discrete time algebraic
Riccati equation.

11.3 Optimal Control of Constrained Piecewise Affine Systems
The problem considered here is the optimal control of a piecewise affine discrete-time
system defined by
x+ = f (x, u) (11.3.1)

Figure 11.2.1: Regions RI for a second order example

where x and u denote, respectively, the current state and control; x+ denotes the successor
state. The function f (·) is continuous and affine in each member of a finite
polyhedral cover P , {Pi , i ∈ NJ } of the region of state-control space of interest. The
system therefore satisfies:

x+ = Ai x + Bi u + ci for all (x, u) ∈ Pi (11.3.2)

so that f (x, u) = Ai x + Bi u + ci for all (x, u) ∈ Pi and Ai x + Bi u + ci = Aj x + Bj u + cj


for all (x, u) ∈ Pi ∩ Pj . Recall that, for each i ≥ 0, φ(i; x, u) denotes the solution at
time i of (11.3.2) if the initial state (at time 0) is x and the control input sequence is u.
Previous research [BBM00b] considers the case when the cost V (x, u) is ℓ∞ , i.e.
V (x, u) = Σ_{i=0}^{N −1} ℓ(x(i), u(i)) + Vf (x(N )) (11.3.3)

where ℓ(x, u) = |Qx|∞ + |Ru|∞ , Vf (x) = |Pf x|∞ and

u , {u(0), u(1), . . . , u(N − 1)} (11.3.4)

It is shown in [BBM00b] that in this case the optimal value function V 0 (x) and the
optimal control u0 (x) are both piecewise affine in x. This result, which has been used in
a recent application study [Fod01], is both interesting and useful since most controlled
systems are nonlinear and nonlinear systems may, in general, be arbitrarily closely ap-
proximated by piecewise affine systems (see Appendix of this Chapter). Optimal control
is relatively easily implemented, merely requiring the determination of the polytope in
which the current state lies and the use of a lookup table. However in many cases a
quadratic cost is the preferred option. The solution for this problem is not as simple;
the value function is piecewise quadratic and the optimal control piecewise affine but

the sets in which the cost is quadratic and the optimal control affine are not neces-
sarily polytopes. The boundaries of some regions are curved (ellipsoidal) and this has
inhibited progress. Nevertheless, a relatively simple solution is possible and is described
in subsequent sections. The method we employ (reverse transformation), introduced
in [May01] and applied to the problem considered in [May01, MR03b] and [Bor02], is
simple and illuminates earlier results. In this section we extend the results reported
in [May01, MR03b, Bor02] by specifying an appropriate target set and extending the
concept of a switching sequence.
The system is subject to the constraint

y(i) ∈ Y, i ∈ NN −1 (11.3.5)

where
y = Cx + Du (11.3.6)

and Y is a polytope containing the origin in its interior. The terminal constraint set is
assumed to be a closed and compact polygon:
x(N ) ∈ Xf , ∪_{i∈Nr} Xf i (11.3.7)

where, for each i, y(i) = Cφ(i; x, u) + Du(i) and x(N ) = φ(N ; x, u). In most practical
cases, Xf is a polytope; however, if the piecewise affine system is such that the origin is
on the boundary of more than one of the polytopes Pi , the terminal constraint set may
be a polygon. The cost V (x, u) is defined by (11.3.3) with ℓ(·) defined by

ℓ(x, u) , (1/2) |x|2Q + 2x′ Su + |u|2R , (11.3.8)

and Vf : Xf → R is defined by:

Vf (x) , Vf i (x) , (1/2)|x|2Pf i , x ∈ Xf i , i ∈ Nr (11.3.9)

We assume, for simplicity, that ℓ(·) is positive definite and that each Vf i (·), i ∈ Nr is
positive definite. The optimal control problem is

P(x) : VN0 (x) , min_u {V (x, u) | u ∈ U(x)} (11.3.10)

where U(x) is the set of control sequences u that satisfy the constraints (11.3.6) and
(11.3.7):

U(x) , {u | Cφ(i; x, u) + Du(i) ∈ Y, i = 0, 1, . . . , N − 1, φ(N ; x, u) ∈ Xf } (11.3.11)

This problem is difficult for two reasons: the system is piecewise affine and control
and state are subject to hard constraints. We approach the problem by using the two
transformations previously employed. The first application of the reverse transformation
is as follows.
Simplification by Reverse Transformation

Characterization of the solution to P(x) is not obvious since f (·), though piecewise
affine, is non-linear so that V (x, u) is not convex in u and may have several local minima.
Hence we adopt an indirect approach by considering the solution to a set of associated
problems of optimal control of a time-varying linear system with quadratic cost. These
problems are easily solved using standard Riccati techniques. We show how the solution
to the original problem may be obtained from the set of solutions to the simpler problem.
Let S , NJN × Nr = NJ × NJ × . . . × NJ × Nr . Any s = (s0 , s1 , . . . , sN −1 , sN ) ∈ S
is called a switching sequence. A state-control pair (x, u) is said to generate a switching
sequence s if

(φ(i; x, u), u(i)) ∈ Psi , ∀i ∈ NN −1 and φ(N ; x, u) ∈ Xf sN (11.3.12)

A state-control pair (x, u) generates, in general, a set S(x, u) of switching sequences


where

S(x, u) , {s ∈ S | (φ(i; x, u), u(i)) ∈ Psi , ∀i ∈ NN −1 and φ(N ; x, u) ∈ Xf sN } (11.3.13)
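For illustration, φ(·; x, u) and one switching sequence generated by a given pair (x, u) in the sense of (11.3.12) can be evaluated by the sketch below. The region membership tests for the polyhedra Pi and the matrices (Ai , Bi , ci ) are assumed to be available, and the terminal index in Nr (checking φ(N ; x, u) ∈ Xf sN ) is omitted for brevity; on region overlaps only the first match is recorded, so a single element of S(x, u) is returned.

    import numpy as np

    def generate_switching_sequence(x0, u_seq, in_region, dynamics):
        """Simulate phi(i; x, u) for the PWA system (11.3.2) and record one switching
        sequence generated by (x, u). 'in_region[k](x, u)' tests membership of (x, u)
        in P_k and 'dynamics[k] = (A_k, B_k, c_k)'; both are assumed given."""
        x = np.asarray(x0, dtype=float)
        trajectory, sequence = [x], []
        for u in u_seq:
            u = np.atleast_1d(u)
            k = next(j for j, test in enumerate(in_region) if test(x, u))  # first region containing (x, u)
            sequence.append(k)
            A, B, c = dynamics[k]
            x = A @ x + B @ u + c
            trajectory.append(x)
        return trajectory, sequence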

For each switching sequence s ∈ S, we define an associated time-varying linear system


by
x(i + 1) = Asi x(i) + Bsi u(i) + csi , ∀i ∈ NN −1 (11.3.14)

In (11.3.14) (A, B, c) are determined by time i whereas in (11.3.2) they are determined
by the state-control pair (x, u). For each i ≥ 0, let φs (i; x, u) denote the solution at time
i of (11.3.14) if the initial state (at time 0) is x and the control input is u.
Let U0 (x) denote the set of the minimizers of (11.3.10), i.e.:

U0 (x) , arg min_u {V (x, u) | u ∈ U(x)} (11.3.15)

and let u0 (x) = {u0 (0; x), u0 (1; x), . . . , u0 (N − 1; x)} be an appropriate selection of
u0 (x) ∈ U0 (x) 4 . For any u0 (x) ∈ U0 (x) let:

φ0u0 (x) (·; x) , {φ0u0 (x) (0; x), φ0u0 (x) (1; x), . . . , φ0u0 (x) (N ; x)} (11.3.16)

denote the resultant optimal trajectory with initial state x and a particular optimizer
u0 (x)(so that φ0u0 (x) (0; x) = x). Clearly φ0u0 (x) (i; x) = φ(i; x, u0 (x)).
A relevant conclusion is that if the initial state x satisfies that x ∈ XN where:

XN , {x | U(x) 6= ∅} (11.3.17)

and if we knew the optimal control sequence u0 (x), the optimal process would induce
an optimal switching sequence s0 (x) lying in the set of the optimal switching sequences
defined by:
S0 (x) , ∪_{u0 (x)∈U0 (x)} S(x, u0 (x)) (11.3.18)

4 In general u0 (x) is not necessarily a singleton due to the non–convex target constraint set Xf and the
definition of the terminal cost Vf (·).

It trivially follows that
S0 (x) ⊆ S (11.3.19)

where S = NJN × Nr = NJ × NJ × . . . × NJ × Nr . Thus, stage 1 of reverse transfor-
mation suggests that the nonlinear system be replaced by a time-varying linear system
characterized by the switching sequence s (11.3.14).
For each s ∈ S, let Us (x) be defined as follows:

Us (x) , {u | Cφs (i; x, u) + Du(i) ∈ Y ∩ Psi , i = 0, 1, . . . , N − 1, φsN (N ; x, u) ∈ Xf sN }


(11.3.20)
Because the sets Pj , j ∈ NJ and the sets Xf j , j ∈ Nr are polytopes and u 7→ φs (i; x, u)
is an affine map for each i ∈ NN , the set Us (x) is a polytope.
The reverse transformation procedure requires a condition that is potentially satisfied
at the solution to the optimal control problem and that simplifies the problem. An
appropriate condition for this problem is that the optimal solution generates a chosen
switching sequence s satisfying (11.3.12); under this condition, the nonlinear system
(11.3.1) behaves like the linear, time-varying system (11.3.14). Therefore, for each s ∈ S,
we define an associated linear, quadratic optimal control problem Ps (x) as follows:

Ps (x) : min_u {Vs (x, u) | u ∈ Us (x)} (11.3.21)

where
Vs (x, u) , Σ_{i=0}^{N −1} ℓ(φs (i; x, u), u(i)) + Vf sN (φs (N ; x, u)) (11.3.22)

Problem Ps (x), which is a generalization of the problems formulated in [Bor02] and, indepen-
dently, in a different context, in [MR03a], imposes the condition that the solution gen-
erates the switching sequence s, and is the ‘natural’ version of reverse transformation.
The constraint u ∈ Us (x) is a consequence of the discussion above that the optimal
process induces the optimal switching sequence (possibly a set of the optimal switching
sequences).
Problem Ps (x) is a parametric quadratic program with

Vs (x, u) = (1/2)|x|^2_{W^s_{xx}} + u′ W^s_{ux} x + u′ W^s_{uc} cs + (1/2)|u|^2_{W^s_{uu}} + q^s (x, cs ) (11.3.23)

for some W^s_{xx} , W^s_{ux} , W^s_{uc} , W^s_{uu} , W^s_{cc} , W^s_{xc} , where cs in (11.3.23) is the vector form of the
sequence {cs0 , cs1 , . . . , csN −1 }, q^s (x, cs ) is a term of the form (1/2)|cs |^2_{W^s_{cc}} + x′ W^s_{xc} cs that does
not affect the determination of u0 (x), and

Us (x) = {u | M s u ≤ N s x + ps } (11.3.24)

for some M s , N s , ps (dependent on the version of Us (x) employed) that are easily deter-
mined. The number of rows of M s is Nu (independently of s). The solution to Ps (x)
is a local, rather than global, minimizer for the original problem P(x). Problem Ps (x)
is a quadratic program which we simplify by applying a second reverse transformation

by assuming that the active constraints at an optimal solution are indexed by I yielding
problem Pµ (x) (µ , (s, I)) defined by:

Pµ (x) : min_u {Vs (x, u) | u ∈ Uµ (x)} (11.3.25)

subject to satisfaction of the time-varying linear system (11.3.14) where

Uµ (x) , {u | Mis u = Nis x + psi , i ∈ I} (11.3.26)

and Mis , Nis and psi denote, respectively, the ith row of M s , N s and ps . Problem Pµ (x) is
an equality constrained quadratic program that is easily solved yielding the value function
Vµ0 (·) and optimal control u0µ (·):

Vµ0 (x) = (1/2)|x|2Pµ + qµ′ x + rµ (11.3.27)


u0µ (x) = Kµ x + kµ (11.3.28)

For each µ = (s, I), let Mµ denote the matrix whose rows are Mis , i ∈ I and Lµ a
matrix such that {y | Lµ y ≤ 0} = {Mµ′ ν | ν ≥ 0} which is the polar cone of the cone
{y | Mµ y ≤ 0} of feasible directions. It follows that the set of initial states x such that
u0µ (x), µ = (s, I) (any I ∈ I) is optimal for Ps (x) is

Xµ = {x | Mis u0µ (x) ≤ Nis x + psi , i ∈ I c , Lµ ∇u Vs (x, u0µ (x)) ≥ 0} (11.3.29)

where ∇u Vs (x, u0µ (x)) – the gradient with respect to u of Vs (·) is:
∇u Vs (x, u) = W^s_{uu} u + W^s_{ux} x + W^s_{uc} cs (11.3.30)

Let N denote the set of subsets of {1, 2, . . . , Nu } and let Ψ , S × N .


In order to establish our main result we first need the following:
Proposition 11.2 (Properties of U(x)) (i) For each x the set U(x) is a finite union of
convex sets:
U(x) = ∪s∈S Us (x). (11.3.31)

(ii) The map x → U(x) is outer semi-continuous.

Proof: (i) Let x be given. Suppose u ∈ U(x) and that U(x) has a non-empty inte-
rior. Let s be the switching sequence generated by (x, u) so that (φ(i; x, u), u(i)) ∈ Psi ,
φ(N ; x, u) ∈ Xf sN and φ(i; x, u) = φs (i; x, u) for all i ∈ NN . Hence (φs (i; x, u), u(i)) ∈
Psi , φs (N ; x, u) ∈ Xf sN and Cφs (i; x, u) + Du(i) ∈ Y , for all i ∈ NN −1 and φs (N ; x, u) ∈
Xf sN so that u ∈ Us (x). Thus U(x) ⊂ Us (x) ⊂ ∪s∈S Us (x) and ∪s∈S Us (x) has a non-
empty interior. Now suppose u ∈ ∪s∈S Us (x). Then there exists an s ∈ S such that
u ∈ Us (x). Then, φ(i; x, u) = φs (i; x, u) for all i ∈ NN so that (φ(·; x, u), u) satisfies
the same constraints as does (φs (·; x, u), u). Hence u ∈ U(x) so that ∪s∈S Us (x) ⊂ U(x).
Equation (11.3.31) follows. (ii) At each x ∈ XN , {x | U(x) 6= ∅} the map x → U(x)
is a finite union of the maps x → Us (x). Since the maps x → Us (x) are continuous
by Proposition 11.1 it follows that the map x → U(x) is outer semi-continuous.

QeD.

Theorem 11.6. (i) For all µ = (s, I) ∈ Ψ, Xµ is a polytope. (ii) Suppose the interior of
Xµ does not intersect Xµ′ for any µ′ ∈ Ψ \ {µ}, then V 0 (x) = Vµ0 (x) and u0µ (x) ∈ u0 (x)
for all x in Xµ ; also u0 (x) = u0s (x) for all x in the interior of Xµ . (iii) If the sets
Xµ , µ ∈ J ⊂ Ψ intersect, then
V 0 (x) = min_{µ∈J} Vµ0 (x) for all x ∈ ∩_{µ∈J} Xµ (11.3.32)
µ0 (x) = arg min_{µ∈J} Vµ0 (x) for all x ∈ ∩_{µ∈J} Xµ (11.3.33)
u0 (x) = {u0µ (x) | µ ∈ µ0 (x)} for all x ∈ ∩_{µ∈J} Xµ (11.3.34)

(iv) The value function V 0 (·) is lower semi-continuous. (v) ∪µ∈Ψ Xµ is the domain of
V 0 (·).

Proof: (i) Proven above. (ii) Suppose x lies in the interior of Xµ but does not lie in
Xµ′ for any µ′ 6= µ. Then, since V (x, u) = Vs (x, u), u0µ (x) is the unique local minimizer
of V (x, u) in Xµ , the only local minimizer in ∪µ∈Ψ Xµ , and, hence, the global minimizer
in ∪µ∈Ψ Xµ . If there exists an µ′ 6= µ such that x ∈ Xµ∗′ (x), then Vµ∗′ (x) ≥ Vµ0 (x). Hence
u0µ (x) = u0 (x) is a global minimizer of V (x, ·) and V 0 (x) = Vµ0 (x). (iii) Suppose x lies in
the interior of ∩µ∈J Xµ for some subset J of Ψ. Then the potential minimizers of V (x, ·)
lie in the set {u0µ (x) | µ ∈ J}. Hence u0 (x) = {u0µ (x) | µ ∈ µ0 (x)}. (iv) The lower
semi-continuity of V 0 (·) follows from the continuity of V (·), the outer semi-continuity of
U(x) established in Proposition 11.2 and the maximum theorem (Theorem 5.4.1. and
Corollary 5.4.2 in [Pol97]). (v) This observation follows from Proposition 11.2 above
and Theorem 11.5.

QeD.

The value function and optimal control law may be implemented by computing Xµ for
all possible realizations of µ but the complexity of this approach is overwhelming except for
very simple problems. An initial proposal suggested that a better procedure is to select
a state x in the region of interest, compute µ0 (x) = (s0 (x), I 0 (x)) by solving the optimal
control problem P(x), and then compute Xµ0 (x) . The procedure is then repeated for
a new value of x not lying in the union of the sets Xµ already computed.
Our next step is to illustrate that the computational aspects can be improved by
examining the structure of the set XN = {x | U(x) 6= ∅}. The set XN can be characterized
by exploiting the standard set recursion:

Xi , {x | ∃u s.t. Cx + Du ∈ Y, f (x, u) ∈ Xi−1 }, i ∈ N+_N , X0 , Xf (11.3.35)

The sets Xi are polygons due to the definition of f (·) and their more detailed characterization
is possible because, if Xi−1 = ∪_{j∈Nqi} X(i−1,j) , we have:

Xi = {x | ∃u s.t. Cx + Du ∈ Y, f (x, u) ∈ Xi−1 }
= {x | ∃u s.t. Cx + Du ∈ Y, f (x, u) ∈ ∪_{j∈Nqi} X(i−1,j) }
= ∪_{j∈Nqi} {x | ∃u s.t. Cx + Du ∈ Y, f (x, u) ∈ X(i−1,j) }
= ∪_{(k,j)∈NJ ×Nqi} {x | ∃u s.t. Cx + Du ∈ Y ∩ Pk , Ak x + Bk u + ck ∈ X(i−1,j) }
= ∪_{(k,j)∈NJ ×Nqi} X(i−1,(k,j)) ,

X(i−1,(k,j)) , {x | ∃u s.t. Cx + Du ∈ Y ∩ Pk , Ak x + Bk u + ck ∈ X(i−1,j) } (11.3.36)

The previous equation motivates a similar recursion that provides a better way for char-
acterization of XN (and each Xi ). This better characterization is based on generating
a set of the feasible switching sequences. We proceed as follows. For each i ∈ NN let
si , {s0 , s1 , . . . , si } where each sj ∈ NJ , j < i and si ∈ Nr . We are now ready to establish
the following result:
Proposition 11.3 (Characterization of the set XN and a set of the feasible switching
sequences SN ) Consider the set recursion (11.3.35), then each of the sets Xi , i ∈ NN is
given by:
Xi = ∪_{si ∈Si} Xsi , i ∈ NN (11.3.37)

where each set Si is given by the following recursion

Si , S̄i × Si−1 , i ∈ N+_N , S0 , Nr , (11.3.38)

and
S̄i , {k ∈ NJ | X{k,si−1 } 6= ∅}, si−1 ∈ Si−1 (11.3.39)

where:

X{k,si−1 } , {x | ∃u s.t. Cx + Du ∈ Y ∩ Pk , Ak x + Bk u + ck ∈ Xsi−1 }, si−1 ∈ Si−1


(11.3.40)

Proof: We prove the result by induction. Since X0 , Xf = ∪_{i∈Nr} Xf i it trivially
follows from (11.3.38) that s0 ∈ S0 = {si | i ∈ Nr } so that, if we let X{si } = Xf i , i ∈ Nr ,
we obtain:

X0 = ∪_{s0 ∈S0} Xs0

Suppose now that for some l ∈ NN −1 the set Xl is given by:


Xl = ∪_{sl ∈Sl} Xsl

By definition of f (·) and the set recursion (11.3.35) we have:

Xl+1 , {x | ∃u s.t. Cx + Du ∈ Y, f (x, u) ∈ Xl }
= {x | ∃u s.t. Cx + Du ∈ Y, f (x, u) ∈ ∪_{sl ∈Sl } Xsl }
= ∪_{sl ∈Sl } {x | ∃u s.t. Cx + Du ∈ Y, f (x, u) ∈ Xsl }
= ∪_{{k,sl }∈NJ ×Sl } {x | ∃u s.t. Cx + Du ∈ Y ∩ Pk , Ak x + Bk u + ck ∈ Xsl }
= ∪_{{k,sl }∈NJ ×Sl } X{k,sl } , X{k,sl } , {x | ∃u s.t. Cx + Du ∈ Y ∩ Pk , Ak x + Bk u + ck ∈ Xsl }
= ∪_{{k,sl }∈S̄l+1 ×Sl } X{k,sl } , X{k,sl } , {x | ∃u s.t. Cx + Du ∈ Y ∩ Pk , Ak x + Bk u + ck ∈ Xsl }

where S̄l+1 is given by (11.3.39) and (11.3.40). It remains to notice that sl+1 = {k, sl } ∈
Sl+1 , S̄l+1 × Sl so that

Xl+1 = ∪_{sl+1 ={k,sl }∈Sl+1 } Xsl+1 ,
X{k,sl } , {x | ∃u s.t. Cx + Du ∈ Y ∩ Pk , Ak x + Bk u + ck ∈ Xsl }

where sl+1 = {k, sl } ∈ Sl+1 , S̄l+1 × Sl , completing the proof.

QeD.

Remark 11.6 (A relevant consequence of Proposition 11.3) Note that Proposition 11.3
combines reachability analysis and the result of Theorem 11.6 in order to obtain an easily
and straightforwardly implementable algorithm for the computation of the solution to P(x)
defined in (11.3.10). The computational aspects are improved since in general SN is a
strict subset of S = NJ × NJ × . . . × NJ × Nr .
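To make the recursion of Proposition 11.3 concrete, the following sketch enumerates feasible switching sequences for a small PWA system by testing non-emptiness of each lifted constraint polyhedron with a feasibility LP over the stacked trajectory variables (non-emptiness of X_s is equivalent to feasibility of this lifted LP). The toy one-dimensional data, the single terminal piece (r = 1) and the helper names are illustrative assumptions only, not the development above.

```python
# Sketch: enumerate feasible switching sequences (cf. Proposition 11.3).
# Assumed toy data: 1-D PWA system with two modes and one terminal piece.
import numpy as np
from scipy.optimize import linprog

modes = {  # mode k: x+ = a*x + b*u + c, valid on P_k = {x : p_lo <= x <= p_hi}
    1: dict(a=0.5, b=1.0, c=0.0, p=(-5.0, 0.0)),
    2: dict(a=0.8, b=1.0, c=0.1, p=(0.0, 5.0)),
}
X_BOUND, U_BOUND, XF_BOUND = 5.0, 1.0, 0.5   # Y = {|x|<=5, |u|<=1}, Xf = {|x|<=0.5}

def sequence_nonempty(seq):
    """Check X_s != empty for seq = (s_0, ..., s_{i-1}): one feasibility LP in the
    stacked variables z = (x_0, u_0, x_1, u_1, ..., x_i)."""
    i = len(seq)
    nz = 2 * i + 1
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    row = lambda: np.zeros(nz)
    for j, k in enumerate(seq):
        xj, uj, xn = 2 * j, 2 * j + 1, 2 * j + 2
        lo, hi = modes[k]["p"]
        for coeff, rhs, idx in [(1, min(hi, X_BOUND), xj), (-1, -max(lo, -X_BOUND), xj),
                                (1, U_BOUND, uj), (-1, U_BOUND, uj)]:
            r = row(); r[idx] = coeff; A_ub.append(r); b_ub.append(rhs)
        r = row(); r[xn] = 1.0; r[xj] = -modes[k]["a"]; r[uj] = -modes[k]["b"]
        A_eq.append(r); b_eq.append(modes[k]["c"])      # x_{j+1} = a x_j + b u_j + c
    for coeff in (1, -1):                               # terminal set |x_i| <= 0.5
        r = row(); r[2 * i] = coeff; A_ub.append(r); b_ub.append(XF_BOUND)
    res = linprog(np.zeros(nz), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(None, None)] * nz, method="highs")
    return res.status == 0                              # feasible point found

def feasible_sequences(N):
    """Grow S_i as in (11.3.38)-(11.3.39): only non-empty sequences are extended."""
    layers = [[()]]
    for _ in range(N):
        layers.append([(k,) + s for s in layers[-1] for k in modes
                       if sequence_nonempty((k,) + s)])
    return layers

print(feasible_sequences(3)[-1])
```

Only the sequences that survive the pruning need to be passed to the parametric solver, which is precisely how SN ends up being a strict subset of the full enumeration S.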

11.3.1 Illustrative Example

To illustrate these results, consider the system described by


 " # " # " #
 0.7969 −0.2247 0.1271 0
if x1 ≤ 1



 x+ u+


 0.1798 0.9767 0.0132 0
x+ = (11.3.41)

 " # " # " #

 0.4969 −0.2247 0.1271 0.3
if x1 ≥ 1



 x+ u+
0.0798 0.9767 0.0132 0.1

where xi is the ith coordinate of a vector x An illustration of the procedure is shown in


Figure 11.3.1. The system of (11.3.41) is subject to the following constraints |x|∞ ≤ 10,
−x1 + x2 ≤ 15 and |u| ≤ 0.5. The cost function is defined by Q = I and R = 0.1.

The terminal set Xf is the maximal positively invariant set for the system x+ = (A1 +
B1 K)x and constraint set {x | |x|∞ ≤ 10, −x1 + x2 ≤ 15, |Kx| ≤ 0.5}, where K
is the optimal unconstrained DLQR controller for (A1 , B1 , Q, R), and the terminal cost is
the associated solution of the discrete time algebraic Riccati equation.

Figure 11.3.1: Constrained PWA system – Regions Xµ for a second order example

In Figure 11.3.1, the complete characterization of the solution by Theorem 11.6 is shown. In the gray shaded regions
a set of switching sequences is feasible, i.e. some of the regions Xµi , Xµj satisfy
Xµi ∩ Xµj ≠ ∅. The controller in these overlapping regions can be selected according
to Theorem 11.6.

11.4 Summary
This Chapter presented a basic algorithm for solving parametric programming prob-
lems. The concept of reverse transformation has been briefly discussed and applied to
two relevant optimal control problems allowing for a relatively simple solution to be
obtained. Characterization of solutions to the constrained linear quadratic and the con-
strained piecewise affine quadratic (and ℓ1 and ℓ∞ ) optimal control problems is easily
achieved using reverse transformation that seeks initial states which satisfy pre-specified
conditions on the solution to a simplified problem. Rather than enumerating the com-
binatorial set of possible values of µ = (s, I), a characterization may be obtained by
employing µ = (s0 (x), I 0 (x)) (the value of (s, I) at a solution u0 (x) of the optimal con-
trol problem P(x)) for given x and repeating the process for new states not lying in
polytopes already determined. It is also demonstrated that an appropriate and improved
algorithmic procedure can be obtained by combining reachability analysis and Theorem
11.6.

Appendix to Chapter 11 – Piecewise affine approximation
Here we justify the comment, made in the introduction, that most nonlinear dynamic
systems may be arbitrarily closely approximated by piecewise affine systems. Suppose
the nonlinear system is described by

x+ = fr (x, u)

and that the piecewise affine approximation is defined by (11.3.1) and (11.3.2). We define
the diameter of a polytope P to be maxx,y {|x − y| | x, y ∈ P }. The following result is
based on [CK77].
Lemma 11.1 (Piecewise affine approximation) Suppose fr (·) is continuous on a compact
subset Z of Rn × Rm . Let ε > 0 be given. Then there exists a partition of Z, consisting
of a finite set of simplices {Pi | i = 1, . . . , J}, and associated parameters {(Ai , Bi , ci ) |
i = 1, . . . , J} such that the piecewise affine approximation f (·) defined by

f (x, u) = Ai x + Bi u + ci for all (x, u) ∈ Pi

satisfies |fr (x, u) − f (x, u)| ≤ ε for all (x, u) ∈ Z

Proof: Because fr (·) is continuous, it is uniformly continuous on Z. Hence there exists
a δ > 0 such that |fr (w) − fr (z)| ≤ ε for any w, z ∈ Z such that |w − z| ≤ δ. Choose any
simplicial partition {Pi } of Z such that each simplex Pi has a diameter not exceeding δ.
For each simplex Pi let (Ai , Bi , ci ) be defined implicitly by
f (x, u) , Ai x + Bi u + ci = Σ_{j=1}^{n+m+1} µij (x, u) fr (zji )

for all (x, u) ∈ Pi where {zji | j = 1, . . . , n + m + 1} is the set of vertices of Pi and


{µij (x, u), j = 1, . . . , n + m + 1} is the unique set of non-negative multipliers summing to
unity that satisfies
(x, u) = Σ_{j=1}^{n+m+1} µij (x, u) zji

Each µij (x, u) is affine in (x, u). Since |(x, u) − zji | ≤ δ for all (x, u) ∈ Pi , j ∈ {1, . . . , n + m + 1}
and i ∈ {1, . . . , J},

|fr (x, u) − f (x, u)| = | Σ_{j=1}^{n+m+1} µij (x, u)(fr (x, u) − fr (zji )) |
                      ≤ Σ_{j=1}^{n+m+1} µij (x, u) |fr (x, u) − fr (zji )| ≤ ε

for all (x, u) ∈ Pi , for each i ∈ {1, . . . , J}.

QeD.
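The barycentric construction used in the proof is easy to reproduce numerically; the sketch below is a hypothetical one-dimensional instance (so that each simplex is an interval and n + m + 1 = 2), with fr chosen purely for illustration, interpolating fr between simplex vertices and checking the uniform error on a fine grid.

```python
# Sketch: piecewise affine interpolation of a nonlinear map on a simplicial
# partition (1-D case: the simplices are intervals), cf. Lemma 11.1.
import numpy as np

def fr(z):                       # illustrative nonlinear map (assumption)
    return np.sin(z) + 0.1 * z ** 2

def pwa_approximation(a, b, delta):
    """Partition [a, b] into intervals of length <= delta and interpolate fr
    between the interval end points (the barycentric construction of the proof)."""
    knots = np.linspace(a, b, int(np.ceil((b - a) / delta)) + 1)
    def f(z):
        j = np.clip(np.searchsorted(knots, z) - 1, 0, len(knots) - 2)
        z0, z1 = knots[j], knots[j + 1]
        lam = (z - z0) / (z1 - z0)          # barycentric coordinates (1 - lam, lam)
        return (1 - lam) * fr(z0) + lam * fr(z1)
    return f

f = pwa_approximation(-2.0, 2.0, delta=0.05)
grid = np.linspace(-2.0, 2.0, 2001)
print("max |fr - f| on the grid:", np.max(np.abs(fr(grid) - f(grid))))
```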

Chapter 12

Further Applications of
Parametric Mathematical
Programming

Seeing that I cannot choose a particularly useful or pleasant subject, since the men born
before me have taken for themselves all the useful and necessary themes, I shall do the
same as the poor man who arrives last at the fair, and being unable to choose what he
wants, has to be content with what others have already seen and rejected because of its
small worth.

– Leonardo da Vinci

Section 12.1 of this chapter provides a more detailed discussion of robust one-step
controllers that can be employed to implement robust time-optimal controllers for
constrained discrete time systems. The robust one-step controllers are discussed and
specific results are provided for constrained linear and piecewise affine discrete time systems.
A more detailed discussion for the piecewise affine case is also given.
Section 12.2 of this chapter illustrates how Voronoi diagrams and Delaunay trian-
gulations of point sets can be computed by applying parametric linear programming
techniques. We specify parametric linear programming problems that yield the Delau-
nay triangulation or the Voronoi Diagram of an arbitrary set of points S in Rn .
Closed-form Model Predictive Control (MPC) results in a polytopic subdivision of the
set of feasible states, where each region is associated with an affine control law. Solving
the MPC problem on–line then requires determining which region contains the current
state measurement. This is the so-called point location problem. For MPC based on
linear control objectives (e.g., 1- or ∞-norm), we show in Section 12.3 that this problem
can be written as an additively weighted nearest neighbour search that can be solved
on–line in time linear in the dimension of the state space and logarithmic in the number

of regions. We demonstrate several orders of magnitude sampling speed improvement
over traditional MPC and closed-form MPC schemes.

12.1 Robust One-Step Ahead Control of Constrained Discrete Time Systems
In this section we consider the discrete time systems described by:

x+ = f (x, u) + g(w) (12.1.1)

where x ∈ Rn is the current state, u ∈ Rm is the current input, w ∈ Rp is the disturbance


and x+ is the successor state. The system is subject to the following set of constraints:

(x, u) ∈ Y, w ∈ W (12.1.2)

Given a set Ω we define the set Pre(Ω) and the set valued function κ(·) defined by:

Pre(Ω) , {x | ∃u s.t. (x, u) ∈ Y, f (x, u) ⊕ g(W) ⊆ Ω} (12.1.3)


κ(x) , {u | (x, u) ∈ Y, f (x, u) ⊕ g(W) ⊆ Ω} ∀x ∈ Pre(Ω) (12.1.4)

or equivalently:

Pre(Ω) = {x | ∃u s.t. (x, u) ∈ Y, f (x, u) ⊆ Ω ⊖ g(W)} (12.1.5)


κ(x) = {u | (x, u) ∈ Y, f (x, u) ⊆ Ω ⊖ g(W)} ∀x ∈ Pre(Ω) (12.1.6)

where g(W) , {g(w) | w ∈ W}. If we define:

Z(Ω) , {(x, u) | (x, u) ∈ Y, f (x, u) ⊕ g(W) ⊆ Ω} (12.1.7)

it follows that:

Pre(Ω) = {x | ∃u s.t. (x, u) ∈ Z(Ω)} (12.1.8)


κ(x) = {u | (x, u) ∈ Z(Ω)} ∀x ∈ Pre(Ω) (12.1.9)

We aim to illustrate that in certain cases it is possible to employ standard computational
geometry software (polyhedral algebra) in order to characterize the set Pre(Ω) and
the corresponding set valued control law κ(x), and to provide an efficient computational
procedure for generating an appropriate selection of a feedback control law θ(·) satisfying
θ(x) ∈ κ(x), ∀x ∈ Pre(Ω). An adequate method for selecting a feedback control law θ(·)
can be posed as an optimization problem defined for all x ∈ Pre(Ω):

P(x) : µ0 (x) , arg inf_u {V (x, u) | (x, u) ∈ Z(Ω)}    (12.1.10)

The function µ0 (·) is in general set valued and satisfies:

µ0 (x) ⊆ κ(x) ∀x ∈ Pre(Ω) (12.1.11)

so that a selection can be made from the set of controls µ0 (x), i.e. θ(·) can be chosen
to be any feedback control law satisfying θ(x) ∈ µ0 (x) ∀x ∈ Pre(Ω).

Constrained Linear Parametrically Uncertain Systems with additive and
bounded disturbances

Here, we consider the case when:

x+ = Ax + Bu + w (12.1.12)

where:
(A, B) ∈ C , {(A, B) ∈ Rn×n × Rn×m | (A, B) = Σ_{i=1}^{q} λi (Ai , Bi ), (λ1 , . . . , λq ) ∈ Λ},
Λ , {λ = (λ1 , . . . , λq ) | Σ_{i=1}^{q} λi = 1, λ ≥ 0}    (12.1.13)
The constraint sets (Y, W) and the target set (Ω) are assumed to be polytopic; each set
contains the origin as an interior point.

Remark 12.1 (Class of the considered systems and Convexity of Ω) When q = 1 the
system is a linear system subject to bounded disturbances; additionally, if W = {0} the
system (12.1.12) is a deterministic linear system. If the set Ω is nonconvex and q > 1,
the results developed below can be used only to obtain an inner approximation of the
set Pre(Ω). In this case it is necessary to perform the computation by exploiting the results of
Chapter 7.
The set Pre(Ω) can be computed as follows:

Pre(Ω) = ProjX Z(Ω) (12.1.14)

where:

Z(Ω) = {(x, u) | (x, u) ∈ Y, Ai x + Bi u ∈ Ω ⊖ W, ∀i ∈ Nq } (12.1.15)

Note that the set Z(Ω) is a polytopic set in (x, u) space. The set valued function κ(·) is
characterized by:
κ(x) = {u | (x, u) ∈ Z(Ω)} (12.1.16)
An appropriate selection can be obtained by specifying V (x, u) in (12.1.10) to be any
linear or a convex quadratic function in (x, u); an appropriate choice is:

V (x, u) = |An x + Bn u|2Q + |u|2R (12.1.17)

with Q = Q′ ≥ 0 and R = R′ ≥ 0 where (An , Bn ) specifies the nominal dynamics.


Problem P(x) is in this case a standard pLP or pQP (depending on the choice of the
cost function) and can be solved to obtain an explicit control law. With V (x, u) defined
by (12.1.17), P(x) is a parametric quadratic problem:

P(x) : µ0 (x) , arg inf_u {|An x + Bn u|2Q + |u|2R | (x, u) ∈ Y, Ai x + Bi u ∈ Ω ⊖ W, ∀i ∈ Nq }    (12.1.18)
Note that if R = R′ > 0 and Z is compact then µ0 (·) exists and is unique ∀x ∈
Pre(Ω). The solution to P(x) provides a constructive way of generating an appropriate
selection of the feedback control law θ(·) satisfying θ(x) ∈ κ(x), ∀x ∈ Pre(Ω) because
µ0 (x) ⊆ κ(x), ∀x ∈ Pre(Ω).
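For a fixed state x, problem (12.1.18) is an ordinary QP in u, so an online implementation only needs a convex-optimization layer. The sketch below is a minimal illustration assuming the sets Y and Ω ⊖ W are already available in halfspace form and that cvxpy with a QP solver is installed; all numerical data are placeholders, not the example systems of this thesis.

```python
# Sketch: solve the one-step robust problem (12.1.18) for a fixed state x.
# Assumes H-representations Y = {(x,u): Hy [x;u] <= hy} and
# Omega (-) W = {z : Hw z <= hw} are precomputed; the data are placeholders.
import numpy as np
import cvxpy as cp

A_list = [np.array([[1.0, 0.2], [0.0, 1.0]]), np.array([[1.1, 0.2], [0.0, 0.9]])]
B_list = [np.array([[0.0], [1.0]]), np.array([[0.1], [1.0]])]
An, Bn = A_list[0], B_list[0]                  # nominal dynamics
Q, R = np.eye(2), 0.1 * np.eye(1)

Hy = np.vstack([np.eye(3), -np.eye(3)])        # |x|_inf <= 5, |u| <= 1
hy = np.array([5.0, 5.0, 1.0, 5.0, 5.0, 1.0])
Hw = np.vstack([np.eye(2), -np.eye(2)])        # Omega (-) W as a box (placeholder)
hw = 0.8 * np.ones(4)

def one_step_control(x):
    u = cp.Variable(1)
    xu = cp.hstack([x, u])
    cost = cp.quad_form(An @ x + Bn @ u, Q) + cp.quad_form(u, R)
    cons = [Hy @ xu <= hy]
    cons += [Hw @ (Ai @ x + Bi @ u) <= hw for Ai, Bi in zip(A_list, B_list)]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return None if u.value is None else u.value     # None: x not in Pre(Omega)

print(one_step_control(np.array([1.0, -0.5])))
```

Solving the same problem parametrically (over all x) would instead produce the explicit piecewise affine law discussed in the text.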

Constrained Piecewise Affine Systems with additive and bounded disturbances

Now, consider the case when:

x+ = f (x, u, w) = fl (x, u, w), ∀(x, u) ∈ Pl ,


fl (x, u, w) , Al x + Bl u + cl + w, ∀l ∈ N+t    (12.1.19)

The function f (·) is assumed to be continuous and the polytopes Pl , l ∈ N+t , have
disjoint interiors and cover the region Y of state/control space of interest so that
∪k∈N+t Pk = Y ⊆ Rn+m and interior(Pk ) ∩ interior(Pj ) = ∅ for all k ≠ j, k, j ∈ N+t . The
set of sets {Pk | k ∈ N+t } is a polytopic partition of Y. The constraint set (12.1.2) (Y)
and the target set Ω are assumed to be polygonic, the disturbance set W is polytopic
and each set contains the origin as an interior point. Thus,
Y , ∪j∈Nr Yj ,  Ω , ∪i∈Ns Ωi ,    (12.1.20)

Our next step is to provide a detailed characterization of the sets Pre(Ω):

Pre(Ω) , {x | ∃u s.t. (x, u) ∈ Y, f (x, u, W) ⊆ Ω}
        , {x | ∃u s.t. (x, u) ∈ Y, f (x, u, 0) ⊆ Ω ⊖ W}
        = ∪l∈N+t {x | ∃u s.t. (x, u) ∈ Y ∩ Pl , fl (x, u, 0) ∈ Ω ⊖ W}

Recalling that ΩW , Ω ⊖ W = ∪k∈Nq ΩWk (where q, in general q ≠ s, is a finite integer), it follows that:

Pre(Ω) = ∪(j,l)∈Nr×N+t {x | ∃u s.t. (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ Ω ⊖ W}
       = ∪(j,l,k)∈Nr×N+t×Nq {x | ∃u s.t. (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ ΩWk }
       = ∪(j,l,k)∈Nr×N+t×Nq X(j,l,k) ,

X(j,l,k) , {x | ∃u s.t. (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ ΩWk }

The set Pre(Ω) is easily computed (characterized), since


X(j,l,k) = ProjX Z(j,l,k) ,  Z(j,l,k) , {(x, u) | (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ ΩWk }    (12.1.21)

so that:

Pre(Ω) = ∪(j,l,k)∈Nr×N+t×Nq ProjX Z(j,l,k)    (12.1.22)

where for all (j, l, k) ∈ Nr × N+t × Nq , Z(j,l,k) is defined in (12.1.21). By similar arguments
we have:

κ(j,l,k) (x) ⊆ κ(x), ∀x ∈ X(j,l,k) ,    (12.1.23)

where
κ(j,l,k) (x) , {u | (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ ΩWk }, ∀x ∈ X(j,l,k)    (12.1.24)

For every x ∈ Pre(Ω) let:

S(x) , {(j, l, k) ∈ Nr × N+t × Nq | x ∈ X(j,l,k) },    (12.1.25)

so that:
κ(x) = ∪(j,l,k)∈S(x) κ(j,l,k) (x), ∀x ∈ Pre(Ω).    (12.1.26)

Remark 12.2 (Computational Remark) It is necessary to consider only those integer triples
(j, l, k) ∈ Nr × N+t × Nq for which X(j,l,k) ≠ ∅.
By definition of f (·) in (12.1.19), bearing in mind the structure of Pre(Ω) (12.1.22)
and recalling the discussion related to P(x) defined in (12.1.18), it is easy to conclude that an
appropriate selection of the feedback control law θ(·) can be obtained by specifying V (x, u)
in (12.1.10) to be any piecewise linear or a convex piecewise quadratic function in (x, u).
For instance, an appropriate choice is:

V (x, u) = Vl (x, u) = |Al x + Bl u + cl |2Q + |u|2R , (x, u) ∈ Pl (12.1.27)

with Q = Q′ ≥ 0 and R = R′ ≥ 0. The problem P(x) is in this case a standard pPAP
or pPQP (depending on the choice of the cost function) and can be solved to obtain an
explicit control law. We provide more details when V (x, u) is defined by (12.1.27). In this
case we consider the set of problems P(j,l,k) (x) for all (j, l, k) ∈ Nr × N+t × Nq such that
X(j,l,k) ≠ ∅. The problem P(j,l,k) (x) is a parametric quadratic programming problem:

P(j,l,k) (x) :  µ0(j,l,k) (x) , arg inf_u {|Al x + Bl u + cl |2Q + |u|2R | (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ ΩWk }    (12.1.28)

If R = R′ > 0 and {(x, u) ∈ Pl ∩ Yj | Al x + Bl u + cl ∈ ΩWk } is compact then µ0(j,l,k) (·)
exists and is unique ∀x ∈ {x | ∃u s.t. (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ ΩWk }. An
appropriate feedback control law θ(·) satisfying θ(x) ∈ κ(x), ∀x ∈ Pre(Ω) can be selected
from the set valued control laws µ0(j,l,k) (x) because:

µ0(j,l,k) (x) ⊆ κ(j,l,k) (x) ⊆ κ(x), ∀x ∈ X(j,l,k) (12.1.29)

An adequate selection for feedback control law θ(·) is any selection satisfying:
θ(x) ∈ ∪(j,l,k)∈S(x) µ0(j,l,k) (x), ∀x ∈ Pre(Ω).    (12.1.30)

We demonstrate how to exploit the problem P(j,l,k) (x) to devise robust time-optimal
controllers of low or modest complexity for uncertain and constrained piecewise
affine systems.
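Online, the selection rule (12.1.29)–(12.1.30) amounts to sweeping the active triples: for the measured state x, each P(j,l,k)(x) is a small QP in u, and any minimiser attaining the smallest value is an admissible θ(x). The sketch below assumes each triple is stored as halfspace data for Pl ∩ Yj and ΩWk together with the local dynamics; this data structure and the placeholder numbers are illustrative assumptions, not the implementation used in this thesis.

```python
# Sketch: select a one-step control for a PWA system by sweeping the triples
# (j,l,k) of (12.1.28) and keeping a minimiser with the smallest cost.
import numpy as np
import cvxpy as cp

def select_control(x, pieces, Q, R):
    """pieces: iterable of dicts with keys A, B, c (local dynamics),
    Hxu, hxu (H-rep of P_l n Y_j) and Hw, hw (H-rep of Omega_Wk)."""
    best = (np.inf, None)
    for p in pieces:
        u = cp.Variable(p["B"].shape[1])
        xu = cp.hstack([x, u])
        xplus = p["A"] @ x + p["B"] @ u + p["c"]
        prob = cp.Problem(
            cp.Minimize(cp.quad_form(xplus, Q) + cp.quad_form(u, R)),
            [p["Hxu"] @ xu <= p["hxu"], p["Hw"] @ xplus <= p["hw"]])
        prob.solve()
        if u.value is not None and prob.value < best[0]:
            best = (prob.value, np.atleast_1d(u.value))
    return best[1]          # None if x lies in no X_(j,l,k)

# Placeholder single-piece example (one state, one input).
piece = dict(A=np.array([[0.9]]), B=np.array([[1.0]]), c=np.array([0.0]),
             Hxu=np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1.0]]),
             hxu=np.array([2.0, 2.0, 1.0, 1.0]),
             Hw=np.array([[1.0], [-1.0]]), hw=np.array([0.5, 0.5]))
print(select_control(np.array([1.5]), [piece], Q=np.eye(1), R=np.eye(1)))
```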

12.1.1 Robust Time Optimal Control of constrained PWA systems

The robust time–optimal control problem P(x) is defined, as usual in robust time–optimal
control problems (See Section 5.3 of Chapter 5 and Sections 6.2 – 6.2 of Chapter 6), by:

N 0 (x) , inf_{π,N} {N | (π, N ) ∈ ΠN (x) × NNmax },    (12.1.31)

where Nmax ∈ N is an upper bound on the horizon and ΠN (x) is defined as follows:

ΠN (x) , {π | (xi , ui ) ∈ Y, ∀i ∈ NN −1 , xN ∈ T, ∀w(·)} (12.1.32)

where for each i ∈ N, xi , φ(i; x, π, w(·)) and ui , µi (φ(i; x, π, w(·))) and T is the
corresponding target set. We remark once again that the solution is sought in the class
of the state feedback control laws because of the additive disturbances, i.e. π is a control
policy (π = {µi (·), i ∈ NN −1 }, where for each i ∈ NN −1 , µi (·) is a control law mapping
the state x′ to a control action u). The solution to P(x) is

π 0 (x), N 0 (x) , arg inf_{π,N} {N | (π, N ) ∈ ΠN (x) × NNmax }.    (12.1.33)

The value function of the problem P(x) satisfies N 0 (x) ∈ NNmax and for any integer
i, the robustly controllable set Xi , {x | N 0 (x) ≤ i} is the set of initial states that
can be robustly steered (steered for all w(·)) to the target set T, in i steps or less while
satisfying constraints (Y) for all admissible disturbance sequences. Hence N 0 (x) = i
for all x ∈ Xi \ Xi−1 . The robust controllable sets {Xi } and the associated robust
time-optimal control laws κi (·) can be computed by the following standard recursion:

Xi , {x | ∃u s.t. (x, u) ∈ Y, f (x, u, W) ⊆ Xi−1 } (12.1.34)


κi (x) , {u | (x, u) ∈ Y, f (x, u, W) ⊆ Xi−1 }, ∀x ∈ Xi (12.1.35)

for i ∈ NNmax with the boundary condition X0 = T and where f (x, u, W) =


{f (x, u, w) | w ∈ W}. It is well known that the set sequence {Xi } is the sequence
of polygons and additionally if the target set T satisfies that for all x ∈ T there exists a
u such that (x, u) ∈ Y and f (x, u, W) ⊆ T then {Xi } is a monotonically non–decreasing
sequence of robust control invariant (polygonal) sets. We proceed to illustrate how an
appropriate feedback control law θi (·), i ∈ NNmax , satisfying θi (x) ∈ κi (x), ∀ x ∈ Xi , can
be obtained by exploiting the one-step ahead controllers discussed in the introduction of this
section. For each couple (i, l) ∈ NNmax × N+t we define:

Pi (x) : νi (x) , arg inf_u {Vi (x, u) | (x, u) ∈ Y, f (x, u, W) ⊆ Xi−1 }    (12.1.36)

where Vi (x, u) (as in (12.1.10)) can be any piecewise linear or a convex piecewise quadratic
function in (x, u). It follows from (12.1.35) and (12.1.36) that

νi (x) ⊆ κi (x), ∀ x ∈ Xi (12.1.37)

Recalling (12.1.19), (12.1.22) and the discussion related to P(x) defined in (12.1.18),
it is easy to conclude that an appropriate selection of the feedback control
law θi (·) can be obtained as follows. For each i ∈ NNmax let Vi (·) be defined by:

Vi (x, u) , V(i,l) (x, u) = |Al x + Bl u + cl |2Q + |u|2R , (x, u) ∈ Pl (12.1.38)

with Q = Q′ ≥ 0 and R = R′ ≥ 0. The problem Pi (x), as already remarked, is in this
case a standard pPAP or pPQP (depending on the choice of the cost function) and can
be solved to obtain an explicit control law.
Since Xi = Pre(Xi−1 ) we have
Xi = Pre(Xi−1 ) = ∪l∈N+t {x | ∃u s.t. (x, u) ∈ Y ∩ Pl , fl (x, u, 0) ∈ X̃i }

where for each i ∈ NNmax , X̃i , Xi−1 ⊖ W = ∪k∈Nqi X̃(i,k) (and qi is a finite integer for
each i ∈ NNmax ) so that:
Xi = ∪(j,l,k)∈Nr×N+t×Nqi X(i,j,l,k)    (12.1.39)

where
X(i,j,l,k) = ProjX Z(i,j,l,k) ,  Z(i,j,l,k) , {(x, u) | (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ X̃(i,k) }    (12.1.40)
We are now ready to pose a set of optimal control problems similar to the set of problems
P(j,l,k) (x) defined in (12.1.28). Let for all (i, j, l, k) ∈ NNmax × Nr × N+t × Nqi the problem
P(i,j,l,k) (x) be defined as the following parametric quadratic programming problem:

P(i,j,l,k) (x) :  ν0(i,j,l,k) (x) , arg inf_u {|Al x + Bl u + cl |2Q + |u|2R | (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ X̃(i,k) }    (12.1.41)

If R = R′ > 0 and {(x, u) ∈ Pl ∩ Yj | Al x + Bl u + cl ∈ X̃(i,k) } is compact then ν0(i,j,l,k) (·)
exists and is unique ∀x ∈ {x | ∃u s.t. (x, u) ∈ Pl ∩ Yj , Al x + Bl u + cl ∈ X̃(i,k) }. Hence,
an appropriate feedback control law θi (·) satisfying θi (x) ∈ κi (x), ∀x ∈ Xi , for each
i ∈ NNmax , can be selected from the set valued control laws ν0(i,j,l,k) (x) because, as already
established in (12.1.29):

ν0(i,j,l,k) (x) ⊆ νi (x) ⊆ κi (x), ∀x ∈ X(i,j,l,k)    (12.1.42)

We conclude that for each i ∈ NNmax , an adequate selection for the feedback control law
θi (·) is any selection satisfying:
θi (x) ∈ ∪(j,l,k)∈Si (x) ν0(i,j,l,k) (x), ∀x ∈ Xi .    (12.1.43)

where for each i ∈ NNmax we define

Si (x) , {(j, l, k) ∈ Nr × N+t × Nqi | x ∈ X(i,j,l,k) }.    (12.1.44)

We now introduce the following assumption:


Assumption 12.1 (i) There exists a control law θ0 (·) such that (x, θ0 (x)) ∈ Y and
f (x, θ0 (x), W) ⊆ T for all x ∈ T.
Consider the time-invariant control law κ0 (x) defined, for all i ∈ NNmax , by

κ0 (x) , θi (x), ∀x ∈ Xi \ Xi−1 (12.1.45)

where θ0 (·) satisfies Assumption 12.1 and θi (·) are defined by (12.1.43) – (12.1.44).
The time-invariant control law κ0 (x) robustly steers any x ∈ Xi to X0 in i steps or less,
while satisfying the constraints, and thereafter maintains the state in X0 . As already
established in Theorem 6.1, we can state the following result that follows directly from
the construction of κ0 (·):
Theorem 12.1. (Robust Finite Time Attractivity of X0 = T) Suppose that Assump-
tion 12.1 holds and let X0 , T. The target set X0 , T is robustly finite-time attractive
for the closed-loop system x+ ∈ f (x, κ0 (x), W) with a region of attraction XNmax .
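Online evaluation of the time-invariant law (12.1.45) only requires identifying the smallest index i with x ∈ Xi and evaluating the associated selection θi. The sketch below assumes the robustly controllable sets are stored as unions of polyhedra in halfspace form together with callables θi; this storage format and the placeholder data are assumptions made purely for illustration.

```python
# Sketch: evaluate the time-invariant robust time-optimal law (12.1.45).
# X_sets[i] is a list of (H, h) pairs whose union is X_i (X_sets[0] = T);
# thetas[i] is a callable implementing a selection theta_i; both assumed given.
import numpy as np

def in_polygon(x, pieces, tol=1e-9):
    return any(np.all(H @ x <= h + tol) for H, h in pieces)

def kappa0(x, X_sets, thetas):
    for i, pieces in enumerate(X_sets):          # smallest i with x in X_i
        if in_polygon(x, pieces):
            return thetas[i](x)
    raise ValueError("x is outside X_Nmax: no admissible control")

# Placeholder 1-D illustration: T = [-1, 1], X_1 = [-2, 2], theta_i(x) = -0.5 x.
X_sets = [[(np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]))],
          [(np.array([[1.0], [-1.0]]), np.array([2.0, 2.0]))]]
thetas = [lambda x: -0.5 * x, lambda x: -0.5 * x]
print(kappa0(np.array([1.5]), X_sets, thetas))
```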

Terminal Set Construction

Our next step is to discuss an appropriate choice for θ0 (·) (satisfying Assumption 12.1)
and the computation of an appropriate target set T. In order to address these issues we
consider a more general problem, namely the computation of a robust positively invariant
set for piecewise linear, time invariant, discrete time systems. We first show how to bound
the zero disturbance response set of a piecewise linear system by a convex, compact set
and we also establish robust positive invariance of this set. These results are further used
for the computation of the maximal robust positively invariant set for a piecewise linear
system subject to the corresponding constraint sets. The results developed below are a
natural implementation of the results reported in Chapter 2 and in [KG98, Kou02].
We consider the following autonomous discrete-time, piecewise linear, time-invariant
system:
x+ = g(x, w), (12.1.46)

where x ∈ Rn is the current state, x+ is the successor state and w ∈ Rn is an unknown


disturbance. The disturbance w is persistent, but contained in a convex and compact (i.e.
closed and bounded) set W ⊂ Rn , which contains the origin in its interior. The function
g(·) is piecewise linear in each of a finite number of polytopes Pi , i ∈ N+q , with possibly
non-disjoint interiors that cover the region of state space of interest X (X = ∪i∈N+q Pi )
that contains the origin in its interior.
that contains the origin in its interior.
Therefore the system satisfies:

x+ = g(x, w) , Ai x + w, x ∈ Pi , i ∈ N+q    (12.1.47)

Remark 12.3 (Class of the considered systems) The type of system in (12.1.47) models
PWA systems around the origin when they are subject to stabilizing control (e.g., see
[RGK+ 04, GKBM04]). Consider x+ = Fi x + Gi u, (x, u) ∈ Qi ; it is well known that
a stabilizing piecewise linear controller for such systems can be computed by solving,
for instance, the following, perhaps somewhat conservative, Linear Matrix Inequality
(LMI): (Fi + Gi Ki )′ P (Fi + Gi Ki ) − P < 0, P > 0. The solution of such an LMI then
yields a Lyapunov function, which in this case is a common quadratic Lyapunov function
defined by V (x) = (1/2)x′ P x, and a stabilizing piecewise linear controller defined
by u = θ0 (x) = Ki x, (x, Ki x) ∈ Pi , so that the closed loop system takes the form
x+ = Ai x where Ai = Fi + Gi Ki , justifying the use of (12.1.47). For controller design
techniques the interested reader is referred to [MFTM00, GKBM04]. This type of system
also corresponds to switched linear systems, which are often found in standard control
applications.
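The LMI quoted in the remark is bilinear in (P, Ki); one standard way to solve it in practice is the usual change of variables Q = P⁻¹, Yi = Ki Q together with a Schur complement, which turns it into a genuine semidefinite program. The sketch below uses that equivalent reformulation (not the literal inequality above) and assumes cvxpy with an SDP solver such as SCS is available; the system matrices are placeholders.

```python
# Sketch: common quadratic Lyapunov function and piecewise linear gains via
# the standard convexification Q = P^{-1}, Y_i = K_i Q of the quoted LMI.
import numpy as np
import cvxpy as cp

F = [np.array([[1.0, 0.2], [0.0, 1.0]]), np.array([[0.5, 0.2], [0.0, 1.0]])]
G = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
n, m, eps = 2, 1, 1e-4

Q = cp.Variable((n, n), symmetric=True)
Y = [cp.Variable((m, n)) for _ in F]
cons = [Q >> eps * np.eye(n)]
for Fi, Gi, Yi in zip(F, G, Y):
    ACL = Fi @ Q + Gi @ Yi                      # (F_i + G_i K_i) Q
    # Schur complement of  Q - ACL' Q^{-1} ACL > 0:
    cons.append(cp.bmat([[Q, ACL.T], [ACL, Q]]) >> eps * np.eye(2 * n))
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)

P = np.linalg.inv(Q.value)                      # V(x) = x' P x
K = [Yi.value @ np.linalg.inv(Q.value) for Yi in Y]
for i, Ki in enumerate(K):
    Ai = F[i] + G[i] @ Ki
    print("mode", i + 1, "closed-loop spectral radius:",
          max(abs(np.linalg.eigvals(Ai))))
```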
With respect to the previous remark, we assume in the sequel that:
Assumption 12.2 There exists a matrix P > 0 such that A′i P Ai − P < 0 for all i ∈ N+q .
Clearly, Assumption 12.2 guarantees absolute asymptotic stability [Gur95, VEB00]
(see (12.1.56)) of the discrete time system defined in (12.1.47).
Before proceeding, we recall the following:

Remark 12.4 (Reachable Set of PWL systems) Given the non-empty set Ω ⊆ Rn and a
function g(·), defined in (12.1.47), let

Reachk (Ω, W) , {φ(k; x, w(·)) | x ∈ Ω, w(·) ∈ MW }

denote the k step reachable set.


Consider a set sequence {Fk }, k ∈ N defined by:
 
Fk+1 , ( ∪i∈N+q Ai (Fk ∩ Pi ) ) ⊕ W, F0 , {0}    (12.1.48)

It is clear that for all k ∈ N+ we have Fk = Reachk ({0}, W) so that the set sequence
{Fk }, k ∈ N is the zero disturbance response set of piecewise linear system (12.1.47).
We note that given any finite k ∈ N the set Fk is a compact set, because Fk is the
Minkowski addition of two compact sets, one of which is compact by assumption and the
other is compact by the fact it is the finite union of compact sets. However, the structure
of Fk becomes complicated as k increases, since each of the sets Fk is a polygon and
structure of Fk+1 is more complicated than Fk as k increases. In order to bound the
sets Fk as well as the Fk as k → ∞ we introduce an appropriately defined discrete time
inclusion as follows.
Let the set A be a finite set defined by:

A , {Ai , i ∈ N+q }    (12.1.49)

For any integer k ∈ N+ , let ik , {i0 , i1 , . . . , ik } denote a sequence of discrete variables,


where ij is the jth element in the sequence and ij ∈ N+q for each j ∈ Nk and let i0 , 0.

For any integer k ∈ N+ , let IK , {ik | ij ∈ N+q , ∀j ∈ N+k } and let I0 , {i0 }. For any
ik ∈ IK , let Aik , Ai0 Ai1 . . . Aik , k ∈ N+ and Ai0 , I where I denotes the identity matrix.
We consider the difference inclusion defined by:

x+ ∈ D(x, w), (12.1.50)


D(x, w) , {y | y = Ax + w, A ∈ A, w ∈ W} (12.1.51)

We also need to recall the following:

Remark 12.5 (Reachable Set of D(x, w)) Given the non-empty set Ω ⊆ Rn and the
difference inclusion D(x, w) defined in (12.1.50)– (12.1.51), let

ReachDk (Ω, W) , {φD (k; x, w(·), ik ) | x ∈ Ω, w(·) ∈ MW , ik ∈ IK }

denote the k step reachable set of the difference inclusion D(x, w) defined in (12.1.50)–
(12.1.51), where φD (k; x, w(·), ik ) denotes the solution to the difference inclusion D(x, w)
at time instant k given the disturbance sequence w(·) and the dynamics switching se-
quence ik .
We define a set sequence {Dk }, k ∈ N by:
 
Dk+1 , ( ∪i∈N+q Ai Dk ) ⊕ W, D0 , {0}    (12.1.52)

and note that an alternative form of the set sequence {Dk } is given by:
 
Dk = ⊕_{j=0}^{k−1} ( ∪_{ij∈IJ} Aij W ), k ∈ N+ , D0 , {0}    (12.1.53)

Clearly, given any finite k ∈ N the set Dk is a compact set, because Dk is the Minkowski
addition of two compact sets, one of which is compact by assumption and the other is
compact by the fact that it is the finite union of compact sets. It also holds that Dk+1 =
ReachDk+1 ({0}, W) so that the set sequence {Dk }, k ∈ N is the zero disturbance response
set of the difference inclusion D(x, w) defined in (12.1.50)– (12.1.51). It follows from the
definition of the set sequence {Dk }, the definition of ReachDk (Ω, W) and the definition
of the difference inclusion D(x, w) that for all k ∈ N we have:

ReachDk (Ω, W) = ReachDk (Ω, {0}) ⊕ Dk    (12.1.54)

Our next step is to discuss the relationship between the set sequences {Fk } and {Dk }
as well as between the set sequences {Reachk (Ω, W)} and {ReachDk (Ω, W)}. We first
establish a few necessary preliminary results.

Lemma 12.1 (Relationship between {Fk } and {Dk }) Let the set sequences {Fk } and
{Dk } be defined by (12.1.48) and (12.1.52), respectively. Then Fk ⊆ Dk for all k ∈ N.

Proof: The proof is by induction. Suppose that for some k ∈ N we have Fk ⊆ Dk . Since
Fk ∩ Pi ⊆ Fk ⊆ Dk it follows that for all i ∈ N+q we have Ai (Fk ∩ Pi ) ⊆ Ai Dk so that
∪i∈N+q Ai (Fk ∩ Pi ) ⊆ ∪i∈N+q Ai Dk . Since for arbitrary sets P ⊂ Rn , Q ⊂ Rn and R ⊂ Rn
we have P ⊆ Q ⇒ P ⊕ R ⊆ Q ⊕ R, we conclude that ( ∪i∈N+q Ai (Fk ∩ Pi ) ) ⊕ W ⊆
( ∪i∈N+q Ai Dk ) ⊕ W so that Fk+1 ⊆ Dk+1 .
The proof is completed by noting that F0 ⊆ D0 (and F1 ⊆ D1 ).

QeD.

The following result can be established by a minor modification of the proof of
Lemma 12.1.
Lemma 12.2 (Relationship between {Reachk (Ω, W)} and {ReachDk (Ω, W)}) Let Ω ⊆
Rn be a non-empty compact set. Consider the set sequences {Reachk (Ω, W)} and
{ReachDk (Ω, W)}, where Reachk (Ω, W) and ReachDk (Ω, W) are defined in Remarks 12.4
and 12.5, respectively. Then Reachk (Ω, W) ⊆ ReachDk (Ω, W) for all k ∈ N.

Remark 12.6 (Set Inclusion Reachk (Ω, W) ⊆ ReachDk (Ω, {0}) ⊕ Dk ) Lemma 12.2
and (12.1.53) imply that

Reachk (Ω, W) ⊆ ReachDk (Ω, {0}) ⊕ Dk , ∀k ∈ N    (12.1.55)

Before proceeding to prove that the set sequence {Dk } is a Cauchy sequence and
has a limit as k → ∞ in Hausdorff metric sense we observe that Assumption 12.2
implies absolute asymptotic stability (AAS) of the difference inclusion x+ ∈ D(x, 0)
[Gur95, VEB00], in the sense that:

lim_{j→∞} Aij x = 0, ∀ ij ∈ IJ , ∀ x ∈ Rn    (12.1.56)

We are now ready to establish the following result:

Lemma 12.3 (Set Inclusions Dk ⊆ Dk+1 ⊆ Dk ⊕ λk Bnp (µ)) Let the set sequence
{Dk }, k ∈ N be defined by (12.1.52) ( (12.1.53)) and suppose that Assumption 12.2
holds, then (i) Dk ⊆ Dk+1 for all k ∈ N and (ii) there exists a scalar λ (0 < λ < 1) and
a scalar µ > 0 such that Dk+1 ⊆ Dk ⊕ λk Bnp (µ) for all k ∈ N.

Proof: (i) It follows from (12.1.54) that Dk+1 = Dk ⊕ ReachDk (W, {0}) implying that
Dk ⊆ Dk+1 for all k ∈ N.
(ii) Compactness of W and Assumption 12.2 imply existence of a scalar λ (0 < λ < 1)
and a scalar µ > 0 such that ReachDk (W, {0}) ⊆ λk Bnp (µ). Since Dk+1 = Dk ⊕
ReachDk (W, {0}) for all k ∈ N it follows that Dk+1 ⊆ Dk ⊕ λk Bnp (µ) for all k ∈ N.

QeD.

Remark 12.7 (The set sequence {Dk } is a Cauchy sequence) Lemma 12.3 implies that
for any integer k ∈ N we have that the dpH (Dk+1 , Dk ) ≤ µλk for some µ > 0 and
0 < λ < 1. Hence maxm≥0 dpH (Dk+m , Dk ) ≤ λk µ(1 − λ)−1 . This in turn implies
that limk→∞ maxm≥0 dH (Dk+m , Dk ) → 0. Therefore the set sequence {Dk } satisfies the
Cauchy criterion [KF57] and is a Cauchy sequence.
We also recall the following before establishing our next result.

Remark 12.8 (RPI set and mRPI set of the difference inclusion (12.1.50)– (12.1.51))

A set Φ is a robust positively invariant (RPI) set of the difference inclusion defined
in (12.1.50)– (12.1.51) and constraint set (Rn , W) if D(Φ, W) ⊆ Φ, where D(Φ, W) =
{y | y = Ai x + w, x ∈ Φ, i ∈ N+q , w ∈ W}.
The minimal robust positively invariant (mRPI) set D∞ of D(x, w) and constraint set
(Rn , W) is the RPI set of D(x, w) and constraint set (Rn , W) that is contained in every
closed, RPI set Φ of D(x, w) and constraint set (Rn , W).
Since {Dk } is a Cauchy sequence similar to Theorem 4.1 in [KG98] the following
properties of the set sequence {Dk } are easily established.

Theorem 12.2. (Properties of the set sequence {Dk }) Let the set sequence {Dk } be
defined by (12.1.52) ( (12.1.53)) and suppose that Assumption 12.2 holds. Then there
exists a compact set D ⊂ Rn with the following properties:

(i) 0 ∈ Dk ⊂ D, ∀k ∈ N,

(ii) Dk → D (in the Hausdorff metric), i.e. for every ǫ > 0 there exists k ∈ N such that
D ⊂ Dk ⊕ Bnp (ǫ),

(iii) D is RPI set of the difference inclusion (12.1.50)– (12.1.51).

Let

D∞ = closure( ⊕_{j=0}^{∞} ( ∪_{ij∈IJ} Aij W ) )    (12.1.57)

Remark 12.9 (D∞ is the minimal RPI set of the difference inclusion (12.1.50)– (12.1.51)
and constraint set (Rn , W)) The proof of Corollary 4.2 in [KG98] implies that the set D∞
is the minimal RPI set of the difference inclusion (12.1.50)– (12.1.51) and constraint set
(Rn , W) in the class of the closed RPI sets of the difference inclusion (12.1.50)– (12.1.51)
and constraint set (Rn , W).
We note that Theorem 12.2 and Lemma 12.1 imply that Fk ⊆ D∞ for all k ∈ N.
We are now in position to extend almost all results reported in Chapter 2 but we will
provide just a set of the most important results. We proceed by recalling the following
result established in Theorem 1.1.2 in [Sch93]:
Theorem 12.3. (Convex Hull and Minkowski Addition Result) Let P = ∪i∈N+r Pi and
Q = ∪j∈N+t Qj , where each Pi ⊂ Rn and Qj ⊂ Rn is a polytope, and let R = P ⊕ Q.
Then co(R) = co(P) ⊕ co(Q).

Remark 12.10 (Relevant Consequence of Theorem 12.3) Theorem 12.3 implies that
given a finite set of polygons {Pi , i ∈ N+l } we have

co( ⊕_{i∈N+l} Pi ) = ⊕_{i∈N+l} co(Pi ).    (12.1.58)

Theorem 12.3 allows us to exploit the ideas of [Kou02] and Chapter 2. However, we need
to include a set of necessary changes in order to establish the next result:

Theorem 12.4. (A convex, compact, RPI set of the difference inclusion (12.1.50)–
(12.1.51) and constraint set (Rn , W)) Suppose that Assumption 12.2 holds and suppose
that W is a convex, compact set with 0 ∈ interior(W), then there exists a finite integer
s ∈ N+ and a scalar α ∈ [0, 1) that satisfy

Ais W ⊆ αW, ∀ is ∈ IS (12.1.59)

or equivalently,
ReachDs ({0}, W) ⊆ αW.    (12.1.60)

Furthermore, if (12.1.59) ( (12.1.60)) is satisfied, then

Υ(α, s) , (1 − α)−1 co(Ds ) (12.1.61)

is a convex, compact, RPI set of the difference inclusion (12.1.50)– (12.1.51) and con-
straint set (Rn , W). Furthermore, 0 ∈ interior(Υ(α, s)) and Fk ⊆ D∞ ⊆ Υ(α, s) for all
k ∈ N.

Proof: Existence of an s ∈ N+ and an α ∈ [0, 1) that satisfies (12.1.59) ( (12.1.60))


follows from the fact that the origin is in the interior of W, compactness of W and As-
sumption 12.2 (See also (12.1.56)).
Convexity of Υ(α, s) is obvious; compactness of Υ(α, s) follows directly from the fact
that Ds (and hence Υ(α, s)) is a compact set, by the properties of Minkowski set
addition.
In order to prove robust positive invariance of the set Υ(α, s) with respect to the
difference inclusion x+ ∈ D(x, w) (12.1.50)– (12.1.51) and constraint set (Rn , W) we
need to show that D(Υ(α, s), W) ⊆ Υ(α, s).
We proceed as follows:

D(Υ(α, s), W) ⊆ Υ(α, s) ⇔    (12.1.62a)
( ∪i∈N+q Ai Υ(α, s) ) ⊕ W ⊆ Υ(α, s) ⇔    (12.1.62b)
Ai Υ(α, s) ⊕ W ⊆ Υ(α, s), ∀i ∈ N+q    (12.1.62c)

We establish that for any i ∈ N+q we have Ai Υ(α, s) ⊕ W ⊆ Υ(α, s) as follows:

Ai Υ(α, s) ⊕ W    (12.1.63a)
= Ai (1 − α)−1 co(Ds ) ⊕ W    (12.1.63b)
= (1 − α)−1 co(Ai Ds ) ⊕ W    (12.1.63c)
= (1 − α)−1 co( Ai ⊕_{j=0}^{s−1} ( ∪_{ij∈IJ} Aij W ) ) ⊕ W    (12.1.63d)
= (1 − α)−1 co( ⊕_{j=0}^{s−1} ( ∪_{ij∈IJ} Ai Aij W ) ) ⊕ W    (12.1.63e)
⊆ (1 − α)−1 co( ⊕_{j=1}^{s} ( ∪_{ij∈IJ} Aij W ) ) ⊕ W    (12.1.63f)
= (1 − α)−1 co( ∪_{is∈IS} Ais W ) ⊕ (1 − α)−1 co( ⊕_{j=1}^{s−1} ( ∪_{ij∈IJ} Aij W ) ) ⊕ W    (12.1.63g)
⊆ (1 − α)−1 αW ⊕ (1 − α)−1 co( ⊕_{j=1}^{s−1} ( ∪_{ij∈IJ} Aij W ) ) ⊕ W    (12.1.63h)
= (1 − α)−1 co( ⊕_{j=1}^{s−1} ( ∪_{ij∈IJ} Aij W ) ) ⊕ (1 − α)−1 W    (12.1.63i)
= (1 − α)−1 co( ⊕_{j=0}^{s−1} ( ∪_{ij∈IJ} Aij W ) )    (12.1.63j)
= (1 − α)−1 co(Ds )    (12.1.63k)
= Υ(α, s)    (12.1.63l)

Note that we have used the facts that P ⊆ Q ⇒ P ⊕ R ⊆ Q ⊕ R for arbitrary sets
P ⊂ Rn , Q ⊂ Rn , R ⊂ Rn ; the fact that Ais W ⊆ αW, ∀ is ∈ IS ⇒ co( ∪_{is∈IS} Ais W ) ⊆
αW; and Theorem 12.3.
It follows trivially that Fk ⊆ D∞ ⊆ Υ(α, s) for all k ∈ N. Note also that 0 ∈ Υ(α, s)
if 0 ∈ interior(W).

QeD.

It follows trivially from Lemma 12.2 that the set Υ(α, s) is an RPI set for system (12.1.47) and constraint set (Rn , W). By Theorem 12.3 the set Υ(α, s) is the
same set first studied in the context of linear parametrically uncertain systems by Kouramas
in [Kou02].
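When W is an ∞-norm ball, the inclusion test (12.1.59) reduces to an induced-norm computation, since Ais W ⊆ αW exactly when the induced ∞-norm of the matrix product Ais is at most α. The brute-force sketch below enumerates all length-s products for a small switched system, returns α°(s) and the resulting scaling 1/(1 − α) of co(Ds); the matrices are placeholders and the enumeration is exponential in s, so it is only meant to illustrate the construction.

```python
# Sketch: smallest alpha satisfying (12.1.59) for W = {w : |w|_inf <= gamma},
# using  A W subset of alpha W  <=>  induced infinity norm of A <= alpha.
import itertools
import numpy as np

A = [np.array([[0.6, 0.2], [0.0, 0.7]]),     # placeholder switched dynamics
     np.array([[0.7, -0.1], [0.1, 0.6]])]

def alpha_of_s(s):
    """alpha^o(s) = max over length-s switching sequences of the induced
    infinity norm of A_{i_s} ... A_{i_1}."""
    worst = 0.0
    for seq in itertools.product(range(len(A)), repeat=s):
        prod = np.eye(A[0].shape[0])
        for i in seq:
            prod = A[i] @ prod
        worst = max(worst, np.linalg.norm(prod, np.inf))
    return worst

for s in range(1, 8):
    alpha = alpha_of_s(s)
    if alpha < 1.0:
        print(f"s = {s}: alpha^o(s) = {alpha:.4f}, "
              f"Upsilon(alpha, s) = {1.0 / (1.0 - alpha):.4f} * co(D_s)")
        break
```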
The degree of quality of this approximation Υ(α, s) of the set D∞ can be measured
by establishing results analogous to Theorem 2.2 and Theorem 2.3. Before we state these
results, which follow from the same arguments as in the proofs of Theorem 2.2 and Theorem
2.3, we let

so (α) , inf{s ∈ N+ | ReachDs ({0}, W) ⊆ αW},    (12.1.64a)
αo (s) , inf{α ∈ [0, 1) | ReachDs ({0}, W) ⊆ αW}    (12.1.64b)

be the smallest values of s and α such that (12.1.64) holds for a given α and s, respectively.
The infimum in (12.1.64a) exists for any choice of α ∈ (0, 1). The infimum
in (12.1.64b) is also guaranteed to exist if s is sufficiently large.

Theorem 12.5. (Limiting behavior of the RPI approximation Υ(α, s)) If 0 ∈ interior(W)
and Assumption 12.2 holds, then

(i) Υ(αo (s), s) → co(D∞ ) as s → ∞ and

(ii) Υ(α, so (α)) → co(D∞ ) as α ց 0.

The proof of this result follows the same arguments as the proof of Theorem 2.2 in
Chapter 2.
The next result allows us to establish the existence of an improved approximation.
Theorem 12.6. (Improved RPI approximation Υ(α, s)) If 0 ∈ interior(W), then
for all ε > 0, there exist an α ∈ [0, 1) and an associated integer s ∈ N+ such
that (12.1.59)( (12.1.60)) and

α(1 − α)−1 Ds ⊆ Bnp (ε) (12.1.65)

hold. Furthermore, if (12.1.59)( (12.1.60)) and (12.1.65) are satisfied, then Υ(α, s) is a
convex, compact, RPI set of the difference inclusion (12.1.50)– (12.1.51) and constraint
set (Rn , W) such that co(D∞ ) ⊆ Υ(α, s) ⊆ co(D∞ ) ⊕ Bnp (ε).
The proof of this result follows the same arguments as the proof of Theorem 2.3 in
Chapter 2.
Remark 12.11 (An appropriate choice for θ0 (·) (satisfying Assumption 12.1) and
an appropriate target set T) We note that an appropriate choice for θ0 (·) (satisfying
Assumption 12.1) and an appropriate target set T are as follows. We assume that the
origin is an equilibrium of system x+ = f (x, u, 0) where f (·) is defined in (12.1.19), so
that ci = 0 for all i ∈ N0q , where N0q ⊆ N+t , is defined by:

N0q , {i ∈ N+q | 0 ∈ Pi }    (12.1.66)

where 0 is the origin of the state-control space. Following [MFTM00, GKBM04,


BGFB94], we assume that a stabilizing piecewise linear control law θ0 (·) for the nomi-
nal system x+ = Ai x + Bi u, (x, Ki x) ∈ Pi , i ∈ N0q along with the associated common
quadratic Lyapunov function are obtained by computing {Kl , l ∈ N0q } and P satisfy-
ing the following (perhaps conservative) LMI (See [MFTM00, GKBM04, BGFB94] for a
whole set of possible variations):

(Al + Bl Kl )′ P (Al + Bl Kl ) − P < 0, P > 0 (12.1.67)

or computing {Kl , l ∈ N0q }, β and P satisfying the following set of constraints:

(Al + Bl Kl )′ P (Al + Bl Kl ) − βP ≤ 0, P > 0, β ∈ (0, 1) (12.1.68)

In which case we define θ0 (·) by:

θ0 (x) = Ki x if x ∈ Pi∗ , ∀i ∈ N0q (12.1.69)

where
Pi∗ , {x | (x, Ki x) ∈ Pi ∩ Y}, i ∈ N0q (12.1.70)

and we define
X0 , ∪i∈N0q Pi∗    (12.1.71)

We also need to require that interior(X0 ) is non-empty. The set T can be computed as
described below as an invariant approximation so that T = Υ(α, s), where one needs to
take care (when applying the procedure for the computation of Υ(α, s)) of the index set N0q .
Finally, in order to ensure that the set Υ(α, s) is an RPI set for system (12.1.47) and
constraint set (X0 , W) it is necessary to check whether the convex set Υ(α, s) satisfies
Υ(α, s) ⊆ X0 , where X0 is given by (12.1.71). Note that for general piecewise linear
discrete time systems the set X0 in which the corresponding piecewise linear system (see
(12.1.47)) is valid is non-convex and it is therefore worthwhile pointing out that efficient
procedures for testing Υ(α, s) ⊆ X0 are given in the Introduction of this thesis (see Chapter 1,
Proposition 1.5 and Proposition 1.6). In cases when the set X0 is convex the required
subset test can be efficiently performed as proposed in Chapter 2.
We proceed to provide an additional suitable option for the target set T for the robust
time-optimal control problem for constrained piecewise affine discrete time systems.
An alternative option for the computation of a suitable target set T is to perform the
standard computation of the maximal robust positively invariant set for system (12.1.47)
and constraint set (X0 , W). In the sequel we assume (without loss of generality) that
interior(Pi∗ ) is non-empty for all i ∈ N0q . Recalling (1.7.10)– (1.7.10b) we proceed to
compute the following set sequence.

Ωk+1 , {x ∈ X0 | g(x, w) ∈ Ωk , ∀w ∈ W}, k ∈ N+ , Ω0 = X0 (12.1.72)

where g(·) is defined in (12.1.47). Note that N+q should be identified with N0q and, if
necessary, some renumbering of the index sets should be performed.
The set sequence {Ωk } is a sequence of polygonic sets and it satisfies that Ωk+1 ⊆ Ωk
for all k ∈ N and if Ωk∗ +1 = Ωk∗ for some k ∗ ∈ N then the set Ωk∗ is the maximal
robust positively invariant set for system (12.1.47) and constraint set (X0 , W). However,
if Ωk = ∅ for some k ∈ N a simple conclusion is that the maximal robust positively
invariant set for system (12.1.47) and constraint set (X0 , W) is an empty set. A relevant
observation is that:

Theorem 12.7. (Sufficient Conditions for existence of a finite integer k ∗ such that

Ωk∗ +1 = Ωk∗ ) Suppose that Assumption 12.2 holds and that Υ(α, s) ⊆ interior(X0 )
then there exists a k ∗ ∈ N such that Ωk∗ +1 = Ωk∗ and the set Ωk∗ is the maximal robust
positively invariant set for system (12.1.47) and constraint set (X0 , W).

Proof: It follows from (12.1.55) that Reachk (X0 , W) ⊆ ReachDk (X0 , {0}) ⊕ Dk for all
k ∈ N. Assumption 12.2 guarantees that there exists a finite time k∗ such that
ReachDk∗ (X0 , {0}) ⊆ interior(X0 ) ⊖ Υ(α, s). Furthermore Ωk∗ ⊆ X0 , where Ωk∗ is the k∗-th
term of the set sequence {Ωk } defined in (12.1.72), so that Ωk∗ denotes the set of states
which satisfy ReachD1 (Ωk∗ , W) ⊆ X0 . It follows that ReachDk∗+1 (Ωk∗ , {0}) ⊆ X0 ⊖ Υ(α, s)
so that ReachDk∗+1 (Ωk∗ , {0}) ⊕ Dk ⊆ X0 . This in turn implies that Reachk∗ +1 (Ωk∗ , W) ⊆
X0 so that Ωk∗ +1 ⊇ Ωk∗ , but the set sequence {Ωk } satisfies Ωk+1 ⊆ Ωk for all k ∈ N,
which yields Ωk∗ +1 = Ωk∗ .

QeD.

Theorem 12.7 provides merely sufficient conditions that guarantee the existence
and finite time determination of the maximal robust positively invariant set for
system (12.1.47) and constraint set (X0 , W). Suppose that Assumption 12.2 holds and
X0 is a compact set that contains the origin in its interior (recall that we have assumed
that interior(Pi∗ ) is non-empty for all i ∈ N0q ). Then, if W = {0}, the set sequence {Ωk }
defined in (12.1.72) computes the maximal positively invariant set for system (12.1.47)
(with W = {0}) and constraint set X0 in finite time.
This concludes the discussion of an appropriate choice for θ0 (·) (satisfying Assumption
12.1) and of an appropriate target set T.
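For a single linear mode and polytopic data, the recursion (12.1.72) can be carried out directly in halfspace form, because {x : g(x, w) ∈ Ωk ∀w ∈ W} = {x : Fk A x ≤ gk − hW(Fk)} with hW the support function of W. The sketch below does this for a box disturbance, detecting the fixed point by checking redundancy of the new rows with one LP each; the dynamics and constraint data are placeholders chosen only to make the iteration well defined.

```python
# Sketch: maximal RPI set recursion (12.1.72) for a single linear mode
# x+ = A x + w, X0 = {x : H0 x <= h0}, W = {w : |w|_inf <= gamma}.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.9, 0.3], [0.0, 0.8]])          # placeholder stable dynamics
H0 = np.vstack([np.eye(2), -np.eye(2)])         # X0 = unit box
h0 = np.ones(4)
gamma = 0.05                                    # W = {|w|_inf <= 0.05}

def new_rows(F, g):
    """H-rep rows of {x : A x in Omega_k (-) W}, Omega_k = {z : F z <= g}."""
    return F @ A, g - gamma * np.sum(np.abs(F), axis=1)   # box support function

def redundant(F, g, f_new, g_new):
    """Is the row f_new' x <= g_new redundant for {x : F x <= g}?  One LP."""
    res = linprog(-f_new, A_ub=F, b_ub=g, bounds=[(None, None)] * F.shape[1],
                  method="highs")
    return res.status == 0 and -res.fun <= g_new + 1e-9

F, g = H0.copy(), h0.copy()
for k in range(50):
    Fp, gp = new_rows(F, g)
    fresh = [i for i in range(Fp.shape[0]) if not redundant(F, g, Fp[i], gp[i])]
    if not fresh:                                # Omega_{k+1} = Omega_k: done
        break
    F = np.vstack([F, Fp[fresh]])
    g = np.concatenate([g, gp[fresh]])

print(f"fixed point reached after {k} steps; {F.shape[0]} halfspaces kept")
```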

Illustrative Examples

In order to illustrate the proposed procedure we consider two second order PWA sys-
tems [RGK+ 04].
Our first example is the following 2-dimensional problem:

x+ = Ai x + Bi u + ci + w (12.1.73)

where i = 1 if x1 ≤ 1 and i = 2 if x1 ≥ 1 and


" # " # " #
1 0.2 0 0
A1 = , B1 = , c1 = ,
0 1 1 0
" # " # " #
0.5 0.2 0 0.5
A2 = , B2 = , c2 = .
0 1 1 0

and the additive disturbance w is bounded:

w ∈ {w ∈ R2 | |w|∞ ≤ 0.1}. (12.1.74)

The system is subject to constraints −x1 + x2 ≤ 15, −3x1 − x2 ≤ 25, 0.2x1 + x2 ≤ 9,


x1 ≥ −6, x1 ≤ 8, and −1 ≤ u ≤ 1, whereas weight matrices for the optimization problem

are Q = I and R = 1. The target set is the maximal robust positively invariant set for
x+ = (A1 + B1 K1 )x + w and the corresponding constraint set, where K1 is the Riccati
LQR feedback controller. The resulting PWA control law is defined over a polyhedral
partition consisting of 417 regions and is depicted in Figure 12.1.1.

Figure 12.1.1: Final robust time–optimal controller for Example 1.

Our second example is the following 2-dimensional PWA system with 4 dynamics:

x+ = Ai x + Bi u + w (12.1.75)

where

i = 1 if x1 ≥ 0 & x2 ≥ 0,  i = 2 if x1 ≤ 0 & x2 ≤ 0,  i = 3 if x1 ≤ 0 & x2 ≥ 0,  i = 4 if x1 ≥ 0 & x2 ≤ 0,

and

A1 = [1 1; 0 1], B1 = [1; 0.5],   A2 = [1 1; 0 1], B2 = [−1; −0.5],
A3 = [1 −1; 0 1], B3 = [−1; 0.5],   A4 = [1 −1; 0 1], B4 = [1; −0.5].

with the following bounds on the disturbance:

w ∈ {w ∈ R2 | |w|∞ ≤ 0.2}. (12.1.76)

One can observe that the system is a perturbed double integrator in the discrete time
domain, with different orientations of the vector field. The output and input constraints,
respectively, are: −5 ≤ x1 ≤ 5, −5 ≤ x2 ≤ 5, −1 ≤ u ≤ 1, whereas the weight matrices for
the optimization problem are Q = I and R = 1.
By solving the SDP in [GKBM04], the following stabilizing feedback controllers are
obtained: K1 = [−0.5897 − 0.9347], K2 = [0.5897 0.9347], K3 = [0.5897 − 0.9347] and
K4 = [−0.5897 0.9347]. The target set T is the maximal robustly positively invariant
set for the closed-loop system and the corresponding constraint set. A controller partition with
508 regions is depicted in Figure 12.1.2.


Figure 12.1.2: Final robust time–optimal controller for Example 2.

12.2 Computation of Voronoi Diagrams and Delaunay Triangulations by parametric linear programming
It is the purpose of this section1 to establish a link between the methodology of parametric
linear programming (PLP) [Sch87, Gal95, Bor02], Voronoi diagrams and Delaunay tri-
angulations. Voronoi diagrams, Dirichlet tesselations and Delaunay tesselations are the
concepts introduced by the mathematicians: Dirichlet [Dir50], Voronoi [Vor08, Vor09]
and Delaunay [Del34]. It is shown in this section that the solution of an appropriately
specified PLP yields the Voronoi diagram or the Delaunay triangulation, respectively.
The following introduction to Voronoi diagrams and Delaunay triangulation is derived
from [Fuk00].
1 This section is based on work done in collaboration with Pascal Grieder and Colin Jones; the primary contributor is the author of this thesis.

Given a set S of d distinct points p in Rn , the Voronoi diagram is a partition of Rn into
d polyhedral regions. The region associated with the point p is called the Voronoi cell of
p and is defined as the set of points in Rn that are closer to p than to any other point in
S. Voronoi diagrams are well known in computational geometry and have been studied
by many authors [Bro79, Bro80, ES86, Fuk00, Zie94, OBSC00]. Voronoi diagrams are
a fundamental tool in many fields. For example, the breeding areas of fish, the
optimal placement of cellular base stations in a city and searches of large databases can
all be described by Voronoi Diagrams [Aur91].
The Delaunay complex of S is a partition of the convex hull of S into polytopical
regions whose vertices are the points in S. Two points are in the same region, called
a Delaunay cell, if they are closer to each other than to any other points (i.e., they
are nearest neighbours). This partition is called the Delaunay complex and is not in
general a triangulation (we use the term triangulation, through this section, in sense
that it represents the generalized triangulation, i.e., a division of polytope in Rn into n-
dimensional simplicies) but becomes a triangulation when the input points are in general
position or nondegenerate (i.e. no points are cospherical or equivalently there is no point
c ∈ Rn whose nearest neighbour set has more than n + 1 elements and the convex hull of
the set of points has non–empty interior). The Delaunay complex is dual to the Voronoi
diagram in the sense that there is a natural bijection between the two complexes which
reverses the face inclusions.
Both the Voronoi diagram and the Delaunay triangulation of a random set of points
are illustrated in Figure 12.2.1.


(a) Voronoi diagram. (b) Delaunay triangulation.

Figure 12.2.1: Illustration of a Voronoi diagram and Delaunay triangulation of a random


set of points.

Preliminaries

Before proceeding, the following definitions and preliminary results are needed.
The standard Euclidean distance between two points x and y in Rn is denoted by
d(x, y) , ((x − y)′ (x − y))1/2 .

Definition 12.1 (Voronoi Cell and Voronoi Diagram) Given a set S , {pi ∈ Rn | i ∈ N+q },
the Voronoi cell associated with point pi is the set V (pi ) , {x | d(x, pi ) ≤
d(x, pj ), ∀j ≠ i, i, j ∈ N+q } and the Voronoi diagram of the set S is given by the
union of all of the Voronoi cells: V(S) , ∪i∈N+q V (pi ).

Definition 12.2 (Convex Hull of a set of points) The convex hull of a set of points
S , {pi | i ∈ N+q } is defined as

co(S) = {x = Σ_{i=1}^{q} pi λi | λ = (λ1 , . . . , λq ) ∈ Λq },  Λq , {λ = (λ1 , . . . , λq ) ∈ Rq | λi ≥ 0, Σ_{i=1}^{q} λi = 1}

Note that the convex hull of a finite set of points is always a convex polytope.

Definition 12.3 (Triangulation and Simplex) A triangulation of a point set S is the


partition of the convex hull of S into a set of simplices such that each point in S is a
vertex of a simplex. A simplex is a polytope defined as the convex hull of n + 1 vertices.

Definition 12.4 (Delaunay Triangulation) For a set of points S in Rn , the Delaunay


triangulation is the unique triangulation DT (S) of S such that no point in S is inside
the circumcircle of any simplex in DT (S).

Definition 12.5 (Lifting (Lifting Map)) The map L(·) : Rn → Rn+1 , defined by L(x) ,
[x′ x′ x]′ , is called the lifting map.

Definition 12.6 (Tangent plane) Given the surface f (z) = 0, where f (·) : Rp → R the
tangent plane Hf (z ∗ ) to the surface f (z) = 0 at the point z = z ∗ is

Hf (z ∗ ) , {z | (∇f (z ∗ ))′ (z − z ∗ ) = 0} (12.2.1)

where ∇f (z ∗ ) is gradient of f (z) evaluated at z = z ∗ .


Let x ∈ Rn and θ ∈ R and let g(·) : Rn+1 → R be defined by g(x, θ) , x′ x − θ, then
h i′
the gradient of g(x, θ) is ∇g(x, θ) = 2x′ −1 . The tangent hyperplane to g(·) at a
h i′
point ri = L(pi ) = p′i pi ′ pi for any i ∈ N+
q is then given by:

Hg (ri ) = {(x, θ) | 2pi ′ (x − pi ) − (θ − p′i pi ) = 0}


= {(x, θ) | hi (x, θ) = 0}

where
hi (x, θ) = 2pi ′ (x − pi ) − (θ − p′i pi ) (12.2.2)

Definition 12.7 (Upper Envelope) Let S , {pi ∈ Rn | i ∈ N+q } be a set of points and
U , {ri = L(pi ) | i ∈ N+q } be the lifting of S. The upper envelope of the set of points U is
defined by UE (U ) , {(x, θ) | hi (x, θ) ≤ 0, ∀i ∈ N+q } where hi (x, θ) is defined in (12.2.2).

Definition 12.8 (Lower Convex Hull) Let S , {pi ∈ Rn | i ∈ N+q } be a set of points,
U , {ri = L(pi ) ∈ Rn+1 | i ∈ N+q } be the lifting of S and let co(U ) ⊂ Rn+1 be the convex
hull of U . A facet of co(U ) is called a lower facet if the halfspace defining the facet is
given by {(x, θ) ∈ Rn × R | α′ x + βθ ≤ γ} and β is less than zero. The surface formed
by all the lower facets of co(U ) is called the lower convex hull of U and is denoted by
lco(U ).

Computation of Voronoi Diagrams

We proceed to show how to compute the Voronoi diagram via a PLP for a given finite
set of points S , {pi ∈ Rn | i ∈ N+q }.

Introduction to Voronoi Diagrams

We show how the equality (12.2.2) relates to the Euclidean distance between two points,
which is used in the Voronoi diagram Definition 12.1.
Let f (·) : Rn → R be defined by f (x) , x′ x and let x and y be two points in Rn ,
then:
d2 (x, y) = (x − y)′ (x − y) = f (x) − θ̂(x, y) (12.2.3)

where θ̂(x, y) , 2y ′ x − y ′ y. Let

θi (x) , θ̂(x, pi ) = 2p′i x − p′i pi , i ∈ N+q    (12.2.4)

Obviously, the square of the Euclidean distance of any point x ∈ Rn from any point
pi ∈ S is given by d2 (x, pi ) = f (x) − θi (x). Furthermore, given any x ∈ Rn , θi (x) is just
a solution of Equation (12.2.2):

hi (x, θ) = 0 ⇔ 2pi ′ (x − pi ) − (θ − p′i pi ) = 0. (12.2.5)

Let θ̄(x) be defined by:


θ̄(x) , max_{i∈N+q} θi (x)    (12.2.6)

It is clear that

hi (x, θ) ≤ 0, ∀i ∈ N+q ⇔ θ ≥ θ̄(x)    (12.2.7)

The lifting of the set S and the resulting calculation of the Voronoi cells is shown in
Figure 12.2.2.

Lemma 12.4 (Characterization of Voronoi Cell) Let x be in Rn and S = {pi ∈ Rn | i ∈ N+q };
then the Voronoi cell associated with the point pi is:

V (pi ) = {x | θ̄(x) = θi (x)}    (12.2.8)

where θ̄(x) and θi (x) are defined in (12.2.6) and (12.2.4), respectively.


Figure 12.2.2: Voronoi Lifting

Proof: The proof uses (12.2.3) and the fact that d(x, y) ≥ 0, so that:

V (pi ) , {x | d(x, pi ) ≤ d(x, pj ), ∀j ∈ N+q }
        = {x | d2 (x, pi ) ≤ d2 (x, pj ), ∀j ∈ N+q }
        = {x | f (x) − θi (x) ≤ f (x) − θj (x), ∀j ∈ N+q }
        = {x | −θi (x) ≤ −θj (x), ∀j ∈ N+q }
        = {x | θi (x) ≥ θj (x), ∀j ∈ N+q }
        = {x | θ̄(x) = θi (x)}

QeD.

Parametric linear programming formulation of Voronoi Diagrams

Here, we will assume that the parameter θ(x) is no longer a function of x but is instead
a free variable, henceforth denoted by θ. It will be shown how a parametric optimization
problem can be posed for the variables θ and x, such that the solution to the PLP is a
Voronoi diagram. Let the set Ψ ⊆ Rn+1 be defined by:

Ψ , {(x, θ) | hi (x, θ) ≤ 0, ∀i ∈ N+q }    (12.2.9a)
  = {(x, θ) | θ ≥ θ̄(x)}    (12.2.9b)

From (12.2.4), (12.2.6) and (12.2.8) we have

Ψ = {(x, θ) | M x + N θ ≤ p} (12.2.10)

where M , N and p are given by:


     
M = [2p′1 ; 2p′2 ; . . . ; 2p′q ],  N = [−1; −1; . . . ; −1],  p = [p′1 p1 ; p′2 p2 ; . . . ; p′q pq ]    (12.2.11)

Consider the cost function


g(x, θ) = 0x + 1θ (12.2.12)

and the following parametric program PV (x):

PV (x) : g o (x) = min_θ {g(x, θ) | (x, θ) ∈ Ψ}    (12.2.13)
               = min_θ {0x + 1θ | M x + N θ ≤ p}.    (12.2.14)

The parametric form of PV (x) is a standard form encountered in the literature on para-
metric linear programming [Gal95, Bor02]. It is obvious from (12.2.8) and (12.2.13) that
the optimiser θ◦ (x) of PV (x) is equal to θ̄(x).
Theorem 12.8. (Parametric LP formulation of Voronoi Diagrams) Let S , {pi ∈ Rn | i ∈ N+q }.
The explicit solution of the parametric problem PV (x) defined in (12.2.13)
yields the Voronoi Diagram of the set of points S.

Proof: The optimiser θ◦ (x) for problem PV (x) is a piecewise affine function of x [Gal95, Bor02]
and it satisfies, for all x ∈ Rn = ∪i∈N+q Ri :

θo (x) = Ti x + ti = θ̄(x), ∀x ∈ Ri

By Lemma 12.4, Ri is the Voronoi cell associated with the point pi . Hence, computing
the solution of PV (x) via PLP yields the Voronoi Diagram of S.

QeD.
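The geometric content of Theorem 12.8 is easy to check numerically: the optimiser of PV (x) is θ̄(x) = max_i (2p′i x − p′i pi ), and the index attaining the maximum identifies the Voronoi cell containing x, i.e. the nearest point of S. The sketch below verifies this equivalence on random data; it does not construct the explicit polyhedral partition (that would require a parametric LP solver such as the one in the MPT toolbox).

```python
# Sketch: the optimiser of PV(x) is the upper envelope theta_bar(x) and its
# active index is the nearest point of S (cf. Lemma 12.4 and Theorem 12.8).
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(-10.0, 10.0, size=(12, 2))       # random point set in R^2

def theta_bar_and_cell(x):
    theta = 2.0 * S @ x - np.sum(S * S, axis=1)  # theta_i(x) = 2 p_i'x - p_i'p_i
    return np.max(theta), int(np.argmax(theta))

for _ in range(5):
    x = rng.uniform(-10.0, 10.0, size=2)
    _, cell = theta_bar_and_cell(x)
    nearest = int(np.argmin(np.linalg.norm(S - x, axis=1)))
    print(cell == nearest)                       # expected: True
```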

Computation of the Delaunay Triangulation

We now show how to compute the Delaunay triangulation via a PLP for a given finite
set of points S , {pi ∈ Rn , i ∈ N+q }.


Figure 12.2.3: Calculation of a Delaunay Triangulation

Introduction to Delaunay Triangulation

The Delaunay triangulation of the set S , {pi ∈ Rn | i ∈ N+q } of vertices is a projection
on Rn of the lower convex hull of the set of lifted points U , {L(pi ) | pi ∈ S} ⊂ Rn+1 .
It is well known [Fuk00] that the Delaunay triangulation of the set S can be computed
in two steps. First, the lower convex hull of the lifted point set S is computed: U ,
lco({L(pi ) | pi ∈ S}). Second, each facet Fi of U is projected to Rn : Ti , ProjRn Fi . If
U has N+t facets, then the Delaunay triangulation of S is given by:

DT (S) , ∪_{i=1}^{N+t} Ti

This is illustrated in Figure 12.2.3.

Parametric linear programming formulation of Delaunay triangulation

This section shows how the Delaunay triangulation can be computed via an appropriately
formulated parametric linear program. Let S = {pi ∈ Rn | i ∈ N+q } be a finite point set
and U = {L(pi ) | pi ∈ S} be the lifted point set. From Definition 12.8, the lower convex
hull of U can be written as:

lco(U ) = {(x, γ ◦ ) ∈ co(U ) | γ ◦ = argmin{γ | (x, γ) ∈ co(U )}}. (12.2.15)

Equation (12.2.15) is equivalent to the following parametric linear program:

PD (x) : γ o (x) = min_γ {γ | (x, γ) ∈ co(U )}    (12.2.16)

Assumption 12.3 Throughout this section we assume that:

(i) The convex hull of S has non-empty interior, interior(co(S)) ≠ ∅, and

(ii) There do not exist n + 2 points that lie on the surface of the same n-dimensional
ball.

Remark 12.12 (Existence of Delaunay Triangulation) Assumption 12.3 ensures that


the Delaunay triangulation exists, is unique and is in fact a triangulation [Fuk00, Zie94,
OBSC00].
We will now show how (12.2.16) can be formulated in the standard form for parametric
linear program solvers. The convex hull of the lifted point set U can be written as:
( " # q
" # q )
x X pi X
co(U ) , (x, γ) ∃λ, = λi , λi = 1, λi ≥ 0 (12.2.17)
γ i=1 p′i pi i=1
( )
MI x + NI γ + LI λ ≤ bI
= (x, γ) ∃λ, (12.2.18)
ME x + NE γ + LI λ = bE

where MI , NI , LI and bI are given by:

MI = 0, NI = 0, LI = −I, bI = 0 (12.2.19)

and ME , NE , LE and bE are given by:

ME = [I; 0; 0],  NE = [0; 1; 0],  LE = [−S; −Y ; 1],  bE = [0; 0; 1]    (12.2.20)

where S , [p1 p2 . . . pq ], Y , [p′1 p1 p′2 p2 . . . p′q pq ] and 1 denotes a row vector of ones
(here [·; ·; ·] denotes row-wise stacking).
Problem PD (x) can now be written in standard form as:
PD (x) : γ o (x) = min_γ {0x + 0λ + 1γ | MI x + NI γ + LI λ ≤ bI , ME x + NE γ + LE λ = bE }    (12.2.21)

The explicit solution of the parametric problem PD (x) is a piecewise affine function:

γ o (x) = Gi x + gi , x ∈ Ri , i ∈ N+t    (12.2.22)

Theorem 12.9. (Parametric LP formulation of Delaunay Triangulation) Let S , {pi ∈
Rn | i ∈ N+q } be a given point set. Then the explicit solution of the parametric form of PD (x)
defined in (12.2.21) yields the Delaunay triangulation.

Proof: From Assumption 12.3 it follows that the Delaunay triangulation exists and is
unique [Fuk00, Zie94, OBSC00], i.e., the facets of the lower convex hull of the lifted point
set U are n dimensional simplices. It follows from the construction of lco(U ) that the
optimiser for problem PD (x) is a piecewise affine function of x and that (x, γ o (x)) is in

lco(U ), for all x ∈ co(S). Furthermore, the optimiser γ o (x) in each region Ri obtained by
solving PD (x) (12.2.21) as a parametric program is affine, i.e., γ o (x) = Ti x + ti if x ∈ Ri .
Thus, each region Ri is equal to the projection of a facet Fi , i.e. Ri = ProjRn Fi , i ∈ N+
t .
Hence the PLP PD (x) defined in (12.2.21) solves the Delaunay triangulation problem.

Q.E.D.

Numerical Examples

In order to illustrate the proposed PLP Voronoi and Delaunay algorithms a random set
of points S in R2 was generated and the corresponding Voronoi diagram and Delaunay
triangulation are shown in Figure 12.2.4. Figure 12.2.5 shows the Voronoi partition and Delaunay triangulation for a unit-cube.

[Two panels over the box −10 ≤ x1 , x2 ≤ 10: (a) Voronoi diagram, (b) Delaunay triangulation.]

Figure 12.2.4: Illustration of a Voronoi diagram and Delaunay triangulation for a given set of points S.

(a) Voronoi diagram (b) Delaunay triangulation

Figure 12.2.5: Illustration of the Voronoi diagram and Delaunay triangulation of a unit-
cube P in R3 .

The presented algorithms can be implemented using standard computational geometry software [Ver03, KGBM03]. The algorithms presented in this section are also included in the MPT toolbox [KGBM03].
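The random-point example can also be reproduced with off-the-shelf computational geometry routines, which is a convenient cross-check of the PLP-based algorithms (Python/scipy sketch with illustrative data):

```python
import numpy as np
from scipy.spatial import Voronoi, Delaunay

rng = np.random.default_rng(1)
S = rng.uniform(-10, 10, size=(15, 2))   # random planar point set, as in Figure 12.2.4

vor = Voronoi(S)                         # Voronoi diagram of S
tri = Delaunay(S)                        # Delaunay triangulation of S

print("Voronoi vertices:\n", vor.vertices)
print("Delaunay simplices (vertex indices):\n", tri.simplices)
# matplotlib's voronoi_plot_2d / triplot can be used to reproduce plots like Figure 12.2.4
```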

12.3 A Logarithmic–Time Solution to the Point Location Problem for Closed–Form Linear MPC
It is standard practice to implement an MPC controller by solving on–line an optimal
control problem that, when the system is linear and the constraints are polyhedral,
amounts to computing a single linear or quadratic program at each sampling instant
depending on the type of control objective. In recent years, it has become well-known
that the optimal input is a piecewise affine function (PWA) defined over a polyhedral
partition of the feasible states [Bor02]. Several methods of computing this affine function
can be found in the literature (e.g., [TJB03a, BMDP02, Bor02]). The on–line calculation
of the control input then becomes one of determining the region that contains the current
state and is known as the point location problem.
The complexity of calculating this function is clearly dependent on the number of
affine regions in the solution. This number of regions is known to grow very quickly and
possibly exponentially, with horizon length and state/input dimension [BMDP02]. The
complexity of the solution therefore implies that for large problems an efficient method
for solving the point location problem is needed.
The key contributions to this end have been made by [TJB03b] and [BBBM01].
In [TJB03b], the authors propose construction of a binary search tree over the polyhedral
state-space partition. Therein, auxiliary hyper-planes are used to subdivide the partition
at each tree level. Note that these auxiliary hyper-planes may subdivide existing regions.
The necessary on–line identification time is logarithmic in the number of subdivided
regions, which may be significantly larger than the original number of regions. Although
the scheme works very well for smaller partitions, it is not applicable to large controller
structures due to the prohibitive pre-processing time. If R is the number of regions and F̄ the average number of facets defining a region, then the approach requires the solution of R² · F̄ LPs (it is possible to improve the pre-processing time at the cost of less efficient, non-logarithmic, on-line computation times). However, the scheme in [TJB03b] is applicable to any type of closed–form MPC controller, whereas the algorithm proposed in this section2 considers only the case in which controllers are obtained via a linear cost. The approach proposed here is not directly applicable to non-convex controller partitions and can only be applied to controllers obtained with a quadratic cost if the solution exhibits a specific structure, which is not guaranteed a priori for controller partitions obtained with quadratic control objectives.

2 This section is based on work done in collaboration with Pascal Grieder and Colin Jones; the primary contributor is Colin Jones.
In [BBBM01] the authors exploit the convexity properties of the piecewise affine
(PWA) value function of linear MPC problems to solve the point location problem effi-
ciently. Instead of checking whether the point is contained in a polyhedral region, each
affine piece of the value function is evaluated for the current state. Since the value
function is PWA and convex, the region containing the point is associated to the affine
function that yields the largest value. Although this scheme is efficient, it is still linear
in the number of regions.
In this section, we combine the concept of region identification via the value-
function [BBBM01] with the construction of search trees [TJB03b]. We demonstrate
that the PWA cost function can be interpreted as a weighted power diagram, which is
a type of Voronoi diagram, and exploit recent results in [AMN+ 98] to solve the point
location problem for Voronoi diagrams in logarithmic time at the cost of very simple
pre-processing operations on the controller partition.
We focus on MPC problems with 1- or ∞-norm objectives and show that evaluating
the optimal PWA function for a given state can be posed as a nearest neighbour search
over a finite set of points. In [AMN+ 98] an algorithm is introduced that solves the nearest
neighbour problem in n dimensions with R regions in time O(cn,ǫ n log R) and space
O(nR) after a pre-processing step taking O(nR log R), where cn,ǫ is a factor depending
on the state dimension and an error tolerance ǫ. Hence, the optimal control input can
be found on–line in time logarithmic in the number of regions R.

Preliminaries and Problem Formulation

We first recall the standard linear, constrained model predictive control problem. Con-
sider the discrete, linear, time-invariant model:

x+ = Ax + Bu, (12.3.1)

where A ∈ Rn×n , B ∈ Rn×m , (A, B) is controllable and x+ is the state at the next point
in time given the current measured state x ∈ Rn and the input u ∈ Rm . The following
set of hard state and input constraints is imposed on system (12.3.1):

x ∈ X, u ∈ U (12.3.2)

In words, the state x is constrained to lie in a polytopic set X ⊂ Rn at each point in time
and the input u is required to be in the polytope U ⊂ Rm , where the sets X and U both
contain the origin in their interiors. Additionally, the terminal constraint set is imposed
on the terminal state:
xN ∈ Xf ⊆ X (12.3.3)

where xN = φ(N ; x, u) and Xf is a polytope that contains the origin as an interior point.
The path and the terminal costs are defined by:

ℓ(x, u) , |Qx|p + |Ru|p or ℓ(x, u) = |x|2Q + |u|2R


Vf (x) , |P x|p or Vf (x) = |x|2P (12.3.4)

where p = 1, ∞ so that the cost function is:
V (x, u) , Σ_{i=0}^{N −1} ℓ(xi , ui ) + Vf (xN )   (12.3.5)

where xi = φ(i; x, u), i ∈ NN .


The set of constraints (12.3.2)–(12.3.3) constitutes the implicit constraint on the set
of admissible input sequences defined by:

U(x) , {u | (φ(i; x, u), ui ) ∈ X × U, i ∈ NN −1 , φ(N ; x, u) ∈ Xf } (12.3.6)

The finite horizon optimal control problem takes the following form:

PN (x) : V 0 (x) , min_u {V (x, u) | u ∈ U(x)},   u0 (x) , arg min_u {V (x, u) | u ∈ U(x)}   (12.3.7)

Hence, u0 (x) = {u00 (x), u01 (x), . . . , u0N −1 (x)}.


In model predictive control, the optimal control problem PN (x) is solved at each sampling time, using the current state as the initial state of the process, and the control law κN (·) defined by:

κN (x) , u00 (x)   (12.3.8)

is applied to the system. The set of states that are controllable by applying MPC is clearly given by:

XN , {x | U(x) ≠ ∅}   (12.3.9)

We refrain here from a general discussion of the properties of the MPC controller and refer the interested reader to [MRRS00] for a set of appropriate ingredients (such as cost, terminal constraint set, etc.) for the optimal control problem PN (x) that ensure desired properties (such as stability, invariance, etc.) of the resulting MPC controller.
It is well known that the optimal control problem PN (x) is a standard linear program-
ming problem if p = 1, ∞ or a quadratic programming problem if the path and terminal
costs are quadratic functions. Since any quadratic or linear programming problem can be solved by parametric programming tools, it follows that the solution to PN (x) can be obtained by solving a parametric quadratic or linear programming problem. The optimiser and optimal cost are piecewise affine functions in the case of linear path and terminal costs, and piecewise affine and piecewise quadratic functions, respectively, in the case of quadratic path and terminal costs. If parametric programming tools are used to solve the optimal control problem PN (x) we refer to its solution as a closed–form MPC. In this
section we will concentrate on the linear path and terminal costs (p = 1, ∞) and therefore
we need to recall some preliminary results.

Linear Programming Formulation

We briefly restate a formulation of PN (x) as a linear programming problem if p = 1, ∞.


The procedure follows the lines of [AP92, All93, BBM00a]. Let:

α , {α0 , α1 , . . . , αN } and β , {β0 , β1 , . . . , βN −1 } (12.3.10)

where for all i, αi ∈ Rn if p = 1 and αi ∈ R if p = ∞ and similarly, βi ∈ Rm if p = 1 and


βi ∈ R if p = ∞. Let γ , (u, α, β) and let:
Γ(x) , {γ | −αi ≤ Qxi ≤ αi , −βi ≤ Rui ≤ βi , i ∈ NN −1 , −αN ≤ P xN ≤ αN , u ∈ U(x)}  if p = 1,
Γ(x) , {γ | −1αi ≤ Qxi ≤ 1αi , −1βi ≤ Rui ≤ 1βi , i ∈ NN −1 , −1αN ≤ P xN ≤ 1αN , u ∈ U(x)}  if p = ∞   (12.3.11)

where 1 is a vector of ones of appropriate length and for all i, xi = φ(i; x, u). It is
remarked in [BBM00a] that an equivalent optimization problem to the optimal control
problem PN (x) for p = 1, ∞ is obtained by defining the cost function to be:
V e (x, γ) , Σ_{i=0}^{N −1} (1′ αi + 1′ βi ) + 1′ αN  if p = 1,   V e (x, γ) , Σ_{i=0}^{N −1} (αi + βi ) + αN  if p = ∞   (12.3.12)

where again 1 should be considered as a vector of ones of appropriate dimensions. Thus,


the equivalent optimization problem is:

PeN (x) : V 0 (x) = min_γ {V e (x, γ) | γ ∈ Γ(x)},   γ 0 (x) = arg min_γ {V e (x, γ) | γ ∈ Γ(x)}   (12.3.13)

It is a trivial observation that the optimizer u0 (x) is easily constructed from the
knowledge of γ 0 (x) by a simple selection operation, i.e. u0 (x) = Su γ 0 (x). Similarly the
first term of the optimal control input sequence is obtained by u00 (x) = Su0 γ 0 (x), where
Su and Su0 are appropriate selection matrices. Hence, the implicit model predictive
control law is given by:
κN (x) = Su0 γ 0 (x) (12.3.14)

It is clear that the optimization problem PeN (x) is a standard linear programming problem and can be re-written as:

V 0 (x) = min_γ { c′ γ | (x, γ) ∈ P }   (12.3.15)

where c is an appropriate vector, easily constructed from (12.3.12) depending on the choice of the norm, and P is a polytope readily constructed from (12.3.6) and (12.3.11).
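For concreteness, the following sketch (Python/scipy) assembles and solves this LP for the ∞-norm case and a single measured state. The system, weights, horizon and box constraints are hypothetical, and the terminal set is simply taken equal to the state constraint set for brevity, so it should be read as an illustration of the epigraph reformulation rather than a complete MPC design (proper terminal ingredients should be chosen as in [MRRS00]):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the thesis): a double integrator with box constraints.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2); R = np.eye(1); P = np.eye(2)      # weights of the infinity-norm cost
N = 5
x_max, u_max = 5.0, 1.0
n, m = A.shape[0], B.shape[1]

def predict(i):
    """Return (A^i, E_i) such that x_i = A^i x0 + E_i u, u = (u_0, ..., u_{N-1}) stacked."""
    Ei = np.zeros((n, N * m))
    for j in range(i):
        Ei[:, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - 1 - j) @ B
    return np.linalg.matrix_power(A, i), Ei

def solve_PeN(x0):
    """Solve the infinity-norm problem (12.3.13) as a single LP for one state x0.

    Decision vector gamma = (u, alpha_0..alpha_N, beta_0..beta_{N-1}); the terminal
    set X_f is simply X here (a simplification made for this sketch)."""
    x0 = np.asarray(x0, dtype=float)
    nu, nab = N * m, (N + 1) + N
    c = np.zeros(nu + nab); c[nu:] = 1.0                 # sum of alphas and betas

    rows, rhs = [], []
    def add(Mu, Mab, b):                                 # Mu u + Mab (alpha, beta) <= b
        blk = np.zeros((Mu.shape[0], nu + nab))
        blk[:, :nu] = Mu; blk[:, nu:] = Mab
        rows.append(blk); rhs.append(b)

    for i in range(N + 1):
        Ai, Ei = predict(i)
        W = P if i == N else Q
        e_a = np.zeros((1, nab)); e_a[0, i] = 1.0        # selects alpha_i
        for s in (+1.0, -1.0):
            # epigraph constraints  -1 alpha_i <= W x_i <= 1 alpha_i
            add(s * W @ Ei, -np.ones((n, 1)) @ e_a, -s * W @ Ai @ x0)
            # state (and, for i = N, terminal) constraint |x_i|_inf <= x_max
            add(s * Ei, np.zeros((n, nab)), x_max * np.ones(n) - s * Ai @ x0)

    for i in range(N):
        Su = np.zeros((m, nu)); Su[:, i * m:(i + 1) * m] = np.eye(m)
        e_b = np.zeros((1, nab)); e_b[0, N + 1 + i] = 1.0   # selects beta_i
        for s in (+1.0, -1.0):
            # epigraph constraints  -1 beta_i <= R u_i <= 1 beta_i
            add(s * R @ Su, -np.ones((m, 1)) @ e_b, np.zeros(m))
            # input constraint |u_i|_inf <= u_max
            add(s * Su, np.zeros((m, nab)), u_max * np.ones(m))

    res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                  bounds=[(None, None)] * (nu + nab))
    return (res.fun, res.x[:m]) if res.success else (None, None)

V0, u0 = solve_PeN([1.0, 0.5])
print(V0, u0)    # optimal value V0(x0) and the MPC control kappa_N(x0) = u_0^0(x0)
```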
Hence, the optimization problem PeN (x) can be solved by exploiting parametric pro-
gramming techniques so that its explicit, closed–form solution can be computed efficiently off-line. Thus, the optimal cost of (12.3.7) is a convex, piecewise affine function of the state x, taking Rn to R, and is defined over a polytopic partition
R = {Ri | i ∈ NR } of XN :

V 0 (x) = Fr′ x + fr , if x ∈ Rr , r ∈ NR , (12.3.16)

where each Rr is a polytope. Furthermore, the optimiser of LP (12.3.7) is a piecewise


affine function of x taking Rn to RN (m+l) as is the control law κN (·), which takes Rn to
Rm and is defined over the same polytopic partition:

κN (x) = u00 (x) = Tr x + tr , if x ∈ Rr , r ∈ NR . (12.3.17)

[Two 3-D surface plots over (x1 , x2 ): (a) value function V 0 (x), (b) control law κN (x).]

Figure 12.3.1: Illustration of the value function V 0 (x) and control law κN (x) for a randomly generated pLP.

Point Location Problem

The problem we are interested in is: given a measured state x and the polytopic partition R = {Ri | i ∈ NR } of XN , determine any integer3 i(x) ∈ NR such that the polytope Ri(x)
contains x.
The function i(x) defines the control law κN (x) as

κN (x) = u00 (x) = Ti(x) x + ti(x) .

As V 0 (x) is convex, the calculation of i(x) can be written as [BBBM01]:

i(x) = arg max_r {Fr′ x + fr | r ∈ NR }   (12.3.18)

As was proposed in [BBBM01], i(x) can be computed from (12.3.18) by simply eval-
uating the cost Fr′ x + fr for each r ∈ NR and then taking the largest. This procedure
requires 2nR flops and has a storage requirement of (n + 1)R.
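In code, the procedure of (12.3.18) is a single pass over the regions; the following sketch (Python/numpy, with randomly generated and purely illustrative data Fr , fr ) makes the linear-time nature explicit:

```python
import numpy as np

def point_location_naive(F, f, x):
    """Linear-time point location of [BBBM01]: evaluate F_r' x + f_r for every region
    and return an index attaining the maximum (2nR flops, (n+1)R storage)."""
    return int(np.argmax(F @ x + f))

# illustrative data: R affine pieces of a (hypothetical) PWA value function in R^n
rng = np.random.default_rng(0)
R, n = 1000, 4
F = rng.normal(size=(R, n))     # rows F_r'
f = rng.normal(size=R)          # offsets f_r
x = rng.normal(size=n)
print(point_location_naive(F, f, x))
```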
We show that with a negligible pre-processing step, (12.3.18) can be computed in loga-
rithmic time, which is a significant improvement over the linear time result of [BBBM01].
3 The state may be on the boundary of several regions.
Point Location and Nearest Neighbours

We proceed to show that for pLPs, the point location problem can be written as an
additively weighted nearest neighbour search, or a search over R points in Rn to determine
which is closest to the state x.
Consider the finite set of points called sites S , {s1 , . . . , sR } and the weights W ,
{w1 , . . . , wR }, where (si , wi ) ∈ Rn × R, ∀i ∈ NR . Given a point x in Rn , the weighted nearest neighbour problem is the determination of the site sr ∈ S that is closest to x in the weighted sense, i.e. that minimises |sj − x|22 + wj over all (sj , wj ) ∈ S × W, j ∈ NR . Associated with each site is a set of points Lr ⊂ Rn such that for each x ∈ Lr , x is closer to sr than to any other site:

Lr , {x | |sr − x|22 + wr ≤ |sj − x|22 + wj , ∀j ∈ NR }.   (12.3.19)

Note that the sets Lr form a polytopic partition LV , {Li | i ∈ NR } of Rn [Aur91]. If


the weights wr are all zero, then the sets Lr form a Voronoi diagram, otherwise they are
called a power diagram [Aur91].
We now state the main result of this section:
Theorem 12.10. (Embedding a polytopic partition corresponding to the solution of a
pLP into a power diagram) If R = {Ri | i ∈ NR } is a polytopic partition corresponding
to the solution of a pLP, then there exists a power diagram LV , {Li | i ∈ NR } of Rn
such that Ri ⊆ Li for all i ∈ NR .

Proof: It suffices to show that for a given polytopic partition corresponding to the
solution of a pLP, R = {Ri | i ∈ NR }, it is possible to define a set of sites and weights
such that their power diagram LV , {Li | i ∈ NR } of Rn satisfies Ri ⊆ Li for all i ∈ NR .
A state x is contained in set Rr if and only if

x ∈ XN and Fr′ x + fr ≥ Fj′ x + fj , ∀j ∈ NR ,

where XN is defined in (12.3.9), or equivalently, if and only if:

x ∈ XN and − Fr′ x − fr ≤ −Fj′ x − fj , ∀j ∈ NR .

We define the R sites and weights as:

sr , Fr /2,   wr , −fr − |Fr /2|22 = −fr − |sr |22   (12.3.20)
For all r ∈ NR and a given x it follows that:

|sr − x|22 + wr = −Fr′ x − fr + |x|22

Recalling the definition of Lr in (12.3.19) we obtain the following ∀j ∈ NR :

Lr , {x | |sr − x|22 + wr ≤ |sj − x|22 + wj }


= {x | − Fr′ x − fr + |x|22 ≤ −Fj′ x − fj + |x|22 }
= {x | − Fr′ x − fr ≤ −Fj′ x − fj }
= {x | Fr′ x + fr ≥ Fj′ x + fj }

Thus, Lr ∩ XN = Rr for all r ∈ NR , establishing the claim.

Q.E.D.

Remark 12.13 (Equivalence of the constrained power diagram and a polytopic partition
corresponding to the solution of a pLP) If we were to define the constrained power diagram
by L∗V , {L∗i | i ∈ NR }, where (for all i ∈ NR ) L∗i , Li ∩ XN and Li are defined as in
the proof above, it is possible to establish that Ri = L∗i for all i ∈ NR .
A very important consequence of Theorem 12.10 is that the point location prob-
lem (12.3.18) can be solved by determining which site sr is closest to the current state
x:

i(x) = {r ∈ NR | |sr − x|22 + wr ≤ |sj − x|22 + wj , ∀j ∈ NR }
     = arg min_{r∈NR} | (s′r , √wr )′ − (x′ , 0)′ |2

Since this problem has been well studied in the computational geometry literature we
propose to adapt an efficient algorithm introduced in [AMN+ 98] that solves the nearest
neighbour problem in logarithmic time and thereby solves the point location problem in
logarithmic time. We refer the interested reader to [JGR05] for a more detailed descrip-
tion of the algorithm introduced in [AMN+ 98].
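A minimal sketch of the resulting pipeline is given below (Python/scipy). The sites and weights are built from (Fr , fr ) exactly as in (12.3.20); all weights are then shifted by a common constant, which does not change the minimiser, so that their square roots exist, and a kd-tree is used as a stand-in for the BBD-tree of the ANN library [AMN+ 98, MA98]. The data Fr , fr are randomly generated and purely illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_power_diagram_tree(F, f):
    """Pre-processing: sites s_r = F_r/2 and weights w_r = -f_r - |s_r|^2 as in (12.3.20),
    lifted to (s_r, sqrt(w_r + c)) with c chosen so that all shifted weights are >= 0
    (adding a common constant does not change which site is nearest)."""
    S = F / 2.0
    w = -f - np.sum(S**2, axis=1)
    c = max(0.0, -w.min())
    lifted = np.hstack([S, np.sqrt(w + c)[:, None]])
    return cKDTree(lifted)

def point_location_log(tree, x):
    """On-line step: nearest-neighbour query of the lifted query point (x, 0); the squared
    distance to site r equals |s_r - x|^2 + w_r + c, so the returned index maximises
    F_r' x + f_r (Theorem 12.10)."""
    _, idx = tree.query(np.concatenate([x, [0.0]]))
    return int(idx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R, n = 10000, 4
    F = rng.normal(size=(R, n)); f = rng.normal(size=R)   # hypothetical PWA cost data
    tree = build_power_diagram_tree(F, f)
    x = rng.normal(size=n)
    assert point_location_log(tree, x) == int(np.argmax(F @ x + f))
```

A kd-tree returns exact answers and is typically fast in low dimensions; the BBD-tree of [AMN+ 98] is what provides the (approximate) logarithmic worst-case query time quoted above.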

Remark 12.14 (Lifting of a polytopic partition corresponding to the solution of a pLP)


In [Aur87] it was shown that a polytopic partition is a power diagram if and only if
there exists a piecewise affine, continuous and convex function in Rn+1 such that the
projection of each affine piece of the function from Rn+1 to Rn is a set in the polytopic
partition. This piecewise affine function is called a lifting of the polytopic partition.
From the proof of Theorem 12.10, it is clear that the polytopic partition corresponding
to the solution of a pLP has a lifting.

Remark 12.15 (Quadratic Cost Case) If quadratic cost is used in the formulation of
the MPC problem (12.3.7) then the resulting polytopic partition corresponding to the
solution of a pQP may or may not have a lifting. Although it is not difficult to find
problems for which a lifting does not exist, general conditions for the existence of a
lifting for quadratic costs are not known. See [Aur91, Ryb99] for details on testing when
a polytopic partition has an appropriate lifting.

Remark 12.16 (Logarithmic Time and Error Bound) The ǫ error is required in order
to prove the logarithmic search time [AMN+ 98]. As the optimal feedback κN (x) can be
chosen to be continuous this error in determining the region translates into a maximum
error in the input that is proportional to ǫ. Therefore, the error in the control input can
be made arbitrarily small with an appropriate selection of ǫ.

Examples

Here we consider various systems and compare the on–line calculation times of the method
proposed in this section to the scheme in [BBBM01]. Although the scheme in [TJB03b]
may lead to more significant runtime improvements than [BBBM01], the necessary pre-
processing time is prohibitive for large partitions and we therefore refrain from performing
a comparison to that scheme.

Large Random System

Consider the following 4-dimensional LTI system:


   
xk+1 = [ 0.7 −0.1 0 0 ; 0.2 −0.5 0.1 0 ; 0 0.1 0.1 0 ; 0.5 0 0.5 0.5 ] xk + [ 0 0.1 ; 0.1 1 ; 0.1 0 ; 0 0 ] uk ,

subject to the constraints |uk |∞ ≤ 5 and |xk |∞ ≤ 5.
The example was solved for the infinity norm p = ∞, prediction horizon N = 5 and for
weighting matrices Q = I and R = I. The resulting controller partition consists of R = 12,290 regions. The construction of the search tree required 0.03 seconds. In comparison, the approach in [TJB03b] would require the solution of approximately 151,000,000 LPs, which is clearly prohibitive in terms of runtime. For ǫ = 0.01, the average and worst-case numbers of floating point operations to compute the input using ANN [MA98] are 29,450 and 36,910, respectively. In comparison, the approach in [BBBM01] always takes exactly 160,000 operations.

Randomly Generated Regions

In this section we compare the computational complexity of the approach presented here with that discussed in [BBBM01] for very large systems. The currently available multi-parametric solvers [KGBM03] produce reliable results for partitions of up to approximately 30,000 regions. However, methods are currently being developed that will provide solutions for much larger problems. Therefore, in order to give a speed comparison we have randomly generated vectors Fr and fr in the form of (12.3.18). The code developed in [AMN+ 98], which is available at [MA98], was then used to execute 1,000 random queries and the worst case is plotted in Figure 12.3.2. For all of the queries
the error parameter ǫ was set to zero and therefore the solution returned is the exact
solution. It should be noted that the preprocessing time for one million regions and 20
dimensions is merely 22.2 seconds.
Figure 12.3.2 shows the number of floating point operations (flops) as a function of
the number of regions for the two approaches and the dimension of the state-space. Note
that both axes are logarithmic.
A 3.0 GHz Pentium 4 computer can execute approximately 800 × 10⁶ flops per second. It follows that for a 10-dimensional system whose solution has one million regions, the control action can be computed at a rate of 20 kHz using the proposed method, whereas that given in [BBBM01] could run at only 35 Hz.

[Log–log plot of the worst-case number of flops (in millions) against the number of regions Nr , for state dimensions 2 and 10; solid lines: ANN, dashed lines: the approach of [BBBM01].]

Figure 12.3.2: Comparison of ANN (solid lines) to the approach of Borrelli et al. [BBBM01] (dashed lines).
It is clear from Figure 12.3.2 that the calculation speed of the proposed method is
very good for systems with a large number of regions. Furthermore, note that controller partitions where ANN does worse than [BBBM01] are virtually impossible to generate, i.e. a partition in dimension n = 10 with fewer than R = 100 regions is very difficult to
contrive. Hence, it can be expected that for all systems of interest, the proposed scheme
will result in a significant increase in speed. Since explicit feedback MPC is generally
being applied to systems with very fast dynamics, any speedup in the set-membership test
is useful in practice, i.e. the scheme proposed here is expected to significantly increase
sampling rates.

12.4 Summary
In Section 12.1 we have demonstrated how to exploit the efficient techniques for compu-
tations with polygons combined with set invariance theory to compute non-convex robust
positively invariant sets for piecewise-affine (PWA) systems. In addition, sufficient con-
ditions for finite time determination of the proposed algorithm are established. We have
furthermore shown how these methods may be used to obtain robust state feedback
controllers for PWA systems if combined with parametric programming techniques.
Section 12.2 demonstrated that Voronoi diagrams, Delaunay triangulations and para-
metric linear programming are connected. It was shown how to formulate appropriate
parametric linear programming problems in order to obtain the Voronoi diagram, or the
Delaunay triangulation of a finite set of points S. These algorithms are not necessarily the most efficient algorithms for computing Voronoi diagrams and Delaunay triangulations, but they are easily generalized to arbitrary dimensions. Moreover, the link established between parametric programming techniques and Voronoi diagrams and Delaunay triangulations contributed to the results reported in Section 12.3.
Section 12.3 has presented a method of solving the point location problem for linear-
cost MPC problems. If the controller partition exhibits a specific structure, the proposed
scheme can also be applied to quadratic-cost MPC problems. It has been shown that
the method is linear in the dimension of the state-space and logarithmic in the number
of regions. Numerical examples have demonstrated that this approach is superior to
the current state of the art and that for realistic examples, several orders of magnitude
improvement in sampling rates are possible.

Part IV

Conclusions

Chapter 13

Conclusion

This is not the end. It is not even the beginning of the end. But it is, perhaps, the end
of the beginning.

– Sir Winston Leonard Spencer Churchill

We conclude by summarizing the main contributions of this thesis with a set of


comments on rather straightforward extensions and clear directions for future research.

13.1 Contributions
The main contributions of this thesis are in Set Invariance theory, Reachability Analysis,
Robust Model Predictive Control and Parametric Mathematical Programming.

13.1.1 Contributions to Set Invariance Theory and Reachability Analysis

• A set of methods for computation of Invariant Approximations of RPI sets for


linear systems is given in the second chapter. The proposed methods provide a set
of approximation techniques that enable computation of invariant approximation
of the minimal and the maximal robust positively invariant set for linear discrete
time systems.

• Concept of Optimized Robust Control Invariance for a discrete-time, linear, time-


invariant system subject to additive state disturbances is introduced and discussed
in the third chapter. Novel procedures for the computation of robust control in-
variant sets and corresponding controllers are presented. A novel characterization
of a family of robust control invariant sets is proposed.

• The concept of Set Robust Control Invariance, introduced in the fourth chapter, extends standard concepts of robust control invariance. Concepts of set invariance are extended to trajectories of tubes – sets of states. A family of the sets of set robust control invariant sets is characterized. Analogously to the concepts of the minimal and the maximal robust positively invariant sets, the concepts of the minimal and the maximal set robust positively invariant sets are established.

• Regulation of discrete-time linear systems with positive state and control constraints
and bounded disturbances is addressed in the fifth chapter. This problem is relevant
for cases in which the controlled system is required to operate as close as possible to, or at, the boundary of the constraint sets, i.e. when any deviation of the control and/or state from its steady state value must be directed to the interior of its constraint set. To address these problems, results of the third chapter are extended to enable the characterization of a novel family of robust control invariant sets for linear systems under positivity constraints. The existence of a constraint admissible member
of this family can be checked by solving a single linear or quadratic programming
problem.

• Robust Time Optimal Obstacle Avoidance Problem for discrete–time systems is


addressed in the sixth chapter. A set of results that enable use of polytopic algebra
in order to address a relevant problem of the robust time optimal obstacle avoidance
for discrete–time systems is provided.

• A Reachability Analysis for Constrained Discrete Time Systems with State- and
Input-Dependent Disturbances is given in the seventh chapter. We have provided
the solution of the reachability problem for nonlinear, time-invariant, discrete-time
systems subject to mixed constraints on the state and input with a persistent
disturbance, dependent on the current state and input. These are new results that
allow one to compute the set of states which can be robustly steered in a finite
number of steps, via state feedback control, to a given target set. Existing methods
fail to address state- and input-dependent disturbances.

• The problem of State Estimation for Piecewise Affine discrete time systems subject
to bounded disturbances is solved in the eighth chapter. It is shown that the state
lies in a closed uncertainty set that is determined by the available observations and
that evolves in time. The uncertainty set is characterized and a recursive algorithm
for its computation is presented. Recursive algorithms are proposed for filtering
prediction and smoothing problems.

13.1.2 Contributions to Robust Model Predictive Control

• A basic idea for feedback model predictive control based on the use of tubes –
sequences of sets of states – is discussed. Additionally, a set of techniques for
designing stabilizing controllers is identified. More precisely, we have discussed
choice of ‘tube’ path cost and terminal cost, as well as choice of ‘tube terminal set’
and ‘tube cross–section’ in order to ensure adequate stability properties.

• A set of computationally efficient algorithms for robust model predictive control of constrained linear, discrete-time systems in the presence of bounded
disturbances is presented. Three methods ensuring robust exponential stability of
an appropriate robust control invariant set have been devised.

13.1.3 Contributions to Parametric Mathematical Programming

• Reverse transformation and its use in Parametric Mathematical Programming is


discussed. These techniques are used to characterize solutions to a number of im-
portant constrained optimal control problems such as constrained linear quadratic
regulator and optimal control of constrained piecewise affine discrete time systems.

• A set of applications of parametric mathematical programming is reported; these


include robust one step ahead controllers, computation of Voronoi diagrams and
Delaunay triangulations as well as an efficient algorithm for the point location
problem in closed loop MPC.

13.2 Directions for future research


There is scope for numerous extensions of the work reported in this thesis. We shall just
briefly remark on a subset of possible simple extensions and lines of future research.

13.2.1 Extensions of results related to Set Invariance Theory and Reachability Analysis

• Procedures for the computation of invariant approximations of robust positively invariant sets can be extended to the case when the dynamics are piecewise affine. This would be an inter-
esting extension of results reported in the second chapter, where linear dynamics
are treated.

• A set of efficient computational procedures for optimized robust control invariance


can be extended to a more general setting as already remarked in the third and the
fifth Chapter. These extensions can incorporate cases when the dynamics are para-
metrically uncertain and when the disturbances belong to an arbitrary polytope.
Moreover, an interesting extension would be to consider piecewise affine systems
and try to devise an analogous concept (optimized robust control invariance) for
this class of discrete time systems. This extension might not be simple but it is
certainly a relevant line of research.

• The concept of Set Robust Control Invariance can be extended to address the problem of set invariance in the case of imperfect state information. An application of this concept to output feedback invariance (computation of an invariant set of states controlled by output information) is under current investigation; a family of such sets has been identified and the first computational procedures are being developed.

• The robust one step ahead controller can be combined with results of the sixth and the seventh chapters to devise a set of efficient and relatively low complexity controllers for the problems considered in these chapters.

• Computational complexity of the method for state estimation of uncertain piecewise


affine discrete time systems can be reduced by employing appropriate approxima-
tions. These would reduce the computational burden but it is necessary to perform
additional analysis due to the highly complex dynamical behavior of piecewise affine
discrete time systems.

13.2.2 Extensions of results related to Robust Model Predictive Control

• Some extensions of the reported results have already appeared, such as [RM04a]. It is possible to extend these results to guarantee certain improved
stability properties. Moreover, it is an interesting issue to establish what classes
of discrete time systems allow for efficient application of tube MPC. It is possible
to devise a robust output feedback MPC for constrained linear discrete time sys-
tems by using tubes. This method would take into account estimation errors as
well as additive disturbances and it would ensure strong stability properties of the
controlled uncertain system.

13.2.3 Extensions of results related to Parametric Mathematical Programming

• A promising direction for further research is to consider possible simplifications of


certain optimal control problems and to attempt to reduce complexity of the re-
sulting controller, obtained by parametric mathematical programming, by relaxing
requirements of optimality and by considering simpler objectives.

I will continue!

– Leonardo da Vinci

Appendix A

Geometric Computations with Polygons

Given two polygons C and D that are unions of finite sets of non-empty polyhedral (polytopic) sets, i.e.

C , ∪_{i∈Nq} Ai and D , ∪_{j∈Np} Bj ,

the following are trivial consequences of the basic set operations (a small computational sketch illustrating the pairwise identities is given after the list):

◦ Set Intersection of two polygons:


   
C ∩ D , (∪_{i∈Nq} Ai ) ∩ (∪_{j∈Np} Bj ) = ∪_{(i,j)∈Nq ×Np} (Ai ∩ Bj )

◦ Set Union of two polygons:


   
C ∪ D , (∪_{i∈Nq} Ai ) ∪ (∪_{j∈Np} Bj ) = ∪_{(i,j)∈Nq ×Np} (Ai ∪ Bj )

◦ Complement of a polygon C contained in a polygon D:

Cc ∩ D , D \ C

◦ Subset test of two polygons C and D (test whether C is a subset of D):

C ⊆ D ⇔ C \ D = ∅

◦ Equality Test of two polygons C and D (test whether C is equal to D):

C = D ⇔ (C ⊆ D and D ⊆ C) ⇔ (C \ D = ∅ and D \ C = ∅) ⇔ C △ D = ∅

◦ Minkowski Set Addition of a polygon C and a polytope B:

C ⊕ B , {z | z = x + y, x ∈ C, y ∈ B}
      = {z | z = x + y, x ∈ ∪_{i∈Nq} Ai , y ∈ B}
      = {z | z = x + y, x ∈ Ai , y ∈ B, i ∈ Nq }
      = ∪_{i∈Nq} {z | z = x + y, x ∈ Ai , y ∈ B}
      = ∪_{i∈Nq} (Ai ⊕ B)

◦ Minkowski Set Addition of two polygons:

C ⊕ D , {z | z = x + y, x ∈ C, y ∈ D}
      = {z | z = x + y, x ∈ ∪_{i∈Nq} Ai , y ∈ ∪_{j∈Np} Bj }
      = {z | z = x + y, x ∈ Ai , y ∈ Bj , (i, j) ∈ Nq × Np }
      = ∪_{(i,j)∈Nq ×Np} {z | z = x + y, x ∈ Ai , y ∈ Bj }
      = ∪_{(i,j)∈Nq ×Np} (Ai ⊕ Bj )

◦ Pontryagin Set Difference of a polytope B and a polygon C:


B ⊖ C = ∩_{i∈Nq} (B ⊖ Ai )
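As a small computational illustration of the pairwise identities above, the following Python sketch represents each polytopic piece in halfspace form {x | Hx ≤ k}, so that the intersection of two pieces is simply the stacked system of inequalities; emptiness of a piece is checked with an LP. All data and helper names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def is_empty(H, k):
    """Check emptiness of {x | Hx <= k} via an LP feasibility problem."""
    n = H.shape[1]
    res = linprog(np.zeros(n), A_ub=H, b_ub=k, bounds=[(None, None)] * n)
    return not res.success

def polygon_intersection(polyC, polyD):
    """C ∩ D as the union over all pairs (i, j) of A_i ∩ B_j; each polytope is a pair
    (H, k) and intersecting two polytopes just stacks their inequality descriptions."""
    pieces = []
    for (Hi, ki) in polyC:
        for (Hj, kj) in polyD:
            H = np.vstack([Hi, Hj]); k = np.concatenate([ki, kj])
            if not is_empty(H, k):
                pieces.append((H, k))
    return pieces            # the intersection, again a (possibly empty) polygon

if __name__ == "__main__":
    box = lambda lo, hi: (np.vstack([np.eye(2), -np.eye(2)]),
                          np.concatenate([hi, -lo]))
    C = [box(np.array([0, 0]), np.array([2, 2])), box(np.array([3, 0]), np.array([5, 2]))]
    D = [box(np.array([1, 1]), np.array([4, 3]))]
    print(len(polygon_intersection(C, D)))   # two non-empty pieces
```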

Appendix B

Constrained Control Toolbox

A number of MATLAB functions has been developed. These routines carry out computations related to the results in this thesis. The aim is to gradually make these routines available on-line. A set of toolboxes has been developed. The most relevant sets of MATLAB functions are:

⊲ Set Invariance and Reachability Analysis:

• Geometric Computations with Polygons (Pontryagin Difference, Symmetric


and Set Difference, Minkowski Addition, etc...),
• Standard Set Invariance Computations (Minimal and maximal robust control/positively invariant sets, etc...),
• Invariant Approximations of Invariant Sets,
• Optimized Invariance,
• Abstract Invariance,
• Robust Control Invariant Sets Computations (Linear System, Piecewise affine
discrete systems, Discrete time systems with state input dependent distur-
bances),

⊲ Robust Model Predictive Control:

• Linear Systems – Simple Robust Control Invariant Tubes,


• Linear Systems – State – Control Tubes and Convex Interpolation Policy,
• Linear Systems – Optimized Robust Control Invariant Tubes,
• Linear Systems – additional variations,
• Piecewise Affine Systems – Robust Control Invariant Tubes,

⊲ Parametric Mathematical Programming:

• Constrained Linear Quadratic Regulator (CLQ),


• Optimal Control of Constrained Piecewise Affine Discrete Time Systems with
quadratic cost (NLC),

• Computation of Voronoi Diagrams and Delaunay triangulations,
• (Robust) Time optimal controllers,

⊲ Additional Set of Files

• H∞ and CLQ DP recursions,


• Open loop optimal control problems for linear and piecewise affine discrete
time systems,
• State estimation routines,
• Various set of additional functions necessary for the computations

It remains to improve and additionally test the developed functions so that they can
form an appropriate toolbox for robust control of constrained discrete time systems. An
α version of the toolbox should be made available on–line in the first months of 2006.

Bibliography

[ABC03] T. Alamo, J.M. Bravo, and E.F. Camacho. Guaranteed state estimation by
zonotopes. In Proc. 42nd IEEE Conference on Decision and Control, Maui,
Hawaii, USA, December 2003.

[ABQ+ 99] Frank Allgöwer, Thomas A. Badgwell, Joe S. Qin, James B. Rawlings,
and Stephen J. Wright. Nonlinear predictive control and moving horizon
estimation - an introductory overview. In Paul M. Frank, editor, Advances
in Control: highlights of ECC’99, pages 391–449, London, 1999. Springer.

[AF90] J. P. Aubin and H. Frankowska. Set-Valued Analysis. Systems & Control:


Foundations & Applications. Birkhauser, Boston, Basel, Berlin, 1990.

[All93] J. C. Allwright. On min-max model-based predictive control. In Proceedings


of Oxford Symposium on Advances in Model Based Predictive Control, pages
4153–426, Oxford, 1993.

[AMN+ 98] S. Arya, D.M. Mount, N.S. Netanyahu, R. Silverman, and A.Y. Wu. An
optimal algorithm for approximate nearest neighbor searching fixed dimen-
sions. Journal of the ACM, 45(6):891–923, 1998.

[AP92] J. C. Allwright and G. C. Papavasiliou. On linear programming and ro-


bust model-predictive control using impulse-repsonses. Systems & Control
Letters, 18:159–164, 1992.

[Aub77] J. P. Aubin. Applied Abstract Analysis. Pure & Applied Mathematics.


Wiley-Interscience, 1977.

[Aub91] J. P. Aubin. Viability theory. Systems & Control: Foundations & Applica-
tions. Birkhauser, Boston, Basel, Berlin, 1991.

[Aur87] F. Aurenhammer. A criterion for the affine equivalence of cell complexes in


Rd and convex polyhedra in Rd+1 . Discrete and Computational Geometry,
2:49–64, 1987.

[Aur91] F. Aurenhammer. Voronoi diagrams – a survey of a fundamental geometric


data structure. ACM Computing Surveys, 23(3):345–405, September 1991.

[AZ99] F. Allgöwer and A. Zheng. Model Predictive Control: Assessment and Fu-
ture Directions. Springer Verlag, 1999. Proceedings of International Work-
shop on Model Predictive Control, Ascona, 1998.

[Bas91] T. Basar. Optimum performance levels for minimax filters, predictors and
smoothers. Systems and Control Letters, 16:309–317, 1991.

[BBBG02] Roscoe A. Bartlett, Lorenz T. Biegler, Johan Bakstrom, and Vipin Gopal.
Quadratic programming algorithms for large-scale model predictive control.
Journal of Process Control, 12:775–795, 2002.

[BBBM01] F. Borrelli, M. Baotić, A. Bemporad, and M. Morari. Efficient on-line com-


putation of constrained optimal control. In Proceedings of the 40th IEEE
Conference on Decision and Control, pages 1187–1192, Orlando, Florida,
December 2001.

[BBM00a] A. Bemporad, F. Borrelli, and M. Morari. The explicit solution of con-


strained LP-based receding horizon control. In Proceedings of the 39th
IEEE Conference on Decision and Control, page 632, Sydney, December
2000.

[BBM00b] A. Bemporad, F. Borrelli, and M. Morari. Piecewise linear optimal con-


trollers for hybrid systems. In Proceedings of the American Control Con-
ference, pages 1190–1194, Chicago, 2000.

[BBM03a] A. Bemporad, F. Borrelli, and M. Morari. Min-max control of constrained


uncertain discrete-time linear systems. IEEE Trans. Automatic Control,
48(9):1600–1606, September 2003.

[BBM03b] F. Borrelli, A. Bemporad, and M. Morari. Geometric algorithm for multi-


parametric linear programming. Journal of Optimization Theory and Ap-
plications, 118(3):515–540, September 2003.

[Ber71] Dimitri P. Bertsekas. Control of Uncertain Systems with a Set-Membership


Description of the Uncertainty. PhD thesis, M.I.T., 1971.

[Ber72] D.P. Bertsekas. Infinite-time reachability of state-space regions by using


feedback control. IEEE Trans. Automatic Control, AC-17(5):604–613, 1972.

[BGD+ 83] B. Bank, J. Guddat, D. Klatte, B. Kummer, and K. Tammer. Non–linear Parametric Optimization. Birkhauser, Basel, Boston, Stuttgart, 1983.

[BGFB94] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix


Inequalities in System and Control Theory. Studies in Applied Mathematics.
SIAM, 1994.

[BGW90] R. R. Bitmead, M. Gevers, and V. Wertz. Adaptive Optimal Control—The


Thinking Man’s GPC. Prentice Hall Int., 1990.

[Bit88a] G. Bitsoris. On the positive invariance of polyhedral sets for discrete-time
systems. Systems & Control Letters, 11:243–248, 1988.

[Bit88b] G. Bitsoris. Positively invariant polyhedral sets of discrete-time linear sys-


tems. International Journal of Control, 47(6):1713–1726, 1988.

[Bla92] F. Blanchini. Minimum-time control for uncertain discrete-time linear sys-


tems. In Proc. 31st IEEE Conference on Decision and Control, volume 3,
pages 2629–34, Tuczon AZ, USA, December 1992.

[Bla94] F. Blanchini. Ultimate boundedness control for uncertain discrete-time sys-


tems via set-induced Lyapunov functions. IEEE Trans. Automatic Control,
39(2):428–433, 1994.

[Bla99] F. Blanchini. Set invariance in control. Automatica, 35:1747–1767, 1999.


survey paper.

[BM99] A. Bemporad and M. Morari. Robust model predictive control: a survey.


In A. Garulli, A. Tesi, and A. Vicino, editors, Robustness in Identification
and Control. Springer-Verlag, Boston, USA, 1999.

[BMDP02] A. Bemporad, M. Morari, V. Dua, and E.N. Pistikopoulos. The explicit


linear quadratic regulator for constrained systems. Automatica, 38:3–20,
2002.

[Bor02] Francesco Borrelli. Discrete Time Constrained Optimal Control. PhD thesis,
Swiss Federal Instritute of Technology, Zurich, 2002.

[BR71a] D. P. Bertsekas and I. B. Rhodes. Recursive state estimation for a set-


membership description of uncertainty. IEEE Transactions on Automatic
Control, 16:117–128, 1971.

[BR71b] D.P. Bertsekas and I.B. Rhodes. On the minimax reachability of target sets
and target tubes. Automatica, 7:233–247, 1971.

[Bro79] K. Q. Brown. Voronoi diagrams from convex hulls. Inform. Process. Lett.,
9(5):223–228, 1979.

[Bro80] K. Q. Brown. Geometric transformations for fast geometric algorithms.


PhD Thesis. Technical Report CMU-CS-80-101, Department of Comput.
Science, Carnegie-Mellon University, Pittsburgh, PA, 1980.

[CA98a] H. Chen and F. Allgöwer. Nonlinear model predictive control schemes with
guaranteed stability. In R. Berber and C. Kravaris, editors, NATO ASI on
Nonlinear Model Based Process Control, pages 465–494. Kluwer, 1998.

[CA98b] H. Chen and F. Allgöwer. A quasi-infinite predictive control scheme for


constrained nonlinear systems. Automatica, 14(10):1205–1217, 1998.

[CB98] E.F. Camacho and C. Bordons. Model Predictive Control. Springer, Berlin,
1998.

[CE04] G. Calafiore and L. El Ghaoui. Ellipsoidal bounds for uncertain linear


equations and dynamical systems. Automatica, 40(5):773–787, 2004.

[CGZ96] L. Chisci, A. Garulli, and G. Zappa. Recursive state bounding by paral-


lelotpes. Automatica, 32:1049–1055, 1996.

[Che88] F. L. Chernousko. Estimation of the phase state of dynamic system, [In


Russian]. Nauka, Moscow, 1988.

[Che94] F. L. Chernousko. State Estimation of Dynamic systems. CRC Press, Boca


Raton, 1994.

[Che02] F. L. Chernousko. Optimal ellipsoidal estimation of dynamic systems sub-


ject to uncertain disturbances. Cybernetics and Systems Analysis, 38:221–
229, 2002.

[CK77] Ming-Jeh Chien and Ernest S. Kuh. Solving nonlinear resistive networks
using piecewise-linear analysis and simplicial subdivision. CAS–24(6):305–
317, June 1977.

[CKR01] Mark Cannon, Basil Kouvaritakis, and Anthony Rossiter. Efficient active
set optimization in triple mode MPC. IEEE Transactions on Automatic
Control, 46(8):1307–1312, August 2001.

[Cla94] D. W. Clarke. Advances in model predictive control. Oxford Science Publi-


cations, Oxford, U.K., 1994.

[CLM96] L. Chisci, A. Lombardi, and E. Mosca. Dual receding horizon control of


constrained discrete-time systems. European Journal of Control, 2:278–285,
1996.

[CR80] C. R. Cutler and B. L. Ramaker. Dynamic matrix control—a computer


control algorithm. In Proceedings Joint Automatic Control Conference, San
Francisco, California, 1980.

[CRZ01] L. Chisci, J. A. Rossiter, and G. Zappa. Systems with persistent distur-


bances: predictive control with restricted constraints. Automatica, 37:1019–
1028, 2001.

[CS91] D. W. Clarke and R. Scattolini. Constrained receding horizon predictive


control. Proc. IEE, Part D, Control Theory and Applications, 138:347–354,
1991.

[D’A97] P. D’Alessandro. A conical approach to linear programming. Scalar and


vector optimization problems. Gordon and Breach Science Publishers, 1997.

[DBP02] V. Dua, N.A. Bozinis, and E.N. Pistikopoulos. A multiparametric pro-
gramming approach for mixed integer and quadratic engineering problems.
Computers and Chemical Engineering, 26(4-5):715–733, 2002.

[De 94] E. De Santis. On positively invariant sets for discrete-time linear systems
with disturbance: An application of maximal disturbance sets. IEEE Trans-
actions on Automatic Control, 31(1):245–249, 1994.

[De 97] E. De Santis. On invariant sets for constrained discrete-time linear systems
with disturbance and parametric uncertainties. Automatica, 33(11):2033–
2039, 1997.

[De 98] E. De Santis. Invariant sets: A generalization to constrained systems with


state dependent disturbances. In Proc. 37th IEEE Conference on Decision
and Control, pages 622–3, Tampa, Florida, USA, December 1998.

[Del34] B. Delaunay. Sur la sphère vide. A la mémoire de Georges Voronoi. Izv. Akad. Nauk SSSR, 7:793–800, 1934.

[DG99] José A. De Doná and Graham G. Goodwin. Elucidation of the state-space


regions wherein model predictive and anti-windup strategies achieve identi-
cal control policies. Technical Report EE9944, The University of Newcastle,
Australia, 1999.

[DG00] José A. De Doná and Graham G. Goodwin. Elucidation of the state-space


regions wherein model predictive and anti-windup strategies achieve iden-
tical control policies. In Proceedings of the American Control Conference,
pages 1924–1928, Chicago, Illinois, 2000.

[Dir50] G. L. Dirichlet. Über die Reduktion der positiven quadratischen Formen


mit drei unbestimmten ganzen Zahlen. J. Reine Angew. Math., 40:209–227,
1850.

[DMD89] P. D’Alessandro, M. Dalla Mora, and E. De Santis. Techniques of linear


programming based on the theory of convex cones. Optimization, 20:761–
777, 1989.

[DMS96] G. De Nicolao, L. Magni, and R. Scattolini. Stabilizing nonlinear receding


horizon control via a nonquadratic penalty. In Proceedings IMACS Multi-
conference CESA, volume 1, pages 185–187, Lille, France, 1996.

[DMS00] G. De Nicolao, L. Magni, and R. Scattolini. Stability and robustness of


nonlinear model predictive control. In Frank Allgöwer and Alex Zheng,
editors, Nonlinear Model Predictive Control, pages 3–22. Birkhäuser Verlag,
Basle, 2000.

[DP00] V. Dua and E. N. Pistikopoulos. An algorithm for the solution of multipara-
metric mixed integer linear programming problems. Annals of Operations
Research, 99:123–139, 2000.

[ES86] H. Edelsbrunner and R. Seidel. Voronoi diagrams and arrangements. Dis-


crete Computational Geometry, 1:25–44, 1986.

[FIAF03] R. Findeisen, L. Imsland, F. Allgöwer, and B. A. Foss. State and out-


put feedback nonlinear model predictive control: An overview. European
Journal of Control, 9(2–3):190–206, 2003. Survey paper.

[Fin05] R. Findeisen. Nonlinear Model Predictive Control: A Sampled-Data Feed-


back Perspective. Fortschr.-Ber. VDI Reihe 8, VDI Verlag, Düsseldorf, 2005.
To appear.

[Fod01] Michael Fodor. An automobile application of mpc: traction control. Pre-


sented at American Control Conference, Arlington, Virginia, 2001.

[Fon99] F. A. C. C. Fontes. Optimisation–Based control of constrained nonlinear


systems. PhD thesis, Imperial College London, University of London, 1999.

[FR75] W. H. Fleming and R. W. Rishel. Deterministic and stochastic optimal


control. Springer-Verlag, New York, Heidelberg, Berlin, 1975.

[FTMM02] G. Ferrari-Trecate, D. Mignone, and M. Morari. Moving horizon estimation


for hybrid systems. IEEE Transactions on Automatic Control, 47:1663–
1676, 2002.

[Fuk00] K. Fukuda. Polyhedral computation FAQ, 2000. On line document. Both


html and ps versions available from http://www.ifor.math.ethz.ch/
staff/fukuda.

[Gal79] Tomas Gal. Postoptimal Analyses, Parametric Programming and Related


Topics. McGraw-Hill Inc., 1979.

[Gal95] T. Gal. Postoptimal Analyses, Parametric Programming, and Related Top-


ics. de Gruyter, Berlin, 2nd edition, 1995.

[Gay91] J. E. Gayek. A survey of techniques for approximating reachable and con-


trollable set. In Proc. 30th IEEE Conference on Decision and Control, pages
1724–1729, Brighton, England, 1991. Survey paper.

[GKBM04] P. Grieder, M. Kvasnica, M. Baotić, and M. Morari. Low complexity con-


trol of piecewise affine systems with stability guarantee. In Proc. of the
American Control Conference, Boston, USA, June 2004.

[GM86] C. E. Garcı́a and A. M. Morshedi. Quadratic programming solution of


dynamic matrix control (QDMC). Chemical Engineering Communications,
46:73–87, 1986.

[GPM89] C. E. Garcı́a, D. M. Prett, and M. Morari. Model predictive control: Theory
and practice—a survey. Automatica, 25(3):335–348, 1989.

[GPM03] P. Grieder, P. Parillo, and M. Morari. Robust receding horizon control


- analysis & synthesis. In Proceedings of the IEEE 2003 Conference on
Decision and Control, Maui, Hawaii, USA, December 2003.

[Gri04] P. Grieder. Efficient Computation of Feedback Controllers for Constrained


Systems. PhD thesis, Swiss Federal Instritute of Technology, Zurich, 2004.

[GRMM05] Pascal Grieder, Saša V. Raković, Manfred Morari, and David Q. Mayne.
Invariant sets for switched discrete time systems subject to bounded dis-
turbances. In Proceedings of the 16th IFAC World Congress IFAC 2005,
Praha, Czech Republic, July 2005.

[GS55a] S. I. Gass and T. L. Saaty. The computational algorithm for the parametric
objective function. Naval Research Logistics Quarterly, 2:39–45, 1955.

[GS55b] S. I. Gass and T. L. Saaty. The parametric objective function 2. Operations


Research, 3:395–401, 1955.

[GSD03] G.C. Goodwin, M.M Seron, and J. De Dońa. Constrained Control and
Estimation. Springer, Berlin, 2003.

[GT91] E. G. Gilbert and K. T. Tan. Linear systems with state and control con-
straints: the theory and application of maximal output admissible sets.
IEEE Transactions on Automatic Control, AC-36:1008–1020, 1991.

[Gur95] L. Gurvits. Stability of discrete linear inclusion. Linear Algebra and Its
Applications, 231:47 –85, 1995.

[Hah67] W. Hahn. Stability of motions. Springer, Berlin, 1967.

[HO03] K. Hirata and Y. Ohta. ε–feasible approximation of the state reachable set
for discrete time systems. In Proc. 42nd IEEE Conference on Decision and
Control, pages 5520–5525, Maui HI, USA, 2003.

[Jaz70] A. H. Jazwinski. Stochastic processes and filtering theory. Academic Press,


New York, 1970.

[JGR05] C.N. Jones, P. Grieder, and S. V. Raković. A logarithmic – time solution


to the point location problem for closed–form linear mpc. In Proceedings
of the 16th IFAC World Congress IFAC 2005, Praha, Czech Republic, July
2005.

[KA04] E.C. Kerrigan and T. Alamo. A convex parametrization for solving con-
strained min-max problems with a quadratic cost. In Proc. American Con-
trol Conference, Boston, MA, USA, June 2004.

[KBM96] M. V. Kothare, V. Balakrishnan, and M. Morari. Robust constrained model
predictive control using linear matrix inequalities. Automatica, 32(10):1361–
1379, 1996.

[KC01] B. Kouvaritakis and M. Canon. Nonlinear Predictive Control: theory and


practice. The Institution of Electrical Engineers, 2001.

[Ker00] E. C. Kerrigan. Robust Constraint Satisfaction: Invariant Sets and Pre-


dictive Control. PhD thesis, University of Cambridge, 2000. Downloadable
from http://www-control.eng.cam.ac.uk/eck21.

[KF57] A.N. Kolmogorov and S.V. Fomin. Elements of the Theory of Functions
and Functional Analysis. Dover Publications, 1957.

[KF70] A.N. Kolmogorov and S.V. Fomin. Introductory Real Analysis. Dover Pub-
lications, 1970.

[KF93] A. B. Kurzhanski and T. F. Filippova. On the Theory of Trajectory Tubes:


A Mathematical Formalism for Uncertain Dynamics, Viability and Control.
in: Advances in Nonlinear Dynamics and Control: A Report from Russia,
A.B. Kurzhanski, ed., ser. PSCT 17. Birkhauser, Boston, Basel, Berlin,
1993.

[KG87] S.S. Keerthi and E.G. Gilbert. Computation of minimum-time feedback


control laws for discrete-time systems with state-control constraints. IEEE
Trans. Automatic Control, AC-32:432–435, 1987.

[KG88] S. S. Keerthi and E. G. Gilbert. Optimal, infinite horizon feedback laws for
a general class of constrained discrete time systems: Stability and moving-
horizon approximations. Journal of Optimization Theory and Applications,
57:265–293, 1988.

[KG98] I. Kolmanovsky and E. G. Gilbert. Theory and computation of disturbance


invariance sets for discrete-time linear systems. Mathematical Problems in
Engineering: Theory, Methods and Applications, 4:317–367, 1998.

[KGBM03] M. Kvasnica, P. Grieder, M. Baotić, and M. Morari. Multi Parametric


Toolbox (MPT). In Hybrid Systems: Computation and Control, Lecture
Notes in Computer Science. Springer Verlag, 2003. http://control.ee.ethz.ch/∼mpt.

[KLM02] E.C. Kerrigan, J. Lygeros, and J.M. Maciejowski. A geometric approach to


reachability computations for constrained discrete-time systems. In Proc.
15th IFAC World Congress on Automatic Control, Barcelona, Spain, July
2002.

[KM02a] E.C. Kerrigan and D.Q. Mayne. Optimal control of constrained, piecewise
affine systems with bounded disturbances. In Proc. 41st IEEE Conference
on Decision and Control, Las Vegas, Nevada, USA, December 2002.

[KM02b] Eric C. Kerrigan and Jan M. Maciejowski. Robustly stable model predictive
control using a single linear program. Technical Report t.b.c., Cambridge
University Engineering Department, 2002. submitted to International Jour-
nal of Robust and Nonlinear Control.

[KM03a] E. C. Kerrigan and J. M. Maciejowski. On robust optimization and the


optimal control of constrained linear systems with bounded state distur-
bances. In Proc. European Control Conference, Cambridge, UK, September
2003.

[KM03b] Eric C. Kerrigan and Jan M. Maciejowski. Feedback min-max model predic-
tive control using a single linear program: robust stability and the explicit
solution. Technical Report CUED/F-INFENG/TR.440, Department of En-
gineering, University of Cambridge, 2003.

[KM04] E. C. Kerrigan and J. M. Maciejowski. Feedback min-max model predictive


control using a single linear program: Robust stability and the explicit
solution. International Journal of Robust and Nonlinear Control, 14(4):395–
413, March 2004.

[KMV04] A. B. Kurzhanski, I. M. Mitchell, and P. Varaiya. Control synthesis for state


constrained systems and obstacle problems. In Proc. 6th IFAC Symposium
– NOLCOS2004, Stuttgart, Germany, September 2004.

[Kou02] K. I. Kouramas. Control of linear systems with state and control constraints.
PhD thesis, Imperial College of Science, Technology and Medicine, Univer-
sity of London, UK, 2002.

[KRS00] B. Kouvaritakis, J. A. Rossiter, and J. Schuurmans. Efficient robust pre-


dictive control. IEEE Transactions on Automatic Control, 45(8):145–1549,
2000.

[KT02] Christopher M. Kellet and Andrew R. Teel. On robustness of stability and


Lyapunov functions for discontinuous difference equations. In Proceedings
of the 41st IEEE Conference on Decision and Control, Las Vegas, USA,
December 2002.

[Kur77] A. B. Kurzhanski. Control and Observation under conditions of uncertainty,


[In Russian]. Nauka, Moscow, 1977.

[Kur04] A. B. Kurzhanski. Dynamic optimization for nonlinear target control syn-


thesis. In Proc. 6th IFAC Symposium – NOLCOS2004, Stuttgart, Germany,
September 2004.

[KV88] A. B. Kurzhanski and I. Vályi. Set-valued solutions to control problems and
their approximations, volume 111 of Analysis and Optimization of Systems.
Springer-Verlag, 1988.

[KV97] A. Kurzhanski and I. Vályi. Ellipsoidal Calculus for Estimation and Con-
trol. Systems & Control: Foundations & Applications. Birkhauser, Boston,
Basel, Berlin, 1997.

[KV04] A. B. Kurzhanski and P. Varaiya. Ellipsoidal Techniques for Hybrid Dy-


namics: the Reachability Problem. In Proceedings of 16th International
Symposium on Mathematical Theory of Networks and Systems, MTNS2004,
Leuven, Belgium, July 2004.

[Löf03a] J. Löfberg. Approximations of closed-loop minimax MPC. In Proceedings of the IEEE 2003 Conference on Decision and Control, pages 1438–1442, 2003.

[Löf03b] J. Löfberg. Minimax approaches to robust model predictive control. PhD thesis, Department of Electrical Engineering, Linköping University, Linköping, Sweden, 2003.

[Las76] J. P. Lasalle. The Stability of Dynamical Systems. Society For Industrial


And Applied Mathematics, Philadelphia, Pennsylvania, 1976.

[Las87] J. B. Lasserre. A complete characterization of reachable sets for constrained


linear time-varying systems. IEEE Trans. Automatic Control, 32:836–838,
1987.

[Las93] J. B. Lasserre. Reachable, controllable sets and stabilizing control of con-


strained linear systems. Automatica, 29(2):531–536, 1993.

[LCRM04] W. Langson, I. Chryssochoos, S. V. Raković, and D. Q. Mayne. Robust


model predictive control using tubes. Automatica, 40:125–133, 2004.

[Lev94] E. S. Levitin. Pertubation theory in mathematical programming. John Wiley


& Sons, Chichester, New York, Brisbane, Toronto, Singapore, 1994.

[LK00] Young Il Lee and Basil Kouvaritakis. Robust receding horizon control
for systems with uncertain dynamics and input saturation. Automatica,
36:1497–1504, 2000.

[LKC02] Y. I. Lee, B. Kouvaritakis, and M. Cannon. Constrained receding horizon


predictive control for nonlinear systems. Automatica, 38(12), 2002.

[LM67] E. B. Lee and L. Markus. Foundations of Optimal Control Theory. Wiley,


New York, 1967.

[LY97] J. H. Lee and Z. Yu. Worst-case formulations of model predictive control


for systems with bounded parameters. Automatica, 33(5):763–781, 1997.

[Lya66] A. M. Lyapunov. Stability of Motion. Academic Press, New York, 1966.

[Lya92] A.M. Lyapunov. The general problem of the stability of motion. Taylor &
Francis, 1992.

[MA98] D. Mount and S. Arya. ANN: Library for approximate nearest neighbor
searching, June 1998. http://www.cs.umd.edu/~mount/ANN/.

[Mac02a] J. M. Maciejowski. Predictive Control with Constraints. Prentice Hall, 2002.

[MAC02b] D. Limón Marruedo, T. Álamo, and E. F. Camacho. Stability analysis of


systems with bounded additive uncertainties based on invariant sets: sta-
bility and feasibility of MPC. In Proceedings American Control Conference,
pages 364–369, Anchorage, Alaska, May 2002.

[May79] P. S. Maybeck. Stochastic models, estimation, and control, volume 141 of
Mathematics in Science and Engineering. Academic Press, 1979.

[May95] D. Q. Mayne. Optimization in model based control. In Proceedings of


the IFAC symposium on dynamics and control chemical reactors and batch
processes (Dycord+’95), Helsingor, Denmark, pages 229–242. Elsevier Sci-
ence, Oxford, 7–9 June 1995. Plenary address.

[May97] D. Q. Mayne. Nonlinear model predictive control: an assessment. In Jef-


frey C. Kantor, Carlos E. Garcı́a, and Brice Carnahan, editors, Chemical
Process Control-V: Assessment and New Directions for Research, Proceed-
ings of the Fifth International Conference on Chemical Process Control,
volume 93, pages 217–231. American Institute of Chemical Engineers Sym-
posium Series, 1997. Plenary address.

[May01] D.Q. Mayne. Control of constrained dynamic systems. European Journal


of Control, 7:87–99, 2001. Survey paper.

[MDSA03] L. Magni, G. De Nicolao, R. Scattolini, and F. Allgöwer. Robust model


predictive control of nonlinear discrete-time systems. International Journal
of Robust and Nonlinear Control, 13:229–246, 2003.

[MFTM00] D. Mignone, G. Ferrari-Trecate, and M. Morari. Stability and stabilization


of piecewise affine and hybrid systems: An LMI approach. In Proc. of the
IEEE 2000 Control and Decision Conference, December 2000.

[MHER95] E. S. Meadows, M. A. Henson, J. W. Eaton, and J. B. Rawlings. Receding


horizon control and discontinuous state feedback stabilization. International
Journal of Control, 62:1217–1229, 1995.

[ML01] D. Q. Mayne and W. Langson. Robustifying model predictive control of


constrained linear systems. Electronics Letters, 37:1422–1423, 2001.

[MLZ90] E. Mosca, J. M. Lemos, and J. Zhang. Stabilizing I/O receding horizon
control. In Proceedings 29th IEEE Conference on Decision and Control,
pages 2518–2523, Honolulu, 1990.

[MM90] D. Q. Mayne and H. Michalska. Receding horizon control of non-linear


systems. IEEE Transactions on Automatic Control, 35(5):814–824, 1990.

[MM93] H. Michalska and D. Q. Mayne. Robust receding horizon control of con-


strained nonlinear systems. IEEE Transactions on Automatic Control,
38:1623–1632, 1993.

[MNv01] L. Magni, H. Nijmeijer, and A. van der Schaft. A receding horizon approach
to the nonlinear H∞ problem. Automatica, 37(3):429–435, 2001.

[Mos94] E. Mosca. Optimal, predictive and adaptive control. Information and System
Science Series. Prentice Hall, Englewoods Cliffs, New Jersey, 1994.

[MQV02] Roderı́c Moitié, Marc Quincampoix, and Vladimir M. Veliov. Optimal con-
trol of discrete-time uncertain systems with imperfect measurement. IEEE
Transactions on Automatic Control, 47(11):1909–1914, November 2002.

[MR93] K. R. Muske and J. B. Rawlings. Model predictive control with linear


models. AIChE Journal, 39(2):262–287, 1993.

[MR02] D. Q. Mayne and S. Raković. Optimal control of constrained piecewise


affine discrete-time systems using reverse transformation. In Proceedings
of the IEEE 2002 Conference on Decision and Control, Las Vegas, USA,
December 2002.

[MR03a] D. Q. Mayne and S. Raković. Model predictive control of constrained piece-


wise affine discrete-time systems. International Journal of Robust and Non-
linear Control, 13:261–279, 2003.

[MR03b] D. Q. Mayne and S. Raković. Optimal control of constrained piecewise


affine discrete-time systems. Journal of Computational Optimization and
Applications, 25:167–191, 2003.

[MRRS00] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained


model predictive control: Stability and optimality. Automatica, 36:789–814,
2000. Survey paper.

[MRVK04] D. Q. Mayne, S. V. Raković, R. B. Vinter, and E. C. Kerrigan. Char-
acterization of the solution to a constrained H∞ optimal control problem.
Automatica, 2004. Submitted.

[MS97a] L. Magni and R. Sepulchre. Stability margins of nonlinear receding-horizon


control via inverse optimality. Systems & Control Letters, 32:241–245, 1997.

[MS97b] D. Q. Mayne and W. R. Schroeder. Robust time-optimal control of con-
strained linear systems. Automatica, 33:2103–2118, 1997.

[MSR05] D. Q. Mayne, M. Seron, and S. V. Raković. Robust model predictive con-


trol of constrained linear systems with bounded disturbances. Automatica,
41:219–224, 2005.

[MV91] M. Milanese and A. Vicino. Optimal estimation theory for dynamic systems
with set membership uncertainty: An overview. Automatica, 27:997–1009,
1991.

[MZ92] E. Mosca and J. Zhang. Globally convergent predictive adaptive control. In


Proceedings 1st European Control Conference, pages 2172–2174, Grenoble,
France, 1992.

[NK91] K. Nagpal and P. Khargonekar. Filtering and smoothing in an H∞ setting.
IEEE Transactions on Automatic Control, 36:152–166, 1991.

[OBSC00] Atsuyuki Okabe, Barry Boots, Kokichi Sugihara, and Sung Nok Chiu. Spa-
tial Tessellations: Concepts and Applications of Voronoi Diagrams. John
Wiley & Sons, second edition, 2000.

[PBG88] M. A. Poubelle, R. R. Bitmead, and M. Gevers. Fake algebraic Riccati


techniques and stability. IEEE Transactions on Automatic Control, AC–
31:379–381, 1988.

[PG80] D. M. Prett and R. D. Gillette. Optimization and constrained multivariable


control of a catalytic cracking unit. In Proceedings of the Joint Automatic
Control Conference, pages WP5–C, San Francisco, CA, 1980.

[PGM98] A. Pertsinidis, I. E. Grossmann, and G. J. McRae. Parametric optimization
of MILP programs and a framework for the parametric optimization of
MINLPs. Computers and Chemical Engineering, 22, Supplement S205,
1998.

[Pol97] E. Polak. Optimization: Algorithms and Consistent Approximations.


Springer-Verlag, New York, 1997. ISBN 0-387-94971-2.

[Pro63] A. I. Propoi. Use of linear programming methods for synthesizing sampled-


data automatic systems. Automation and Remote Control, 24(7):837–844,
1963.

[PWR03] G. Pannocchia, S. J. Wright, and J. B. Rawlings. Existence and compu-


tation of infinite horizon model predictive control with active steady-state
input constraints. IEEE Trans. Automatic Control, 48(6):1002–1006, 2003.

[PY78a] A. I. Propoi and A. B. Yadykin. Parametric quadratic and linear program-
ming I. Automation and Remote Control, 2:241–251, 1978.

[PY78b] A. I. Propoi and A. B. Yadykin. Parametric quadratic and linear program-
ming II. Automation and Remote Control, 4:578–586, 1978.

[QB97] S. J. Qin and T. A. Badgwell. An overview of industrial model predictive


control technology. In Jeffrey C. Kantor, Carlos E. Garcı́a, and Brice Carna-
han, editors, Fifth International Conference on Chemical Process Control,
pages 232–256. CACHE, AIChE, 1997.

[QB00] S. J. Qin and T. A. Badgwell. An overview of nonlinear model predictive


control applications. In Frank Allgöwer and Alex Zheng, editors, Nonlinear
Model Predictive Control, pages 369–392. Birkhäuser Verlag, Basle, 2000.

[QDG02] D. E. Quevedo, J. A. De Doná, and G. C. Goodwin. Receding horizon linear
quadratic control with finite input constraint sets. In 15th IFAC World
Congress, Barcelona, Spain, 2002.

[QV02] M. Quincampoix and V. M. Veliov. Solution tubes to differential inclusions


within a collection of sets. Control and Cybernetics, 31(3), 2002.

[Rak04] S. V. Raković. Optimized robustly controlled invariant sets for constrained
linear discrete-time systems. Technical Report EEE/C&P/SVR/5/2004,
Imperial College London, downloadable from http://www2.ee.ic.ac.uk/
cap/cappp/projects/11/reports.htm, 2004.

[RG04] S. V. Raković and P. Grieder. Approximations and properties of the dis-
turbance response set of PWA systems. Technical Report AUT04-02, ETH
Zürich, February 2004.

[RGJ04] S. V. Raković, P. Grieder, and C. N. Jones. Computation of Voronoi dia-
grams and Delaunay triangulation via parametric linear programming. Tech-
nical Report AUT04-03, ETH Zürich, May 2004.

[RGK+ 04] S. V. Raković, P. Grieder, M. Kvasnica, D. Q. Mayne, and M. Morari. Com-


putation of invariant sets for piecewise affine discrete time systems subject
to bounded disturbances. In Proceedings of the IEEE 2004 Conference on
Decision and Control, Paradise Island, Bahamas, December 2004.

[RKKM03] S. V. Raković, E. C. Kerrigan, K. I. Kouramas, and D. Q. Mayne. Approx-
imation of the minimal robustly positively invariant set for discrete-time
LTI systems with persistent state disturbances. In Proceedings of the IEEE
2003 Conference on Decision and Control, Maui, Hawaii, USA, December
2003.

[RKKM04a] S. V. Raković, E. C. Kerrigan, K. I. Kouramas, and D. Q. Mayne. In-


variant approximations of robustly positively invariant sets for constrained

linear discrete-time systems subject to bounded disturbances. Technical Re-
port CUED/F-INFENG/TR.473, Department of Engineering, University of
Cambridge, Trumpington Street, CB2 1PZ Cambridge, UK, January 2004.

[RKKM04b] S. V. Raković, E.C. Kerrigan, K.I. Kouramas, and D. Q. Mayne. Invariant


approximations of the minimal robustly positively invariant sets. IEEE
Trans. Automatic Control, 2004. In press.

[RKM03] Saša V. Raković, Eric C. Kerrigan, and David Q. Mayne. Reachability


computations for constrained discrete-time systems with state- and input-
dependent disturbances. In Proceedings of the IEEE 2003 Conference on
Decision and Control, Maui, Hawaii, USA, December 2003.

[RKM04] S. V. Raković, E. C. Kerrigan, and D. Q. Mayne. Optimal control of con-
strained piecewise affine systems with state- and input-dependent distur-
bances. In Proceedings of the 16th International Symposium on Mathemat-
ical Theory of Networks and Systems (MTNS2004), Leuven, Belgium, July
2004.

[RKML05] S. V. Raković, E. C. Kerrigan, D. Q. Mayne, and J. Lygeros. Reachability
analysis of discrete time systems with disturbances. IEEE Trans. Automatic
Control, 2005. Submitted.

[RKR98] J. A. Rossiter, B. Kouvaritakis, and M. J. Rice. A numerically robust


state-space approach to stable-predictive control strategies. Automatica,
34(1):65–73, 1998.

[RM04a] S. V. Raković and D. Q. Mayne. Robust model predictive control of con-


strained piecewise affine discrete time systems. In Proceedings of the 6th
IFAC Symposium on nonlinear control systems – NOLCOS2004, Stuttgart,
Germany, September 2004.

[RM04b] S. V. Raković and D. Q. Mayne. State estimation for piecewise affine,


discrete time systems with bounded disturbances. In Proceedings of the
IEEE 2004 Conference on Decision and Control, Paradise Island, Bahamas,
December 2004.

[RM05a] S. V. Raković and D. Q. Mayne. Regulation of discrete-time systems
with positive state and control constraints and bounded disturbances. In
Proceedings of the 16th IFAC World Congress IFAC 2005, Praha, Czech
Republic, July 2005.

[RM05b] S. V. Raković and D. Q. Mayne. A simple tube controller for efficient robust
model predictive control of constrained linear discrete time systems subject
to bounded disturbances. In Proceedings of the 16th IFAC World Congress
IFAC 2005, Praha, Czech Republic, July 2005. Invited Session.

[RMKK05] S. V. Raković, D. Q. Mayne, E. C. Kerrigan, and K. I. Kouramas. Optimized
robust control invariant sets for constrained linear discrete-time systems.
In Proceedings of the 16th IFAC World Congress IFAC 2005, Praha, Czech
Republic, July 2005.

[Ros03] J. A. Rossiter. Model-Based Predictive Control: A Practical Approach. CRC


Press, 2003.

[RR99] C. V. Rao and J. B. Rawlings. Steady states and constraints in model
predictive control. AIChE Journal, 45(6):1266–1278, 1999.

[RRTP76] J. Richalet, A. Rault, J. L. Testud, and J. Papon. Algorithmic control of


industrial processes. In Proceedings of the 4th IFAC Symposium on Identi-
fication and System Parameter Estimation, pages 1119–1167, 1976.

[RRTP78] J. Richalet, A. Rault, J. L. Testud, and J. Papon. Model predictive heuristic


control: applications to industrial processes. Automatica, 14:413–428, 1978.

[RW98] R.T. Rockafellar and R.J-B. Wets. Variational Analysis. Springer-Verlag,


1998.

[Ryb99] K. Rybnikov. Stresses and liftings of cell complexes. Discrete and Compu-
tational Geometry, 21(4):481–517, June 1999.

[Sch68] F. C. Schweppe. Recursive state estimation: Unknown but bounded errors


and system inputs. IEEE Transactions on Automatic Control, 13(1):22–28,
February 1968.

[Sch73] F. C. Schweppe. Uncertain Dynamic Systems. Prentice Hall, Englewood


Cliffs, NJ, 1973.

[Sch87] M. Schechter. Polyhedral functions and multiparametric linear program-


ming. Journal of Optimization Theory and Applications, 53(2):269–280,
May 1987.

[Sch93] R. Schneider. Convex bodies: The Brunn-Minkowski theory, volume 44 of


Encyclopedia of Mathematics and its Applications. Cambridge University
Press, Cambridge, England, 1993.

[SDG00] María M. Seron, José A. De Doná, and Graham C. Goodwin. Global an-
alytical model predictive control with input constraints. In Proceedings of
the 39th IEEE Conference on Decision and Control, pages 154–159, Sydney,
Australia, December 2000.

[SDPP02] V. Sakizlis, V. Dua, J. D. Perkins, and E. N. Pistikopoulos. The explicit


control law for hybrid systems via parametric programming. In Proceedings
of the American Control Conference, pages 674–679, Anchorage, Alaska,
2002.

[Ser88] J. Serra. Image Analysis and Mathematical Morphology, Vol II: Theoretical
advances. Academic Press, 1988.

[SGD00] María M. Seron, Graham C. Goodwin, and José A. De Doná. Geometry of
model predictive control for constrained linear systems. Technical Report
EE0031, The University of Newcastle, Australia, 2000.

[SM98] P. O. M. Scokaert and D. Q. Mayne. Min-max feedback model predictive


control for constrained linear systems. IEEE Transactions on Automatic
Control, 43(8):1136–1142, August 1998.

[Smi04] R. S. Smith. Robust model predictive control of constrained linear systems.
In Proc. American Control Conference, pages 245–250, 2004.

[SMR99] P. O. M. Scokaert, D. Q. Mayne, and J. B. Rawlings. Suboptimal model


predictive control (feasibility implies stability). IEEE Transactions on Au-
tomatic Control, 44(3):648–654, March 1999.

[SR95] P. O. M. Scokaert and J. B. Rawlings. Stability of model predictive control


under perturbations. In Proceedings of the IFAC symposium on nonlinear
control systems design, Lake Tahoe, California, June 1995.

[SR00] J. Schuurmans and J. A. Rossiter. Robust model predictive control using


tight sets of predicted states. Proceedings of the IEE, 147(1):13–18, January
2000.

[Tan91] K. T. Tan. Maximal output admissible sets and the nonlinear control of
linear discrete-time systems with state and control constraints. PhD thesis,
University of Michigan, 1991.

[TJB03a] P. Tøndel, T. A. Johansen, and A. Bemporad. An algorithm for multi-


parametric quadratic programming and explicit MPC solutions. Automat-
ica, 39(3):489–497, 2003.

[TJB03b] P. Tøndel, T. A. Johansen, and A. Bemporad. Computation of piecewise


affine control via binary search tree. Automatica, 39(5):945–950, 2003.

[VEB00] A. Vladimirov, L. Elsner, and W.-J. Beyn. Stability and paracontractivity
of discrete linear inclusion. Linear Algebra and Its Applications, 312:125–
134, 2000.

[Ver03] S. M. Veres. Geometric Bounding Toolbox (GBT 7.2) for Matlab. Official
website: http://sysbrain.com/gbt/, 2003.

[vHB03] D. H. van Hessem and O. H. Bosgra. A full solution to the constrained sto-
chastic closed-loop MPC problem via state and innovations feedback and its
receding horizon implementation. In Proceedings of IEEE 2003 Conference
on Decision and Control, pages 929–934, 2003.

[VM91] A. Vicino and M. Milanese. Optimal inner bounds of feasible parameter set
in linear estimation with bounded noise. IEEE Transactions on Automatic
Control, 36:759–763, 1991.

[Vor08] G. Voronoi. Nouvelles applications des paramètres continus à la théorie des


formes quadratiques. deuxième mémoire: recherches sur les paralléloedres
primitifs. J. Reine Angew. Math., 134:198–287, 1908.

[Vor09] G. Voronoi. Deuxième mémoire: recherches sur les paralléloedres primitifs.


J. Reine Angew. Math., 136:67–181, 1909.

[VSS+ 01] R. Vidal, S. Schaffert, O. Shakernia, J. Lygeros, and S. Sastry. Decidable


and semi-decidable controller synthesis for classes of discrete time hybrid
systems. In Proc. 40th IEEE Conference on Decision and Control, Orlando,
Florida, USA, December 2001.

[Wit68] H. S. Witsenhausen. Sets of possible states of linear systems given perturbed


observations. IEEE Transactions on Automatic Control, pages 556–558,
October 1968.

[YB02] Jun Yan and Robert Bitmead. Model predictive control and state estima-
tion. In Proceedings of the 15th IFAC Conference, Barcelona, Spain, July
2002. IFAC.

[Zie94] G. M. Ziegler. Lectures on Polytopes. Springer, 1994.

[ZM93] A. Zheng and M. Morari. Robust stability of constrained model predictive


control. In Proceedings of the 1993 American Control Conference, pages
379–383, 1993.

