Robust Constrained Model Predictive Control Using Linear Matrix Inequalities (1996)

Key Words—Model predictive control; linear matrix inequalities; convex optimization; multivariable control systems; state feedback; on-line operation; robust control; robust stability; time-varying systems.
Abstract—The primary disadvantage of current design techniques for model predictive control (MPC) is their inability to deal explicitly with plant model uncertainty. In this paper, we present a new approach for robust MPC synthesis that allows explicit incorporation of the description of plant uncertainty in the problem formulation. The uncertainty is expressed in both the time and frequency domains. The goal is to design, at each time step, a state-feedback control law that minimizes a 'worst-case' infinite horizon objective function, subject to constraints on the control input and plant output. Using standard techniques, the problem of minimizing an upper bound on the 'worst-case' objective function, subject to input and output constraints, is reduced to a convex optimization involving linear matrix inequalities (LMIs). It is shown that the feasible receding horizon state-feedback control design robustly stabilizes the set of uncertain plants. Several extensions, such as application to systems with time delays, problems involving constant set-point tracking, trajectory tracking and disturbance rejection, which follow naturally from our formulation, are discussed. The controller design is illustrated with two examples. Copyright © 1996 Elsevier Science Ltd.

* Received 23 March 1995; revised 5 October 1995; received in final form 5 February 1996. This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Y. Yamamoto under the direction of Editor Ruth F. Curtain. Corresponding author Professor Manfred Morari. Tel. +41 1 632 7626; Fax +41 1 632 1211; E-mail [email protected].
† Chemical Engineering, 210-41, California Institute of Technology, Pasadena, CA 91125, U.S.A.
‡ School of Electrical Engineering, Purdue University, West Lafayette, IN 47907-1285, U.S.A. This work was initiated when this author was affiliated with Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125, U.S.A.
§ Institut für Automatik, Swiss Federal Institute of Technology (ETH), Physikstrasse 3, ETH-Zentrum, 8092 Zürich, Switzerland.

1. INTRODUCTION

Model predictive control (MPC), also known as moving horizon control (MHC) or receding horizon control (RHC), is a popular technique for the control of slow dynamical systems, such as those encountered in chemical process control in the petrochemical, pulp and paper industries, and in gas pipeline control. At every time instant, MPC requires the on-line solution of an optimization problem to compute optimal control inputs over a fixed number of future time instants, known as the 'time horizon'. Although more than one control move is generally calculated, only the first one is implemented. At the next sampling time, the optimization problem is reformulated and solved with new measurements obtained from the system. The on-line optimization can typically be reduced to either a linear program or a quadratic program. Using MPC, it is possible to handle inequality constraints on the manipulated and controlled variables in a systematic manner during the design and implementation of the controller. Moreover, several process models as well as many performance criteria of significance to the process industries can be handled using MPC. A fairly complete discussion of several design techniques based on MPC and their relative merits and demerits can be found in the review article by Garcia et al. (1989).
M. V. Kothare et al.
Perhaps the principal shortcoming of existing MPC-based control techniques is their inability to explicitly incorporate plant model uncertainty. Thus nearly all known formulations of MPC minimize, on-line, a nominal objective function, using a single linear time-invariant (LTI) model to predict the future plant behaviour. Feedback, in the form of the plant measurement at the next sampling time, is expected to account for plant model uncertainty. Needless to say, such control systems that provide 'optimal' performance for a particular model may perform very poorly when implemented on a physical system that is not exactly described by the model (see e.g. Zheng and Morari, 1993). Similarly, the extensive literature on stability analysis of MPC algorithms is by and large restricted to the nominal case, with no plant-model mismatch (Garcia and Morari, 1982; Clarke et al., 1987; Clarke and Mohtadi, 1989; Zafiriou, 1990; Zafiriou and Marchal, 1991; Tsirukis and Morari, 1992; Muske and Rawlings, 1993; Rawlings and Muske, 1993; Zheng et al., 1995); the issue of the behavior of MPC algorithms in the face of uncertainty, i.e. 'robustness', has been addressed to a much lesser extent. Broadly, the existing literature on robustness in MPC can be summarized as follows.

• Analysis of robustness properties of MPC. Garcia and Morari (1982, 1985a, b) have analyzed the robustness of unconstrained MPC in the framework of internal model control (IMC), and have developed tuning guidelines for the IMC filter to guarantee robust stability. Zafiriou (1990) and Zafiriou and Marchal (1991) have used the contraction properties of MPC to develop necessary/sufficient conditions for robust stability of MPC with input and output constraints. Given upper and lower bounds on the impulse response coefficients of a single-input single-output (SISO) plant with a finite impulse response (FIR), Genceli and Nikolaou (1993) have presented a robustness analysis of constrained l1-norm MPC algorithms. Polak and Yang (1993a, b) have analyzed robust stability of their MHC algorithm for continuous-time linear systems with variable sampling times by using a contraction constraint on the state.

• MPC with explicit uncertainty description. The basic philosophy of MPC-based design algorithms that account explicitly for plant uncertainty is the following (Campo and Morari, 1987; Allwright and Papavasiliou, 1992; Zheng and Morari, 1993): modify the on-line constrained minimization problem to a min-max problem (minimizing the worst-case value of the objective function, where the worst case is taken over the set of uncertain plants). Based on this concept, Campo and Morari (1987), Allwright and Papavasiliou (1992) and Zheng and Morari (1993) have presented robust MPC schemes for SISO FIR plants, given uncertainty bounds on the impulse response coefficients. For certain choices of the objective function, the on-line problem is shown to be reducible to a linear program. One of the problems with this linear programming approach is that, to simplify the on-line computational complexity, one must choose simplistic, albeit unrealistic, model uncertainty descriptions, e.g. fewer FIR coefficients. Secondly, this approach cannot be extended to unstable systems.

From the preceding review, we see that there has been progress in the analysis of robustness properties of MPC. But robust synthesis, i.e. the explicit incorporation of a realistic plant uncertainty description in the problem formulation, has been addressed only in a restrictive framework for FIR models. There is a need for computationally inexpensive techniques for robust MPC synthesis that are suitable for on-line implementation and that allow incorporation of a broad class of model uncertainty descriptions.

In this paper, we present one such MPC-based technique for the control of plants with uncertainties. This technique is motivated by recent developments in the theory and application (to control) of optimization involving linear matrix inequalities (LMIs) (Boyd et al., 1994). There are two reasons why LMI optimization is relevant to MPC. First, LMI-based optimization problems can be solved in polynomial time, often in times comparable to that required for the evaluation of an analytical solution for a similar problem. Thus LMI optimization can be implemented on-line. Secondly, it is possible to recast much of existing robust control theory in the framework of LMIs. The implication is that we can devise an MPC scheme where, at each time instant, an LMI optimization problem (as opposed to conventional linear or quadratic programs) is solved that incorporates input and output constraints and a description of the plant uncertainty, and guarantees certain robustness properties.

The paper is organized as follows. In Section 2 we discuss background material such as models of systems with uncertainties, LMIs and MPC. In Section 3, we formulate the robust unconstrained MPC problem with state feedback as an LMI problem. We then extend the formulation to incorporate input and output constraints, and show that the feasible receding horizon control law that we obtain is robustly stabilizing. In Section 4, we extend our formulation to systems with time delays and to problems involving trajectory tracking, constant set-point tracking and disturbance rejection. In Section 5, we present two examples to illustrate
the design procedure. Finally, in Section 6, we present concluding remarks.

2. BACKGROUND

2.1. Models for uncertain systems
We present two paradigms for robust control, which arise from two different modeling and identification procedures. The first is a 'multi-model' paradigm, and the second is the more popular 'linear system with a feedback uncertainty' robust control model. Underlying both these paradigms is a linear time-varying (LTV) system

x(k+1) = A(k)x(k) + B(k)u(k),
y(k) = Cx(k),                                  (1)
[A(k) B(k)] ∈ Ω,

where u(k) ∈ R^{n_u} is the control input, x(k) ∈ R^{n_x} is the state of the plant, y(k) ∈ R^{n_y} is the plant output, and Ω is some prespecified set.

Polytopic or multi-model paradigm. For polytopic systems, the set Ω is the polytope

Ω = Co{[A_1 B_1], [A_2 B_2], ..., [A_L B_L]},    (2)

where Co denotes the convex hull. In other words, if [A B] ∈ Ω then, for some nonnegative λ_1, λ_2, ..., λ_L summing to one, we have

[A B] = Σ_{i=1}^{L} λ_i [A_i B_i].

... FIR plants can be translated to a polytopic uncertainty description on the state-space matrices. Thus this polytopic uncertainty description is suitable for several problems of engineering significance.

Structured feedback uncertainty. A second, more common, paradigm for robust control consists of an LTI system with uncertainties or perturbations appearing in the feedback loop (see Fig. 1b):

x(k+1) = Ax(k) + Bu(k) + B_p p(k),
y(k) = Cx(k),
q(k) = C_q x(k) + D_{qu} u(k),                  (3)
p(k) = (Δq)(k).

The operator Δ is block-diagonal:

Δ = diag(Δ_1, Δ_2, ..., Δ_r),                   (4)

with Δ_i : R^{n_i} → R^{n_i}. Δ can represent either a memoryless time-varying matrix with

‖Δ_i(k)‖ = σ̄(Δ_i(k)) ≤ 1,   i = 1, 2, ..., r,   k ≥ 0,

or a convolution operator (e.g. a stable LTI dynamical system) with the operator norm induced by the truncated l2-norm less than 1.

[Fig. 1]
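The polytopic model (1), (2) is easy to exercise numerically. The following Python sketch uses illustrative vertex matrices and a hand-picked deadbeat-like gain (none of these values come from the paper); at every step it draws [A(k) B(k)] as a random convex combination of the vertices, which is exactly membership in Ω as defined by (2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Vertices of the polytope Omega = Co{[A1 B1], [A2 B2]} (illustrative values,
# not taken from the paper).
A_vertices = [np.array([[1.0, 0.1], [0.0, 0.9]]),
              np.array([[1.0, 0.1], [0.0, 1.1]])]
B_vertices = [np.array([[0.0], [0.10]]),
              np.array([[0.0], [0.12]])]

def sample_plant(rng):
    """Draw [A(k) B(k)] from Omega as a random convex combination of vertices."""
    lam = rng.dirichlet(np.ones(len(A_vertices)))  # nonnegative, sums to one
    A = sum(l * Av for l, Av in zip(lam, A_vertices))
    B = sum(l * Bv for l, Bv in zip(lam, B_vertices))
    return A, B

# Simulate x(k+1) = A(k) x(k) + B(k) u(k) under a fixed state feedback u = F x.
# F below is a hand-chosen gain (deadbeat for vertex 1), purely illustrative.
F = np.array([[-100.0, -19.0]])
x = np.array([[1.0], [0.0]])
for k in range(50):
    A, B = sample_plant(rng)
    x = A @ x + B @ (F @ x)
print(float(np.linalg.norm(x)))
```

The Dirichlet draw enforces the defining property of the convex hull (nonnegative weights summing to one); any other sampling scheme over the simplex would serve equally well.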
... consider the LTV case, since the results we obtain are identical for the general mixed uncertainty case, with one exception, as pointed out in Section 3.2.2. The details can be found in Boyd et al. (1994, Section 8.2), and will be omitted here for brevity. For the LTV case, it is easy to show through routine algebraic manipulations that the system (3) corresponds to the system (1) with

Ω = {[A + B_p Δ C_q   B + B_p Δ D_{qu}] : Δ satisfies (4) with σ̄(Δ_i) ≤ 1}.    (6)

Δ = 0 (p(k) ≡ 0, k ≥ 0) corresponds to the nominal LTI system.

The issue of whether to model a system as a polytopic system or a system with structured uncertainty depends on a number of factors, such as the underlying physical model of the system, available model identification and validation techniques, etc. For example, nonlinear systems can be modeled either as polytopic systems or as systems with structured perturbations. We shall not concern ourselves with such issues here; instead we shall assume that one of the two models discussed thus far is available.

2.2. Model predictive control
Model predictive control is an open-loop control design procedure where, at each sampling time k, plant measurements are obtained and a model of the process is used to predict future outputs of the system. Using these predictions, m control moves u(k+i|k), i = 0, 1, ..., m−1, are computed by minimizing a nominal cost J_p(k) over a prediction horizon p:

min_{u(k+i|k), i=0,1,...,m−1}  J_p(k),    (7)

subject to constraints on the control input u(k+i|k), i = 0, 1, ..., m−1, and possibly also on the state x(k+i|k), i = 0, 1, ..., p, and the output y(k+i|k), i = 1, 2, ..., p. Here we use the following notation:

x(k+i|k), y(k+i|k): state and output, respectively, at time k+i, predicted based on the measurements at time k; x(k|k) and y(k|k) refer respectively to the state and output measured at time k;
u(k+i|k): control move at time k+i, computed by the optimization problem (7) at time k; u(k|k) is the control move to be implemented at time k;
p: output or prediction horizon;
m: input or control horizon.

It is assumed that there is no control action after time k+m−1, i.e. u(k+i|k) = 0, i ≥ m. In the receding horizon framework, only the first computed control move u(k|k) is implemented. At the next sampling time, the optimization (7) is re-solved with new measurements from the plant. Thus both the control horizon m and the prediction horizon p move or recede ahead by one step as time moves ahead by one step. This is the reason why MPC is also sometimes referred to as receding horizon control (RHC) or moving horizon control (MHC). The purpose of taking new measurements at each time step is to compensate for unmeasured disturbances and model inaccuracy, both of which cause the system output to be different from the one predicted by the model. We assume that exact measurement of the state of the system is available at each sampling time k, i.e.

x(k|k) = x(k).    (8)

Several choices of the objective function J_p(k) in the optimization (7) have been reported (Garcia et al., 1989; Zafiriou and Marchal, 1991; Muske and Rawlings, 1993; Genceli and Nikolaou, 1993) and have been compared in Campo and ...
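The receding-horizon mechanics of Section 2.2 can be sketched in a few lines. The plant, weights and horizon below are illustrative (not from the paper); the nominal finite-horizon quadratic problem (7) is solved by a standard backward Riccati recursion, and only the first move is applied before the horizon recedes:

```python
import numpy as np

# Nominal model (illustrative double-integrator-like plant, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q1 = np.eye(2)          # state weight
R = np.array([[1.0]])   # input weight
m = 10                  # control horizon (= prediction horizon here)

def finite_horizon_gains(A, B, Q1, R, N):
    """Backward Riccati recursion for a nominal finite-horizon LQ problem."""
    P = Q1.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q1 + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]   # gains[i] applies at prediction step i

x = np.array([[1.0], [0.0]])
for k in range(40):
    gains = finite_horizon_gains(A, B, Q1, R, m)
    u = -gains[0] @ x    # only the first computed move is implemented
    x = A @ x + B @ u    # horizon recedes; re-solve at the next sample
print(float(np.linalg.norm(x)))
```

Re-solving at every sample is redundant for this nominal LTI plant (the gains do not change), but it is precisely the step that lets real MPC absorb new measurements, disturbances and model mismatch.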
Note that the output constraints have been ...

We often encounter problems in which the variables are matrices, for example the constraint P > 0, where the entries of P are the optimization variables. In such cases, we shall not write out the LMI explicitly in the form F(x) > 0, but instead make clear which matrices are the variables. The LMI-based problem of central importance to this paper is that of minimizing a linear objective subject to LMI constraints: ...

3.1. Robust unconstrained IH-MPC
The system is described by (1) with the associated uncertainty set Ω (either (2) or (6)). Analogously to the familiar approach from linear robust control, we replace the minimization, at each sampling time k, of the nominal performance objective (given in (7)) by the minimization of a robust performance objective as follows: ...

... performance objective. Thus the goal of our robust MPC algorithm has been redefined: synthesize, at each time step k, a constant state-feedback control law u(k+i|k) = Fx(k+i|k) to minimize this upper bound V(x(k|k)). As is standard in MPC, only the first computed input u(k|k) = Fx(k|k) is implemented. At the next sampling time, the state x(k+1) is measured, and the optimization is repeated to recompute F. The following theorem gives conditions for the existence of the appropriate P > 0 satisfying (16) and the corresponding state feedback matrix F.

Theorem 1. Let x(k) = x(k|k) be the state of the uncertain system (1) measured at sampling time k. Assume that there are no constraints on the control input and plant output.

(a) Suppose that the uncertainty set Ω is defined by a polytope as in (2). Then the state feedback matrix F in the control law u(k+i|k) = Fx(k+i|k), i ≥ 0, that minimizes the upper bound V(x(k|k)) on the robust performance objective function at sampling time k is given by

F = YQ^{-1},    (18)

where Q > 0 and Y are obtained from the solution (if it exists) of the following linear objective minimization problem (this problem is of the same form as the problem (14)):

min_{γ, Q, Y}  γ    (19)

subject to

[ 1        x(k|k)^T ]
[ x(k|k)   Q        ]  ≥ 0,    (20)

and

[ Q              (A_jQ + B_jY)^T   (Q_1^{1/2}Q)^T   (R^{1/2}Y)^T ]
[ A_jQ + B_jY    Q                  0                 0            ]
[ Q_1^{1/2}Q     0                  γI                0            ]
[ R^{1/2}Y       0                  0                 γI           ]  ≥ 0,   j = 1, 2, ..., L.    (21)

(b) Suppose the uncertainty set Ω is defined by a structured norm-bounded perturbation Δ as in (6). In this case, F is given by

F = YQ^{-1},    (22)

where Q > 0 and Y are obtained from the solution (if it exists) of the following linear objective minimization problem with variables γ, Q, Y and Λ:

min_{γ, Q, Y, Λ}  γ    (23)

subject to

[ 1        x(k|k)^T ]
[ x(k|k)   Q        ]  ≥ 0,    (24)

and

[ Q                 Y^TR^{1/2}   QQ_1^{1/2}   QC_q^T + Y^TD_{qu}^T   QA^T + Y^TB^T   ]
[ R^{1/2}Y          γI           0            0                       0               ]
[ Q_1^{1/2}Q        0            γI           0                       0               ]
[ C_qQ + D_{qu}Y    0            0            Λ                       0               ]
[ AQ + BY           0            0            0                       Q − B_pΛB_p^T   ]  ≥ 0,    (25)

where

Λ = diag(λ_1 I_{n_1}, λ_2 I_{n_2}, ..., λ_r I_{n_r}) > 0.    (26)

Proof. See Appendix A. □

... the discrete-time case (Geromel et al., 1991). Part (b) can be derived using the same basic techniques in conjunction with the S-procedure (see Yakubovich, 1992, and the references therein).

Remark 3. Strictly speaking, the variables in the above optimization should be denoted by Q_k, F_k, Y_k etc. to emphasize that they are computed at time k. For notational convenience, we omit the subscript here and in the next section. We shall, however, briefly utilize this notation in the robust stability proof (Theorem 3). Closed-loop stability of the receding horizon state-feedback control law given in Theorem 1 will be established in Section 3.2.

Remark 4. For the nominal case (L = 1, or Δ(k) = 0, p(k) = 0, k ≥ 0), it can be shown that we recover the standard discrete-time linear
quadratic regulator (LQR) solution (see Kwakernaak and Sivan (1972) for the standard LQR solution).

Remark 5. The previous remark establishes that for the nominal case, the feedback matrix F computed from Theorem 1 is constant, independent of the state of the system. However, in the presence of uncertainty, even without constraints on the control input or plant output, F can show a strong dependence on the state of the system. In such cases, using a receding horizon approach and recomputing F at each sampling time gives a significant improvement in performance over a static state-feedback control law. This, we believe, is one of the key ideas in this paper, and is illustrated with the following simple example. Consider the polytopic system (1), Ω being defined by (2) with

A_1 = [ 0.0347   0.5194 ]      A_2 = [ 0.0591   0.2641 ]      B = [  ...    ]
      [ 0.3835   0.8310 ],           [ 1.7971   0.8717 ],         [ 1.4462 ].

Figure 2(a) shows the initial state response of a time-varying system in the set Ω, using the receding horizon control law of Theorem 1 (Q_1 = I, R = 1). Also included is the response with the static state-feedback control law from Theorem 1, where the feedback matrix F is not recomputed at each time k. The response with the receding horizon controller is about five times faster. Figure 2(b) shows the norm of F as a function of time for the two schemes, and thus explains the significantly better performance of the receding horizon scheme.

... conservatism in our worst-case MPC synthesis by recomputing F using new plant measurements.

Remark 7. The speed of the closed-loop response can be influenced by specifying a minimum decay rate on the state x (‖x(k)‖ ≤ cρ^k ‖x(0)‖, 0 < ρ < 1) as follows:

x(k+i+1|k)^T P x(k+i+1|k) ≤ ρ² x(k+i|k)^T P x(k+i|k),   i ≥ 0,    (27)

for any [A(k+i) B(k+i)] ∈ Ω, i ≥ 0. This implies that

‖x(k+i+1|k)‖ ≤ [λ_max(P)/λ_min(P)]^{1/2} ρ ‖x(k+i|k)‖,   i ≥ 0.

Following the steps in the proof of Theorem 1, it can be shown that the requirement (27) reduces to the following LMIs for the two uncertainty descriptions:

for polytopic uncertainty,

[ ρ²Q           (A_jQ + B_jY)^T ]
[ A_jQ + B_jY   Q               ]  ≥ 0,   j = 1, ..., L;    (28)

for structured uncertainty,

[ ρ²Q              (C_qQ + D_{qu}Y)^T   (AQ + BY)^T     ]
[ C_qQ + D_{qu}Y   Λ                     0               ]
[ AQ + BY          0                     Q − B_pΛB_p^T   ]  ≥ 0,    (29)

where Λ > 0 is of the form (26).

Thus an additional tuning parameter ρ ∈ (0, 1) is introduced in the MPC algorithm to influence the speed of the closed-loop response. Note that with ρ = 1, the above two LMIs are trivially satisfied if (21) and (25) are satisfied.

Fig. 2. (a) Unconstrained closed-loop responses and (b) norm of the feedback matrix F: solid lines, using receding horizon state feedback; dashed lines, using robust static state feedback.
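For the nominal case of Remark 4, the constant gain can be obtained by iterating the Riccati map to its fixed point, and the resulting P also yields a certified decay rate in the sense of Remark 7: the smallest ρ with (A+BF)^T P (A+BF) ≤ ρ²P. The model and weights below are illustrative, not the paper's example:

```python
import numpy as np

# Nominal plant and LQR weights (illustrative values, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q1, R = np.eye(2), np.array([[1.0]])

# Iterate the Riccati map to its fixed point (Remark 4: nominal case = LQR).
P = np.eye(2)
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q1 + A.T @ P @ A - A.T @ P @ B @ K

F = -K                      # constant gain, independent of the state (Remark 5)
Acl = A + B @ F
rho_spec = float(max(abs(np.linalg.eigvals(Acl))))

# Certified decay rate (Remark 7): smallest rho with Acl^T P Acl <= rho^2 P,
# found via the congruence P = L L^T.
L = np.linalg.cholesky(P)
Linv = np.linalg.inv(L)
M = Linv @ Acl.T @ P @ Acl @ Linv.T
rho_cert = float(np.sqrt(max(np.linalg.eigvalsh(M))))
print(rho_spec, rho_cert)
```

Since the closed-loop Lyapunov identity gives Acl^T P Acl − P = −(Q1 + F^T R F) < 0, the certified rate is strictly below one, and the spectral radius can never exceed it.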
incorporated as LMI constraints in the robust MPC problem. As a first step, we need to establish the following lemma, which will also be required to prove robust stability.

Lemma 1 (Invariant ellipsoid). Consider the system (1) with the associated uncertainty set Ω.

(a) Let Ω be a polytope described by (2). At sampling time k, suppose there exist Q > 0, γ and Y = FQ such that (21) holds. Also suppose that u(k+i|k) = Fx(k+i|k), i ≥ 0. Then if

x(k|k)^T Q^{-1} x(k|k) ≤ 1

(or, equivalently, x(k|k)^T P x(k|k) ≤ γ with P = γQ^{-1}), then

max_{[A(k+j) B(k+j)] ∈ Ω, j ≥ 0}  x(k+i|k)^T Q^{-1} x(k+i|k) < 1,   i ≥ 1,    (30)

or, equivalently,

max_{[A(k+j) B(k+j)] ∈ Ω, j ≥ 0}  x(k+i|k)^T P x(k+i|k) < γ,   i ≥ 1.    (31)

Thus ℰ = {z : z^T Q^{-1} z ≤ 1} = {z : z^T P z ≤ γ} is an invariant ellipsoid for the predicted states of the uncertain system (see Fig. 3).

(b) Let Ω be described by (6) in terms of a structured Δ block as in (4). At sampling time k, suppose there exist Q > 0, γ, Y = FQ and Λ > 0 such that (25) and (26) hold. If u(k+i|k) = Fx(k+i|k), i ≥ 0, then the result in (a) holds as well.

Fig. 3. Graphical representation of the state-invariant ellipsoid ℰ in two dimensions.

... u(k). In this section, we show how limits on the control signal can be incorporated into our robust MPC algorithm as sufficient LMI constraints. The basic idea of the discussion that follows can be found in Boyd et al. (1994) in the context of continuous-time systems. We present it here to clarify its application in our (discrete-time) robust MPC setting and also for completeness of exposition. We shall assume for the rest of this section that the postulates of Lemma 1 are satisfied, so that ℰ is an invariant ellipsoid for the predicted states of the uncertain system.

At sampling time k, consider the Euclidean norm constraint (9):

‖u(k+i|k)‖₂ ≤ u_max,   i ≥ 0.

The constraint is imposed on the present and the entire horizon of future manipulated variables, although only the first control move u(k|k) = u(k) is implemented. Following Boyd et al. (1994), we have

max_{i≥0} ‖u(k+i|k)‖₂² = max_{i≥0} ‖YQ^{-1}x(k+i|k)‖₂²
                        ≤ max_{z∈ℰ} ‖YQ^{-1}z‖₂²
                        = λ_max(Q^{-1/2}Y^TYQ^{-1/2}).

Using (13), we see that ‖u(k+i|k)‖₂ ≤ u_max, i ≥ 0, if

[ u_max² I   Y ]
[ Y^T        Q ]  ≥ 0.    (32)

Similarly, for componentwise peak bounds on the input, the existence of a symmetric matrix X such that

[ X     Y ]
[ Y^T   Q ]  ≥ 0,   with X_{jj} ≤ u_{j,max}²,   j = 1, 2, ..., n_u,    (33)

guarantees that |u_j(k+i|k)| ≤ u_{j,max}, i ≥ 0, j = 1, 2, ..., n_u. These are LMIs in X, Y and Q.
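The ellipsoid bound above is easy to verify numerically: for any fixed Q > 0 and Y (illustrative values below, not solver output), sampled points on the boundary of ℰ never produce a control move larger than λ_max(Q^{-1/2}Y^TYQ^{-1/2})^{1/2}, and random sampling approaches that bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Q > 0 and Y; in the algorithm these come from the LMI solver.
Q = np.array([[2.0, 0.3], [0.3, 1.0]])
Y = np.array([[0.4, -0.7]])

Qinv = np.linalg.inv(Q)
L = np.linalg.cholesky(Q)          # Q = L L^T
Linv = np.linalg.inv(L)
# lambda_max(Q^{-1/2} Y^T Y Q^{-1/2}) equals lambda_max(L^{-1} Y^T Y L^{-T}).
bound = float(np.sqrt(max(np.linalg.eigvalsh(Linv @ Y.T @ Y @ Linv.T))))

worst = 0.0
for _ in range(2000):
    v = rng.standard_normal(2)
    z = L @ (v / np.linalg.norm(v))      # a point on the boundary of E
    assert z @ Qinv @ z <= 1.0 + 1e-9    # z^T Q^{-1} z = 1, so z lies in E
    worst = max(worst, float(np.linalg.norm(Y @ Qinv @ z)))
print(worst, bound)
```

The change of variables z = Lw maps the unit sphere onto the boundary of ℰ, which is why the worst sampled ‖YQ^{-1}z‖ converges to the eigenvalue bound from below.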
Note that (33) is a slight generalization of the result derived in Boyd et al. (1994).

For the polytopic description (2), if

[ Q                  (A_jQ + B_jY)^T C^T ]
[ C(A_jQ + B_jY)     y_max² I            ]  ≥ 0,   j = 1, 2, ..., L,    (34)

then

max_{[A(k+j) B(k+j)] ∈ Ω, j ≥ 0}  ‖y(k+i|k)‖₂ ≤ y_max,   i ≥ 1.

The condition (34) represents a set of LMIs in Y and Q > 0.

Structured uncertainty. In this case, Ω is described by (3) and (4) in terms of a structured Δ block. As shown in Appendix C, if

[ Q                 (C_qQ + D_{qu}Y)^T   (AQ + BY)^T C^T             ]
[ C_qQ + D_{qu}Y    T^{-1}                0                          ]
[ C(AQ + BY)        0                     y_max² I − CB_pTB_p^TC^T   ]  ≥ 0,    (35)

then max ‖y(k+i|k)‖₂ ≤ y_max, i ≥ 1. The condition (35) is an LMI in Y, Q > 0 and T^{-1} > 0. For componentwise peak bounds on the output, we replace C by C_l, l = 1, ..., n_y.

3.2.3. Robust stability. We are now ready to state the main theorem for robust MPC synthesis with input and output constraints, and to establish robust stability of the closed loop.

Theorem 2. Let x(k) = x(k|k) be the state of the uncertain system (1) measured at sampling time k.

(a) Suppose the uncertainty set Ω is defined by a polytope as in (2). Then the state feedback matrix F in the control law u(k+i|k) = Fx(k+i|k), i ≥ 0, that minimizes the upper bound V(x(k|k)) on the robust performance objective function at sampling time k and satisfies a set of specified input and output constraints is given by

F = YQ^{-1},

...

(b) Suppose the uncertainty set Ω is defined by (6) in terms of a structured perturbation Δ as in (4). In this case, F is given by

F = YQ^{-1},

where Q > 0 and Y are obtained from the solution (if it exists) of the following linear objective minimization problem:

min {γ : γ, Q, Y, Λ and the variables in the LMIs for input and output constraints}

subject to (24), (25), (26), either (32) or (33), depending on the input constraint to be imposed, and (35) with either C and T, or C_l and T_l, l = 1, 2, ..., n_y, depending on the output constraint to be imposed.

Proof. From Lemma 1, we know that (21) and (24), (25) imply, respectively for the polytopic and structured uncertainties, that ℰ is an invariant ellipsoid for the predicted states of the uncertain system (1). Hence the arguments in Sections 3.2.1 and 3.2.2 used to translate the input and output constraints into sufficient LMI constraints hold true. The rest of the proof is similar to that of Theorem 1. □

In order to prove robust stability of the closed loop, we need to establish the following lemma.

Lemma 2 (Feasibility). Any feasible solution of the optimization in Theorem 2 at time k is also feasible for all times t > k. Thus if the optimization problem in Theorem 2 is feasible at time k, then it is feasible for all times t > k.

Proof. Let us assume that the optimization problem in Theorem 2 is feasible at sampling time k. The only LMI in the problem that depends explicitly on the measured state x(k|k) = x(k) of the system is the following:

[ 1        x(k|k)^T ]
[ x(k|k)   Q        ]  ≥ 0.

Thus, to prove the lemma, we need only prove that this LMI is feasible for all future measured states x(k+i|k+i) = x(k+i), i ≥ 1. Now, feasibility of the problem at time k implies satisfaction of (21) and (24), (25), which, using Lemma 1, in turn imply respectively for the two uncertainty descriptions that (30) is satisfied. Thus, for any [A(k+i) B(k+i)] ∈ Ω, i ≥ 0 (where Ω is the corresponding uncertainty set), we must have

x(k+i|k)^T Q^{-1} x(k+i|k) < 1,   i ≥ 1.

Since the state measured at k+1, that is, x(k+1|k+1) = x(k+1), equals [A(k) + B(k)F]x(k|k) for some [A(k) B(k)] ∈ Ω, it must also satisfy this inequality, i.e.

x(k+1|k+1)^T Q^{-1} x(k+1|k+1) < 1,

or

[ 1             x(k+1|k+1)^T ]
[ x(k+1|k+1)    Q            ]  ≥ 0

(using (13)). Thus the feasible solution of the optimization problem at time k is also feasible at time k+1. Hence the optimization is feasible at time k+1. This argument can be continued for times k+2, k+3, ... to complete the proof. □

Theorem 3 (Robust stability). The feasible receding horizon state feedback control law obtained from Theorem 2 robustly asymptotically stabilizes the closed-loop system.

Proof. In what follows, we shall refer to the uncertainty set as Ω, since the proof is identical for the two uncertainty descriptions. To prove asymptotic stability, we shall establish that V(x(k|k)) = x(k|k)^T P_k x(k|k), where P_k > 0 is obtained from the optimal solution at time k, is a strictly decreasing Lyapunov function for the closed loop.

First, let us assume that the optimization in Theorem 2 is feasible at time k = 0. Lemma 2 then ensures feasibility of the problem at all times k > 0. The optimization being convex, it therefore has a unique minimum and a corresponding optimal solution (γ, Q, Y) at each time k ≥ 0. Next, we note from Lemma 2 that γ, Q > 0, Y (or, equivalently, γ, F = YQ^{-1}, P = γQ^{-1} > 0) obtained from the optimal solution at time k are feasible (of course, not necessarily optimal) at time k+1. Denoting the values of P obtained from the optimal solutions at times k and k+1 respectively by P_k and P_{k+1} (see Remark 3), we must have

x(k+1|k+1)^T P_{k+1} x(k+1|k+1) ≤ x(k+1|k+1)^T P_k x(k+1|k+1).    (36)

This is because P_{k+1} is optimal, whereas P_k is only feasible at time k+1. And lastly, we know from Lemma 1 that if u(k+i|k) = F_k x(k+i|k), i ≥ 0 (F_k is obtained from the optimal solution at time k), then for any [A(k) B(k)] ∈ Ω, we must have

x(k+1|k)^T P_k x(k+1|k) < x(k|k)^T P_k x(k|k),   x(k|k) ≠ 0    (37)

(see (49) with i = 0). Since the measured state x(k+1|k+1) = x(k+1) equals (A(k) + B(k)F_k)x(k|k) for some [A(k) B(k)] ∈ Ω, it must also satisfy the
inequality (37). Combining this with the inequality (36), we conclude that

x(k+1|k+1)^T P_{k+1} x(k+1|k+1) < x(k|k)^T P_k x(k|k),   x(k|k) ≠ 0.

Thus x(k|k)^T P_k x(k|k) is a strictly decreasing Lyapunov function for the closed loop, which is bounded below by a positive-definite function of x(k|k) (see (17)). We therefore conclude that x(k) → 0 as k → ∞. □

Remark 10. The proof of Theorem 1 (the unconstrained case) is identical to the preceding proof if we recognize that Theorem 1 is only a special case of Theorem 2 without the LMIs corresponding to input and output constraints.

4. EXTENSIONS

The presentation up to this point has been restricted to the infinite horizon regulator with a zero target. In this section, we extend the preceding development to several standard problems encountered in practice.

4.1. Reference trajectory tracking
In optimal tracking problems, the system output is required to track a reference trajectory y_r(k) = C_r x_r(k), where the reference states x_r are computed from the equation ... As discussed in Kwakernaak and Sivan (1972), the plant dynamics can be augmented by the reference trajectory dynamics to reduce the robust trajectory tracking problem (with input and output constraints) to the standard form as in Section 3. Owing to space limitations, we shall omit these details.

4.2. Constant set-point tracking
For uncertain linear time-invariant systems, the desired equilibrium state may be a constant point x_s, u_s (called the set-point) in state space, different from the origin. Consider (1), which we shall now assume to represent an uncertain LTI system, i.e. [A B] ∈ Ω are constant unknown matrices. Suppose that the system output y is required to track the target vector y_t by moving the system to the set-point x_s, u_s, where ... We assume that x_s, u_s and y_t are feasible, i.e. they satisfy the imposed constraints. The choice of J_s(k) for the robust set-point tracking objective in the optimization (15) is

J_s(k) = Σ_{i=0}^{∞} { [Cx(k+i|k) − Cx_s]^T Q_1 [Cx(k+i|k) − Cx_s]
         + [u(k+i|k) − u_s]^T R [u(k+i|k) − u_s] },   Q_1 > 0, R > 0.    (38)

As discussed in Kwakernaak and Sivan (1972), we can define a shifted state x̄(k) = x(k) − x_s, a shifted input ū(k) = u(k) − u_s and a shifted output ȳ(k) = y(k) − y_t to reduce the problem to the standard form as in Section 3. Componentwise peak bounds on the control signal u can be translated to constraints on ū as follows:

−u_{j,max} − u_{s,j} ≤ ū_j ≤ u_{j,max} − u_{s,j}.

Constraints on the transient deviation of y(k) from the steady-state value y_t, i.e. on ȳ(k), can be incorporated in a similar manner.

4.3. Disturbance rejection
... [A(k) B(k)] ∈ Ω. A simple example of such a disturbance is any energy-bounded signal (Σ_{i=0}^{∞} e(i)^T e(i) < ∞). Assuming that the state of the system x(k) is measurable, we would like to solve the optimization problem (15). We shall assume that the predicted states of the system satisfy the equation

x(k+i+1|k) = A(k+i)x(k+i|k) + B(k+i)u(k+i|k),   [A(k+i) B(k+i)] ∈ Ω.    (40)

As in Section 3, we can derive an upper bound on the robust performance objective (15). The problem of minimizing this upper bound with a state-feedback control law u(k+i|k) = Fx(k+i|k), i ≥ 0, at the same time satisfying
constraints on the control input and plant output, can then be reduced to a linear objective minimization as in Theorem 2. The following theorem establishes stability of the closed loop for the system (39) with this receding horizon control law, in the presence of the disturbance e(k).

Theorem 4. Let x(k) = x(k | k) be the state of the system (39) measured at sampling time k and let the predicted states of the system satisfy (40). Then, assuming feasibility at each sampling time k ≥ 0, the receding horizon state-feedback control law obtained from Theorem 2 robustly asymptotically stabilizes the system (39) in the presence of any asymptotically vanishing disturbance e(k).

Proof. It is easy to show that V(x(k | k)) = x(k | k)ᵀPx(k | k), where P > 0 is obtained from the optimal solution at time k, is a strictly decreasing

For systems with delays, the state w(k) is assumed to be measurable at each time k ≥ τ, and we can derive an upper bound on the robust performance objective (42) as in Section 3. The problem of minimizing this upper bound with the state-feedback control law u(k + i − τ | k) = Fx(k + i − τ | k), k ≥ τ, i ≥ 0, subject to constraints on the control input and plant output, can then be reduced to a linear objective minimization as in Theorem 2. These details can be worked out in a straightforward manner, and will be omitted here. Note, however, that the appropriate choice of the function V(w(k)) satisfying an inequality of the form (16) is

    V(w(k)) = x(k)ᵀP₀x(k) + Σ_{i=1}^{τ₁} x(k − i)ᵀP₁x(k − i) + Σ_{i=τ₁+1}^{τ₂} x(k − i)ᵀP₂x(k − i) + ⋯ + Σ_{i=τ_{m−1}+1}^{τ_m} x(k − i)ᵀP_m x(k − i).
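The object at the heart of the proof, a quadratic function V(x) = xᵀPx that strictly decreases along the closed-loop trajectory, can be checked numerically on a toy example. In this sketch the stable closed-loop matrix and the truncated-series construction of P are illustrative assumptions, not the paper's LMI optimization:

```python
import numpy as np

# Hypothetical stable closed-loop matrix A_cl (spectral radius < 1).
A_cl = np.array([[0.5, 0.1],
                 [0.0, 0.6]])

# P = sum_{k>=0} (A_cl^T)^k (A_cl)^k solves A_cl^T P A_cl - P = -I;
# the series is truncated, with a negligible tail for this A_cl.
P = np.zeros((2, 2))
M = np.eye(2)                 # M holds A_cl^k
for _ in range(200):
    P += M.T @ M
    M = A_cl @ M

x = np.array([1.0, -2.0])
V_now = x @ P @ x
V_next = (A_cl @ x) @ P @ (A_cl @ x)
assert V_next < V_now         # V strictly decreases along the trajectory
print("V(x) = x'Px strictly decreases along the closed loop")
```

Since AᵀPA − P = −I here, the decrease per step is exactly ‖x‖², so V is a strict Lyapunov function for this toy closed loop.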
[Fig. 4: schematic of the angular positioning system — an antenna to be steered toward a target object; goal: θ = θ_r.]

The dynamics of the antenna can be described by the following discrete-time equations, obtained from their continuous-time counterparts by discretization, using a sampling time of 0.1 s and Euler's first-order approximation for the derivative:

    x(k + 1) = [1   0.1         ] x(k) + [0   ] u(k) ≜ A(k)x(k) + Bu(k),
               [0   1 − 0.1α(k) ]        [0.1κ]

    y(k) = [1  0]x(k) ≜ Cx(k),

    κ = 0.787 rad V⁻¹ s⁻²,   0.1 s⁻¹ ≤ α(k) ≤ 10 s⁻¹.

The parameter α(k) is proportional to the coefficient of viscous friction in the rotating parts of the antenna and is assumed to be arbitrarily time-varying in the indicated range of variation. Since 0.1 ≤ α(k) ≤ 10, we conclude that A(k) ∈ Ω = Co{A₁, A₂}, where

    A₁ = [1   0.1 ]     A₂ = [1   0.1]
         [0   0.99],         [0   0  ].

Thus the uncertainty set Ω is a polytope, as in (2). Alternatively, if we define

    δ(k) = (α(k) − 5.05)/4.95,

then δ(k) is time-varying and norm-bounded, with |δ(k)| ≤ 1, k ≥ 0. The uncertainty can then be described as in (3), with

    C_q = [0  4.95],   D_qu = 0.

Given an initially disturbed state x(0) = [0.05  0]ᵀ, the robust LMI-based MPC optimization described in Section 3 is to be solved at each time k. Figure 5(a) shows the closed-loop response when the control law is generated by minimizing a nominal unconstrained infinite horizon objective function, using a nominal model corresponding to α(k) = α_nom = 1 s⁻¹. The response is unstable. Note that the optimization is feasible at each time k ≥ 0, and hence the controller cannot diagnose the unstable response via infeasibility, even though the horizon is infinite (see Rawlings and Muske, 1993). This is not surprising, and shows that the prevalent notion that 'feedback in the form of plant measurements at each time step k is expected to compensate for unmeasured disturbances and model uncertainty' is only an ad hoc fix in MPC for model uncertainty, without any guarantee of robust stability. Figure 5(b) shows the response using the control law derived from Theorem 1. Notice that the response is stable and the performance is very good.

Fig. 5. Unconstrained closed-loop responses for the plant with α(k) = 9 s⁻¹: (a) using nominal MPC with α(k) = 1 s⁻¹; (b) using robust LMI-based MPC.

Figure 6(a) shows the closed-loop response of the system when α(k) is randomly time-varying between 0.1 and 10 s⁻¹. The corresponding control signal is given in Fig. 6(b). A control constraint of |u(k)| ≤ 2 V is imposed. The control law is synthesized according to Theorem 2. We see that the control signal stays close to the constraint boundary up to time k = 3 s, thus shedding light on Remark 9. Also included in Fig. 6 are the response and control signal using a static state-feedback control law, where the feedback matrix F computed from Theorem 2 at time k = 0 is kept constant for all times k > 0, i.e. it is not recomputed at each time k. The response is about four times slower than the response with the receding horizon state feedback,
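The polytopic description of the antenna model can be verified numerically: A(α) is affine in α, so for any α in [0.1, 10] it is a convex combination of the two vertices, and the normalized parameter δ = (α − 5.05)/4.95 stays in [−1, 1]. A short sketch (the specific test values of α are arbitrary):

```python
import numpy as np

def A_of(alpha):
    """Antenna state matrix for a given friction parameter alpha."""
    return np.array([[1.0, 0.1],
                     [0.0, 1.0 - 0.1 * alpha]])

A1, A2 = A_of(0.1), A_of(10.0)      # polytope vertices: Omega = Co{A1, A2}

for alpha in (0.1, 1.0, 5.05, 9.3, 10.0):
    lam = (alpha - 0.1) / 9.9       # convex weight placing A(alpha) in Co{A1, A2}
    assert 0.0 <= lam <= 1.0
    assert np.allclose(A_of(alpha), (1 - lam) * A1 + lam * A2)
    delta = (alpha - 5.05) / 4.95   # norm-bounded parameterization of (3)
    assert -1.0 <= delta <= 1.0
print("A(alpha) lies in Co{A1, A2} for every tested alpha")
```

The two descriptions (polytopic and norm-bounded) encode the same family of matrices here because the uncertainty enters through a single scalar parameter.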
while still meeting the input constraint. This 'optimal' use of the control constraint is possible only if F is recomputed at each time k, as in the receding horizon controller. The static state-feedback controller does not recompute F at each time k ≥ 0, and hence shows a sluggish (though stable) response.

For the computations, the solution at time k was used as an initial guess for solving the optimization at time k + 1. The total times required to compute the closed-loop responses in Fig. 5(b) (40 samples) and Fig. 6 (100 samples) were about 27 and 77 s respectively (or, equivalently, 0.68 and 0.77 s per sample), on a SUN SPARCstation 20, using MATLAB code. The actual CPU times were about 18 and 52 s (i.e. 0.45 and 0.52 s per sample) respectively. In both cases, nearly 95% of the time was required to solve the LMI optimization at each sampling time.

Fig. 6. Closed-loop responses for the time-varying system with input constraint: solid lines, using robust receding horizon state feedback; dashed lines, using robust static state feedback.

The discrete-time equations of the two-mass-spring system, obtained by discretizing the continuous-time equations of the system (see Wie and Bernstein, 1992), are

    [x₁(k + 1)]   [1           0            0.1   0  ] [x₁(k)]   [0     ]
    [x₂(k + 1)] = [0           1            0     0.1] [x₂(k)] + [0     ] u(k),
    [x₃(k + 1)]   [−0.1K/m₁    0.1K/m₁      1     0  ] [x₃(k)]   [0.1/m₁]
    [x₄(k + 1)]   [0.1K/m₂     −0.1K/m₂     0     1  ] [x₄(k)]   [0     ]

    y(k) = x₂(k).
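The discretized matrices above follow from Euler's rule A = I + 0.1·A_c applied to the continuous-time dynamics m₁ẍ₁ = K(x₂ − x₁) + u, m₂ẍ₂ = K(x₁ − x₂). A sketch that rebuilds them for arbitrary (m₁, m₂, K), checks the nominal case, and also confirms that A(K) is affine in K, which is what makes a vertex description at K_min and K_max possible:

```python
import numpy as np

def AB(K, m1=1.0, m2=1.0, h=0.1):
    """Euler discretization (step h) of the two-mass-spring system."""
    Ac = np.array([[0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0],
                   [-K / m1,  K / m1, 0.0, 0.0],
                   [ K / m2, -K / m2, 0.0, 0.0]])
    Bc = np.array([0.0, 0.0, 1.0 / m1, 0.0])
    return np.eye(4) + h * Ac, h * Bc

A, B = AB(K=1.0)                        # nominal case m1 = m2 = K = 1
assert np.allclose(A[2], [-0.1, 0.1, 1.0, 0.0])   # third row of the display
assert np.allclose(B, [0.0, 0.0, 0.1, 0.0])

# A(K) is affine in K: interpolating the vertices reproduces any intermediate K.
Amin, _ = AB(K=0.5)
Amax, _ = AB(K=10.0)
t = (3.7 - 0.5) / (10.0 - 0.5)
assert np.allclose(AB(K=3.7)[0], (1 - t) * Amin + t * Amax)
print("Euler discretization and vertex interpolation check out")
```

The input matrix B does not depend on K, so only A varies over the uncertainty set.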
M. V. Kothare et al.
Fig. 7. Norm of the feedback matrix F as a function of time: solid lines, using robust receding horizon state feedback; dashed lines, using robust static state feedback.

Here, x₁ and x₂ are the positions of bodies 1 and 2, and x₃ and x₄ are their respective velocities. m₁ and m₂ are the masses of the two bodies and K is the spring constant. For the nominal system, m₁ = m₂ = K = 1 with appropriate units. The control force u acts on m₁. The performance specifications are defined in Problem 4 of Wie and Bernstein (1992) as follows. Design a feedback/feedforward controller for a unit-step output command tracking problem for the output y with the following properties.
1. A control input constraint of |u| ≤ 1 must be satisfied.
2. Settling time and overshoot are to be minimized.
3. Performance and stability robustness with respect to m₁, m₂ and K are to be maximized.
We shall assume for this problem that exact measurement of the state of the system, that is, [x₁ x₂ x₃ x₄]ᵀ, is available. We shall also assume that the masses m₁ and m₂ are constant and equal to 1, and that K is an uncertain constant in the range K_min ≤ K ≤ K_max. The uncertainty in K is modeled as in (3) by defining

    δ = (K − K_nom)/(K_max − K_nom),   K_nom = (K_min + K_max)/2,

so that |δ| ≤ 1, with

    A = [1             0             0.1   0  ]
        [0             1             0     0.1]
        [−0.1K_nom/m₁  0.1K_nom/m₁   1     0  ]
        [0.1K_nom/m₂   −0.1K_nom/m₂  0     1  ],

    C_q = [K_max − K_nom   −(K_max − K_nom)   0   0],   D_qu = 0.

The robust MPC optimization is solved subject to |u(k + i | k)| ≤ 1, i ≥ 0; here the objective J(k) is given by (38), with Q₁ = I and R = 1. Figure 9 shows the output and control signal as functions of time, as the spring constant K (assumed to be constant but unknown) is varied between K_min = 0.5 and K_max = 10. The control law is synthesized using Theorem 2. An input constraint of |u| ≤ 1 is imposed. The output tracks the set-point to within 10% in about 25 s for all values of K. Also, the worst-case overshoot (corresponding to K = K_min = 0.5) is about 0.2. It was found that asymptotic tracking is achievable in a range as large as 0.01 ≤ K ≤ 100. The response in that case was, as expected, much more sluggish than that in Fig. 9.

The total time required to compute the closed-loop response in Fig. 9 (500 samples) for each fixed value of the spring constant K was about 438 s (about 0.87 s per sample) on a SUN SPARCstation 20, using MATLAB code. The CPU time was about 330 s (about 0.66 s per sample). Of these times, nearly 94% was required to solve the LMI optimization at each sampling time.

Fig. 9. Position of body 2 and the control signal as functions of time for varying values of the spring constant.

6. CONCLUSIONS

Model predictive control (MPC) has gained wide acceptance as a control technique in the process industries. From a theoretical standpoint, the stability properties of nominal MPC have been studied in great detail in the past seven or eight years. Similarly, the analysis of robustness properties of MPC has also received significant attention in the MPC literature. However, robust synthesis for MPC has been addressed only in a restrictive sense, for uncertain FIR models. In this article, we have described a new theory for robust MPC synthesis for two classes of very general and commonly encountered uncertainty descriptions. The development is based on the assumption of full state feedback. The on-line optimization involves solution of an LMI-based linear objective minimization. The resulting time-varying state-feedback control law minimizes, at each time step, an upper bound on the robust performance objective, subject to input and output constraints. Several extensions, such as constant set-point tracking, reference trajectory tracking, disturbance rejection and application to delay systems, complete the theoretical development. Two examples serve to illustrate application of the control technique.

Acknowledgements-Partial financial support from the US National Science Foundation is gratefully acknowledged. We would like to thank Pascal Gahinet for providing an initial version of the LMI-Lab software.

REFERENCES

Alizadeh, F., J.-P. A. Haeberly and M. L. Overton (1994). A new primal-dual interior-point method for semidefinite programming. In Proc. 5th SIAM Conf. on Applied Linear Algebra, Snowbird, UT, June 1994.
Allwright, J. C. and G. C. Papavasiliou (1992). On linear programming and robust model-predictive control using impulse-responses. Syst. Control Lett., 18, 159-164.
Bernussou, J., P. L. D. Peres and J. C. Geromel (1989). A linear programming oriented procedure for quadratic stabilization of uncertain systems. Syst. Control Lett., 13, 65-72.
Bitmead, R. R., M. Gevers and V. Wertz (1990). Adaptive Optimal Control. Prentice-Hall, Englewood Cliffs, NJ.
Boyd, S. and L. El Ghaoui (1993). Methods of centers for minimizing generalized eigenvalues. Lin. Algebra Applics, 188, 63-111.
Boyd, S., L. El Ghaoui, E. Feron and V. Balakrishnan (1994). Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia.
Campo, P. J. and M. Morari (1986). ∞-norm formulation of model predictive control problems. In Proc. American Control Conf., Seattle, WA, pp. 339-343.
Campo, P. J. and M. Morari (1987). Robust model predictive control. In Proc. American Control Conf., Minneapolis, MN, pp. 1021-1026.
Clarke, D. W. and C. Mohtadi (1989). Properties of generalized predictive control. Automatica, 25, 859-875.
Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Generalized predictive control-II. Extensions and interpretations. Automatica, 23, 149-160.
Feron, E., V. Balakrishnan and S. Boyd (1992). Design of stabilizing state feedback for delay systems via convex optimization. In Proc. 31st IEEE Conf. on Decision and Control, Tucson, AZ, Vol. 1, pp. 147-148.
Gahinet, P., A. Nemirovski, A. J. Laub and M. Chilali (1995). LMI Control Toolbox: For Use with MATLAB. The MathWorks, Inc., Natick, MA.
Garcia, C. E. and M. Morari (1982). Internal model control 1. A unifying review and some new results. Ind. Engng Chem. Process Des. Dev., 21, 308-323.
Garcia, C. E. and M. Morari (1985a). Internal model control 2. Design procedure for multivariable systems. Ind. Engng Chem. Process Des. Dev., 24, 472-484.
Garcia, C. E. and M. Morari (1985b). Internal model control 3. Multivariable control law computation and tuning guidelines. Ind. Engng Chem. Process Des. Dev., 24, 484-494.
Garcia, C. E., D. M. Prett and M. Morari (1989). Model predictive control: theory and practice-a survey. Automatica, 25, 335-348.
Genceli, H. and M. Nikolaou (1993). Robust stability analysis of constrained l1-norm model predictive control. AIChE J., 39, 1954-1965.
Geromel, J. C., P. L. D. Peres and J. Bernussou (1991). On a convex parameter space method for linear control design of uncertain systems. SIAM J. Control Optim., 29, 381-402.
Kwakernaak, H. and R. Sivan (1972). Linear Optimal Control Systems. Wiley-Interscience, New York.
Liu, R. W. (1968). Convergent systems. IEEE Trans. Autom. Control, AC-13, 384-391.
Muske, K. R. and J. B. Rawlings (1993). Model predictive control with linear models. AIChE J., 39, 262-287.
Nesterov, Yu. and A. Nemirovsky (1994). Interior-point Polynomial Methods in Convex Programming. SIAM, Philadelphia.
Packard, A. and J. Doyle (1993). The complex structured singular value. Automatica, 29, 71-109.
Polak, E. and T. H. Yang (1993a). Moving horizon control of linear systems with input saturation and plant uncertainty-1: robustness. Int. J. Control, 58, 613-638.
Polak, E. and T. H. Yang (1993b). Moving horizon control of linear systems with input saturation and plant uncertainty-2: disturbance rejection and tracking. Int. J. Control, 58, 639-663.
Rawlings, J. B. and K. R. Muske (1993). The stability of constrained receding horizon control. IEEE Trans. Autom. Control, AC-38, 1512-1516.
Tsirukis, A. G. and M. Morari (1992). Controller design with actuator constraints. In Proc. 31st IEEE Conf. on Decision and Control, Tucson, AZ, pp. 2623-2628.
Vandenberghe, L. and S. Boyd (1995). A primal-dual potential reduction method for problems involving matrix inequalities. Math. Program., 69, 205-236.
Wie, B. and D. S. Bernstein (1992). Benchmark problems for robust control design. J. Guidance, Control, Dyn., 15, 1057-1059.
Yakubovich, V. A. (1992). Nonconvex optimization problem: the infinite-horizon linear-quadratic control problem with quadratic constraints. Syst. Control Lett., 19, 13-22.
Zafiriou, E. (1990). Robust model predictive control of processes with hard constraints. Comput. Chem. Engng, 14, 359-371.
Zafiriou, E. and A. Marchal (1991). Stability of SISO quadratic dynamic matrix control with hard output constraints. AIChE J., 37, 1550-1560.
Zheng, A., V. Balakrishnan and M. Morari (1995). Constrained stabilization of discrete-time systems. Int. J. Robust Nonlin. Control, 5, 461-485.
Zheng, Z. Q. and M. Morari (1993). Robust stability of constrained model predictive control. In Proc. American Control Conf., San Francisco, CA, pp. 379-383.

APPENDIX A-PROOF OF THEOREM 1

Minimization of V(x(k | k)) = x(k | k)ᵀPx(k | k), P > 0, is equivalent to

    min_{γ,P} γ   subject to   x(k | k)ᵀPx(k | k) ≤ γ.

(a) Defining Q = γP⁻¹ > 0 and using (13), this is equivalent to

    min_{γ,Q,Y} γ   subject to

    [Q                        QA(k + i)ᵀ + YᵀB(k + i)ᵀ   QQ₁^{1/2}   YᵀR^{1/2}]
    [A(k + i)Q + B(k + i)Y    Q                           0           0       ]
    [Q₁^{1/2}Q                0                           γI          0       ]  ≥ 0,
    [R^{1/2}Y                 0                           0           γI      ]

    [A(k + i)  B(k + i)] ∈ Ω.   (A.2)

Since the last inequality is affine in [A(k + i) B(k + i)], it is satisfied for all

    [A(k + i)  B(k + i)] ∈ Ω = Co{[A₁ B₁], [A₂ B₂], ..., [A_L B_L]}

if and only if there exist Q > 0, Y = FQ and γ such that

    [Q              QA_jᵀ + YᵀB_jᵀ   QQ₁^{1/2}   YᵀR^{1/2}]
    [A_jQ + B_jY    Q                0           0        ]
    [Q₁^{1/2}Q      0                γI          0        ]  ≥ 0,   j = 1, 2, ..., L.
    [R^{1/2}Y       0                0           γI       ]

The feedback matrix is then given by F = YQ⁻¹. This establishes (18) and (21).

(b) Let Ω be described by (3) in terms of a structured uncertainty block Δ as in (4). As in (a), we substitute u(k + i | k) = Fx(k + i | k), i ≥ 0, and the state-space equations (3) in (16) to get

    [x(k + i | k)]ᵀ [(A + BF)ᵀP(A + BF) − P + FᵀRF + Q₁   (A + BF)ᵀPB_p] [x(k + i | k)]
    [p(k + i | k)]  [B_pᵀP(A + BF)                         B_pᵀPB_p    ] [p(k + i | k)]  ≤ 0,   (A.3)

    p_j(k + i | k)ᵀp_j(k + i | k) ≤ x(k + i | k)ᵀ(C_{q_j} + D_{q_ju}F)ᵀ(C_{q_j} + D_{q_ju}F)x(k + i | k),   j = 1, 2, ..., r.   (A.4)

It is easy to see that (A.3) and (A.4) are satisfied if there exist λ̃₁, λ̃₂, ..., λ̃_r > 0 such that

    [(A + BF)ᵀP(A + BF) − P + FᵀRF + Q₁ + (C_q + D_quF)ᵀΛ̃(C_q + D_quF)   (A + BF)ᵀPB_p  ]
    [B_pᵀP(A + BF)                                                         B_pᵀPB_p − Λ̃ ]  ≤ 0,

where Λ̃ = diag(λ̃₁I, ..., λ̃_rI). Defining Λ = γΛ̃⁻¹ > 0 and λ_j = γλ̃_j⁻¹ > 0, j = 1, 2, ..., r, then gives (22), (25) and (26), and the proof is complete. □

(a) From the proof of Theorem 1, Part (a), we know that max_{i≥0} ‖y(k + i | k)‖₂ ≤ y_max is guaranteed if

    Q^{1/2}(A + BF)ᵀCᵀC(A + BF)Q^{1/2} − y²_max I ≤ 0,

or, equivalently (multiplying on the left and right by Q^{1/2} and using (13)),

    [Q                            [A(k + i)Q + B(k + i)Y]ᵀCᵀ]
    [C[A(k + i)Q + B(k + i)Y]     y²_max I                  ]  ≥ 0,   i ≥ 0.

Since the last inequality is affine in [A(k + i) B(k + i)], it is satisfied for all [A(k + i) B(k + i)] ∈ Ω = Co{[A₁ B₁], [A₂ B₂], ..., [A_L B_L]} if and only if

    [Q                  (A_jQ + B_jY)ᵀCᵀ]
    [C(A_jQ + B_jY)     y²_max I        ]  ≥ 0,   j = 1, 2, ..., L.

This establishes (34).

(b) Similarly, max_{i≥0} ‖y(k + i | k)‖₂ ≤ y_max is guaranteed if there exists T = diag(t₁I, ..., t_rI) > 0 such that

    [Q^{1/2}(A + BF)ᵀCᵀC(A + BF)Q^{1/2} + Q^{1/2}(C_q + D_quF)ᵀT(C_q + D_quF)Q^{1/2} − y²_max I   Q^{1/2}(A + BF)ᵀCᵀCB_p]
    [B_pᵀCᵀC(A + BF)Q^{1/2}                                                                         B_pᵀCᵀCB_p − T       ]  ≤ 0,

or, equivalently (using (13) and after some simplification),

    [y²_max Q         (C_qQ + D_quY)ᵀ   (AQ + BY)ᵀCᵀ      ]
    [C_qQ + D_quY     T⁻¹               0                 ]  ≥ 0.
    [C(AQ + BY)       0                 I − CB_pT⁻¹B_pᵀCᵀ ]

This establishes (35).
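Solving the LMIs above for (γ, Q, Y) requires a semidefinite programming solver (the paper used MATLAB with LMI-Lab), but a necessary consequence of feasibility, namely that F = YQ⁻¹ makes every closed-loop vertex A_j + BF Schur stable, is easy to check numerically. A sketch for the antenna example with a hypothetical gain F (not a gain computed in the paper):

```python
import numpy as np

B = np.array([[0.0], [0.0787]])              # 0.1 * kappa, kappa = 0.787
A1 = np.array([[1.0, 0.1], [0.0, 0.99]])     # vertex at alpha = 0.1
A2 = np.array([[1.0, 0.1], [0.0, 0.0]])      # vertex at alpha = 10
F = np.array([[-10.0, -5.0]])                # hypothetical stabilizing gain

for Aj in (A1, A2):
    rho = max(abs(np.linalg.eigvals(Aj + B @ F)))   # spectral radius
    assert rho < 1.0                         # Schur stability at this vertex
print("both closed-loop vertices are Schur stable")
```

Note that vertex-wise Schur stability is only necessary: for arbitrarily time-varying [A(k) B(k)] within the polytope, robust stability requires the single quadratic Lyapunov function certified by the LMIs, which is exactly what the convex optimization provides.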