
Automatica, Vol. 32, No. 10, pp. 1361-1379, 1996
Copyright © 1996 Elsevier Science Ltd
Printed in Great Britain. All rights reserved
0005-1098/96 $15.00 + 0.00

Pergamon PII: S0005-1098(96)00063-5

Robust Constrained Model Predictive Control using Linear Matrix Inequalities*

MAYURESH V. KOTHARE,† VENKATARAMANAN BALAKRISHNAN‡ and MANFRED MORARI§

We present a new technique for the synthesis of a robust model predictive control (MPC) law, using linear matrix inequalities (LMIs). The technique allows incorporation of a large class of plant uncertainty descriptions, and is shown to be robustly stabilizing.

Key Words - Model predictive control; linear matrix inequalities; convex optimization; multivariable control systems; state feedback; on-line operation; robust control; robust stability; time-varying systems.

Abstract - The primary disadvantage of current design techniques for model predictive control (MPC) is their inability to deal explicitly with plant model uncertainty. In this paper, we present a new approach for robust MPC synthesis that allows explicit incorporation of the description of plant uncertainty in the problem formulation. The uncertainty is expressed in both the time and frequency domains. The goal is to design, at each time step, a state-feedback control law that minimizes a 'worst-case' infinite horizon objective function, subject to constraints on the control input and plant output. Using standard techniques, the problem of minimizing an upper bound on the 'worst-case' objective function, subject to input and output constraints, is reduced to a convex optimization involving linear matrix inequalities (LMIs). It is shown that the feasible receding horizon state-feedback control design robustly stabilizes the set of uncertain plants. Several extensions, such as application to systems with time delays, problems involving constant set-point tracking, trajectory tracking and disturbance rejection, which follow naturally from our formulation, are discussed. The controller design is illustrated with two examples. Copyright © 1996 Elsevier Science Ltd.

* Received 23 March 1995; revised 5 October 1995; received in final form 5 February 1996. This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Y. Yamamoto under the direction of Editor Ruth F. Curtain. Corresponding author Professor Manfred Morari. Tel. +41 1 632 7626; Fax +41 1 632 1211; E-mail [email protected].
† Chemical Engineering, 210-41, California Institute of Technology, Pasadena, CA 91125, U.S.A.
‡ School of Electrical Engineering, Purdue University, West Lafayette, IN 47907-1285, U.S.A. This work was initiated when this author was affiliated with Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125, U.S.A.
§ Institut für Automatik, Swiss Federal Institute of Technology (ETH), Physikstrasse 3, ETH-Zentrum, 8092 Zürich, Switzerland.

1. INTRODUCTION

Model predictive control (MPC), also known as moving horizon control (MHC) or receding horizon control (RHC), is a popular technique for the control of slow dynamical systems, such as those encountered in chemical process control in the petrochemical, pulp and paper industries, and in gas pipeline control. At every time instant, MPC requires the on-line solution of an optimization problem to compute optimal control inputs over a fixed number of future time instants, known as the 'time horizon'. Although more than one control move is generally calculated, only the first one is implemented. At the next sampling time, the optimization problem is reformulated and solved with new measurements obtained from the system. The on-line optimization can typically be reduced to either a linear program or a quadratic program. Using MPC, it is possible to handle inequality constraints on the manipulated and controlled variables in a systematic manner during the design and implementation of the controller. Moreover, several process models as well as many performance criteria of significance to the process industries can be handled using MPC. A fairly complete discussion of several design techniques based on MPC and their relative merits and demerits can be found in the review article by Garcia et al. (1989).

Perhaps the principal shortcoming of existing MPC-based control techniques is their inability to explicitly incorporate plant model uncertainty. Thus nearly all known formulations of MPC minimize, on-line, a nominal objective function, using a single linear time-invariant (LTI) model to predict the future plant behaviour. Feedback, in the form of plant measurement at the next sampling time, is expected to account for plant model uncertainty. Needless to say, such control systems that provide 'optimal' performance for a

particular model may perform very poorly when implemented on a physical system that is not exactly described by the model (see e.g. Zheng and Morari, 1993). Similarly, the extensive amount of literature on stability analysis of MPC algorithms is by and large restricted to the nominal case, with no plant-model mismatch (Garcia and Morari, 1982; Clarke et al., 1987; Clarke and Mohtadi, 1989; Zafiriou, 1990; Zafiriou and Marchal, 1991; Tsirukis and Morari, 1992; Muske and Rawlings, 1993; Rawlings and Muske, 1993; Zheng et al., 1995); the issue of the behavior of MPC algorithms in the face of uncertainty, i.e. 'robustness', has been addressed to a much lesser extent. Broadly, the existing literature on robustness in MPC can be summarized as follows.

• Analysis of robustness properties of MPC. Garcia and Morari (1982, 1985a, b) have analyzed the robustness of unconstrained MPC in the framework of internal model control (IMC), and have developed tuning guidelines for the IMC filter to guarantee robust stability. Zafiriou (1990) and Zafiriou and Marchal (1991) have used the contraction properties of MPC to develop necessary/sufficient conditions for robust stability of MPC with input and output constraints. Given upper and lower bounds on the impulse response coefficients of a single-input single-output (SISO) plant with finite impulse responses (FIR), Genceli and Nikolaou (1993) have presented robustness analysis of constrained ℓ1-norm MPC algorithms. Polak and Yang (1993a, b) have analyzed robust stability of their MHC algorithm for continuous-time linear systems with variable sampling times by using a contraction constraint on the state.

• MPC with explicit uncertainty description. The basic philosophy of MPC-based design algorithms that account explicitly for plant uncertainty is the following (Campo and Morari, 1987; Allwright and Papavasiliou, 1992; Zheng and Morari, 1993).

   Modify the on-line constrained minimization problem to a min-max problem (minimizing the worst-case value of the objective function, where the worst case is taken over the set of uncertain plants).

Based on this concept, Campo and Morari (1987), Allwright and Papavasiliou (1992) and Zheng and Morari (1993) have presented robust MPC schemes for SISO FIR plants, given uncertainty bounds on the impulse response coefficients. For certain choices of the objective function, the on-line problem is shown to be reducible to a linear program. One of the problems with this linear programming approach is that, to simplify the on-line computational complexity, one must choose simplistic, albeit unrealistic, model uncertainty descriptions, e.g. fewer FIR coefficients. Secondly, this approach cannot be extended to unstable systems.

From the preceding review, we see that there has been progress in the analysis of robustness properties of MPC. But robust synthesis, i.e. the explicit incorporation of a realistic plant uncertainty description in the problem formulation, has been addressed only in a restrictive framework for FIR models. There is a need for computationally inexpensive techniques for robust MPC synthesis that are suitable for on-line implementation and that allow incorporation of a broad class of model uncertainty descriptions. In this paper, we present one such MPC-based technique for the control of plants with uncertainties. This technique is motivated by recent developments in the theory and application (to control) of optimization involving linear matrix inequalities (LMIs) (Boyd et al., 1994). There are two reasons why LMI optimization is relevant to MPC. First, LMI-based optimization problems can be solved in polynomial time, often in times comparable to that required for the evaluation of an analytical solution for a similar problem. Thus LMI optimization can be implemented on-line. Secondly, it is possible to recast much of existing robust control theory in the framework of LMIs. The implication is that we can devise an MPC scheme where, at each time instant, an LMI optimization problem (as opposed to conventional linear or quadratic programs) is solved that incorporates input and output constraints and a description of the plant uncertainty, and guarantees certain robustness properties.

The paper is organized as follows. In Section 2 we discuss background material such as models of systems with uncertainties, LMIs and MPC. In Section 3, we formulate the robust unconstrained MPC problem with state feedback as an LMI problem. We then extend the formulation to incorporate input and output constraints, and show that the feasible receding horizon control law that we obtain is robustly stabilizing. In Section 4, we extend our formulation to systems with time delays and to problems involving trajectory tracking, constant set-point tracking and disturbance rejection. In Section 5, we present two examples to illustrate

the design procedure. Finally, in Section 6, we present concluding remarks.

2. BACKGROUND

2.1. Models for uncertain systems

We present two paradigms for robust control, which arise from two different modeling and identification procedures. The first is a 'multi-model' paradigm, and the second is the more popular 'linear system with a feedback uncertainty' robust control model. Underlying both these paradigms is a linear time-varying (LTV) system

   x(k+1) = A(k)x(k) + B(k)u(k),
   y(k) = Cx(k),   (1)
   [A(k) B(k)] ∈ Ω,

where u(k) ∈ R^{n_u} is the control input, x(k) ∈ R^{n_x} is the state of the plant and y(k) ∈ R^{n_y} is the plant output, and Ω is some prespecified set.

Polytopic or multi-model paradigm. For polytopic systems, the set Ω is the polytope

   Ω = Co{[A_1 B_1], [A_2 B_2], ..., [A_L B_L]},   (2)

where Co denotes the convex hull. In other words, if [A B] ∈ Ω then, for some nonnegative λ_1, λ_2, ..., λ_L summing to one, we have

   [A B] = Σ_{i=1}^{L} λ_i [A_i B_i].

L = 1 corresponds to the nominal LTI system. Polytopic system models can be developed as follows. Suppose that for the (possibly nonlinear) system under consideration, we have input/output data sets at different operating points, or at different times. From each data set, we develop a number of linear models (for simplicity, we assume that the various linear models involve the same state vector). Then it is reasonable to assume that any analysis and design methods for the polytopic system (1), (2) with vertices given by the linear models will apply to the real system.

Alternatively, suppose the Jacobian [∂f/∂x  ∂f/∂u] of a nonlinear discrete time-varying system x(k+1) = f(x(k), u(k), k) is known to lie in the polytope Ω. Then it can be shown that every trajectory (x, u) of the original nonlinear system is also a trajectory of (1) for some LTV system in Ω (Liu, 1968). Thus the original nonlinear system can be approximated (possibly conservatively) by a polytopic uncertain LTV system. Similarly, it can be shown that bounds on impulse response coefficients of SISO FIR plants can be translated to a polytopic uncertainty description on the state-space matrices. Thus this polytopic uncertainty description is suitable for several problems of engineering significance.

Structured feedback uncertainty. A second, more common, paradigm for robust control consists of an LTI system with uncertainties or perturbations appearing in the feedback loop (see Fig. 1b):

   x(k+1) = Ax(k) + Bu(k) + B_p p(k),
   y(k) = Cx(k),
   q(k) = C_q x(k) + D_qu u(k),   (3)
   p(k) = (Δq)(k).

The operator Δ is block-diagonal:

   Δ = diag(Δ_1, Δ_2, ..., Δ_r),   (4)

with Δ_i : R^{n_i} → R^{n_i}. Δ can represent either a memoryless time-varying matrix with

   ||Δ_i(k)||_2 = σ̄(Δ_i(k)) ≤ 1,   i = 1, 2, ..., r,   k ≥ 0,

or a convolution operator (e.g. a stable LTI dynamical system), with the operator norm induced by the truncated ℓ2-norm less than 1, i.e.

   Σ_{j=0}^{k} p_i(j)^T p_i(j) ≤ Σ_{j=0}^{k} q_i(j)^T q_i(j),   i = 1, ..., r,   ∀k ≥ 0.   (5)

Each Δ_i is assumed to be either a repeated scalar block or a full block, and models a number of factors, such as nonlinearities, dynamics or parameters, that are unknown, unmodeled or neglected. A number of control systems with uncertainties can be recast in this framework (Packard and Doyle, 1993). For ease of reference, we shall refer to such systems as systems with structured uncertainty. Note that in this case, the uncertainty set Ω is defined by (3) and (4).

When Δ_i is a stable LTI dynamical system, the quadratic sum constraint (5) is equivalent to the following frequency-domain specification on the z-transform Δ̂_i(z):

   ||Δ̂_i||_∞ = sup_{ω ∈ [0, 2π]} σ̄(Δ̂_i(e^{jω})) ≤ 1.

Thus the structured uncertainty description is allowed to contain both LTI and LTV blocks, with frequency-domain and time-domain constraints respectively.
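The polytopic model (1), (2) is straightforward to exercise numerically. The sketch below (Python/NumPy; the two vertex systems, the Dirichlet sampling of the weights and the zero input are illustrative assumptions, not taken from the paper) draws a plant [A(k) B(k)] from the polytope Ω at every step by sampling nonnegative weights λ_i summing to one, and propagates one trajectory of the LTV system (1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative vertex models (L = 2); any convex combination lies in Omega.
A1, B1 = np.array([[0.9, 0.2], [0.0, 0.8]]), np.array([[0.0], [1.0]])
A2, B2 = np.array([[0.8, 0.1], [0.1, 0.9]]), np.array([[0.1], [0.9]])
vertices = [(A1, B1), (A2, B2)]

def sample_plant(vertices, rng):
    """Draw [A B] from Omega = Co{[A_i B_i]}: nonnegative weights
    lambda_i summing to one define the convex combination (2)."""
    lam = rng.dirichlet(np.ones(len(vertices)))
    A = sum(l * Ai for l, (Ai, _) in zip(lam, vertices))
    B = sum(l * Bi for l, (_, Bi) in zip(lam, vertices))
    return A, B

# One trajectory of the LTV system (1): a new [A(k) B(k)] at every step.
x = np.array([1.0, -1.0])
traj = [x]
for k in range(20):
    A, B = sample_plant(vertices, rng)
    u = np.zeros(1)            # open loop here; a controller would set u(k)
    x = A @ x + B @ u
    traj.append(x)
traj = np.asarray(traj)
```

Setting L = 1 recovers the nominal LTI system, and the same sampling loop is a convenient way to test a candidate feedback law against many plants in Ω.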

Fig. 1.
We shall, however, only consider the LTV case, since the results we obtain are identical for the general mixed uncertainty case, with one exception, as pointed out in Section 3.2.2. The details can be found in Boyd et al. (1994, Section 8.2), and will be omitted here for brevity. For the LTV case, it is easy to show through routine algebraic manipulations that the system (3) corresponds to the system (1) with

   Ω = {[A + B_p Δ C_q   B + B_p Δ D_qu] : Δ satisfies (4) with σ̄(Δ_i) ≤ 1}.   (6)

Δ = 0 (p(k) ≡ 0, k ≥ 0) corresponds to the nominal LTI system.

The issue of whether to model a system as a polytopic system or a system with structured uncertainty depends on a number of factors, such as the underlying physical model of the system, available model identification and validation techniques, etc. For example, nonlinear systems can be modeled either as polytopic systems or as systems with structured perturbations. We shall not concern ourselves with such issues here; instead we shall assume that one of the two models discussed thus far is available.

2.2. Model predictive control

Model predictive control is an open-loop control design procedure where, at each sampling time k, plant measurements are obtained and a model of the process is used to predict future outputs of the system. Using these predictions, m control moves u(k+i|k), i = 0, 1, ..., m-1, are computed by minimizing a nominal cost J_p(k) over a prediction horizon p as follows:

   min_{u(k+i|k), i=0,1,...,m-1} J_p(k),   (7)

subject to constraints on the control input u(k+i|k), i = 0, 1, ..., m-1, and possibly also on the state x(k+i|k), i = 0, 1, ..., p, and the output y(k+i|k), i = 1, 2, ..., p. Here we use the following notation:

x(k+i|k), y(k+i|k)   state and output, respectively, at time k+i, predicted based on the measurements at time k; x(k|k) and y(k|k) refer respectively to the state and output measured at time k;
u(k+i|k)   control move at time k+i, computed by the optimization problem (7) at time k; u(k|k) is the control move to be implemented at time k;
p   output or prediction horizon;
m   input or control horizon.

It is assumed that there is no control action after time k+m-1, i.e. u(k+i|k) = 0, i ≥ m. In the receding horizon framework, only the first computed control move u(k|k) is implemented. At the next sampling time, the optimization (7) is resolved with new measurements from the plant. Thus both the control horizon m and the prediction horizon p move or recede ahead by one step as time moves ahead by one step. This is the reason why MPC is also sometimes referred to as receding horizon control (RHC) or moving horizon control (MHC). The purpose of taking new measurements at each time step is to compensate for unmeasured disturbances and model inaccuracy, both of which cause the system output to be different from the one predicted by the model. We assume that exact measurement of the state of the system is available at each sampling time k, i.e.

   x(k|k) = x(k).   (8)

Several choices of the objective function J_p(k) in the optimization (7) have been reported (Garcia et al., 1989; Zafiriou and Marchal, 1991; Muske and Rawlings, 1993; Genceli and Nikolaou, 1993) and have been compared in Campo and Morari (1986).
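The receding horizon mechanics around (7) and (8) can be sketched for the nominal, unconstrained case. The fragment below (Python/NumPy; the double-integrator model, the weights and the horizon are illustrative assumptions, and a finite-horizon quadratic cost stands in for the objective used later in the paper) stacks the predictions as X = S x(k|k) + T U, minimizes the cost in closed form, and implements only the first move u(k|k) before re-solving at the next sampling time.

```python
import numpy as np

# Illustrative nominal model (a discretized double integrator).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q1, R, m = np.eye(2), np.eye(1), 8     # weights and horizon (p = m here)

def mpc_move(x):
    """One nominal MPC step: minimize the sum of x(k+i|k)'Q1 x(k+i|k)
    and u(k+i|k)'R u(k+i|k) over u(k|k), ..., u(k+m-1|k), unconstrained.
    Predictions are stacked as X = S x + T U, so the cost is a
    quadratic in U with a closed-form minimizer."""
    n, nu = A.shape[0], B.shape[1]
    S = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(m)])
    T = np.zeros((m * n, m * nu))
    for i in range(m):
        for j in range(i + 1):
            T[i*n:(i+1)*n, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
    Qbar, Rbar = np.kron(np.eye(m), Q1), np.kron(np.eye(m), R)
    U = np.linalg.solve(T.T @ Qbar @ T + Rbar, -T.T @ Qbar @ S @ x)
    return U[:nu]              # receding horizon: implement only u(k|k)

x = np.array([5.0, 0.0])
for k in range(30):            # at each k: measure x(k), re-solve, apply first move
    x = A @ x + B @ mpc_move(x)
```

With input or output constraints added, the inner problem becomes the quadratic program mentioned above instead of a linear solve, but the outer measure/solve/apply loop is unchanged.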

In this paper, we consider the following quadratic objective

   J_p(k) = Σ_{i=0}^{p} [x(k+i|k)^T Q_1 x(k+i|k) + u(k+i|k)^T R u(k+i|k)],

where Q_1 > 0 and R > 0 are symmetric weighting matrices. In particular, we shall consider the case p = ∞, which is referred to as infinite horizon MPC (IH-MPC). Finite horizon control laws have been known to have poor nominal stability properties (Bitmead et al., 1990; Rawlings and Muske, 1993). Nominal stability of finite horizon MPC requires imposition of a terminal state constraint (x(k+i|k) = 0, i = m) and/or use of the contraction mapping principle (Zafiriou, 1990; Zafiriou and Marchal, 1991) to tune Q_1, R, m and p for stability. But the terminal state constraint is somewhat artificial, since only the first control move is implemented. Thus, in the closed loop, the states actually approach zero only asymptotically. Also, the computation of the contraction condition (Zafiriou, 1990; Zafiriou and Marchal, 1991) at all possible combinations of active constraints at the optimum of the on-line optimization can be extremely time consuming, and, as such, this issue remains unaddressed. On the other hand, infinite horizon control laws have been shown to guarantee nominal stability (Muske and Rawlings, 1993; Rawlings and Muske, 1993). We therefore believe that, rather than using the above methods to 'tune' the parameters for stability, it is preferable to adopt the infinite horizon approach to guarantee at least nominal stability.

In this paper, we consider Euclidean norm bounds and componentwise peak bounds on the input u(k+i|k), given respectively as

   ||u(k+i|k)||_2 ≤ u_max,   k, i ≥ 0,   (9)

and

   |u_j(k+i|k)| ≤ u_{j,max},   k, i ≥ 0,   j = 1, 2, ..., n_u.   (10)

Similarly, for the output, we consider the Euclidean norm constraint and componentwise peak bounds on y(k+i|k), given respectively as

   ||y(k+i|k)||_2 ≤ y_max,   k ≥ 0,   i ≥ 1,   (11)

and

   |y_j(k+i|k)| ≤ y_{j,max},   k ≥ 0,   i ≥ 1,   j = 1, 2, ..., n_y.   (12)

Note that the output constraints have been imposed strictly over the future horizon (i.e. i ≥ 1) and not at the current time (i.e. i = 0). This is because the current output cannot be influenced by the current or future control action, and hence imposing any constraints on y at the current time is meaningless. Note also that (11) and (12) specify 'worst-case' output constraints. In other words, (11) and (12) must be satisfied for any time-varying plant in Ω used as a model for predicting the output.

Remark 1. Constraints on the input are typically hard constraints, since they represent limitations on process equipment (such as valve saturation), and as such cannot be relaxed or softened. Constraints on the output, on the other hand, are often performance goals; it is usually only required to make y_max and y_{j,max} as small as possible, subject to the input constraints.

2.3. Linear matrix inequalities

We give a brief introduction to linear matrix inequalities and some optimization problems based on LMIs. For more details, we refer the reader to Boyd et al. (1994).

A linear matrix inequality or LMI is a matrix inequality of the form

   F(x) = F_0 + Σ_{i=1}^{m} x_i F_i > 0,

where x_1, x_2, ..., x_m are the variables, F_i = F_i^T ∈ R^{n×n} are given, and F(x) > 0 means that F(x) is positive-definite. Multiple LMIs F_1(x) > 0, ..., F_n(x) > 0 can be expressed as the single LMI

   diag(F_1(x), ..., F_n(x)) > 0.

Therefore we shall make no distinction between a set of LMIs and a single LMI, i.e. 'the LMI F_1(x) > 0, ..., F_n(x) > 0' will mean the 'LMI diag(F_1(x), ..., F_n(x)) > 0'.

Convex quadratic inequalities are converted to LMI form using Schur complements. Let Q(x) = Q(x)^T, R(x) = R(x)^T and S(x) depend affinely on x. Then the LMI

   [ Q(x)     S(x) ]
   [ S(x)^T   R(x) ] > 0   (13)

is equivalent to the matrix inequalities

   R(x) > 0,   Q(x) - S(x)R(x)^{-1}S(x)^T > 0

or, equivalently,

   Q(x) > 0,   R(x) - S(x)^T Q(x)^{-1} S(x) > 0.
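The Schur complement equivalence stated for (13) is easy to confirm numerically. The check below (Python/NumPy; the random matrices are illustrative) builds Q, R, S with R > 0 and Q - S R^{-1} S^T > 0, verifies that the block matrix is positive-definite, and then shrinks Q so that the Schur condition, and with it the definiteness of the block, fails.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_pd(M, tol=1e-9):
    """Positive-definiteness test via the smallest eigenvalue."""
    return np.linalg.eigvalsh(M).min() > tol

n = 4
S = rng.standard_normal((n, n))
R = S @ S.T + np.eye(n)                        # some R = R' > 0
Q = S @ np.linalg.solve(R, S.T) + np.eye(n)    # so Q - S R^{-1} S' = I > 0

# The block matrix of (13) is positive-definite ...
block = np.block([[Q, S], [S.T, R]])
assert is_pd(block)

# ... exactly when R > 0 and the Schur complement Q - S R^{-1} S' > 0.
assert is_pd(R) and is_pd(Q - S @ np.linalg.solve(R, S.T))

# Shrinking Q below the Schur bound destroys definiteness of the block:
Q_bad = Q - 2.0 * np.eye(n)                    # now Q_bad - S R^{-1} S' = -I
assert not is_pd(np.block([[Q_bad, S], [S.T, R]]))
```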

We often encounter problems in which the variables are matrices, for example the constraint P > 0, where the entries of P are the optimization variables. In such cases, we shall not write out the LMI explicitly in the form F(x) > 0, but instead make clear which matrices are the variables.

The LMI-based problem of central importance to this paper is that of minimizing a linear objective subject to LMI constraints:

   minimize c^T x
   subject to F(x) > 0.   (14)

Here F is a symmetric matrix that depends affinely on the optimization variable x, and c is a real vector of appropriate size. This is a convex nonsmooth optimization problem. For more on this and other LMI-based optimization problems, we refer the reader to Boyd et al. (1994). The observation about LMI-based optimization that is most relevant to us is that

   LMI problems are tractable.

LMI problems can be solved in polynomial time, which means that they have low computational complexity; from a practical standpoint, there are effective and powerful algorithms for the solution of these problems, that is, algorithms that rapidly compute the global optimum, with non-heuristic stopping criteria. Thus, on exit, the algorithms can prove that the global optimum has been obtained to within some prespecified accuracy (Boyd and El Ghaoui, 1993; Alizadeh et al., 1994; Nesterov and Nemirovsky, 1994; Vandenberghe and Boyd, 1995). Numerical experience shows that these algorithms solve LMI problems with extreme efficiency.

The most important implication from the foregoing discussion is that LMI-based optimization is well suited for on-line implementation, which is essential for MPC.

3. MODEL PREDICTIVE CONTROL USING LINEAR MATRIX INEQUALITIES

In this section, we discuss the problem formulation for robust MPC. In particular, we modify the minimization of the nominal objective function, discussed in Section 2.2, to a minimization of the worst-case objective function. Following the motivation in Section 2.2, we consider the IH-MPC problem. We begin with the robust IH-MPC problem without input and output constraints, and reduce it to a linear objective minimization problem. We then incorporate input and output constraints. Finally, we show that the feasible receding horizon state-feedback control law robustly stabilizes the set of uncertain plants Ω.

3.1. Robust unconstrained IH-MPC

The system is described by (1) with the associated uncertainty set Ω (either (2) or (6)). Analogously to the familiar approach from linear robust control, we replace the minimization, at each sampling time k, of the nominal performance objective (given in (7)) by the minimization of a robust performance objective as follows:

   min_{u(k+i|k), i=0,1,...,m}  max_{[A(k+i) B(k+i)] ∈ Ω, i ≥ 0}  J_∞(k),

where

   J_∞(k) = Σ_{i=0}^{∞} [x(k+i|k)^T Q_1 x(k+i|k) + u(k+i|k)^T R u(k+i|k)].   (15)

This is a 'min-max' problem. The maximization is over the set Ω, and corresponds to choosing that time-varying plant [A(k+i) B(k+i)] ∈ Ω, i ≥ 0, which, if used as a 'model' for predictions, would lead to the largest or 'worst-case' value of J_∞(k) among all plants in Ω. This worst-case value is minimized over present and future control moves u(k+i|k), i = 0, 1, ..., m. This min-max problem, though convex for finite m, is not computationally tractable, and as such has not been addressed in the MPC literature. We address the problem (15) by first deriving an upper bound on the robust performance objective. We then minimize this upper bound with a constant state-feedback control law u(k+i|k) = Fx(k+i|k), i ≥ 0.

Derivation of the upper bound. Consider a quadratic function V(x) = x^T P x, P > 0, of the state x(k|k) = x(k) (see (8)) of the system (1), with V(0) = 0. At sampling time k, suppose V satisfies the following inequality for all x(k+i|k), u(k+i|k), i ≥ 0, satisfying (1), and for any [A(k+i) B(k+i)] ∈ Ω, i ≥ 0:

   V(x(k+i+1|k)) - V(x(k+i|k)) ≤ -[x(k+i|k)^T Q_1 x(k+i|k) + u(k+i|k)^T R u(k+i|k)].   (16)

For the robust performance objective function to be finite, we must have x(∞|k) = 0, and hence V(x(∞|k)) = 0. Summing (16) from i = 0 to i = ∞, we get

   -V(x(k|k)) ≤ -J_∞(k).

Thus

   max_{[A(k+i) B(k+i)] ∈ Ω, i ≥ 0} J_∞(k) ≤ V(x(k|k)).   (17)

This gives an upper bound on the robust

performance objective. Thus the goal of our robust MPC algorithm has been redefined to synthesize, at each time step k, a constant state-feedback control law u(k+i|k) = Fx(k+i|k) to minimize this upper bound V(x(k|k)). As is standard in MPC, only the first computed input u(k|k) = Fx(k|k) is implemented. At the next sampling time, the state x(k+1) is measured, and the optimization is repeated to recompute F. The following theorem gives us conditions for the existence of the appropriate P > 0 satisfying (16) and the corresponding state feedback matrix F.

Theorem 1. Let x(k) = x(k|k) be the state of the uncertain system (1) measured at sampling time k. Assume that there are no constraints on the control input and plant output.

(a) Suppose that the uncertainty set Ω is defined by a polytope as in (2). Then the state feedback matrix F in the control law u(k+i|k) = Fx(k+i|k), i ≥ 0, that minimizes the upper bound V(x(k|k)) on the robust performance objective function at sampling time k is given by

   F = YQ^{-1},   (18)

where Q > 0 and Y are obtained from the solution (if it exists) of the following linear objective minimization problem (this problem is of the same form as the problem (14)):

   min_{γ,Q,Y} γ   (19)

subject to

   [ 1        x(k|k)^T ]
   [ x(k|k)   Q        ] ≥ 0   (20)

and

   [ Q               Q A_j^T + Y^T B_j^T   Q Q_1^{1/2}   Y^T R^{1/2} ]
   [ A_j Q + B_j Y   Q                     0             0           ]
   [ Q_1^{1/2} Q     0                     γI            0           ] ≥ 0,   j = 1, 2, ..., L.   (21)
   [ R^{1/2} Y       0                     0             γI          ]

(b) Suppose the uncertainty set Ω is defined by a structured norm-bounded perturbation Δ as in (6). In this case, F is given by

   F = YQ^{-1},   (22)

where Q > 0 and Y are obtained from the solution (if it exists) of the following linear objective minimization problem with variables γ, Q, Y and Λ:

   min_{γ,Q,Y,Λ} γ   (23)

subject to

   [ 1        x(k|k)^T ]
   [ x(k|k)   Q        ] ≥ 0   (24)

and

   [ Q                 Y^T R^{1/2}   Q Q_1^{1/2}   Q C_q^T + Y^T D_qu^T   Q A^T + Y^T B^T  ]
   [ R^{1/2} Y         γI            0             0                      0                ]
   [ Q_1^{1/2} Q       0             γI            0                      0                ] ≥ 0,   (25)
   [ C_q Q + D_qu Y    0             0             Λ                      0                ]
   [ A Q + B Y         0             0             0                      Q - B_p Λ B_p^T  ]

where

   Λ = diag(λ_1 I_{n_1}, λ_2 I_{n_2}, ..., λ_r I_{n_r}) > 0.   (26)

Proof. See Appendix A.

Remark 2. Part (a) of Theorem 1 can be derived from the results in Bernussou et al. (1989) for quadratic stabilization of uncertain polytopic continuous-time systems and their extension to the discrete-time case (Geromel et al., 1991). Part (b) can be derived using the same basic techniques in conjunction with the S-procedure (see Yakubovich, 1992, and the references therein).

Remark 3. Strictly speaking, the variables in the above optimization should be denoted by Q_k, F_k, Y_k, etc. to emphasize that they are computed at time k. For notational convenience, we omit the subscript here and in the next section. We shall, however, briefly utilize this notation in the robust stability proof (Theorem 3). Closed-loop stability of the receding horizon state-feedback control law given in Theorem 1 will be established in Section 3.2.

Remark 4. For the nominal case (L = 1, or Δ(k) = 0, p(k) = 0, k ≥ 0), it can be shown that we recover the standard discrete-time linear quadratic regulator (LQR) solution (see Kwakernaak and Sivan (1972) for the standard LQR solution).
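Remark 4 suggests a simple numerical sanity check for the nominal case L = 1. The sketch below (Python/NumPy; the plant and weights are illustrative assumptions, and a fixed-point Riccati iteration stands in for an SDP solver) computes the LQR cost matrix P, forms the candidate Theorem 1 variables γ = x(k|k)^T P x(k|k), Q = γP^{-1} and Y = FQ (so that F = YQ^{-1} as in (18)), and then verifies the LMIs (20) and (21) by eigenvalue tests; for the LQR solution both hold on the boundary, up to numerical tolerance.

```python
import numpy as np

def sym_sqrt(M):
    """Symmetric square root of a positive-semidefinite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# Illustrative nominal plant (L = 1) and the weights Q1, R of (15).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q1, R = np.eye(2), np.eye(1)

# Fixed-point Riccati iteration for the discrete-time LQR cost matrix P.
P = Q1.copy()
for _ in range(5000):
    P = Q1 + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
F = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = F x

# Candidate Theorem 1 variables for the measured state x(k|k) = x0.
x0 = np.array([1.0, -0.5])
gamma = float(x0 @ P @ x0)        # the upper bound V(x(k|k)) of (17)
Q = gamma * np.linalg.inv(P)      # P = gamma Q^{-1}
Y = F @ Q                         # so F = Y Q^{-1}, as in (18)
sqQ1, sqR = sym_sqrt(Q1), sym_sqrt(R)

# LMI (20): [[1, x0'], [x0, Q]] >= 0, i.e. x0' Q^{-1} x0 <= 1.
M20 = np.block([[np.ones((1, 1)), x0[None, :]],
                [x0[:, None],     Q]])

# LMI (21) with a single vertex (j = L = 1).
Z22, Z21, Z12 = np.zeros((2, 2)), np.zeros((2, 1)), np.zeros((1, 2))
M21 = np.block([
    [Q,             (A @ Q + B @ Y).T, Q @ sqQ1,          Y.T @ sqR],
    [A @ Q + B @ Y, Q,                 Z22,               Z21],
    [sqQ1 @ Q,      Z22,               gamma * np.eye(2), Z21],
    [sqR @ Y,       Z12,               Z12,               gamma * np.eye(1)],
])

eig20 = np.linalg.eigvalsh(M20).min()   # ~0: (20) holds on the boundary
eig21 = np.linalg.eigvalsh(M21).min()   # ~0: (21) holds on the boundary
```

The same eigenvalue tests make a cheap a posteriori check on any (γ, Q, Y) returned by an SDP solver in the genuinely uncertain case L > 1.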

Remark 5. The previous remark establishes that, for the nominal case, the feedback matrix F computed from Theorem 1 is constant, independent of the state of the system. However, in the presence of uncertainty, even without constraints on the control input or plant output, F can show a strong dependence on the state of the system. In such cases, using a receding horizon approach and recomputing F at each sampling time shows significant improvement in performance as opposed to using a static state feedback control law. This, we believe, is one of the key ideas in this paper, and is illustrated with the following simple example. Consider the polytopic system (1), Ω being defined by (2) with

   A_1 = [ 0.0347  0.5194 ]      A_2 = [ 0.0591  0.2641 ]      B = [    ·    ]
         [ 0.3835  0.8310 ],           [ 1.7971  0.8717 ],         [ -1.4462 ].

Figure 2(a) shows the initial state response of a time-varying system in the set Ω, using the receding horizon control law of Theorem 1 (Q_1 = I, R = 1). Also included is the static state-feedback control law from Theorem 1, where the feedback matrix F is not recomputed at each time k. The response with the receding horizon controller is about five times faster. Figure 2(b) shows the norm of F as a function of time for the two schemes, and thus explains the significantly better performance of the receding horizon scheme.

Remark 6. Traditionally, feedback in the form of plant measurement at each sampling time k is interpreted as accounting for model uncertainty and unmeasured disturbances (see Section 2.2). In our robust MPC setting, this feedback can now be reinterpreted as potentially reducing the conservatism in our worst-case MPC synthesis by recomputing F using new plant measurements.

Remark 7. The speed of the closed-loop response can be influenced by specifying a minimum decay rate on the state x (||x(k)|| ≤ c ρ^k ||x(0)||, 0 < ρ < 1) as follows:

   x(k+i+1|k)^T P x(k+i+1|k) ≤ ρ² x(k+i|k)^T P x(k+i|k),   i ≥ 0,   (27)

for any [A(k+i) B(k+i)] ∈ Ω, i ≥ 0. This implies that

   ||x(k+i+1|k)|| ≤ [κ(P)]^{1/2} ρ ||x(k+i|k)||,   i ≥ 0.

Following the steps in the proof of Theorem 1, it can be shown that the requirement (27) reduces to the following LMIs for the two uncertainty descriptions:

for polytopic uncertainty,

   [ ρ²Q             (A_j Q + B_j Y)^T ]
   [ A_j Q + B_j Y    Q                ] ≥ 0,   j = 1, ..., L;   (28)

for structured uncertainty,

   [ ρ²Q               (C_q Q + D_qu Y)^T   (A Q + B Y)^T    ]
   [ C_q Q + D_qu Y     Λ                    0               ] ≥ 0,   (29)
   [ A Q + B Y          0                    Q - B_p Λ B_p^T ]

where Λ > 0 is of the form (26).

Thus an additional tuning parameter ρ ∈ (0, 1) is introduced in the MPC algorithm to influence the speed of the closed-loop response. Note that with ρ = 1, the above two LMIs are trivially satisfied if (21) and (25) are satisfied.

3.2. Robust constrained IH-MPC

In the previous section, we formulated the robust MPC problem without input and output constraints, and derived an upper bound on the robust performance objective. In this section, we show how input and output constraints can be
time

(b)
Fig. 2. (a) Clnconstraincd closed-loop responses and (b) norm ot the teedback matrix F: solid lines. using receding horizon state
feedback: dashed lines. using robust static state feedback.
Robust constrained MPC using LMI 1369

incorporated as LMI constraints in the robust MPC problem. As a first step, we need to establish the following lemma, which will also be required to prove robust stability.

Lemma 1 (Invariant ellipsoid). Consider the system (1) with the associated uncertainty set Ω.
(a) Let Ω be a polytope described by (2). At sampling time k, suppose there exist Q > 0, γ and Y = FQ such that (21) holds. Also suppose that u(k + i | k) = Fx(k + i | k), i ≥ 0. Then if

x(k | k)^T Q^{-1} x(k | k) ≤ 1

(or, equivalently, x(k | k)^T P x(k | k) ≤ γ with P = γQ^{-1}), then

max_{[A(k+j) B(k+j)]∈Ω, j≥0} x(k + i | k)^T Q^{-1} x(k + i | k) < 1,   i ≥ 1,    (30)

or, equivalently,

max_{[A(k+j) B(k+j)]∈Ω, j≥0} x(k + i | k)^T P x(k + i | k) < γ,   i ≥ 1.    (31)

Thus ℰ = {z | z^T Q^{-1} z ≤ 1} = {z | z^T P z ≤ γ} is an invariant ellipsoid for the predicted states of the uncertain system (see Fig. 3).
(b) Let Ω be described by (6) in terms of a structured Δ block as in (4). At sampling time k, suppose there exist Q > 0, γ, Y = FQ and Λ > 0 such that (25) and (26) hold. If u(k + i | k) = Fx(k + i | k), i ≥ 0, then the result in (a) holds as well for this case.

Remark 8. The maximization in (30) and (31) is over the set Ω of time-varying models that can be used for prediction of the future states of the system. This maximization leads to the 'worst-case' value of x(k + i | k)^T Q^{-1} x(k + i | k) (or, equivalently, x(k + i | k)^T P x(k + i | k)) at every instant of time k + i, i ≥ 1.

Proof. See Appendix B.

3.2.1. Input constraints. Physical limitations inherent in process equipment invariably impose hard constraints on the manipulated variable u(k). In this section, we show how limits on the control signal can be incorporated into our robust MPC algorithm as sufficient LMI constraints. The basic idea of the discussion that follows can be found in Boyd et al. (1994) in the context of continuous-time systems. We present it here to clarify its application in our (discrete-time) robust MPC setting and also for completeness of exposition. We shall assume for the rest of this section that the postulates of Lemma 1 are satisfied, so that ℰ is an invariant ellipsoid for the predicted states of the uncertain system.

At sampling time k, consider the Euclidean norm constraint (9):

‖u(k + i | k)‖₂ ≤ u_max,   i ≥ 0.

The constraint is imposed on the present and the entire horizon of future manipulated variables, although only the first control move u(k | k) = u(k) is implemented. Following Boyd et al. (1994), we have

max_{i≥0} ‖u(k + i | k)‖₂² = max_{i≥0} ‖YQ^{-1}x(k + i | k)‖₂²
                          ≤ max_{z∈ℰ} ‖YQ^{-1}z‖₂²
                          = λ_max(Q^{-1/2}Y^T YQ^{-1/2}).

Using (13), we see that ‖u(k + i | k)‖₂ ≤ u_max, i ≥ 0, if

[ u_max²I   Y ]
[ Y^T       Q ] ≥ 0.    (32)

This is an LMI in Y and Q. Similarly, let us consider peak bounds on each component of u(k + i | k) at sampling time k, (10):

|u_j(k + i | k)| ≤ u_{j,max},   i ≥ 0,   j = 1, 2, …, n_u.

Now

max_{i≥0} |u_j(k + i | k)|² = max_{i≥0} |(YQ^{-1}x(k + i | k))_j|²
                           ≤ max_{z∈ℰ} |(YQ^{-1}z)_j|²
                           ≤ ‖(YQ^{-1/2})_j‖₂²   (using the Cauchy–Schwarz inequality)
                           = (YQ^{-1}Y^T)_{jj}.

Thus the existence of a symmetric matrix X such that

[ X     Y ]
[ Y^T   Q ] ≥ 0,   with X_{jj} ≤ u_{j,max}²,   j = 1, 2, …, n_u,    (33)

guarantees that |u_j(k + i | k)| ≤ u_{j,max}, i ≥ 0, j = 1, 2, …, n_u. These are LMIs in X, Y and Q.

Fig. 3. Graphical representation of the state-invariant ellipsoid ℰ in two dimensions.
1370 M. V. Kothare et ~1.

Note that (33) is a slight generalization of the result derived in Boyd et al. (1994).

Remark 9. The inequalities (32) and (33) represent sufficient LMI constraints that guarantee that the specified constraints on the manipulated variables are satisfied. In practice, these constraints have been found to be not too conservative, at least in the nominal case.

3.2.2. Output constraints. Performance specifications impose constraints on the process output y(k). As in Section 3.2.1, we derive sufficient LMI constraints for both the uncertainty descriptions (see (2) and (3), (4)) that guarantee that the output constraints are satisfied.
At sampling time k, consider the Euclidean norm constraint (11):

max_{[A(k+j) B(k+j)]∈Ω, j≥0} ‖y(k + i | k)‖₂ ≤ y_max,   i ≥ 1.

As discussed in Section 2.2, this is a worst-case constraint over the set Ω, and is imposed strictly over the future prediction horizon (i ≥ 1).

Polytopic uncertainty. In this case, Ω is given by (2). As shown in Appendix C, if

[ Q                 (A_jQ + B_jY)^T C^T ]
[ C(A_jQ + B_jY)    y_max²I             ] ≥ 0,   j = 1, 2, …, L,    (34)

then

max ‖y(k + i | k)‖₂ ≤ y_max,   i ≥ 1.

The condition (34) represents a set of LMIs in Y and Q > 0.

Structured uncertainty. In this case, Ω is described by (3) and (4) in terms of a structured Δ block. As shown in Appendix C, if

[ Q               (C_qQ + D_quY)^T    (AQ + BY)^T C^T                 ]
[ C_qQ + D_quY    T^{-1}              0                               ]
[ C(AQ + BY)      0                   y_max²I − CB_pT^{-1}B_p^T C^T   ] ≥ 0,    (35)

with T = diag(t₁I, …, t_rI) > 0, then

max ‖y(k + i | k)‖₂ ≤ y_max,   i ≥ 1.

The condition (35) is an LMI in Y, Q > 0 and T^{-1} > 0.

In a similar manner, componentwise peak bounds on the output (see (12)) can be translated to sufficient LMI constraints. The development is identical to the preceding development for the Euclidean norm constraint if we replace C by C_l and T by T_l, l = 1, 2, …, n_y, in (34) and (35), where

y(k) = [y₁(k); ⋮; y_{n_y}(k)] = Cx(k) = [C₁; ⋮; C_{n_y}] x(k).

T_l is in general different for each l = 1, 2, …, n_y. Note that for the case with mixed Δ blocks, we can satisfy the output constraint over the current and future horizon, max_{i≥0} ‖y(k + i | k)‖₂ ≤ y_max, and not over the (strict) future horizon (i ≥ 1) as in (11). The corresponding LMI is derived as follows:

max_{i≥0} ‖y(k + i | k)‖₂² ≤ max_{z∈ℰ} ‖Cz‖₂² = λ_max(Q^{1/2}C^T CQ^{1/2}).

Thus CQC^T ≤ y_max²I guarantees ‖y(k + i | k)‖₂ ≤ y_max, i ≥ 0. For componentwise peak bounds on the output, we replace C by C_l, l = 1, …, n_y.

3.2.3. Robust stability. We are now ready to state the main theorem for robust MPC synthesis with input and output constraints and establish robust stability of the closed loop.

Theorem 2. Let x(k) = x(k | k) be the state of the uncertain system (1) measured at sampling time k.
(a) Suppose the uncertainty set Ω is defined by a polytope as in (2). Then the state feedback matrix F in the control law u(k + i | k) = Fx(k + i | k), i ≥ 0, that minimizes the upper bound V(x(k | k)) on the robust performance objective function at sampling time k and satisfies a set of specified input and output constraints is given by

F = YQ^{-1},

where Q > 0 and Y are obtained from the solution (if it exists) of the following linear objective minimization problem:

min {γ | γ, Q, Y and variables in the LMIs for input and output constraints},

subject to (20), (21), either (32) or (33), depending on the input constraint to be imposed, and (34) with either C or C_l, l = 1, 2, …, n_y, depending on the output constraint to be imposed.
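As with the input constraint, a candidate (Q, Y) can be checked against the output LMI (34) without a solver. A NumPy sketch (hypothetical vertex data, not from the paper): the block matrix test and the Schur-complement bound on the one-step-ahead output over ℰ coincide:

```python
import numpy as np

# Hypothetical vertex [A_1 B_1], output map C, and candidate (Q, Y).
A1 = np.array([[0.5, 0.0], [0.0, 0.3]])
B1 = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)
Y = np.array([[0.0, -0.1]])
y_max = 2.0

# LMI (34) for this vertex:
AQBY = A1 @ Q + B1 @ Y
M34 = np.block([[Q, AQBY.T @ C.T], [C @ AQBY, y_max**2 * np.eye(1)]])
assert np.linalg.eigvalsh(M34).min() >= -1e-12

# By the Schur complement, (34) bounds the predicted output
# ||C (A_1 + B_1 F) z|| over the invariant ellipsoid E = {z : z^T Q^{-1} z <= 1}:
F = Y @ np.linalg.inv(Q)
Qinv = np.linalg.inv(Q)
rng = np.random.default_rng(1)
for _ in range(200):
    v = rng.normal(size=2)
    z = v / np.sqrt(v @ Qinv @ v)      # boundary point of E
    assert np.linalg.norm(C @ (A1 + B1 @ F) @ z) <= y_max + 1e-9
```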

(b) Suppose the uncertainty set n is defined it must also satisfy this inequality, i.e.
by (6) in terms of a structured perturbation A as
x(k + 1 1k + l)TQ-‘x(k + 1 1k + 1) < 1,
in (4). In this case, F is given by
or
F = YQ-‘,

1
1 x(k+l[k+l)T >.
where, Q ~0 and Y are obtained from the
solution (if it exists) of the following linear [ x(k + 1 1k + 1) Q
objective minimization problem: (using 13).
min {y ( y, Q, Y, A and variables in the Thus the feasible solution of the optimization
LMIs for input and output constraints} problem at time k is also feasible at time k + 1.
Hence the optimization is feasible at time k + 1.
subject to (24) (29, (26), either (32) or (33)
This argument can be continued for times
depending on the input constraint to be imposed,
k + 2, k + 3, . . . to complete the proof. 0
and (35) with either C and T, or C, and T,,
I= 1,2, . . . ) n,, depending on the output con-
Theorem 3. (Robust stability). The feasible
straint to be imposed. receding horizon state feedback control law
obtained from Theorem 2 robustly asymptoti-
Proof. From Lemma 1, we know that (21) and
cally stabilizes the closed-loop system.
(24), (25) imply respectively for the polytopic
and structured uncertainties that 8 is an Proof. In what follows, we shall refer to the
invariant ellipsoid for the predicted states of the uncertainty set as R, since the proof is identical
uncertain system (1). Hence the arguments in for the two uncertainty descriptions.
Section 3.2.1 and 3.2.2 used to translate the input To prove asymptotic stability, we shall
and output constraints to sufficient LMI establish that V(x(k ) k)) = x(k I k)TP,x(k I k),
constraints hold true. The rest of the proof is where Pk >O is obtained from the optimal
similar to that of Theorem 1. q
solution at time k, is a strictly decreasing
Lyapunov function for the closed-loop.
In order to prove robust stability of the closed
First, let us assume that the optimization in
loop, we need to establish the following lemma.
Theorem 2 is feasible at time k = 0. Lemma 2
then ensures feasibility of the problem at all
Lemma 2. (Feasibility). Any feasible solution of
times k >O. The optimization being convex
the optimization in Theorem 2 at time k is also
therefore has a unique minimum and a
feasible for all times t > k. Thus if the
corresponding optimal solution (y, Q, Y) at each
optimization problem in Theorem 2 is feasible at
time k 2 0.
time k then it is feasible for all times t > k.
Next, we note from Lemma 2 that y, Q >O, Y
Proof Let us assume that the optimization
(or, equivalently, y, F = YQ-‘, P = rQ_’ > 0)
problem in Theorem 2 is feasible at sampling obtained from the optimal solution at time k are
time k. The only LMI in the problem that feasible (of course, not necessarily optimal) at
depends explicitly on the measured state time k + 1. Denoting the values of P obtained
from the optimal solutions at time k and k + 1
x(k 1k) = x(k) of the system is the following:
respectively by Pk and Pk+, (see Remark 3), we

1’
1 x(kIVT ,. must have
[ x(k Ik) Q - x(k + 1 1k + l)TPk+,x(k + 1 1k + 1)
Thus, to prove the lemma, we need only prove ~x(k + 1 1k + l)TPkx(k + 1 1k + 1). (36)
that this LMI is feasible for all future measured
states x(k + i 1k + i) = x(k + i), i 2 1. This is because Pk+, is optimal, whereas Pk is
Now, feasibility of the problem at time k only feasible at time k + 1.
implies satisfaction of (21) and (24), (29, which, And lastly, we know from Lemma 1 that if
using Lemma 1, in turn imply respectively for u(k + i I k) = F,x(k + i I k), i 20 (Fk is obtained
the two uncertainty descriptions that (30) is from the optimal solution at time k), then for
satisfied. Thus, for any [A(k + i) B(k + i)] E n, any [A(k) B(k)] E a, we must have
i 2 0 (where fi is the corresponding uncertainty x(k + 11 k)TP,x(k + 11 k)
set), we must have <x(k 1k)TP,x(k ( k) x(k 1k) ZO (37)
x(k+iIk)TQ-‘x(k+iIk)<l, irl.
(see (49) with i = 0).
Since the state measured at k + 1, that is, Since the measured state x(k + 11 k +l) =
x(k+lIk+l)=x(k+l), equals [A(k) + x(k + 1) equals (A(k) + B(k)F,)x(k 1k) for
B(k)F]x(k I k) for some [A(k) B(k)] E Sz, some [A(k) B(k)] E S& it must also satisfy the

inequality (37). Combining this with the required to track the target vector y, by moving
inequality (36), we conclude that the system to the set-point x,,, u,<, where

x(k + I 1k + i jVk,,_+k+ I / k + 1) X, =Ax, +Bu,, v = cx,.


_,

<x(k 1k)-‘P&k 1k) .r(k / k) f 0. We assume that x,, U, and y, are feasible, i.e.
they satisfy the imposed constraints. The choice
Thus x(k 1k j’P,x(k 1k) is a strictly decreasing
of J,(k) for the robust set-point tracking
Lyapunov function for the closed loop. which is
objective in the optimization (15) is
bounded below by a positive-definite function of
x(k 1k) (see (17)). We therefore conclude that
J,(k) = c {[Cx(k + i ( k) -- Cx,]’
x(k)-+0 as k-+x. n
, 0
X Q,[Cx(k + i I k) - Cx,]
Remark 10. The proof of Theorem I (the
unconstrained case) is identical to the preceding + [u(k + i I k) - u,]‘R[u(k + i / k) - u,]},
proof if we recognize that Theorem 1 is only a Q, >C), R > 0. (3X)
special case of Theorem 2 without the LMIs
corresponding to input and output constraints. As discussed in Kwakernaak and Sivan (1972),
we can define a shifted state T(k) =x(k) -x,. a
shifted input ii(k) = u(k) - u, and a shifted
4. EXTENSIONS output y(k) = y(k) - y, to reduce the problem to
The presentation up to this point has been the standard form as in Section 3. Com-
restricted to the infinite horizon regulator with a ponentwise peak bounds on the control signal 11
zero target. In this section, we extend the can be translated to constraints on 6 as follows:
preceding development to several standard
problems encountered in practice.
e -ll,.m;,x - u,,, 5 fi, 5 U,.max - ll,,,
4.1. Reference trajectory tracking C’onstraints on the transient deviation of y(k)
In optimal tracking problems, the system from the steady-state value _v,, i.e. y(k), can be
output is required to track a reference trajectory incorporated in a similar manner.
y,(k) = C,x,(k), where the reference states X, are
computed from the equation 4.3. Disturbance rejection

x,(k + 1) = A,x,(k), x,(O) = .Y,,,. In all practical applications, some disturbance


invariably enters the system and hence it is
The choice of J,(k) for the robust trajectory meaningful to study its effect on the closed-loop
tracking objective in the optimization (15) is response. Let an unknown disturbance e(k),
I having the property lim, ., e(k) = 0 enter the
J,(k) = c {[Cx(k + i 1k) --- C,x,(k + i)]’ system (1) as follows:
,=,I
x(k + 1) = A(k)x(k) + B(k)u(k) + e(k),
X Q,[Cx(k + i / k) - C,.r,.(k + i)]
y(k) = Cx(k), (39)
+ u(k + i / k)‘Ru(k + i I k)}.

Q, > 0, R > 0.
IA(k) B(k)1E Q.
A simple example of such a disturbance is any
As discussed in Kwakernaak and Sivan (1972). signal (C,“:,, e(i)‘e(i) < r).
energy-bounded
the plant dynamics can be augmented by the
Assuming that the state of the system x(k) is
reference trajectory dynamics to reduce the
measurable, we would like to solve the
robust trajectory tracking problem (with input
optimization problem (1.5). We shall assume that
and output constraints) to the standard form as the predicted states of the system satisfy the
in Section 3. Owing to space limitations. we shall equation
omit these ideals.
x(k + i + 1 1k) = A(k + i)x(k + i 1k)
4.2. Constant set-point tracking + B(k + i)u(k + i 1k), (40)
For uncertain linear time-invariant systems.
IA(k + i) B(k + i)] E Q.
the desired equilibrium state may be a constant
point x,, II, (called the set-point) in state-space. As in Section 3, we can derive an upper bound
different from the origin. Consider (1). which we on the robust performance objective (15). The
shall now assume to represent an uncertain LTI problem of minimizing this upper bound with a
system, i.e. [A B] E R are constant unknown state-feedback control law u(k + i 1k) = Fx(k +
matrices. Suppose that the system output y is i / k). i > 0, at the same time satisfying

constraints on the control input and plant which is assumed to be measurable at each time
output, can then be reduced to a linear objective k 1 z, we can derive an upper bound on the
minimization as in Theorem 2. The following robust performance objective (42) as in Section
theorem establishes stability of the closed loop 3. The problem of minimizing this upper bound
for the system (39) with this receding horizon with the state-feedback control law u(k + i -
control law, in the presence of the disturbance z I k) = Fx(k + i - z I k), k 2 z, i 2 0, subject to
e(k). constraints on the control input and plant
output, can then be reduced to a linear objective
Theorem 4. Let x(k) = x(k 1k) be the state of minimization as in Theorem 2. These details can
the system (39) measured at sampling time k and be worked out in a straightforward manner, and
let the predicted states of the system satisfy (40). will be omitted here. Note, however, that the
Then, assuming feasibility at each sampling time appropriate choice of the function V(w(k))
k 2 0, the receding horizon state feedback satisfying an inequality of the form (16) is
control law obtained from Theorem 2 robustly
asymptotically stabilizes the system (39) in the
V(w(k)) = x(k)TP&k) + i x(k - i)TP,x(k - i)
presence of any asymptotically vanishing distur- i=l
bance e(k).
+ 2 x(k - i)TPT,x(k - i)
Proof It is easy to show that for sufficiently i=r+l

large time k > 0, V(x(k 1k)) = x(k 1k)TPx(k 1k), G#

i=c 1WTPr&)
where P > 0 is obtained from the optimal + . . .
+
solution at time k, is a strictly decreasing r,,, _ 1 +

Lyapunov function for the closed loop. Owing to = w(k)TPw(k),


lack of space, we shall skip these details. q
where P is appropriately defined in terms of PO,
4.4. Systems with delays P,, P,, . . f 3 Pro,. The motivation for this modified
Consider the following uncertain discrete-time choice of V comes from Feron et al. (1992),
linear time-varying system with delay elements, where such a V is defined for continuous time
described by the equations systems with delays, and is referred to as a
modified Lyapunov-Krasovskii (MLK)
x(k + 1) = A,(k)x(k) + 2 Ai(k)x(k - Zi) functional.
i=l
+ B(k)u(k - z), (41)
5. NUMERICAL EXAMPLES
y(k) = Cx(k)
In this section, we present two examples that
with
illustrate the implementation of the proposed
[A,(k) A,(k) . . . An(k) B(k)1E Q. robust MPC algorithm. The examples also serve
to highlight some of the theoretical results in the
We shall assume, without loss of generality, that
paper. For both these examples, the software
the delays in the system satisfy 0 < r < 71 < . . . <
LMI Control Toolbox (Gahinet et al., 1995) in
z,,,. At sampling time k 2 2, we would like to
the MATLAB environment was used to
design a state-feedback control law u(k + i -
compute the solution of the linear objective
z I k) = Fx(k + i - z I k), i 2 0, to minimize the
minimization problem. No attempt was made to
following modified infinite horizon robust per-
optimize the computation time. Also, it should
formance objective
be noted that the times required for computation
max J&h (42) of the closed-loop responses, as indicated at the
[A(k+i)E(k+i)]sR.iZO
end of each example, only reflect the state-of-
where the-art of LMI solvers. While these solvers are
significantly faster than classical convex op-
J,(k) = 2 [x(k + i ) k)TQ,x(k + i I k) timization algorithms, research in LMI optimiza-
i=o tion is still very active and substantial speed-ups
+u(k+i-zIk)TRu(k+i-z(k)], can be expected in the future.

subject to input and output constraints. Defining


an augmented state 5.1. Example 1
The first example is a classical angular
w(k) = [x(k)T x(k - 1)’ ... x(k - z)T positioning system adapted from Kwakernaak
... x(k - z,)’ ... x(k - z,)‘]‘, and Sivan (1972). The system (see Fig. 4)

I
m
I
I
Target object

,
Goal: 0 = 8,
I
with IS(k)1 2 1, k ~0. The uncertainty
be described as in (3) with
can then

11
Antenna
H
1:= Given an initially disturbed state x(k), the
4
robust II-I-MPC optimization to be solved at
each time k is

Fig. 4. Angular positioning system


X {J,(k) = i: [y(k + i / k)’ + Ku(k + i 1k)‘j,
I I -,I

consists of a rotating antenna at the origin of the K = 0.00002,


plane, driven by an electric motor. The control
subject to lu(k + i 1k)ls 2 V, i 2 0.
problem is to use the input voltage to the motor
No existing MPC synthesis technique can
(U V) to rotate the antenna so that it always
address this robust synthesis problem. If the
points in the direction of a moving object in the
problem is formulated without explicitly taking
plane. We assume that the angular positions 01
into account plant uncertainty, the output
the antenna and the moving object (0 and 8, rad
response could be unstable. Figure 5(a) shows
respectively) and the angular velocity of the
the closed-loop response of the system corres-
antenna (brads-‘) are measurable. The motion
ponding to a(k) = 9 s ‘, given an initial state ot

1
of the antenna can be described by the following
0.05
discrete-time equations obtained from their s(0) = The control law is generated by
continuous-time counterparts by discretization, / 0
using a sampling time of 0.1 s and Euler’s minimizing a rzomird unconstrained intinitc
first-order approximation for the derivative horizon objective function using a nomind
model corresponding to a(k) = a,,,,,,,= 1 s ‘. The
response is unstable. Note that the optimization
is feasible at each time k 2 0, and hence the
1 controller cannot diagnose the unstable response
O.’ lx(k) + [ O,:KllG) via infeasibility. even though the horizon is
= 0 1 -O.la(k)
infinite (see Rawlings and Muske, 1993). This is
2 A(k)x(k) + Bu(k), not surprising, and shows that the prevalent
y(k) = [l 01x(k) b CL(k), notion that ‘feedback in the form of plant
measurements at each time step k is expected to
K = 0.787 rad-’ V ’ SC’, 0.1 s ’ 5 a(k) 5 10 s ‘.
compensate for unmeasured disturbances and
The parameter a(k) is proportional to the
model uncertainty’ is only an ad hoc fix in MPC
coefficient of viscous friction in the rotating parts
for model uncertainty without any guarantee of
of the antenna and is assumed to be arbitrarily
robust stability. Figure 5(b) shows the response
time-varying in the indicated range of variation.
using the control law derived from Theorem 1.
Since 0.1 5 a(k) 4 10, we conclude that A(k) t
Notice that the response is stable and the
R = Co {A,, A2}, where
performance is very good. Figure 6(a) shows the
closed-loop response of the system when a(k) is
randomly time-varying between 0.1 and 10s ‘_
The corresponding control signal is given in Fig.
Thus the uncertainty set R is a polytope, as in
6(b). A control constraint of IIl( ~2 V is
(2). Alternatively, if we define
imposed. The control law is synthesized accord-
a(k) - 5.05 ing to Theorem 2. WC see that the control signal
6(k) =
4.95 ’ stays close to the constraint boundary up to time
k = 3 s, thus shedding light on Remark 9. Also
included in Fig. 6 are the response and control
signal using a static state-feedback control law.
where the feedback matrix F computed from
Theorem 2 at time k = 0 is kept constant for all
times k > 0, i.e. it is not recomputed at each time
c,, = [O 4.951, /!I<,,,= 0
k. The response is about four times slower than
then 6(k) is time-varying and norm-bounded the response with the receding horizon statc-
Robust constrained MPC using LMI 1375

L I I’
--cl 1 2 3 4 -O-3 1 P 3

time (set) time (set)

(4 (b)
Fig. 5. Unconstrained closed-loop responses for nominal plant (a(k) - 9 s-l): (a) using nominal MPC with a(k) = 1 s-l: (b)
using robust LMI-based MPC.

feedback control law. This sluggishness can be 5.2. Example 2


unserstood if we consider Fig. 7, which shows the The second example is adapted from Problem
norm of F as a function of time for the receding 4 of the benchmark problems described in Wie
horizon controller and for the static state- and Bernstein (1992). The system consists of a
feedback controller. To meet the constraint two-mass-spring system as shown in Fig. 8.
[u(k)1 = IFx(k)l~2 for small k, F must be Using Euler’s first-order approximation for the
‘small’, since x(k) is large for small k. But as derivative and a sampling time of 0.1 s, the
x(k) approaches 0, F can be made larger while following discrete-time state-space equations are

x,(k
+1)
still meeting the input constraint. This ‘optimal’ obtained by discretizing the continuous-time

x,(k
+1)
use of the control constraint is possible only if F equations of the system (see Wie and Bernstein,

x4k
+1)
1)
is recomputed at each time k, as in the receding 1992)

x&
+1
horizon controller. The static state-feedback
controller does not recompute F at each time
k 20, and hence shows a sluggish (though
stable) response.
For the computations, the solution at time k [

1
was used as an initial guess for solving the
optimization at time k + 1. The total times 1 0 0.1 0

x1
@)
required to compute the closed-loop responses in 0 1 0 0.1
=
Fig. 5(b) (40 samples) and Fig. 6 (100 samples) -O.lKlm, O.lK/m, 1 0
were about 27 and 77 s respectively (or,
O.lK/m, -O.lK/m, 0 1

I[ 1
equivalently, 0.68 and 0.77 s per sample), on a
SUN SPARCstation 20, using MATLAB code. 0
The actual CPU times were about 18 and 52 s x x2(k) + 0
(i.e., 0.45 and 0.52 s per sample) respectively. In u(k),

__-
x3(k) 0.1/m,
both cases, nearly 95% of the time was required
[ x&) 0
to solve the LMI optimization at each sampling
time. y(k) =X2(k).

0.6,

‘.i’
0

/AC
5c *-
2 -0.s _’

P ,’
s
-, ,‘
:
I’
--1.s ’
,*’
3’
-*.a s II D

time (SIX)

Fig. 6. Closed-loop responses for the time-varying system with input constraint: solid lines, using robust receding horizon state
feedback: dashed lines, using robust static state feedback.
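The spring-constant uncertainty in this model can be written in the structured form (3). A NumPy sanity check (not part of the original paper) that A(K) = A(K_nom) + B_p δ C_q, with K_nom = (K_min + K_max)/2 and K_dev = (K_max − K_min)/2, reproduces the model exactly for every admissible K:

```python
import numpy as np

# Two-mass-spring model of Example 2 (m1 = m2 = 1, 0.1 s sampling).
def A_of(K):
    return np.array([[1.0, 0.0, 0.1, 0.0],
                     [0.0, 1.0, 0.0, 0.1],
                     [-0.1 * K, 0.1 * K, 1.0, 0.0],
                     [0.1 * K, -0.1 * K, 0.0, 1.0]])

K_min, K_max = 0.5, 10.0
K_nom = (K_min + K_max) / 2            # 5.25
K_dev = (K_max - K_min) / 2            # 4.75
Bp = np.array([[0.0], [0.0], [-0.1], [0.1]])
Cq = np.array([[K_dev, -K_dev, 0.0, 0.0]])

# A(K) = A(K_nom) + Bp * delta * Cq, with delta = (K - K_nom)/K_dev in [-1, 1]:
for K in (K_min, 3.7, K_max):
    delta = (K - K_nom) / K_dev
    assert abs(delta) <= 1.0
    assert np.allclose(A_of(K), A_of(K_nom) + Bp * delta @ Cq)
```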

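For the unit-step tracking problem treated in this example, the set-point shift of Section 4.2 applies directly: with both bodies at position 1 the spring is unstretched, so zero input sustains the equilibrium for every value of the uncertain spring constant. A small NumPy check (not from the paper):

```python
import numpy as np

# Spring-mass model again (m1 = m2 = 1); u acts on body 1, y = x2.
def A_of(K):
    return np.array([[1.0, 0.0, 0.1, 0.0],
                     [0.0, 1.0, 0.0, 0.1],
                     [-0.1 * K, 0.1 * K, 1.0, 0.0],
                     [0.1 * K, -0.1 * K, 0.0, 1.0]])

B = np.array([[0.0], [0.0], [0.1], [0.0]])
C = np.array([[0.0, 1.0, 0.0, 0.0]])

# Candidate steady state for unit-step tracking of y: both bodies at 1,
# zero velocities, zero input.
x_s = np.array([[1.0], [1.0], [0.0], [0.0]])
u_s = 0.0

for K in (0.5, 1.0, 10.0):
    assert np.allclose(A_of(K) @ x_s + B * u_s, x_s)   # x_s = A x_s + B u_s
assert np.allclose(C @ x_s, 1.0)                       # y_t = C x_s = 1

# In the shifted coordinates of Section 4.2, with u_s = 0 the bound |u| <= 1
# carries over unchanged to the shifted input u_bar = u - u_s.
```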
Fig. 7. Norm of the feedback matrix F as a function of time: solid lines, using robust receding horizon state feedback; dashed lines, using robust static state feedback.

Fig. 8. Coupled spring-mass system.

Here, x₁ and x₂ are the positions of bodies 1 and 2, and x₃ and x₄ are their velocities respectively; m₁ and m₂ are the masses of the two bodies and K is the spring constant. For the nominal system, m₁ = m₂ = K = 1 with appropriate units. The control force u acts on m₁. The performance specifications are defined in Problem 4 of Wie and Bernstein (1992) as follows. Design a feedback/feedforward controller for a unit-step output command tracking problem for the output y with the following properties.
1. A control input constraint of |u| ≤ 1 must be satisfied.
2. Settling time and overshoot are to be minimized.
3. Performance and stability robustness with respect to m₁, m₂ and K are to be maximized.
We shall assume for this problem that exact measurement of the state of the system, that is, [x₁ x₂ x₃ x₄]^T, is available. We shall also assume that the masses m₁ and m₂ are constant, equal to 1, and that K is an uncertain constant in the range K_min ≤ K ≤ K_max. The uncertainty in K is modeled as in (3) by defining

δ = (K − K_nom)/K_dev,

A = [ 1           0           0.1   0   ]
    [ 0           1           0     0.1 ]
    [ −0.1K_nom   0.1K_nom    1     0   ]
    [ 0.1K_nom    −0.1K_nom   0     1   ],

B_p = [ 0; 0; −0.1; 0.1 ],   C_q = [ K_dev   −K_dev   0   0 ],   D_qu = 0,

where K_nom = ½(K_min + K_max) and K_dev = ½(K_max − K_min).

For unit-step output tracking of y, we must have at steady state x₁ₛ = x₂ₛ = 1, x₃ₛ = x₄ₛ = 0 and uₛ = 0. As in Section 4.2, we can shift the origin to the steady state. The problem we would like to solve at each sampling time k is

min_{u(k+i|k)} max_{[A B]∈Ω} J_∞(k),

subject to |u(k + i | k)| ≤ 1, i ≥ 0. Here J_∞(k) is given by (38) with Q₁ = I and R = 1. Figure 9 shows the output and control signal as functions of time, as the spring constant K (assumed to be constant but unknown) is varied between K_min = 0.5 and K_max = 10. The control law is synthesized using Theorem 2. An input constraint of |u| ≤ 1 is imposed. The output tracks the set-point to within 10% in about 25 s for all values of K. Also, the worst-case overshoot (corresponding to K = K_min = 0.5) is about 0.2. It was found that asymptotic tracking is achievable in a range as large as 0.01 ≤ K ≤ 100. The response in that case was, as expected, much more sluggish than that in Fig. 9.
The total time required to compute the closed-loop response in Fig. 9 (500 samples) for each fixed value of the spring constant K was about 438 s (about 0.87 s per sample) on a SUN SPARCstation 20, using MATLAB code. The CPU time was about 330 s (about 0.66 s per sample). Of these times, nearly 94% was required to solve the LMI optimization at each sampling time.

Fig. 9. Position of body 2 and the control signal as functions of time for varying values of the spring constant.

6. CONCLUSIONS

Model predictive control (MPC) has gained wide acceptance as a control technique in the process industries. From a theoretical standpoint, the stability properties of nominal MPC have been studied in great detail in the past seven or eight years. Similarly, the analysis of robustness properties of MPC has also received significant attention in the MPC literature. However, robust synthesis for MPC has been addressed only in a restrictive sense for uncertain FIR models. In this article, we have described a new theory for robust MPC synthesis for two classes of very general and commonly encountered uncertainty descriptions. The development is based on the assumption of full state feedback. The on-line optimization involves solution of an LMI-based linear objective minimization. The resulting time-varying state-feedback control law minimizes, at each time step, an upper bound on the robust performance objective, subject to input and output constraints. Several extensions such as constant set-point tracking, reference trajectory tracking, disturbance rejection and application to delay systems complete the theoretical development. Two examples serve to illustrate application of the control technique.

Acknowledgements. Partial financial support from the US National Science Foundation is gratefully acknowledged. We would like to thank Pascal Gahinet for providing an initial version of the LMI-Lab software.

REFERENCES

Alizadeh, F., J.-P. A. Haeberly and M. L. Overton (1994). A new primal-dual interior-point method for semidefinite programming. In Proc. 5th SIAM Conf. on Applied Linear Algebra, Snowbird, UT, June 1994.
Allwright, J. C. and G. C. Papavasiliou (1992). On linear programming and robust model-predictive control using impulse-responses. Syst. Control Lett., 18, 159-164.
Bernussou, J., P. L. D. Peres and J. C. Geromel (1989). A linear programming oriented procedure for quadratic stabilization of uncertain systems. Syst. Control Lett., 13, 65-72.
Bitmead, R. R., M. Gevers and V. Wertz (1990). Adaptive Optimal Control. Prentice-Hall, Englewood Cliffs, NJ.
Boyd, S. and L. El Ghaoui (1993). Method of centers for minimizing generalized eigenvalues. Lin. Algebra Applics, 188, 63-111.
Boyd, S., L. El Ghaoui, E. Feron and V. Balakrishnan (1994). Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia.
Campo, P. J. and M. Morari (1986). ∞-norm formulation of model predictive control problems. In Proc. American Control Conf., Seattle, WA, pp. 339-343.
Campo, P. J. and M. Morari (1987). Robust model predictive control. In Proc. American Control Conf., Minneapolis, MN, pp. 1021-1026.
Clarke, D. W. and C. Mohtadi (1989). Properties of generalized predictive control. Automatica, 25, 859-875.
Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Generalized predictive control. Part II: Extensions and interpretations. Automatica, 23, 149-160.
Feron, E., V. Balakrishnan and S. Boyd (1992). Design of stabilizing state feedback for delay systems via convex optimization. In Proc. 31st IEEE Conf. on Decision and Control, Tucson, AZ, Vol. 1, pp. 147-148.
Gahinet, P., A. Nemirovski, A. J. Laub and M. Chilali (1995). LMI Control Toolbox: For Use with MATLAB. The MathWorks, Inc., Natick, MA.
Garcia, C. E. and M. Morari (1982). Internal model control 1. A unifying review and some new results. Ind. Engng Chem. Process Des. Dev., 21, 308-323.
Garcia, C. E. and M. Morari (1985a). Internal model control 2. Design procedure for multivariable systems. Ind. Engng Chem. Process Des. Dev., 24, 472-484.
Garcia, C. E. and M. Morari (1985b). Internal model control 3. Multivariable control law computation and tuning guidelines. Ind. Engng Chem. Process Des. Dev., 24, 484-494.
Garcia, C. E., D. M. Prett and M. Morari (1989). Model predictive control: theory and practice. A survey. Automatica, 25, 335-348.
Genceli, H. and M. Nikolaou (1993). Robust stability analysis of constrained l1-norm model predictive control. AIChE J., 39, 1954-1965.
Geromel, J. C., P. L. D. Peres and J. Bernussou (1991). On a convex parameter space method for linear control design of uncertain systems. SIAM J. Control Optim., 29, 381-402.
Kwakernaak, H. and R. Sivan (1972). Linear Optimal Control Systems. Wiley-Interscience, New York.
Liu, R. W. (1968). Convergent systems. IEEE Trans. Autom. Control, AC-13, 384-391.
Muske, K. R. and J. B. Rawlings (1993). Model predictive control with linear models. AIChE J., 39, 262-287.
Nesterov, Yu. and A. Nemirovsky (1994). Interior-point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia.
Packard, A. and J. Doyle (1993). The complex structured singular value. Automatica, 29, 71-109.
Polak, E. and T. H. Yang (1993a). Moving horizon control of linear systems with input saturation and plant uncertainty. Part 1: Robustness. Int. J. Control, 58, 613-638.
Polak, E. and T. H. Yang (1993b). Moving horizon control of linear systems with input saturation and plant uncertainty. Part 2: Disturbance rejection and tracking. Int. J. Control, 58, 639-663.
Rawlings, J. B. and K. R. Muske (1993). The stability of constrained receding horizon control. IEEE Trans. Autom. Control, AC-38, 1512-1516.
Tsirukis, A. G. and M. Morari (1992). Controller design with actuator constraints. In Proc. 31st IEEE Conf. on Decision and Control, Tucson, AZ, pp. 2623-2628.
Vandenberghe, L. and S. Boyd (1995). A primal-dual potential reduction method for problems involving matrix inequalities. Math. Program., 69, 205-236.
1378 M. V. Kothare et al.

APPENDIX A-PROOF OF THEOREM 1

Minimization of V(x(k|k)) = x(k|k)^T P x(k|k), P > 0, is equivalent to

    min_{γ,P} γ

subject to

    x(k|k)^T P x(k|k) ≤ γ.

Defining Q = γP^{-1} > 0 and using (13), this is equivalent to

    min_{γ,Q} γ

subject to

    [ 1         x(k|k)^T ]
    [ x(k|k)    Q        ]  ≥ 0,

which establishes (19), (20), (23) and (24). It remains to prove (18), (21), (22), (25) and (26). We shall prove these by considering (a) and (b) separately.

(a) The quadratic function V is required to satisfy (16). Substituting u(k+i|k) = F x(k+i|k), i ≥ 0, and the state-space equations (1), inequality (16) becomes

    x(k+i|k)^T {[A(k+i) + B(k+i)F]^T P [A(k+i) + B(k+i)F] − P + F^T R F + Q_1} x(k+i|k) ≤ 0.

This is satisfied for all i ≥ 0 if

    [A(k+i) + B(k+i)F]^T P [A(k+i) + B(k+i)F] − P + F^T R F + Q_1 ≤ 0.        (A.1)

Substituting P = γQ^{-1}, Q > 0 and Y = FQ, pre- and post-multiplying by Q (which leaves the inequality unaffected), and using (13), we see that this is equivalent to

    [ Q                     QA(k+i)^T + Y^T B(k+i)^T    Q Q_1^{1/2}    Y^T R^{1/2} ]
    [ A(k+i)Q + B(k+i)Y     Q                           0              0           ]
    [ Q_1^{1/2} Q           0                           γI             0           ]  ≥ 0.    (A.2)
    [ R^{1/2} Y             0                           0              γI          ]

The inequality (A.2) is affine in [A(k+i)  B(k+i)]. Hence it is satisfied for all

    [A(k+i)  B(k+i)] ∈ Ω = Co{[A_1 B_1], [A_2 B_2], ..., [A_L B_L]}

if and only if there exist Q > 0, Y = FQ and γ such that

    [ Q               QA_j^T + Y^T B_j^T    Q Q_1^{1/2}    Y^T R^{1/2} ]
    [ A_j Q + B_j Y   Q                     0              0           ]
    [ Q_1^{1/2} Q     0                     γI             0           ]  ≥ 0,    j = 1, 2, ..., L.
    [ R^{1/2} Y       0                     0              γI          ]

The feedback matrix is then given by F = YQ^{-1}. This establishes (18) and (21).

(b) Let Ω be described by (3) in terms of a structured uncertainty block Δ as in (4). As in (a), we substitute u(k+i|k) = F x(k+i|k), i ≥ 0, and the state-space equations (3) in (16) to get

    [ x(k+i|k) ]^T [ (A+BF)^T P (A+BF) − P + F^T R F + Q_1    (A+BF)^T P B_p ] [ x(k+i|k) ]
    [ p(k+i|k) ]   [ B_p^T P (A+BF)                           B_p^T P B_p    ] [ p(k+i|k) ]  ≤ 0,    (A.3)

    p_j(k+i|k)^T p_j(k+i|k) ≤ x(k+i|k)^T (C_{q,j} + D_{qu,j} F)^T (C_{q,j} + D_{qu,j} F) x(k+i|k),
        j = 1, 2, ..., r.    (A.4)

It is easy to see that (A.3) and (A.4) are satisfied if there exist λ_1, λ_2, ..., λ_r > 0 such that

    [ (A+BF)^T P (A+BF) − P + F^T R F + Q_1 + (C_q + D_qu F)^T Λ̄ (C_q + D_qu F)    (A+BF)^T P B_p   ]
    [ B_p^T P (A+BF)                                                                B_p^T P B_p − Λ̄ ]  ≤ 0,    (A.5)

where

    Λ̄ = diag(λ_1 I, λ_2 I, ..., λ_r I) > 0.    (A.6)

Substituting P = γQ^{-1} with Q > 0, using (13) and after some straightforward manipulations, we see that this is equivalent to the existence of Q > 0, Y = FQ and Λ̄ > 0 such that

    [ Q                 Y^T R^{1/2}    Q Q_1^{1/2}    Q C_q^T + Y^T D_qu^T    QA^T + Y^T B^T          ]
    [ R^{1/2} Y         γI             0              0                       0                       ]
    [ Q_1^{1/2} Q       0              γI             0                       0                       ]  ≥ 0.
    [ C_q Q + D_qu Y    0              0              γ Λ̄^{-1}               0                       ]
    [ AQ + BY           0              0              0                       Q − γ B_p Λ̄^{-1} B_p^T ]

Defining Λ = γΛ̄^{-1} > 0 and Λ_j = γλ_j^{-1} > 0, j = 1, 2, ..., r, then gives (22), (25) and (26), and the proof is complete. □
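The Schur-complement step from (A.1) to (A.2) for a fixed (certainty-equivalent) plant can be sanity-checked numerically. The sketch below, in Python with NumPy, uses a hypothetical discretized double integrator and a hand-picked stabilizing gain F (none of these numbers come from the paper); it solves the Lyapunov equation corresponding to equality in (A.1) by fixed-point iteration and verifies that the assembled block matrix of (A.2) is positive semidefinite.

```python
import numpy as np

# Hypothetical plant data (illustrative only, not from the paper)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
F = np.array([[-5.0, -6.0]])   # hand-picked stabilizing state feedback
Q1 = np.eye(2)                 # state weight Q_1
R = 0.1 * np.eye(1)            # input weight R
Acl = A + B @ F                # closed loop; eigenvalues 0.9 and 0.5 here

# Solve Acl^T P Acl - P + F^T R F + Q1 = 0 (the equality case of (A.1))
# by fixed-point iteration P <- Acl^T P Acl + Qt, valid since Acl is stable.
Qt = F.T @ R @ F + Q1
P = Qt.copy()
for _ in range(600):
    P = Acl.T @ P @ Acl + Qt

gamma = 10.0                   # any gamma > 0 works for this algebraic check
Q = gamma * np.linalg.inv(P)   # Q = gamma P^{-1}
Y = F @ Q                      # Y = F Q

# Assemble the LMI matrix of (A.2); here Q1^{1/2} = I and R^{1/2} = sqrt(0.1) I.
Q1h, Rh = np.eye(2), np.sqrt(0.1) * np.eye(1)
M = np.block([
    [Q,             (A @ Q + B @ Y).T, (Q1h @ Q).T,       (Rh @ Y).T],
    [A @ Q + B @ Y, Q,                 np.zeros((2, 2)),  np.zeros((2, 1))],
    [Q1h @ Q,       np.zeros((2, 2)),  gamma * np.eye(2), np.zeros((2, 1))],
    [Rh @ Y,        np.zeros((1, 2)),  np.zeros((1, 2)),  gamma * np.eye(1)],
])
# Since (A.1) holds with equality, M sits on the boundary of the PSD cone.
min_eig = np.linalg.eigvalsh((M + M.T) / 2).min()
```

Because the Lyapunov solution makes (A.1) an equality, the Schur complement of M vanishes and its smallest eigenvalue is numerically zero, confirming the equivalence for this instance.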

APPENDIX B-PROOF OF LEMMA 1

(a) From the proof of Theorem 1, Part (a), we know that

    (21) ⟺ (A.2) ⟺ (A.1) ⟹ (16).
Robust constrained MPC using LMI 1379

Thus

    x(k+i+1|k)^T P x(k+i+1|k) − x(k+i|k)^T P x(k+i|k)
        ≤ −x(k+i|k)^T Q_1 x(k+i|k) − u(k+i|k)^T R u(k+i|k) < 0,

since Q_1 > 0 and x(k+i|k) ≠ 0. Therefore

    x(k+i+1|k)^T P x(k+i+1|k) < x(k+i|k)^T P x(k+i|k),    i ≥ 0,    x(k+i|k) ≠ 0.    (B.1)

Thus if x(k|k)^T P x(k|k) ≤ γ, then x(k+1|k)^T P x(k+1|k) < γ. This argument can be continued for x(k+2|k), x(k+3|k), ..., and this completes the proof of Part (a).

(b) From the proof of Theorem 1, Part (b), we know that

    (25), (26) ⟺ (A.5), (A.6) ⟹ (A.3), (A.4) ⟺ (16).

Arguments identical to those in Part (a) then establish the result. □
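The monotone decrease (B.1), and hence the invariance of the ellipsoid, can be illustrated numerically for a fixed plant. A minimal sketch in Python/NumPy with hypothetical matrices (chosen for illustration, not taken from the paper): starting on the boundary of the ellipsoid, every predicted state stays inside it because V strictly decreases.

```python
import numpy as np

# Hypothetical closed-loop data (illustrative only, not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
F = np.array([[-5.0, -6.0]])
Acl = A + B @ F                # stable closed loop
Q1, R = np.eye(2), 0.1 * np.eye(1)

# A P satisfying (A.1): here the equality (Lyapunov) solution, by fixed point
Qt = F.T @ R @ F + Q1
P = Qt.copy()
for _ in range(600):
    P = Acl.T @ P @ Acl + Qt

x = np.array([1.0, -0.5])
gamma = x @ P @ x              # x(k|k) lies on the boundary of the ellipsoid
V = [x @ P @ x]
for _ in range(50):            # iterate the predicted closed loop
    x = Acl @ x
    V.append(x @ P @ x)

# (B.1): V strictly decreases along the prediction, so
# x(k+i|k)^T P x(k+i|k) <= gamma for every i >= 0.
decreasing = all(V[i + 1] < V[i] for i in range(len(V) - 1))
```

The sequence V mirrors the chain of inequalities in Part (a): each step loses at least x^T Q_1 x + u^T R u, so no predicted state can leave the ellipsoid.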

APPENDIX C-OUTPUT CONSTRAINTS AS LMIs

As in Section 3.2.1, we shall assume that the postulates of Lemma 1 are satisfied, so that ℰ is an invariant ellipsoid for the predicted states of the uncertain system (1).

Polytopic uncertainty

For any plant [A(k+j)  B(k+j)] ∈ Ω, j ≥ 0, we have

    max_{i≥0} ||y(k+i|k)||_2
        = max_{i≥0} ||C[A(k+i) + B(k+i)F] x(k+i|k)||_2
        ≤ max_{z∈ℰ} ||C[A(k+i) + B(k+i)F] z||_2,    i ≥ 0
        = σ̄{C[A(k+i) + B(k+i)F] Q^{1/2}},    i ≥ 0.

Thus ||y(k+i|k)||_2 ≤ y_max, i ≥ 1, for any [A(k+j)  B(k+j)] ∈ Ω, j ≥ 0, if

    σ̄{C[A(k+i) + B(k+i)F] Q^{1/2}} ≤ y_max,    i ≥ 0,

or

    Q^{1/2} [A(k+i) + B(k+i)F]^T C^T C [A(k+i) + B(k+i)F] Q^{1/2} ≤ y_max^2 I,    i ≥ 0,

which in turn is equivalent to

    [ Q                          [A(k+i)Q + B(k+i)Y]^T C^T ]
    [ C[A(k+i)Q + B(k+i)Y]       y_max^2 I                 ]  ≥ 0,    i ≥ 0

(multiplying on the left and right by Q^{1/2} and using (13)). Since the last inequality is affine in [A(k+i)  B(k+i)], it is satisfied for all

    [A(k+i)  B(k+i)] ∈ Ω = Co{[A_1 B_1], [A_2 B_2], ..., [A_L B_L]}

if and only if

    [ Q                  (A_j Q + B_j Y)^T C^T ]
    [ C(A_j Q + B_j Y)   y_max^2 I             ]  ≥ 0,    j = 1, 2, ..., L.

This establishes (34).

Structured uncertainty

For any admissible Δ(k+i), i ≥ 0, we have

    max_{i≥0} ||y(k+i|k)||_2
        = max_{i≥0} ||C(A+BF) x(k+i|k) + CB_p p(k+i|k)||_2
        ≤ max_{z∈ℰ} ||C(A+BF) z + CB_p p(k+i|k)||_2,    i ≥ 0
        = max_{||z||_2≤1} ||C(A+BF) Q^{1/2} z + CB_p p(k+i|k)||_2,    i ≥ 0.

We want ||C(A+BF) Q^{1/2} z + CB_p p(k+i|k)||_2 ≤ y_max, i ≥ 0, for all p(k+i|k) and z satisfying

    p_j(k+i|k)^T p_j(k+i|k) ≤ z^T Q^{1/2} (C_{q,j} + D_{qu,j} F)^T (C_{q,j} + D_{qu,j} F) Q^{1/2} z,
        j = 1, 2, ..., r,

and z^T z ≤ 1. This is satisfied if there exist t_1, t_2, ..., t_r, t_{r+1} > 0 such that for all z and p(k+i|k),

    [ z        ]^T [ Q^{1/2}(A+BF)^T C^T C(A+BF)Q^{1/2} + Q^{1/2}(C_q + D_qu F)^T T (C_q + D_qu F)Q^{1/2} − t_{r+1} I    Q^{1/2}(A+BF)^T C^T C B_p ] [ z        ]
    [ p(k+i|k) ]   [ B_p^T C^T C (A+BF) Q^{1/2}                                                                          B_p^T C^T C B_p − T      ] [ p(k+i|k) ]
        ≤ y_max^2 − t_{r+1},

where

    T = diag(t_1 I, t_2 I, ..., t_r I) > 0.

Without loss of generality, we can choose t_{r+1} = y_max^2. Then the last inequality is satisfied for all z and p(k+i|k) if

    [ Q^{1/2}(A+BF)^T C^T C(A+BF)Q^{1/2} + Q^{1/2}(C_q + D_qu F)^T T (C_q + D_qu F)Q^{1/2} − y_max^2 I    Q^{1/2}(A+BF)^T C^T C B_p ]
    [ B_p^T C^T C(A+BF)Q^{1/2}                                                                            B_p^T C^T C B_p − T      ]  ≤ 0,

or, equivalently,

    [ y_max^2 Q         (C_q Q + D_qu Y)^T    (AQ + BY)^T C^T            ]
    [ C_q Q + D_qu Y    T^{-1}                0                          ]  ≥ 0
    [ C(AQ + BY)        0                     I − C B_p T^{-1} B_p^T C^T ]

(using (13) and after some simplification). This establishes (35).
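The scalar test behind (34), namely that the peak output on the ellipsoid obeys y_max exactly when the 2x2 block LMI is positive semidefinite, is easy to verify numerically for a single plant. A sketch in Python/NumPy with an arbitrary positive-definite Q and hypothetical system matrices (illustrative values, not from the paper):

```python
import numpy as np

# Hypothetical data (illustrative only, not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
F = np.array([[-5.0, -6.0]])
C = np.array([[1.0, 0.0]])
Acl = A + B @ F
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # any Q > 0 defines the ellipsoid
Y = F @ Q                                # so A Q + B Y = (A + B F) Q

# Peak output over the ellipsoid: sigma_bar(C (A+BF) Q^{1/2})
w, V = np.linalg.eigh(Q)
Qh = V @ np.diag(np.sqrt(w)) @ V.T       # symmetric square root of Q
sigma = np.linalg.norm(C @ Acl @ Qh, 2)  # largest singular value

def lmi_psd(y_max):
    """Check the 2x2 block LMI of (34) for this single plant."""
    M = np.block([
        [Q,                   (A @ Q + B @ Y).T @ C.T],
        [C @ (A @ Q + B @ Y), y_max**2 * np.eye(1)],
    ])
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -1e-9

feasible   = lmi_psd(1.05 * sigma)   # y_max above the peak: LMI holds
infeasible = lmi_psd(0.95 * sigma)   # y_max below the peak: LMI fails
```

By the Schur complement, the LMI reduces to y_max^2 ≥ σ̄(C(A+BF)Q^{1/2})^2, so pushing y_max just above or below sigma flips feasibility, matching the derivation above.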
