
4F3 - Predictive Control

Lecture 2 - Unconstrained Predictive Control


Jan Maciejowski
[email protected]

4F3 Predictive Control - Lecture 2 – p. 1/23


References
Predictive Control
J. M. Maciejowski. Predictive Control with Constraints. Prentice Hall UK, 2002.
E. F. Camacho and C. Bordons. Model Predictive Control. Springer UK, 2nd ed., 2003.

Discrete Time Control

G. F. Franklin et al. Digital Control of Dynamic Systems. Addison-Wesley, 3rd ed., 2003.
K. J. Åström and B. Wittenmark. Computer-Controlled Systems. Prentice Hall, 3rd ed., 2003.

4F3 Predictive Control - Lecture 2 – p. 2/23


Standing Assumptions
From now on, we will deal with the DT system:

x(k + 1) = Ax(k) + Bu(k)


y(k) = Cx(k)
z(k) = Hx(k)

Assumptions:
(A, B) is stabilizable and (C, A) detectable
C = I ⇒ state feedback
H = C ⇒ all outputs/states are controlled variables
Goal is to regulate the states around the origin
No delays, disturbances, model errors, noise etc.
The final two assumptions will be relaxed in the final section of the course.
4F3 Predictive Control - Lecture 2 – p. 3/23
Linear Quadratic Regulator (LQR) Problem
Problem: Given an initial state x(0) at time k = 0, compute and
implement an input sequence

{u(0), u(1), . . .}

that minimizes the infinite horizon cost function


Σ_{k=0}^{∞} ( x(k)^T Q x(k) + u(k)^T R u(k) )

The state weight Q ⪰ 0 penalizes non-zero states.
The input weight R ≻ 0 penalizes non-zero inputs.
Usually Q and R are diagonal and positive definite.

4F3 Predictive Control - Lecture 2 – p. 4/23


LQR Problems - Infinite Horizon
The infinite horizon LQR problem has an infinite number of
decision variables

{u(0), u(1), . . .}.

A simple, closed-form solution exists if

Q is positive semidefinite (Q ⪰ 0).
R is positive definite (R ≻ 0).
The pair (Q^{1/2}, A) is detectable.

(See Mini-tutorial 7 in JMM or the 4F2 course notes)

We will solve a finite horizon version of the LQR problem with the same assumptions as above, for use in RHC.

4F3 Predictive Control - Lecture 2 – p. 5/23


Receding Horizon Control

[Figure: predicted output and input trajectories plotted against time, relative to the set point, for the horizons starting at times k and k + 1.]

1. Obtain measurement of current state x.
2. Compute optimal finite horizon input sequence {u_0*(x), u_1*(x), . . . , u_{N-1}*(x)}.
3. Implement first part of optimal input sequence κ(x) := u_0*(x).
4. Return to step 1.

4F3 Predictive Control - Lecture 2 – p. 6/23
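
The four steps above are the whole receding horizon strategy, so a minimal code sketch may help fix the structure. The sketch below is not from the lecture: solve_finite_horizon stands in for the optimisation developed in the remaining slides, and measure_state / apply_input are hypothetical plant-interface functions.

def receding_horizon_loop(solve_finite_horizon, measure_state, apply_input, n_steps):
    # Minimal sketch of the receding horizon procedure (steps 1-4 above).
    for k in range(n_steps):
        x = measure_state()            # 1. measure the current state x(k)
        U = solve_finite_horizon(x)    # 2. optimal sequence {u_0*(x), ..., u_{N-1}*(x)}
        u0 = U[0]                      # 3. keep only the first input, kappa(x) := u_0*(x)
        apply_input(u0)                #    ... and apply it to the plant
        # 4. discard the rest of U and repeat at time k + 1

Only the first element of each optimal sequence is ever applied; the optimisation is repeated at every sample with the newly measured state.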




Finite Horizon Optimal Control
Problem: Given an initial state x = x(k), compute a finite horizon
input sequence
{u_0, u_1, . . . , u_{N-1}}
that minimizes the finite horizon cost function
V(x, (u_0, . . . , u_{N-1})) = x_N^T P x_N + Σ_{i=0}^{N-1} ( x_i^T Q x_i + u_i^T R u_i )

where
x_0 = x
x_{i+1} = A x_i + B u_i ,   i = 0, 1, . . . , N − 1

Important: V(·) is a function of the initial state x and the first N inputs u_i, and not of the time index k or the predicted states x_i.

4F3 Predictive Control - Lecture 2 – p. 7/23


Finite Horizon Optimal Control
Some terminology:

The vector x_i is the prediction of x(k + i), given the current state x(k) and the inputs u(k + j) = u_j for j = 0, 1, . . . , N − 1

The integer N is the control horizon

The matrix P ∈ R^{n×n} is the terminal weight, with P ⪰ 0.

The stability and performance of a receding horizon control law based on this problem are determined by the parameters Q, R, P and N.

4F3 Predictive Control - Lecture 2 – p. 8/23


Some Notation
Define the stacked vectors U ∈ R^{Nm} and X ∈ R^{Nn} as:

U := \begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \\ u_{N-1} \end{bmatrix},
\qquad
X := \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_N \end{bmatrix}

Note that u_i ∈ R^m and x_i ∈ R^n, and that x = x_0 = x(k) is known.

Stacked outputs Y ∈ R^{Np} and controlled variables Z ∈ R^{Nq} can be defined in a similar way.

4F3 Predictive Control - Lecture 2 – p. 9/23


More Notation
The cost function is defined as
V(x, U) := x_N^T P x_N + Σ_{i=0}^{N-1} ( x_i^T Q x_i + u_i^T R u_i )

The value function is defined as

V*(x) := min_U V(x, U)

The optimal input sequence is defined as

U*(x) := argmin_U V(x, U) =: {u_0*(x), u_1*(x), . . . , u_{N-1}*(x)}

4F3 Predictive Control - Lecture 2 – p. 10/23


Derivation of RHC Law
Compute prediction matrices Φ and Γ such that

X = Φx + ΓU

Rewrite the cost function V(·) in terms of x and U
Compute the gradient ∇_U V(x, U)
Set ∇_U V(x, U) = 0 and solve for U*(x)
The RHC control law is the first part of this optimal sequence:

u_0*(x) = [ I_m  0  . . .  0 ] U*(x)

When there are no constraints, it is possible to do this analytically.

4F3 Predictive Control - Lecture 2 – p. 11/23




Constructing the Prediction Matrices
Want to find matrices Φ and Γ such that X = Φx + ΓU :

x_1 = A x_0 + B u_0
x_2 = A x_1 + B u_1
x_3 = A x_2 + B u_2
...

Substituting each state into the next:

x_1 = A x_0 + B u_0
x_2 = A(A x_0 + B u_0) + B u_1 = A^2 x_0 + A B u_0 + B u_1
x_3 = A^3 x_0 + A^2 B u_0 + A B u_1 + B u_2
...
x_N = A^N x_0 + A^{N−1} B u_0 + · · · + A B u_{N−2} + B u_{N−1}
4F3 Predictive Control - Lecture 2 – p. 12/23
Constructing the Prediction Matrices
Collect terms to get:

\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}
=
\begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix} x_0
+
\begin{bmatrix}
B & 0 & \cdots & 0 \\
AB & B & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
A^{N-1}B & A^{N-2}B & \cdots & B
\end{bmatrix}
\begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix}

Recalling that x := x_0, the prediction matrices Φ and Γ are:

\Phi := \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix},
\qquad
\Gamma := \begin{bmatrix}
B & 0 & \cdots & 0 \\
AB & B & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
A^{N-1}B & A^{N-2}B & \cdots & B
\end{bmatrix}

4F3 Predictive Control - Lecture 2 – p. 13/23
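
As a hedged aside (not part of the original slides), the construction of Φ and Γ is easy to mirror in code. The sketch below assumes numpy and builds the two matrices directly from the recursion above.

import numpy as np

def prediction_matrices(A, B, N):
    # Build Phi (Nn x n) and Gamma (Nn x Nm) such that X = Phi x + Gamma U.
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):              # block row i predicts x_{i+1}
        for j in range(i + 1):      # input u_j enters x_{i+1} through A^(i-j) B
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma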


Constructing the Cost Function
Recall that the cost function is
V(x, U) := x_N^T P x_N + Σ_{i=0}^{N-1} ( x_i^T Q x_i + u_i^T R u_i )

= x_0^T Q x_0
  + \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}^T
    \begin{bmatrix} Q & & & \\ & Q & & \\ & & \ddots & \\ & & & P \end{bmatrix}
    \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}
  + \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix}^T
    \begin{bmatrix} R & & & \\ & R & & \\ & & \ddots & \\ & & & R \end{bmatrix}
    \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix}

Recalling x := x_0, these sums can be rewritten in matrix form as

V(x, U) = x^T Q x + X^T Ω X + U^T Ψ U

Note that: P ⪰ 0 and Q ⪰ 0 ⇒ Ω ⪰ 0
R ≻ 0 ⇒ Ψ ≻ 0.

4F3 Predictive Control - Lecture 2 – p. 14/23
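
From the block diagonal structure above, Ω collects N − 1 copies of Q followed by the terminal weight P, and Ψ collects N copies of R. A small sketch of this step (assuming scipy; not from the slides) is:

from scipy.linalg import block_diag

def cost_matrices(Q, R, P, N):
    # Omega = blkdiag(Q, ..., Q, P) with N-1 copies of Q (weights on x_1, ..., x_N);
    # Psi = blkdiag(R, ..., R) with N copies of R (weights on u_0, ..., u_{N-1}).
    Omega = block_diag(*([Q] * (N - 1) + [P]))
    Psi = block_diag(*([R] * N))
    return Omega, Psi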


Constructing the Cost Function
Recall that: V(x, U) = x^T Q x + X^T Ω X + U^T Ψ U, with X = Φx + ΓU.

V(x, U) = x^T Q x + (Φx + ΓU)^T Ω (Φx + ΓU) + U^T Ψ U

= x^T Q x + x^T Φ^T Ω Φ x + U^T Γ^T Ω Γ U + U^T Ψ U + x^T Φ^T Ω Γ U + U^T Γ^T Ω Φ x

= x^T (Q + Φ^T Ω Φ) x + U^T (Ψ + Γ^T Ω Γ) U + 2 U^T Γ^T Ω Φ x

4F3 Predictive Control - Lecture 2 – p. 15/23


Solving for U*(x)
Recall that:

V(x, U) = (1/2) U^T G U + U^T F x + x^T (Q + Φ^T Ω Φ) x

where
G := 2(Ψ + Γ^T Ω Γ) ≻ 0,   (since Ψ ≻ 0 and Ω ⪰ 0)
F := 2 Γ^T Ω Φ.

Important: This is a convex and quadratic function of U.

The unique and global minimum occurs at the point where

∇_U V(x, U) = G U + F x = 0.

The optimal input sequence is therefore U*(x) = −G^{−1} F x.

4F3 Predictive Control - Lecture 2 – p. 16/23


Receding Horizon Control Law
The optimal input sequence is:

U*(x) = −G^{−1} F x.

The RHC law is defined by the first part of U*(x):

u_0*(x) = [ I_m  0  . . .  0 ] U*(x).

Define:

Krhc = − [ I_m  0  . . .  0 ] G^{−1} F

so that u = Krhc x.
• This is a time-invariant linear control law.
• It approximates the optimal infinite horizon control law.

4F3 Predictive Control - Lecture 2 – p. 17/23
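
Putting the pieces together, a possible (not authoritative) implementation of the unconstrained RHC gain, reusing the prediction_matrices and cost_matrices sketches from the earlier slides, is shown below. The factor of 2 in G and F cancels in −G^{−1}F, so it does not affect the gain.

import numpy as np

def rhc_gain(A, B, Q, R, P, N):
    # Unconstrained receding horizon gain Krhc such that u(k) = Krhc x(k).
    m = B.shape[1]
    Phi, Gamma = prediction_matrices(A, B, N)
    Omega, Psi = cost_matrices(Q, R, P, N)
    G = 2.0 * (Psi + Gamma.T @ Omega @ Gamma)   # Hessian of V in U (positive definite)
    F = 2.0 * (Gamma.T @ Omega @ Phi)
    K_all = -np.linalg.solve(G, F)              # U*(x) = K_all x = -G^{-1} F x
    return K_all[:m, :]                         # first block row: Krhc = -[I_m 0 ... 0] G^{-1} F

The closed-loop system is then x(k + 1) = (A + B Krhc) x(k).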


Alternative Formulations

Many variations on our predictive control formulation exist:

Could optimize over predicted input changes ∆u_i = u_i − u_{i−1}
  V(·) is then a function of x(k) and u(k − 1).
  The decision variables are {∆u_0, ∆u_1, . . . , ∆u_{N−1}}.
  Allows penalties on rapid control fluctuations.
  (A sketch of this change of variables follows this slide.)
Could allow a larger horizon for prediction than for control
  Inputs are constrained at the end of the control horizon,
  e.g. u_i = K x_i or ∆u_i = 0 for all i ≥ N.

4F3 Predictive Control - Lecture 2 – p. 18/23
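
For the ∆u formulation mentioned above, the change of variables can be captured by a lower block-triangular map, since u_i = u(k − 1) + ∆u_0 + · · · + ∆u_i. The following sketch (my own illustration, not from the slides) only builds that map; substituting U = U_offset + T ∆U into X = Φx + ΓU and the cost would complete the reformulation.

import numpy as np

def delta_u_map(u_prev, N):
    # Map the decision variables DeltaU = [du_0; ...; du_{N-1}] to the stacked inputs U,
    # using u_i = u_prev + du_0 + ... + du_i.
    m = u_prev.shape[0]
    T = np.kron(np.tril(np.ones((N, N))), np.eye(m))             # lower block-triangular of I_m blocks
    U_offset = np.kron(np.ones((N, 1)), u_prev.reshape(-1, 1))   # N stacked copies of u_prev
    return U_offset, T                                           # U = U_offset + T @ DeltaU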


Stability in Predictive Control
Warning: The RHC law u(k) = Krhc x(k) might not be stabilizing.

Stability of the RHC law depends on the proper choice of the parameters Q, R, P and N.
It is not obvious how to do this.

Example:

Consider the following open-loop unstable system with 2 states and 1 input:

x(k + 1) = \begin{bmatrix} 1.216 & −0.055 \\ 0.221 & 0.9947 \end{bmatrix} x(k) + \begin{bmatrix} 0.02763 \\ 0.002763 \end{bmatrix} u(k)

4F3 Predictive Control - Lecture 2 – p. 19/23


Stability in Predictive Control

• Fix Q = P = I
• Compute ρ(A + BKrhc) for various R and N:
  • Vary N = 1, 2, . . . , 50
  • Vary R = 0, 0.02, . . . , 10
• Plot: black = stable, white = unstable

[Figure: stability map of the closed loop over the (N, R) grid; black cells mark stable combinations.]

Not all values of R and N guarantee a stable closed loop; we will return to this in Lecture 5.

4F3 Predictive Control - Lecture 2 – p. 20/23
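
A rough way to reproduce a few points of this experiment (reusing the rhc_gain sketch from earlier; the exact grid and plot from the slide are not reproduced) is:

import numpy as np

# Open-loop unstable example system from the previous slide.
A = np.array([[1.216, -0.055],
              [0.221,  0.9947]])
B = np.array([[0.02763],
              [0.002763]])
Q = P = np.eye(2)

# Coarse sweep over horizons N and input weights R; rho < 1 means A + B*Krhc is stable.
for N in (1, 5, 10, 50):
    for r in (0.02, 1.0, 10.0):
        K = rhc_gain(A, B, Q, np.array([[r]]), P, N)
        rho = max(abs(np.linalg.eigvals(A + B @ K)))
        print(f"N={N:2d}  R={r:5.2f}  rho(A+B*Krhc)={rho:.3f}  {'stable' if rho < 1 else 'unstable'}")

(R = 0 from the slide's grid is excluded here, since the derivation assumed R ≻ 0.)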


Equivalence between RHC and LQR
The solution to the infinite horizon LQR problem is:

u(k) = Klqr x(k)

where
Klqr = −(B^T P B + R)^{−1} B^T P A

and P is the solution to the Algebraic Riccati Equation (ARE):

P = A^T P A − A^T P B (B^T P B + R)^{−1} B^T P A + Q

See 4F2 notes or JMM mini-tutorial 7


If the terminal weight P in the finite horizon cost function V (·)
is a solution to the ARE above, then:

Krhc = Klqr

4F3 Predictive Control - Lecture 2 – p. 21/23
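
This equivalence is easy to check numerically. The sketch below (continuing with the example system and the rhc_gain helper from the stability sketch above, with an assumed input weight R = 1 and an arbitrary horizon N = 10) solves the discrete-time ARE with scipy and compares the two gains.

import numpy as np
from scipy.linalg import solve_discrete_are

R_lqr = np.array([[1.0]])                       # assumed input weight for this check
P_inf = solve_discrete_are(A, B, Q, R_lqr)      # solves the ARE on this slide
Klqr = -np.linalg.solve(B.T @ P_inf @ B + R_lqr, B.T @ P_inf @ A)
Krhc = rhc_gain(A, B, Q, R_lqr, P_inf, N=10)    # terminal weight P set to the ARE solution
print(np.allclose(Klqr, Krhc))                  # True (up to numerical error), for any N >= 1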


Implementation of the RHC Law

[Block diagram: the gain Krhc in feedback with the DT system; the measured state x(k) is fed back to Krhc, which generates the input u(k).]

The matrix Krhc is a time-invariant, linear state feedback gain.


The RHC law is given by

u(k) = Krhc x(k)

Important: For our unconstrained problem, the control law can be written in closed form.

4F3 Predictive Control - Lecture 2 – p. 22/23


Output Feedback Predictive Control

[Block diagram: the controller consists of an observer and the gain Krhc; the observer uses u(k) and the measured output y(k) to produce the estimate x̂(k|k), which Krhc uses to generate u(k) for the DT system.]

If C ≠ I ⇒ output feedback.
Use an observer to provide estimates x̂(k|k).
RHC law is the same with x(k) replaced by x̂(k|k).

4F3 Predictive Control - Lecture 2 – p. 23/23
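
As a final hedged sketch (the observer design itself is not covered on this slide), one step of observer-based RHC could look as follows; L is an assumed observer gain, e.g. from pole placement or a Kalman filter, and x_hat_pred is the one-step-ahead prediction x̂(k|k−1).

import numpy as np

def output_feedback_step(x_hat_pred, y, A, B, C, L, Krhc):
    # One step of observer-based receding horizon control (a sketch).
    x_hat = x_hat_pred + L @ (y - C @ x_hat_pred)   # measurement update: x_hat(k|k)
    u = Krhc @ x_hat                                # RHC law with x(k) replaced by x_hat(k|k)
    x_hat_pred_next = A @ x_hat + B @ u             # time update: x_hat(k+1|k)
    return u, x_hat_pred_next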
