
Control Systems II

Gioele Zardini
[email protected]
August 2, 2018


Abstract
This "Skript" is made of my notes from the lecture Control Systems II of Dr. Gregor
Ochsner (literature of Prof. Dr. Lino Guzzella) and from my lectures as teaching assistant
in 2017 for the lecture of Dr. Guillaume Ducard and in 2018 for the lecture of Dr. Jacopo
Tani.
This document should give you the chance to review the contents of the lecture Control
Systems II once more and to practice them through many examples and exercises.

The updated version of the Skript is available on n.ethz.ch/∼gzardini/.

I cannot guarantee the correctness of what is included in this Skript: it is possible
that small errors occur. For this reason, I am very grateful for feedback and corrections,
in order to improve the quality of the literature. An errata version of these notes
will always be available on my homepage.
Enjoy your Control Systems II!

Cheers!

Gioele Zardini

Version Update:

Version 1: June 2018


Contents
1 Recapitulation from Control Systems I 7
1.1 Loop Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.1 Standard Feedback Control Loop . . . . . . . . . . . . . . . . . . . 7
1.1.2 The Gang of Six . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.3 The Gang of Four . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.1.4 Relations to Performance . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.5 Feed Forward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2 General Control Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.1 Nominal Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.3 Synthesis: Loop Shaping . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.2.5 Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3 The Bode’s Integral Formula . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2 Digital Control 30
2.1 Signals and Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.2 Discrete-Time Control Systems . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.1 Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.2 Discrete-time Control Loop Structure . . . . . . . . . . . . . . . . . 32
2.3 Controller Discretization/Emulation . . . . . . . . . . . . . . . . . . . . . . 34
2.3.1 The z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 State Space Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.5 Discrete-time Systems Stability . . . . . . . . . . . . . . . . . . . . . . . . 45
2.6 Discrete Time Controller Synthesis . . . . . . . . . . . . . . . . . . . . . . 46
2.6.1 Emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.6.2 Discrete-Time Synthesis . . . . . . . . . . . . . . . . . . . . . . . . 47
2.7 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3 Introduction to MIMO Systems 65


3.1 System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.1.1 State Space Description . . . . . . . . . . . . . . . . . . . . . . . . 65
3.1.2 Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2 Poles and Zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.1 Zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.2 Poles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2.3 Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

4 Analysis of MIMO Systems 76


4.1 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.1.1 Vector Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.1.2 Matrix Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1.3 Signal Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.1.4 System Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.1.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80


4.2 Singular Value Decomposition (SVD) . . . . . . . . . . . . . . . . . . . . . 83


4.2.1 Preliminary Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.2.2 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . 83
4.2.3 Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.2.4 Directions of poles and zeros . . . . . . . . . . . . . . . . . . . . . . 94
4.2.5 Frequency Responses . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.3 MIMO Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.1 External Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.2 Internal Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.3 Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.3.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.4 MIMO Controllability and Observability . . . . . . . . . . . . . . . . . . . 115
4.4.1 Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.2 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.5 MIMO Performance Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.5.1 Output Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.5.2 Input Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.5.3 Reference Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.5.4 Useful Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.5.5 Towards Clearer Bounds . . . . . . . . . . . . . . . . . . . . . . . . 126
4.5.6 Is this the whole Story? Tradeoffs . . . . . . . . . . . . . . . . . . . 128
4.5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
4.6 MIMO Robust Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.6.1 MIMO Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.6.2 SISO Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.6.3 Linear Fractional Transform (LFT) . . . . . . . . . . . . . . . . . . 131
4.6.4 Unstructured Small Gain Theorem . . . . . . . . . . . . . . . . . . 132
4.6.5 From the Block-Diagram to the LFT . . . . . . . . . . . . . . . . . 134
4.6.6 Recasting Performance in a Robust Stability Problem . . . . . . . . 137
4.7 MIMO Robust Performance . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.7.1 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.7.2 M-Delta Approach: from RP to RS . . . . . . . . . . . . . . . . . . 138
4.7.3 Structured Singular Value . . . . . . . . . . . . . . . . . . . . . . . 139

5 MIMO Control Fundamentals 143


5.1 Decentralized Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.1.1 Idea and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.1.2 Relative-Gain Array (RGA) . . . . . . . . . . . . . . . . . . . . . . 144
5.1.3 Q Parametrization . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
5.2 Internal Model Control (IMC) . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.2.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.2.2 Example: Predictive Control . . . . . . . . . . . . . . . . . . . . . . 160
5.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

6 State Feedback 169


6.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.2 Reachability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.2.1 Reachable Canonical Form . . . . . . . . . . . . . . . . . . . . . . . 170
6.3 Pole Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174


6.3.1 Direct Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174


6.3.2 Ackermann Formula . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.4 LQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.4.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.4.2 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.4.3 General Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.4.4 Weighted LQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.4.5 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.4.6 Direct Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.4.7 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

7 State Estimation 193


7.1 Preliminary Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.2 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.3 The Luenberger Observer . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
7.3.1 Duality of Estimation and Control . . . . . . . . . . . . . . . . . . 195
7.3.2 Putting Things Together . . . . . . . . . . . . . . . . . . . . . . . . 195
7.4 Linear Quadratic Gaussian (LQG) Control . . . . . . . . . . . . . . . . . . 196
7.4.1 LQR Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . 196
7.4.2 LQR Problem Solution . . . . . . . . . . . . . . . . . . . . . . . . . 197
7.4.3 Simplified Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
7.4.4 Steady-state Kalman Filter . . . . . . . . . . . . . . . . . . . . . . 197
7.4.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
7.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

8 H∞ Control 208
8.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
8.2 Mixed Sensitivity Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 209
8.2.1 Transfer Functions Recap . . . . . . . . . . . . . . . . . . . . . . . 209
8.2.2 How to ensure Robustness? . . . . . . . . . . . . . . . . . . . . . . 210
8.2.3 How to use this in H∞ Control? . . . . . . . . . . . . . . . . . . . . 210
8.3 Finding Tzw (s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
8.3.1 General Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
8.3.2 Applying Mixed Sensitivity Approach . . . . . . . . . . . . . . . . . 212
8.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
8.4.1 State Space Representation . . . . . . . . . . . . . . . . . . . . . . 213
8.4.2 H∞ Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
8.4.3 Feasibility Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 216

9 Elements of Nonlinear Control 223


9.1 Equilibrium Point and Linearization . . . . . . . . . . . . . . . . . . . . . . 223
9.2 Nominal Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
9.2.1 Internal/Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . 224
9.2.2 External/BIBO Stability . . . . . . . . . . . . . . . . . . . . . . . . 224
9.2.3 Stability for LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . 224
9.3 Local Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
9.3.1 Region of Attraction . . . . . . . . . . . . . . . . . . . . . . . . . . 226
9.4 Lyapunov Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
9.4.1 Lyapunov Principle - General Systems . . . . . . . . . . . . . . . . 226


9.5 Gain Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228


9.6 Feedback Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
9.6.1 Input-State Feedback Linearization . . . . . . . . . . . . . . . . . . 228
9.6.2 Input-State Linearizability . . . . . . . . . . . . . . . . . . . . . . . 229
9.7 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231

A Linear Algebra 242


A.1 Matrix-Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
A.2 Differentiation with Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 242
A.3 Matrix Inversion Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

B Rules 243
B.1 Trigo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
B.2 Euler-Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
B.3 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
B.4 Logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
B.5 Magnitude and Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
B.6 dB-Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

C MATLAB 245
C.1 General Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
C.2 Control Systems Commands . . . . . . . . . . . . . . . . . . . . . . . . . . 246
C.3 Plot and Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247


1 Recapitulation from Control Systems I


1.1 Loop Transfer Functions
1.1.1 Standard Feedback Control Loop
The standard feedback control system structure1 is depicted in Figure 1. This representation will be the key element for your further control systems studies.

Figure 1: Standard feedback control system structure (feedforward F(s), controller C(s), plant P(s); signals r(t), e(t), u(t), d(t), v(t), η(t), n(t), y(t)).
The plant P(s) represents the system you want to control: let's imagine a Duckiebot.
The variable u(t) represents the actual input that is given to the system and d(t) some
disturbance that is applied to it. In the Duckiebot example, these two elements together
act on the actuators. The signal v(t) represents the disturbed input.
The signal η(t) describes the real output of the system. The variable y(t), instead,
describes the measured output of the system, which is obtained by a sensor and may be
corrupted by some noise n(t). In the case of the Duckiebot, this can correspond to the
position of the vehicle and its orientation (pose). The feedback controller C(s) makes
sure that the tracking error e(t) between the measured output and the reference r(t)
approaches zero. F(s) represents the feedforward controller of the system.
Remark. Note the notation: signals in time domain are written with small letters, such as
n(t). Transfer functions in frequency domain (Laplace/Fourier transformed) are written
with capital letters, such as P (s).
Why transfer functions? In order to make the analysis of such a system easier, the loop
transfer functions are defined. In fact, it is worth transforming a problem from the time
domain into the frequency domain, solving it there, and transforming it back into the time
domain. The main reason behind this is that convolutions (computationally complex
operations which relate signals) become multiplications in the frequency domain (through
the Laplace/Fourier transformation).

1
Note that multiple versions of this loop exist


1.1.2 The Gang of Six


The loop gain L(s) is the open-loop transfer function defined by
L(s) = P (s)C(s). (1.1)
The sensitivity S(s) is the closed-loop transfer function defined by

S(s) = 1/(1 + L(s)) = 1/(1 + P(s)C(s)).    (1.2)

Remark. Note that the sensitivity gives a measure of the influence of the disturbance d on the
output y.
The complementary sensitivity T(s) is the closed-loop transfer function defined by

T(s) = L(s)/(1 + L(s)) = P(s)C(s)/(1 + P(s)C(s)).    (1.3)
It can be shown that

S(s) + T (s) = 1. (1.4)

Recalling that a well-performing controller minimizes the difference between the reference R(s)
and the output Y(s), one can write this difference as an error E(s). This can be computed
as

E(s) = F(s)R(s) − Y(s)
     = F(s)R(s) − (η(s) + N(s))
     = F(s)R(s) − (P(s)V(s) + N(s))
     = F(s)R(s) − (P(s)(D(s) + U(s)) + N(s))    (1.5)
     = F(s)R(s) − P(s)D(s) − N(s) − P(s)U(s)
     = F(s)R(s) − P(s)D(s) − N(s) − P(s)C(s)E(s).

Solving this last relation for E(s), one gets

(1 + P(s)C(s))E(s) = F(s)R(s) − P(s)D(s) − N(s)
E(s) = F(s)/(1 + P(s)C(s)) · R(s) − P(s)/(1 + P(s)C(s)) · D(s) − 1/(1 + P(s)C(s)) · N(s).    (1.6)
This procedure can be applied to each pair of signals of the feedback loop depicted in
Figure 1. The following equations can be derived:

[Y(s); η(s); V(s); U(s); E(s)] = 1/(1 + P(s)C(s)) ·
    [ P(s)C(s)F(s)    P(s)        1
      P(s)C(s)F(s)    P(s)       −P(s)C(s)
      C(s)F(s)        1          −C(s)
      C(s)F(s)       −P(s)C(s)   −C(s)
      F(s)           −P(s)       −1        ] · [R(s); D(s); N(s)].    (1.7)

Exercise 1. A good exercise to practice this procedure could be to derive all the other
relations reported in Equation (1.7) on your own.
As you can notice, many terms in the relations introduced in Equation (1.7), are repeated.
Using the defined sensitivity function S(s) (Equation 1.2) and the complementary sensi-
tivity function T (s) (Equation 1.3), one can define four new important transfer functions.
The load sensitivity function is defined as

P(s)S(s) = P(s)/(1 + P(s)C(s)),    (1.8)

and gives us an intuition of how the disturbance affects the output. The noise
sensitivity function is defined as

C(s)S(s) = C(s)/(1 + P(s)C(s)),    (1.9)

and gives us an intuition of how the noise affects the input. Moreover, one can define
two more useful transfer functions:

C(s)F(s)S(s) = C(s)F(s)/(1 + P(s)C(s)),    T(s)F(s) = P(s)C(s)F(s)/(1 + P(s)C(s)).    (1.10)

These four newly introduced transfer functions, together with the sensitivity and the
complementary sensitivity functions, form the so-called gang of six.

1.1.3 The Gang of Four


The special case where F(s) = 1 (i.e., no feedforward is present) leads to the equivalence
of some of the defined transfer functions. In particular, we are left with four transfer
functions:

S(s) = 1/(1 + P(s)C(s))              sensitivity function,
T(s) = P(s)C(s)/(1 + P(s)C(s))       complementary sensitivity function,    (1.11)
P(s)S(s) = P(s)/(1 + P(s)C(s))       load sensitivity function,
C(s)S(s) = C(s)/(1 + P(s)C(s))       noise sensitivity function.
At this point one may say: I can define these new transfer functions, but why are they
necessary? Let's illustrate this through an easy example.

Example 1. Imagine dealing with a plant P(s) = 1/(s − 1) and controlling it through a
PID controller of the form C(s) = k · (s − 1)/s. You can observe that the plant has a pole at
s = 1, which makes it unstable. If one computes the classic transfer functions learned in
Control Systems I (Equations (1.1), (1.2), (1.3)), one gets

L(s) = C(s)P(s) = 1/(s − 1) · k · (s − 1)/s = k/s,
S(s) = 1/(1 + L(s)) = s/(s + k),    (1.12)
T(s) = L(s)/(1 + L(s)) = k/(s + k).

You may notice that none of these transfer functions contains the important information
about the unstable pole of the plant. However, this information is crucial: if one computes
the rest of the gang of four, one gets

P(s)S(s) = (1/(s − 1))/(1 + k/s) = s/((s − 1)(s + k)),
C(s)S(s) = (k · (s − 1)/s)/(1 + k/s) = k(s − 1)/(s + k).    (1.13)

These two transfer functions still contain the problematic term and are extremely useful
to determine the influence of the unstable pole on the system, because they explicitly
show it.
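To see this numerically, Example 1 can be reproduced in MATLAB with the Control System Toolbox. This is a minimal sketch; the gain k = 2 is an arbitrary choice for illustration and is not part of the example above.

s = tf('s');                  % Laplace variable
k = 2;                        % assumed controller gain (any k > 0 works)
P = 1/(s - 1);                % unstable plant of Example 1
C = k*(s - 1)/s;              % controller that cancels the unstable pole
L = minreal(P*C);             % loop gain, reduces to k/s
S = minreal(1/(1 + L));       % sensitivity
T = minreal(L/(1 + L));       % complementary sensitivity
PS = minreal(P*S);            % load sensitivity
CS = minreal(C*S);            % noise sensitivity
pole(S), pole(T)              % only the stable pole at -k appears
pole(PS)                      % the unstable pole at +1 reappears here

The load sensitivity P(s)S(s) reveals the hidden unstable pole, exactly as in Equation (1.13).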
Exercise 2. Which consequence does the application of a small disturbance d on the
system have?

1.1.4 Relations to Performance


By looking at the feedback loop in Figure 1, one can introduce a new variable

ε(t) = r(t) − η(t), (1.14)

which represents the error between the reference signal and the real plant output. One
can show, that this error can be written as

ε(s) = S(s)R(s) − P (s)S(s)D(s) + T (s)N (s). (1.15)

From this equation one can essentially read:

• For good reference tracking and disturbance attenuation, one needs a small S(s)
(i.e., a high loop gain L(s)).

• For good noise rejection, one needs a small T(s) (i.e., a small loop gain L(s)).
Exercise 3. Derive Equation (1.15) with what you learned in this chapter.

1.1.5 Feed Forward


The feedforward technique complements the feedback one. While feedback is error based
and tries to compensate for unexpected or unmodeled phenomena, such as disturbances,
noise and model uncertainty, the feedforward technique works well if we have some
knowledge of the system (i.e. disturbances, plant, reference). Let's illustrate the
main idea behind this concept through an easy example.
Example 2. The easiest example for this concept is the one of perfect control. Imagine
having a system like the one depicted in Figure 2.
This is also known as perfect control/plant inversion, where we want to find an input u(t),
such that y(t) = r(t). One can write

Y (s) = P (s)U (s) (1.16)

and hence
R(s) = P (s)U (s) ⇒ U (s) = P (s)−1 R(s). (1.17)


Figure 2: Standard perfect control system structure (plant P(s) with input u(t) and output y(t)).

This is not possible when:

• The plant P(s) has right-half-plane zeros (unstable inverse).

• There are time delays (non-causal inverse): the inverse would need future information
about the output trajectory in order to perform the desired output tracking.

• The plant has more poles than zeros (unrealizable inverse).

• There is model uncertainty (unknown inverse).

But what does it mean for a system to be realizable or causal? Let’s illustrate this with
an example.

Example 3. If a transfer function has more zeros than poles, it contains pure differentiators,
which are not causal. Imagine dealing with the transfer function

P(s) = (s + 2)(s + 3)/(s + 1).    (1.18)

This transfer function has two zeros and one pole. It can be rewritten as

P(s) = (s + 2)(s + 3)/(s + 1)
     = (s² + 5s + 6)/(s + 1)
     = (s(s + 1) + 4s + 6)/(s + 1)    (1.19)
     = s + (4s + 6)/(s + 1),
where s is a pure differentiator. A pure differentiator's transfer function can be written
as the ratio of an output and an input:

G(s) = s = Y(s)/U(s),    (1.20)

which describes the time domain equation

y(t) = u̇(t) = lim_{δt→0} (u(t + δt) − u(t))/δt,    (1.21)

which confirms that this transfer function must have knowledge of future values of the
input u(t) (through u(t + δt)) in order to react with the current output y(t). This is by
definition not physical and hence not realizable, i.e. not causal.
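As a quick numerical check of the decomposition in Equation (1.19), the polynomial division can be carried out in MATLAB with deconv. This is a minimal sketch; the fully divided form s + 4 + 2/(s + 1) that it produces is equivalent to the form above.

num = [1 5 6];                % s^2 + 5s + 6 = (s+2)(s+3)
den = [1 1];                  % s + 1
[q, r] = deconv(num, den);    % q = [1 4] -> quotient s + 4, r = [0 0 2] -> remainder 2
% P(s) = (s + 4) + 2/(s + 1): the improper part contains the pure differentiator s.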


1.2 General Control Objectives


In this section we are going to present the standard control objectives and relate them to
what you learned in the course Control Systems I.
But what are the real objectives of a controller? We can subdivide them into four specific
needs:

1. Nominal Stability: Is the closed-loop interconnection of a nominal plant and a
controller stable?

2. Nominal Performance: Does the closed-loop interconnection of a nominal plant
and a controller achieve specific performance objectives?

3. Robust Stability: Is the closed-loop interconnection of any perturbed nominal
plant and a controller stable?

4. Robust Performance: Does the closed-loop interconnection of any plant and a
controller achieve specific performance objectives?

One can essentially subdivide the job of a control engineer into two big tasks:

(I) Analysis: Given a controller, how can we check that the objectives above are
satisfied?

(II) Synthesis: Given a plant, how can we design a controller that achieves the objec-
tives above?

Let’s analyse the objectives of a controller with respect to their relation to these two
tasks.

1.2.1 Nominal Stability


During the course Control Systems I, you learned about different stability concepts. More-
over, you have learned the differences between internal and external stability: let’s recall
them here. Consider a generic nonlinear system defined by the dynamics

ẋ(t) = f (x(t)), t ∈ R, x(t) ∈ Rn , f : Rn × R → Rn . (1.22)

Definition 1. A state x̂ ∈ Rn is called an equilibrium of system (1.22) if and only if


f (x̂) = 0 ∀t ∈ R.

Internal/Lyapunov Stability
Internal stability, also called Lyapunov stability, characterises the stability of the trajectories
of a dynamic system subject to a perturbation near the equilibrium. Let now
x̂ ∈ Rn be an equilibrium of system (1.22).

Definition 2. An equilibrium x̂ ∈ Rn is said to be Lyapunov stable if

∀ε > 0, ∃δ > 0 s.t. ‖x(0) − x̂‖ < δ ⇒ ‖x(t) − x̂‖ < ε ∀t ≥ 0. (1.23)

In words, an equilibrium is said to be Lyapunov stable if for any bounded initial condition
and zero input, the state remains bounded.


Definition 3. An equilibrium x̂ ∈ Rn is said to be asymptotically stable in Ω ⊆ Rn if it
is Lyapunov stable and attractive, i.e. if

lim_{t→∞} (x(t) − x̂) = 0, ∀x(0) ∈ Ω. (1.24)

In words, an equilibrium is said to be asymptotically stable if, for any bounded initial
condition and zero input, the state converges to the equilibrium.
Definition 4. An equilibrium x̂ ∈ Rn is said to be unstable if it is not stable.
Remark. Note that stability is a property of the equilibrium and not of the system in
general.

External/BIBO Stability
External stability, also called BIBO stability (Bounded Input-Bounded Output), charac-
terises the stability of a dynamic system which for bounded inputs gives back bounded
outputs.
Definition 5. A signal s(t) is said to be bounded, if there exists a finite value B > 0
such that the signal magnitude never exceeds B, that is

|s(t)| ≤ B ∀t ∈ R. (1.25)

Definition 6. A system is said to be BIBO-stable if

‖u(t)‖ ≤ ε ∀t ≥ 0, and x(0) = 0 ⇒ ‖y(t)‖ < δ ∀t ≥ 0, ε, δ ∈ R. (1.26)

In words, for any bounded input, the output remains bounded.

Stability for LTI Systems


Above, we focused on general nonlinear system. However, in Control Systems I you
learned that the output y(t) for a LTI system of the form

ẋ(t) = Ax(t) + Bu(t)


(1.27)
y(t) = Cx(t) + Du(t),

can be written as
y(t) = C e^{At} x(0) + C ∫_0^t e^{A(t−τ)} B u(τ) dτ + D u(t).    (1.28)

The transfer function relating input to output is a rational function

P(s) = C(sI − A)^{-1} B + D
     = (b_{n−1} s^{n−1} + b_{n−2} s^{n−2} + . . . + b_0)/(s^n + a_{n−1} s^{n−1} + . . . + a_0) + d.    (1.29)
Furthermore, it holds:
• The zeros of the numerator of Equation (1.29) are the zeros of the system, i.e. the
values si which fulfill
P (si ) = 0. (1.30)


• The zeros of the denominator of Equation (1.29) are the poles of the system, i.e.
the values si which fulfill det(si I − A) = 0, or, in other words, the eigenvalues of A.

One can show, that the following Theorem holds:

Theorem 1. The equilibrium x̂ = 0 of a linear time invariant system is stable if and only
if the following two conditions are met:

1. For all λ ∈ σ(A), Re(λ) ≤ 0.

2. The algebraic and geometric multiplicity of all λ ∈ σ(A) such that Re(λ) = 0 are
equal.

Remark. We won't go into the proof of this theorem, because it is beyond the scope of the
course. As an intuition, however, one can look at Equation (1.28). As you learned in
Linear Algebra, the computation of the matrix exponential can be simplified with the help of
the diagonalization of a matrix. If matrix A is diagonalizable, you can derive a
form where you are left with exponential terms of the eigenvalues of A on the diagonal.
If the real parts of these eigenvalues are bigger than 0, the exponentials, which depend on time, diverge.
If they are smaller than zero, the exponentials converge to 0 (asymptotically
stable behaviour). In the case of eigenvalues with zero real part, the exponentials remain bounded, but do not converge
to 0 (stable behaviour). If the matrix A is not diagonalizable, i.e. the algebraic and
the geometric multiplicity of an eigenvalue do not coincide, one should recall the Jordan
form. In this case, some polynomial terms may multiply the exponential ones on
the diagonal: this can lead to unstable behaviour (stable vs. unstable because of an eigenvalue
on the imaginary axis). For the rigorous proof of the Theorem, go to https://en.wikibooks.org/
wiki/Control_Systems/State-Space_Stability.

1.2.2 Analysis
Which tools do we already know in order to analyze nominal stability? In the course
Control Systems I you learned about

• Root locus. In order to recall the root locus method, have a look at Example 4.15,
page 123 in [2].

• Bode diagram: the Bode diagram is a frequency-explicit representation of the
magnitude |L(jω)| and the phase ∠(L(jω)) of a complex number L(jω). For graphical
reasons, one uses decibels (dB) as the unit for the magnitude and degrees as the
unit for the phase. As a reminder, the conversion reads

X_dB = 20 · log10(X),    X = 10^{X_dB/20}.    (1.31)

Moreover, stable and unstable poles and zeros have specific consequences on the
Bode diagram:
– Poles cause a slope of −20 dB/decade in the magnitude:

  Pole        stable           unstable
  Magnitude   −20 dB/decade    −20 dB/decade
  Phase       −90°             +90°

– Zeros cause a slope of +20 dB/decade in the magnitude:

  Zero        stable           unstable
  Magnitude   +20 dB/decade    +20 dB/decade
  Phase       +90°             −90°

An example of a Bode diagram is depicted in Figure 3.

Figure 3: Example of a Bode diagram.

• Nyquist diagram: the Nyquist diagram is a frequency implicit representation of


the complex number L(jω) in the complex plane. An example of a Nyquist diagram
is shown in Figure 4.
Remark. In order to draw the Nyquist diagram, some useful limits can be computed:

lim_{ω→0} L(jω),    lim_{ω→∞} L(jω),    lim_{ω→∞} ∠L(jω).    (1.32)

• Nyquist theorem: a closed-loop system T(s) is asymptotically stable if

n_c = n_+ + n_0/2    (1.33)

holds, where

– n_c: number of mathematically positive (counterclockwise) encirclements of the
critical point −1 by L(jω).
– n_+: number of unstable poles of L(s) (Re(π) > 0).
– n_0: number of marginally stable poles of L(s) (Re(π) = 0).

Figure 4: Example of a Nyquist diagram.
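Both diagrams can be generated directly in MATLAB (Control System Toolbox). A minimal sketch, using an arbitrary example loop gain that is not taken from the text:

s = tf('s');
L = 10/((s + 1)*(s + 10));    % assumed example loop gain
figure; bode(L); grid on;     % frequency-explicit magnitude/phase plot
figure; nyquist(L);           % frequency-implicit plot of L(jw) in the complex plane
margin(L)                     % gain and phase margins read off the Bode diagram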

1.2.3 Synthesis: Loop Shaping


Plant inversion
As seen in the feedforward example, this method is not suitable for non-minimum phase
plants and for unstable plants: in those cases it would lead to non-minimum phase or
unstable controllers. This method is suitable for simple systems for which the following holds:

• The plant is asymptotically stable.

• The plant is minimum phase.

The method is then based on a simple step:

L(s) = C(s) · P(s)  ⇒  C(s) = L(s) · P(s)^{-1}.    (1.34)

The choice of the loop gain is free: it can be chosen such that it meets the desired
specifications.

Loop shaping for Non-minimum Phase systems


A non-minimum phase system initially responds in the wrong direction: after a change in
the input, the output first moves the wrong way, that is, the system initially lies. Our
controller should therefore be patient, and for this reason one should use a slow control
system. This is obtained with a crossover frequency that is smaller than the non-minimum
phase zero. One begins the controller design with a PI controller, which has the form

C(s) = kp · (Ti · s + 1)/(Ti · s).    (1.35)

The parameters kp and Ti can be chosen such that the loop gain L(s) meets the known
specifications. One can achieve better robustness with Lead/Lag elements of the form

C(s) = (T · s + 1)/(α · T · s + 1),    (1.36)

where α, T ∈ R+. One can understand the Lead and the Lag elements as

• α < 1: Lead-Element:

– Phase margin increases.


– Loop gain increases.

• α > 1: Lag-Element:

– Phase margin decreases.


– Loop gain decreases.

As one can see in Figure 5 and Figure 6, the maximal benefits are reached at frequencies
(ω̂), where the drawbacks are not yet fully developed.

Figure 5: Bodeplot of the Lead Element

The element's parameters can be calculated as

α = (√(tan²(ϕ̂) + 1) − tan(ϕ̂))² = (1 − sin(ϕ̂))/(1 + sin(ϕ̂))    (1.37)

and

T = 1/(ω̂ · √α).    (1.38)

Figure 6: Bodeplot of the Lag Element

where ω̂ is the desired center frequency and ϕ̂ = ϕnew − ϕ is the desired maximum
phase shift (in rad).
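Equations (1.37) and (1.38) translate directly into a few lines of MATLAB. This is a minimal sketch; the desired phase lift of 40° at ω̂ = 5 rad/s is an arbitrary assumption chosen only for illustration.

phi_hat = 40*pi/180;                               % desired maximum phase shift [rad]
w_hat   = 5;                                       % desired center frequency [rad/s]
alpha   = (1 - sin(phi_hat))/(1 + sin(phi_hat));   % Equation (1.37)
T       = 1/(w_hat*sqrt(alpha));                   % Equation (1.38)
s       = tf('s');
C_lead  = (T*s + 1)/(alpha*T*s + 1);               % lead element, Equation (1.36)
bode(C_lead); grid on;                             % maximum phase lift occurs at w_hat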

The classic loop-shaping method reads:


1. Design of a PI(D) controller.

2. Add Lead/Lag elements where needed2

3. Set the gain of the controller kp such that we reach the desired crossover frequency.

Loop shaping for unstable systems


Since the Nyquist theorem must always hold, one has to design the controller such that
n_c = n_+ + n_0/2 is satisfied. Keep in mind: stable poles decrease the
phase by 90° and minimum phase zeros increase the phase by 90°.

Realizability
Once the controller is designed, one has to check whether it is really feasible and possible to
implement, that is, whether its number of poles is equal to or larger than its number of zeros.
If that is not the case, one has to add poles at high frequencies, such that
they don't affect the system near the crossover frequency. One could e.g. add to a PID
controller a roll-off term as

C(s) = kp · (1 + 1/(Ti · s) + Td · s) · 1/((τ · s + 1)²),    (1.39)

where the first factor is the PID controller and the second factor is the roll-off term.

2
L(jω) often does not meet the desired requirements.

1.2.4 Performance
Under performance, one can understand two specific tasks:
• Regulation/disturbance rejection: keep a setpoint despite disturbances, i.e.
keep y(t) at r(t). As an example, imagine trying to keep your
Duckiebot driving at a constant speed towards a cooling fan.
• Reference tracking: reference following, i.e. let y(t) track r(t). As an example,
imagine a luxury Duckiebot which carries Duckiecustomers: a temperature controller
tracks the different temperatures which the different Duckiecustomers may
want to have in the Duckiebot.

1.2.5 Robustness
All models are wrong, but some of them are useful.    (1.40)

A control system is said to be robust when it is insensitive to model uncertainties. But
why should a model have uncertainties? Essentially, for the following reasons:
• Aging: the model that was good a year ago may not be good anymore. As an example,
think of wheel deterioration, which could cause slip in a Duckiebot.
• Poor system identification: there are entire courses dedicated to the art of
system modeling. One cannot avoid making assumptions, which simplify
the real system to something that does not describe it perfectly.

1.3 The Bode’s Integral Formula


As we have learned in the previous section, a control system must satisfy specific performance
conditions on the sensitivity functions (also called Gang of Four). As we have seen,
the sensitivity function S refers to the disturbance attenuation and relates the tracking
error e to the reference signal. As stated in the previous section, one wants the sensitivity
to be small over the range of frequencies where a small tracking error and good disturbance
rejection are desired. Let's introduce the next concepts with an example:
Example 4. (11.10 Murray) We consider a closed-loop system with loop transfer function

L(s) = P(s)C(s) = k/(s + 1),    (1.41)

where k is the gain of the controller. Computing the sensitivity function for this loop
transfer function results in

S(s) = 1/(1 + L(s)) = 1/(1 + k/(s + 1)) = (s + 1)/(s + 1 + k).    (1.42)

By looking at the magnitude of the sensitivity function, one gets

|S(jω)| = √((1 + ω²)/(1 + 2k + k² + ω²)).    (1.43)

One notes, that this magnitude |S(jω)| < 1 for all finite frequencies and can be made as
small as desired by choosing a sufficiently large k.
Theorem 2. Bode's integral formula. Assume that the loop transfer function L(s)
of a feedback system goes to zero faster than 1/s as s → ∞, and let S(s) be the sensitivity
function. If the loop transfer function has poles p_k in the right half plane, then the
sensitivity function satisfies the following integral:

∫_0^∞ log|S(jω)| dω = ∫_0^∞ log (1/|1 + L(jω)|) dω = π Σ_k p_k.    (1.44)

This is usually called the principle of conservation of dirt.


What does this mean?

• Low sensitivity is desirable across a broad range of frequencies: it implies disturbance
rejection and good tracking.

• However much dirt we remove at some frequencies, the same amount needs to be added
at other frequencies. This is also called the waterbed effect.

This is summarized in Figure 7.

Figure 7: Waterbed Effect.
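The waterbed effect can be verified numerically. The following is a minimal MATLAB sketch, assuming the loop gain L(s) = k/(s + 1)² (an arbitrary example not taken from the text, chosen so that L decays faster than 1/s and has no unstable poles, so the integral in Equation (1.44) must be zero):

k = 10;
w = logspace(-4, 5, 200000);           % dense frequency grid [rad/s]
Ljw = k./((1i*w + 1).^2);              % loop gain on the imaginary axis
S = 1./(1 + Ljw);                      % sensitivity
I = trapz(w, log(abs(S)));             % numerical Bode integral (natural logarithm)
disp(I)                                % approximately 0: dirt removed = dirt added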

Theorem 3. (Second waterbed formula) Suppose that L(s) has a single real RHP-zero z
or a complex conjugate pair of zeros z = x ± jy, and has Np RHP-poles pi. Let p̄i denote
the complex conjugate of pi. Then, for closed-loop stability, the sensitivity function must
satisfy

∫_0^∞ ln|S(jω)| · w(z, ω) dω = π · ln ∏_{i=1}^{Np} |(pi + z)/(p̄i − z)|,    (1.45)

where

w(z, ω) = 2z/(z² + ω²)                                      if z is a real zero,
w(z, ω) = x/(x² + (y − ω)²) + x/(x² + (y + ω)²)             if z is a complex zero.    (1.46)
Summarizing, unstable poles close to RHP-zeros make a plant difficult to control. These
weighting functions make the argument of the integral negligible at ω > z. A RHP-zero
reduces the frequency range where we can distribute dirt, which implies a higher peak for
S(s) and hence disturbance amplification.

1.4 Examples
Example 5. The dynamic equations of a system are given as

ẋ1(t) = x1(t) − 5x2(t) + u(t),
ẋ2(t) = −2x1(t),
ẋ3(t) = −x2(t) − 2x3(t),    (1.47)
y(t) = 3x3(t).

(a) Draw the Signal Diagram of the system.

(b) Find the state space description of the above system.


Solution.

(a) The Signal Diagram reads

Figure 8: Signal Diagram of the system.

(b) The state space description has the following matrices:

A = [1 −5 0; −2 0 0; 0 −1 −2],    b = [1; 0; 0],    c = [0 0 3],    d = 0.    (1.48)
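A minimal MATLAB sketch to double-check the matrices of Equation (1.48), e.g. by building the state space model and reading off its transfer function and poles:

A = [1 -5 0; -2 0 0; 0 -1 -2];
b = [1; 0; 0];
c = [0 0 3];
d = 0;
sys = ss(A, b, c, d);      % state space description of Example 5
tf(sys)                    % corresponding transfer function
eig(A)                     % system poles (eigenvalues of A)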


Example 6. Some friends from a new ETH startup called SpaghETH want you to
help them linearize the system which describes their ultimate invention. The company
is active in the food truck market and sells pasta on the Polyterrasse every day.
They decided to automate the blending of the pasta in the water by carefully optimizing
its operation. A sketch of the revolutionary system is shown in Figure 9.

Figure 9: Sketch of the system (point mass m1 attached to two springs with constant k; bar with mass m2, length L, and inertia Θ; coordinates x and θ).

The blender is modeled as a bar of mass m2, length L, and moment of inertia (w.r.t. the
center of mass) Θ = (1/12)·m2·L². The blender is attached to a point mass m1. In order to
deal with possible vibrations that might occur in the system, the mass is attached to two
springs with spring constant k.
The equations of motion of the system are given by

(m1 + m2)·ẍ + (1/2)·m2·L·(−θ̇²·sin(θ)) + 2k·x = 0,
m2·(L²/12)·θ̈ + (1/2)·m2·ẋ·L·θ̇·sin(θ) + m2·g·(L/2)·sin(θ) = 0.    (1.49)

a) How would you choose the state space vector in order to linearize this system? Write
the system in a new form, with the chosen state space vector.

b) Linearize this system around the equilibrium point where all states are zero except
ẋ(0) = t ∈ R, which is constant, in order to find the matrix A.
Hint: Note that no input and no output are considered here, i.e. just the computa-
tions for the matrix A are required.


Solution.

a) The variables x and θ arise from the two equations of motion. Since the equations
of motion are of order two with respect to these variables, we augment the state and
also consider ẋ and θ̇. Hence, the state space vector s(t) reads

s(t) = [s1(t); s2(t); s3(t); s4(t)] = [x(t); θ(t); ẋ(t); θ̇(t)].    (1.50)

By re-writing the equations of motion with the state space vector one gets

ṡ(t) = [ẋ(t); θ̇(t); ẍ(t); θ̈(t)]
     = [ s3(t);
         s4(t);
         (1/(m1 + m2)) · ((1/2)·m2·L·s4(t)²·sin(s2(t)) − 2k·s1(t));
         −(1/L) · (6·s3(t)·s4(t)·sin(s2(t)) + 6g·sin(s2(t))) ] =: f.    (1.51)

b) The linearization of the system reads

A = ∂f/∂s |_{s_eq}
  = [ 0                0                                       1                  0;
      0                0                                       0                  1;
      −2k/(m1+m2)      (1/2)·m2·L·s4²·cos(s2)/(m1+m2)          0                  m2·L·s4·sin(s2)/(m1+m2);
      0                −(6·s3·s4·cos(s2) + 6g·cos(s2))/L       −6·s4·sin(s2)/L    −6·s3·sin(s2)/L ]  evaluated at s_eq
  = [ 0                0         1   0;
      0                0         0   1;
      −2k/(m1+m2)      0         0   0;
      0                −6g/L     0   0 ].    (1.52)
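The Jacobian in Equation (1.52) can also be obtained with MATLAB's Symbolic Math Toolbox. This is a minimal sketch; the symbols and the equilibrium values mirror the solution above.

syms s1 s2 s3 s4 m1 m2 L k g real
f = [s3;
     s4;
     (0.5*m2*L*s4^2*sin(s2) - 2*k*s1)/(m1 + m2);
     -(6*s3*s4*sin(s2) + 6*g*sin(s2))/L];
A_sym = jacobian(f, [s1; s2; s3; s4]);               % Jacobian of the dynamics
A_eq  = simplify(subs(A_sym, [s1 s2 s4], [0 0 0]))   % evaluate at the equilibrium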


Example 7. You are given the following matrix:

A = [−3 4 −4; 0 5 −8; 0 4 −7].    (1.53)

a) Find the eigenvalues of A.

b) Find the eigendecomposition of matrix A, i.e. compute its eigenvectors and determine
T and D s.t. A = T D T^{-1}.

c) You are given a system of the form

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t).    (1.54)

Can you conclude something about the stability of the system?

d) What if you have

A = [−1 0 0; −1 0 0; −1 1 0]    (1.55)

instead?

Solution.

a) The eigenvalues of A should fulfill

det(A − λI) = 0. (1.56)

It follows

det(A − λI) = det [−3−λ, 4, −4; 0, 5−λ, −8; 0, 4, −7−λ]
            = (−3 − λ) · det [5−λ, −8; 4, −7−λ]    (1.57)
            = −(3 + λ) · [(5 − λ) · (−7 − λ) + 32]
            = −(3 + λ)² · (λ − 1).

This means that matrix A has eigenvalues

λ1,2 = −3, λ3 = 1. (1.58)

b) In order to compute the eigendecomposition of A, we need to compute its eigenvec-


tors with respect to its eigenvalues. The eigenvectors vi should fulfill

(A − λI)vi = 0. (1.59)

It holds:

• Eλ1 = E−3: From (A − λ1 I) · x = 0 one gets the linear system of equations

  [0 4 −4 | 0; 0 8 −8 | 0; 0 4 −4 | 0].
Using the first row as reference and subtracting it from the other two rows,
one gets the form

  [0 4 −4 | 0; 0 0 0 | 0; 0 0 0 | 0].
Since one has two zero rows, one can introduce two free parameters. Let x1 = s,
x2 = t, s, t ∈ R. Using the first row, one can recover x3 = x2 = t. This defines
the first eigenspace, which reads

  E_{−3} = span{ [1; 0; 0], [0; 1; 1] }.    (1.60)

Note that since we have introduced two free parameters, the geometric multi-
plicity of λ1,2 = −3 is 2.

• Eλ2 = E1: From (A − λ2 I) · x = 0, one gets the linear system of equations

  [−4 4 −4 | 0; 0 4 −8 | 0; 0 4 −8 | 0].
Subtracting the second row from the third row results in the form

  [−4 4 −4 | 0; 0 4 −8 | 0; 0 0 0 | 0].
Since one has a zero row, one can introduce a free parameter. Let x3 = u,
u ∈ R. It follows x2 = 2u and x1 = u. The second eigenspace hence reads

  E_1 = span{ [1; 2; 1] }.    (1.61)
Note that since we have introduced one free parameter, the geometric multi-
plicity of λ3 = 1 is 1.

Since the algebraic and geometric multiplicities coincide for every eigenvalue of A, the
matrix is diagonalizable. With the computed eigenspaces, one can build the matrix
T as

T = [1 0 1; 0 1 2; 0 1 1],    (1.62)

and D as a diagonal matrix with the eigenvalues on the diagonal:

D = [−3 0 0; 0 −3 0; 0 0 1].    (1.63)

These T and D are guaranteed to satisfy A = T D T^{-1}.
c) Because of λ3 = 1 > 0 one can conclude that the system is unstable in the sense of
Lyapunov.
d) Because the new matrix A contains only 0 elements above the diagonal, one can
clearly see that its eigenvalues are
λ1 = −1, λ2,3 = 0. (1.64)
The eigenvalue λ1 = −1 leads to asymptotically stable behaviour. The eigenvalue
λ2,3 = 0 has algebraic multiplicity of 2, which means that, in order to have a
marginally stable system, its geometric multiplicity should be 2 as well. It holds
• Eλ2,3 = E0: From (A − λ2,3 I) · x = 0, one gets the linear system of equations

  [−1 0 0 | 0; −1 0 0 | 0; −1 1 0 | 0],

which clearly has x1 = 0, x2 = 0 and x3 = t ∈ R. Since we introduced
only one free parameter, the geometric multiplicity of this eigenvalue is only 1,
which means that the system is unstable.
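These hand computations can be cross-checked numerically in MATLAB; a minimal sketch for the two matrices of this example:

A1 = [-3 4 -4; 0 5 -8; 0 4 -7];
[T, D] = eig(A1);                   % columns of T are eigenvectors, D holds eigenvalues
% geometric multiplicity of lambda: size(A,1) - rank(A - lambda*eye(3))
gm_m3 = 3 - rank(A1 + 3*eye(3))     % = 2 for lambda = -3 -> diagonalizable

A2 = [-1 0 0; -1 0 0; -1 1 0];
eig(A2)                             % eigenvalues -1, 0, 0
gm_0 = 3 - rank(A2)                 % = 1 < 2 -> Jordan block -> unstable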


Example 8. Assume that the standard feedback control system structure depicted in
Figure 10 is given.

Figure 10: Standard feedback control system structure (simplified w.r.t. the lecture
notes; controller C(s), plant P(s), signals r, e, u, w, d, y).

a) Assume that R(s) = 0. What T (s) and S(s) would you prefer? Is this choice
possible?


Solution.

a) If the reference signal R(s) is 0 for all times, the desired output signal Y (s) should
be zero for all times as well. Thus, referring to the closed loop dynamics

Y (s) = S(s) · (D(s) + P (s) · W (s)) + T (s) · (R(s) − N (s)), (1.65)

the preferred choice would be

T (s) = S(s) = 0. (1.66)

In this case, all the disturbances and noise would be suppressed. However, this
choice is not possible. This can be easily checked by looking at the constraint

S(s) + T(s) = 1/(1 + L(s)) + L(s)/(1 + L(s)) = (1 + L(s))/(1 + L(s)) = 1.    (1.67)

This result has a key importance for the following discussions. In fact, at a fixed
frequency s, either S(s) or T (s) can be 0 but not both. In other words, it is not
possible to suppress both disturbances and noise in the same frequency band.


2 Digital Control
2.1 Signals and Systems
A whole course is dedicated to this topic (see Signals and Systems of Professor D'Andrea).
A signal is a function of time that represents a physical quantity.
Continuous-time signals are described by a function x(t) defined for every (continuous) time instant.
Discrete-time signals differ from continuous-time ones because of a sampling procedure:
computers don't understand the concept of continuous time and therefore sample the
signals, i.e. measure the signal's information at specific time instants. Discrete-time signals
are described by a function

x[n] = x(n · Ts),    (2.1)

where Ts is the sampling time. The sampling frequency is defined as fs = 1/Ts.
One can understand the difference between the two descriptions by looking at Figure 11.

Figure 11: Continuous-time versus discrete-time representation of a signal x(t).

Advantages of Discrete-Time analysis


• Calculations are easier. Moreover, integrals become sums and differentiations be-
come finite differences.

• One can implement complex algorithms.

Disadvantages of Discrete-Time analysis


• The sampling introduces a delay in the signal (≈ e^{−s·Ts/2}).

• The information between two samples, that is between x[n] and x[n + 1], is lost.

Every controller which is implemented on a microprocessor is a discrete-time system.


2.2 Discrete-Time Control Systems


Nowadays, control systems are implemented on microcontrollers or microprocessors
in discrete time and only rarely (see the lecture Elektrotechnik II for an example) in
continuous time. Although this makes the processing faster and easier, the information
is sampled and there is a certain loss of data. But how much information loss can we tolerate?
What is acceptable and what is not? The concept of aliasing will help
us understand that.

2.2.1 Aliasing
If the sampling frequency is chosen too low, i.e. one measures fewer times per second, the
signal can become poorly determined and the loss of information is too big to reconstruct
it uniquely. This situation is called aliasing and one can find many examples of it in
the real world. Let's have a look at an easy example:

Example 9. You are finished with your summer's exam session and you are flying to
Ibiza, to finally enjoy the sun after a summer spent at ETH. You decide to film the
turbine of the plane because, although you are on holiday, you have an engineer's spirit.
You land in Ibiza and, as you get into your hotel room, you have a look at your
footage. The rotation of the turbine's blades you observe looks different from what it is supposed
to be, and since you haven't drunk yet, there must be some scientific reason. In fact, the
sampling frequency of your phone camera is much lower than the turning frequency of
the turbine: this results in a loss of information and hence in a wrong perception of what
is going on.

Let’s have a more mathematical approach. Let’s assume a signal

x1 (t) = cos(ω · t). (2.2)

After discretization, the sampled signal reads

x1 [n] = cos(ω · Ts · n) = cos(Ω · n), Ω = ω · Ts . (2.3)

Let's assume a second signal

x2(t) = cos((ω + 2π/Ts) · t),    (2.4)

whose frequency

ω2 = ω + 2π/Ts    (2.5)

is given. Using the periodicity of the cosine function, the discretization of this second signal
reads

x2[n] = cos((ω + 2π/Ts) · Ts · n)
      = cos(ω · Ts · n + 2π · n)    (2.6)
      = cos(ω · Ts · n)
      = x1[n].

Although the two signals have different frequencies, they are equal when discretized. For
this reason, one has to define an interval of good frequencies, where aliasing doesn't occur.
In particular, it must hold that

|ω| < π/Ts    (2.7)

or

f < 1/(2 · Ts)  ⇔  fs > 2 · fmax.    (2.8)

The maximal accepted frequency is f = 1/(2 · Ts) and is called the Nyquist frequency. In order
to ensure good results, one uses in practice a factor of 10:

f < 1/(10 · Ts)  ⇔  fs > 10 · fmax.    (2.9)

For control systems, with respect to the crossover frequency ωc, the sampling frequency should satisfy

fs ≥ 10 · ωc/(2π).    (2.10)
2.2.2 Discrete-time Control Loop Structure


The discrete-time control loop structure is depicted in Figure 12. This is composed of

Figure 12: Control Loop with AAF.

different elements, which we list and describe in the following paragraphs.

Anti Aliasing Filter (AAF)


In order to solve this problem, an Anti-Aliasing Filter (AAF) is used. The Anti-Aliasing
Filter is an analog filter and not a discrete one: we want to eliminate
unwanted frequencies before sampling, because afterwards it is too late (refer to Figure 12).
But how can one define unwanted frequencies? Those are normally the higher
frequencies of a signal3. Because of that, one normally uses a low-pass filter as AAF.
This type of filter lets low frequencies pass and blocks higher ones4. The mathematical
formulation of a first-order low-pass filter is given by

lp(s) = k/(τ · s + 1),    (2.11)

where k is the gain and τ is the time constant of the system. Such a filter has a problematic
drawback: it introduces additional unwanted phase that can lead to unstable
behaviour.
3
Keep in mind: high signal frequency means problems by lower sampling frequency!
4
This topic is exhaustively discussed in the course Signals and Systems, offered in the fifth semester
by Prof. D’Andrea.


Analog to Digital Converter (ADC)


At each discrete time step t = k · T the ADC converts a voltage e(t) to a digital number
following a sampling frequency.

Microcontroller (µP )
This is a discrete-time controller that uses the sampled discrete signal and gives back a
discrete output.

Digital to Analog Converter (DAC)


In order to convert the signal back, the DAC applies a zero-order hold (ZOH). This
introduces an extra delay of T/2 (refer to Figure 13).

Figure 13: Zero-Order-Hold.


2.3 Controller Discretization/Emulation


In order to understand this concept, we have to introduce the concept of z-transform.

2.3.1 The z-Transform


From the Laplace Transform to the z−transform
The Laplace transform is an integral transform which takes a function of a real variable
t to a function of a complex variable s. Intuitively, for control systems t represents time
and s represents frequency.

Definition 7. The one-sided Laplace transform of a signal x(t) is defined as

L(x(t)) = X(s) = x̃(s) = ∫_0^∞ x(t) e^{−st} dt.    (2.12)

Because of its definition, the Laplace transform is used to consider continuous-time
signals/systems. In order to deal with discrete-time systems, one must derive its discrete
analogue.

Example 10. Consider x(t) = cos(ωt). The Laplace transform of such a signal reads

L(cos(ωt)) = ∫_0^∞ e^{−st} cos(ωt) dt
           = [−(1/s) e^{−st} cos(ωt)]_0^∞ − (ω/s) ∫_0^∞ e^{−st} sin(ωt) dt
           = 1/s − (ω/s) · ( [−(1/s) e^{−st} sin(ωt)]_0^∞ + (ω/s) ∫_0^∞ e^{−st} cos(ωt) dt )    (2.13)
           = 1/s − (ω²/s²) · L(cos(ωt)).

From this equation, one has

L(cos(ωt)) = s/(s² + ω²).    (2.14)
Some of the known Laplace transforms are listed in Table 1.
Laplace transforms receive as inputs functions which are defined in continuous time. In
order to analyze discrete-time systems, one must derive their discrete analogue. Discrete-time
signals x(kT) = x[k] are obtained by sampling a continuous-time function x(t). A
sample of a function is its ordinate at a specific time, called the sampling instant, i.e.

x[k] = x(tk),    tk = t0 + kT,    (2.15)

where T is the sampling period. A sampled function can be expressed through the
multiplication of a continuous function and a Dirac comb (see Definition 8), i.e.

x[k] = x(t) · D(t),    (2.16)

where D(t) is a Dirac comb.

Definition 8. A Dirac comb, also known as sampling function, is a periodic distribution
constructed from Dirac delta functions and reads

D(t) = Σ_{k=−∞}^{∞} δ(t − kT).    (2.17)

Remark. An intuitive explanation is that this function is nonzero only at the time instants t = kT
and zero everywhere else. Since k is an integer, i.e. k = −∞, . . . , ∞, applying this function to
a continuous-time signal amounts to considering the information of that signal spaced with
the sampling time T.

Imagine having a continuous-time signal x(t) and sampling it with a sampling period T.
The sampled signal can be described with the help of a Dirac comb as

xm(t) = x(t) · Σ_{k=−∞}^{∞} δ(t − kT)
      = Σ_{k=−∞}^{∞} x(kT) · δ(t − kT)    (2.18)
      = Σ_{k=−∞}^{∞} x[k] · δ(t − kT),

where we denote x[k] as the k-th sample of x(t). Let’s compute the Laplace transform of
the sampled signal:

Xm(s) = L(xm(t))
   (a) = ∫_0^∞ xm(t) e^{−st} dt
       = ∫_0^∞ Σ_{k=−∞}^{∞} x[k] · δ(t − kT) e^{−st} dt
   (b) = Σ_{k=−∞}^{∞} x[k] · ∫_0^∞ δ(t − kT) e^{−st} dt    (2.19)
   (c) = Σ_{k=−∞}^{∞} x[k] e^{−ksT},

where we used

(a) This is an application of Definition 7.

(b) The sum and the integral can be switched because the function f (t) = δ(t − kT )e−st
is non-negative. This is a direct consequence of the Fubini/Tonelli’s theorem. If you
are interested in this, have a look at https://en.wikipedia.org/wiki/Fubini%
27s_theorem.

(c) This result is obtained by applying the Dirac integral property, i.e.
∫_0^∞ δ(t − kT) e^{−st} dt = e^{−ksT}.    (2.20)

By introducing the variable z = e^{sT}, one can rewrite Equation (2.19) as

Xm(z) = Σ_{k=−∞}^{∞} x[k] z^{−k},    (2.21)

which is defined as the z-transform of the discrete-time signal. We have now found the
relation between the z-transform and the Laplace transform and are able to apply the
concept to any discrete-time signal.
Definition 9. The bilateral z-transform of a discrete-time signal x[k] is defined as

X(z) = Z(x[k]) = Σ_{k=−∞}^{∞} x[k] z^{−k}.    (2.22)

Some of the known z-transforms are listed in Table 1.

x(t)        L(x(t))(s)      x[k]         X(z)
1           1/s             1            1/(1 − z^{-1})
e^{−at}     1/(s + a)       e^{−akT}     1/(1 − e^{−aT} z^{-1})
t           1/s²            kT           T z^{-1}/(1 − z^{-1})²
t²          2/s³            (kT)²        T² z^{-1}(1 + z^{-1})/(1 − z^{-1})³
sin(ωt)     ω/(s² + ω²)     sin(ωkT)     z^{-1} sin(ωT)/(1 − 2 z^{-1} cos(ωT) + z^{-2})
cos(ωt)     s/(s² + ω²)     cos(ωkT)     (1 − z^{-1} cos(ωT))/(1 − 2 z^{-1} cos(ωT) + z^{-2})

Table 1: Known Laplace and z-transforms.

Properties
In the following we list some of the most important properties of the z−transform. Let
X(z), Y (z) be the z−transforms of the signals x[k], y[k].
1. Linearity
Z (ax[k] + by[k]) = aX(z) + bY (z). (2.23)

Proof. It holds

X
Z (ax[k] + by[k]) = (ax[k] + by[k]) z −k
k=−∞
∞ ∞
X X (2.24)
= ax[k]z −k + by[k]z −k
k=−∞ k=−∞

= aX(z) + bY (z).

36
Gioele Zardini Control Systems II FS 2018

2. Time shifting
Z (x[k − k0 ]) = z −k0 X(z). (2.25)

Proof. It holds

X
Z (x[k − k0 ]) = x[k − k0 ]z −k . (2.26)
k=−∞

Define m = k − k0 . It holds k = m + k0 and



X ∞
X
−k
x[k − k0 ]z = x[m]z −m z −k0
k=−∞ k=−∞ (2.27)
−k0
=z X(z).

3. Convolution ∗
Z (x[k] ∗ y[k]) = X(z)Y (z). (2.28)

Proof. Follows directly from the definition of convolution.

4. Reverse time  
1
Z (x[−k]) = X . (2.29)
z

Proof. It holds

X
Z (x[−k]) = x[−k]z −k
k=−∞
∞  −r
X 1 (2.30)
= x[r]
r=−∞
z
 
1
=X .
z

5. Scaling in z domain z 
k

Z a x[k] = X . (2.31)
a

Proof. It holds

X  z −k
k

Z a x[k] = x[k]
a
k=−∞ (2.32)
z 
=X .
a

37
Gioele Zardini Control Systems II FS 2018

6. Conjugation
Z (x∗ [k]) = X ∗ (z ∗ ). (2.33)

Proof. It holds

!∗
X
X ∗ (z) = x[k]z −k
k=−∞

(2.34)
X
∗ ∗ −k
= x [k](z ) .
k=−∞

Replacing z by z ∗ one gets the desired result.

7. Differentiation in z domain

Z (kx[k]) = −z X(z). (2.35)
∂z

Proof. It holds

∂ ∂ X
X(z) = x[k]z −k
∂z ∂z k=−∞

X ∂ −k
linearity of sum/derivative = x[k] z
k=−∞
∂z
∞ (2.36)
X
−k−1
= x[k](−k)z
k=−∞

1 X
=− kx[k]z −k ,
z k=−∞

from which the statement follows.

Approximations
In order to use this concept, often the exact solution is too complicated to compute and
not needed for an acceptable result. In practice, approximations are used. Instead of
considering the derivative as it is defined, one tries to approximate this via differences.
Given y(t) = ẋ(t), the three most used approximation methods are

• Euler forward:
x[k + 1] − x[k]
y[k] ≈ (2.37)
Ts
• Euler backward:
x[k] − x[k − 1]
y[k] ≈ (2.38)
Ts
• Tustin method:
y[k] − y[k − 1] x[k] − x[k − 1]
≈ (2.39)
2 Ts

38
Gioele Zardini Control Systems II FS 2018

1
Exact s= · ln(z) z = es·Ts
Ts
z−1
Euler forward s= z = s · Ts + 1
Ts
z−1 1
Euler backward s= z=
z · Ts 1 − s · Ts
Ts
2 z−1 1+s· 2
Tustin s= · z= Ts
Ts z + 1 1−s· 2

Table 2: Discretization methods and substitution.

The meaning of the variable z can change with respect to the chosen discretization ap-
proach. Here, just discretization results are presented. You can try derive the following
rules on your own. A list of the most used transformations is reported in Table 2. The
different approaches are results of different Taylor’s approximations5 :

• Euler Forward:
z = es·Ts ≈ 1 + s · Ts . (2.40)

• Euler Backward:
1 1
z = es·Ts = ≈ . (2.41)
e−s·Ts 1 − s · Ts
• Tustin: Ts Ts
es· 2 1+s· 2
z= Ts ≈ Ts
. (2.42)
e−s· 2 1−s· 2

In practice, the most used approach is the Tustin transformation, but there are cases
where the other transformations could be useful.
Example 11. You are given the differential relation
d
y(t) = x(t), x(0) = 0. (2.43)
dt
One can rewrite the relation in the frequency domain using the Laplace transform. Using
the property for derivatives
 
d
L f (t) = sL(f (t)) − f (0). (2.44)
dt
By Laplace transforming both sides of the relation and using the given initial condition,
one gets
Y (s) = sX(s). (2.45)
In order to discretize the relation, we sample with a generic sampling time T the signals.
Forward Euler’s method for the approximation of differentials reads
x((k + 1)T ) − x(kT )
ẋ(kT ) ≈ . (2.46)
T
5
As reminder: ex ≈ 1 + x.

39
Gioele Zardini Control Systems II FS 2018

The discretized relation reads


x((k + 1)T ) − x(kT )
y(kT ) = . (2.47)
T
In order to compute the z-transform of the relation, one needs to use its time shift property,
i.e.
Z (x((k − k0 )T )) = z −k0 Z(x(kT )). (2.48)
In this case, the shift is of -1 and transforming both sides of the relation results in

zX(z) − X(z) z−1


Y (z) = = X(z). (2.49)
T T
By using the relations of Equation (2.45) and Equation (2.49), one can write
z−1
s= . (2.50)
T

2.4 State Space Discretization


Starting from the continuous-time state space form

ẋ(t) = Ax(t) + Bu(t)


(2.51)
y(t) = Cx(t) + Du(t),

one wants to obtain the discrete-time state space representation

x[k + 1] = Ad x[k] + Bd u[k]


(2.52)
y[k] = Cd x[k] + Dd u[k].

By recalling that x[k + 1] = x((k + 1)T ), one can start from the solution derived for
continuous-time systems
Z t
At
x(t) = e x(0) + eAt
e−Aτ Bu(τ )dτ. (2.53)
0

By plugging into this equation t = (k + 1)T , one gets


Z (k+1)T
x((k + 1)T ) = e A(k+1)T
x(0) + e A(k+1)T
e−Aτ Bu(τ )dτ (2.54)
0

and hence Z kT
x(kT ) = e AkT
x(0) + eAkT
e−Aτ Bu(τ )dτ. (2.55)
0

Since we want to write x((k + 1)T ) in terms of x(kT ), we multiply all terms of Equation
(2.55) by eAT and rearrange the equation as
Z kT
e A(k+1)T
x(0) = e AT
x(kT ) − e A(k+1)T
e−Aτ Bu(τ )dτ. (2.56)
0

40
Gioele Zardini Control Systems II FS 2018

Substituting this result into Equation (2.54), one gets


Z kT Z (k+1)T
−Aτ
AT
x((k + 1)T ) = e x(kT ) − e A(k+1)T
e Bu(τ )dτ + e A(k+1)T
e−Aτ Bu(τ )dτ
0 0
Z (k+1)T
= eAT x(kT ) + eA(k+1)T e−Aτ Bu(τ )dτ
kT
Z (k+1)T
= eAT x(kT ) + eA[(k+1)T −τ ] Bu(τ )dτ
ZkT0
(a) = eAT x(kT ) − eAα Bdαu(kT ).
T
Z T
eAT x[k] +
= |{z} eAα Bdα u[k],
Ad | 0 {z }
Bd
(2.57)
where we used
(a) α = (k + 1)T − τ, dα = −dτ .
It follows that
Ad = eAT ,
Z T
Bd = eAα Bdα,
0 (2.58)
Cd = C,
Dd = D.

Example 12. Given the general state space for in Equation (2.51), the forward Euler
approach for differentials reads
x[k + 1] − x[k]
ẋ ≈ . (2.59)
Ts
Applying this to the generic state space formulation
ẋ(t) = Ax(t) + Bu(t)
(2.60)
y(t) = Cx(t) + Du(t),
one gets
x[k] − x[k − 1]
= Ax[k] + Bu[k]
Ts (2.61)
y[k] = Cx[k] + Du[k],
which results in
x[k + 1] = (I + Ts A) x[k] + Ts B u[k]
| {z } |{z}
Ad,f Bd,f
(2.62)
y[k] = |{z}
C x[k] + |{z}
D u[k].
Cd,f Dd,f

41
Gioele Zardini Control Systems II FS 2018

Example 13. You are given the system


   
1 −1 1
ẋ(t) = x(t) + u(t)
2 4 0 (2.63)

y(t) = 1 1 x(t).

(a) Find the discrete-time state space representation of the system using a sampling
time Ts = 1s, i.e. find Ad , Bd , Cd , Dd

42
Gioele Zardini Control Systems II FS 2018

Solution. In order to compute the exact discretization, we use the formulas derived in
class. For Ad , one has
Ad = eATs = eA . (2.64)
In order to compute the matrix exponential, one has to compute its eigenvalues, store
them in a matrix D, find its eigenvectors, store them in matrix T , find the diagonal form
and use the law
eA = T eD T −1 . (2.65)
First, we compute the eigenvalues of A. It holds
PA (λ) = det(A − λI)
 
1 − λ −1
= det
2 4−λ (2.66)
2
= λ − 5λ + 6
= (λ − 2) · (λ − 3).
Therefore, the eigenvalues are λ1 = 2 and λ2 = 3 and they have algebraic multiplicity 1.
We compute now the eigenvectors:
• Eλ1 = E2 : from (A − λ1 I)x = 0 one gets the system of equations
 
−1 −1 0
2 2 0
One can note that the second row is linear dependent with the first. We therefore have a
free parameter and the eigenspace for λ1 reads
n −1 o
E2 = . (2.67)
1
E2 has geometric multiplicity 1.
• Eλ2 = E3 : from (A − λ2 I)x = 0 one gets the system of equations
 
−2 −1 0
2 1 0
One notes that the first and the second row are linearly dependent. We therefore have a
free parameter and the eigenspace for λ2 reads
n −1 o
E3 = (2.68)
2
E3 has geometric multiplicity 1. Since the algebraic and geometric multiplicity concide
for every eigenvalue of A, the matrix is diagonalizable. With the computed eigenspaces,
one can build the matrix T as  
−1 −1
T = , (2.69)
1 2
and D as a diagonal matrix with the eigenvalues on the diagonal:
 
2 0
D= . (2.70)
0 3

43
Gioele Zardini Control Systems II FS 2018

It holds
 
−1 1 2 1
T = ·
(−2 + 1) −1 −1
  (2.71)
−2 −1
= .
1 1

Using Equation (2.65) one gets

Ad = eA
= T eD T −1
   2   
−1 −1 e 0 −2 −1 (2.72)
= · ·
1 2 0 e3 1 1
 
2e2 − e3 e2 − e3
= .
−2e2 + 2e3 −e2 + 2e3

For Bd holds
Z Ts
Bd = eAτ Bdτ
Z0 1
= eAτ Bdτ
Z0 1    2τ     
−1 −1 e 0 −2 −1 1
= · 3τ · · dτ
0 1 2 0 e 1 1 0
Z 1  2τ    (2.73)
−e −e3τ −2
= 2τ 3τ · dτ
0 e 2e 1
Z 1 
2e2τ − e3τ
= dτ
0 −2e2τ + 2e3τ
 2 e3 2 
e − 3 −3
= .
−e2 + 32 e3 + 13

Furthermore, one has Cd = C and Dd = D = 0.

44
Gioele Zardini Control Systems II FS 2018

2.5 Discrete-time Systems Stability


We want to investigate the stability conditions for the discrete-time system given as

x[k + 1] = Ad x[k] + Bd u[k]


(2.74)
y[k] = Cd x[k] + Dd u[k], x[0] = x0 .

As usual, we want to find conditions for wich the state x[k] does not diverge. The free
evolution (free means without input) can be written as

x[k + 1] = Ad x[k]. (2.75)

Starting from the initial state, one can write

x[1] = Ad x0
x[2] = A2d x0
.. (2.76)
.
x[k] = Akd x0 .

In order to analyze the convergence of this result, let’s assume that Ad ∈ Rn×n is diago-
nalizable and let’s rewrite Ad with the help of its diagonal form:

x[k] = Akd x0
k
= T DT −1 x0
!
(2.77)
= T D |T −1 −1
{zT} DT . . . T DT
−1

I
k −1
= TD T x0 .

where D is the matrix containing the eigenvalues of Ad and T is the matrix containing
the relative eigenvectors. One can rewrite this using the modal decomposition as
n
X
k −1
TD T x0 = αi λki vi , (2.78)
i=1

where vi are the eigenvectors relative to the eigenvalues λi and αi = T −1 x0 some coef-
ficients depending on the initial condition x0 . Considering any possible eigenvalue, i.e.
λi = ρi ejφi , one can write
n
X
k −1
TD T x0 = αi ρki ejφi k vi . (2.79)
i=1

It holds
|λi | = ρki |ejφi k |
(2.80)
= ρki .

This helps us defining the following cases:


• |λi | < 1 ∀i = 1, . . . , n: the free evolution converges to 0 and the system is asymp-
totically stable.

45
Gioele Zardini Control Systems II FS 2018

• |λi | ≤ 1 ∀i = 1, . . . , n and eigenvalues with unit modulus have equal geometric and
algebraic multiplicity: the free evolution converges (but not to 0) and the system is
marginally stable or stable.

• ∃i s.t. |λi | > 1: the free evolution diverges and the system is unstable.

Remark. The same analysis can be performed for non diagonalizable matrices Ad . The
same conditions can be derived, with the help of the Jordan diagonal form of Ad .

Example 14. You are given the dynamics


   
0 0 x10
x[k + 1] = x[k], x[0] = . (2.81)
1 12 x20
| {z }
Ad

Since Ad is a lower diagonal matrix, its eigenvalues lie in the diagonal, i.e. λ1 = 0, λ2 = 12 .
Since both eigenvalues satisfy |λi | < 1, the system is asymptotically stable.

2.6 Discrete Time Controller Synthesis


As you learned in class, there are two ways to discretize systems. The scheme in Figure
14 resumes them.

Figure 14: Emulation and Discrete time controller synthesis.

2.6.1 Emulation
In the previous chapter, we learned how to emulate a system. Let’s define the recipe for
such a procedure: Given a continous-time plant P (s):

1. Design a continuous-time controller for the continuous-time plant.

46
Gioele Zardini Control Systems II FS 2018

2. Choose a sampling rate that is at least twice (ten times in practice) the crossover
frequency.

3. If required, design an anti aliasing filter (AAF) to remove high frequency components
of the continuous signal that is going to be sampled.

4. Modify your controller to take into accout the phase lag introduced by the dis-
cretization (up to a sampling period delay) and the AAF.

5. Discretize the controller (e.g. use the Tustin method for best accuracy).

6. Check open loop stability. If the system is unstable, change the emulation method,
choose a faster sampling rate, or increase margin of phase.

7. Implement the controller.

2.6.2 Discrete-Time Synthesis


In this chapter we learn how to perform discrete time controller synthesis (also called
direct synthesis). The general situation is the following: a control loop is given as in
Figure 15. The continuous time transfer function G(s) is given.

Figure 15: Discrete-time control loop.

We want to compute the equivalent discrete-time transfer function H(z). The loop is
composed of a Digital-to-Analog Converter (DAC), the continuous-time transfer function
G(s) and of an Analog to Digital Converter (ADC). We aim reaching a structure that
looks like the one depicted in Figure 16.

yr ε(z) U (z) Y (z)


K(z) H(z)

n=0

Figure 16: Discrete Synthesis.

47
Gioele Zardini Control Systems II FS 2018

The first thing to do, is to consider an input to analyze. The usual choice for this type of
analysis is a unit-step of the form

u(kT ) = {. . . , 0, 1, 1, . . .}. (2.82)

Since the z-Transform is defined as



X
X(z) = x(n) · z −n (2.83)
n=0

one gets for u


U (z) = 1 + z −1 + z −2 + z −3 + . . . + z −n . (2.84)
This sum can be written as (see geometric series)
1
U (z) = . (2.85)
1 − z −1
For U (z) to be defined, this sum must converge. This can be verified by exploring the
properties of the geometric series.
Remark. Recall: sum of geometric series. Let Sn denote the sum over the first n
elements of a geometric series:

Sn = U0 + U0 · a + U0 · a2 + . . . + U0 · an−1
(2.86)
= U0 · (1 + a + a2 + . . . + an−1 ).

Then
a · Sn = U0 · (a + a2 + a3 + . . . + an ) (2.87)
and
Sn − a · Sn = U0 · (1 − an ), (2.88)
which leads to
1 − an
Sn = U0 · . (2.89)
1−a
From here, it can be shown that the limit for n going to infinity convergenges if and only
if the absolute value of a is smaller than one, i.e.
1
lim Sn = U0 · , iff |a| < 1. (2.90)
n→∞ 1−a
Therefore the limiting case |a| = 1 =: r is called radius of convergence. The according
convergence criterion is |a| < r.
H(z) contains the converters: at first, we have the digital -to-analog converter. The
Laplace-Transform of the unit-step reads generally
1
. (2.91)
s
Hence, the transfer function before the analog-to-digital converter reads

G(s)
. (2.92)
s

48
Gioele Zardini Control Systems II FS 2018

In order to consider the analog-to-digital converter, we have to apply the inverse Laplace
transfrom to get  
−1 G(s)
y(t) = L . (2.93)
s
Through a z− transform one can now get Y (z). It holds

Y (z) = Z (y(kT ))
(2.94)
  
−1 G(s)
=Z L .
s

The transfer function is then given as

Y (z)
H(z) = . (2.95)
U (z)

49
Gioele Zardini Control Systems II FS 2018

2.7 Examples
Example 15. A continuous-time system with the following transfer function is consid-
ered:
9
G(s) = . (2.96)
s+3
(a) Calculate the equivalent discrete-time transfer function H(z). The latter is com-
posed of a Digital-to-Analog Converter (DAC), the continuous-time transfer function
G(s) and an Analog-to-Digital Converter (ADC). Both converters, i.e. the DAC and
ADC, have a sampling time Ts = 1s.

(b) Calculate the static error if a proportional controller K(z) = kp is used and the
reference input yc,k is a step signal.
Hint: Heavyside with amplitude equal to 1.

50
Gioele Zardini Control Systems II FS 2018

Solution.
(a) Rather than taking into account all the individual elements which make up the
continuous-time part of the system (DAC, plant, ADC), in a first step, these el-
ements are lumped together and are represented by the discrete-time description
H(z). In this case, the discrete-time output of the system is given by

Y (z) = H(z) · U (z), (2.97)

where U (z) is the z−transform of the discrete input uk given to the system. There-
fore, the discrete-time representation of the plant is given by the ratio of the output
to the input
Y (z)
H(z) = . (2.98)
U (z)
For the sake of convenience, uk is chosen to be the discrete-time Heaviside function
(
uk [k] = 1, k ≥ 0
(2.99)
0, else.
This input function needs to be z−transformed. Recall the definition of the z-
transform ∞
X
X(z) = x(n) · z −n . (2.100)
n=0

and applying it to the above equation, with the input uk gives (uk [k] = 1 for k ≥ 0)

U (z) = X(uk )

X
= z −k
k=0 (2.101)
X∞
= (z −1 )k .
k=0

For U (z) to be defined, this sum must converge. Recalling the properties of geo-
metric series one can see
1
U (z) = , (2.102)
a − z −1
as long as the convergence criterion is satisfied, i.e. as long as |z −1 | < 1 or better
|z| > 1 (a = 1). This signal is then transformed to continuous time using a zero-
order hold DAC. The output of this transformation is again a Heaviside function
uh (t). Since the signal is now in continuous time, the Laplace transform is used to
for the analysis. The Laplace transform of the step function is well known to be
1
L(uh (t))(s) = = U (s). (2.103)
s
The plant output in continuous time is given by

Y (s) = G(s) · U (s)


G(s) . (2.104)
= .
s
51
Gioele Zardini Control Systems II FS 2018

After the plant G(s), the signal is sampled and transformed into discrete time once
more. Therefore, the z−transform of the output has to be calculated. However, the
signal Y (s) cannot be transformed directly, since it is expressed in the frequency
domain. Thus, first, it has to be transformed back into the time domain (i.e. into
y(t)) using the inverse Laplace transform, where it is then sampled every t = k · T .
The resulting series of samples {y[k]} is then transformed back into the z−domain,
i.e.    
−1 G(s)
Y (z) = X {L (kT )} . (2.105)
s
To find the inverse Laplace transform of the output, its frequency domain represen-
tation is decomposed into a sum of simpler functions

G(s) 9
=
s s · (s + 3)
α β
= + (2.106)
s s+3
s · (α + β) + 3 · α
= .
s · (s + 3)

The comparison of the numerators yields

α = 3, β = −3. (2.107)

and thus
G(s) 3 3
= −
s s s + 3  (2.108)
1 1
=3· − .
s s+3

Now the terms can be individually transformed with the result


 
−1 G(s)
= 3 · 1 − e−3t · uh (t)

L
s (2.109)
= y(t).

The z−transform of the output sampled at discrete time istants y(kT ) is given by
"∞ ∞
#
X X
X({y(kT )}) = 3 · z −k − e−3kT · z −k
" k=0

k=0

#
X X (2.110)
=3· z −1k − (e−3T · z −1 )k
k=0 k=0
= Y (z).

From above, the two necessary convergence criteria are known:

|z −1 | < 1 ⇒ |z| > 1


(2.111)
|e−3T · z −1 | ⇒ |z| > |e−3T |.

52
Gioele Zardini Control Systems II FS 2018

Using the above equations the output transfrom converges to (given that the two
convergence criteria are satisfied)
 
1 1
Y (z) = 3 · − . (2.112)
1 − z −1 1 − e−3T · z −1

Finally, the target transfer function H(z) is given by

Y (z)
H(z) =
U (z)
= (1 − z −1 ) · Y (z)
1 − z −1 (2.113)
 
=3· 1−
1 − e−3T · z −1
(1 − e−3T ) · z −1
=3· .
1 − e−3T · z −1

(b) From the signal flow diagram, it can be seen that the error ε(z) is composed of

ε(z) = Yc (z) − Y (z)


= Yc (z) − H(z) · K(z) · ε(z)
(2.114)
Yc (z)
= .
1 + kp · H(z)

The input yc (t) is a discrete step signal, for which the z-transform was calculated in
(a):
1
Yc (z) = . (2.115)
1 + z −1
Therefore, the error signal reads
1
1+z −1
ε(z) = −3T )·z −1 . (2.116)
1+3· kp · (1−e
1−e−3T ·z −1

To calculate the steady-state error, i.e. the error after infinite time, the discrete-time
final value theorem6 is used:

lim ε(t) = lim(1 − z −1 ) · ε(z), (2.117)


t→∞ z→1

but as z goes to 1, so does z −1 and thereforem 1 is substituted for each z −1 in ε(z)


and the stati error becomes
1
ε∞ = . (2.118)
1 + 3 · kp
Note that the error does not completely vanish but can be made smaller by increasing
kp . This is the same behaviour which would have been expected from a purely
proportional controller in continuous time. To drive the static error to zero, a
1
discrete-time integrator of the form Ti ·(1−z −1 ) would be necessary.

6
limt→∞ ε(t) = lims→0 s · ε(s)

53
Gioele Zardini Control Systems II FS 2018

Example 16. Together with your friends, you founded the new startup TACSII, which
is active in self-driving taxi services. You are given the dynamics of the demand for the
two different types of vehicles you offer: small vehicles (x1 (t)) and large vehicles (x2 (t)).
TACSI, a startup founded a year ago, helps you when you don’t have enough vehicles, by
giving you a multiple of e(t) extra vehicles when needed. Due to software limitations, for
the moment you can only mesaure the sum of small and large vehicles that are requested
to you. The dynamics read

ẋ1 (t) = 5x1 (t) − 6x2 (t) + e(t)


(2.119)
ẋ2 (t) = 3x1 (t) − 4x2 (t) + 2e(t)

a) Write down the state space description for the system, considering the extra vehicles
as an input to the system, i.e. find the system matrices A, B, C and D.

b) Discretize the system using the forward Euler approach and a sampling time Ts = 2s,
i.e. find Ad,f , Bd,f , Cd,f and Dd,f .

c) Discretize the system using exact discretization and a sampling time of Ts = 1s, i.e.
find Ad , Bd , Cd and Dd .

54
Gioele Zardini Control Systems II FS 2018

Solution.
a) The state space description reads
     
ẋ1 (t) 5 −6 1
ẋ = = x(t) + E(t)
ẋ2 (t) 3 −4 2
| {z } |{z}
A B (2.120)

y(t) = 1 1 x(t).
| {z }
C

One notes that for this example D = 0.

b) The forward Euler approach for differentials reads

x[k + 1] − x[k]
ẋ ≈ . (2.121)
Ts
Applying this to the generic state space formulation

ẋ(t) = Ax(t) + Bu(t)


(2.122)
y(t) = Cx(t) + Du(t),
one gets
x[k + 1] − x[k]
= Ax[k] + Bu[k]
Ts (2.123)
y[k] = Cx[k] + Du[k],

which results in
x[k + 1] = (I + Ts A) x[k] + Ts B u[k]
| {z } |{z}
Ad,f Bd,f
(2.124)
y[k] = |{z}
C x[k] + |{z}
D u[k].
Cd,f Dd,f

For our special case, it holds

Ad,f = (I + Ts A)
   
1 0 10 −12
= +
0 1 6 −8
 
11 −12
= ,
6 −7
Bd,f = Ts B (2.125)
 
2
= ,
4
Cd,f = C

= 1 1
Dd,f = D = 0,

where we used Ts = 2s.

55
Gioele Zardini Control Systems II FS 2018

c) In order to compute the exact discretization, we use the formulas derived in class.
For Ad , one has
Ad = eATs = eA . (2.126)
In order to compute the matrix exponential, one has to compute its eigenvalues,
store them in a matrix D, find its eigenvectors, store them in matrix T , find the
diagonal form and use the law

eA = T eD T −1 . (2.127)

First, we compute the eigenvalues of A. It holds

PA (λ) = det(A − λI)


 
5−λ −6
= det
3 −4 − λ (2.128)
2
=λ −λ−2
= (λ − 2) · (λ + 1).

Therefore, the eigenvalues are λ1 = 2 and λ2 = −1 and they have algebraic multi-
plicity 1. We compute now the eigenvectors:

• Eλ1 = E2 : from (A − λ1 I)x = 0 one gets the system of equations

 
3 −6 0
3 −6 0

One can note that the second row is identical to the first. We therefore have a free
parameter and the eigenspace for λ1 reads
n 2 o
E2 = . (2.129)
1

E2 has geometric multiplicity 1.

• Eλ2 = E−1 : from (A − λ2 I)x = 0 one gets the system of equations

 
6 −6 0
3 −3 0

One notes that the first and the second row are linearly dependent. We therefore
have a free parameter and the eigenspace for λ2 reads
n 1 o
E−1 = (2.130)
1

E−1 has geometric multiplicity 1. Since the algebraic and geometric multiplicity
concide for every eigenvalue of A, the matrix is diagonalizable. With the computed
eigenspaces, one can build the matrix T as
 
2 1
T = , (2.131)
1 1

56
Gioele Zardini Control Systems II FS 2018

and D as a diagonal matrix with the eigenvalues on the diagonal:


 
2 0
D= . (2.132)
0 −1

It holds
 
−1 1 1 −1
T = ·
(2 − 1) −1 2
  (2.133)
1 −1
=
−1 2

Using Equation (2.127) one gets

Ad = eA
= T eD T −1
   2   
2 1 e 0 1 −1 (2.134)
= · ·
1 1 0 e−1 −1 2
−1 −1
 2 2

2e − e −2e + 2e
= .
e2 − e−1 2e−1 − e2

For Bd holds
Z Ts
Bd = eAτ Bdτ
0
Z 1
= eAτ Bdτ
Z0 1    2τ     
2 1 e 0 1 −1 1
= · −τ · · dτ
0 1 1 0 e −1 2 2
Z 1  2τ −τ    (2.135)
2e e −1
= 2τ −τ · dτ
0 e e 3
Z 1
−2e2τ + 3e−τ

= dτ
0 −e2τ + 3e−τ
4 − e2 − 3e−1
 
= 7 e2 .
2
− 2 − 3e−1

Furthermore, one has Cd = C and Dd = D = 0.

57
Gioele Zardini Control Systems II FS 2018

Example 17.

(a) Choose all the signals that can be sampled without aliasing. The sampling time is
Ts = 1s.

 x(t) = cos(4π · t).


 x(t) = cos(4π · t + π).
 x(t) = 2 · cos(4π · t + π).
 x(t) = cos(0.2π · t).
 x(t) = cos(0.2π · t + π).
 x(t) = 3 · cos(0.2π · t + π).
 x(t) = cos(π · t).
 x(t) = cos(π · t + π).
 x(t) = 2 · cos(π · t + π).
 x(t) = cos(0.2π · t) + cos(4π · t).
 x(t) = sin(0.2π · t) + sin(0.4π · t).
 x(t) = 100 2π
P 
i=1 cos i+1 · t .

 x(t) = 100 2π
P 
i=1 cos i+2 · t .

(b) The signal

x(t) = 2 · cos(20 · π · t + π) + cos(40 · π · t) + cos(30 · π · t).

is sampled with sampling frequency fs . What is the minimal fs such that no aliasing
occurs?

58
Gioele Zardini Control Systems II FS 2018

Solution.

(a)  x(t) = cos(4π · t).


 x(t) = cos(4π · t + π).
 x(t) = 2 · cos(4π · t + π).
3 x(t) = cos(0.2π · t).

3 x(t) = cos(0.2π · t + π).

3 x(t) = 3 · cos(0.2π · t + π).

 x(t) = cos(π · t).
 x(t) = cos(π · t + π).
 x(t) = 2 · cos(π · t + π).
 x(t) = cos(0.2π · t) + cos(4π · t).
3 x(t) = sin(0.2π · t) + sin(0.4π · t).

 x(t) = 100 2π
P 
i=1 cos i+1 · t .

3 x(t) = P100 2π

 i=1 cos i+2
· t .
Explanation:
If one goes back to the definition of the ranges to ensure no aliasing occurs, one gets
the formula
1
f< .
2 · Ts
In this case the condition reads
1
f< = 0.5Hz.
2 · 1s
One can read the frequency of a signal from its formula: the value that multiplies t
is ω and
ω
= f.

The first three signals have

f= = 2Hz.

which is greater than 0.5Hz. Moreover, additional phase and gain don’t play an
important role in this sense. The next three signals have a frequency of
0.2π
f= = 0.1Hz.

that is lower than 0.5Hz and hence perfect, in order to not encounter aliasing. The
next three signals have the critical frequency and theoretically speaking, one sets
this as already aliased. The reason for that is that the Nyquist theorem sets a strict
< in the condition.
For the next two signals a special procedure applies: if one has a combination of
signals, one has to look at the bigger frequency of the signal. In the first case
this reads 2Hz, that exceeds the limit frequency. In the second case this reads
0.2Hz, that is acceptable. The last two cases are a general form of combination

59
Gioele Zardini Control Systems II FS 2018

1
of signals. The leading frequency of the first sum, decreases with i+1 and has its
biggest value with i = 1, namely 0.5Hz. This is already at the limit frequency,
1
hence not acceptable. The leading frequency of the second sum, decreases with i+2
and has its biggest value with i = 1, namely 0.33Hz. This is lower that the limit
frequency, hence acceptable.

(b) The general formula reads


fs > 2 · fmax .
Here it holds
40π
fmax = = 20Hz.

It follows
fs > 2 · 20Hz = 40Hz.

60
Gioele Zardini Control Systems II FS 2018

Example 18.

(a) For which A and B is the system asymptotically stable?


   
1 2 1
 A= ,B= .
1 2 2
   
−1 −2 1
 A= ,B= .
−1 −2 2
   
0 0 0.1
 A= ,B= .
0 0 0
   
−1 −2 1
 A= ,B= .
1 −0.5 2
   
−1 −2 1
 A= ,B= .
0 −0.5 2
   
1 −2 1
 A= ,B= .
0 0.5 2
   
−0.1 −2 1
 A= ,B= .
0 −0.5 2
   
−0.1 −2 0.1
 A= ,B= .
0 −0.5 0
   
0.1 −2 1
 A= ,B= .
0 0.5 2
   
0.1 −2 0.1
 A= ,B= .
0 0.5 0

(b) The previous exercise can be solved independently of B.

 True.
 False.

61
Gioele Zardini Control Systems II FS 2018

Solution.
   
1 2 1
(a)  A= ,B= .
1 2 2
   
−1 −2 1
 A= ,B= .
−1 −2 2
   
3A= 0 0 0.1
 ,B= .
0 0 0
   
−1 −2 1
 A= ,B= .
1 −0.5 2
   
−1 −2 1
 A= ,B= .
0 −0.5 2
   
3A= −0.1 −2 1
 ,B= .
0 −0.5 2
   
3A= −0.1 −2 0.1
 ,B= .
0 −0.5 0
   
3 0.1 −2 1
 A= ,B= .
0 0.5 2
   
3A= 0.1 −2 0.1
 ,B= .
0 0.5 0

(b)

3 True.

 False.

Explanation:
The eigenvalues of A should fulfill
λi | < 1. (2.136)
Furthermore. this has nothing to do with B.

62
Gioele Zardini Control Systems II FS 2018

Example 19. One of your colleagues has developed the following continuous-time con-
troller for a continuous-time plant
2s + 1
C(s) = (2.137)
s+α
where α ∈ R is a tuning factor.

(a) You now want to implement the controller on a microprocessor using the Euler
backward emulation approach. What is the resulting function C(z) for a generic
sampling time T ∈ R+ ?

(b) What is the range of the tuning factor α that produces an asymptotically stable
discrete.time controller when applying the Euler backward emulation approach?

(c) What is the condition on α to obtain an asymptotically stable continuous-time


controller C(s) and an asymptotically stable discrete-time controller C(z) using the
Euler backward emulation approach?

63
Gioele Zardini Control Systems II FS 2018

Solution.

(a) The Euler backward emulation approach reads


z−1
s= . (2.138)
T ·z
If one substitutes this into the transfer function of the continuous-time controller,
one gets
z−1
2· T ·z
+1
C(z) = z−1
T ·z

(2.139)
z · (2 + T ) − 2
= .
z · (1 + α · T ) − 1

(b) The controller C(z) is asymptotically stable if its pole πd fulfills the condition

|πd | < 1.

The pole reads

z · (1 + α · T ) − 1 = 0
1 (2.140)
πd = .
1+α·T
This, together with the condition for stability gives
1 2
−1 < < 1 ⇒ α > 0 or α < − . (2.141)
1+α·T T

(c) For C(s) to be asymptotically stable, its pole πc = −α must lie in the left half of
the complex plane:
Re{πc } < 0 ⇒ α > 0. (2.142)
Together with the results from (b), the condition on α is α > 0.

64
Gioele Zardini Control Systems II FS 2018

3 Introduction to MIMO Systems


MIMO systems are systems with multiple inputs and multiple outputs. In this chapter
we will introduce some analytical tools.

3.1 System Description


3.1.1 State Space Description
The state-space description of a MIMO system is very similar to the one of a SISO system.
For a linear, time invariant MIMO system with m input signals and p output signals, it
holds
ẋ(t) = A · x(t) + B · u(t), x(t) ∈ Rn , u(t) ∈ Rm
(3.1)
y(t) = C · x(t) + D · u(t), y(t) ∈ Rp
where
x(t) ∈ Rn×1 , u(t) ∈ Rm×1 , y(t) ∈ Rp×1 , A ∈ Rn×n , B ∈ Rn×m , C ∈ Rp×n , D ∈ Rp×m .
(3.2)
Remark. The dimensions of the matrices A, B, C, D are very important and they are a
key concept to understand problems.

The big difference from SISO systems is that u(t) and y(t) are here vectors and not
scalars anymore. For this reason B, C, D are now matrices.

3.1.2 Transfer Function


One can compute the transfer function of a MIMO system with the well known formula
P (s) = C · (s · I − A)−1 · B + D. (3.3)
This is no more a scalar, but a p × m-matrix. The elements of that matrix are rational
functions. Mathematically:
 
P11 (s) · · · P1m (s)
bij (s)
P (s) =  ... ... ..  , Pij (s) = . (3.4)

.  aij (s)
Pp1 (s) · · · Ppm (s)
Here Pij (s) is the transfer function from the j-th input to the i-th output.
Remark. In the SISO case, the only matrix we had to care about was A. Since for the
MIMO case B,C,D are matrices, one has to pay attention to a fundamental mathematical
property: the matrix multiplication is not commutative (i.e. A · B 6= B · A). Since now
P (s) and C(s) are matrices, considering Figure 17, it holds
LO (s) = P (s) · C(s) 6= C(s) · P (s) = LI (s), (3.5)
where LO (s) is the outer loop transfer function and LI (s) is the inner loop transfer func-
tion. Moreover, one can no more define the complementary sensitivity and the sensitivity
as
L(s) 1
T (s) = , S(s) = . (3.6)
1 + L(s) 1 + L(s)
because no matrix division is defined. There are however similar expressions to describe
those transfer functions:

65
Gioele Zardini Control Systems II FS 2018

d(t) n(t)
r(t) e(t) u(t) v(t) η(t) y(t)
F (s) = I C(s) P (s)

Figure 17: Standard feedback control system structure.

Output Sensitivity Functions


Referring to Figure 17, one can write
Y (s) = N (s) + η(s)
= N (s) + P (s)V (s)
= N (s) + P (s) (D(s) + U (s)) (3.7)
= N (s) + P (s) (D(s) + C(s)E(s))
= N (s) + P (s) (D(s) + C(s)(R(s) − Y (s))) ,
from which follows
(I + P (s)C(s))Y (s) = N (s) + P (s)D(s) + P (s)C(s)R(s)
(3.8)
Y (s) = (I + P (s)C(s))−1 (N (s) + P (s)D(s) + P (s)C(s)R(s)) .
It follows
• Output sensitivity function (n → y)
SO (s) = (I + LO (s))−1 . (3.9)

• Output complementary sensitivity function (r → y)


TO (s) = (I + LO (s))−1 LO (s). (3.10)

Input Sensitivity Functions


Referring to Figure 18, one can write
U (s) = C(s)E(s)
= C(s) (R(s) − Y (s))
= C(s)R(s) − C(s) (N (s) + η(s)) (3.11)
= C(s)R(s) − C(s)N (s) − C(s)P (s)V (s)
= C(s)R(s) − C(s)N (s) − C(s)P (s) (D(s) + U (s)) ,
from which it follows
(I + C(s)P (s))U (s) = C(s)R(s) − C(s)N (s) − C(s)P (s)D(s)
−U (s) = (I + C(s)P (s))−1 (−C(s)R(s) + C(s)N (s) + C(s)P (s)D(s)) .
(3.12)
It follows

66
Gioele Zardini Control Systems II FS 2018

• Input sensitivity function (d → v)


SI (s) = (I + LI (s))−1 . (3.13)

• Input complementary sensitivity function (d → −u)


TI (s) = (I + LI (s))−1 LI (s). (3.14)

Example 20. In order to understand how to work with these matrices, let’s analyze the
following problem. We start from the standard control system’s structure (see Figure 18).
To keep things general, let’s say the plant P (s) ∈ Cp×m and the controller C(s) ∈ Cm×p .
The reference r ∈ Rp , the input u ∈ Rm and the disturbance d ∈ Rp . The output Y (s)
can as always be written as
Y (s) = T (s) · R(s) + S(s) · D(s). (3.15)
where T (s) is the transfer function of the complementary sensitivity and S(s) is the
transfer function of the sensitivity.

w=0 d
r e u y
C(s) P (s)

n=0

Figure 18: Standard feedback control system structure.

Starting from the error E(s)


If one wants to determine the matrices of those transfer functions, one can start writing
(by paying attention to the direction of multiplications) with respect to E(s)
E(s) = R(s) − P (s) · C(s) · E(s) − D(s),
(3.16)
Y (s) = P (s) · C(s) · E(s) + D(s).
This gives in the first place
E(s) = (I + P (s) · C(s))−1 · (R(s) − D(s)) . (3.17)
Inserting and writing the functions as F (s) = F for simplicity, one gets
Y =P ·C ·E+D
= P · C · (I + P · C)−1 · (R − D) + D
= P · C · (I + P · C)−1 · R − P · C (I + P · C)−1 · D + (I + P · C) (I + P · C)−1 ·D
| {z }
I
−1 −1
= P · C · (I + P · C) · R + (I + P · C − P · C) · (I + P · C) ·D
−1 −1
= P · C · (I + P · C) · R + (I + P · C) · D.
(3.18)

67
Gioele Zardini Control Systems II FS 2018

Recalling the general equation (4.39) one gets the two transfer functions:
T1 (s) = P (s) · C(s) · (I + P (s) · C(s))−1 ,
(3.19)
S1 (s) = (I + P (s) · C(s))−1 .

Starting from the input U (s)


If one starts with respect to U (s), one gets
U (s) = C(s) · (R(s) − D(s)) − C(s) · P (s) · U (s),
(3.20)
Y (s) = P (s) · U (s) + D(s).
This gives in the first place
U (s) = (I + C(s) · P (s))−1 · C(s) · (R(s) − D(s)) . (3.21)
Inserting and writing the functions as F (s) = F for simplicity, one gets
Y =P ·U +D
= P · (I + C · P )−1 · C · (R − D) + D (3.22)
−1 −1 
= P · (I + C · P ) · C · R + I − P · (I + C · P ) · C · D.
Recalling the general equation (4.39) one gets the two transfer functions:
T2 (s) = P (s) · (I + C(s) · P (s))−1 · C(s),
(3.23)
S2 (s) = I − P (s) · (I + C(s) · P (s))−1 · C(s).
It can be shown that this two different results actually are the equivalent. It holds
S1 = S2
(I + P · C)−1 = I − P · (I + C · P )−1 · C
I = I + P · C − P · (I + C · P )−1 · C · (I + P · C)
I = I + P · C − P · (I + C · P )−1 · (C + C · P · C)
I = I + P · C − P · (I + C · P )−1 · (I + C · P ) · C
I=I+P ·C −P ·C
I=I
(3.24)
T1 = T2
P · C · (I + P · C)−1 = P · (I + C · P )−1 · C
P · C = P · (I + C · P )−1 · C · (I + P · C)
P · C = P · (I + C · P )−1 · (C + C · P · C)
P · C = P · (I + C · P )−1 · (I + C · P ) · C
P ·C =C ·P
I = I.
Finally, one can show that
S(s) + T (s) = (I + P · C)−1 + P · C · (I + P · C)−1
= (I + P · C) · (I + P · C)−1 (3.25)
= I.

68
Gioele Zardini Control Systems II FS 2018

3.2 Poles and Zeros


Since we have to deal with matrices, one has to use the theory of minors (see Lineare
Algebra I/II ) in order to compute the zeros and the poles of a transfer function.
The first step of this computation is to calculate all the minors of the transfer function
P (s). The minors of a matrix F ∈ Rn×m are the determinants of all square submatrices.
By maximal minor it is meant the minor with the biggest dimension.

Example 21. The minors of a given matrix


 
a b c
P (s) = (3.26)
d e f
are:
First order:
a, b, c, d, e, f (3.27)
Second order (maximal minors):
     
a b a c b c
det , det , det . (3.28)
d e d f e f

From the minors one can calculate the poles and the zeros as follows:

3.2.1 Zeros
The zeros are the zeros of the numerator’s greatest common divisor of the maximal minors,
after their normalization with respect to the same denominator (polepolynom).

3.2.2 Poles
The poles are the zeros of the least common denominator of all the minors of P (s).

3.2.3 Directions
In MIMO systems, the poles and the zeros are related to a direction. Moreover, a zero-
pole cancellation occurs only if zero and pole have the same magnitude and input-output
in,out
direction. The directions δπ,i associated with a pole πi are defined by
in out
P (s) s=πi
· δπ,i = ∞ · δπ,i . (3.29)

in,out
The directions δξ,i associated with a zero ξi are defined by
in out
P (s) s=ξi
· δξ,i = 0 · δξ,i . (3.30)

The directions can be computed with the singular value decomposition (see next
chapters) of the matrix P (s).

69
Gioele Zardini Control Systems II FS 2018

3.3 Examples
Example 22. One wants to find the poles and the zeros of the given transfer function
 s+2 
s+3
0
P (s) = . (3.31)
0 (s+1)·(s+3)
s+2

70
Gioele Zardini Control Systems II FS 2018

Solution. First of all, we list all the minors of the transfer function:

Minors:
s+2 (s+1)·(s+3)
• First order: s+3
, s+2
, 0, 0 ;

• Second order: s + 1.

Poles:
The least common denominator of all the minors is

(s + 3) · (s + 2) (3.32)

This means that the poles are

π1 = −2
(3.33)
π2 = −3.

Zeros:
The maximal minor is s + 1 and we have to normalize it with respect to the polepolynom
(s + 3) · (s + 2). It holds

(s + 1) · (s + 2) · (s + 3)
(s + 1) ⇒ (3.34)
(s + 2) · (s + 3)

The numerator reads


(s + 1) · (s + 2) · (s + 3) (3.35)
and so the zeros are
ζ1 = −1
ζ2 = −2 (3.36)
ζ3 = −3.

71
Gioele Zardini Control Systems II FS 2018

Example 23. One wants to find the poles and the zeros of the given transfer function
!
1 1 2·(s+1)
s+1 s+2 (s+2)·(s+3)
P (s) = s+3 s+4 . (3.37)
0 (s+1)2 s+1

72
Gioele Zardini Control Systems II FS 2018

Solution. First of all, we list all the minors of the transfer function:

Minors:
• First order: 1
, 1 , 2·(s+1) , 0, (s+1)
s+1 s+2 (s+2)·(s+3)
s+3 s+4
2 , s+1 ;

s+3 s+4 2 1 s+4


• Second order: ,
(s+1)3 (s+1)·(s+2)
− (s+2)·(s+1)
= s+1
, − (s+1) 2.

Poles:
The least common denominator of all the minors is

(s + 1)3 · (s + 2) · (s + 3). (3.38)

This means that the poles are

π1 = −1
π2 = −1
π3 = −1 (3.39)
π4 = −2
π5 = −3.

Zeros:
The numerators of the maximal minors are (s + 3), 1 and −(s + 4). We have to normalize
them with respect to the polepolynom (s + 1)3 · (s + 2) · (s + 3). It holds

(s + 3)2 · (s + 2)
(s + 3) ⇒ , (3.40)
(s + 1)3 · (s + 2) · (s + 3)
(s + 1)2 · (s + 2) · (s + 3)
1 ⇒ , (3.41)
(s + 1)3 · (s + 2) · (s + 3)
(s + 4) · (s + 1) · (s + 2) · (s + 3)
−(s + 4) ⇒ − . (3.42)
(s + 1)3 · (s + 2) · (s + 3)

The greatest common divisor of these is

(s + 3) · (s + 2). (3.43)

Hence, the zeros are

ζ1 = −2,
(3.44)
ζ2 = −3.

73
Gioele Zardini Control Systems II FS 2018

Example 24. The following transfer function matrix is given:


1 s−1
 
0
 s+1 (s + 1)(s + 2) 
P (s) =  . (3.45)
 
 −1 1 1 
s−1 s+2 s+2
Compute the poles and the zeros of the system.

74
Gioele Zardini Control Systems II FS 2018

Solution. First order minors are:


1 s−1 −1 1 1
, , , , (3.46)
s+1 (s + 1)(s + 2) s−1 s+2 s+2

Minors of second order are:


−(s − 1) 2 1
, , (3.47)
(s + 1)(s + 2)2 (s + 1)(s + 2) (s + 1)(s + 2)

The least common denominator – the pole-polynom – is (s + 1)(s + 2)2 (s − 1), from which
the system’s poles and their multiplicities can be read: s = −1 (multiplicity = 1), s = 1
(multiplicity = 1) und s = −2 (multiplicity = 2).
Normalizing the minors of second order with the denominator (s + 1)(s + 2)2 (s − 1) yields

−(s − 1)2 2(s − 1)(s + 2) (s − 1)(s + 2)


, , (3.48)
(s + 1)(s + 2)2 (s − 1) (s + 1)(s + 2)2 (s − 1) (s + 1)(s + 2)2 (s − 1)
The greatest common divisor of these minors is (s − 1) and therefore the MIMO–system
has its only zero at s = 1.

75
Gioele Zardini Control Systems II FS 2018

4 Analysis of MIMO Systems


4.1 Norms
The concept of norm will be extremely useful for evaluating signals and systems quan-
titatively during this course. In the following, we will present vector norms and matrix
norms.

4.1.1 Vector Norms


Definition 10. A norm on a linear space (V, F ) is a function k · k : V → R+ such that
a) ∀v1 , v2 ∈ V, kv1 + v2 k ≤ kv1 k + kv2 k (triangle inequality).
b) ∀v ∈ V, ∀α ∈ F, kαvk = |α| · kvk.
c) kvk = 0 ⇔ v = 0.
Remark. Note that norms are always non-negative. This can be noticed by seeing that a
norm always maps to R+ .
Considering x ∈ Cn , i.e. V = Cn , one can define the p−norm as
n
! p1
X
kxkp = |xi |p , p = 1, 2, . . . (4.1)
i=1

The easiest example of such a norm is the case where p = 2, i.e. the euclidean norm
(shortest distance between two points):
v
u n
uX
kxk2 = t |xi |2 . (4.2)
i=1

Another important example of such a norm is the infinity norm (largest element in the
vector):
kxk∞ = max |xi |. (4.3)
i

Example 25. You are given the vector


 
−1
x =  2 . (4.4)
3

Compute the kxk1 ,kxk2 and kxk∞ norms of x.


Solution. It holds
kxk1 = | − 1| + |2| + |3|
= 6.

kxk2 = 1 + 4 + 9
√ (4.5)
= 14.
kxk∞ = max{1, 2, 3}
= 3.

76
Gioele Zardini Control Systems II FS 2018

4.1.2 Matrix Norms


In addition to the defined axioms for norms, matrix norms fulfill
kA · Bk ≤ kAk · kBk. (4.6)
Considering the linear space (V, F ), with V = Cm×n , and assuming a matrix A ∈ Cm×n
is given, one can define the Frobenius norm (euclidean matrix norm) as:
m Xn
! 21
X
kAkF = kAk2 = a2ij , (4.7)
i=1 j=1

where aij are the elements of A. This can also be written as


p
kAkF = tr (A∗ A), (4.8)
where tr is the trace (i.e. sum of eigenvalues or diagonal elements) of the matrix and A∗
is the Hermitian transpose of A (complex transpose), i.e.
A∗ = (conj(A))T . (4.9)
Example 26. Let  
1 −2 − i
A= . (4.10)
1+i i
Then it holds
A∗ = (conj(A))T
 T
1 −2 + i
= (4.11)
1−i −i
 
1 1−i
= .
−2 + i −i
The maximum matrix norm is the largest element of the matrix and is defined as
kAk∞ = max max |aij |. (4.12)
i=1,...,m j=1,...,n

Matrix Norms as Induced Norms


Matrix norms can always be defined as induced norms.
Definition 11. Let A ∈ Cm×n . Then, the induced norm of matrix A can be written as

 
kAxkp
kAkp = sup
x∈Cn , x6=0 kxkp (4.13)
= sup (kAxkp )
x∈Cn , x=1

Remark. At this point, one would ask what is the difference between sup and max. A
maximum is the largest number within a set. A sup is a number that bounds a set. A
sup may or may not be part of the set itself (0 is not part of the set of negative numbers,
but it is a sup because it is the least upper bound). If the sup is part of the set, it is also
the max.

77
Gioele Zardini Control Systems II FS 2018

u y
G

Figure 19: Interpretation of induced norm.

The definition of induced norm is interesting because one can interpret this as in Figure
19. In fact, using the induced norm
 
kGuk
kGkp = sup , (4.14)
kuk6=0 kuk

one can quantify the maimum gain (amplification) of an output vector for any possible
input direction at a given frequency. This turns out to be extremely useful for evaluating
system interconnections. Referring to Figure 20 and using the multiplication property for
norms, it holds

kykp = kG2 G1 ukp ≤ kG2 kp · kG1 kp · kukp .


kykp (4.15)
⇒ ≤ kG2 kp · kG1 kp .
kukp

In words, the input-output gain of a system series is upper bounded by the product of
the induced matrix norms.

u w y
G1 G2

Figure 20: Interpretation of induced norm, system series.

Properties of the Euclidean Norm


We can list a few useful properties for the Euclidean norm, intended as induced norm:

(i) If A is squared (i.e. m = n), the norm is defined as



kAk2 = µmax
p (4.16)
= maximal eigenvalue of A∗ · A.

(ii) If A is orthogonal:
kAk2 = 1. (4.17)
Note that this is the case because orthogonal matrices always have eigenvalues with
magnitude 1.

(iii) If A is symmetric (i.e. A| = A):

kAk2 = max(|λi |), (4.18)


i

where λi are the eigenvalues of A.

78
Gioele Zardini Control Systems II FS 2018

(iv) If A is invertible:
1
kA−1 k2 = √
µmin
(4.19)
1
=√ .
minimal eigenvalue of A∗ · A
(v) If A is invertible and symmetric:
1
kA−1 k2 = . (4.20)
mini (|λi |)
Remark. Remember: the matrix A∗ A is always a square matrix.

4.1.3 Signal Norms


The norms we have seen so far are space measures. Temporal norms (signal norms),
take into account the variability of signals in time/frequency. Let
|
e(t) = e1 (t) . . . en (t) , ei (t) ∈ C, i = 1, . . . , n. (4.21)
The p-norm is defined as
n
! p1
Z ∞ X
ke(t)kp = |ei (τ )|p dτ . (4.22)
−∞ i=1

A special case of this is the two-norm, also called euclidean norm, integral square error,
energy of a signal: v
u Z ∞ n !
u X
ke(t)k2 = t |ei (τ )|2 dτ . (4.23)
−∞ i=1

The infinity norm is defined as


 
ke(t)k∞ = sup max |ei (τ )| . (4.24)
τ i

4.1.4 System Norms


Considering linear, time-invariant, causal systems of the form depicted in Figure 19, one
can write the relation
y(t) = G ∗ u(t)
Z ∞
(4.25)
= G(t − τ )u(τ )dτ.
−∞

The two-norm for the transfer function Ĝ reads


 Z ∞  12
1 2
kĜ(s)k2 = |Ĝ(jω)| dω
2π −∞
 Z ∞ 
1   21

= tr Ĝ (jω)Ĝ(jω) dω (4.26)
2π −∞
Z ∞ X ! 21
1
= |gij |2 dω .
2π −∞ i,j=1

79
Gioele Zardini Control Systems II FS 2018

Remark. Note that this norm is a measure of the combination of system gains in all
directions, over all frequency. This is not an induced norm, as it does not respect the
multiplicative property.
The infinity norm is
kĜ(s)k∞ = sup kĜ(jω)k2 (4.27)
ω

Remark. This norm is a measure of the peak of the maximum singular value, i.e. the
biggest amplification the system may bring at any frequency, for any input direction
(worst case scenario). This is an induced norm and respects the multiplicative property.

4.1.5 Examples
Example 27.

a) Calculate the euclidean and the maximum norm of the matrix:


 
1 4 4
A = −7 2 −4 (4.28)
2 1 6

b) Find the euclidean and the maximum signal norm of the following signals, for
t ∈ (0, ∞):

i) u(t) = e−2t .
ii) v(t) = cos(5t).

80
Gioele Zardini Control Systems II FS 2018

Solution.
a) The euclidean norm of a matrix is defined as:
sX
kGk2 = |gi,j |2 . (4.29)
i,j

The 2-norm of matrix A is therefore:



kAk2 = 1 + 16 + 16 + 49 + 4 + 16 + 4 + 1 + 36 ≈ 12. (4.30)

The maximum norm is defined by:

kGkmax = max |gi,j |. (4.31)


i,j

For this reason we have:

kAkmax = max {1, 4, 4, 7, 2, 4, 2, 1, 6} = 7. (4.32)

b) The requested norms are defined as:


v
uZ ∞ n
X
u
ke(t)k2 = t |ei (τ )|2 dτ .
−∞ i=1 (4.33)
ke(t)k∞ = sup(max |ei (τ )|).
τ i

The given input signals are only one dimensional, so n = 1 for both cases. As
written in the exercise description, only times from 0 to ∞ are to be considered.
i)
sZ

ku(t)k2 = |u(τ )|2 dτ
0
sZ

= |e−2τ |2 dτ
0
sZ

= e−4τ dτ (4.34)
0
r
−1 −4τ ∞
= [e ]0
4
r
1
= − [0 − 1]
4
1
= .
2

ku(t)k∞ = sup(|u(τ )|)


τ
= sup(|e−2τ |) with τ ∈ (0, ∞) (4.35)
τ
= 1.

81
Gioele Zardini Control Systems II FS 2018

ii)
sZ

kv(t)k2 = |v(τ )|2 dτ
0
sZ (4.36)

= | cos(5τ )|2 dτ ,
0

which is ∞.
kv(t)k∞ = sup(|v(τ )|)
τ
= sup(| cos(5τ )|) (4.37)
τ
= 1.

82
Gioele Zardini Control Systems II FS 2018

4.2 Singular Value Decomposition (SVD)


The Singular Value Decomposition plays a central role in MIMO frequency response
analysis. Let’s recall some concepts from the course Lineare Algebra I/II :

4.2.1 Preliminary Definitions


The induced norm kAk of a matrix that describes a linear function like

y =A·u (4.38)

is defined as
kyk
kAk = max
u6=0 kuk
(4.39)
= max kyk.
kuk=1

Let’s recall Equation 4.16, and let’s notice that if A ∈ Rn×m it holds

A∗ = A| . (4.40)

In order to define the SVD we have to go a step further. Let’s consider a Matrix A and
the linear function given in Equation 4.38. It holds

kAk22 = max y ∗ · y
kuk=1

= max (A · u)∗ · (A · u)
kuk=1

= max u∗ · A∗ · A · u (4.41)
kuk=1

= max µ(A∗ · A)
i
= max σi2 .
i

where σi are the singular values of matrix A. They are defined as



σi = µi (4.42)

where µi are the eigenvalues of A∗ · A.


Combining Equations 4.39 and 4.42 one gets

kyk
σmin (A) ≤ ≤ σmax (A). (4.43)
kuk

4.2.2 Singular Value Decomposition


Our goal is to write a general matrix A ∈ Cp×m as product of three matrices: U , Σ and
V ∗ . It holds

A = U · Σ · V ∗ with U ∈ Cp×p , Σ ∈ Rp×m , V ∈ Cm×m . (4.44)


Remark. U and V are orthogonal, Σ is a diagonal matrix.

83
Gioele Zardini Control Systems II FS 2018

u
y =A·u

Figure 21: Illustration of the singular values.

Kochrezept:
Let A ∈ Cp×m be given:
(I) Compute all the eigenvalues and eigenvectors of the matrix
A∗ · A ∈ Cm×m . (4.45)
and sort them as
µ1 ≥ µ2 ≥ . . . ≥ µr > µr+1 = . . . = µm = 0 (4.46)

(II) Compute an orthogonal basis from the eigenvectors vi and write it in a matrix as
V = (v1 . . . vm ) ∈ Cm×m . (4.47)

(III) We have already found the singular values: they are defined as

σi = µi for i = 1, . . . , min{p, m}. (4.48)
By ordering them from the biggest to the smallest, we can then write Σ as
 
σ1 0 ... 0
Σ=
 .. .. ..  ∈ Rp×m , p < m (4.49)
. . .
σp 0 . . . 0
 
σ1
..

 . 

σm 
 
Σ=  ∈ Rp×m , p > m. (4.50)

0 ... 0 
. .. 
 .. . 
0 ... 0

(IV) One finds u1 , . . . , ur from


1
ui = · A · vi for all i = 1, . . . , r (for σi 6= 0) (4.51)
σi

84
Gioele Zardini Control Systems II FS 2018

(V) If r < p one has to complete the basis u1 , . . . , ur (with ONB from Gram-Schmid) to
obtain an orthogonal basis, with U orthogonal.

(VI) If you followed the previous steps, you can write

A = U · Σ · V ∗. (4.52)

Motivation for the computation of Σ, U und V .

A∗ · A = (U · Σ · V ∗ )∗ · (U · Σ · V ∗ )
= V · Σ∗ · U ∗ · U · Σ · V ∗
(4.53)
= V · Σ∗ · Σ · V ∗
= V · Σ2 · V ∗ .

This is nothing else than the diagonalization of the matrix A∗ · A. The columns of V
are the eigenvectors of A∗ · A and the σi2 the eigenvalues.
For U :
A · A∗ = (U · Σ · V ∗ ) · (U · Σ · V ∗ )∗
= U · Σ · V ∗ · V · Σ · U∗
(4.54)
= U · Σ∗ · Σ · U ∗
= U · Σ2 · U ∗ .

This is nothing else than the diagonalization of the matrix A · A∗ . The columns of U
are the eigenvectors of A · A∗ and the σi2 the eigenvalues.

Remark. In order to derive the previous two equations I used that:

• The matrix A∗ · A is symmetric, i.e.

(A∗ · A)∗ = A∗ · (A∗ )∗


(4.55)
= A∗ · A.

• U −1 = U ∗ (because U is unitary).

• V −1 = V ∗ (because V is unitary).

Remark. Since the matrix A∗ ·A is always symmetric and positive semidefinite, the singular
values are always real numbers.
Remark. The Matlab command for the singular value decomposition is

[U,S,V]=svd

One can write A| as A.’=transpose(A) and A∗ as A’=conj(transpose(A)). Those two


are equivalent for real numbers.

85
Gioele Zardini Control Systems II FS 2018

4.2.3 Intepretation
Considering the system depicted in Figure 19, one rewrite the system G as G = U ΣV ∗ .
The matrix V is orthogonal and contains the input directions of the system. The
matrix U is orthogonal as well and contains the output directions of the system
(unfortunate notation). It holds
G = U ΣV ∗
GV = U Σ (4.56)
Gvi = σi ui , ∀i,
which is similar to an eigenvalue equation. This can be rewritten as
kGvi k
σi = . (4.57)
kui k
For a unitary input, i.e. kuk2 = 1, one has
y1 = σ1 u1 ,
(4.58)
ym = σm um ,

where σm is the last (and hence the smallest) singular value. This can be interpreted
using Figure 21 and interpreting the circle as a unit circle.
Example 28. Let u be  
cos(x)
u= (4.59)
sin(x)
with kuk = 1. The matrix M is given as
!
2 0
M= 1
. (4.60)
0 2

We know that the product of M and u defines a linear function


y =M ·u
! 
2 0

cos(x)
= ·
0 12 sin(x) (4.61)
!
2 · cos(x)
= 1
.
2
· sin(x)

We need the maximum of kyk. In order to avoid square roots, one can use that the x that
maximizes kyk should also maximize kyk2 .
1
kyk2 = 4 · cos2 (x) + · sin2 (x) (4.62)
4
has maximum
dkyk2 1 !
= −8 · cos(x) · sin(x) + · sin(x) · cos(x) = 0
dx 2
 (4.63)
π 3π
⇒ xmax = 0, , π, .
2 2

86
Gioele Zardini Control Systems II FS 2018

Inserting back for the maximal kyk one gets:


1
kykmax = 2, kykmax = . (4.64)
2
The singular values can be calculated with M ∗ · M :

M∗ · M = M| · M
(4.65)
     
4 0 1 1
= ⇒ λi = 4, ⇒ σi = 2, .
0 41 4 2

As stated before, one can see that kyk ∈ [σmin , σmax ]. The matrix U has eigenvectors of
M · M | as coulmns and the matrix V has eigenvectors of M | · M as columns.
In this case
M · M | = M | · M,
hence the two matrices are equal. Since their product is a diagonal matrix one should
recall from the theory that the eigenvectors are easy to determine: they are nothing else
than the standard basis vectors. This means
     
1 0 2 0 1 0
U= , Σ= , V = . (4.66)
0 1 0 21 0 1

Interpretation:
Referring to Figure 22, let’s interprete these calculations. One can see that the maximal
amplification occurs at v = V (:, 1) and has direction u = U (:, 1), i.e. the vector u
is doubled (σmax ). The minimal amplification occurs at v = V (:, 2) and has direction
u = U (:, 2), i.e. the vector u is halved (σmin ).

1
0.5 u y =M ·u

1 2

Figure 22: Illustration of the singular value decomposition.

87
Gioele Zardini Control Systems II FS 2018

Example 29. Let  


−3 0
A = √0 3 (4.67)
3 2
be given.
Question: Find the singular values of A and write down the matrix Σ.

88
Gioele Zardini Control Systems II FS 2018

Solution. Let’s compute A| A:


√  √ 
 
 −3 0 
| −3 0 3  12
√ 2 3
A A= · √0 3 = (4.68)
0 3 2 2 3 13
3 2

One can see easily that the eigenvalues are

λ1 = 16, λ2 = 9. (4.69)

The singular values are


σ1 = 4, σ2 = 3. (4.70)
One writes in this case  
4 0
Σ = 0 3  . (4.71)
0 0

89
Gioele Zardini Control Systems II FS 2018

Example 30. A transfer function G(s) is given as


1 s+1
!
s+3 s+3
s+1 1
(4.72)
s+3 s+3

Find the singular values of G(s) at ω = 1 rad


s
.

90
Gioele Zardini Control Systems II FS 2018

Solution. The transfer function G(s) evaluated at ω = 1 rad


s
has the form
1 j+1
!
j+3 j+3
G(j) = j+1 1
(4.73)
j+3 j+3

In order to calculate the singular values, we have to compute the eigenvalues of H = G∗ ·G:

H = G∗ · G
1 −j+1 1 j+1
! !
−j+3 −j+3 j+3 j+3
= −j+1 1
· j+1 1
−j+3 −j+3 j+3 j+3
(4.74)
3 2
!
10 10
= 2 3
10 10
 
1 3 2
= · .
10 2 3

For the eigenvalues it holds


3 2
!
10
−λ 10
det(H − λ · I) = det 2 3
−λ10 10
 2  2
3 2
= −λ − −
10 10 (4.75)
6 5
= λ2 − λ +
 10   100 
1 5
= λ− · λ− .
10 10

It follows
1
λ1 =
10 (4.76)
1
λ2 =
2
and so
r
1
σ1 =
10
≈ 0.3162.
r (4.77)
1
σ2 =
2
≈ 0.7071.

91
Gioele Zardini Control Systems II FS 2018

Example 31. Let be


 
1 2
A=
0 1 (4.78)

B= j 1 .

Find the singular values of the two matrices.

92
Gioele Zardini Control Systems II FS 2018

Solution.
• Let’s begin with matrix A. It holds
H = A∗ · A
   
1 0 1 2
= ·
2 1 0 1 (4.79)
 
1 2
= .
2 5
In order to find the eigenvalues of H we compute
 
1−λ 2
det(H − λ · I) = det
2 5−λ
(4.80)
= (1 − λ) · (5 − λ) − 4
= λ2 − 6λ + 1.
This means that the eigenvalues are

λ1 = 3 + 2 2
√ (4.81)
λ2 = 3 − 2 2.
The singular values are then
σ1 ≈ 2.4142
(4.82)
σ2 ≈ 0.4142.
• Let’s look at matrix B. It holds
F = B∗ · B
 
−j 
= · j 1
1 (4.83)
 
1 −j
= .
j 1
In order to find the eigenvalues of F we compute
 
1 − λ −j
det(F − λ · I) = det
j 1−λ
= (1 − λ)2 − 1 (4.84)
= λ2 − 2λ
= λ · (λ − 2).
This means that the eigenvalues are
λ1 = 0
(4.85)
λ2 = 2.
The singular values are then
σ1 = 0
√ (4.86)
σ2 = 2.

93
Gioele Zardini Control Systems II FS 2018

4.2.4 Directions of poles and zeros


Directions of zeros
Assume a system G(s) has a zero at s = z. Then, it must holds

G(z)uz = 0, yz∗ G(z) = 0, (4.87)

where uz is the input zero direction and yz is the output zero direction. Furthermore,
it holds
kuz k2 = 1, kyz k2 = 1. (4.88)

Directions of poles
Assume a system G(s) has a pole at s = p. Then, it must holds

G(p)up → ∞, yp∗ G(p) → ∞, (4.89)

where up is the input pole direction and yp is the output pole direction. Furthermore,
it holds
kup k2 = 1, kyp k2 = 1. (4.90)
Remark. In both cases, if the zero/pole causes an unfeasible calculation, one consider
feasible variations, i.e. z + ε, p + ε.

4.2.5 Frequency Responses


As we learned for SISO systems, if one excites a system with an harmonic signal

u(t) = h(t) · cos(ω · t), (4.91)

the answer after a big amount of time is still an harmonic function with equal frequency
ω:
y∞ (t) = |P (j · ω)| cos(ω · t + ∠(P (j · ω))). (4.92)
One can generalize this and apply it to MIMO systems. With the assumption of p = m,
i.e. equal number of inputs and outputs, one excite a system with
 
µ1 · cos(ω · t + φ1 )
..
u(t) =   · h(t) (4.93)
 
.
µm · cos(ω · t + φm )

and get  
ν1 · cos(ω · t + ψ1 )
y∞ (t) =  ..
. (4.94)
 
.
νm · cos(ω · t + ψm )
Let’s define two diagonal matrices

Φ = diag(φ1 , . . . , φm ) ∈ Rm×m ,
(4.95)
Ψ = diag(ψ1 , . . . , ψm ) ∈ Rm×m

94
Gioele Zardini Control Systems II FS 2018

and two vectors


T
µ = µ1 . . . µm ,
T (4.96)
ν = ν1 . . . νm .

With these one can compute the Laplace Transform of the two signals as:
Φ·s s
U (s) = e ω ·µ· . (4.97)
s2 + ω2
and
Ψ·s s
Y (s) = e ω ·ν· . (4.98)
s2 + ω2
With the general equation for a systems one gets

Y (s) = P (s) · U (s)


Ψ·s s Φ·s s
e ω ·ν· 2 2
= P (s) · e ω · µ · 2
s +ω s + ω2 (4.99)
Ψ·j·ω Φ·j·ω
e ω · ν = P (s) · e ω · µ
eΨ·j · ν = P (s) · eΦ·j · µ.

We then recall that the induced norm for the matrix of a linear transformation y = A · u
from 4.39. Here it holds
keΨ·j · νk
kP (j · ω)k = max
eΦ·j ·µ6=0 keΦ·j · µk
(4.100)
= max keΨ·j · νk.
keΦ·j ·µk=1

Since
keΦ·j · µk = kµk (4.101)
and
keΨ·j · νk = kνk. (4.102)
One gets

kνk
kP (j · ω)k = max
µ6=0 kµk
(4.103)
= max kνk.
kµk=1

Here one should get the feeling of why we introduced the singular value decomposition.
From the theory we’ve learned, it is clear that

σmin (P (j · ω)) ≤ kνk ≤ σmax (P (j · ω)). (4.104)

and if kµk =
6 1
kνk
σmin (P (j · ω)) ≤ ≤ σmax (P (j · ω)). (4.105)
kµk
with σi singular values of P (j · ω). These two are worst case ranges and is important to
notice that there is no exact formula for ν = f (µ).

95
Gioele Zardini Control Systems II FS 2018

Maximal and minimal Gain


You are given a singular value decomposition

P (j · ω) = U · Σ · V ∗ . (4.106)

One can read out from this decomposition several informations: the maximal/minial gain
will be reached with an excitation in the direction of the column vectors of V . The
response of the system will then be in the direction of the coulmn vectors of U .
Let’s look at an example and try to understand how to use these informations:

Example 32. We consider a system with m = 2 inputs and p = 3 outputs. We are given
its singular value decomposition at ω = 5 rad
s
:
 
0.4167 0
Σ= 0 0.2631 ,
0 0
 
0.2908 0.9568
V = , (4.107)
0.9443 − 0.1542 · j −0.2870 + 0.0469 · j
 
−0.0496 − 0.1680 · j 0.1767 − 0.6831 · j −0.6621 − 0.1820 · j
U =  0.0146 − 0.9159 · j −0.1059 + 0.3510 · j −0.1624 + 0.0122 · j  .
0.0349 − 0.3593 · j 0.1360 − 0.5910 · j 0.6782 + 0.2048 · j

For the singular value σmax = 0.4167 the eigenvectors are V (:, 1) and U (:, 1):
     
0.2908 0.2908 0
V1 = , |V1 | = , ∠(V1 ) = , (4.108)
0.9443 − 0.1542 · j 0.9568 −0.1618
     
−0.0496 − 0.1680 · j 0.1752 −1.8581
U1 =  0.0146 − 0.9159 · j  , |U1 | = 0.9160 , ∠(U1 ) = −1.5548 . (4.109)
0.0349 − 0.3593 · j 0.3609 −1.4741

The maximal gain is then reached with


 
0.2908 · cos(5 · t)
u(t) = . (4.110)
0.9568 · cos(5 · t − 0.1618)

The response of the system is then


   
0.1752 · cos(5 · t − 1.8581) 0.1752 · cos(5 · t − 1.8581)
y(t) = σmax · 0.9160 · cos(5 · t − 1.5548) = 0.4167 · 0.9160 · cos(5 · t − 1.5548) .
0.3609 · cos(5 · t − 1.4741) 0.3609 · cos(5 · t − 1.4741)
(4.111)
Since the three signals y1 (t), y2 (t) and y3 (t) are not in phase, the maximal gain will never
be reachen. One can show that

max ky(t)k ≈ 0.4160 < 0.4167 = σmax (4.112)


t

The reason for this difference stays in the phase deviation between y1 (t), y2 (t) and y3 (t).
The same analysis can be computed for σmin .

96
Gioele Zardini Control Systems II FS 2018

Example 33. Given the MIMO system


1 1
!
s+3 s+1
P (s) = 1 3
. (4.113)
s+1 s+1

Starting at t = 0, the system is excited with the following input signal:


 
cos(t)
u(t) = . (4.114)
µ2 cos(t + ϕ2 )

Find the parameters ϕ2 and µ2 such that for steady-state conditions the output signal
 
y1 (t)
(4.115)
y2 (t)

has y1 (t) equal to zero.

97
Gioele Zardini Control Systems II FS 2018

Solution. For a system excited using a harmonic input signal


 
µ1 cos(ωt + ϕ1 )
u(t) = (4.116)
µ2 cos(ωt + ϕ2 )

the output signal y(t), after a transient phase, will also be a harmonic signal and hence
have the form  
ν1 cos(ωt + ψ1 )
y(t) = . (4.117)
ν2 cos(ωt + ψ2 )
As we have learned, it holds

eΨ·j · ν = P (jω) · eΦ·j · µ. (4.118)

One gets
 ψ ·j       ϕ ·j   
e 1 0 ν1 P11 (jω) P12 (jω) e 1 0 µ1
ψ2 ·j · = · ϕ2 ·j · . (4.119)
0 e ν2 P21 (jω) P22 (jω) 0 e µ2

For the first component one gets

eψ1 ·j · ν1 = P11 (jω) · eϕ1 ·j · µ1 + P12 (jω) · eϕ2 ·j · µ2 . (4.120)

For y1 (t) = 0 to hold we must have ν1 = 0. In the given case, some parameters can be
easily copied from the signals:
µ1 = 1
ϕ1 = 0 (4.121)
ω = 1.

With the given transfer functions, one gets


1 1
0= + µ2 · · eϕ2 ·j
j+3 j+1
3−j 1 − j ϕ2 ·j
0= + µ2 · ·e
10 2
3−j 1−j (4.122)
0= + µ2 · · (cos(ϕ2 ) + j sin(ϕ2 ))
10 2  
3 1 1 1
0= + µ2 · · (cos(ϕ2 ) + sin(ϕ2 )) + j · µ2 · · (sin(ϕ2 ) − cos(ϕ2 )) − .
10 2 2 10
Splitting the real to the imaginary part, one can get two equations that are easily solvable:

1 3
µ2 · · (cos(ϕ2 ) + sin(ϕ2 )) + =0
2 10 (4.123)
1 1
µ2 · · (sin(ϕ2 ) − cos(ϕ2 )) − = 0.
2 10
Adding and subtracting the two equations one can reach two better equations:
1
µ2 · sin(ϕ2 ) + =0
5 (4.124)
2
µ2 · cos(ϕ2 ) + = 0.
5
98
Gioele Zardini Control Systems II FS 2018

One of the solutions (periodicity) reads


1
µ2 = √
5
  (4.125)
1
ϕ2 = arctan + π.
2

99
Gioele Zardini Control Systems II FS 2018

Example 34. A 2 × 2 linear time invariant MIMO system with transfer function
1 2
!
s+1 s+1
P (s) = s2 +1 1
(4.126)
s+10 s2 +2

is excited with the signal


 
µ1 · cos(ω · t + ϕ1 )
u(t) = . (4.127)
µ2 · cos(ω · t + ϕ2 )

Because we bought a cheap signal generator, we cannot know exactly the constants µ1,2
and ϕ1,2 . A friend of you just found out with some measurements, that the excitation
frequency is ω = 1 rad . The cheap generator, cannot produce signals with magnitude of
s p
µ bigger than 10, i.e. µ21 + µ22 ≤ 10. This works always at maximal power, i.e. at 10.
Choose all possible responses of the system after infinite time.
 
5 · sin(t + 0.114)
 y∞ (t) = .
cos(t)
 
5 · sin(t + 0.114)
 y∞ (t) = .
cos(2 · t)
 
sin(t + 0.542)
 y∞ (t) = .
sin(t + 0.459)
 
19 · cos(t + 0.114)
 y∞ (t) = .
cos(t + 1.124)
 
5 · cos(t + 0.114)
 y∞ (t) = .
5 · cos(t)
 
10 · sin(t + 2.114)
 y∞ (t) = .
11 · sin(t + 1.234)

100
Gioele Zardini Control Systems II FS 2018

Solution.
 
3 y∞ (t) = 5 · sin(t + 0.114) .

cos(t)
 
5 · sin(t + 0.114)
 y∞ (t) = .
cos(2 · t)
 
sin(t + 0.542)
 y∞ (t) = .
sin(t + 0.459)
 
19 · cos(t + 0.114)
 y∞ (t) = .
cos(t + 1.124)
 
3 5 · cos(t + 0.114)
 y∞ (t) = .
5 · cos(t)
 
3 10 · sin(t + 2.114)
 y∞ (t) = .
11 · sin(t + 1.234)

Explanation
We have to compute the singular values of the matrix P (j · 1). These are

σmax = 1.8305
(4.128)
σmin = 0.3863.

With what we have learned it follows

10 · σmin = 3.863 ≤ kνkk ≤ 18.305 = 10 · σmax . (4.129)



√ response has kνk = 26 that is in this range. The second response also has
The first
kνk = 26 but the frequency in its second element
√ changes and that isn’t possible for
linear systems. The third response
√ has kνk = 2 that is too small to be in the range.
The fourth √response has kνk = 362 that is too big to be in the range.√The fifth response
has kνk = 50 that is in the range. The sixth response has kνk = 221 that is in the
range.

101
Gioele Zardini Control Systems II FS 2018

Example 35. A 3 × 2 linear time invariant MIMO system is excited with the input
 
3 · sin(30 · t)
u(t) = . (4.130)
4 · cos(30 · t)
You have forgot your PC and you don’t know the transfer function of the system. Before
coming to school, however, you have saved the Matlab plot of the singular values of the
system on your phone (see Figure 23. Choose all the possible responses of the system.

Figure 23: Singular values behaviour.

 
0.5 · sin(30 · t + 0.314)
 y∞ (t) =  0.5 · cos(30 · t) .
0.5 · cos(30 · t + 1)
 
4 · sin(30 · t + 0.314)
 y∞ (t) =  3 · cos(30 · t) .
2 · cos(30 · t + 1)
 
0.1 · sin(30 · t + 0.314)
 y∞ (t) =  0.1 · cos(30 · t) .
0.1 · cos(30 · t + 1)
 
0
 y∞ (t) =  4 · cos(30 · t) .
2 · cos(30 · t + 1)
 
2 · cos(30 · t + 0.243)
 y∞ (t) = 2 · cos(30 · t + 0.142).
2 · cos(30 · t + 0.252)

102
Gioele Zardini Control Systems II FS 2018

Solution.
 
0.5 · sin(30 · t + 0.314)
3 y∞ (t) =  0.5 · cos(30 · t) .

0.5 · cos(30 · t + 1)
 
4 · sin(30 · t + 0.314)
 y∞ (t) =  3 · cos(30 · t) .
2 · cos(30 · t + 1)
 
0.1 · sin(30 · t + 0.314)
 y∞ (t) =  0.1 · cos(30 · t) .
0.1 · cos(30 · t + 1)
 
0
3 y∞ (t) =  4 · cos(30 · t) .

2 · cos(30 · t + 1)
 
2 · cos(30 · t + 0.243)
3 y∞ (t) = 2 · cos(30 · t + 0.142).

2 · cos(30 · t + 0.252)

Explanation
From the given input one can read

kµk = 32 + 42 = 5. (4.131)

From the plot one can read at ω = 30 rad


s
σmin = 0.1 and σmax = 1. It follows

5 · σmin = 0.5 ≤ kνk ≤ 5 = 5 · σmax . (4.132)



√ response has kνk = 0.75 that is in the range. The second response
The first √ has
kνk = 29 that is too big to be in the range. The third response √ has kνk = 0.03 that
is to small to be in the range.√The fourth response has kνk = 20 that is in the range.
The fifth response has kνk = 12 that is in the range.

103
Gioele Zardini Control Systems II FS 2018

Example 36. You are given the transfer function


1
 
1 s−3
P (s) = . (4.133)
1 1

a) Find the poles of the system.

b) Find the zeros of the system.

c) Find the directions of the zeros of the system.

d) How could you compute the directions of the poles of the system? Note that you
don’t need to compute them by hand.

104
Gioele Zardini Control Systems II FS 2018

Solution.

a) In order to compute the poles of the transfer function, one needs to compute its
minors. First order minors are
1
1, , 1, 1. (4.134)
s−3
Second order minor is
1 s−4
1− = . (4.135)
s−3 s−3
The least common denominator (i.e. the pole-polynom) is (s − 3), from which it is
clear that the only pole of the system is π1 = 3.

b) Normalizing the second order minor with the pole-polynom results in


s−4
. (4.136)
s−3
Since we have only one element, the greatest common divisor is (s − 4), from which
it is clear that the only zero of the system is z1 = 4.

c) With the method learned in class, in order to find the direction of the zero z1 = 4,
one needs to compute the singular value decomposition of P (4). It holds
   
1 1 ∗ 1 1
P (4) = , P = . (4.137)
1 1 1 1

In the following, we refer to the SVD recipe provided in the lecture.

1) As a first step, we compute the eigenvalues of M = P ∗ (4)P (4). It holds


   
∗ 1 1 1 1
M =P P = ·
1 1 1 1
  (4.138)
2 2
= .
2 2

In order to find the eigenvalues, we compute


 
2−λ 2
det(M − λI) = det
2 2−λ
(4.139)
= (2 − λ)2 − 4
= λ(λ − 4).

From this characteristic polynom one can read the eigenvalues

λ1 = 4, λ2 = 0. (4.140)

2) The singular values are the positive square roots of the found eigenvalues. It
holds
σ1 = 2, σ2 = 0. (4.141)

105
Gioele Zardini Control Systems II FS 2018

3) The singular value matrix is


 
2 0
Σ= . (4.142)
0 0

4) The right singular vectors vi are the eigenvectors of M with respect to the
found eigenvalues λi . It holds
• Eλ1 = E4 : From (M − λ1 I) · x = 0 one gets the linear system of equations
 
−2 2 0
.
2 −2 0
Using the first row as reference and summing it to the second row, one
gets the form
 
−2 2 0
.
0 0 0
Since one has a zero row, one can introduce a free parameter. Let x1 =
s ∈ R. Using the first row, one can recover x2 = x1 . This defines the first
eigenspace, which reads
n 1 o  
1
E4 = ⇒ v1 = (4.143)
1 1

The eigenvectors have to be normalized. The magnitude of this eigenvector


is √ √
kv1 k2 = 1 + 1 = 2. (4.144)
It holds  
1 1
v1,n =√ . (4.145)
2 1
• Eλ2 = E0 : From (M − λ2 I) · x = 0 one gets the linear system of equations
 
2 2 0
.
2 2 0
Using the first row as reference and subtracting it from the second row,
one gets the form
 
2 2 0
.
0 0 0
Since one has a zero row, one can introduce a free parameter. Let x1 =
s ∈ R. Using the first row, one can recover x2 = −x1 . This defines the
first eigenspace, which reads
n 1 o  
1
E1 = ⇒ v2 = (4.146)
2 −1 −1

The eigenvectors have to be normalized. The magnitude of this eigenvector


is √ √
kv2 k2 = 1 + 1 = 2. (4.147)
It holds  
1 1
v2,n =√ . (4.148)
2 −1

106
Gioele Zardini Control Systems II FS 2018

The matrix V is then  


1 1 1
V =√ . (4.149)
2 1 −1
5) The left singular vectors can be computed with
1
ui,n = P vi,n . (4.150)
σi
• For the first vector it holds
  
1 1 1 1 1
u1,n = √
2 2 1 1 1
  (4.151)
1 1
=√ .
2 1
• For the second vector, because of the 0 singular value the formula does not
hold anymore. However, from the lecture we know that one can find ui as
the eigenvector with respect of λi of matrix P P ∗ . It holds
   
∗ 1 1 1 1
P (4)P (4) = ·
1 1 1 1
  (4.152)
2 2
= .
2 2
The eigenspace for eigenvalue λ2 = 0 is the same as the one computed in
Equation 4.148 (same matrices). It holds then
 
1 1
u2,n = √ (4.153)
2 −1
The matrix U is then  
1 1 1
U=√ . (4.154)
2 1 −1
One can now write the singular value decomposition
P =U ·Σ·V∗
∗
(4.155)
    
1 1 1 2 0 1 1 1
=√ · ·√ .
2 1 −1 0 0 2 1 −1
In order to find the input zero direction, we want to find a direction uz s.t.
 
0
P (z) · uz = . (4.156)
0
This corresponds to the vector
 
1 1
uz = √ , (4.157)
2 −1

which is the second column of matrix V (relative to the 0 singular value). In order
to find the output zero direction, we want to find a direction yz s.t.

yz∗ P (z) = 0 0 .

(4.158)

107
Gioele Zardini Control Systems II FS 2018

This corresponds to the vector


 
1 1
yz = √ , (4.159)
2 −1

which is the second column of matrix U (relative to the 0 singular value).

d) In order to use the same method used for the zeros, we consider a deviation ε = 0.001
from the pole. One can write

P (p1 + ε) = P (3 + 0.001)
(4.160)
 
1 1000
= .
1 1

The singular value decomposition of P (3.001) is


     ∗
−1 −0.001 1000 0 −0.001 1
P (3.001) = · · , (4.161)
−0.001 1 0 1 −1 −0.001

from which one can identify


   
1 −0.001 1 −1
up = √ , yp = √ , (4.162)
1 + 0.0012 −1 1 + 0.0012 −0.001

where we took the directions relative to the biggest singular value (pole causes in-
crease of the value of the transfer function), i.e. first columns of U and V . Note that
in order to find this decomposition, the MATLABr command [U,V,D]=svd(P) has
been used.
For more informations and examples, please have a look at
http://karimpor.profcms.um.ac.ir/imagesm/354/stories/mul_con/multivariable_
lec4.pdf.

108
Gioele Zardini Control Systems II FS 2018

4.3 MIMO Stability


4.3.1 External Stability
The input-output stability (also known as external stability) describes the stability
properties of a system with respect to its input-output behaviour. Let’s consider the
system interaction depicted in Figure 24.

u y
G

Figure 24: Interpretation of induced norm.

Definition 12. A MIMO system y = Gu is said to be BIBO stable (i.e. bounded input
bounded output) if there exists a finite constant k ∈ R such that

kyk∞ ≤ kkuk∞ . (4.163)

Remark. A necessary and sufficient condition for BIBO stability is: the closed loop
transfer function
P (s) = C(sI − A)−1 B + D (4.164)
has all poles in the open left-half of the complex plane (all poles have real part strictly
smaller than 0).

4.3.2 Internal Stability


Consider the linear time invariant system
ẋ(t) = A · x(t) + B · u(t), x(t) ∈ Rn , u(t) ∈ Rm
(4.165)
y(t) = C · x(t) + D · u(t), y(t) ∈ Rp
where

x(t) ∈ Rn×1 , u(t) ∈ Rm×1 , y(t) ∈ Rp×1 , A ∈ Rn×n , B ∈ Rn×m , C ∈ Rp×n , D ∈ Rp×m .
(4.166)
Such a system is internally stable if for all initial conditions, and all bounded signals
injected at any place in the system, all states remain bounded for all future time.
Definition 13. The MIMO linear time invariant system described in Equation 4.165 is
BIBO stable if and only if C(sI − A)−1 B + D has all poles on the open left-half of the
complex plane (all poles have real part strictly smaller than 0).
Remark.
• Internal stability implies BIBO stability. The converse is not true.
• BIBO stability with controllability and observability imply internal stability.
This is a crucial concept: it is not sufficient for the input-output transfer function of
the system to be stable. In fact, internal transfer functions, related to the sensitivity
functions, must be stable as well to prevent pole/zero cancellations, which could hide
instabilities.

109
Gioele Zardini Control Systems II FS 2018

w1 e1
G(s)

K(s)
e2 w2

Figure 25: MIMO Loop.

Internal Stability Check


Assume a MIMO loop as the one depiced in Figure 25. It holds

E1 (s) = W1 (s) + K(s)E2 (s)


= W1 (s) + K(s) [G(s)E1 (s) + W2 (s)] (4.167)
= W1 (s) + K(s)G(s)E1 (s) + K(s)W2 (s),

from which it follows


(I − K(s)G(s)) E1 (s) = W1 (s) + K(s)W2 (s)
E1 (s) = (I − K(s)G(s))−1 W1 (s) + (I − K(s)G(s))−1 K(s)W2 (s).
(4.168)

Similarly, one can write

E2 (s) = W2 (s) + G(s)E1 (s)


= W2 (s) + G(s) [K(s)E2 (s) + W1 (s)] (4.169)
= W2 (s) + G(s)K(s)E2 (s) + G(s)W1 (s),

from which it follows


(I − G(s)K(s)) E2 (s) = W2 (s) + G(s)W1 (s)
E2 (s) = (I − G(s)K(s))−1 W2 (s) + (I − G(s)K(s))−1 G(s)W1 (s).
(4.170)

Resuming the calculations into matrix form, one gets

(I − K(s)G(s))−1 (I − K(s)G(s))−1 K(s)


     
E1 (s) W1 (s)
= · . (4.171)
E2 (s) (I − G(s)K(s))−1 G(s) (I − G(s)K(s))−1 W2 (s)

The necessary and sufficient condition for internal stability is: each of the four transfer
functions in relation 4.171 must be stable, i.e. smaller than 0 . Note: even if three of four
are stable, the system is not internally stable.

4.3.3 Lyapunov Stability


The Lyapunov stability theorem analyses the behaviour of a system near to its equilibrium
points when u(t) = 0. Because of this, we don’t care if the system is MIMO or SISO. The
three cases are
• Asymptotically stable: limt→∞ kx(t)k = 0;

110
Gioele Zardini Control Systems II FS 2018

• Stable: kx(t)k < ∞ ∀ t ≥ 0;

• Unstable: limt→∞ kx(t)k = ∞.

As it was done for the SISO case, one can show by using x(t) = eA·t · x0 that the stability
can be related to the eigenvalues of A through:

• Asymptotcally stable: Re(λi ) < 0 ∀ i;

• (Marginally) Stable: Re(λi ) ≤ 0 ∀ i;

• Unstable: Re(λi ) > 0 for at least one i.

4.3.4 Examples
Example 37. You are given the feedback control loop depicted in Figure 26.

w1 e1
G(s)

K(s)
e2 w2

Figure 26: MIMO Loop.

|
a) Derive the internal stability criterion, i.e. write the error vector e1 e2 as a
|
function of w1 w2 .

b) You are given


s−1 1
G(s) = , K(s) = − . (4.172)
s+1 s−1
Is the resulting system internally stable?

c) You are given


1
   1−s 
s−1
0 s+1
−1
G(s) = 1 , K(s) = . (4.173)
0 s+1 0 −1
Is the resulting system internally stable?

111
Gioele Zardini Control Systems II FS 2018

Solution.

a) Considering Figure 26, one can write

E1 (s) = W1 (s) + K(s)E2 (s)


= W1 (s) + K(s) [G(s)E1 (s) + W2 (s)] (4.174)
= W1 (s) + K(s)G(s)E1 (s) + K(s)W2 (s),

from which it follows


(I − K(s)G(s)) E1 (s) = W1 (s) + K(s)W2 (s)
E1 (s) = (I − K(s)G(s))−1 W1 (s) + (I − K(s)G(s))−1 K(s)W2 (s).
(4.175)

Similarly, one can write

E2 (s) = W2 (s) + G(s)E1 (s)


= W2 (s) + G(s) [K(s)E2 (s) + W1 (s)] (4.176)
= W2 (s) + G(s)K(s)E2 (s) + G(s)W1 (s),

from which it follows


(I − G(s)K(s)) E2 (s) = W2 (s) + G(s)W1 (s)
E2 (s) = (I − G(s)K(s))−1 W2 (s) + (I − G(s)K(s))−1 G(s)W1 (s).
(4.177)

Resuming the calculations into matrix form, one gets

(I − K(s)G(s))−1 (I − K(s)G(s))−1 K(s)


     
E1 (s) W1 (s)
= · .
E2 (s) (I − G(s)K(s))−1 G(s) (I − G(s)K(s))−1 W2 (s)
(4.178)
The necessary and sufficient condition for internal stability is: each of the four
transfer functions in relation 4.178 must be stable. (Note: even if three of four are
stable, the system is not internally stable).

b) It holds
  −1
−1 1
(I − K(s)G(s)) = 1 − −
s+1
 −1
s+2
=
s+1
s+1
= .
s+2 (4.179)
s+1
(I − K(s)G(s))−1 K(s) = − .
(s + 2)(s − 1)
s−1
(I − G(s)K(s))−1 G(s) = .
s+2
s+1
(I − G(s)K(s))−1 = .
s+2

112
Gioele Zardini Control Systems II FS 2018

Rewriting Equation 4.178 for this specific problem, one gets

(I − K(s)G(s))−1 (I − K(s)G(s))−1 K(s)


     
E1 (s) W1 (s)
= ·
E2 (s) (I − G(s)K(s))−1 G(s) (I − G(s)K(s))−1 W2 (s)
 s+1 s+1
− (s+2)(s−1)
  
W1 (s)
= s+2
s−1 s+1 · .
s+2 s+2
W2 (s)
(4.180)

One can notice hat the element in the first row, second column has a pole at s =
1 > 0, which causes the system to be not internally stable.

c) We start by computing the term in the first column, first row of Equation 4.178:
   s−1   1 −1
−1 1 0 − s+1 −1 s−1
0
(I − K(s)G(s)) = − · 1
0 1 0 −1 0 s+1
   1 1
−1
1 0 − s+1 − s+1
= − 1
0 1 0 − s+1
 s+2 1 −1
= s+1 s+1 (4.181)
0 s+2
s+1
(s + 1)2 s+2 1
 
s+1
− s+1
= s+2
(s + 2)2 0 s+1
 s+1 s+1 
− (s+2)2
= s+2 s+1 .
0 s+2

The term in the first row,second column of Equation 4.178 is


 s+1 s+1   s−1
− (s+2)

−1 s+2 2 − s+1 −1
(I − K(s)G(s)) K(s) = s+1 ·
0 s+2
0 −1
 s−1
− s+2 − s+1 s+1 
s+2
+ (s+2) 2
= s+1 (4.182)
0 − s+2
!
(s+1)2
− s−1
s+2
− (s+2) 2
= .
0 − s+1
s+2

The term in the second row, second column of Equation 4.178 is


   1   s−1 −1
−1 1 0 s−1
0 − s+1 −1
(I − G(s)K(s)) = − 1 ·
0 1 0 s+1 0 −1
   1 1
 −1
1 0 − s+1 − s−1
= − 1
0 1 0 − s+1
 s+2 1 −1
= s+1 s−1 (4.183)
0 s+2
s+1
(s + 1)2 s+2 1
 
s+1
− s−1
= s+2
(s + 2)2 0 s+1
2
!
s+1 (s+1)
− 2
= s+2 (s−1)(s+2)
s+1
.
0 s+2

113
Gioele Zardini Control Systems II FS 2018

The term in the second row,first column of Equation 4.178 is


! 
s+1 (s+1)2 1

−1 − 2 s−1
0
(I − G(s)K(s)) G(s) = s+2 (s−1)(s+2)
s+1
· 1
0 s+2
0 s+1
 s+1 (4.184)
s+1
− (s−1)(s+2)

2
= (s−1)(s+2) 1 .
0 s+2

It is easy to see, that the last two terms we computed contain a pole at s = 1 > 0,
which causes the system to be not internally stable.

114
Gioele Zardini Control Systems II FS 2018

4.4 MIMO Controllability and Observability


4.4.1 Controllability
Controllable: is it possible to control all the states of a system with an input u(t)?
Mathematically, a linear time invariant system is controllable if, for every state x∗ (t) and
every finite time T > 0, there exists an input function u(t), 0 < t ≤ T such that the
system can be driven from the initial state x(0) = x0 to x(T ) = x∗ (t).
A system of the form of the one represented in Equation 4.165 is said to be completely
controllable, if the controllability Matrix

R = B A · B A2 · B . . . An−1 · B ∈ Rn×(n·m) .

(4.185)

has full rank n (easy by checking row rank).

4.4.2 Observability
Observable: is it possible to reconstruct the initial conditions of all the states of a system
from the output y(t)?
A system is said to be completely observable, if the observability Matrix
 
C
 C ·A 
 
2 
O =  C · A  ∈ R(n·p)×n . (4.186)

 .. 
 . 
n−1
C ·A

has full rank n (easy by checking column rank).

115
Gioele Zardini Control Systems II FS 2018

Example 38. The dynamics of a system are given as


   
4 1 0 1 0
ẋ(t) = −1 2 0 · x(t) + 0 0 · u(t)
0 0 2 0 1 (4.187)
 
1 0 0
y(t) = · x(t).
0 1 1

Moreover the transfer function of the system is given as


(s−2)
!
2 0
P (s) = s −6s+9
−1 1
. (4.188)
s2 −6s+9 s−2

(a) Is the system Lyapunov stable, asymptotically stable or unstable?

(b) Is the system completely controllable?

(c) Is the system completely observable?

(d) The poles of the system are π1 = 2 and π2,3 = 3. The zero of the system is ζ1 = 2.
Are there any zero-pole cancellations?

116
Gioele Zardini Control Systems II FS 2018

Solution.

(a) First of all, one identifies the matrices as:


   
4 1 0 1 0
ẋ(t) = −1 2 0 ·x(t) + 0 0 ·u(t)
0 0 2 0 1
| {z } | {z }
A B (4.189)
   
1 0 0 0 0
y(t) = ·x(t) + ·u(t).
0 1 1 0 0
| {z } | {z }
C D

We have to compute the eingevalues of A. It holds


 
4−λ 1 0
det(A − λ · I) = |  −1 2 − λ 0 |
0 0 2−λ
 
4−λ 1
= (2 − λ) · | | (4.190)
−1 2 − λ
= (2 − λ) · ((4 − λ) · (2 − λ) + 1)
= (2 − λ) · (λ2 − 6λ + 9)
= (2 − λ) · (λ − 3)2 .

Since all the three eigenvalues are bigger than zero, the system is Lyapunov unsta-
ble.

(b) The controllability matrix can be found with the well-known multiplications:
   
4 1 0 1 0
A · B = −1
 2 0 · 0 0
0 0 2 0 1
 
4 0
=  −1 0 ,
0 2
    (4.191)
4 1 0 4 0
A2 · B = −1 2 0 · −1 0
0 0 2 0 2
 
15 0
= −6 0 .
0 4

Hence, the controllability matrix reads


 
1 0 4 0 15 0
R = 0 0 −1 0 −6 0 . (4.192)
0 1 0 2 0 4

This has full rank 3: the system ist completely controllable.

117
Gioele Zardini Control Systems II FS 2018

(c) The observability matrix can be found with the well-known multiplications:
 
  4 1 0
1 0 0 
C ·A= · −1 2 0
0 1 1
0 0 2
 
4 1 0
= ,
−1 2 2
  (4.193)
  4 1 0
4 1 0 
C · A2 = · −1 2 0
−1 2 2
0 0 2
 
15 6 0
= .
−6 3 4

Hence, the observability matrix reads


 
1 0 0
0 1 1
 
4 1 0
O=
−1
. (4.194)
 2 2
 15 6 0
−6 3 4

This has full rank 3: the system is completely observable.

(d) Although ζ1 = 2 and π1 = 2 have the same magnitude, they don’t cancel out.
Why? Since the system ist completely controllable and completely observable, we
have already the minimal realization of the system. This means that no more
cancellation is possible. The reason for that is that the directions of the two don’t
coincide. We will learn more about this in the next chapter.

118
Gioele Zardini Control Systems II FS 2018

Example 39. Given is the system


   
−2 0 0 1 0
ẋ(t) =  0 −2 5 · x(t) + 0 0 · u(t)
0 −1 0 1 1 (4.195)
 
−1 0 1
y(t) = · x(t),
0 1 0

with two inputs u1 (t) and u2 (t) and two outputs y1 (t) and y2 (t). The transfer function of
the system reads !
2s−1 s+2
(s2 +2s+5)·(s+2) s2 +2s+5
P (s) = 5 5
. (4.196)
s2 +2s+5 s2 +2s+5

(a) How many state variables are needed, in order to describe the input/output be-
haviour of the system?

(b) How many outputs are needed, in order to reconstruct the initial state x(0)?

(c) For every x(0) 6= 0 we have to ensure limt→∞ x(t) = 0. How many inputs are needed,
in order to ensure this condition?

119
Gioele Zardini Control Systems II FS 2018

Solution.

(a) We want to find the minimal order of the system, that is, the number of poles.

Minors 1st. Order


2s − 1 s+2 5
, 2 , 2 . (4.197)
(s2 + 2s + 5) · (s + 2) s + 2s + 5 s + 2s + 5

Minors 2nd. Order


2s − 1 5 s+2 5
· 2 − 2 · 2
(s2
+ 2s + 5) · (s + 2) s + 2s + 5 s + 2s + 5 s + 2s + 5
 
5 2s − 1 s+2
= 2 · −
s + 2s + 5 (s2 + 2s + 5) · (s + 2) s2 + 2s + 5
(4.198)
5 −s2 − 2s − 5
= 2 · 2
s + 2s + 5 (s + 2s + 5) · (s + 2)
−5
= 2 .
(s + 2s + 5) · (s + 2)
The pole-polynom reads
(s2 + 2s + 5) · (s + 2). (4.199)
and the poles

π1 = −2
(4.200)
π2,3 = −1 ± 2j.

The minimal order of the system is n = 3, since we have 3 poles. The given
description is already in the minimal realization.

Alternative

As alternative one can compute the controllability matrix and the observability
matrix as follows: for the controllability matrix it holds:
   
−2 0 0 1 0
A · B =  0 −2 5 · 0 0
0 −1 0 1 1
 
−2 0
=  5 5 .
0 0
    (4.201)
−2 0 0 −2 0
A2 · B =  0 −2 5 ·  5 5
0 −1 0 0 0
 
4 0
= −10 −10 .

−5 −5

120
Gioele Zardini Control Systems II FS 2018

The matrix reads  


1 0 −2 0 4 0
R = 0 0 5 5 −10 −10 . (4.202)
1 1 0 0 −5 −5
This matrix has full rank r = 3 = n: the system is completely controllable.
For the observability matrix it holds:
 
  −2 0 0
−1 0 1 
C ·A= · 0 −2 5
0 1 0
0 −1 0
 
2 −1 0
= .
0 −2 5
  (4.203)
  −2 0 0
2 −1 0 
C · A2 = · 0 −2 5
0 −2 5
0 −1 0
 
−4 2 −5
= .
0 −1 −10
The matrix reads  
−1 0 1
0 1 0 
 
 2 −1 0 
O=  . (4.204)
 0 −2 5 

−4 2 −5 
0 −1 −10
This matrix has full rank r = 3 = n: the system is completely observable.
Since the system is completely observable and controllable, the given state-space
description is already in its minimal realization, which means that the minimal
order is n = 3.
(b) If we take into account just the first output y1 (t), we get

C1 = −1 0 1 (4.205)
with its observability matrix
 
−1 0 1
 2 −1 0  . (4.206)
−4 2 −5
This matrix has full rank and the partial system is completely observable: this
means that x(0) cab be reconstructed from the first output.
Remark. This holds e.g. not for the second output y2 (t). One would get

C2 = 0 1 0 (4.207)
with its observability matrix
 
0 1 0
0 −2 5  . (4.208)
0 −1 −10
This observability matrix has rank r = 2 and so the system is not completely
observable: x(0) cannot be reconstructed from the second output only.

121
Gioele Zardini Control Systems II FS 2018

(c) None. The eigenvalues of the system are

λ1 = −2
(4.209)
λ2,3 = −1 ± 2j.

These eigenvalues are all asymptotically stable: this means that no matter which
input is given, the state will come back to its initial state.
Remark. If this wouldn’t be the case, we would proceed like in task (b): if one takes
into account just input u1 (t), one gets
 
1
B1 = 0 (4.210)
1

and its controllability matrix


 
1 −2 4
C = 0 5 −10 . (4.211)
1 0 −5

This controllability matrix has full rank and would satisfy the condition. You can
check yourselves that this wouldn’t be the case for u2 (t).

122
Gioele Zardini Control Systems II FS 2018

4.5 MIMO Performance Analysis


A good performance means
• good disturbance rejection,
• good noise attenuation,
• good reference tracking,
at input and output. Recalling the general MIMO loop depicted in Figure 27, we defined

d(t) n(t)
r(t) e(t) u(t) v(t) η(t) y(t)
F (s) = I C(s) P (s)

Figure 27: Standard feedback control system structure.

inner and outer loop transfer functions

LO (s) = P (s) · C(s) 6= C(s) · P (s) = LI (s), (4.212)

and the input/output sensitivity functions, i.e.


• Output sensitivity function (n → y)

SO (s) = (I + LO (s))−1 . (4.213)

• Output complementary sensitivity function (r → y)

TO (s) = (I + LO (s))−1 LO (s). (4.214)

• Input sensitivity function (d → v)

SI (s) = (I + LI (s))−1 . (4.215)

• Input complementary sensitivity function (d → −u)

TI (s) = (I + LI (s))−1 LI (s). (4.216)

4.5.1 Output Conditions


Referring to Figure 27, one can write
Y (s) = N (s) + η(s)
= N (s) + P (s)V (s)
= N (s) + P (s) (D(s) + U (s)) (4.217)
= N (s) + P (s) (D(s) + C(s)E(s))
= N (s) + P (s) (D(s) + C(s)(R(s) − Y (s))) ,

123
Gioele Zardini Control Systems II FS 2018

from which follows


(I + P (s)C(s))Y (s) = N (s) + P (s)D(s) + P (s)C(s)R(s)
(4.218)
Y (s) = (I + P (s)C(s))−1 (N (s) + P (s)D(s) + P (s)C(s)R(s)) .
Using the defined sensitivity functions, one can write
Y (s) = SO (s)N (s) + SO (s)P (s)D(s) + SO (s)LO (s)R(s). (4.219)

Disturbance Rejection
Equation 4.274 shows that the effects of the disturbance D(s) on the output can be re-
jected by making the output sensitivity function SO (s) small. Since typically disturbances
occur at low frequencies, one needs to do that only for this frequency range. How can we
relate this to what we have learned about singular values? It must hold
σ̄ (SO (jω)P (jω)) = σ̄ (I + P (jω)C(jω))−1 P (jω)


= σ̄ P (jω)(I + C(jω)P (jω))−1



push-through rule
(4.220)
= σ̄(P (jω)SI (jω))
 1,
where we used the push-through rule
G1 (I − G2 G1 )−1 = (I − G1 G2 )−1 G1 , (4.221)
and σ̄(H(jω) refers to the maximum singular value of H(jω).

Noise Attenuation
Similarly, Equation 4.274 shows that the effects of the noise N (s) on the output can be
attenuated by making the output sensitivity function SO (s) small. Since typically noise
occurs at high frequencies, one needs to do that only for this frequency range. It holds
σ̄ (SO (jω)) = σ̄ (I + P (jω)C(jω))−1

(4.222)
 1.

4.5.2 Input Conditions


Referring to Figure 27, one can write
V (s) = D(s) + U (s)
= D(s) + C(s)E(s)
= D(s) + C(s) (R(s) − Y (s))
(4.223)
= D(s) + C(s) (R(s) − N (s) − η(s))
= D(s) + C(s) (R(s) − N (s) − P (s)V (s))
= D(s) + C(s)R(s) − C(s)N (s) − C(s)P (s)V (s),
from which follows
(I + C(s)P (s)) V (s) = D(s) + C(s)R(s) − C(s)N (s)
(4.224)
V (s) = (I + C(s)P (s))−1 (D(s) + C(s)R(s) − C(s)N (s))
Using the defined sensitivity functions, one can write
V (s) = SI (s)D(s) + SI (s)C(s)R(s) − SI (s)C(s)N (s). (4.225)

124
Gioele Zardini Control Systems II FS 2018

Disturbance Rejection
Equation 4.225 shows that the effects of the disturbance D(s) on the input can be rejected
by making the input sensitivity function SI (s) small. Since typically disturbances occurr
at low frequencies, one needs to do that only for this frequency range. It must hold

σ̄ (SI (jω)) = σ̄ (I + C(jω)P (jω))−1



(4.226)
 1.

Noise Attenuation
Similarly, Equation 4.225 shows that the effects of the noise N (s) on the input can be
attenuated by making the input sensitivity function SI (s) small. Since typically noise
occurs at high frequencies, one needs to do that only for this frequency range. It holds

σ̄ (SI (jω)C(jω)) = σ̄ (I + C(jω)P (jω))−1 C(jω)



(4.227)
 1.

4.5.3 Reference Tracking


Referring to Figure 27, one can write

E(s) = R(s) − Y (s)


= R(s) − N (s) − η(s)
= R(s) − N (s) − P (s)V (s) (4.228)
= R(s) − N (s) − P (s) (D(s) + U (s))
= R(s) − N (s) − P (s)D(s) − P (s)C(s)E(s),

from which follows


(I + P (s)C(s)) E(s) = R(s) − N (s) − P (s)D(s)
(4.229)
E(s) = (I + P (s)C(s))−1 (R(s) − N (s) − P (s)D(s)) .

Using the defined sensitivity functions, one can write

E(s) = SO (s)(R(s) − N (s)) − SO (s)P (s)D(s) (4.230)

Disturbance Rejection
Equation 4.230 shows that the effects of the disturbance D(s) on the error can be rejected
by making the output sensitivity function SO (s) small. Since typically disturbances occurr
at low frequencies, one needs to do that only for this frequency range. It must hold

σ̄ (SO (jω)P (jω)) = σ̄ (I + P (jω)C(jω))−1 P (jω)




= σ̄ P (jω)(I + C(jω)P (jω))−1



push-through rule
(4.231)
= σ̄ (P (jω)SI (jω))
 1.

125
Gioele Zardini Control Systems II FS 2018

Noise Attenuation
Similarly, Equation 4.230 shows that the effects of the noise N (s) on the error can be
attenuated by making the output sensitivity function SO (s) small. Since typically noise
occurs at high frequencies, one needs to do that for this frequency range and for reference
relevant frequencies (we have R(s) in the term). It holds

σ̄ (SO (jω)) = σ̄ (I + P (jω)C(jω))−1



(4.232)
 1.

Remark. One can note that the reference tracking case resumes the other two cases.

4.5.4 Useful Properties


Given an invertible matrix A and a matrix B, it holds
(I) Inverse:
1
σ̄ A−1 =

, (4.233)
σ (A)
where σ(A) represents the smallest singular value of A.

(II) Sum:

σi (A) − σ̄(B) ≤ σi (A + B)
(4.234)
≤ σi (A) + σ̄(B).

In particular, it holds

σ(A) − 1 ≤ σ(I + A)
(4.235)
≤ σ(A) + 1.

(III) Product:

σ̄(AB) ≤ σ̄(A)σ̄(B)
(4.236)
σ(AB) ≤ σ(A)σ(B).

4.5.5 Towards Clearer Bounds


Assuming P (s) and C(s) are invertible, one can use the defined properties to write

σ(P (jω)C(jω)) − 1 ≤ σ(I + P (jω)C(jω)) ≤ σ(P (jω)C(jω)) + 1


(4.237)
σ(C(jω)P (jω)) − 1 ≤ σ(I + C(jω)P (jω)) ≤ σ(C(jω)P (jω)) + 1

For disturbance rejection, using Equations 4.220, 4.226, 4.231 one can write

σ̄(P (jω)SI (jω))  1


(4.238)
σ̄(SI (jω))  1.

For noise attenuation, using Equations 4.222, 4.227, 4.232 one can write

σ̄(SO (jω))  1
(4.239)
σ̄(SI (jω)C(jω))  1.

126
Gioele Zardini Control Systems II FS 2018

With the inverse property of singular values, we know that

σ̄(SI (jω)) = σ̄((I + C(jω)P (jω))−1 )


1
= ,
σ(I + C(jω)P (jω))
(4.240)
σ̄(SO (jω)) = σ̄((I + P (jω)C(jω))−1 )
1
= .
σ(I + P (jω)C(jω))

With Equation 4.237 and σ(C(jω)P (jω)) > 1, σ(P (jω)C(jω)) > 1, one can write
1 1
≤ σ̄(SI (jω)) ≤
σ(C(jω)P (jω)) + 1 σ(C(jω)P (jω)) − 1
(4.241)
1 1
≤ σ̄(SO (jω)) ≤ .
σ(P (jω)C(jω)) + 1 σ(P (jω)C(jω)) − 1

This implies

σ̄(SI (jω))  1 ⇔ σ(C(jω)P (jω))  1


(4.242)
σ̄(SO (jω))  1 ⇔ σ(P (jω)C(jω))  1

Disturbance Rejection
Suppose that P (s) and C(s) are invertible.

• Output: It holds

σ(P (jω)C(jω))  1 ⇔ σ̄(SO (jω)P (jω)) = σ̄((I + P (jω)C(jω))−1 P (jω))


≈ σ̄((P (jω)C(jω))−1 P (jω))
= σ̄(C(jω)−1 ) (4.243)
1
= .
σ(C(jω))

This implies:

σ̄(SO (jω)P (jω))  1 ⇔ σ(C(jω))  1, ∀ω ∈ (0, ωlow ). (4.244)

• Input: Considering Equation 4.241, one can write


1
σ̄(SI (jω)) ≥ . (4.245)
σ(C(jω)P (jω)) + 1

This implies

σ̄(SI (jω))  1 ⇔ σ(C(jω)P (jω))  1, ∀ω ∈ (0, ωlow ). (4.246)

Noise Attenuation
Suppose that P (s) and C(s) are invertible.

127
Gioele Zardini Control Systems II FS 2018

• Output: Using Equation 4.241, one can write


1
≤ σ̄(SO (jω)) (4.247)
σ(P (jω)C(jω)) + 1

This implies

σ̄(SO (jω))  1 ⇔ σ(P (jω)C(jω))  1, ∀ω ∈ (ωhigh , ∞). (4.248)

• Input: It holds

σ(C(jω)P (jω))  1 ⇔ σ̄(C(jω)SO (jω)) = σ̄(C(jω)(I + P (jω)C(jω))−1 )


≈ σ̄(C(jω)(P (jω)C(jω))−1 )
= σ̄(P (jω)−1 ) (4.249)
1
= .
σ(P (jω))

This implies

σ̄(C(jω)SO (jω))  1 ⇔ σ(P (jω))  1, ∀ω ∈ (ωhigh , ∞). (4.250)

4.5.6 Is this the whole Story? Tradeoffs


Robust Stability
One defines robust stability to be the stability in the presence of model uncertainty.
Let ∆ be a stable uncertainty matrix, such that

Preal (s) = (I + ∆) Pnominal (s) (4.251)

The perturbed closed loop transfer function is then characterized by

det(I + P (s)C(s)) → det(I + (I + ∆)P (s)C(s)) = det(I + P (s)C(s)) det(I + ∆TO ),


(4.252)

where we used

det(X + AB) = det(X) det(I + BX −1 A), ∀X : ∃X −1 (4.253)

Since
det(I + ∆TO ) ≈ 1, (4.254)
it holds
k∆TO k  1. (4.255)
This implies

σ̄(TO (jω))  1 ⇒ σ̄(LO (jω))  1, ∀ω ∈ (ωhigh , ∞). (4.256)

Remark. Note that typically ∆ becomes important at high frequencies.

128
Gioele Zardini Control Systems II FS 2018

Actuator Saturation
Using Figure 27, one can derive
U (s) = C(s)SO (s)R(s) − TI (s)D(s) − C(s)SO (s)N (s). (4.257)
With the defined conditions, it holds
U (s) ≈ C(s) (R(s) − N (s)) . (4.258)
In order to avoid the actuator saturation, the controller gain cannot be chosen too big,
i.e.
σ̄(C(jω)) ≤ M, ∀ω ∈ (ωhigh , ∞). (4.259)

4.5.7 Summary
The specifications we derived are resumed in Figure 28. Mathematically, we have found:

Figure 28: Desired Loop Gain

Disturbance Rejection
At frequency ω ∈ (0, ωlow ) holds
σ(C(jω))  1,
σ(C(jω)P (jω)  1, (4.260)
σ(P (jω)C(jω))  1.

Noise Attenuation
At frequency ω ∈ (ωhigh , ∞) holds
σ̄(C(jω)) ≤ M,
σ̄(C(jω)P (jω)  1, (4.261)
σ̄(P (jω)C(jω))  1.

129
Gioele Zardini Control Systems II FS 2018

4.6 MIMO Robust Stability


4.6.1 MIMO Robustness
All models are wrong, but some are useful. (4.262)
A model maps inputs into outputs and we consider good a model which predicts the
outputs accurately. The difference between a model prediction and reality (which is never
0) is referred to as model uncertainty.

Modeling Uncertainty
Let P (s), C(s) be the nominal MIMO plant and an internally stabilizing controller, re-
spectively. Let’s define ∆(s), W1 (s), W2 (s) to be stable, rational and proper transfer
matrices. We call W1 (s) and W2 (s) weighting functions. ∆(s) is the modeling error. We
represent uncertainty as
W1 (s)∆(s)W2 (s). (4.263)
Let’s define Π(s) to be the set of perturbed plants such that P (s) ∈ Π(s).

Unstructured Uncertainty

d(t) n(t)
r(t) e(t) u(t) v(t) η(t) y(t)
F (s) = I C(s) Π(s)

Figure 29: Standard feedback control system structure.

Using Figure 29 and the Equations for internal stability, one can write

(I − C(s)Π(s))−1 (I − C(s)Π(s))−1 C(s)


     
E1 (s) W1 (s)
= · . (4.264)
E2 (s) (I − Π(s)C(s))−1 Π(s) (I − Π(s)C(s))−1 W2 (s)

For unstructured uncertainty, nothing more can be said without writing the relation
between the uncertainty and the plant.

Additive Uncertainty
Theorem 4. (robust stability under additive uncertainty). Let

Π(s) = {P + W1 (s)∆(s)W2 (s) : ∆ rational, proper and stable} (4.265)

and let C(s) be a stabilizing controller for the nominal plant P (s). Then, the closed loop
system is well-posed (i.e., realizable) and internally stable for all k∆k∞ < 1 if and only if
kW2 (s)C(s)SO (s)W1 (s)k∞ ≤ 1.

130
Gioele Zardini Control Systems II FS 2018

Multiplicative Uncertainty
Theorem 5. (robust stability under multiplicative uncertainty). Let

Π(s) = (I + W1 (s)∆(s)W2 (s)) P (s) : ∆ rational, proper and stable} (4.266)

and let C(s) be a stabilizing controller for the nominal plant P (s). Then, the closed loop
system is well-posed (i.e., realizable) and internally stable for all k∆k∞ < 1 if and only if
kW2 (s)TO W1 (s)k∞ ≤ 1.
Definition 14. Robust stability: Given a controller C, one determines whether the
system remains stable for all possible plants P in the uncertaint set.

4.6.2 SISO Case


In order to understand what we will address in this section, let’s have a look at the SISO
case. Let’s assume multiplicative uncertainty, i.e.

P (s) = P0 (s) (1 + W (s)∆(s)) , with |∆(jω)| ≤ 1∀ω, (4.267)

where P (s) represents the perturbed plant and P0 (s) the nominal plant. Assuming a
controller which stabilizes the nominal plant, one has

|1 + L0 (jω)| > 0. (4.268)

If one looks at the perturbed plant, instead, one has


1 + L(jω) = 1 + P (jω)C(jω)
= 1 + P0 (jω)C(jω) +W (jω)∆(jω)P0 (jω)C(jω). (4.269)
| {z }
L0 (jω)

In order to ensure stability even in the worst case scenario, it should hold
|1 + L(jω)| > |1 + L0 (jω)| − |L0 (jω)W (jω)∆(jω)|
|∆| ≤ 1 > |1 + L0 (jω)| − |L0 (jω)W (jω)| (4.270)
> 0.
From this it follows
W (jω)L0 (jω)
| | < 1. (4.271)
1 + L0 (jω)
What we want to do, is to be able to write such relations for MIMO systems.

4.6.3 Linear Fractional Transform (LFT)


In order to analyze robust stability, it is worth first to separate the nominal plant from the
uncertainty which affects it. Assuming a nominal plant P0 (s) and a feedback controller
C(s) that stabilizes P0 (s), one can write the problem as in Figure 30. Note that the
generalization for an uncertainty block W1 (s)∆(s)W2 (s) instead of ∆(s) is trivial and can
be used as well. Note that w(t) = (r(t), d(t), n(t)) represents the exogenous inputs and
z(t) = (y(t), u(t), e(t)) represents the regulated variables. One can write
     
U∆ (s) M (s) N (s) Y∆ (s)
= · . (4.272)
Z(s) J(s) L(s) W (s)

131
Gioele Zardini Control Systems II FS 2018

y∆ (t) u∆ (t)

G0 (s) = P0 (s)C(s)
w(t) z(t)

Figure 30: Standard feedback control system structure.

   
U∆ (s) Y∆ (s)
Note that represents the plant outputs and the plant inputs, where
Z(s) W (s)
Y∆ (s) = ∆(s) · U∆ (s). In order for the system to be internally stable, each element of
the matrix  
M (s) N (s)
(4.273)
J(s) L(s)
must be stable itself. By looking at the transfer function which relates z(t) to w(t) one
has
Z(s) = J(s)Y∆ (s) + L(s)W (s)
(4.274)
= J(s)∆(s)U∆ (s) + L(s)W (s)
Furthermore, the first equation of the system reads
U∆ (s) = M (s)Y∆ (s) + N (s)W (s)
(4.275)
W (s) = N −1 [(I − M (s)∆(s)) U∆ (s)] .
By pluggin Equation 4.275 in Equation 4.274 one gets
Z(s) = J(s)∆(s)U∆ (s) + L(s)W (s)
= J(s)∆(s)U∆ (s)W −1 (s) + L(s) W (s)
 
(4.276)
= J(s)∆(s)U∆ (s)U∆ (s)−1 (I − M (s)∆(s))−1 N (s) + L(s) W (s)
 

= J(s)∆(s) (I − M (s)∆(s))−1 N (s) + L(s) W (s).


 

This means that the transfer function from w(t) to z(t) is

Gzw (s) = J(s)∆(s) (I − M (s)∆(s))−1 N (s) + L(s). (4.277)

The internal stability of the perturbed closed-loop system requires this transfer function
to be stable for all possible perturbations ∆(s). Since from above M (s), N (s), J(s), L(s)
are stable, Gzw (s) is stable for all stable (I − M (s)∆(s))−1 .

4.6.4 Unstructured Small Gain Theorem


Theorem 6. Let the set of allowable model uncertainties be
˜ = {∆ : k∆k∞ ≤ 1}
∆ (4.278)

and let M be stable. Then, (I − M (s)∆(s))−1 and ∆ (I − M (s)∆(s))−1 are stable, for all
˜ if and only if kM k∞ < 1.
∆ ∈ ∆,

132
Gioele Zardini Control Systems II FS 2018

Proof. We first prove sufficiency and then necessity.

(I) Sufficiency: we show that (I − M (s)∆(s)) has no zeros ζ in the right-half plane.
In particular, we show

kM k∞ < 1 ⇒ (I − M (s)∆(s))−1 stable. (4.279)

It holds
k(I − M (ζ)∆(ζ))xk2 > 0, ˜
x 6= 0, ∀∆ ∈ ∆
triangle inequality k(I − M (ζ)∆(ζ))xk2 ≥ kxk2 − kM (ζ)∆(ζ)xk2
induced matrix norm ≥ kxk2 − σ̄ (M (ζ)∆(ζ)) kxk2
≥ kxk2 − kM (ζ)k∞ k∆(ζ)k∞ kxk2
| {z }
≤1

> 0,
(4.280)

where in the last step we used the fact that σ̄(H(s)) ≤ kH(s)k∞ for stable and
causal H(s).
˜
(II) Necessity: we show by construction, that if σ̄(M (jω0 )) > 1, there exists a ∆ ∈ ∆
−1
such that (I − M ∆) is unstable, i.e.

det(I − M (jω0 )∆(jω0 ) = 0. (4.281)

In particular, we show

¬kM k∞ < 1 ⇒ ¬ (I − M (s)∆(s))−1 stable. (4.282)

Let’s write the singular value decomposition of M as


 
σ1 0 . . . 0
 0 σ2 . . . 0 
 ∗
M (jω0 ) = U  .. ..  V , σ1 > 1. (4.283)

.. . .
. . . .
0 0 . . . σp

We choose a ∆ such that


 −1 
σ1 0 ... 0
 0 0 ... 0  ∗
∆(jω0 ) = V  .. ..  U , k∆k∞ < 1. (4.284)

.. . .
 . . . .
0 0 ... 0

133
Gioele Zardini Control Systems II FS 2018

It holds then
   
σ1 0 ... 0 σ1−1 0 ... 0
 0 σ2 ... 0  0 0 ... 0
 ∗  ∗
(I − M (jω0 )∆(jω0 )) = I − U  .. ..  V V ..  U
 
.. ...  .. .. . .
. . .  . . . .
0 0 . . . σp 0 0 ... 0
  
1 0 ... 0
 0 0 . . . 0 
 ∗
= U I −  .. .. U
 
.
. . . 
 . . . . 
0 0 ... 0
 
0 1 ... 0
0 0 ... 0  ∗
= U  .. ..  U ,

.. . .
. . . .
0 0 ... 1
| {z }
I
(4.285)

where I is clearly not invertible.

4.6.5 From the Block-Diagram to the LFT


One follows usually this procedure

1. Define the input and the output of each perturbation block ∆i as (u∆,i , y∆,i ) and let
| |
u∆ = u∆,1 . . . u∆,q , y∆ = y∆,1 . . . y∆,q , (4.286)
where q is the number of uncertainties in the loop.

2. Compute each component of the transfer matrix M as the map between the (i, j)-th
inputs and outputs to each uncertainty block, assuming ∆i = I ∀i = 1, . . . , q, i.e.
 
M1,1 (s) M1,2 (s) . . . M1,q (s)
.. .. .. 
M2,1 (s)
 . . .  U∆,i
M (s) =  . . ... . , Mi,j = . (4.287)
 .. .. ..  Y∆,j

Mq,1 (s) ... . . . Mq,q (s)

3. The uncertainty block will be block diagonal in the MIMO case and diagonal in the
SISO one:
∆ = diag (∆1 , . . . , ∆q ) , k∆i k∞ < 1. (4.288)

Example 40. (Additive Uncertainty) You are given the system depicted in Figure 34
and the input output behaviour depicted in Figure 32, where

η(s) = P0 (s)U (s) + W1 (s)∆(s)W2 (s)U (s). (4.289)

134
Gioele Zardini Control Systems II FS 2018

d(t) = 0 n(t) = 0
r(t) e(t) u(t) u(t) η(t) y(t)
C(s) Π(s)

Figure 31: Additive Uncertainty Control System Loop

u(t) η(t)
Π(s) = P0 (s) + W1 (s)∆(s)W2 (s)

Figure 32: Input/Output Behaviour

In order to find the transfer function M , one rewrites the problem as depicted in Figure
33. It holds
U∆ (s) = W2 (s)U (s) (4.290)
and
U (s) = C(s)(R(s) − Y (s))
= −C(s) (P0 (s)U (s) + W1 (s)Y∆ (s))
= −C(s)W1 (s)Y∆ (s) − C(s)P0 (s)U (s) (4.291)
(I + C(s)P0 (s))U (s) = −C(s)W1 (s)Y∆ (s)
U (s) = −(I + C(s)P0 (s))−1 C(s)W1 (s)Y∆ (s),

from which it follows


U∆ (s) = W2 (s)U (s)
= − W2 (s)(I + C(s)P0 (s))−1 C(s)W1 (s) Y∆ (s). (4.292)
| {z }
M (s)

u∆ (t) y∆ (t)
W2 (s) ∆(s) W1 (s)

r(t) = 0 e(t) u(t) P0 U η(t) y(t)


C(s) P0 (s)

Figure 33: Additive Uncertainty Control System Loop

135
Gioele Zardini Control Systems II FS 2018

Example 41. (Multiplicative Uncertainty) You are given the system depicted in
Figure 34 and the input output behaviour depicted in Figure 35, where

d(t) = 0 n(t) = 0
r(t) e(t) u(t) u(t) η(t) y(t)
C(s) Π(s)

Figure 34: Multiplicative Uncertainty Control System Loop

η(s) = P0 (s)U (s) + P0 (s)W1 (s)∆(s)W2 (s)U (s). (4.293)

u(t) η(t)
Π(s) = P0 (s)(I + W1 (s)∆(s)W2 (s))

Figure 35: Input/Output Behaviour

In order to find the transfer function M , one rewrites the problem as depicted in Figure
36. It holds

u∆ (t) y∆ (t)
W2 (s) ∆(s) W1 (s)

P0 (s)

r(t) = 0 e(t) u(t) P0 U η(t) y(t)


C(s) P0 (s)

Figure 36: Multiplicative Uncertainty Control System Loop

U∆ (s) = W2 (s)U (s) (4.294)


and
U (s) = C(s)(R(s) − Y (s))
= −C(s) (P0 (s)U (s) + P0 (s)W1 (s)Y∆ (s))
= −C(s)P0 (s)W1 (s)Y∆ (s) − C(s)P0 (s)U (s) (4.295)
(I + C(s)P0 (s))U (s) = −C(s)P0 (s)W1 (s)Y∆ (s)
U (s) = −(I + C(s)P0 (s))−1 C(s)P0 (s)W1 (s)Y∆ (s),

136
Gioele Zardini Control Systems II FS 2018

from which it follows


U∆ (s) = W2 (s)U (s)
= − W2 (s)(I + C(s)P0 (s))−1 C(s)P0 (s)W1 (s) Y∆ (s). (4.296)
| {z }
M (s)

4.6.6 Recasting Performance in a Robust Stability Problem


One can summarize robust stability conditions in bounding the infinity norm of selected
functions. We are here assuming that we want to attenuate noise on the output. Let
knk2 < 1 be the norm of the noise signal and Wn (s) a weighting function to rescale and
shape the frequency content of the signal. We define the norm of the noise to be like this
in order to use the form of the problem we derived previously: the weighting function
Wn exists exactly for re-modulating the importance of n to its correct value. Using the
nominal performance approach, one uses the diagram depicted in Figure 37 and writes

Y (s) = (I + P (s)C(s))−1 Wn (s)N (s) + . . . ⇒ kS0 (s)Wn (s)k∞  1 (4.297)

Wn (s)

d(t) = 0
ν
r(t) e(t) u(t) u(t) η(t) y(t)
C(s) P (s)

Figure 37: Robust Performance Problem

The same result is obtained considering the following loop and treating it as a robust
stability problem, as in Figure 38. One can then identify the transfer function
Y
M (s) = = SO (s)Wn (s) ⇒ kM k∞ . (4.298)
N

4.7 MIMO Robust Performance


Definition 15. Robust Performance: The effect of exogenous signals in presence
of plant uncertainty can degrade performance to unacceptable levels before the system
goes unstable. We need a robust performance test to evaluate the worst case effect of
performance, given uncertainty.

Before having a closer look to the problem, let’s recall what we have seen so far:

• Nominal Stability (NS): The controller internally stabilizes the (nominal) plant.

137
Gioele Zardini Control Systems II FS 2018

Wn (s) ∆p (s)

d(t) = 0
ν
r(t) e(t) u(t) u(t) η(t) y(t)
C(s) P (s)

Figure 38: Robust Stability Problem

• Robust Stability (RS) The controller internally stabilizes all plants parametrized
through model uncertainty.

• Nominal Performance (NP): is guaranteed by imposing constraints on the in-


finity norm of some sensitivity function, given nominal stability.

• Robust Performance (RP): like NP, but for all plants within a given model set.

4.7.1 Problem Definition


Given a nominal plant P0 (s) and a model uncertainty parametrization ∆(s), find condi-
tions on the nominal closed loop system, such that

1. The controller C(s) stabilizes the closed loop system for all P ∈ Π with

Π = {(I + W1 (s)∆(s)W2 (s))P0 (s) : W1 (s), W2 (s), ∆, rational, proper, stable}


(4.299)

2. A performance metric on some relevant transfer function is satisfied for all P ∈ Π.

4.7.2 M-Delta Approach: from RP to RS


A robust performance as the one depicted in Figure 39 (with k∆r k∞ < 1 and k∆p k∞ < 1),
can be transformed in a robust stability problem as the one depicted in Figure 40.
In particular, one can show
    
U∆,1 −W2 (s)TO (s)W1 (s) −W2 (s)TO (s)Wn (s) Y∆,1
= (4.300)
U∆,2 SO (s)W1 (s) SO (s)Wn (s) Y∆,2 .
| {z }
M

Using the small gain theorem, a sufficient condition for robust performance is

kM k∞ < 1. (4.301)

138
Gioele Zardini Control Systems II FS 2018

u∆,1 (t) y∆,1 (t)


W2 (s) ∆r (s) W1 (s) Wn (s) ∆p (s)
y∆,2

r(t) = 0 e(t) u(t) P0 U η(t) y(t)


C(s) P0 (s)

Figure 39: M-Delta Approach

 
∆r 0
0 ∆p

Figure 40: M-Delta Approach

4.7.3 Structured Singular Value


Definition
The approach we have seen in the previous section applies to a diagonal uncertainty. How
˜
can we handle any uncertainty ∆ ∈ ∆?

Intuition: The Structured Singular Value is a generalization of the maximum singu-


lar value and the spectral radius. Through SSV, a generalized small gain theorem is
obtained. This accounts for the structure of uncertainty.

Definition 16. Mu: Given ∆, find the smallest (in terms of σ̄(∆)) ∆ which makes

det(I − M (s)∆(s)) = 0. (4.302)

Then:
1
µ(M ) = . (4.303)
σ̄(∆)
˜ then µ(M ) = 0.
If det(I − M (s)∆(s)) 6= 0 ∀∆ ∈ ∆,
˜ with
Theorem 7. (SSV Robust Stability) The M − ∆ system is stable for all ∆ ∈ ∆
k∆k∞ < 1 if and only if
sup µ(M (jω)) < 1. (4.304)
ω

Remark. Mu is a measure of the smallest perturbation that sends the system unstable.

139
Gioele Zardini Control Systems II FS 2018

Properties
(I)
µ(M ) ≥ 0. (4.305)

(II) It holds
˜ = {∆|∆ ∈ Cp×q , full matrix} ⇒ µ(M ) = σ̄(M ).
∆ (4.306)

(III) It holds
˜ = {λI|λ ∈ C} ⇒ µ(M ) = ρ(M ) = |λmax (M )|.
∆ (4.307)
because inf λ−1 (M ) = ρ(M ).

(IV) It holds
˜ = {diag(∆1 , . . . , ∆q )|∆i is complex} ⇒ ρ(M ) ≤ µ(M ) ≤ σ̄(M )
∆ (4.308)

(V) It holds
˜ = {diag(∆1 , . . . , ∆q )|∆i is complex} ⇒ µ(M ) = µ(D−1 M D), ∀D ∈ D, (4.309)

where D = {D = diag(d1 , . . . , dn )|di > 0}, D∆ = ∆D.

Remark.

• SSV provides a necessary and sufficient condition for RS (and thus RP), provided
mu. This leads to a less conservative bound than the infinity norm condition.

• Computing mu is very tricky. There exist numerical approaches to refine upper and
lower bounds for mu.

• The bounds are defined as

µ(M ) = µ(D−1 M D) ≤ inf σ̄(D−1 M D) (4.310)


D∈D

Robust Performance Noise Rejection: SISO Case


One can recover the structure defined in the previous chapters, but the SISO case offers
some simplifications:
W1 (s) = 1, S0 , T0 → S, T. (4.311)
It follows  
−W2 (s)T (s) −W2 (s)T (s)Wn (s)
M= . (4.312)
S(s) S(s)Wn (s)
d2
Let D = diag(d1 , d2 ) and α = d1
with |d1 |, |d2 | < 1. It must hold

µ(M (jω)) = µ(D−1 M (jω)D)


| {z }
A(α)
1
(4.313)
≤ inf λmax (A∗ (α)A(α))
2

|α|>0

< 1.

We perform the analysis following specific steps:

140
Gioele Zardini Control Systems II FS 2018

1. We fix ω and find A(α) and A∗ (α)A(α).


In the SISO case, the matrix ∆ is diagonal and we define
 
d1 0 d2
∆= , d1 , d2 < 1, α= . (4.314)
0 d2 d1

Since µ(M ) = µ(D−1 M D), let’s set D = ∆ and write


1   
−1 0 −W 2 (s)T (s) −W 2 (s)T (s)W n (s) d 1 0
A(α) = D M D = d1 1
0 d2 S(s) S(s)Wn (s) 0 d2
!
− W2 (s)T (s)
− W2 (s)Td(s)W n (s)

d1 d1 0
= S(s)
1
S(s)Wn (s)
d2 d2
0 d2
 
−W2 (s)T (s) −αW2 (s)T (s)Wn (s)
= 1 .
α
S(s) S(s)Wn (s)
(4.315)

Furthermore, it holds (by dropping the s in the notation for simplicity)


1
! !
− W̄2 T̄ (s) α
S̄ −W 2 T −αW 2 T Wn
A∗ (α)A(α) = 1
−αW̄2 T̄ W̄n S̄ W̄n α
S SWn
kSk22
!
kW2 k22 kT k22 + α2
αkW2 k22 kT k22 Wn + α1 kSk22 Wn
= .
1
αkW2 k22 kT k22 W̄n + α
kSk22 W̄n α2 kW2 k22 kT k22 kWn k22 + kSk22 kWn k22
(4.316)

2. We find λmax (α), i.e. the biggest λ from det(A∗ (α)A(α) − λI) = 0. It holds
| {z }
I

kSk22
!
kW2 k22 kT k22 + α2
−λ αkW2 k22 kT k22 Wn + α1 kSk22 Wn
det(I) = det
αkW2 k22 kT k22 W̄n + α1 kSk22 W̄n α2 kW2 k22 kT k22 kWn k22 + kSk22 kWn k22 − λ
= α2 kW2 k42 kT k42 kWn k22 + kW2 k22 kW2 k22 kT k22 kSk22 kWn k22 − λkW2 k22 kT k22
2 2 2 2 kSk42 kWn k22 λ
+ kW2 k2 kT k2 kSk2 kWn k2 + 2
− 2 kSk22 − λα2 kW2 k22 kT k22 kWn k22
α α
− λkSk22 kWn k22 + λ2 − α2 kW2 k42 kT k42 kWn k22 − 2kW2 k22 kW2 k22 kT k22 kSk22 kWn k22
1
− 2 kSk42 kWn k22
α  
2 2 2 2 2 2 2 2 2 1 2
= λ − λ kW2 k2 kT k2 + α kW2 k2 kT k2 kWn k2 + kSk2 kWn k2 + 2 kSk2 ,
α
(4.317)

from which it follows


1
λmax (A∗ (α)A(α)) = kW2 k22 kT k22 + α2 kW2 k22 kT k22 kWn k22 + kSk22 kWn k22 + kSk22 .
α2
(4.318)

141
Gioele Zardini Control Systems II FS 2018

3. We now want to minimize this with respect to α. It holds


d
(λmax ) = 0

1
2αkW2 k22 kT k22 kWn k22 − 2 3 kSk22 = 0
α (4.319)
2α4 kW2 k22 kT k22 kWn k22 − 2kSk22 = 0
kSk2
α2 = .
kW2 k2 kT k2 kWn k2

4. By plugging this into the original equation one gets

kSk2
µ(M ) = λmax = kW2 k22 kT k22 + kW2 k22 kT k22 kWn k22 + kSk22 kWn k22
kW2 k2 kT k2 kWn k2
kW2 k2 kT k2 kWn k2
+ kSk22
kSk2
= kW2 k22 kT k22 + 2kSk2 kW2 k2 kT k2 kWn k2 + kSk22 kWn k22
= (kSk2 kWn k2 + kW2 k2 kT k2 )2
(4.320)

5. The condition on µ implies

kSk2 kWn k2 + kW2 k2 kT k2 < 1. (4.321)

142
Gioele Zardini Control Systems II FS 2018

5 MIMO Control Fundamentals


5.1 Decentralized Control
5.1.1 Idea and Definitions
As we have introduced in previous lectures, the generalization from SISO to MIMO sys-
tems adds crosscouplings and complexities to the control problem. In general, one can
divide the control strategies into two philosophies:
1. Avoid the MIMO complexity by trying to use SISO controllers. How?
• Decentralized control: every input signal is determined only by a feedback from
one output.
• Pairing problem: choose use of input-output paris for feedback.
• Decoupled control: change of variables to facilitate input-output pairing.
2. Centralized multivariable control, optimizing some cost function, e.g.
• Linear Quadratic Regulator (LQR, next episode).
• H-infinity control
The first philosophy results in suboptimal solutions and requires less modeling effort.
The second philosophy results in optimal results, but the modeling effort increases. Let’s
address the problem more spefifically:
Definition 17. Decentralized control: when the control systems consists of indepen-
dent feedback controllers which interconnect a subset of the output measurements with a
subset of manipulated inputs. These subsets should not be used by any other controller.
This represents a good strategy if the the MIMO system shows a low degree of inter-
action between inputs and outputs. How can we evaluate this property? Let’s have a
look at a generic 2 × 2 MIMO system with full rank and same number of inputs ui (t) and
outputs ui (t). For the coupled system one can write
     P 
Y1 (s) P1 1(s) P12 (s) U1 (s) i P 1i (s)Ui (s)
= = P , (5.1)
Y2 (s) P21 (s) P22 (s) U2 (s) i P2i (s)Ui (s)

i.e. each input affects each output. For a decoupled system, one can e.g. write
      
Y1 (s) P11 (s) 0 U1 (s) P11 (s)U1 (s)
= = , (5.2)
Y2 (s) 0 P22 (s) U2 (s) P22 (s)U2 (s)
i.e. the system behaves like a union of non interacting SISO systems. Furthermore, if
one assumes a non-square system, for a general system P (s) ∈ Rl×n , one can meet the
following two cases:
1. Tall system(l > m): we have more outputs than inputs, i.e. not all outputs are
affected by an input. Which outputs are best controlled with which inputs?
2. Fat system(l < m): we have more inputs than outputs. How to distribute control
action over the inputs?
In the next section, we will introduce a systematic way to address this kind of problems.

143
Gioele Zardini Control Systems II FS 2018

5.1.2 Relative-Gain Array (RGA)


As introduced in the previous section, if a system has a specific decoupled form, one can
avoid complex control strategies and use independent SISO controllers. In some cases,
this reasoning is actually the good one, but how can one distinguish when to use this
approach?
The RGA-matrix tells us how the different subplants of a MIMO plant interact: this
matrix is a good indicator of how SISO a system is.
This matrix can be generally calculated as

RGA(s) = P (s). × P (s)−T (5.3)

where
P (s)−T = (P (s)T )−1 . (5.4)
and A. × A represents the element-wise, Shur multiplication (A.*A in Matlab). If
P (s) is not invertible (recall tall, fat and non inverbile square systems), one needs to
generalize the inverse with the Moore-Penrose Inverse. Recalling P (s) ∈ Rl×m one can
define two cases:

• Tall system(l > m): if rank(P (s)) = m,

A† = (A∗ A)−1 A∗ , A† A = Im . (5.5)

• Fat system(l < m): if rank(P (s)) = l,

A† = A∗ (AA∗ )−1 , AA† = Il . (5.6)

In general, each element of the matrix gives us a special information:


gain from ua to yb with all other loops open
[RGA]ab = . (5.7)
gain from ua to yb with all other loops closed (perfect control)

Remark. It’s intuitive to notice, that if

[RGA]ab ≈ 1 (5.8)

the numerator and the denominator are equal, i.e. SISO control is enough to bring ua at
yb .
Remark. The theory behind the relative-gain array goes far beyond the aim of this course
and one should be happy with the given examples. If however you are interested in this
topic, you can have a look here.
Let’s take the example of a 2 × 2 plant: in order to compute the first element (1, 1) of the
RGA(s) we consider the system depicted in Figure 41. We close with a SISO controller
C22 (s) the loop from y2 (t) to u2 (t) and try to compute the transfer function from u1 (t)
to y1 (t).
Everyone has his special way to decouple a MIMO system. I’ve always used this procedure:
starting from the general equation in frequency domain
     
Y1 (s) P11 (s) P12 (s) U1 (s)
= · , (5.9)
Y2 (s) P21 (s) P22 (s) U2 (s)

144
Gioele Zardini Control Systems II FS 2018

one can read


Y1 (s) = P11 (s) · U1 (s) + P12 (s) · U2 (s)
(5.10)
Y2 (s) = P21 (s) · U1 (s) + P22 (s) · U2 (s).
Since we want to relate u1 (t) and y1 (t) let’s express u2 (t) as something we know. Using
the controller C22 (s) we see
U2 (s) = −C22 (s) · Y2 (s)
= −C22 (s) · P21 (s) · U1 (s) − C22 (s) · P22 (s) · U2 (s)
(5.11)
−C22 (s) · P21 (s) · U1 (s)
⇒ U2 (s) = .
1 + P22 (s) · C22 (s)
With the general equation one can then write
Y1 (s) = P11 (s) · U1 (s) + P12 (s) · U2 (s)
−C22 (s) · P21 (s) · U1 (s)
= P11 (s) · U1 (s) + P12 (s) ·
1 + P22 (s) · C22 (s) (5.12)
P11 (s) · (1 + P22 (s) · C22 (s)) − P12 (s) · C22 (s) · P21 (s)
= · U1 (s).
1 + P22 (s) · C22 (s)
We have found the general transfer function that relates u1 (t) to y1 (t). We now consider
two extreme cases:
• We assume open loop conditions, i.e. all other loops open: C22 ≈ 0. One gets
Y1 (s) = P11 (s) · U1 (s). (5.13)

• We assume high controller gains, i.e. all other loops closed : P22 (s) · C22 (s)  1.
One gets
P11 (s) · (1 + P22 (s) · C22 (s)) − P12 (s) · C22 (s) · P21 (s)
lim
C22 (s)→∞ 1 + P22 (s) · C22 (s)
(5.14)
P11 (s) · P22 (s) − P12 (s) · P21 (s)
= .
P22 (s)

As stated before, the first element of the RGA is the division of these two. It holds
P11 (s)
[RGA]11 = P11 (s)·P22 (s)−P12 (s)·P21 (s)
P22 (s)
(5.15)
P11 (s) · P22 (s)
= .
P11 (s) · P22 (s) − P12 (s) · P21 (s)
Remark. As you can see, the definition of the element of the RGA matrix does not depend
on the chosen controller C22 (s). This makes this method extremely powerful.
By repeating the procedure one can try to find [RGA]22 . In order to do that one has to
close the loop from y1 (t) to u1 (t): the result will be exactly the same:
[RGA]11 = [RGA]22 . (5.16)
Let’s go a step further. In order to compute the element [RGA]21 , one has to close the
loop from y1 (t) to u2 (t) and find the transfer function from u1 (t) to y2 (t).

145
Gioele Zardini Control Systems II FS 2018

Remark. This could be a nice exercise to test your understanding!


With a similar procedure one gets
−P12 (s) · P21 (s)
[RGA]21 = . (5.17)
P22 (s) · P11 (s) − P21 (s) · P12 (s)
and as before
[RGA]21 = [RGA]12 . (5.18)
How can we now use this matrix, to know if SISO control would be enough? As already
stated before, [RGA]ab ≈ 1 means SISO control is enough. Moreover, if the diagonal
terms differ substantially from 1, the MIMO interactions (also called cross couplings) are
too important and a SISO control is no more recommended.
If
RGA ≈ I (5.19)
evaluated at the relevant frequencies of the system, i.e. at ωc ± one decade, one can
ignore the cross couplings and can control the system with SISO tools one loop at time. If
this is not the case, one has to design a MIMO controller. A bunch of observations could
be useful by calculations:
1. Rows and columns of the RGA matrix add up to 1. This means one can write the
matrix as
   
[RGA]11 [RGA]12 [RGA]11 1 − [RGA]11
= . (5.20)
[RGA]21 [RGA]22 1 − [RGA]11 [RGA]11

This allows to calculate just one element of the matrix.

2. If one looks at RGA(s = 0) and the diagonal entries of the matrix are positive, SISO
control is possible.

3. The RGA of a triangular matrix P (s) is the identity matrix.

4. The RGA is invariant to scaling, i.e. for every diagonal matrix Di it holds

[RGA](P (s)) = [RGA](D1 · P (s) · D2 ). (5.21)

y1 P11 P12 u1

y2 u2
P21 P22

−C22

Figure 41: Derivation of the RGA-Matrix for the 2 × 2 case.

146
Gioele Zardini Control Systems II FS 2018

Example 42. For a MIMO system with two inputs and two outputs just the first element
of the RGA matrix is given. This is a function of a system parameter p and is given as
1
[RGA(s)]11 = . (5.22)
ps2 + 2ps + 1

(a) Find the other elements of the RGA matrix.

(b) For which values of p is the system for all frequencies ω ∈ [0, ∞) controllable with
two independent SISO control loops (one loop at the time)?

Now, you are given the following transfer function of another MIMO system:
 1 s+2 
P (s) = s s+11 . (5.23)
1 − s+1

(c) Find the RGA matrix of this MIMO system.

(c) Use the computed matrix to see if for frequencies in the range ω ∈ [3, 10] rad/s the
system is controllable with two separate SISO controllers.

147
Gioele Zardini Control Systems II FS 2018

Solution.

(a) Using the theory we learned, it holds


1
[RGA(s)]11 = [RGA(s)]22 = (5.24)
ps2 + 2ps + 1
and
[RGA(s)]12 = [RGA(s)]21
= 1 − [RGA(s)]11
1
=1− 2 (5.25)
ps + 2ps + 1
ps · (s + 2)
= 2 .
ps + 2ps + 1

(b) In order to use two independend SISO control loops, the diagonal elements of the
RGA matrix should be ≈ 1 and the anti diagonal elements should be ≈ 0. It’s easy
to see that this is the case for p = 0. In fact, if one sets p = 0 one gets

RGA(s) = I. (5.26)

Hence, independently of the frequency one has, i.e. ω ∈ [0, ∞), the control problem
can be solved with two independent SISO controllers.

(c) Using the learned theory, it holds

[RGA(s)]11 = [RGA(s)]22
P11 (s) · P22 (s)
=
P11 (s) · P22 (s) − P12 (s) · P21 (s)
1
− s·(s+1)
= 1
− s·(s+1) − s+2
s+1
(5.27)
1
=
1 + s · (s + 2)
1
= 2
s + 2s + 1
1
= .
(s + 1)2

and
[RGA(s)]12 = [RGA(s)]21
= 1 − [RGA(s)]11
1
=1− (5.28)
(s + 1)2
s · (s + 2)
= .
(s + 1)2

148
Gioele Zardini Control Systems II FS 2018

(d) In order to evaluate the RGA matrix in this range, we have to express it with it’s
frequency dependence, i.e. s = jω. For the magnitudes it holds

|[RGA(jω)]11 | = |[RGA(jω)]22 |
1
= (5.29)
|jω + 1|2
1
= .
1 + ω2
and
|[RGA(jω)]12 | = |[RGA(jω)]21 |
1
= · |jω| · |jω + 2|
|jω + 1|2 (5.30)

ω · 4 + ω2
= .
1 + ω2
We can know insert the two limit values of the given range and get

|[RGA(j · 3)]11 | = |[RGA(j · 3)]22 |


1
=
10
= 0.10.
(5.31)
|[RGA(j · 3)]12 | = |[RGA(j · 3)]21 |

3 · 13
=
10
≈ 1.08.

and
|[RGA(j · 10)]11 | = |[RGA(j · 10)]22 |
1
=
101
= 0.01.
(5.32)
|[RGA(j · 10)]12 | = |[RGA(j · 10)]21 |

10 · 104
=
101
≈ 1.01.

In both cases the diagonal elements are close to 0 and the antidiagonal elements are
close to 1. This means that the system is diagonal dominant and SISO control
one loop at time is permitted. We just need to pay attention to what should be
controlled: since the antidiagonal elements are close to 1, we need to use u1 for y2
and u2 for y1 .

149
Gioele Zardini Control Systems II FS 2018

Example 43. Figure 42 shows a 2 × 2 MIMO system. Sadly, we don’t know anything
about the transfer functions Pij (s) but

P12 (s) = 0. (5.33)

Your boss wants you to use a one loop at the time approach as you see in the picture.

(a) Why is your boss’ suggestion correct?

(b) Just a reference ri is affecting both outputs yi , which one?

(c) Compute the transfer function ri → yj for i 6= j?

Figure 42: Structure of MIMO system.

150
Gioele Zardini Control Systems II FS 2018

Solution.
(a) To check if the suggestion is correct let’s have a look at the RGA matrix: it holds

[RGA]11 = [RGA]22
P11 (s) · P22 (s)
=
P11 (s) · P22 (s) − P12 (s) · P21 (s)
= 1. (5.34)
[RGA]12 = [RGA]21
= 1 − [RGA]11
= 0.

since P12 (s) = 0. This means that the RGA matrix is identical to the identity
matrix, resulting in a perfect diagonal dominant system, which can be controlled
with the one loop at the time approach.
(b) Let’s analyze the signals from Figure 42. Since P12 (s) = 0, the output y1 is not
affected from u2 . Moreover, this means that the reference signal r2 , which influences
u2 , cannot affect the output y1 . The only reference that acts on both y1 and y2 is
r1 : directly through C1 (s) on y1 and with crosscouplings through P21 (s) on y2 .
(c) As usual we set to 0 the reference values we don’t analyze: here r2 = 0. Starting
from the general equation in frequency domain
     
Y1 (s) P11 (s) P12 (s) U1 (s)
= ·
Y2 (s) P21 (s) P22 (s) U2 (s)
    (5.35)
P11 (s) 0 U1 (s)
= · .
P21 (s) P22 (s) U2 (s)
one can read
Y1 (s) = P11 (s) · U1 (s)
(5.36)
Y2 (s) = P21 (s) · U1 (s) + P22 (s) · U2 (s).
Since we want to relate r1 and y2 let’s express u1 as something we know.
Using Figure 42 one gets
R1 (s) · C1 (s) = U1 (s) + P11 (s) · C1 (s) · U1 (s)
R1 (s) · C1 (s) (5.37)
U1 = .
1 + P11 (s) · C1 (s)
Inserting this into the second equation one gets
R1 (s) · C1 (s)
Y2 (s) = P21 (s) · + P22 (s) · U2 (s). (5.38)
1 + P11 (s) · C1 (s)
One have to find an expression for U2 (s). To do that, we look at the second loop in
Figure 42 an see
R2 (s) ·C2 (s) − Y2 (s) · C2 (s) = U2 (s)
| {z }
=0 (5.39)
U2 (s) = −Y2 (s) · C2 (s).

151
Gioele Zardini Control Systems II FS 2018

Inserting this into the second equation one gets

R1 (s) · C1 (s)
Y2 (s) = P21 (s) · + P22 (s) · U2 (s)
1 + P11 (s) · C1 (s)
R1 (s) · C1 (s)
= P21 (s) · + P22 (s) · (−Y2 (s) · C2 (s))
1 + P11 (s) · C1 (s)
R1 (s) · C1 (s)
Y2 (s) · (1 + P22 (s) · C2 (s)) = P21 (s) ·
1 + P11 (s) · C1 (s)
P21 (s) · C1 (s)
Y2 (s) = ·R1 (s).
(1 + P11 (s) · C1 (s)) · (1 + P22 (s) · C2 (s))
| {z }
F (s)
(5.40)

where F (s) is the transfer function we wanted.

152
Gioele Zardini Control Systems II FS 2018

Example 44. Figure 43 shows the structure of a MIMO system, composed of three
subsystems P1 (s), P2 (s) and P3 (s). It has inputs u1 and u2 and outputs y1 and y2 . The

Figure 43: Structure of MIMO system.

three subsystems are given as


s−5
!
s+3 1 s+4
 s+2 1

P1 (s) = 1
, P2 (s) = s+3 s−5 , P3 (s) = s+5 s+1 . (5.41)
s+4

Compute the transfer function of the whole system.

153
Gioele Zardini Control Systems II FS 2018

Solution. One should think with matrix dimensions here. Let’s redefine the subsystem’s
matrices more generally:
 11 
P1 11
 
P1 (s) = 21 , P2 (s) = P2 P212 , P3 (s) = P311 P312 (5.42)
P1

Together with the structure of the system one gets

Y1 = P211 · U1 + P212 · P111 · U1 ,


(5.43)
Y2 = P311 · P121 · U1 + P312 · U2 .

This can be written in the general matrix for the transfer function:
 11 
P2 + P212 · P111 0
P (s) =
P311 · P121 P312
1 s+4 s−5
!
s+3
+ ·
s−5 s+3
0
= s+2 (5.44)
· 1
s+5 s+4
1
s+1
s+5
!
s+3
0
= s+2 1
.
(s+5)·(s+4) s+1

154
Gioele Zardini Control Systems II FS 2018

Example 45. Figure 44 shows the structure of a MIMO system, composed of two sub-
systems P1 (s), P2 (s). It has inputs u1 and u2 and outputs y1 and y2 . The subsystem P1 (s)

Figure 44: Structure of MIMO system.

is given with its state space description:


       
−3 0 1 0 0 2 0 1
A1 = , B1 = , C1 = D1 = . (5.45)
2 1 2 1 3 1 0 0

and the subsystem P2 (s) is given as


 
1 s−1
P2 (s) = s−2 (s+4)·(s−2) . (5.46)

Compute the transfer function of the whole system.

155
Gioele Zardini Control Systems II FS 2018

Solution. First of all, we compute the transfer function in frequency domain of the first
subsystem P1 (s). It holds

P1 (s) = C1 · (s · I − A1 )−1 · B1 + D1
   −1    
0 2 s+3 0 1 0 0 1
= · · +
3 1 −2 s − 1 2 1 0 0
       
0 2 1 s−1 0 1 0 0 1
= · · · +
3 1 (s + 3) · (s − 1) 2 s+3 2 1 0 0
     
1 0 2 s−1 0 0 1 (5.47)
= · · +
(s + 3) · (s − 1) 3 1 2s + 8 s + 3 0 0
 
1 4s + 16 (s + 1)(s + 3)
= ·
(s + 3) · (s − 1) 5s + 5 s+3
4s+16 s+1
!
(s+3)·(s−1) s−1
= 5s+5 1
.
(s+3)·(s−1) s−1

One should think with matrix dimensions here. Let’s redefine the subsystem’s matrices
more generally:  11 
P1 P112 11

P1 (s) = 21 22 , P2 (s) = P2 P212 . (5.48)
P1 P 1
Together with the structure of the system one gets

Y1 = P211 · U1 + P111 · P212 · U1 + P112 · P212 · U2 ,


(5.49)
Y2 = P121 · U1 + P122 · U2 .

This can be written in the general matrix for the transfer function:
 11 
P2 + P111 · P212 P112 · P212
P (s) =
P121 P122
= ... (5.50)
s+7 s+1
!
(s+3)·(s−2) (s+4)(s−2)
= 5s+5 1
.
(s+3)·(s−1) s−1

156
Gioele Zardini Control Systems II FS 2018

Example 46. The system in Figure 45 can be well controlled with two separate SISO
controllers.

Figure 45: Structure of MIMO system.

 True.

 False.

157
Gioele Zardini Control Systems II FS 2018

Solution.
3 True.


 False.

Explanation:
One can observe that the input u2 affects only the output y2 . This means that the transfer
function matrix has a triangular form and hence, that the RGA matrix is identical to
the identity matrix: this means that we can reach good control with two separate SISO
controllers.

5.1.3 Q Parametrization
Recalling the standard control Loop repicted in Figure 46, one can write

d(t) n(t)
r(t) e(t) u(t) v(t) η(t) y(t)
F (s) = I C(s) P (s)

Figure 46: Standard feedback control system structure.

(I + P (s)C(s))−1 P (s)C(s) (I + P (s)C(s))−1 P (s)


    
Y (s) R(s)
=
U (s) (I − C(s)P (s))−1 C(s) −(I + C(s)P (s))−1 C(s)P (s) D(s)
   (5.51)
TO (s) SO (s)P (s) R(s)
= .
SI (s)C(s) −TI (s) D(s)

In order for the system to be internally stable, TO (s), SO (s)P (s), SI (s)C(s) and TI (s)
must be stable. Ideally, we would like to translate this properties on direct consequences
for C(s). However, relations are not linear and it is not obvious how to find direct
translations. One defined

Q(s) = C(s)(I + P (s)C(s))−1 . (5.52)

Then, one can write

(I + P (s)C(s))−1 = (I + P (s)C(s) − P (s)C(s))(I + P (s)C(s))−1


= (I + P (s)C(s))(I + P (s)C(s))−1 − P (s)C(s)(I + P (s)C(s))−1
= I − P (s)Q(s),
(5.53)

158
Gioele Zardini Control Systems II FS 2018

and
(I + C(s)P (s))−1 = (I + C(s)P (s) − C(s)P (s))(I + C(s)P (s))−1
= (I + C(s)P (s))(I + C(s)P (s))−1 − C(s)P (s)(I + C(s)P (s))−1
= I − Q(s)P (s).
(5.54)
It follows
TO (s) = (I + P (s)C(s))−1 P (s)C(s)
= P (s)C(s)(I + P (s)C(s))
= P (s)Q(s).
(5.55)
SO (s)P (s) = (I − P (s)Q(s))P (s)
SI (s)C(s) = Q(s)
TI (s) = Q(s)P (s).
Theorem 8. Q internal stability: Let P (s) be a stable plant of a negative feedback
system, then the closed loop system is internally stable if and only if Q(s) is stable.
This makes the tuning of the controller extremely easier: the sensitivity functions depend
linearly on Q. Moreover, it holds:
• Supposing that the plant is stable: Q(s) can be any transfer matrix that satisfies
the definition.
• If only proper controllers are taken into account, then Q(s) must be proper.
• Finding a Q(s) is equivalent to finding the controller C(s).
• As long as Q(s) is stable, it can vary freely and internal stability will be guaranteed.
Even if Q(s) maps to an unstable controller C(s).
• Starting from the formula for Q(s), one can write
C(s) = (I − Q(s)P (s))−1 Q(s) = Q(s)(I − P (s)Q(s))−1 . (5.56)

5.2 Internal Model Control (IMC)


5.2.1 Principle
Principle:Accurate control can be achieved only if the control system encapsulates some
representation of the controlled process.
Approach: We feedback only the mismatch between the model prediction and the actual
measured output, i.e. the uncertainty in the control loop.

Connection with Q Parametrization


The control system structure for IMC is depicted in Figure 47. P (s) denotes the plant of
the system and P0 (s) the plant model. The measurement y(t) is corrupted by a measure-
ment noise n(t). The signal y0 (t) represents the predicted output. The signal i represents
the signal mismatch between the measured and the predicted outputs. The controller
Q(s) (please refer to the previous section for its form) produces the input u(t). Relating
this structure with the classic one, one can write:
C(s) = Q(s) (I(s) − P0 (s)Q(s))−1 . (5.57)

159
Gioele Zardini Control Systems II FS 2018

Remark. Note that the controller C(s) can be defined with the orange region in Figure
47.

Analysis
By trying to relate the output signal y(t) to the other signals available in the loop, one
can write
Y (s) = N (s) + η(s)
= N (s) + P (s)V (s)
= N (s) + P (s) (D(s) + U (s))
= N (s) + P (s)D(s) + P (s)Q(s)E(s)
= N (s) + P (s)D(s) + P (s)Q(s) (R(s) − I(s))
= N (s) + P (s)D(s) + P (s)Q(s)R(s) − P (s)Q(s) (Y (s) − Y0 (s))
= N (s) + P (s)D(s) + P (s)Q(s)R(s) − P (s)Q(s)Y (s) + P (s)Q(s)P0 (s)U (s)
= N (s) + P (s)D(s) + P (s)Q(s)R(s) − P (s)Q(s) (N (s) + P (s)V (s)) + P (s)Q(s)P0 (s)U (s)
= (I − P Q) N + P (I − QP ) D + P Q R + P Q (P − P0 ) U,
| {z } | {z } |{z}
SO (s) P SI (s) TO (s)
(5.58)

where in the last line we dropped the s dependency for simplicity reasons. In the case
where P (s) = P0 (s) and Q(s) = P −1 (s), one gets

Y (s) = R(s). (5.59)

d(t) n(t)
r(t) e(t) u(t) v(t) η(t) y(t)
F (s) = I Q(s) P (s)

y0 (t) −
P0 (s)

C(s) i

Figure 47: Internal Model Control System Structure.

5.2.2 Example: Predictive Control


Why predictive control
If a SISO system has substantial delays, it is very difficult to control it with a normal PID
controller. The I part of the controller causes impatience, that is, integrates over time.
As a practical example think of taking a shower in the morning: one let the water flow
and of course this hasn’t the desired temperature. For this reason one chooses warmer

160
Gioele Zardini Control Systems II FS 2018

water by turning the temperature controller; the water becomes too hot and so one turns
it on the other side to have colder water and so on, resulting in a non optimal strategy.
Moreover, the D part of the controller is practically useless7 . What does the expression
substantial delays mean? As indicative range one can say that it is worth using predictive
control if
T
> 0.3, (5.60)
T +τ
where T is the delay and τ is the time constant of the system. Other prerequisites are

• The plant must be asymptotically stable.

• A good model of the plant should be available.

The Smith Predictor


One can see the two equivalent structures of the Smith Predictor in Figure 48 and Figure
49.
w
r u yr y
Q(s) Pr (s) e−T ·s

ŷr
P̂r (s) e−T̂ ·s −


Figure 48: Structure of the Smith predictor.

r e u y

Q(s) Pr (s) e−s·T

P̂r (s)

e−s·T̂

Figure 49: Structure of the Smith predictor.

If the system has big delays, one can assume that it is possible to write the delay element
and the nondelayed plant as a product in the frequency domain: that’s what is done in
7
Taking the derivative of a delay element doesn’t help to control it

161
Gioele Zardini Control Systems II FS 2018

the upper right side of Figure 48. This means that the transfer function u(t) → y(t) can
be written as
P (s) = Pr (s) · e−sT . (5.61)

Main Idea:
As long as we have no disturbance d(t) (i.e. d(t) = 0) and our model is good enough
(this means Pr (s) = P̂r (s), T = T̂ )8 , we can model a non delayed plant and get the non
delayed output ŷr (t) (one can see this on the lower right side of Figure 48). The feedback
signal results from the sum of ŷr (t) and the correction signal .

Analysis
The controller of the system is the transfer function e(t) → u(t) , which can be computed
as
  
U (s) = Q(s) E(s) − P̂r (s) U (s) − U (s)e−sT̂
  (5.62)
= Q(s)E(s) − Q(s)P̂r (s) 1 − e−sT̂ U (s),

from which it follows


U (s)
C(s) =
E(s)
Q(s) (5.63)
=  .
1 + Q(s)P̂r (s) 1 − e−sT̂

This means that the loop gain transfer function is

L(s) = P (s) · C(s)


Q(s)Pr (s)e−sT (5.64)
=  .
1 + Q(s)P̂r (s) 1 − e−sT̂

If one assumes as stated, that the model is good enough s.t. Pr (s) = P̂r (s), T = T̂ , one
gets

L(s)
T (s) =
1 + L(s)
Q(s)·Pr (s)·e−s·T
1+Q(s)·Pr (s)·(1−e−s·T )
= Q(s)·Pr (s)·e−s·T
1 + 1+Q(s)·P r (s)·(1−e
−s·T )

Q(s) · Pr (s) · e−s·T (5.65)


=
1 + Q(s) · Pr (s) · (1 − e−s·T ) + Q(s) · Pr (s) · e−s·T
Q(s) · Pr (s)
= · e−s·T
1 + Q(s) · Pr (s)
= Tref (s) · e−s·T .
8
We use . ˆ. . to identify the parameters of the model

162
Gioele Zardini Control Systems II FS 2018

Remark.

• This result is very important: we have shown that the delay cannot be completely
eliminated and that every tranfer function (here T (s) but also S(s)) will have the
same delay as the plant P (s).

• Advantages of the prediction are:

– Very fast.
– Same Robustness.

• Disadvantages of the prediction are:

– Very difficult to implement.


– Very difficult to analyze.
– Problems if there are model errors.

163
Gioele Zardini Control Systems II FS 2018

5.3 Examples
Example 47. Consider the control problem depicted in Figure 50, designed for the stable,
linear, SISO system P0 (s). Let’s consider w(t) to be the vector which describes the inputs
to the system (disturbances and reference), z(t) to be the vector which describes all the
interesting signals for control, u(t) to be the control signal from C(s) and y(t) to be signal
used by the controller.

d(t) n(t)

y(t) u(t) v(t) x(t)


C(s) P0 (s)

Figure 50: Control System Loop

w(t) z(t)
P (s)
u(t) y(t)

C(s)

Figure 51: Control System Loop Generalization

a) Choose    
d(t) x(t)
w(t) = , z(t) = . (5.66)
n(t) v(t)
Derive the transfer function P (s) such that the system can be rewritten in the form
depicted in Figure 51.
Hint: P (s) can be rewritten in terms of the contributions of its signals, i.e.
 
Pzw (s) Pzu (s)
P (s) = , (5.67)
Pyw (s) Pyu (s)
where Pij (s) is the transfer function between signal i and signal j.
b) We denote the closed-loop system (i.e. from w to z) H(s). Show that
H(s) = Pzw (s) + Pzu (s)C(s) (1 − Pyu (s)C(s))−1 Pyw (s). (5.68)

c) Determine H(s) for the system depicted in Figure 50 and rewrite it in terms of the
output sensitivity function SO (s) = (1 − Pyu (s)C(s))−1 and output complementary
sensitivity function TO (s) = −(1 − Pyu (s)C(s))−1 Pyu (s)C(s).
Hint: Use the formula you derived in b).

164
Gioele Zardini Control Systems II FS 2018

d) Rewrite H(s) using the Q parametrization Q = C(s) (1 − Pyu (s)C(s))−1 .


1 1
e) Given P0 (s) = s+1
and Q(s) = s+10
, the closed-loop system is internally stable.

 True.
 False.

165
Gioele Zardini Control Systems II FS 2018

Solution.
a) As a first step, one needs to identify the dimensions of the transfer functions in P (s)
(from the hint):

• Pzw (s): it holds


Z(s) = Pzw (s)W (s). (5.69)
Since Z(s) ∈ C2×1 and W (s) ∈ C2×1 , matrix Pzw (s) ∈ C2×2 . Moreover, one
can write the signal dependencies as
 
Pxd (s) Pxn (s)
Pzw (s) = . (5.70)
Pvd (s) Pvn (s)

• Pzu (s): it holds


Z(s) = Pzu (s)U (s). (5.71)
Since Z(s) ∈ C2×1 and U (s) ∈ C1×1 , matrix Pzu (s) ∈ C2×1 . Moreover, one can
write the signal dependencies as
 
Pxu (s)
Pzu (s) = . (5.72)
Pvu (s)

• Pyw (s): it holds


Y (s) = Pyu (s)W (s). (5.73)
Since Y (s) ∈ C1×1 and W (s) ∈ C2×1 , matrix Pyw (s) ∈ C1×2 . Moreover, one
can write the signal dependencies as

Pyw (s) = Pyd (s) Pyn (s) . (5.74)

• Pyu (s): it holds


Y (s) = Pyu (s)U (s). (5.75)
Since Y (s) ∈ C1×1 and U (s) ∈ C1×1 , matrix Pyu (s) ∈ C1×1 . Moreover, one can
write the signal dependencies as

Pyu (s) = Pyu (s). (5.76)

Referring to the given flow diagram, one can write the relations for V (s):

V (s) = D(s) + U (s). (5.77)

From Equation 5.77 one recovers

Pvd (s) = 1, Pvu (s) = 1, Pvn (s) = 0. (5.78)

The relations for X(s) are:

X(s) = P0 (s)V (s)


(5.79)
= P0 (s)D(s) + P0 (s)U (s).

From Equation 9.92 one deduces

Pxd (s) = P0 (s), Pxu (s) = P0 (s), Pxn (s) = 0. (5.80)

166
Gioele Zardini Control Systems II FS 2018

The relations for Y (s) are:

Y (s) = N (s) + X(s)


= N (s) + P0 (s)V (s) (5.81)
= N (s) + P0 (s)D(s) + P0 (s)U (s).

From Equation 5.81 one recovers

Pyd (s) = P0 (s), Pyu (s) = P0 (s), Pyn (s) = 1. (5.82)

Putting everything together, one gets


   
P0 (s) 0 P0 (s) 
Pzw (s) = , Pzu (s) = , Pyw (s) = P0 (s) 1 , Pyu (s) = P0 (s),
1 0 1
(5.83)
and hence  
P0 (s) 0 P0 (s)
P (s) =  1 0 1 . (5.84)
P0 (s) 1 P0 (s)

b) It holds

U (s) = C(s)Y (s), (5.85)

and
Y (s) = Pyu (s)U (s) + Pyw (s)W (s)
= Pyu (s)C(s)Y (s) + Pyw (s)W (s) (5.86)
⇒ Y (s) = (1 − Pyu (s)C(s))−1 Pyw (s)W (s).

With these two informations, we compute the relation between Z(s) and W (s):

Z(s) = Pzw (s)W (s) + Pzu (s)U (s)


= Pzw (s)W (s) + Pzu (s)C(s)Y (s)
(5.87)
= Pzw (s) + Pzu (s)C(s) (1 − Pyu (s)C(s))−1 Pyw (s) W (s).

| {z }
H(s)

c) Using the formula derived in b), we get

H(s) = Pzw (s) + Pzu (s)C(s) (1 − Pyu (s)C(s))−1 Pyw (s)


   
P0 (s) 0 P0 (s)
C(s) (1 − P0 (s)C(s))−1 P0 (s) 1

= +
1 0 1
   
P0 (s) 0 C P0 (s)2 P0 (s)
= +
1 0 1 − P0 (s)C(s) P0 (s) 1 (5.88)
 
1 P0 (s) P0 (s)C(s)
=
1 − P0 (s)C(s) 1 C(s)
 
P0 (s)SO (s) −TO (s)
= .
SO (s) C(s)SO (s)

167
Gioele Zardini Control Systems II FS 2018

d) Using the formula derived in b) and plugging in Q one gets

H(s) = Pzw (s) + Pzu (s)C(s) (1 − Pyu (s)C(s))−1 Pyw (s)


= Pzw (s) + Pzu (s)Q(s)Pyw (s)
   
P0 (s) 0 P0 (s)2 P0 (s) (5.89)
= + Q(s)
1 0 P0 (s) 1
 
P0 (s) + P0 (s)2 Q(s) P0 (s)Q(s)
= .
1 + P0 (s)Q(s) Q(s)

Note that each element of H(s) is linear in Q(s). This simplifies the tuning of the
controller.

e)

3 True.

 False.

Solution: Since Q(s) and P0 (s) are both asymptotically stable, the closed-loop
system is internally stable.
Remark. Theorem (Q internal stability): let P (s) be a stable plant of a negative
feedback system, then the closed loop system is internally stable if and only if Q(s)
is stable.

168
Gioele Zardini Control Systems II FS 2018

6 State Feedback
Motivation: Each control strategy we analyzed so far, was based on output feedback.
In fact, the main analysis has been based on the fact that system outputs are available
through measurements. Imagine now to have the states of the system available. Would the
control problem benefit from this new information? Intuitively, outputs are nothing else
than a linear combination of the states (one can always write this through the dynamics
of the system), hence contain less information.

6.1 Concept
The big difference to what we have seen so far, is that we are looking at a continuous
time control system, which operates in time domain and no more in frequency domain.
The basic state feedback control structure is depicted in Figure 52. The basic idea is: we

r(t) u(t) ẋ(t) = Ax(t) + Bu(t) y(t)


kr
− y(t) = Cx(t) + Du(t)
x(t)
K

Figure 52: Basic State Feedback Control Structure.

have the dynamics in the loop, with the input u(t) and the output y(t). We negatively
feedback the state x(t) with a controller K and add to a reference (or a multiple of it,
kr ). In words, we try to keep the state where we want it to be. Assuming for simplicity
D = 0 (the same analysis can be performed), one gets

u(t) = kr r(t) − Kx(t), (6.1)

from which it follows


ẋ(t) = Ax(t) + Bu(t)
= Ax(t) + Bkr r(t) − BKx(t)
(6.2)
= (A − BK) x(t) + Bkr r(t).
| {z }
Acl

We get a new closed loop matrix Acl , i.e., state feedback affects the poles of the closed
loop transfer function.

6.2 Reachability
A key property of a control system is reachability. Which set of points in the state space
of the system can be reached through the choice of a specific control input? Reachability
plays a central role in deciding if state feedback is a good strategy for the control of a
specific dynamic system. Let’s assume that a dynamic system of the form
d
x(t) = Ax(t) + Bu(t). (6.3)
dt

169
Gioele Zardini Control Systems II FS 2018

is given, where x ∈ Rn , u ∈ R, A ∈ Rn×n , B ∈ Rn . We deine the reachable set as the set


of all points xf such that there exists an input u(t) with 0 ≤ t ≤ T that steers the system
from x(0) = x0 to x(T ) = xf .

Definition 18. A linear system is reachable if for any x0 , xf ∈ Rn , there exists a T > 0
and u(t) : [0, T ] → R such that the corresponding solution satisfies x(0) = x0 and x(T ) =
xf ∈ Rn .

Reachability test: A system (A, B) is reachable if and only if rank(R) = n, where


x ∈ Rn and 
R = B AB . . . An−1 B . (6.4)

6.2.1 Reachable Canonical Form


Given a transfer function matrix
B0 + B1 s + . . . + Bn−1 sn−1
P (s) = , (6.5)
|{z} a0 + a1 s + . . . + an−1 sn−1 + sn
∈Rl×m

one wants to find the system matrices

(A, B, C, 0) , (6.6)

such that
P (s) = C(sI − A)−1 B. (6.7)
A possible solution is the reachable canonical form:
   
0m Im 0m ... 0m 0m
 0m 0m Im ... 0m  0m 
   
 .. . .
A= . . . . . , B =  ... 
  
. 0 m . .
    (6.8)
 0m 0m 0m ... Im  0m 
−a0 Im −a1 Im −a2 Im . . . −an−1 Im Im

C = B0 B1 . . . Bn−2 Bn−1 ,

where 0m is the m × m zero matrix and Im is the m × m identity matrix.


Remark. Note that this is the result of a possible change of coordinates. This solution is
not unique.

Theorem 9. A system in the reachable canonical form is always reachable.

Theorem 10. Let A and B be the dynamics of a reachable system. Then there exists
a transformation z(t) = T x(t) such that in the transformed coordinates the dynamics
matrices are in reachable canonical form and the characteristic polynomial for A is given
by
det(sI − A) = sn + a1 sn−1 + . . . + an−1 s + an . (6.9)

170
Gioele Zardini Control Systems II FS 2018

Example 48. You are given the system


   
dx(t) α ω 0
= x(t) + u(t). (6.10)
dt −ω α 1
| {z } |{z}
A B

The aim of the exercise, is to find the reachable canonical form of the system.

a) Which structure should the reachable canonical form have?

b) Compute the unknowns in the form you found in a).

c) Compute the reachability matrix for the original system.

d) Compute the reachability matrix for the general form you found in a).

e) Find a transformation z(t) = T x(t) which brings the original system in the reachable
form.

171
Gioele Zardini Control Systems II FS 2018

Solution.

a) We wish to find a transformation which converts the system into the reachable
canonical form:    
0 1 0
à = , B̃ = . (6.11)
−a1 −a2 1

b) A and à must have the same eigenvalues (they describe the same system). It holds

 
α−λ ω
det(A − λI) = det
−ω α − λ
= α2 − 2λα + λ2 + ω 2
  (6.12)
−λ 1
det(Ã − λI) = det
−a1 −a2 − λ
= λ2 + λa2 + a1 .

The comparison of the two reveals

a1 = α 2 + ω 2
(6.13)
a2 = −2α

c) The reachability matrix for the original system can be computed as follows:
 
0
B= ,
1
     (6.14)
α ω 0 ω
AB = = .
−ω α 1 α

This implies  
0 ω
R= . (6.15)
1 α

d) The reachability matrix for the general form can be computed as follows:
 
0
B̃ = ,
1
     (6.16)
0 1 0 1
ÃB̃ = = .
−a1 −a2 1 −a2

This implies  
0 1
R̃ = . (6.17)
1 −a2

e) Such a transformation results in ẋ = T −1 ż, which implies

ẋ(t) = T −1 ż(t) = AT −1 z(t) + Bu(t)


−1 (6.18)
⇒ ż(t) = T
| AT
{z } z(t) + |{z}
T B u(t).
à B̃

172
Gioele Zardini Control Systems II FS 2018

Since R contains
B̃ = T B
(6.19)
ÃB̃ = T AT −1 T B = T AB,

one can rewrite the reachability matrix as



R̃ = T B T AB

= T B AB (6.20)
= T R,

which leads to
T = R̃R−1
  
1 0 1 α −ω
=−
ω 1 −a2 −1 0
(6.21)
 
1 1 0
=
ω −α − a 2 ω
1 
0
= ωα .
ω
1

173
Gioele Zardini Control Systems II FS 2018

6.3 Pole Placement


The system dynamics are determined by the poles of the closed loop transfer function.
Recalling what we have seen so far, we write the plant transfer function as

P (s) = C(sI − A)−1 B, (6.22)

the characteristic polynomial as

p(s) = det(sI − A), p(πi ) = 0, (6.23)

where πi are the poles of the system.


Problem definition: We want to find through state feedback a controller K such that
the closed loop system has a desired characteristic polynomial

p∗cl (s) = det(sI − Acl ) (6.24)

But is it always possible to find a solution for the pole placement problem?

Theorem 11. The problem of pole placement has a solution if and only if the system is
reachable.

6.3.1 Direct Method


The direct method consists in introducing a matrix K with the correct dimensions and
forcing the eigenvalues of the closed loop system matrix A − BK to be the desired ones.

6.3.2 Ackermann Formula


Placing poles by hand is tedious and tricky if the state space dimension grows. The
Ackermann’s formula provides a one step procedure for calculating the controller K. It
holds
K = 0 . . . 0 1 R−1 p∗cl (A) = γp∗cl (A),

(6.25)
where

• R is the reachability matrix. Note that this must be invertible (hence, the system
is reachable).

• γ is the last row of the inverse of R.

• p∗cl (A) is the desired closed loop characteristic polynomial evaluated at s = A.

Check in the Problem Set for the derivation of the Ackermann’s formula.

174
Gioele Zardini Control Systems II FS 2018

Example 49. Your task is to keep a space shuttle in its trajectory . The deviations from
the desired trajectory are well described by

ẋ(t) = Ax(t) + bu(t)


(6.26)
   
0 1 0
= · x(t) + · u(t),
0 0 1

where x1 (t) is the position of the space shuttle and x2 (t) is its velocity. Moreover, u(t)
represents the propulsion. The position and the velocity are known for every moment and
for this reason we can use a state-feedback regulator. You want to find a state feedback
controller using pole placement. The specifications for the system are

• The system should not overshoot.

• The error of the regulator should decrease with e−3·t .

a) Find the poles such that the specifications are met.

b) Find the new state feedback matrix K2 .

c) Use the Ackermann formula to get the same result.

175
Gioele Zardini Control Systems II FS 2018

Solution

a) Overshoot or in general oscillations, are due to the complex part of some poles. The
real part of these poles is given by the decrease function of the error. Since the
system must have two poles (A is 2 × 2), it holds

π1 = π2 = −3. (6.27)

b) The closed loop has feedback matrix

A − B · K. (6.28)

We have to choose K such that the eigenvalues of the state feedback matrix are
both −3. The dimensions of K must make the matrix multiplication with B and
the subtraction with A feasible. It holds K ∈ C1×2 . It holds
   
0 1 0 
A−B·K = − · k1 k2
0 0 1
   
0 1 0 0
= − (6.29)
0 0 k1 k2
 
0 1
= .
−k1 −k2

The eigenvalues of this matrix are


p
−k2 ± k22 − 4k1
π1,2 = . (6.30)
2
Since the two eigenvalues should have the same value, we know the the part under
the square rooth has to vanish. This means that − k22 = −3 ⇒ k2 = 6. Moreover:

k22
k1 =
4 (6.31)
= 9.

The matrix finally reads 


K2 = 9 6 . (6.32)

c) The Ackermann formula for this problem reads

K = 0 1 R−1 p∗cl (A),



(6.33)

where R is the system reachability matrix and p∗cl (A) is the desired closed loop
characteristic polynomial evaluated at s = A. For our system it holds
 
0
B=
1
    
0 1 0 1
AB = = (6.34)
0 0 1 0
 
−1 0 1
⇒R=R = .
1 0

176
Gioele Zardini Control Systems II FS 2018

With the given poles, it holds

p∗cl (s) = (s + 3)2


p∗cl (A) = (A + 3I)2
  
3 1 3 1 (6.35)
=
0 3 0 3
 
9 6
= .
0 9

Putting everything together in Equation 6.33, one gets


  
 0 1 9 6
K= 0 1
1 0 0 9 (6.36)

= 9 6 ,

which confirms our previous result.

177
Gioele Zardini Control Systems II FS 2018

6.4 LQR
6.4.1 Motivation
In previous sections, we introduced the concept of state feedback, with the idea of using
the states of the system and its dynamics to synthesize a controller using desired pole-
placement. This week we introduce Linear Quadratic control (LQ) and the special case
of the Linear Quadratic Regulator (LQR). The concept behind this control strategy has
a key role for control theory and is worth a detailed explanation.
Moreover, pole placement has some drawbacks:
• Does not work well with model uncertainty.
• Does not allow specific tuning of desired trade-offs (e.g. cost vs. performance).

6.4.2 Problem Definition


Given the dynamics of a system
d
x(t) = A · x(t) + B · u(t), A ∈ Rn×n , B ∈ Rn×m , x(t) ∈ Rn , u(t) ∈ Rm (6.37)
dt
find a controller
u(t) = f (x(t), t), t ∈ [0, ∞] (6.38)
that brings x(t) asymptotically to zero (with x(0) 6= 0). In other words it should hold:
lim x(t) = 0, (6.39)
t→∞

i.e. the cost Z ∞


JLQR (x(t), u(t)) = kz(t)k22 + ρku(t)k22 dt, ρ ∈ R+ (6.40)
0
is minimized, where ρ allows trade-off between energy of the input and energy of the
controlled signal and z(t) = Ex(t) + F u(t) can be chosen to contain the state variables
of interest. The LQR standard control loop is reported in Figure 53.

z(t) ∈ Rk
r(t) e(t) u(t) ẋ(t) = Ax(t) + Bu(t)
KLQR
- y(t) = Cx(t) + Du(t)
x(t)

Figure 53: LQR Problem: Closed loop system.

6.4.3 General Form


Preliminary Definitions
Since the cost function introduced in Equation 6.40 contains the euclidean norm, one
recalls its definition:
ku(t)k22 = u(t)| u(t)
(6.41)
X
= ui (t)2 .
i

178
Gioele Zardini Control Systems II FS 2018

The weighted euclidean norm reads

ku(t)k2R,2 = u(t)| Ru(t). (6.42)

Definition 19. A pair (A, C) is observable if and only if rank(O) = n = dim(A), where
 
C
 CA 
O =  ..  . (6.43)
 
 . 
CAn−1

Definition 20. A pair (A, C) is detectable if all the unobservable modes are stable.

Definition 21. A real matrix M is said to be positive definite (denoted as M > 0)


when the associated quadratic form V (z) is positive, i.e.
X
V (z) = mi,j zi zj = z | M z > 00, ∀z 6= 0. (6.44)
i,j

Definition 22. A real matrix M is said to be positive semi-definite (denoted as M ≥ 0)


when the associated quadratic form V (z) is non-negative, i.e.
X
V (z) = mi,j zi zj = z | M z ≥ 0, ∀z 6= 0. (6.45)
i,j

How can we check if a matrix is positive definite?

Eigenvalue Test
A real, symmetric matrix is (semi)-positive definite if and only if it has all (non-negative)
positive eigenvalues.

Sylvester’s Criterion
An symmetric matrix M ∈ Rm×m is positive definite if and only if all the upper-left i × i
submatrices (principal minors), i ∈ 1, . . . , m have positive determinant. In the case of
 
a b
A= , (6.46)
b d

the conditions are

1. a > 0.

2. ad − b2 > 0.

179
Gioele Zardini Control Systems II FS 2018

6.4.4 Weighted LQR


With these definitions, one can write (by dropping the time dependency for simplicity)
the weighted LQR problem as
Z ∞
JLQR (x(t), u(t)) = kz(t)k2Q̄,2 + ρku(t)k2R̄,2 dt
Z0 ∞
= z | Q̄z + ρu| R̄udt
0
Z ∞
= (Ex + F u)| Q̄ (Ex + F u) + ρu| R̄udt
Z0 ∞
(x| E | + u| F | ) Q̄Ex + Q̄F u + ρu| R̄udt

=
Z0 ∞
= x| E | Q̄Ex + x| E | Q̄F u + u| F | Q̄Ex + u| F | Q̄F u + ρu| R̄udt
Z0 ∞
x| E | Q̄E x + u| F | Q̄F + ρR̄ u + 2x| E | Q̄F udt
  
=
Z0 ∞
= x| Qx + u| Ru + 2x| N udt,
0
(6.47)

where Q̄, R̄ are symmetric and positive definite, ρ ∈ R+ , u(t) ∈ Rm×1 , z(t) ∈ Rk×1 ,
x ∈ Rn×1 and
R = F | Q̄F + ρR̄, R ∈ Rm×m
Q = E | Q̄E, Q ∈ Rn×n (6.48)
N = E | Q̄F.

Example 50. You are given the criterion


Z ∞
x21 + 6 · x1 · x2 + 100 · x22 + 6 · u21 + 10 · u22 dt.

J(x, u) = (6.49)
0

Matrix N is the zero matrix and matrices Q and R are


 
1 3
Q= (6.50)
3 100

and  
6 0
R= . (6.51)
0 10

6.4.5 Solution
If

• The system (A, B) is stabilizable (all unstable modes are reachable). Intuition: this
is necessary for state feedback to work (stabilizable means that the unstable modes
must be controllable). Controllability is the same as reachability for continuous
time systems).

180
Gioele Zardini Control Systems II FS 2018

• the pair (Ã, Q̃) = (A − BR−1 N | , Q − N R−1 N | ) is detectable. Intuition: this is


necessary to ensure internal stability, i.e. that the closed loop is asymptotically
stable. If the system input were known a priori to stabilize the closed loop, then
this condition would not be necessary. For now, just remember that internal stabil-
ity corresponds to input-output stability when no unstable zero/pole cancellations
occur.

then
uLQR (t) = − R−1 (N + P B)| x(t), (6.52)
| {z }
KLQR

where P is the real, symmetric, positive definite solution of the algebraic Riccati
equation
(N + P · B) · R−1 · (N | + B | · P ) − P · A − A| · P − Q = 0. (6.53)

Solving the ARE


Hamiltonian Method
Starting from Equation 6.53, one can rearrange as

(N + P · B) · R−1 · (N | + B | · P ) − P · A − A| · P − Q = 0
(N R−1 + P BR−1 )(N | + B | P ) − P · A − A| · P − Q = 0
N R−1 N | + N R−1 B | P + P BR−1 N | + P BR−1 B | P − P · A − A| · P − Q = 0
− (A − BR−| N | )| P − P (A − BR−1 N | ) +P (BR−1 B | ) P − (Q − N R−1 N | ) = 0.
| {z } | {z } | {z } | {z }
Ã| Ã R̃ Q̃
(6.54)

Hence, one gets the Riccati equation

Ã∗ P + P Ã| − P R̃P + Q̃ = 0, (6.55)

with the unknown quadratic matrix P . Note that this equation can be rewritten as
  
 Ã R̃ I
P −I ∗ = 0, (6.56)
−Q̃ −Ã P
| {z }
H∈R2n×2n

where H is the hamiltonian matrix. In order to find the solution of the ARE, we assume
two things:

1. H has no eigenvalues on the imaginary axis, i.e. it wil have n eigenvalues in the
LHP and n in the RHP. Let the subspace spanned by the eigenvectors associated
to the stable eigenvalues (i.e. in the LHP) be
 
X1
XH = Im , (6.57)
X2

where X1 , X2 ∈ Cn×n .

2. X1 is invertible.

181
Gioele Zardini Control Systems II FS 2018

If the two assumptions are met, one says that the Hamiltonian belongs to the domain of
the Riccati operator, i.e. H ∈ dom(Ric). With these two assumptions, the solution of the
ARE can be computed as
P = X2 X1−1 . (6.58)
Remark. The ARE has in general more than one solution, but only one is stabilizing, i.e.
it makes the closed loop asymptotically stable (see assumption before).

Theorem 12. H ∈ dom(Ric) if there exists symmetri matrices P, H ∈ Rn×n with stable
H such that    
X1 X1
H = H, (6.59)
X2 X2
where

a) P = X2 X1−1 is real and symmetric.

b) P satisfies the ARE.

c) The matrix à + R̃P is stable (all eigenvalues are in the open LHP).

6.4.6 Direct Method


The direct method consists in introducing a matrix P with the correct dimensions and
unknowns, apply the Riccati equation and solve the system of equations.

182
Gioele Zardini Control Systems II FS 2018

6.4.7 Examples
Example 51. You have to design a LQ regulator for a plant with 2 inputs, 3 outputs
and 6 state variables.

(a) What are the dimensions of A, B, C and D?

(b) What is the dimension of the transfer function u → y?

(c) What is the dimension of the matrix Q of JLQR ?

(d) What is the dimension of the matrix R of JLQR ?

(e) What is the dimension of the matrix K?

183
Gioele Zardini Control Systems II FS 2018

Solution.

(a) One can find the solution by analyzing the meaning of the matrices:

• Since we are given 6 states variables, the matrix A should have 6 rows and 6
columns, i.e. A ∈ R6×6 .
• Since we are given 2 inputs, the matrix B should have 2 columns and 6 rows,
i.e. B ∈ R6×2 .
• Since we are given 3 outputs, the matrix C should have 6 columns and 3 rows,
i.e. C ∈ R3×6 .
• Since we are given 2 inputs and 3 outputs, the matrix D should have 2 columns
and 3 rows, i.e. D ∈ R3×2 .

(b) Since we are dealing with a system with 2 inputs and 3 outputs, P (s) ∈ R3×2 .
Moreover, P (s) should have the same dimensions of D because of its formula.

(c) From the formulation of Q one can easily see that its dimensions are the same of
the dimensions of A, i.e. Q ∈ R6×6 .

(d) From
u(t) = −K · x(t).
we can see that K should have 6 columns and 2 rows, i.e. K ∈ R2×6 .

184
Gioele Zardini Control Systems II FS 2018

Example 52. A system is given as

ẋ1 (t) = 3 · x2 (t)


1
ẋ2 (t) = 3 · x1 (t) − 2 · x2 (t) + · u(t) (6.60)
2
7
y(t) = 4 · x1 (t) + · x2 (t).
3
a) Solve the LQR problem for the criterion
Z ∞
1
J(x(t), u(t)) = 7 · x1 (t)2 + 3 · x2 (t)2 + · u(t)2 dt (6.61)
0 4
and find the state feedback controller K using the direct method.

b) Solve a) using the Hamiltonian method.

c) Find the eigenvalues of the closed-loop system with the LQ regulator K.

d) Does the new criterion


Z ∞
10 2
Jnew = 70 · x21 + 30 · x22 + · u dt (6.62)
0 4
affect the solution for K?

185
Gioele Zardini Control Systems II FS 2018

Solution.
a) Using quadratic forms, one can identify Q̃ and R̃ to be (using the null matrix for
N)  
7 0
Q= (6.63)
0 3
and
1
R= . (6.64)
4
The state-space description of the system can be re-written in standard form as
      
ẋ1 (t) 0 3 x1 (t) 0
= + 1 u(t)
ẋ2 (t) 3 −2 x2 (t) 2
| {z } | {z }
A
  B (6.65)
7
 x1 (t)
y(t) = 4 3 + |{z}
0 u(t).
| {z } x2 (t)
D
C

In order to find the controller K, one has to compute the symmetric, positive definite
solution of the Riccati equation related to this problem. First, one has to look at
the form that this solution should have. Here B ∈ R2×1 . This means that since
Φ = Φ| we are dealing with Φ ∈ R2×2 of the form
 
ϕ1 ϕ2
Φ= . (6.66)
ϕ2 ϕ3

With the Riccati equation, it holds

Φ · (B · R−1 · B | ) · Φ − Φ · (A − BR−1 N | ) − (A − BR−1 N | )| Φ − Q = 0


 
−1 | | 0 0
Φ·B·R ·B ·Φ−Φ·A−A ·Φ−Q=
0 0
         
0 0 3 0 3 7 0 0 0
Φ · 1 · 4 · 0 12 · Φ − Φ ·

− ·Φ− =
2
3 −2 3 −2 0 3 0 0
         
2ϕ2 3ϕ2 3ϕ1 − 2ϕ2 3ϕ2 3ϕ3 7 0 0 0
· ϕ22 ϕ23 −

− − =
2ϕ3 3ϕ3 3ϕ2 − 2ϕ3 3ϕ1 − 2ϕ2 3ϕ2 − 2ϕ3 0 3 0 0
 2     
ϕ2 ϕ2 ϕ3 6ϕ2 + 7 3ϕ1 − 2ϕ2 + 3ϕ3 0 0
− = .
ϕ2 ϕ3 ϕ23 3ϕ1 − 2ϕ2 + 3ϕ3 6ϕ2 − 4ϕ3 + 3 0 0
(6.67)

Hence, one gets 3 equations (two elements are equal because of symmetry):

ϕ22 − 6 · ϕ2 − 7 = 0 (I)
ϕ2 · ϕ3 − 3 · ϕ1 + 2 · ϕ2 − 3 · ϕ3 = 0 (II) (6.68)
ϕ23 − 6 · ϕ2 + 4 · ϕ3 − 3 = 0 (III).

Sylvester’s Criterion: An Hermitian (here symmetric) matrix M ∈ Cm×m is positive


definite if and only if all the upper-left i×i submatrices (leading minors), i ∈ 1, . . . , m
has positive determinant. Applying this to Φ one gets the conditions:

(a) ϕ1 > 0.

186
Gioele Zardini Control Systems II FS 2018

(b) ϕ1 ϕ3 − ϕ22 > 0.

From the Equation (I), one gets



6± 64
ϕ2 =
2 (6.69)
= {−1, 7}.

Since we cannot discard a specific value, we pursue with ϕ2,1 = −1 and ϕ2,2 = 7:

Case ϕ2,1 = −1:

Plugging this into the Equation (III), one gets

ϕ23 + 6 + 4ϕ3 − 3 = 0
(6.70)
ϕ23 + 4ϕ3 + 3 = 0,

and

−4 ± 4
ϕ3 =
2 (6.71)
= {−3, −1}.

In order for these two values to fulfill the second Sylvester condition, it should hold
ϕ1 < 0, which violates the first condition. For this reason this is not a possible
choice.

Case ϕ2,1 = 7:

Plugging this into the Equation (III), one gets

ϕ23 − 42 + 4ϕ3 − 3 = 0
(6.72)
ϕ23 + 4ϕ3 − 45 = 0,

and

−4 ± 196
ϕ3 =
2 (6.73)
= {−9, 5}.

ϕ3 = 5 is the only value which does not violate the two Sylverster’s conditions.
Plugging the values into Equation (II) one gets

35 − 3ϕ1 + 14 − 15 = 0
34 (6.74)
ϕ1 = .
3
The solution of the Riccati equation hence is
 34 
7
Φ= 3 . (6.75)
7 5

187
Gioele Zardini Control Systems II FS 2018

The controller K can be computed as

K = R−1 B | Φ
 34 
1
 7
=4· 0 2 · 3 (6.76)
7 5

= 14 10 .

b) Before applying the Hamiltonian method, one need to check

• (A, B) stabilizable (all unstable modes are reachable). The reachability matrix
for this pair

R = B AB
0 32 (6.77)
 
= 1
2
−1

has full rank, hence the system is reachable.


• (A, Q) detectable. The observability matrix for the pair
 
Q
O=
QA
 
7 0 (6.78)
0 3 
=0 21 

9 −6

has full column rank, hence the system is observable.

In order to use the Hamiltonian method, one needs to build the Hamiltonian matrix
 
à R̃
H= , (6.79)
−Q̃ −Ã|
where
à = A − BR−1 N |
R̃ = −BR−1 B | (6.80)
−1 |
Q̃ = Q − N R N .

For this specific example (N = 0), one has

à = A.
 
0 1 1

R̃ = − 1 1 0 2
 2  4
(6.81)
0 0
= .
0 −1
Q̃ = Q.

188
Gioele Zardini Control Systems II FS 2018

Therefore, the Hamiltonian is


 
0 3 0 0
 3 −2 0 −1
H=
−7 0
. (6.82)
0 −3
0 −3 −3 2

In order to have the eigenvalues, one computes


 
−λ 3 0 0
 3 −2 − λ 0 −1 
det (H − λI) = det 
 −7

0 −λ −3 
0 −3 −3 2 − λ
   
−2 − λ 0 −1 3 0 −1
= −λ det  0 −λ −3  − 3 det −7 −λ −3 
−3 −3 2 − λ 0 −3 2 − λ
    
−λ −3 0 −λ (6.83)
= λ (2 + λ) det + det
−3 2 − λ −3 −3
    
−λ −3 −7 −λ
− 9 det − 3 det
−3 2 − λ 0 −3
 
−λ −3
= (λ2 + 2λ − 9) det − 3λ2 + 63
−3 2 − λ
= (λ2 − 9 − 2λ)(λ2 − 9 + 2λ) − 3λ2 + 63
= λ4 − 25λ2 + 144.

Therefore, the eigenvalues are

λ1,2 = ±3, λ3,4 = ±4. (6.84)

Since we only care about stable eigenvalues (in LHP), we compute the eigenvectors
for λ2 = −3 and λ4 = −4.
It holds:

• Eλ2 = E−3 : from (H − λ2 I) · x = 0 one gets the linear system of equations


 
3 3 0 0 0
 3
 1 0 −1 0  .
 −7 0 3 −3 0 
0 −3 −3 5 0
Using the first row as reference and subtracting the correct multiples of it from
the other rows, one gets the form
 
3 3 0 0 0
 0 −2 0 −1 0 
 .
 0 7 3 −3 0 
0 −3 −3 5 0
Using the second row as reference and subtracting the correct multiples of it
from the other rows, one gets the form

189
Gioele Zardini Control Systems II FS 2018

 
3 3 0 0 0
 0 −2 0 −1 0 
 .
 0 0 3 − 13 0 
2
0 0 0 0 0
Since one has one zero row, one can introduce a free parameter. Let x4 = s,
then x3 = 13
6
s, x2 = − 2s , x1 = 2s , s ∈ R. This defines the first eigenspace,
which is (multiplying everything by 6)
 
3
n −3 o
E−3 =   13  .
 (6.85)
6

• Eλ4 = E−4 : from (H − λ4 I) · x = 0 one gets the linear system of equations


 
4 3 0 0 0
 3
 2 0 −1 0  .
 −7 0 4 −3 0 
0 −3 −3 6 0
Using the first row as reference and subtracting the correct multiples of it from
the other rows, one gets the form
 
4 3 0 0 0
 0 −1 0 −4 0 
 0 21 16 −12 0  .
 

0 −3 −3 6 0
Using the second row as reference and subtracting the correct multiples of it
from the other rows, one gets the form
 
4 3 0 0 0
 0 −1 0 −4 0 
 0 0 16 −96 0  .
 

0 0 0 0 0
Since one has one zero row, one can introduce a free parameter. Let x4 = s,
then x3 = 6s, x2 = −4s, x1 = 3s, s ∈ R. This defines the second eigenspace,
which is (multiplying everything by 6)
 
3
n −4 o
E−4 =  6 .
 (6.86)
1

Stacking the eigenvectors one gets


 
  3 3
X1 −4 −3
=
 6 13  .
 (6.87)
X2
1 6

190
Gioele Zardini Control Systems II FS 2018

It holds
Φ = X2 X1−1
  −1
6 13 3 3
=
1 6 −4 −3
(6.88)
  
1 6 13 −3 −3
=
3 1 6 4 3
 34 
7
= 3 ,
7 5

which confirms the result of a).

c) The closed-loop matrix to analyse is


   
0 3 0 
A−B·K = − 1 · 14 10
3 −2
  2 
0 3 0 0
= − (6.89)
3 −2 7 5
 
0 3
= .
−4 −7

The eigenvalues of the closed loop system are given by

det((A − B · K) − λ · I) = 0
(6.90)
λ2 + 7λ + 12 = 0

from which it follows: λ1 = −3 and λ2 = −4.

d) No. Since it holds Jnew = 10 · J, K remains the same.

191
Gioele Zardini Control Systems II FS 2018

Example 53. You design with Matlab a LQ Regulator:

1 A = [1 0 0 0; 1 1 0 0; 1 1 1 0; 0 0 1 1];
2 B = [1 1 1 1; 0 1 0 2]’;
3 C = [0 0 0 1; 0 0 1 1; 0 1 1 1];
4
5 nx = size(A,1); Number of state variables of the plant, in Script: n
6 nu = size(B,2); Number of input variables of the plant, in Script: m
7 ny = size(C,1); Number of output variables of the plant, in Script: p
8
9 q = 1;
10 r = 1;
11 Q = q*eye(###);
12 R = r*eye(###);
13
14 K = lqr(A,B,Q,R);

Fill the following rows:

11 : ### =
12 : ### =

Solution. The matrix Q is a weight for the states and the matrix R is a weight for the
inputs. The correct filling is

11 : ### = nx
12 : ### = nu

192
Gioele Zardini Control Systems II FS 2018

7 State Estimation
In the previous chapters, we assumed that all the state variables of a given systen system
were available at each time. In real systems, however, this is not the case: one knows just
the output y(t) and the input u(t). Hence, one has to figure out how to get the actual
state x(t). The idea is to use an observer to get an estimate of x(t), also called x̂(t).
A whole course about estimation if offered at IDSC in the master by Prof. D’Andrea:
Recursive Estimation.

7.1 Preliminary Definitions


Definition 23. A system is said to be observable if for any time T > 0 it is possible to
determine the state of the system x(T ) ∈ Rn through the measurements of u(t) and y(t),
with t ∈ [0, T ].

A pair (A, C) is observable if and only if rank(O) = n = dim(A), where


 
C
 CA 
O =  ..  (7.1)
 
 . 
CAn−1

is the observability matrix.

Definition 24. A pair (A, C) is detectable if all the unobservable modes are stable.

Definition 25. A linear system is reachable if for any x0 , xf ∈ Rn , there exists a T > 0
and u(t) : [0, T ] → R such that the corresponding solution satisfies x(0) = x0 and x(T ) =
xf ∈ Rn .

Reachability test: A system (A, B) is reachable if and only if rank(R) = n, where


x ∈ Rn and 
R = B AB . . . An−1 B . (7.2)
Remark. Note that for continuous linear time invariant systems, controllability is the same
as reachability. In general, reachability implies controllability, but not the converse.

7.2 Problem Definition


Given a linear, time invariant system

ẋ(t) = Ax(t) + Bu(t)


(7.3)
y(t) = Cx(t) + Du(t),

at each time instant t construct an estimate of the state x̂(t) by only measuring the
system’s inputs and outputs, such that

lim (x(t) − x̂(t)) = 0. (7.4)


t→∞

193
Gioele Zardini Control Systems II FS 2018

7.3 The Luenberger Observer


In order to do this, one creates a numerical copy of the system (an observer)
˙
x̂(t) = Ax̂(t) + Bu(t)
(7.5)
ŷ(t) = C x̂(t) + Du(t),

and one observes the dynamics of the estimation error

ê = x(t) − x̂(t). (7.6)

It holds
˙ = ẋ(t) − x̂(t)
ê(t) ˙
= Ax(t) + Bu(t) − Ax̂(t) − Bu(t) (7.7)
= Aê(t).

If matrix A has all the eigenvalues in the left half-plane, the error ê(t) will converge to
zero, resulting in a correct state estimation. But is this what we want? Essentially, our
error is converging to zero because the states of the two systems are designed to converge
to zero. In particular, we are not using the output as an information. How can we solve
the problem even for unstable systems? For the following, consider the structure reported
in Figure 54. Let’s add feedback from the measured output by considering the observer
˙
x̂(t) = Ax̂(t) + Bu + L(y(t) − ŷ(t)). (7.8)

It holds
˙ = ẋ(t) − x̂(t)
ê(t) ˙

= Ax(t) + Bu(t) − Ax̂(t) − Bu(t) − L(y(t) − ŷ(t)) (7.9)


= Ax(t) − Ax̂(t) − L(Cx(t) − C x̂(t))
= (A − LC)ê(t).

With this new equation, one can choose a matrix L such that the matrix A − LC has
eigenvalues with negative real parts and hence such that the error ê(t) will converge to 0.
This observer is known as the Luenberger observer.

u(t) x(t) y(t)


ẋ(t) = Ax(t) + Bu(t) y(t) = Cx(t) + Du(t)

x̂(t) ŷ(t) −
˙
x̂(t) = Ax̂(t) + Bu(t) + L(y(t) − ŷ(t)) ŷ(t) = C x̂(t) + Du(t)

Figure 54: Luenberger Observer

194
Gioele Zardini Control Systems II FS 2018

7.3.1 Duality of Estimation and Control


The structure of Equation 7.9 is similar to the one of state feedback we introduce in the
previous chapter. Considering the state feedback dynamic equation
ẋ(t) = (A − BK)x(t), (7.10)
one can find in Equation 7.9 some analogies. In particular, it holds
˙ = ( A| − C | L| )| ê(t).
ê(t) (7.11)
|{z} |{z} |{z}
à B̃ K̃

Since the problem has the same form, one can use the same methodology to solve it.
One recalls that pole placement is allowed if and only if the system is reachable, i.e. if
rank(R) = n = dim(A), where R is the reachability matrix defined in Equation 7.2 By
using the analogies, one can define similarly

R̃ = B̃ ÃB̃ . . . Ãn−1 B̃
= C | A| C | . . . (A| )n−1 C |

 |
C
 CA  (7.12)
=  .. 
 
 . 
CAn−1
= O| ,
where O is the observability matrix defined in Equation 7.1. Using the well known rule
rank(O) = rank(O| ), (7.13)
one can impose rank(O) to be n in order for observer pole placement to be feasible.
Starting from the Ackermann formula for state feedback
K = 0 . . . 1 R−1 p∗cl (A),

(7.14)
one can write
K̃ = L|
= 0 . . . 0 1 R̃−1 p∗cl (A)

(7.15)
|
⇒ L = p∗cl (A)O−1 0 . . . 0 1 .

7.3.2 Putting Things Together


Considering Figure 55, one can define the augmented state
 
x(t)
x̃(t) = . (7.16)
ê(t)
The dynamics read
 
˙ ẋ(t)
x̃(t) = ˙
ê(t)
  
A − BK BK x(t) (7.17)
=
0 A − LC ê(t)
| {z }
Acl

195
Gioele Zardini Control Systems II FS 2018

Since Acl is upper triangular, it holds


σ(Acl ) = σ(A − BK) ∪ σ(A − LC). (7.18)
This is known as the separation principle. Intuitively, this means that control and esti-
mation do not interact with each other, hence can designed independently.

u(t) ẋ(t) = Ax(t) + Bu(t)


−KLQR
y(t) = Cx(t) + Du(t)
y(t)
x̂(t)
˙
x̂(t) = Ax̂(t) + Bu(t) − L(ŷ(t) − y(t))
ŷ(t) = C x̂(t) + Du(t)

Figure 55: Observer Problem: Closed loop system.

In general one follows this procedure:


1. Design the controller first. Find K to place the poles of A − BK where you desire
in the LHP.
2. Design the observer. Find L to place the poles of A − LC in the LHP. As a rule of
thumb, make the observer 10 times faster than the controller.

7.4 Linear Quadratic Gaussian (LQG) Control


LQR relies on the assumption that the states are known. How can one integrate the
defined estimation procedure in the LQR framework? Is the optimality defined for the
LQR method affected by this? In the following, we recall the LQR problem definition
and its solution.

7.4.1 LQR Problem Definition


With LQR one wants to find a stabilizing input uLQR (t), t ∈ [0, ∞] such that
Z ∞
uLQR (t) = argmin u(t)| Ru(t) + x(t)| Qx(t) + 2x(t)| N u(t)dt (7.19)
u 0

satisfying
• the dynamics
ẋ(t) = Ax(t) + Bu(t), x(0) = x0 (7.20)
and
z(t) = Ex(t) + F u(t), (7.21)
with u(t) ∈ Rm×1 , z ∈ Rk×1 and x ∈ Rn×1 .
• Q > 0, R > 0, Q = Q| , R = R| , with
R = F | Q̄F + ρR̄, ρ ∈ R+
Q = E | Q̄E (7.22)
N = E | Q̄F.

196
Gioele Zardini Control Systems II FS 2018

7.4.2 LQR Problem Solution


If

1. The system (A, B) is stabilizable and

2. the pair (Ã, Q̃) = (A − BR−1 N | , Q − N R−1 N | ) is detectable,

then

uLQR(t) = −R^{-1}(N + PB)| x(t) = −KLQR x(t), (7.23)

where KLQR = R^{-1}(N + PB)| and P is the real, symmetric, positive definite solution of the algebraic Riccati equation (ARE)

(A − BR^{-1}N|)| P + P (A − BR^{-1}N|) + P (−BR^{-1}B|) P + (Q − NR^{-1}N|) = 0, (7.24)

with the shorthands Ā = A − BR^{-1}N|, R̄ = −BR^{-1}B| and Q̄ = Q − NR^{-1}N|.

7.4.3 Simplified Case


It turns out that choosing N = 0 results in nice robustness properties. By writing P∞ instead of P (we are solving the infinite-horizon problem), one can simplify the ARE as

A| P∞ + P∞ A − P∞ BR−1 B | P∞ + Q = 0. (7.25)
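As a minimal numerical sketch (the plant is again the double integrator of Example 55; the weights Q and R below are purely illustrative, with N = 0), the simplified problem is solved directly by MATLAB's lqr or care:

    A = [0 1; 0 0];  B = [0; 1];
    Q = eye(2);  R = 1;                 % illustrative weights
    [K_lqr, P_inf] = lqr(A, B, Q, R);   % K_lqr = R^(-1)*B'*P_inf
    % Equivalently, solve the ARE (7.25) explicitly:
    [P_care, ~, K_care] = care(A, B, Q, R);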

7.4.4 Steady-state Kalman Filter


The observer problem, i.e. find L in

d/dt x̂(t) = Ax̂(t) + Bu(t) + L(y(t) − ŷ(t)) (7.26)

such that (A − LC) is stable, shows duality with the control problem, i.e.

C | → B, A| → A, L| → K. (7.27)

Thanks to this duality, one can solve the estimation problem by solving the corresponding control problem.
The algebraic Riccati equation for estimation is

AP∞ + P∞ A| − P∞ C | R−1 CP∞ + Q = 0. (7.28)

The matrix L can be found with

L| = R−1 CP∞ ⇒ L = P∞ C | R−1 . (7.29)

The duality exists also for the technical conditions for the ARE:

(A|, C|) stabilizable ↔ (A, C) detectable,
(A|, Q) detectable ↔ (A, Q) stabilizable. (7.30)
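Exploiting this duality, the steady-state estimator gain can be obtained with the same numerical tools used for LQR. A minimal sketch (the noise weights Q and R are the ones used in Example 55 for the double-integrator plant):

    A = [0 1; 0 0];  B = [0; 1];  C = [1 0];
    Q = B*B';  R = 1;                   % process/measurement noise weights
    % Dual problem: L' plays the role of K for the pair (A', C').
    L = lqr(A', C', Q, R)';             % for this plant L = [sqrt(2); 1], cf. Example 55
    % Equivalently, solve the estimation ARE (7.28) directly:
    P_inf   = care(A', C', Q, R);
    L_check = P_inf*C'/R;               % Equation (7.29)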


Deterministic Interpretation
It is worth mentioning that the theory of Kalman filtering goes well beyond what is presented here. Among others, the stochastic interpretation, the recursive formulation, the finite-horizon case and the discrete-time implementation represent important topics to be discussed. However, for the aim of this course, we use the deterministic interpretation of Kalman filters. Consider the effect of disturbances/noise on the plant

ẋ(t) = Ax(t) + Bu(t) + w(t), x(0) = x0,
y(t) = Cx(t) + Du(t) + n(t), (7.31)

where w(t) represents the process noise and n(t) the measurement noise. The Kalman filter can be interpreted deterministically as minimizing the uncertainty measure

‖x0‖_2^2 + ∫_0^T ( ‖w(t)‖_2^2 + ‖n(t)‖_2^2 ) dt, (7.32)

i.e. as estimating the least-energy/most likely initial condition, disturbance and measurement noise that justify the measurements.

7.4.5 Summary
The linear quadratic Gaussian (LQG) regulator is the combination of an LQR controller and a Kalman filter. One can see such a closed-loop system in Figure 56. The closed loop is stable if and only if K is a stabilizing state feedback gain and L is a stabilizing estimation gain.

Figure 56: LQG problem: closed-loop system, consisting of trajectory generation, the LQR state feedback −KLQR, the plant (with disturbance d(t) and noise n(t)) and the Kalman filter providing x̂(t).
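A minimal sketch of how the two pieces are combined in code (reusing the gains K and L of Example 55 for the double-integrator plant; the resulting dynamic compensator has the structure Ĉ(s) = K(sI − (A − BK − LC))^{-1}L used in Example 55):

    A = [0 1; 0 0];  B = [0; 1];  C = [1 0];  D = 0;
    K = [1 1];                     % state feedback gain (as in Example 55)
    L = [sqrt(2); 1];              % Kalman gain (as computed in Example 55)
    % Output-feedback LQG compensator from measurement y to input u:
    C_lqg = ss(A - B*K - L*C, L, K, 0);
    % Close the loop around the plant with negative feedback:
    P    = ss(A, B, C, D);
    T_cl = feedback(P*C_lqg, 1);
    isstable(T_cl)                 % true, by the separation principle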


7.5 Examples
Example 54. You are given the following system matrices:

A = [1 3 0; 0 −4 0; 3 −2 −2],   B = [2; 0; 0],   C = [1 0 0]. (7.33)

a)  The system is fully observable and fully controllable.


 The system is fully observable but not controllable.
 The system is not observable but fully controllable.
 The system is not observable and not controllable.

b)  The eigenvalue corresponding to the non observable state is: 1


 The eigenvalue corresponding to the non observable state is: -2
 The eigenvalue corresponding to the non observable state is: -4
 All the states are observable.

c)  The eigenvalue corresponding to the non reachable state is: 1


 The eigenvalue corresponding to the non reachable state is: -2
 The eigenvalue corresponding to the non reachable state is: -4
 All the states are reachable.

d)  The system is detectable and stabilizable.


 The system is detectable but not stabilizable.
 The system is not detectable but stabilizable.
 The system is not detectable and not stabilizable.


Solution. You are given the following system matrices:

A = [1 3 0; 0 −4 0; 3 −2 −2],   B = [2; 0; 0],   C = [1 0 0]. (7.34)
a)  The system is fully observable and fully controllable.
 The system is fully observable but not controllable.
 The system is not observable but fully controllable.
3 The system is not observable and not controllable.

b)  The eigenvalue corresponding to the non observable state is: 1
3 The eigenvalue corresponding to the non observable state is: -2

 The eigenvalue corresponding to the non observable state is: -4
 All the states are observable.
c)  The eigenvalue corresponding to the non reachable state is: 1
 The eigenvalue corresponding to the non reachable state is: -2
3 The eigenvalue corresponding to the non reachable state is: -4

 All the states are reachable.
d) 3 The system is detectable and stabilizable.

 The system is detectable but not stabilizable.
 The system is not detectable but stabilizable.
 The system is not detectable and not stabilizable.

• The observability and controllability of a system can be determined by ana-


lyzing the rank of the controllability and observability matrices respectively.
After a short calculation you will see that both have only rank 2 which means
that neither controllability nor observability of the system is given.
• To determine the eigenvalue corresponding to a possible non observable state,
a PBH test for detectability needs to be done with every eigenvalue of the
system.

rank([1·I − A; C]) = 3,   rank([−2·I − A; C]) = 2,   rank([−4·I − A; C]) = 3. (7.35)
Thus, the non observable state corresponds to the eigenvalue -2.
• To determine the eigenvalue corresponding to a possible non reachable state,
a PBH test for stabilizability needs to be done with every eigenvalue of the
system.
rank[1 · I − A B] = 3 rank[−2 · I − A B] = 3 rank[−4 · I − A B] = 2
(7.36)
Thus, the non reachable state corresponds to the eigenvalue -4.
• As the unstable state with eigenvalue 1 has full rank for both tests, the overall
system is detectable and stabilizable.
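These checks are easy to reproduce numerically. A minimal sketch using the matrices of Equation 7.33 (the displayed PBH tests simply evaluate the ranks used above):

    A = [1 3 0; 0 -4 0; 3 -2 -2];  B = [2; 0; 0];  C = [1 0 0];
    rank(ctrb(A, B))                 % = 2 < 3: not fully controllable
    rank(obsv(A, C))                 % = 2 < 3: not fully observable
    for lambda = eig(A)'             % PBH tests for every eigenvalue
        [lambda, rank([lambda*eye(3) - A, B]), rank([lambda*eye(3) - A; C])]
    end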


Example 55. The dynamics of a system are given as

ẋ1 (t) = x2 (t)


ẋ2 (t) = u(t) (7.37)
y(t) = x1 (t).

You want to design a state observer. The observer should use the measurements for y(t)
and u(t) in order to estimate the state variables x̂(t) ∼ x(t).

(a) Which dimension should the observer matrix L have?

(b) Compute the observer matrix L for R = 1 and Q = BB | .



(c) You have already computed a state feedback matrix K = [1 1] for the system above. What is the complete transfer function of the controller C(s)?


Solution.
(a) Since C ∈ R^{1×2} and L·C must have the same dimensions as A ∈ R^{2×2}, L is a 2 × 1 matrix, i.e.

L = [L1; L2]. (7.38)

(b) First of all, let's read from 7.37 the system matrices:

A = [0 1; 0 0],   B = [0; 1],   C = [1 0],   D = 0. (7.39)

Plugging these matrices into the algebraic Riccati equation and using the unknown matrix

Ψ = [ψ1 ψ2; ψ2 ψ3], (7.40)

one gets:

(1/R)·Ψ·C^T·C·Ψ − Ψ·A^T − A·Ψ − B·B^T = 0
⇒ [ψ1^2, ψ1·ψ2; ψ1·ψ2, ψ2^2] − [2ψ2, ψ3; ψ3, 0] − [0 0; 0 1] = [0 0; 0 0]. (7.41)
The matrix Ψ is symmetric and positive definite, and with this information we can compute its elements:
• From the last term of the equation one gets
ψ2^2 = 1 ⇒ ψ2 = ±1. (7.42)
• By plugging this into the first equation one gets ψ1^2 = 2ψ2, hence ψ1 = ±√2. Because of the positive definiteness condition, one gets ψ1 = √2, ψ2 = 1.
• Because of the form of C we don't care about ψ3.
From these calculations it follows

L^T = (1/R)·C·Ψ = [1 0]·[√2 1; 1 ∗] = [√2 1], (7.43)

and so

L = [√2; 1]. (7.44)

Figure 57: Structure of LQG controller.

(c) By looking at Figure 57, one can write the transfer function of the feedback controller
as
Ĉ(s) = K · (s · I − (A − B · K − L · C))−1 · L. (7.45)
By plugging in the found matrices one gets

A − B·K − L·C = [0 1; 0 0] − [0; 1]·[1 1] − [√2; 1]·[1 0]
              = [0 1; 0 0] − [0 0; 1 1] − [√2 0; 1 0] (7.46)
              = [−√2 1; −2 −1].

It follows

(s·I − (A − B·K − L·C))^{-1} = [s + √2, −1; 2, s + 1]^{-1}
                             = 1/((s + √2)(s + 1) + 2) · [s + 1, 1; −2, s + √2]. (7.47)

By plugging this into the formula one gets

Ĉ(s) = K·(s·I − (A − B·K − L·C))^{-1}·L
     = [1 1] · 1/((s + √2)(s + 1) + 2) · [s + 1, 1; −2, s + √2] · [√2; 1]
     = 1/((s + √2)(s + 1) + 2) · [s − 1, s + 1 + √2] · [√2; 1] (7.48)
     = 1/((s + √2)(s + 1) + 2) · (√2·s − √2 + s + 1 + √2)
     = ((√2 + 1)s + 1) / ((s + √2)(s + 1) + 2).

Example 56. Design a Luenberger Observer for the following system:

A = [−2 1; 0 −4],   B = [0; 1],   C = [1 0] (7.49)

with the desired observer poles (−4, −4).

a) With the direct method, i.e. by imposing the poles by hand.

b) Using the Ackermann formula.


Solution.
a) The observer gain is defined as L = [l1; l2].
The evolution in the estimation error for a Luenberger observer can be written as:

ê˙ = (A − LC)ê (7.50)

By plugging in the matrices one gets:

A − LC = [−2 1; 0 −4] − [l1; l2]·[1 0] = [−2 − l1, 1; −l2, −4] (7.51)

The characteristic polynomial of this system can be written as:

det(λI − A + LC) = λ^2 + (6 + l1)λ + 4l1 + l2 + 8 (7.52)

As the observer poles are required to be (−4, −4), the desired characteristic polynomial is:

(λ + 4)^2 = λ^2 + 8λ + 16 (7.53)
We can thus deduce that l1 = 2 and l2 = 0 by comparing the coefficients of the two
polynomials.
The Luenberger observer can therefore be written as:

d/dt x̂(t) = (A − LC)x̂(t) + Bu(t) + Ly(t) = [−4 1; 0 −4] x̂(t) + [0; 1] u(t) + [2; 0] y(t) (7.54)

b) To apply the Ackermann formula, the inverse of the observability matrix of the system needs to be known:

O = [C; CA] = [1 0; −2 1]  ⇒  O^{-1} = [1 0; 2 1] (7.55)

The desired observer poles are at (−4, −4); the characteristic polynomial is therefore written as:

p∗cl(s) = s^2 + 8s + 16  ⇒  p∗cl(A) = A^2 + 8A + 16I (7.56)

p∗cl(A) = [−2 1; 0 −4]^2 + 8·[−2 1; 0 −4] + 16·[1 0; 0 1] = [4 2; 0 0] (7.57)

The observer gain can now be derived using the Ackermann formula:

L = p∗cl(A) O^{-1} [0; 1] (7.58)

L = [l1; l2] = [4 2; 0 0]·[1 0; 2 1]·[0; 1] = [2; 0] (7.59)
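Both computations are reproduced by MATLAB's acker (a minimal check; acker is used instead of place because the desired observer poles are repeated):

    A = [-2 1; 0 -4];  C = [1 0];
    L = acker(A', C', [-4 -4])'    % returns [2; 0], as derived above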


Example 57. You are working for your semester thesis at a project which includes a water
reservoir. Your task is to determine the disturbance d(t) that acts on the reservoir. Figure
58 shows the situation. The only state of the system is the water volume x(t) = V (t).
The volume flows in the reservoir Vin∗ (t) are the known system input u(t) and the unknown
disturbance d(t) > 0. The volume flow of the system is assumed to be only dependent on the water volume, i.e.

Vout (t) = −β · x(t). (7.60)
The system output y(t) is the water level h(t). The model of this reservoir reads

dx(t)/dt = −β·x(t) + u(t) + d(t),
y(t) = (1/α)·x(t),     α > 0, β > 0. (7.61)

Figure 58: a) Drawing of the reservoir; b) Inputs and outputs of the observer; c) Blocks for the signal flow diagram.

The goal is to determine d(t). Your supervisor has already tried to solve the model equations for d(t): he couldn't determine the change in volume dx(t)/dt with enough precision.
Hence, you want to solve this problem with a state observer.

(a) Draw the signal flow diagram of such a state observer. Use the blocks of Figure
58c).

(b) The state feedback matrix L is in this case some scalar value. Which value can L
be, in order to get an asymptotically stable state observer?
(c) Introduce a new signal d̂(t) in the state observer. This should approximate the real disturbance d(t).

(d) Find the state space description of the observer with inputs u(t) and y(t) and output d̂(t).


Solution.
(a) The signal flow diagram can be seen in Figure 59.

Figure 59: Signal flow diagram of the state observer.

(b) The stability of the observer depends on the eigenvalues of A − L·C. In this case, since A − L·C is a scalar,

A − L·C < 0 (7.62)

should hold. This leads to

L > A/C. (7.63)

With the given information (A = −β, C = 1/α) it follows

L > −α·β. (7.64)

(c) The dashed line in Figure 59 represents the new output d̂(t). The integrator in Figure 59 now has 3 inputs. The downward arrow from the reservoir is V∗out(t), the arrow from the left is the input flow u(t) = V∗in1(t). If we simulate the system without the dashed arrow, there is a deviation between the measured y(t) and the simulated ŷ(t). This results from the extra inflow d(t), which is not considered in the simulation.
(d) The new state-space description reads

dx̂(t)/dt = (−β − L/α)·x̂(t) + 1·u(t) + L·y(t) (7.65)

d̂(t) = (−L/α)·x̂(t) + 0·u(t) + L·y(t). (7.66)

8 H∞ Control
The big disadvantage of LQR/LQG is that one cannot directly impose frequency domain
specifications to the control loop. A solution to this problem is given by the H∞ control
formulation.

8.1 Problem Formulation


In H∞ control we consider the closed-loop system representation reported in Figure 60.
By referring to Figure 60, one can define

Figure 60: General system for H∞ control: the extended plant G(s) maps (w, ũ) to (z, ỹ); the controller C(s) computes ũ from ỹ.

• G(s) is called the extended system and is real, rational and proper.

• C(s) is the controller and is real, rational and proper.

• w(t) ∈ Cm1 ×1 is called exogenous input, and contains at least the reference signal
r(t) and possibly other exogenous signals, such as a noise model n(t).

• z(t) ∈ Cp1 ×1 is called the performance output and is a virtual output signal only
used for design.

• ũ(t) ∈ Cm2 ×1 is the control input, computed by the controller C(s).

• ỹ(t) ∈ Cp2 ×1 is the measured output, available to the controller C(s).

Remark. As a side note, exogenous means: caused or produced by factors external to a model.
With H∞ control we are interested in finding the controller C(s) that stabilizes internally and externally the closed loop system and minimizes

‖Tzw(s)‖∞ = sup_{w≠0} ‖z‖2/‖w‖2 = sup_ω σ̄(Tzw(jω)) := γmin, (8.1)

where Tzw (s) is the transfer function which relates signals z(t) and w(t). Intuitively, this
is equivalent to

• Minimize the energy (k · k2 norm) gain of the closed-loop system.


• Have a chance to incorporate constraints on regulated variables in frequency space,


e.g. the tracking error E(s).

One can hence state the aim of H∞ control to be:

Optimal H∞ Control: Find all admissible controllers C(s) such that kTzw (jω)k∞ is
minimized.

Differently from what we observed in H2 control, the optimal H∞ controllers are not unique. Moreover, the process of finding an optimal controller is complicated, both numerically and theoretically. This said, in practice it is often not necessary to design an optimal controller: it is often sufficient to find controllers which are close to optimality, but easier to compute, i.e. suboptimal controllers.

Suboptimal H∞ Control: Given γ > 0, find all admissible controllers C(s) such that
kTzw (jω)k∞ < γ.

8.2 Mixed Sensitivity Approach


8.2.1 Transfer Functions Recap
By considering the standard MIMO control system structure with 0 disturbance depicted
in Figure 61 and defining the signals of interest to be E(s), U (s) and Y (s), one can derive
them as
E(s) = R(s) − Y (s)
= R(s) − N (s) − P (s)C(s)E(s)
(8.2)
⇒ E(s) = (I + P (s)C(s))−1 (R(s) − N (s))
= S(s)(R(s) − N (s))

U (s) = C(s)E(s)
(8.3)
= C(s)S(s)(R(s) − N (s)),

Y (s) = N (s) + P (s)C(s)E(s)


= N (s) + P (s)C(s)S(s)(R(s) − N (s))
= N (s) + T (s)(R(s) − N (s)) (8.4)
= (I − T (s))N (s) + T (s)R(s)
= S(s)N (s) + T (s)R(s),

which can be rewritten in matrix form as

[E(s); U(s); Y(s)] = [S(s), −S(s); C(s)S(s), −C(s)S(s); T(s), S(s)] · [R(s); N(s)]. (8.5)

Figure 61: Standard feedback control system structure, with F(s) = I, controller C(s), plant P(s), disturbance d(t) = 0 and noise n(t).

8.2.2 How to ensure Robustness?


As previously mentioned, the H∞ control approach allows one to introduce specifications in the frequency domain. In particular, we have always focused on the analysis of the useful system transfer functions S(s) (sensitivity function) and T(s) (complementary sensitivity function). As a general reminder, recall that a small sensitivity S(s) corresponds to disturbance rejection and a small complementary sensitivity function T(s) corresponds to noise attenuation and robustness against modeling errors. We now recall the bound we defined in previous classes

‖W1(jω)S(jω)‖ + ‖W2(jω)T(jω)‖ < 1, (8.6)

where W1(s) and W2(s) are the robust weighting functions for the sensitivity and the complementary sensitivity, respectively. In order to ensure robustness, we would like to minimize the left term in Equation 8.6. Since there exists no controller which is able to solve this problem directly, one can write the two conditions separately, i.e.

‖W1(s)S(s)‖∞ < 1   (nominal performance),
‖W2(s)T(s)‖∞ < 1   (robust stability). (8.7)

8.2.3 How to use this in H∞ Control?


Once the weighting functions are designed, one needs to augment the original plant in
order to let the approach meet the H∞ problem definition. In particular, the general form
of such an augmentation with ũ(t) = u(t), ỹ(t) = e(t) and w(t) = r(t) can be seen in
Figure 62. One can note that the signals resulting from the weighting are three and are
ze (t), zu (t), zy (t). The error e(t) is fed through the weighting function We (s) (W1 (s) in
our previous considerations). Since the transfer function from the reference signal r(t) to
the error e(t) is known to be the sensitivity function S(s) (refer to Equation 8.5), one can
write
Ze (s) = We (s)S(s)R(s). (8.8)
The measured output y(t) is fed through the weighting function Wy (s) (W2 (s) in our
previous considerations). Since the transfer function from the reference r(t) to the output
y(t) is known to be the complementary sensitivity function T (s) (refer to Equation 8.5),
one can write
Zy (s) = Wy (s)T (s)R(s). (8.9)
Figure 62: General extended system structure: the weights We(s), Wu(s) and Wy(s) act on e(t), u(t) and y(t) and produce the performance outputs ze(t), zu(t) and zy(t).

The clever reader will notice that a third weighting function is present. The input u(t) is fed through the weighting function Wu(s). Since the transfer function from the reference r(t) to the input u(t) is known to be C(s)S(s) (refer to Equation 8.5), one can write

Zu (s) = Wu (s)C(s)S(s)R(s). (8.10)

Summarizing, one can write

[Ze(s); Zu(s); Zy(s)] = [We(s)S(s); Wu(s)C(s)S(s); Wy(s)T(s)] · R(s), (8.11)

i.e. Z(s) = Tzw(s)W(s), with W(s) = R(s).
8.3 Finding Tzw (s)


8.3.1 General Form
By looking at Figure 60, one can write the system into standard form, which reads

[Z(s); Ỹ(s)] = G(s) [W(s); Ũ(s)] = [G11(s), G12(s); G21(s), G22(s)] [W(s); Ũ(s)]. (8.12)

Furthermore, it holds

Ũ(s) = C(s)Ỹ(s) = C(s) ( G21(s)W(s) + G22(s)Ũ(s) ), (8.13)

from which it follows

Ũ (s) = (I − C(s)G22 (s))−1 C(s)G21 (s)W (s). (8.14)


Combining Equation 8.12 and Equation 8.14 results in

Z(s) = G11(s)W(s) + G12(s)Ũ(s)
     = G11(s)W(s) + G12(s)(I − C(s)G22(s))^{-1}C(s)G21(s)W(s) (8.15)
     = [ G11(s) + G12(s)(I − C(s)G22(s))^{-1}C(s)G21(s) ] W(s) =: Tzw(s)W(s).

The infinity norm of Tzw(s) is per definition

‖Tzw(jω)‖∞ = max_ω ( max_i σi(Tzw(jω)) ), (8.16)

which corresponds to the maximum magnitude of its frequency response. Minimizing the infinity norm means, in practice, minimizing this maximum singular value, i.e. minimizing the worst-case amplification from w(t) to z(t) at any frequency.

8.3.2 Applying Mixed Sensitivity Approach


If the plant is augmented using the mixed sensitivity approach, it holds

Tzw(s) = [We(s)S(s); Wu(s)C(s)S(s); Wy(s)T(s)]. (8.17)

Let's define Ŝ(s), T̂(s) and R̂(s) to be acceptable upper bounds for the sensitivity S(s), the complementary sensitivity T(s) and the transfer function r → u, C(s)S(s). By setting

We(s) = Ŝ(s)^{-1},
Wu(s) = R̂(s)^{-1}, (8.18)
Wy(s) = T̂(s)^{-1},

one can write the control problem as:

Suboptimal H∞ Control: Find C(s) such that for sufficiently small γ ∈ R+ it holds

‖ [We(s)S(s); Wu(s)C(s)S(s); Wy(s)T(s)] ‖∞ ≤ γ. (8.19)
Remark. Note that if σ̄(Tzw(s)) = σ̄, then σ(Tzw(s)^{-1}) = 1/σ̄, where σ denotes the smallest singular value. (8.20)
One can then in general define the generalized optimization problem related to this control problem to be

min γ   subject to   ‖Tzw(s)‖∞ ≤ γ. (8.21)

One has essentially two cases:

• If the solution of the optimization problem results in γ∗ ≤ 1, then the imposed specifications are fulfilled and satisfied by C(s).
• If the solution of the optimization problem results in γ∗ > 1, the conditions are not satisfied and one should use relaxed weights.
What is done in practice is an iterative procedure, which solves the optimization problem
by restricting the weighting functions.
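In MATLAB this mixed-sensitivity iteration is typically carried out with the Robust Control Toolbox. A minimal sketch follows; the plant P and the weighting functions are purely illustrative choices, not taken from a specific example of this course:

    P  = tf(1, [1 1 1]);                   % illustrative SISO plant
    We = makeweight(100, 1, 0.5);          % large at low frequency: bounds S(s)
    Wu = tf(0.1);                          % small constant weight on u
    Wy = makeweight(0.5, 10, 100);         % large at high frequency: bounds T(s)
    Paug = augw(P, We, Wu, Wy);            % extended plant G(s)
    [C, Tzw, gamma] = hinfsyn(Paug, 1, 1); % 1 measurement, 1 control input
    gamma                                  % specifications met if gamma <= 1

If gamma turns out larger than 1, the weights are relaxed and the synthesis is repeated, exactly as described above.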


8.4 Implementation
By fixing a γ ∗ , one can solve the optimization problem. The augmented plant G(s) has
to be represented in state space form, i.e. we need to express the weighting functions’
dynamics. In general, one can always write

P (s) = C(sI − A)−1 B + D,


We (s) = Ce (sI − Ae )−1 Be + De ,
(8.22)
Wu (s) = Cu (sI − Au )−1 Bu + Du ,
Wy (s) = Cy (sI − Ay )−1 By + Dy .

8.4.1 State Space Representation


The extended system can be re-written as a dynamic system with Aext, Bext, Cext, Dext as

[ẋ(t); ẋe(t); ẋu(t); ẋy(t)] = Aext [x(t); xe(t); xu(t); xy(t)] + Bext [w(t); u(t)],
[ze(t); zu(t); zy(t); e(t)] = Cext [x(t); xe(t); xu(t); xy(t)] + Dext [w(t); u(t)]. (8.23)

By using Figure 62, one can write

ẋe (t) = Ae xe (t) + Be ue (t)


= Ae xe (t) + Be (r(t) − y(t))
= Ae xe (t) + Be (w(t) − Cx(t) − Du(t))
= Ae xe (t) + Be w(t) − Be Cx(t) − Be Du(t),
(8.24)
ze (t) = ye (t)
= Ce xe (t) + De ue (t)
= Ce xe (t) + De (w(t) − Cx(t) − Du(t))
= Ce xe (t) − De Cx(t) + De w(t) − De Du(t),

and
ẋy (t) = Ay xy (t) + By uy (t)
= Ay xy (t) + By y(t)
= Ay xy (t) + By (Cx(t) + Du(t))
= Ay xy (t) + By Cx(t) + By Du(t),
(8.25)
zy (t) = yy (t)
= Cy xy (t) + Dy uy (t)
= Cy xy (t) + Dy (Cx(t) + Du(t))
= Cy xy (t) + Dy Cx(t) + Dy Du(t),


and
ẋu (t) = Au xu (t) + Bu uu (t)
= Au xu (t) + Bu u(t),
zu (t) = yu (t) (8.26)
= Cu xu (t) + Du uu (t)
= Cu xu (t) + Du u(t).

Combining these results with Equation 8.23, one gets

Aext = [A, 0, 0, 0;  −BeC, Ae, 0, 0;  0, 0, Au, 0;  ByC, 0, 0, Ay],

Bext = [0, B;  Be, −BeD;  0, Bu;  0, ByD] = [Bext,w, Bext,u],

Cext = [−DeC, Ce, 0, 0;  0, 0, Cu, 0;  DyC, 0, 0, Cy;  −C, 0, 0, 0] = [Cext,z; Cext,y],

Dext = [De, −DeD;  0, Du;  0, DyD;  I, −D] = [Dext,zw, Dext,zu; Dext,yw, Dext,yu]. (8.27)

Compactly, one can write

[Aext, B1, B2; C1, D11, D12; C2, D21, D22] = [Aext, Bext,w, Bext,u; Cext,z, Dext,zw, Dext,zu; Cext,y, Dext,yw, Dext,yu]. (8.28)

8.4.2 H∞ Solution
Once that one has the extended plant G(s) and the state space description of the system,
one can solve the optimization problem.

Simplified Case
Assuming
• C|ext,z Dext,zu = 0,
• Bext,w D|ext,yw = 0, i.e. process noise and sensor noise are uncorrelated,
• D|ext,zu Dext,zu = I,
• Dext,yw D|ext,yw = I,

find a controller C(s) such that kTzw k∞ < γ for γ > 0. It turns out that by simplifying
the problem, the solution has similarities with the one of LQG (state feedback). The
procedure to solve this problem is:


Kochrezept H∞ Control
A controller C(s) which satisfies the objective exists if and only if the conditions contained
in the different steps are fulfilled.

I) Fix a large value for γ.

II) Find the square, real matrix X∞ ≥ 0 which solves the algebraic Riccati equation

A|ext X∞ + X∞ Aext + C|ext,z Cext,z + X∞ ( (1/γ^2) Bext,w B|ext,w − Bext,u B|ext,u ) X∞ = 0 (8.29)

and such that it is stabilizing, i.e.

Re( λi( Aext + ( (1/γ^2) Bext,w B|ext,w − Bext,u B|ext,u ) X∞ ) ) < 0   ∀i, (8.30)

where λi(·) denotes the i-th eigenvalue.

III) Find the square, real matrix Y∞ ≥ 0 which solves the algebraic Riccati equation

Aext Y∞ + Y∞ A|ext + Bext,w B|ext,w + Y∞ ( (1/γ^2) C|ext,z Cext,z − C|ext,y Cext,y ) Y∞ = 0, (8.31)

and such that it is stabilizing, i.e.

Re( λi( Aext + Y∞ ( (1/γ^2) C|ext,z Cext,z − C|ext,y Cext,y ) ) ) < 0   ∀i, (8.32)

where λi(·) denotes the i-th eigenvalue.

IV) It must hold

γ^2 I − Y∞ X∞ > 0, (8.33)

or, equivalently,

max_i |λi(X∞ Y∞)| = ρ(X∞ Y∞) ≤ γ^2, (8.34)

where ρ denotes the spectral radius.

V) Reduce γ until no solution is found.

VI) If the resulting minimal γ∗ > 1, the feasibility conditions we introduced in the previous chapter are no longer valid. In order to make the problem feasible, one needs to relax the weights We(s), Wu(s), Wy(s). If the resulting minimal γ∗ ≤ 1, the result is acceptable. One can use the matrices X∞ and Y∞ to calculate the H∞ controller dynamics. Considering the extended state

x̂(t) = [x(t); xe(t); xu(t); xy(t)], (8.35)


one can write the controller dynamics to be

d/dt x̂(t) = A∞ x̂(t) − Z L y(t),
u(t) = F x̂(t), (8.36)

where

A∞ = Aext + (1/γ^2) Bext,w B|ext,w X∞ + Bext,u F + Z L Cext,y,
F = −B|ext,u X∞,
L = −Y∞ C|ext,y, (8.37)
Z = ( I − (1/γ^2) Y∞ X∞ )^{-1}.

Remark. In order to solve this kind of problems, a popular strategy is bisection. Let γ ∗
be the optimal solution. By maintaining lower and upper bounds γ− < γ ∗ < γ+ one uses
the following procedure:
1. Initialize γ− = 0 and γ+ = α, where α is the H∞ norm of the H2 optimal design
(LQG). Let K+ be the optimal LQG controller.
2. Let
γ ← (γ− + γ+)/2. (8.38)
Check if a controller exists such that ‖Tzw‖∞ < γ. If yes, set γ+ = γ and K+ to the controller just designed. If not, set γ− ← γ.
3. Repeat from step 2. until
γ+ − γ− < ε, (8.39)
where ε is a user-defined threshold.
4. Return K+ .

8.4.3 Feasibility Conditions


Conditions for the feasibility of the problem are
(a) The controllability of the extended plant G(s) must be verified: if there are non
controllable states, one needs to make sure that these states remain bounded. The
pair (Aext , B2 ) must be stabilizable.
(b) The extended plant G(s) must be fully observable: if there are unobservable states, one needs to make sure that these are stable. The pair (Cext,y, Aext) must be detectable.
(c) The four matrices

Dext,zu,   Dext,yw,   [Aext − jωI, Bext,u; Cext,z, Dext,zu],   [Aext − jωI, Bext,w; Cext,y, Dext,yw] (8.40)

must have full rank ∀ω.
(d) σ̄(Dext,zw ) < γ.


Example 58. Given an extended system of a SISO plant with performance output

[Ze(s); Zu(s); Zy(s)] = Tzw(s) R(s),   with   Tzw(s) = [We(s)S(s); Wu(s)C(s)S(s); Wy(s)T(s)], (8.41)

where

• We (s), Wu (s) and Wy (s) are weights for the corresponding sensitivities,

• S(s) is the sensitivity and T (s) is the complementary sensitivity,

• R(s) is the reference signal to the feedback loop.

The state space representation of the extended system is given as

ẋ(t) = Ax(t) + B1 w(t) + B2 ũ(t)


z(t) = C1 x(t) + D11 w(t) + D12 ũ(t) (8.42)
ỹ(t) = C2 x(t) + D21 w(t) + D22 ũ(t)

Note: The system matrix A and state vector x(t) correspond to the extended system state
and not only to the plant’s one.

Assume that an H∞-controller C(s) was found so that

‖ [We(s)S(s); Wu(s)C(s)S(s); Wy(s)T(s)] ‖∞ ≤ 1 (8.43)

holds.


a) Which of the following magnitude plots are possible for the given extended system?


 a)
 b)
 c)
 d)


b) Let We (s) = Ŝ(s)−1 where Ŝ(s) is a designer defined upper boundary for S(s).
Which magnitude plot of S(s) and Ŝ(s) could correspond to the given system?

 a)
 b)


Assume that all matrices in (8.42) are the same as the ones derived in the lecture. One can show that the eigenvalues of A are the eigenvalues of all separately considered subsystems. Assume for the next two subtasks that the conditions not considered here (summarized on slide 18, lecture 11) for well-posedness of the H∞-problem hold.

c) Given stabilizability to the extended system. There exists a solution to the problem
if all eigenvalues of the weight system matrices have negative real part.

 True.
 False.

d) Assume for the given extended system that σ̄(D11 ) = 1.15. There exists a solution.

 True.
 False.


Solution.

a) 3 a)

 b)
3 c)

 d)
From the lecture we know that if

‖ [We(s)S(s); Wu(s)C(s)S(s); Wy(s)T(s)] ‖∞ ≤ 1 (8.44)

holds, then each individual inequality holds as well. Therefore the Bode magnitude
plots of We (s)S(s), Wu (s)C(s)S(s) and Wy (s)T (s) must not exceed the 0dB-line.
We see that only the plots of a) and c) satisfy this condition.

b) 3 a)

 b)
Since ‖Ŝ(s)^{-1}S(s)‖∞ ≤ 1, the magnitude plot of S(s) always has to lie below the one of Ŝ(s). This means that the H∞-controller is exactly the solution of the minimization problem which leads to an S(s) satisfying the above condition.

c) 3 True.

 False.
From the lecture we know that a sufficient condition for well-posedness of the problem is (A, B2) stabilizable and (A, C2) detectable. Since C2 = [−Cs 0 0 0], where Cs corresponds to the plant's LTI representation, only the states of the plant can be observed. However, the states of the plant are not influenced by the remaining ones. In other words, there is no possibility to observe the weights' states. Therefore the poles of the weights have to be stable in order to have well-posedness.

d)  True.
3 False.

From the lecture we know that a necessary condition for well-posedness of the problem is σ̄(D11) < γ, where in our case γ = 1. Since σ̄(D11) = 1.15 > 1, the problem is not well-posed.


9 Elements of Nonlinear Control


9.1 Equilibrium Point and Linearization
A nonlinear system can be written as

ẋ(t) = f (x(t), u(t))


(9.1)
y(t) = g(x(t), u(t)),

where f (·) and g(·) are nonlinear functions. Recall that (xe , ue ) represents an equilibrium
point if and only if

0 = f (xe , ue , t)
(9.2)
ye = g(xe , ue , t).

As the analysis of the nonlinear system is often difficult, we previously considered such a
system in a neighbourhood of its equilibrium points. Mathematically, this translates into
considering the Taylor expansion of the functions f (·) and g(·) around the equilibrium
points of the system and neglecting high order terms. Let δx = x − xe and δu = u − ue .
It holds then

δẋ = f(xe + δx, ue + δu, t)
   = (∂f/∂x)|_{xe,ue} δx + (∂f/∂u)|_{xe,ue} δu + higher order terms (9.3)
   = A δx + B δu + higher order terms.

By proceeding analogously for g(·) and neglecting higher order terms, one gets

δẋ = A δx + B δu,
δy = C δx + D δu, (9.4)

where C = (∂g/∂x)|_{xe,ue} and D = (∂g/∂u)|_{xe,ue}.
xe ,ue xe ,ue
Remark.
• Note that in general, matrices A, B, C, D are time-varying. However, if f (·), g(·) do
not depend explicitly on time t, the linearized model will be time-invariant.

• δx, δu, δy describe a deviation from the equilibrium point. The linearized dynamics
are given by

x = xe + δx
y = ye + δy (9.5)
u = ue + δu.

9.2 Nominal Stability


During the course Control Systems I, you learned about different stability concepts. More-
over, you have learned the differences between internal and external stability: let’s recall
them here. Consider a generic nonlinear system defined by the dynamics

ẋ(t) = f (x(t)), t ∈ R, x(t) ∈ Rn , f : Rn × R → Rn . (9.6)


9.2.1 Internal/Lyapunov Stability


Internal stability, also called Lyapunov stability, characterises the stability of the trajec-
tories of a dynamic system subject to a perturbation near the to equilibrium. Let now
x̂ ∈ Rn be an equilibrium of system (9.6).
Definition 26. An equilibrium x̂ ∈ Rn is said to be Lyapunov stable if

∀ε > 0, ∃δ > 0 s.t. kx(0) − x̂k < δ ⇒ kx(t) − x̂k < ε. (9.7)

In words, an equilibrium point is said to be Lyapunov stable if for any bounded initial
condition and zero input, the state remains bounded.
Definition 27. An equilibrium x̂ ∈ Rn is said to be asymptotically stable in Ω ⊆ Rn if it
is Lyapunov stable and attractive, i.e. if

lim (x(t) − x̂) = 0, ∀x(0) ∈ Ω. (9.8)


t→∞

In words, an equilibrium is said to be asymptotically stable if, for any bounded initial
condition and zero input, the state converges to the equilibrium.
Definition 28. An equilibrium x̂ ∈ Rn is said to be unstable if it is not stable.
Remark. Note that stability is a property of the equilibrium and not of the system in
general.

9.2.2 External/BIBO Stability


External stability, also called BIBO stability (Bounded Input-Bounded Output), charac-
terises the stability of a dynamic system which for bounded inputs gives back bounded
outputs.
Definition 29. A signal s(t) is said to be bounded, if there exists a finite value B > 0
such that the signal magnitude never exceeds B, that is

|s(t)| ≤ B ∀t ∈ R. (9.9)

Definition 30. A system is said to be BIBO-stable if

ku(t)k ≤ ε ∀t ≥ 0, and x(0) = 0 ⇒ ky(t)k < δ ∀t ≥ 0, ε, δ ∈ R. (9.10)

In words, for any bounded input, the output remains bounded.

9.2.3 Stability for LTI Systems


Above, we focused on general nonlinear system. However, in Control Systems I you
learned that the output y(t) for a LTI system of the form

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t), (9.11)

can be written as

y(t) = C e^{At} x(0) + C ∫_0^t e^{A(t−τ)} B u(τ) dτ + Du(t). (9.12)


The transfer function relating input to output is a rational function

P(s) = C(sI − A)^{-1}B + D = (b_{n−1}s^{n−1} + b_{n−2}s^{n−2} + ... + b_0)/(s^n + a_{n−1}s^{n−1} + ... + a_0) + d. (9.13)
Furthermore, it holds:

• The zeros of the numerator of Equation (9.13) are the zeros of the system, i.e. the
values si which fulfill
P (si ) = 0. (9.14)

• The zeros of the denominator of Equation (9.13) are the poles of the system, i.e.
the values si which fulfill det(si I − A) = 0, or, in other words, the eigenvalues of A.

One can show, that the following Theorem holds:

Theorem 13. The equilibrium x̂ = 0 of a linear time invariant system is stable if and
only if the following two conditions are met:

1. For all λ ∈ σ(A), Re(λ) ≤ 0.

2. The algebraic and geometric multiplicity of all λ ∈ σ(A) such that Re(λ) = 0 are
equal.

Remark. For linear systems, the stability of an equilibrium point does not depend on the
point itself. For nonlinear systems, it does.

9.3 Local Stability


Let x = xe be an equilibrium for the autonomous nonlinear system

ẋ(t) = f (x(t)), (9.15)

where f : D → Rn is a continuously differentiable function and D is a neighborhood of


xe. Let

A = ∂f/∂x |_{x=xe}. (9.16)

Then:

1. xe is asymptotically stable if Re(λi) < 0 for all eigenvalues of A.

2. xe is unstable if Re(λi ) > 0 for one or more of the eigenvalues of A.

It holds:

• In linear systems, local stability ⇔ global stability.

• In nonlinear systems, this is not true.


9.3.1 Region of Attraction


Definition 31. A function f : Ω ⊆ Rn → Rm is said to be Lipschitz on Ω if for some K ≥ 0 it holds

‖f(x) − f(y)‖ / ‖x − y‖ ≤ K,   ∀x, y ∈ Ω. (9.17)
Definition 32. Let xe be an asymptotically stable equilibrium point of the system ẋ(t) =
f (x(t)), where f (·) is a locally Lipschitz function defined over a domain D ⊂ Rn and xe is
contained in D. The region of attraction (also known as region of asymptotic stability,
domain of attraction) is the set of all points x0 ∈ D such that the solution of

ẋ(t) = f (x(t)), x(0) = x0 (9.18)

is defined for all t ≥ 0 and converges to xe as t → ∞. Note that xe is said to be globally


asymptotically stable if the region of attraction is the whole space Rn .

9.4 Lyapunov Stability


9.4.1 Lyapunov Principle - General Systems
1. The Lyapunov Principle is valid for all finite-order systems: as long as the linearized
system has no eigenvalues on the imaginary axis.

2. The local stability properties of an arbitrary-order nonlinear system are fully un-
derstood once the eigenvalues of the linearization are known.

3. Particularly, if the linearization of a nonlinear system around an isolated equilib-


rium point xe is asymptotically stable (or unstable), then this equilibrium is an
asymptotically stable (or unstable) equilibrium of the nonlinear system as well. We
can but not say that this holds also for the concept of stable system (Re(λ) = 0).

If we are interested in non-local results or in the case of stable systems, we should use the
Lyapunov’s direct method.
A scalar function α(p) with α : R+ → R+ is a nondecreasing function if α(0) = 0 and α(p) ≥ α(q) ∀p > q. A function V : Rn+1 → R is a candidate global Lyapunov function if
• the function is strictly positive, i.e., V(x, t) > 0 ∀x ≠ 0, ∀t and V(0) = 0, and
• there are two nondecreasing functions α and β which satisfy the inequalities
β(‖x‖) ≤ V(x, t) ≤ α(‖x‖). (9.19)

Remark. If these conditions are not met, only local assumptions can be made.
Theorem 14. The system

ẋ(t) = f (x(t), t), x(t0 ) = x0 6= 0, (9.20)

is globally/locally stable in the sense of Lyapunov if there is a global/local Lyapunov


function candidate V (x, t) for which the following inequality holds ∀x(t) 6= 0 and ∀t:
V̇(x(t), t) = ∂V(x, t)/∂t + ∂V(x, t)/∂x · f(x(t), t) ≤ 0. (9.21)

Theorem 15. The same system is globally/locally asymptotically stable if there is a


global/local Lyapunov function candidate V (x, t) such that −V̇ (x(t), t) satisfies all con-
ditions of a global/local Lyapunov function candidate.
Remark. In general it is difficult to find suitable functions. A good way to approach the
problem is to use physical laws (Lyapunov functions can be seen as generalized energy
functions).
For linear systems one can find the Lyapunov function
V (x(t)) = x(t)| · P · x(t), P = P | > 0, (9.22)
where P is the solution of the Lyapunov equation
P A + A| P = −Q. (9.23)
For arbitrary Q = Q| > 0, a solution to this equation exists if and only if A is a Hurwitz
matrix.
Remark. Lyapunov theorems provide sufficient but not necessary conditions!
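For linear systems this computation is a one-liner. A minimal sketch (the matrices A and Q are illustrative; note that MATLAB's lyap(A,Q) solves AX + XA| + Q = 0, so the transposed matrix has to be passed to match Equation 9.23):

    A = [0 1; -2 -3];        % illustrative Hurwitz matrix
    Q = eye(2);
    P = lyap(A', Q);         % solves P*A + A'*P = -Q
    eig(P)                   % positive eigenvalues: V(x) = x'*P*x is a Lyapunov function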
Example 59. Consider the nonlinear system described by the following differential equa-
tions:
ẋ1 = x1 x22
(9.24)
ẋ2 = x21 x2 + 2x32 − 6x2 .
a) Linearize the system around the equilibrium x1,e = x2,e = 0 and find matrix A.
b) Can you say something about the stability of the nonlinear system?
c) Evaluate the stability of the nonlinear system using the Lyapunov function V = (1/2)(x1^2 + x2^2) and find the region of attraction about the equilibrium point.
Solution.
1. The linearization matrix A reads

A = [∂(x1x2^2)/∂x1, ∂(x1x2^2)/∂x2; ∂(x1^2x2 + 2x2^3 − 6x2)/∂x1, ∂(x1^2x2 + 2x2^3 − 6x2)/∂x2]|_{(0,0)}
  = [x2^2, 2x1x2; 2x1x2, x1^2 + 6x2^2 − 6]|_{(0,0)} (9.25)
  = [0 0; 0 −6].

2. The eigenvalues of matrix A are λ1 = 0 and λ2 = −6. Using the Lyapunov principle,
we cannot evaluate the stability of the nonlinear system, since the linearized one is
just stable around the equilibrium.
3. The derivative of the Lyapunov function reads

V̇ = x1ẋ1 + x2ẋ2
  = x1·(x1x2^2) + x2·(x1^2x2 + 2x2^3 − 6x2)
  = x1^2x2^2 + x1^2x2^2 + 2x2^4 − 6x2^2 (9.26)
  = 2x2^2·(x1^2 + x2^2 − 3).

In order for V̇ to be negative definite, it must hold x1^2 + x2^2 < 3.


9.5 Gain Scheduling


As for most systems stability is guaranteed in some neighborhood of the equilibrium point,
we are limited when we design a stabilizing controller. A first method to overcome this
problem could be to stabilize the system around each equilibrium point and to design a
local controller to get stability. The procedure can be defined as
I) Given the nonlinear system
ẋ(t) = f (x(t), u(t)), (9.27)
choose n equilibrium points (xe,i , ue,i ), i = 1, . . . , n.
II) For each of these equilibria, linearize the system and design a local control law
ul (x(t)) = ul,e − K(x(t) − xl,e ) (9.28)
for the linearization.
III) The global control law consists of:
• Choosing the correct control law, as a function of the state: i = σ(x).
• Use that control law: u(x) = uσ(x) (x).

9.6 Feedback Linearization


9.6.1 Input-State Feedback Linearization
Input-state feedback linearization is the ability to use feedback to convert a nonlinear
state equation into a linear state equation by canceling nonlinearities. This requires the
nonlinear state equation to have the structure
ẋ(t) = Ax(t) + Bβ −1 (x(t)) [u(t) − α(x(t))] ,
(9.29)
y(t) = h(x(t)).
where
• The pair (A, B) is controllable.

α : Rn → Rp
(9.30)
β : Rn → Rp×p
are defined on the domain Dx ⊂ Rn , which contains the origin.
• The matrix β(x(t)) is assumed to be invertible ∀x ∈ Dx .
If the system is in the form presented in Equation 9.29, one can linearize it using the
feedback law
u(t) = α(x(t)) + β(x(t))v(t). (9.31)
Remark. The form presented in Equation 9.31 has a specific meaning. In fact, it holds
ẋ(t) = Ax(t) + Bβ −1 (x(t)) [u(t) − α(x(t))]
= Ax(t) + Bβ −1 (x(t)) [α(x(t)) + β(x(t))v(t) − α(x(t))] (9.32)
= Ax(t) + Bv(t),
where v(t) can be chosen with respect to the design constraints. This allows to linearize
the dynamics of the system.


9.6.2 Input-State Linearizability


Let z(x(t)) = T (x(t)) be a change of variables (also known as bijection). If both T and
T −1 are continuously differentiable, we call it a diffeomorphism. A nonlinear system
ẋ(t) = f (x(t)) + Γ(x(t))u(t), (9.33)
where
f : Dx → Rn (9.34)
and
Γ : Dx → Rp×p (9.35)
are sufficiently smooth on a domain Dx ⊂ Rn , is said to be input-state linearizable if
there exists a diffeomorphism
T : Dx ⊂ Rn (9.36)
such that
Dz = T (Dx ) (9.37)
contains the origin and the change of variables z(x(t)) = T (x(t)) transforms the system
into the form
ż(x(t)) = Az(x(t)) + Bβ −1 (x(t)) [u(t) − α(x(t))] , (9.38)
with (A, B) controllable and β(x(t)) invertible for all x ∈ Dx .

Conditions for Linearizability - General Case


But when is this the case? In general, holds
∂T
ż(x(t)) = ẋ(t)
∂x (9.39)
∂T
= [f (x(t)) + Γ(x(t))u(t)] .
∂x
On the other hand, one can also write
ż(t) = Az(t) + Bβ −1 (x(t)) [u(t) − α(x(t))]
(9.40)
= AT (x(t)) + Bβ −1 (x(t)) [u(t) − α(x(t))] .
Using Equation 9.39 and Equation 9.40, one can write the general equality which must
hold for all x(t) and u(t) in the domain of interest:
∂T
[f (x(t)) + Γ(x(t))u(t)] = AT (x(t)) + Bβ −1 (x(t)) [u(t) − α(x(t))] . (9.41)
∂x
If one sets u(t) = 0, one can split the equation into two:
∂T
f (x(t)) = AT (x(t)) − Bβ −1 (x(t))α(x(t))
∂x (9.42)
∂T
Γ(x(t)) = Bβ −1 (x(t)).
∂x
Each correct transformation T (·) must satisfy the partial differential equations given in
Equation 9.42.
Having T (x(t)) which fulfills these partial differential equations is a necessary and suf-
ficient conditions that a transformation from the form in Equation 9.33 to the form in
Equation 9.38 exists.


Conditions for Linearizability - Single Input


With a single input (p = 1), one can define a linear transformation ξ(x(t)) = M z(x(t))
with M invertible and write
ξ˙ = M AM −1 ξ + M Bβ −1 (x(t)) [u(t) − α(x(t))] . (9.43)
We choose M such that the controller canonical form can be written as

[Ac + Bcγ|, Bc; Cc, Dc] =
[  0     1     0    ...   0        0
   0     0     1    ...   0        0
   ...
   0     0     0    ...   1        0
  −γ0   −γ1   ...   ...  −γ_{n−1}  1
   c0   ...    cm    0 ... 0       0 ]. (9.44)
This means
M AM −1 = Ac + Bc γ | (9.45)
and
M B = Bc . (9.46)
The term
Bc γ | ξ = Bc γ | M T (x(t)) (9.47)
is included into the nonlinearity
Bc β −1 (x(t))α(x(t)), (9.48)
which allows to reformulate the partial differential equations as
 
T2 (x)
 T3 (x) 
 
Ac T (x(t)) − Bc β −1 (x(t))α(x(t)) =  ...  (9.49)
 
 
Tn−1 (x)
Tn (x)
and  
0
 0 
..
 
Bc β −1 (x(t)) =  . (9.50)
 
.
 
 0 
1
β(x(t))
Finally, one can write
∂T1
f (x(t)) = T2 (x(t))
∂x(t)
∂T2
f (x(t)) = T3 (x(t))
∂x(t)
.. (9.51)
.
∂Tn−1
f (x(t)) = Tn (x(t))
∂x(t)
∂Tn α(x(t))
f (x(t)) = −
∂x β(x(t))


and
∂T1
γ(x(t)) = 0
∂x
∂T2
γ(x(t)) = 0
∂x
..
. (9.52)
∂Tn−1
γ(x(t)) = 0
∂x
∂Tn 1
γ(x(t)) =
∂x β(x(t))

9.7 Examples
Example 60.
a) Consider the continuous-time system

ẋ(t) = 0.5x(t), x(t) ∈ R, (9.53)

and the test function


V (x) = 2x. (9.54)
Which of the following statements is true?

 V (x) is a Lyapunov function for this system and therefore the system is asymp-
totically stable.
 V (x) is not a Lyapunov function for this system and therefore the system is
not stable.
 V (x) is not a Lyapunov function for this system. Furthermore, given this
information, we cannot conclude anything about the stability of the system.

b) Which of the following functions are positive definite

 V (x) = x1 (t)2 + x2 (t)2 .


 V (x) = x1 (t)2 .
 V (x) = (x1 (t) + x2 (t))2 .
 V (x) = −x1 (t)2 − (3x1 (t) + 2x2 (t))2 .
 V (x) = x1 (t)x2 (t) + x2 (t)2 .
 V (x) = x1(t)^2 + 2x2(t)^2/(1 + x2(t)^2).

c) You are given the nonlinear system

ẋ1(t) = x1(t)x2(t)^2,
ẋ2(t) = x1(t)^2x2(t) + 2x2(t)^3 − 6x2(t). (9.55)

Evaluate the stability of the origin using the Lyapunov function

(1/2)(x1(t)^2 + x2(t)^2). (9.56)

 The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ 3}.
 The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ √3}.
 The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ 2}.
 The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ √2}.
 None of the above.


Solution.

a)

 V (x) is a Lyapunov function for this system and therefore the system is asymp-
totically stable.
 V (x) is not a Lyapunov function for this system and therefore the system is
not stable.
3 V (x) is not a Lyapunov function for this system. Furthermore, given this

information, we cannot conclude anything about the stability of the system.

Solution: The test function is not a Lyapunov function. One can verify this by
observing that:

• V (x) is not a positive definite function or



V̇(x) = (dV/dx)·(dx(t)/dt) = (∂V/∂x)·ẋ(t) = 2·0.5·x = x

is not a negative definite function.

Since V (x) is not a Lyapunov function, we cannot conclude anything about the
stability of the system. Moreover, we know that the system is unstable only from
the positive eigenvalue λ1 = 0.5, and not from V (x).

b)

3 V1 (x(t)) = x1 (t)2 + x2 (t)2 .



 V2 (x(t)) = x1 (t)2 .
 V3 (x(t)) = (x1 (t) + x2 (t))2 .
 V4 (x(t)) = −x1 (t)2 − (3x1 (t) + 2x2 (t))2 .
 V5 (x(t)) = x1 (t)x2 (t) + x2 (t)2 .

Solution:

• V1 (x(t)) > 0 ∀x(t) 6= 0 and V1 (x(t)) = 0 if x = 0.


• V2(x(t)) ≥ 0 ∀x(t) and V2(x(t)) = 0 whenever x1(t) = 0, even for x2(t) ≠ 0, which makes V2(x(t)) only positive semi-definite.
• V3 (x(t)) ≥ 0 ∀x(t), but can be 0 as soon as x1 (t) = −x2 (t).
• V4 (x(t)) < 0 ∀x(t) 6= 0.
• As soon as x1 (t)x2 (t) < x2 (t)2 , V5 (x(t)) < 0.

c)  The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ 3}.
 The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ √3}.
 The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ 2}.
 The largest region of attraction of the system is {x(t) ∈ R^2 | x1(t)^2 + x2(t)^2 ≤ √2}.
3 None of the above.

Solution: It holds

V̇(x1(t), x2(t)) = (∂V/∂x1)(∂x1/∂t) + (∂V/∂x2)(∂x2/∂t)
               = x1(t)·( x1(t)x2(t)^2 ) + x2(t)·( x1(t)^2x2(t) + 2x2(t)^3 − 6x2(t) )
               = 2x1(t)^2x2(t)^2 + 2x2(t)^4 − 6x2(t)^2 (9.58)
               = 2x2(t)^2·( x1(t)^2 + x2(t)^2 − 3 ).


In order to find the region of attraction for which the system is asymptotically
stable, V̇ (x) must be negative definite. This is the case if

{x(t) ∈ R2 |x1 (t)2 + x2 (t)2 < 3}. (9.59)

This ensures that the region of attraction for the origin is at least the one presented
in Equation 9.59. However, the choice of another Lyapunov function could result
in a larger region of attraction. This explains why none of the first four answers is
correct.


Example 61. You are given the system

ẋ(t) = f (x(t)) + gu(t), (9.60)

with

f(x(t)) = [x2(t); −a sin(x1(t)) − b(x1(t) − x3(t)); x4(t); c(x1(t) − x3(t))],   g = [0; 0; 0; d], (9.61)
where a, b, c and d are positive constants. We want to find a diffeomorphism such that
T1 (x(t)) fulfills:
(∂Ti/∂x)·g = 0, i = 1, 2, 3;     (∂T4/∂x)·g ≠ 0. (9.62)
The system has clearly an equilibrium point at x = 0. From the first condition
∂T1
g = 0, (9.63)
∂x
one knows that
∂T1
d = 0. (9.64)
∂x4
This means that one must choose T1 (x(t)) independent of x4 (t). Using this, one can write
∂T1 ∂T1 ∂T1
T2 (x(t)) = x2 (t) + (−a sin(x1 (t)) − b(x1 (t) − x3 (t)) + x4 (t). (9.65)
∂x1 ∂x2 ∂x3
From the second condition
∂T2
g = 0, (9.66)
∂x
one knows that
∂T1
= 0. (9.67)
∂x4
This implies
∂T2 ∂T1
=0⇒ = 0. (9.68)
∂x4 ∂x3
T1 (x(t)) needs to be independent of x3 (t) and hence
∂T1 ∂T1
T2 (x(t)) = x2 (t) + (−a sin(x1 (t)) − b(x1 (t) − x3 (t)) , (9.69)
∂x1 ∂x2
and
∂T2 ∂T2 ∂T2
T3 (x(t)) = x2 (t) + (−a sin(x1 (t)) − b(x1 (t) − x3 (t)) + x4 (t). (9.70)
∂x1 ∂x2 ∂x3
From the third condition
∂T3
g = 0, (9.71)
∂x
one knows that
∂T3
= 0. (9.72)
∂x4
This implies
∂T3 ∂T2 ∂T1
=0⇒ =0⇒ = 0. (9.73)
∂x4 ∂x3 ∂x2


T1 (x(t)) needs to be independent of x2 (t) and hence

∂T3 ∂T3 ∂T3


T4 (x(t)) = x2 (t) + (−a sin(x1 (t)) − b(x1 (t) − x3 (t)) + x4 (t). (9.74)
∂x1 ∂x2 ∂x3
The last condition
∂T4
g 6= 0 (9.75)
∂x
is satisfied if
∂T3 ∂T2 ∂T1
=0⇒ =0⇒ = 0. (9.76)
∂x3 ∂x2 ∂x1
With T1 (x(t)) = x1 (t), one can write

z1 (x(t)) = T1 (x(t)) = x1 (t)


z2 (x(t)) = T2 (x(t)) = x2 (t)
(9.77)
z3 (x(t)) = T3 (x(t)) = −a sin(x1 (t)) − b(x1 (t) − x3 (t))
z4 (x(t)) = T4 (x(t)) = −ax2 (t) cos(x1 (t)) − b(x2 (t) − x4 (t)).


Example 62. Your SpaghETH startup, which cooks pasta on the polyterrasse everyday,
is growing every week more and although no particular production issues occur you are
concerned about ecology. Since each tank of pasta you cook needs water and a correct
salt seasoning for it to taste that delicious, you need a lot of salt and water, which are
often wasted. For this reason, you open a research branch in your startup which decides
to design a duct-hydraulic system to counteract the waste of water and salt. The idea
is to use a two water tank system, which helps you seasoning the water and changing it,
without substituting the whole pot. The dynamics of the system are given by
p
ẋ1 (t) = 1 + u(t) − 1 + x1 (t)
p p
ẋ2 (t) = 1 + x1 (t) − 1 + x2 (t) (9.78)
y = x2 (t).
a) Linearize the nonlinear system around the equilibrium

[x1,eq, x2,eq, ueq] = [3, 3, 1]. (9.79)

b) Determine the coordinate transformation such that the system can be written in
the form
ż1 (t) = z2 (t)
ż2 (t) = α(z) + β(z)u(t) (9.80)
y(t) = z1 (t).

c) Find a feedback control law by exactly linearizing the system.

Figure 63: Sketch of the two-tank system, with inflow u(t), upper tank level x1(t) (salt seasoning) and lower tank level x2(t).


Solution.
a) It holds

A = [ −1/(2√(1 + x1(t))), 0;  1/(2√(1 + x1(t))), −1/(2√(1 + x2(t))) ] |_{x1,eq = x2,eq = 3}
  = [ −1/4, 0;  1/4, −1/4 ],
B = [1; 0], (9.81)
C = [0 1],
D = 0.

b) By choosing the states

z(t) = [z1(t); z2(t)] = [y(t); ẏ(t)], (9.82)

one gets

ż1(t) = z2(t),
ż2(t) = d/dt ẏ(t) = d/dt ẋ2(t)
      = 1/(2√(1 + x1(t))) ẋ1(t) − 1/(2√(1 + x2(t))) ẋ2(t)
      = 1/(2√(1 + x1(t))) (1 + u(t) − √(1 + x1(t))) − 1/(2√(1 + x2(t))) (√(1 + x1(t)) − √(1 + x2(t)))
      = (1/2)( 1/√(1 + x1(t)) − √(1 + x1(t))/√(1 + x2(t)) ) + u(t)/(2√(1 + x1(t))). (9.83)

Furthermore, we know

z1(t) = y(t) = x2(t),
z2(t) = ẏ(t) = ẋ2(t) = √(1 + x1(t)) − √(1 + x2(t)), (9.84)

from which it follows

x2(t) = z1(t),
√(1 + x1(t)) = z2(t) + √(1 + z1(t)). (9.85)

Plugging Equation 9.85 into Equation 9.83 results in

ż1(t) = z2(t),
ż2(t) = (1/2)( 1/(z2(t) + √(1 + z1(t))) − (z2(t) + √(1 + z1(t)))/√(1 + z1(t)) ) + u(t)/(2(z2(t) + √(1 + z1(t))))
      = α(z(t)) + β(z(t))u(t). (9.86)


c) With the form obtained in Equation 9.86, one can write

[ż1(t); ż2(t)] = [0 1; 0 0] [z1(t); z2(t)] + [0; 1] v(t), (9.87)

where

u(t) = (1/β(z(t))) (v(t) − α(z(t)))
     = 2(z2(t) + √(1 + z1(t))) · ( v(t) − (1/2)( 1/(z2(t) + √(1 + z1(t))) − (z2(t) + √(1 + z1(t)))/√(1 + z1(t)) ) ). (9.88)
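A minimal simulation sketch of this exact linearization (the outer-loop gains k1, k2 and the reference level are illustrative choices; the control law is the one of Equation 9.88, with z expressed through the states via Equation 9.84):

    k1 = 1; k2 = 2; z1_ref = 3;                 % illustrative outer-loop design
    alpha = @(z) 0.5*(1./(z(2)+sqrt(1+z(1))) - (z(2)+sqrt(1+z(1)))./sqrt(1+z(1)));
    beta  = @(z) 1./(2*(z(2)+sqrt(1+z(1))));
    v     = @(z) -k1*(z(1)-z1_ref) - k2*z(2);   % linear law for the chain of integrators
    u     = @(x) (v([x(2); sqrt(1+x(1))-sqrt(1+x(2))]) ...
                  - alpha([x(2); sqrt(1+x(1))-sqrt(1+x(2))])) ...
                 / beta([x(2); sqrt(1+x(1))-sqrt(1+x(2))]);
    f     = @(t,x) [1 + u(x) - sqrt(1+x(1)); sqrt(1+x(1)) - sqrt(1+x(2))];
    [t, x] = ode45(f, [0 20], [1; 1]);          % x2(t) should converge to z1_ref
    plot(t, x)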


Example 63. You are given the system


   
[ẋ1(t); ẋ2(t)] = [a sin(x2(t)); −x1(t)^2 + u(t)]. (9.89)

Use the linearizability conditions for single-input systems to find the transformation (diffeomorphism) z(x(t)) = T(x(t)).


Solution. We identify the system to be of the form

[ẋ1(t); ẋ2(t)] = [a sin(x2(t)); −x1(t)^2] + [0; 1] u(t) = f(x(t)) + γ u(t). (9.90)

From this we deduce

γ = [0; 1]. (9.91)
The first condition on T(x(t)) implies

(∂T1/∂x)·γ(x(t)) = 0
[∂T1/∂x1, ∂T1/∂x2]·[0; 1] = 0 (9.92)
∂T1/∂x2 = 0.
From the very definition and the result of Equation 9.92, one gets

T2(x(t)) = [∂T1/∂x1, ∂T1/∂x2] f(x(t))
         = (∂T1/∂x1) a sin(x2(t)) − (∂T1/∂x2) x1(t)^2 (9.93)
         = (∂T1/∂x1) a sin(x2(t)),

where the second term vanishes because ∂T1/∂x2 = 0.
The second condition on T(x(t)) implies

(∂T2/∂x)·γ(x(t)) = ∂/∂x( (∂T1/∂x1) a sin(x2(t)) ) · [0; 1]
                 = [ ∂/∂x1( (∂T1/∂x1) a sin(x2(t)) ),  ∂/∂x2( (∂T1/∂x1) a sin(x2(t)) ) ] · [0; 1]
                 = ∂/∂x2( (∂T1/∂x1) a sin(x2(t)) ) (9.94)
                 = (∂T1/∂x1) a cos(x2(t)) ≠ 0,

where the last step uses the fact that T1 is independent of x2.
In order for Equation 9.94 to hold, cos(x2(t)) ≠ 0 and ∂T1/∂x1 ≠ 0. Choosing z1(x(t)) = T1(x(t)) = x1(t) results in the diffeomorphism

z1(x(t)) = x1(t),
z2(x(t)) = a sin(x2(t)) = ẋ1(t). (9.95)
Note that this is not the only possible choice. Choosing
z1 (x(t)) = x1 (t) + x1 (t)3 (9.96)
would result in
z2 (x(t)) = ẋ1 (t) + 3x1 (t)2 ẋ1 (t)
(9.97)
= a sin(x2 (t)) + 3x1 (t)2 a sin(x2 (t)).


A Linear Algebra
A.1 Matrix-Inversion

A^{-1} = (1/det(A)) · adj(A),   {adj(A)}ij = (−1)^{i+j} · det(Aij) (A.1)

Special Cases:

• n = 2:
A = [a b; c d] ⇒ A^{-1} = 1/(a·d − b·c) · [d −b; −c a] (A.2)

• n = 3:
A = [a b c; d e f; g h i] ⇒ A^{-1} = 1/det(A) · [e·i − f·h, c·h − b·i, b·f − c·e; f·g − d·i, a·i − c·g, c·d − a·f; d·h − e·g, b·g − a·h, a·e − b·d] (A.3)

• (a + b)^3 = a^3 + 3a^2·b + 3a·b^2 + b^3

A.2 Differentiation with Matrices

d/dx (A·x) = A^T
d/dx (x^T·A·x) = (A^T + A)·x

A.3 Matrix Inversion Lemma

[M + v·v^T]^{-1} = M^{-1} − 1/(1 + v^T·M^{-1}·v) · M^{-1}·v·v^T·M^{-1}


B Rules
B.1 Trigo

α[°]:     0     30     45     60     90    120    180
α[rad]:   0     π/6    π/4    π/3    π/2   2π/3   π
sin(α):   0     1/2    √2/2   √3/2   1     √3/2   0
cos(α):   1     √3/2   √2/2   1/2    0     −1/2   −1
tan(α):   0     √3/3   1      √3     ±∞    −√3    0
cot(α):   ±∞    √3     1      √3/3   0     −√3/3  ±∞

B.2 Euler-Forms
eix = cos(x) + i · sin(x)
a + i · b = |a + i · b| · ei·∠(a+i·b)
1 ix
sin(x) = (e − e−ix )
2i
1
cos(x) = (eix + e−ix )
2

B.3 Derivatives

(log_a|x|)' = (1/x)·log_a(e) = 1/(x·ln a)
(a^{cx})' = (c·ln a)·a^{cx}
(tan x)' = 1/cos^2(x) = 1 + tan^2(x)
(arcsin x)' = 1/√(1 − x^2)
(arccos x)' = −1/√(1 − x^2)
(arctan x)' = 1/(1 + x^2)

B.4 Logarithms
C · ln|y| = ln|y^C|
−ln|r| = ln|r^{-1}|
ln(1) = log(1) = 0


B.5 Magnitude and Phase


In the Bode diagram, magnitude and phase are separate from each other. Magnitude and phase for complex fractions are:

|(a + i·b)/(c + i·d)| = √( (a^2 + b^2)/(c^2 + d^2) )
∠((a + i·b)/(c + i·d)) = arg((a + i·b)/(c + i·d)) = arctan( (bc − ad)/(ac + bd) )
arg{a + i·b} = arctan(b/a)
arg{(a + i·b)^c} = c · arg{a + i·b}
arg{c/(a + i·b)} = arg{c} − arg{a + i·b}

B.6 dB-Scale
Typically the unit for the magnitude is dB:

|Σ(jω)|dB = 20 · log10 |Σ(jω)|


|Σ(jω)| = 10^(|Σ(jω)|dB / 20)
(1/X)|dB = −X|dB
(X · Y)|dB = X|dB + Y|dB

Value:  0.001  0.01  0.1  0.5   1/√2  1  √2   2    10  100
dB:     −60    −40   −20  ≈ −6  ≈ −3  0  ≈ 3  ≈ 6  20  40


C MATLAB
C.1 General Commands

Command Description
A(i,j) Element of A in position i (row) and j (column)
abs(X) Magnitude of all elements of X
angle(X) Phase of all elements of X
X’ Complex conjugate and transpose of X
X.’ Transpose, not complex conjugate of X
conj(X) Complex conjugate of all elements of X
real(X) Real part of all elements of X
imag(X) Imaginary part of all elements of X
eig(A) Eigenvalues of A
[V,D]=eig(A) Eigenvalues D (diagonal elements), eigenvectors V (column vectors)
s=svd(A) singular values of A
[U,Sigma,V]=svd(A) Singular Values Decomposition of A
rank(A) Rank of A
det(A) Determinant of A
inv(A) Inverse of A
diag([a1,...,an]) Diagonalmatrix with a1,...,an as diagonal elements
zeros(x,y) Zero matrix of dimension x×y
zeros(x) Zero matrix of dimension x×x
eye(x,y) Identity matrix of dimension x×y
eye(x) Identity matrix of dimension x×x
ones(x,y) One-Matrix (all elements = 1) of dimension x×y
ones(x) One-Matrix (all elements = 1) of dimension x×x
max(A) Largest element in vector A (A Matrix: Max in column vectors)
min(A) Smallest element in vector A (A Matrix: Max in column vectors)
sum(A) Sum of elements of A (A Matrix: Sum row pro row)
dim=size(A) Dimension of A (size=[#rows #columns])
dim=size(A,a) a=1: dim=#rows, a=2: dim=#columns, sonst dim=1
t=a:i:b t=[a,a+i,a+2i,...,b-i,b] (row vector)
y=linspace(a,b) row vector with 100 “linear-spaced” points in range [a,b]
y=linspace(a,b,n) row vector with n “linear-spaced” points in range [a,b]
y=logspace(a,b) row vector with 50 “logarithmically-spaced” points in range [10^a,10^b]
y=logspace(a,b,n) row vectors with n “logarithmically-spaced” points in range [10^a,10^b]
I=find(A) I: indices of the nonzero elements of A
disp(A) Prints A on screen (for a string: disp('name'))
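A short sketch combining some of these commands (the matrix is chosen arbitrarily):

A = [2 1; 0 3];
lambda = eig(A);             % eigenvalues: 2 and 3
r = rank(A);                 % 2, i.e. full rank
s = svd(A);                  % singular values of A
t = 0:0.1:5;                 % time vector from 0 to 5 with step 0.1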


C.2 Control Systems Commands

Command Description
sys=ss(A,B,C,D) State-Space M. with A,B,C,D in time domain
sys=ss(A,B,C,D,Ts) State-Space M. with A,B,C,D and sampling Ts (discrete-time)
sys=zpk(Z,P,K) Zero-pole-gain model with zeros Z, poles P and gain K
sys=zpk(Z,P,K,Ts) Zero-pole-gain model with zeros Z, poles P, gain K and sampling time Ts
sys=tf([bm ...b0],[an ...a0]) Transfer function with coefficients bm...b0 in the numerator and an...a0 in the denominator
P=tf(sys) Transfer function of sys
P.iodelay=... Adds an input/output delay to P
pole(sys) Poles of the system
zero(sys) Zeros of the system
[z,p,k]=zpkdata(sys) z: zeros, p: poles, k: gain
ctrb(sys) or ctrb(A,b) Controllability Matrix
obsv(sys) or obsv(A,c) Observability Matrix
series(sys1,sys2) series of sys1 and sys2
feedback(sys1,sys2) sys1 with sys2 as (negative) Feedback
[Gm,Pm,Wgm,Wpm]=margin(sys) Gm: gain margin, Pm: phase margin, Wgm/Wpm: the corresponding crossover frequencies
[y,t]=step(sys,Tend) y: step response of sys until Tend, t: time
[y,t]=impulse(sys,Tend) y: impulse response of sys until Tend, t: time
y=lsim(sys,u,t) Simulation of sys with input u over the time vector t
sim('Simulink model',Tend) Simulation of the Simulink model until Tend
p0=dcgain(sys) Static gain P(0)
K=lqr(A,B,Q,R) Gain matrix K (solution of the LQR problem)
[X,L,K]=care(A,B,Q) X: solution of the Riccati equation, L: closed-loop eigenvalues, K: gain matrix
Paug=augw(G,W1,W3,W2) Augmented plant (state-space) for H∞ synthesis with weights W1, W2, W3
[K,Cl,gamma]=hinfsyn(Paug) H∞ synthesis: K: controller, Cl: closed-loop system, gamma: achieved performance level
fr=evalfr(sys,f) sys evaluated at s = f
sysd=c2d(sys,Ts,method) Discretization of sys using the given method with sampling time Ts
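A short workflow sketch using some of these commands (the plant and controller are arbitrary examples for illustration):

P = tf(1,[1 2 1]);                  % plant P(s) = 1/(s^2 + 2s + 1)
C = tf([2 1],[1 0]);                % PI controller C(s) = (2s + 1)/s
L = series(C,P);                    % open loop L(s) = P(s)*C(s)
T = feedback(L,1);                  % closed loop with unity negative feedback
[Gm,Pm,Wgm,Wpm] = margin(L);        % gain and phase margins of the open loop
Ld = c2d(L,0.1,'zoh');              % discretization with Ts = 0.1 s (zero-order hold)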


C.3 Plot and Diagrams

Command Description
nyquist(sys) Nyquist diagram of the system sys
nyquist(sys,{a,b}) Nyquist diagram of the system sys in the frequency interval [a,b]
bode(sys) Bode diagram of the system sys
bode(sys,{a,b}) Bode diagram of the system sys in the frequency interval [a,b]
bodemag(sys) Bode diagram (magnitude only) of the system sys
bodemag(sys,{a,b}) Bode diagram (magnitude only) of the system sys in the frequency interval [a,b]
rlocus(sys) Root Locus diagram
impulse(sys) Impulse Response of the system sys
step(sys) Step response of the system sys
pzmap(sys) Pole-zero map of the system sys
sigma(sys) Singular value plot (over frequency) of the system sys
plot(X,Y) Plot of Y as function of X
plot(X,Y,...,Xn,Yn) Plot of Yn as function of Xn (for all n)
stem(X,Y) Discrete plot of Y as function of X
stem(X,Y,...,Xn,Yn) Discrete plot of Yn as function of Xn (for all n)
xlabel(’name’) Name of the x-Axis
ylabel(’name’) Name of the y-Axis
title(’name’) Title of the plot
xlim([a b]) Range for the x-Axis (Plot between a and b)
ylim([a b]) Range for the y-Axis (Plot between a and b)
grid on Grid
legend(’name1’,...,’name’) Legend
subplot(m,n,p) Grid m×n, Plot in Position p
semilogx(X,Y) Plot with logarithmic x-axis and linear y-axis
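A minimal plotting sketch (the first-order system is chosen only for illustration):

sys = tf(1,[1 1]);                        % first-order system 1/(s + 1)
figure; bode(sys); grid on;               % Bode diagram in its own figure
figure;
subplot(2,1,1); step(sys);    grid on; title('Step response');
subplot(2,1,2); impulse(sys); grid on; title('Impulse response');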

