µ-Analysis and Synthesis Toolbox
For Use with MATLAB ®
Gary J. Balas
John C. Doyle
Keith Glover
Andy Packard
Roy Smith
Computation
Visualization
Programming
User’s Guide
Version 3
How to Contact The MathWorks:
www.mathworks.com Web
comp.soft-sys.matlab Newsgroup
508-647-7000 Phone
508-647-7001 Fax
For contact information about worldwide offices, see the MathWorks Web site.
H∞ Control and Model Reduction
3
Optimal Feedback Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Performance as Generalized Disturbance Rejection . . . . . . . . . 3-2
Norms of Signals and Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Using Weighted Norms to Characterize Performance . . . . . . . . 3-5
H∞ Output Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-36
Disturbance Feedforward and Output Estimation . . . . . . . . . 3-38
Converting Output Feedback to Output Estimation . . . . . . . . 3-39
Relaxing Assumptions A1–A4 . . . . . . . . . . . . . . . . . . . . . . . . . . 3-41
Loop Shaping Using H∞ Synthesis . . . . . . . . . . . . . . . . . . . . . 3-52
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-69
Specifics About Using the mu Command with Mixed Perturbations . . . . 4-64
Computational Exercise with the mu Command — Mixed Perturbations . . . . 4-64
Generalized µ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-81
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-84
Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-14
Robust Control Examples
7
SISO Gain and Phase Margins . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
H∞ Design on the Open-Loop Interconnection . . . . . . . . . . . . . 7-89
Assessing Robust Performance with µ . . . . . . . . . . . . . . . . . . . . . . 7-93
µ Analysis of H∞ Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-94
µ-Analysis of Loop Shape Design . . . . . . . . . . . . . . . . . . . . . . . 7-97
Recapping Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-99
D – K Iteration for HIMAT Using dkit . . . . . . . . 7-100
H∞ Loop Shaping Design for HIMAT . . . . . . . . . . . . . . . . . . . 7-121
H∞ Loop Shaping Feedback Compensator Design . . . . . . . . . 7-123
Assessing Robust Performance with µ . . . . . . . . . . . . . . . . . . . . . 7-125
Reduced Order Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-126
Introducing a Reference Signal . . . . . . . . . . . . . . . . . . . . . . . . 7-128
HIMAT References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-131
Reference
8
Summary of Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
1
Overview of the Toolbox
The advanced features of µ-Tools are aimed at:
Organization of This Manual
There are three graphical user interfaces, described in detail in Chapter 6. The
interfaces are:
• wsgui is a Workspace Manager. It allows you to select, save and clear the
workspace variables, based on their type (VARYING, SYSTEM,
CONSTANT) and other more complicated selection rules. This tool is useful
during all MATLAB sessions, and is described in the “Workspace User
Interface Tool: wsgui” section of Chapter 6.
• simgui is a time-domain simulation package for uncertain closed-loop
systems. It is powerful enough to build templates for the complex plotting
requirements of a large MIMO control design report. This tool is described in
“LFT Time Simulation User Interface Tool: simgui” section of Chapter 6.
• dkitgui is a control design program to assist you with the DK iteration. It
aids in understanding the DK iteration process. The flexibility allows you to
easily modify performance objectives and uncertainty models during the
iteration. This tool is described in “DK Iteration User Interface Tool: dkitgui”
section of Chapter 6.
2
Working with the Toolbox
This chapter gives a basic introduction, with examples, to the µ-Analysis and
Synthesis Toolbox (µ-Tools) data structure and commands. Introductory
examples are found in the demo programs msdemo1.m and msdemo2.m. Other
demonstration files are introduced in subsequent chapters. You can copy these
files from the mutools/subs source into a local directory and examine the
effects of modifying some of the commands.
Command Line Display
SYSTEM Matrices
Consider a linear, finite dimensional system, modeled by the state-space
representation
ẋ = Ax + Bu
y = Cx + Du
If the system has nx states, nu inputs, and ny outputs, then A ∈ R^{nx×nx}, B ∈ R^{nx×nu}, C ∈ R^{ny×nx}, and D ∈ R^{ny×nu}. Systems of this type are
represented in µ-Tools by a single MATLAB data structure, referred to as a
SYSTEM matrix.
[Figure: a SYSTEM matrix packs the state-space data [A B; C D] of the model ẋ = Ax + Bu, y = Cx + Du into a single MATLAB variable.]
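The construction of the example SYSTEM matrix sys is not shown in this excerpt; with pck it would be along the following lines (the numerical values are taken from the see output below):

a = [-0.15 0.5; -0.5 -0.15];
b = [-0.2 4; -0.4 0];
c = [5 5];
d = [-0.1 -0.1];
sys = pck(a,b,c,d);     % pack the state-space data into a SYSTEM matrix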
Structural information about the matrix sys can be obtained with the
command minfo.
minfo(sys)
A matrix
-0.1500 0.5000
-0.5000 -0.1500
B matrix
-0.2000 4.0000
-0.4000 0
C matrix
5 5
D matrix
-0.1000 -0.1000
The commands minfo and see work on any of the µ-Tools data structures. The command pss2sys converts CONSTANT matrix data in packed form, [A B; C D], into a µ-Tools SYSTEM matrix. The command sys2pss transforms a SYSTEM matrix into a packed CONSTANT matrix. Alternatively, you can generate a purely random SYSTEM matrix with the command sysrand by specifying its number of states, inputs, and outputs.
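A brief sketch of these conversions, continuing with sys from above (the second argument of pss2sys is the number of states):

psys = sys2pss(sys);       % packed CONSTANT form [a b; c d]
sys2 = pss2sys(psys,2);    % repack into a SYSTEM matrix with 2 states
rsys = sysrand(3,2,2);     % random SYSTEM matrix with 3 states, 2 inputs, 2 outputs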
The command spoles finds the eigenvalues of the A matrix of a SYSTEM
matrix. The transmission zeros are calculated using szeros. In this example,
sys has no transmission zeros. A formatted display of the system poles is
produced with the µ-Tools command rifd.
spoles(sys)
ans =
–0.1500 + 0.5000i
–0.1500 – 0.5000i
rifd(spoles(sys))
VARYING Matrices
Matrix-valued functions of a single, independent real variable are common in systems theory. The frequency response of a multiple-input, multiple-output (MIMO) system is a good example of such a function: the independent variable is frequency, and at each frequency the transfer function between the inputs and the outputs is a complex matrix. µ-Tools represents these types of matrix functions with a data structure called a VARYING matrix.
In general, suppose G is a matrix-valued function of a single real variable, G : R → C^{n×m}. One method to store this function on the computer is to evaluate the function G at N discrete values of x ∈ R, call them x1, x2, …, xN, and store all of the evaluations. This is the approach taken by µ-Tools.
Consider a simple example.
mat1 = [.1 -.1; .25 .5];
iv1 = 0;
mat2 = 2*mat1;
iv2 = 1;
mat3 = 2*mat2;
iv3 = 2;
The command vpck creates the VARYING matrix data structure from column
stacked matrix and independent variable data. This is done as follows.
matdata = [mat1; mat2; mat3];
ivdata = [iv1; iv2; iv3];
vmat = vpck(matdata,ivdata)
minfo displays structural characteristics of the matrix and displays the data.
minfo(vmat)
see(vmat)
2 rows 2 columns
iv = 0
0.1000 –0.1000
0.2500 0.5000
iv = 1
0.2000 –0.2000
0.5000 1.0000
iv = 2
0.4000 –0.4000
1.0000 2.0000
Note that the variable name iv stands for independent variable in the above
display. The command seeiv displays only the independent variable values of
the VARYING matrix vmat.
seeiv(vmat)
Analogous to vpck the command vunpck unpacks the matrix data and
independent variable data from a VARYING matrix.
CONSTANT Matrices
If a MATLAB variable is neither a SYSTEM nor a VARYING matrix, it is treated by µ-Tools as a CONSTANT matrix. CONSTANT matrices can be arguments to functions that normally expect VARYING or SYSTEM matrix arguments.
The treatment of CONSTANT matrices is consistent with that of a constant
gain linear system. In operations normally performed on SYSTEM matrices,
the CONSTANT matrix is analogous to a linear system with only a D matrix.
In operations where a VARYING interpretation is required, the CONSTANT
matrix is assumed to be constant across all values of the independent variable.
This is consistent with the frequency response (or step response) of a constant
gain linear system.
Acknowledgments
The data structures used in µ-Tools are based on the following paper.
Stein, Gunter, and Stephen Pratt, “LQG Multivariable Design Tools,” AGARD
Lecture Series No 117, Multi-variable Analysis and Design Techniques,
September 1981.
Accessing Parts of Matrices
For example,
subvmat = sel(vmat,1:2,2);
minfo(subvmat)
selects rows 1 and 2 and column 2 from each matrix in vmat. You can use the
MATLAB colon notation in the specification of the rows and columns. To select
all rows or columns, use the character string ’:’ in single quotes. When sel is
used on a SYSTEM matrix, only the dimensions of the B, C, and D matrices
change. All the states remain, which may result in a (non)minimal system.
Extra states of the system can be removed by performing a balanced realization
(sysbal) or with the commands strunc and sresid.
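A sketch of that cleanup step (the number of states to keep would in practice be guided by the Hankel singular values that sysbal returns):

subsys = sel(sys,1,2);            % output 1 and input 2 of the SYSTEM matrix sys
[bal,hanksv] = sysbal(subsys);    % balanced realization and Hankel singular values
redsys = strunc(bal,1);           % keep only the first state (illustrative choice)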
For VARYING matrices you can access a portion of the independent variables
with the command xtract. For example,
vmat2 = xtract(vmat,0.5,1.5);
selects the matrices in vmat with independent variables between 0.5 and 1.5.
In this case it is a VARYING matrix with a single data point.
see(vmat2)
2 rows 2 columns
iv = 1
0.2000 –0.2000
0.5000 1.0000
Interconnecting Matrices
µ-Tools provides several functions for connecting matrices together. All of the
functions described here work with interconnections of SYSTEM and
CONSTANT matrices or VARYING and CONSTANT matrices. If the matrices
represented are consistent, the combination is allowed. The interconnection of
a SYSTEM and a VARYING matrix is not allowed in µ-Tools (actually it is
allowed — see veval for examples of VARYING SYSTEM matrices).
The commands madd, msub, and mmult perform the appropriate arithmetic
operations on the matrices. A block diagram representation is shown in the
following figure.
[Figure: block diagrams of madd(mat1,mat2), the two matrices summed at a junction, and mmult(mat1,mat2), the two matrices in series.]
Note that for matrix multiplication, the order of the arguments is important.
In the VARYING matrix case, the arithmetic operations are performed
matrix-by-matrix, for each value of the independent variable. The following
example illustrates this:
A two-row and one-column VARYING matrix, vmat3, is constructed with three independent variable values.
vmat3 = vpck([2 2 4 4 8 8]',[0 1 2]');
minfo(vmat3)
The VARYING matrix vmat3 is multiplied by vmat to form vmat4, and the values of the resulting matrix at independent variables between 0 and 0.5 are displayed.
vmat4 = mmult(vmat,vmat3);
minfo(vmat4)
see(xtract(vmat4,0,0.5))
2 rows 1 column
iv = 0
0
1.5000
4 rows 3 columns
iv = 1
0.2000 –0.2000 0
0.5000 1.0000 0
0 0 4.0000
0 0 4.0000
[Figure: commuting diagram. Individual SYSTEM/CONSTANT matrices can either be interconnected first and then have the frequency response of the combined system taken, or have their individual frequency responses (VARYING matrices) taken first and then be interconnected; both paths lead to the same VARYING matrix.]
Consider beginning in the upper-right corner (individual systems of SYSTEM
and CONSTANT matrices) and proceeding to the lower left corner (a single
VARYING matrix). The result will be independent of the path taken. This is
because in linear systems, the frequency response of an interconnection is the
algebraic interconnection of the individual frequency responses. However, due
to numerical roundoff, the two computational paths do not give exactly the same numbers, and
there may be small differences between the two results. For example, it is
sometimes numerically better to interconnect the frequency response of two
systems rather than interconnect the two systems and then take their
frequency response. This can be true when there are a large number of states
in the interconnection structure.
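As a quick illustration of the two paths (a sketch, assuming two SYSTEM matrices sysA and sysB of compatible dimensions):

omega = logspace(-2,2,100);
path1 = frsp(mmult(sysA,sysB),omega);                % interconnect, then frequency response
path2 = mmult(frsp(sysA,omega),frsp(sysB,omega));    % frequency responses, then interconnect

The two VARYING matrices path1 and path2 agree up to numerical roundoff.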
Interconnections involving feedback are performed with sysic, described in
“Interconnection of SYSTEM Matrices: sysic” . The basic feedback loop
interconnection program used by sysic is called starp, and is described in
Chapter 4, “µ-Tools Commands for LFTs” on page 4-10.
The commands (sbs, mmult, starp, etc.) to form the interconnection step are
identical whether you are dealing with VARYING or SYSTEM
representations.
Plotting VARYING Matrices
[Plot: the elements of vmat plotted as matrix element value versus independent variable value.]
Note that every element of the matrix is plotted against the appropriate
independent variable. In the above example vmat is a 2 × 2 VARYING matrix,
giving four elements to be plotted. There are only three values (0, 1, and 2) of
the independent variable, and by default MATLAB draws a line between points
on the plot.
In the MATLAB plot command, different axis types are accessed by different
functions, loglog, semilogx, and others. In vplot the axis type is set by an
optional string argument. The default, used in the above example, is a linear/
linear scale. The generic vplot function call looks like
vplot('axistype',vmat1,'linetype1',vmat2,'linetype2',...)
[Plot: the same data plotted with a '*' point marker at each data value.]
see(veval('sin',vmat2))
2 rows 2 columns
iv = 1
0.1987 –0.1987
0.4794 0.8415
vmat2dat = var2con(vmat2);
sin(vmat2dat)
ans =
0.1987 –0.1987
0.4794 0.8415
VARYING Matrix Functions
vebe('sqrt',sin(vmat2dat))
ans =
0.4458 0 + 0.4458i
0.6924 0.9173
Operation            µ-Tools command
A + B + ... + H      madd(A,B,...,H)
A – B – ... – H      msub(A,B,...,H)
A * B * ... * H      mmult(A,B,...,H)
A / B                vrdiv(A,B)
A \ B                vldiv(A,B)
A .* B               veval('.*',A,B)
A ./ B               veval('./',A,B)
A ^ b                veval('^',A,b)
A .^ b               veval('.^',A,b)
A'                   cjt(A)
A.'                  transp(A)
conj(A)              cj(A)
sin(A)               vebe('sin',A)
In each case, veval could have been used. However, veval can be quite slow,
since it is essentially a for loop of eval commands. For that reason, some
specific commands (madd, mmult, vldiv, etc.) are provided. The complete set of
VARYING operations should allow you to write algorithms more easily using
the data structures in µ-Tools.
More Sophisticated SYSTEM Functions

Given a SYSTEM matrix and a vector of N frequency points, the command frsp calculates

$$C(j\omega_i I - A)^{-1}B + D, \qquad i = 1,\ldots,N$$

for each frequency ω_i and stores the result in a VARYING matrix (here called sys_g) whose independent variables are the N frequency points. You can specify a discrete-time evaluation by supplying an optional sampling time T, in which case each matrix in the VARYING output is

$$C(e^{j\omega_i T} I - A)^{-1}B + D.$$
Consider a simple second-order example. The function nd2sys creates the SYSTEM representation from numerator and denominator polynomials. In the following example the system sys1 has the transfer function

$$\frac{-0.5s + 1}{s^2 + 0.2s + 1}.$$
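The construction of sys1 is not shown in this excerpt; with nd2sys it would be

sys1 = nd2sys([-0.5 1],[1 0.2 1]);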
omega = logspace(-1,1,200);
sys1g = frsp(sys1,omega);
minfo(sys1g)
vplot('bode',sys1g)
[Plot: Bode plot of sys1g — log magnitude and phase (radians) versus frequency (radians/sec).]
You can transform sys1 to the digital domain via a prewarped Tustin
transformation. The command tustin performs this function. In this example
a sample time of one second is used. The prewarping frequency is chosen as one
radian/second. For control purposes it is often better to choose the crossover
point as the prewarp frequency.
dsys1 = tustin(sys1,1,1);
dsys1z = frsp(dsys1,omega,1);
vplot('bode',sys1g,dsys1z)
[Plot: Bode plots of the continuous-time response sys1g and the discrete-time response dsys1z — log magnitude and phase (radians) versus frequency (radians/sec).]
u = siggen('min(pi,sqrt(t)+0.25*rand(size(t)))',[0:.1:40]);
y = trsp(sys1,u);
vplot(u,y)
title('Response of sys1 (dashed) to input, u (solid)')
xlabel('Time (seconds)')
[Plot: response of sys1 (dashed) to the input u (solid), versus time (seconds).]
trsp calculates a default step-size based on the minimum spacing in the input vector and the highest frequency eigenvalue of the A matrix. For high order systems, we recommend you use some form of model reduction (see the “Model Reduction” section in Chapter 3) to remove high frequency modes which do not have a significant effect on the output.
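If the default step is not appropriate, the final time and integration step can be given explicitly (a sketch, assuming the calling sequence trsp(sys,input,tfinal,int_step)):

y2 = trsp(sys1,u,40,0.05);     % final time 40 seconds, 0.05 second integration step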
trsp assumes that the input is constant between the values specified in the
input vector. The following example illustrates the consequences of this
assumption.
sys1a = pck(-1,1,1);
minfo(sys1a)
[Plot: input and output of sys1a versus time (seconds).]
At first glance the output does not seem to be consistent with the plotted input.
Remember that trsp assumes that the input is held constant between specified
values. The vplot and plot commands display a linear interpolation between
points. This can be seen by displaying the input signal interpolated to at least
as small a step-size as the default integration step (here 0.1 seconds). The
µ-Tools function vinterp performs zero-order hold or linear interpolation of the
independent variable.
vplot(u1a,'-.',vinterp(u1a,0.1),'--',y1a,'-')
xlabel('Time: seconds')
text(5,44,'dash-dot: input')
text(5,40,'dashed: interpolated input')
text(5,36,'solid: output')
[Plot: input (dash-dot), interpolated input (dashed), and output (solid) versus time (seconds).]
The staircase nature of the input is now evident. To have a ramp input, you can
use the function vinterp to provide linear interpolation as shown by the
following example.
uramp = vinterp(u1a,0.1,60,1);
minfo(uramp)
[Plot: input and output versus time (seconds) for the linearly interpolated (ramp) input.]
Note that because the input is regularly spaced, with spacing less than or equal
to the default integration time, trsp does not interpolate the input. No final
time was specified in the trsp argument list. However 60 seconds was specified
to vinterp as the final time, and this became the last time in the input vector
uramp.
[Plot: time response versus time (seconds).]
y1f = vfft(y1);
minfo(y1f)
[Plots: the time signal y1 versus time (seconds), and the magnitude of its FFT, y1f = vfft(y1), versus frequency (radians/second).]
The µ-Tools function vplot displays this warning message since there is a data
value at zero frequency that cannot be plotted on the log frequency scale.
The Signal Processing Toolbox provides a means of performing spectral
analysis with the function spectrum. The µ-Tools function vspect operates in
a similar manner on VARYING matrices. Given a signal x and a signal y,
vspect can calculate the power spectral density of x (Pxx), the power spectral
density of y (Pyy), the cross spectral density (Pxy), the transfer function from x
to y (Txy), and the coherence (Cxy). The VARYING matrix result will have the
following five columns, [Pxx, Pyy, Pxy, Txy, Cxy]. The command vspect(x,m) will
calculate the power spectral density of each element of the VARYING matrix,
x, using averaged FFTs of length m. The algorithm is exactly that used for
spectrum. See the Signal Processing Toolbox for further information.
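For example, the power spectral density of the earlier input signal u could be examined with the two-argument form described above (a sketch):

Pxx = vspect(u,128);      % averaged 128-point FFTs of the VARYING matrix u
vplot('liv,lm',Pxx)       % display on log-log axes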
[Plot: log magnitude and phase (radians) versus frequency (radians/sec).]
estdata = sel(P1,[1:2],4);
sysord3 = fitsys(xtract(estdata,0.1,10),3);
vplot('bode',estdata,'-',frsp(sysord3,estdata),'-.')
tmp1 = 'sysord3 (dashed) and estimated';
tmp2 = ' transfer function (solid)';
title([ tmp1 tmp2 ])
rifd(spoles(sysord3))
rifd(spoles(sys3))
[Plot: Bode comparison of the estimated transfer function data and the third-order fit sysord3 — magnitude and phase (degrees) versus frequency (radians/sec).]
[Figure: example interconnection with components k (controller), act (actuator), wt (disturbance weight), and plant (g, p); external inputs noise, deltemp (temperature disturbance), and setpoint; plant outputs y1 and y2.]
Interconnection of SYSTEM Matrices: sysic
Variable Descriptions
Following are descriptions of the variables required by sysic.
systemnames
This variable is a character string, which contains the names of the matrices
(i.e., the subsystems) used in the interconnection. The names must be separated
by spaces and/or tabs, and there should be no additional punctuation. The
order in which the names appear is not important. Each named system must
exist in the MATLAB workspace at the time the program sysic is run.
For the interconnection shown, with four components, k, p, act, and wt, the
following is an appropriate definition for the variable systemnames.
systemnames = ' k p act wt ';
The names of the SYSTEM variables used within the sysic program are limited to 10 characters. This limitation is due to the MATLAB 19-character limitation on workspace variable names.
inputvar
This variable is a character string, with names of the various external inputs
that are present in the final interconnection. The input names are separated
by semicolons, and the entire list of input names is enclosed in square brackets
[ ]. Inputs can be multivariable signals, for example a windgust input with
three directions (x, y, and z) is specified by using windgust{3}. This indicates
a three-variable input to the interconnection called windgust. Alternatively, this could be specified as three separate, scalar inputs, say, windgustx, windgusty,
and windgustz. The order that the input names appear in the variable
inputvar is the order that the inputs are placed in the interconnection.
This simple interconnection has three external scalar inputs: sensor noise,
temperature disturbance, and a reference input.
inputvar = '[ noise; deltemp; setpoint]';
outputvar
This variable is a character string, describing the external outputs of the
interconnection, which must be linear combinations of the subsystem outputs
and the external inputs. Semicolons separate the channels of the output
variables. Between semicolons, signals can be added and subtracted, and multiplied by scalars.
input_to_sys
This variable denotes the inputs to a specific system. Each subsystem named
in the variable systemnames must have a variable set to define the inputs to the
subsystem. If the system name is controller, then call the variable that must
be set using input_to_controller. Specify it in the same manner that the
variable outputvar is set, with inputs consisting of linear combinations of
subsystem outputs and external inputs. Separate channels are separated by
semicolons, and the order of the inputs in the variable should match the order
of the inputs in the system itself.
Corresponding to the systemnames variable set above, there are four input_to_
statements required, which are
input_to_k = '[ noise + p(2); setpoint ]';
input_to_act = '[ k ]';
input_to_wt = '[ deltemp ]';
input_to_p = '[ wt; act ]';
This means that the input to the controller consists of the sensor noise plus the
second output of the plant, and the reference input. The input to the actuator
is the output of the controller. The input to the weighting function is the
temperature disturbance, and the input to the plant consists of the output of
the weighting function, followed by the output of the actuator.
sysoutname
This character string variable is optional. If it exists in the MATLAB
workspace when sysic is run, the interconnection that sysic creates is stored in a MATLAB variable whose name is the value of sysoutname. For example, setting sysoutname to the string 'T' will cause sysic to store the final interconnection in a SYSTEM matrix called T.
cleanupsysic
This variable is used to clean up the workspace. After running sysic, all of the
above variables that describe the interconnection are left in the workspace.
These will be automatically cleared if the optional variable cleanupsysic is set
to the character string yes. The default value of the variable is no, which does
not result in any of the user-defined sysic descriptions being cleared. The
MATLAB matrices listed in the variable systemnames are never automatically
cleared.
Running sysic
If the variables systemnames, inputvar, and outputvar are set, and for each
name name_i appearing in systemnames, the variable input_to_name_i is set,
then the interconnection is created by running the M-file sysic. Depending on
the existence/nonexistence of the variable sysoutname, the resulting
interconnection is stored in a user-specified MATLAB variable or the default
MATLAB variable ic_ms.
Within sysic, error-checking of the consistency and availability of subsystem
matrices and their inputs aids in debugging faulty sysic interconnection
descriptions.
The input/output dimensions of the final interconnection are defined by
inputvar and outputvar variables.
Returning to the initial example, the following sysic commands were used to
generate the three-input, two-output SYSTEM matrix clp. (Note that the
dimensions of the variables k, p, act, and wt must be consistent with the
problem description.)
The syntax of sysic is limited, and for the most part restricted to what is
shown here. Some additional features are illustrated in the more complicated
demonstration problems.
Given that there are four SYSTEM matrices, named himat, wdel, wp, and k, in
the MATLAB workspace, each with two inputs and two outputs, the following
10 lines form the sysic commands to make the interconnection structure
shown below, which is placed in the variable clp. These can be executed at the
command line (as shown) or placed in a script file. The command mkhimat needs
to be run initially to create himat, wdel, and wp.
mkhimat
himatic
k = zeros(2,2);
systemnames = ' himat wdel wp k ';
inputvar = '[ pertin(2) ; dist(2) ]';
outputvar = '[ wdel ; wp ]';
input_to_himat = '[ k + pertin ]';
input_to_wp = '[ dist + himat ]';
input_to_wdel = '[ k ]';
input_to_k = '[ -dist - himat ]';
sysoutname = 'clp';
cleanupsysic = 'yes';
sysic;
The final interconnection structure is located in clp with two sets of inputs,
pertin and dist, and two sets of outputs w and e, corresponding to the
perturbation and error outputs.
[Figure: the SYSTEM matrix clp, with inputs pertin and dist and outputs w = (w1, w2) and e = (e1, e2).]
3
H∞ Control and Model Reduction
[Figure: feedback interconnection of the controller K and plant G. Outside influences: reference, external force disturbance, and sensor noise. Regulated variables: tracking error and control input.]
Optimal Feedback Control
Let T denote the closed-loop mapping from the outside influences to the regulated variables,

$$\underbrace{\begin{bmatrix} \text{tracking error} \\ \text{control input} \end{bmatrix}}_{\text{regulated variables}} = T\,\underbrace{\begin{bmatrix} \text{reference} \\ \text{external force} \\ \text{noise} \end{bmatrix}}_{\text{outside influences}}$$

Norms of Signals and Systems

Performance is measured by the size of the signals involved. For a scalar signal e(t), the 2-norm is

$$\|e\|_2 := \left(\int_{-\infty}^{\infty} e^2(t)\,dt\right)^{1/2}$$

and for a vector-valued signal

$$e(t) = \begin{bmatrix} e_1(t) \\ e_2(t) \\ \vdots \\ e_n(t) \end{bmatrix}$$
its 2-norm is

$$\|e\|_2 := \left(\int_{-\infty}^{\infty} \|e(t)\|_2^2\,dt\right)^{1/2} = \left(\int_{-\infty}^{\infty} e^T(t)\,e(t)\,dt\right)^{1/2}$$
In µ-Tools the dynamical systems we deal with are exclusively linear, with
state-space model
$$\begin{bmatrix} \dot x \\ e \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} x \\ d \end{bmatrix}$$

Two norms of the corresponding transfer function T from d to e are used:

$$\|T\|_2 := \left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\|T(j\omega)\|_F^2\,d\omega\right)^{1/2}, \qquad \|T\|_\infty := \max_{\omega\in\mathbf{R}}\bar\sigma[T(j\omega)]$$

where the Frobenius norm (see the MATLAB norm command) of a complex matrix M is

$$\|M\|_F := \sqrt{\mathrm{trace}(M^*M)}$$
If the system

$$\begin{bmatrix} \dot x \\ e \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} x \\ d \end{bmatrix}$$

is stable, then the induced gain

$$\max_{d\neq 0}\frac{\|e\|_2}{\|d\|_2}$$

equals ||T||∞.
[Figure 3-2: Unweighted MIMO System — ẽ = T d̃, drawn with the arrows going from right to left.]
Suppose T is a MIMO stable linear system, with transfer function matrix T(s).
For a given driving signal d̃ ( t ) , define ẽ as the output, as shown in Figure 3-2.
Note that it is more traditional to write the diagram in Figure 3-2 with the
arrows going from left to right as in Figure 3-3.
[Figure 3-3: Unweighted MIMO System: Vectors from Left to Right — the same system drawn as d̃ → T → ẽ.]
The diagrams in Figure 3-2 and Figure 3-3 represent the exact same system.
We prefer to write these block diagrams with the arrows going right to left to
be consistent with matrix and operator composition.
Assume that the dimensions of T are n_e × n_d. Let β > 0 be defined as

$$\beta := \|T\|_\infty := \max_{\omega\in\mathbf{R}}\bar\sigma[T(j\omega)]$$ (3-1)

Now consider a response starting from an initial condition equal to 0. In that case, Parseval's theorem gives that

$$\frac{\|\tilde e\|_2}{\|\tilde d\|_2} = \frac{\left[\int_0^\infty \tilde e^T(t)\tilde e(t)\,dt\right]^{1/2}}{\left[\int_0^\infty \tilde d^T(t)\tilde d(t)\,dt\right]^{1/2}} \le \beta$$
Moreover, there are specific disturbances d̃ that result in the ratio ||ẽ||₂/||d̃||₂ being arbitrarily close to β. Because of this, ||T||∞ is referred to as the L2 (or RMS) gain of the system.
As you would expect, a sinusoidal, steady-state interpretation of ||T||∞ is also possible: for any frequency ω ∈ R, any vector of amplitudes a ∈ R^{n_d}, and any vector of phases φ ∈ R^{n_d}, with ||a||₂ ≤ 1, define a time signal

$$\tilde d(t) = \begin{bmatrix} a_1\sin(\omega t + \phi_1) \\ \vdots \\ a_{n_d}\sin(\omega t + \phi_{n_d}) \end{bmatrix}$$
Applying this input to the system T results, after transients decay, in a steady-state response of the form

$$\tilde e_{ss}(t) = \begin{bmatrix} b_1\sin(\omega t + \psi_1) \\ \vdots \\ b_{n_e}\sin(\omega t + \psi_{n_e}) \end{bmatrix}$$

and the amplitude vector b satisfies ||b||₂ ≤ β.
This interpretation generalizes to frequency-dependent weights. Let W_L and W_R be diagonal weighting matrices,

$$W_L = \begin{bmatrix} L_1 & 0 & \cdots & 0 \\ 0 & L_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & L_{n_e} \end{bmatrix}, \qquad W_R = \begin{bmatrix} R_1 & 0 & \cdots & 0 \\ 0 & R_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R_{n_d} \end{bmatrix}$$

and consider the weighted system in which d drives W_R to produce d̃, and ẽ = T d̃ is weighted by W_L to produce e, so that

$$e = W_L\,\tilde e = W_L\,T\,\tilde d = W_L\,T\,W_R\,d$$
Bounds on the quantity ||W_L T W_R||∞ imply bounds on the sinusoidal steady-state behavior of the signals d̃ and ẽ(= T d̃) in Figure 3-2. Specifically, for sinusoidal d̃, the steady-state relationship between ẽ(= T d̃), d̃, and ||W_L T W_R||∞ is as follows: if ||W_L T W_R||∞ ≤ 1, then the steady-state solution ẽ_ss, denoted

$$\tilde e_{ss}(t) = \begin{bmatrix} \tilde e_1\sin(\omega t + \varphi_1) \\ \vdots \\ \tilde e_{n_e}\sin(\omega t + \varphi_{n_e}) \end{bmatrix}$$ (3-2)

satisfies

$$\sum_{i=1}^{n_e}\left|W_{L_i}(j\omega)\,\tilde e_i\right|^2 \le 1$$

for all sinusoidal input signals d̃ of the form

$$\tilde d(t) = \begin{bmatrix} \tilde d_1\sin(\omega t + \phi_1) \\ \vdots \\ \tilde d_{n_d}\sin(\omega t + \phi_{n_d}) \end{bmatrix}$$ (3-3)

satisfying

$$\sum_{i=1}^{n_d}\frac{\tilde d_i^2}{\left|W_{R_i}(j\omega)\right|^2} \le 1$$

Loosely, sinusoidal disturbances with component amplitudes |d̃_i| ≤ |W_{R_i}(jω)| lead to steady-state errors with component amplitudes |ẽ_i| ≤ 1/|W_{L_i}(jω)|.
This shows how one could pick performance weights to reflect the desired frequency-dependent performance objective. Use W_R to represent the relative magnitude of sinusoidal disturbances that might be present, and use 1/W_L to represent the desired upper bound on the subsequent errors that are produced.
Remember, though, the weighted H∞ norm does not actually give element-
by-element bounds on the components of ẽ based on element-by-element
bounds on the components of d̃ . The precise bound it gives is in terms of
Euclidean norms of the components of ẽ and d̃ (weighted appropriately by
WL(j ω ) and WR(j ω )).
The blocks in Figure 3-5 might be scalar (SISO) and/or multivariable (MIMO),
depending on the specific example. The mathematical objective of H∞ control
is to make the closed-loop MIMO transfer function Ted satisfy ||Ted||∞ < 1. The
weighting functions are used to scale the input/output transfer functions such
that when ||Ted||∞ < 1, the relationship between d̃ and ẽ is suitable.
Interconnection with Typical MIMO Performance Objectives
The following paragraphs give the interpretation of the signals, weighting functions, and models.
Wcmd
Wcmd is used in problems requiring tracking of a reference command. Wcmd
shapes (magnitude and frequency) the normalized reference command signals
into the actual (or typical) reference signals that we expect to occur. It describes
the magnitude and the frequency dependence of the reference commands
generated by the normalized reference signal. Normally Wcmd is flat at low
frequency and rolls off at high frequency. For example, in a flight control
problem, fighter pilots can (and will) generate stick input reference commands
up to a bandwidth of about 2 Hz. Suppose that the stick has a maximum travel of three inches. Pilot commands could be modeled as normalized signals passed through a first-order filter:

$$W_{\mathrm{cmd}} = \frac{3}{\dfrac{1}{2\cdot 2\pi}\,s + 1}$$
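Such a first-order weight could be built with nd2sys (a sketch, using the gain of 3 and corner frequency of 2·2π rad/s from the example):

wcmd = nd2sys(3,[1/(2*2*pi) 1]);     % 3 / ((1/(4*pi)) s + 1)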
Wmodel
Wmodel represents a desired ideal model for the closed-loop system, used for problems with tracking requirements. For good command tracking response, we might desire our closed-loop system to respond like a well-damped second-order system. The ideal model would then be

$$W_{\mathrm{model}} = \frac{\omega^2}{s^2 + 2\zeta\omega s + \omega^2}$$

for a specific desired natural frequency ω and desired damping ratio ζ. Unit conversions might be necessary too. In the fighter pilot example, suppose that roll-rate is being commanded, and a 10°/second response is desired for each inch of stick motion. Then, in these units, the appropriate model is

$$W_{\mathrm{model}} = 10\,\frac{\omega^2}{s^2 + 2\zeta\omega s + \omega^2}$$
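A sketch of building this model weight with nd2sys, using hypothetical values ω = 4 rad/s and ζ = 0.7:

wn = 4; zeta = 0.7;                            % hypothetical natural frequency and damping
wmodel = nd2sys(10*wn^2,[1 2*zeta*wn wn^2]);   % 10 w^2 / (s^2 + 2 zeta w s + w^2)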
Wdist
Wdist shapes the frequency content and magnitude of the exogenous
disturbances affecting the plant. For example, consider an electron microscope
as the plant. The dominant performance objective is to mechanically isolate the
microscope from outside mechanical disturbances, such as the ground
excitations, sound (pressure) waves, and air currents. The spectrum and
relative magnitudes of these disturbances are captured in the transfer function
weighting matrix Wdist.
Wperf1
Wperf1 weights the difference between the response of the plant and the
response of the ideal model, Wmodel. Often we desire accurate matching of the
ideal model at low frequency and require less accurate matching at higher
frequency, in which case Wperf1 is flat at low frequency, rolls off at first or
second order, and flattens out at a small, nonzero value at high frequency. The
inverse of the weight should be related to the allowable size of tracking errors,
in the face of the reference commands and disturbances described by Wref and
Wdist.
Wperf2
Wperf2 penalizes variables internal to the process G, such as actuator states
that are internal to G, or other variables that are not part of the tracking
objective.
Wact
Wact is used to shape the penalty on control signal use. Wact is a frequency
varying weighting function used to penalize limits on the deflection/position,
deflection rate/velocity, etc., response of the control signals, in the face of the
tracking and disturbance rejection objectives defined above. Each control
signal is usually penalized independently.
Wsnois
Wsnois represents frequency domain models of sensor noise. Each sensor
measurement fed back to the controller has some noise, which is often higher
in one frequency range than another. The Wsnois weight tries to capture this
information, derived from laboratory experiments or based on manufacturer
measurements, in the control problem. For example, medium grade
accelerometers have substantial noise at low frequency and high frequency.
Therefore the corresponding Wsnois weight would be larger at low and high
frequency and have a smaller magnitude in the mid-frequency range.
Displacement or rotation measurement is often quite accurate at low frequency
and in steady-state, but responds poorly as frequency increases. The weighting
function for this sensor would be small at low frequency, gradually increase in
magnitude as a first- or second-order system, and level out at high frequency.
Hsens
Hsens represents a model of the sensor dynamics or an external anti-aliasing
filter. The transfer functions used to describe Hsens are based on physical
characteristics of the individual components. These models might also be
lumped into the plant model G.
This generic block diagram has tremendous flexibility and many control
performance objectives can be formulated using this block diagram description.
In Chapter 4, we see how to incorporate uncertainty into the model of G (and
possibly Hsens as well), and how to analyze the implications on performance due
to uncertainty. Chapter 7 presents a number of examples, which explain in
detail how individual performance weighting functions are selected.
Commands to Calculate the H2 and H∞ Norm
H2 norm
The H2 norm of a stable, strictly proper continuous-time SYSTEM matrix can
be calculated using the command h2norm. Its calling sequence is
out = h2norm(sys)
The output variable, out, is a scalar whose value is the two-norm of the
SYSTEM sys. Given a state-space description of a system as
$$\mathrm{sys} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$$

the H2 norm of the SYSTEM follows from the solution X of the Lyapunov equation

$$AX + XA' + BB' = 0$$

with $\|\mathrm{sys}\|_2 = \left[\,\mathrm{tr}(CXC')\,\right]^{1/2}$.
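A minimal cross-check of this formula, reusing sys1 from the Chapter 2 examples (a sketch; it assumes sys1 is strictly proper):

[a,b,c,d] = unpck(sys1);     % recover the state-space data
X = lyap(a,b*b');            % solves a*X + X*a' + b*b' = 0
sqrt(trace(c*X*c'))          % should agree with h2norm below
h2norm(sys1)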
H∞ norm
The H∞ norm of a stable, continuous-time SYSTEM, sys, can be calculated
using the command hinfnorm. Its calling sequence is
out = hinfnorm(sys,tol)
The output from hinfnorm is a 1 × 3 vector, out, which is made up (in order) of
a lower bound for ||sys||∞, an upper bound for ||sys||∞, and a frequency, ωo, at
which the lower bound is achieved.
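For example (a sketch, with a hypothetical tolerance of 0.001):

out = hinfnorm(sys1,0.001);
% out(1) and out(2) bracket the norm; out(3) is the frequency of the lower bound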
calls pkvnorm to find the maximum singular value of the VARYING matrix
across frequency.
The H∞ norm of a frequency VARYING matrix, sysg, can be calculated using
pkvnorm or vnorm. The calling sequences are
[peak,indv,index] = pkvnorm(matin)
out = vnorm(matin)
pkvnorm sweeps through the independent variable and calculates the largest
singular value of matin. The three output arguments all pertain to the peak
norm across frequency and its location: peak value, peak, the independent
variable’s value, indv, and the independent variable’s index, index.
vnorm is a VARYING matrix version of MATLAB’s norm command. The
operation is identical, except that it also works on CONSTANT and VARYING
matrices, producing a CONSTANT or VARYING output. vnorm returns the
matrix out with its norm at each independent variable value. The default is the
largest singular value of matin at each independent variable value.
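A sketch of both commands applied to a frequency response, reusing sys1g from Chapter 2:

[peak,indv,index] = pkvnorm(sys1g);   % peak gain and where (value and index) it occurs
nrm = vnorm(sys1g);                   % largest singular value at each frequency
vplot('liv,m',nrm)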
Discrete-Time H∞ Norm
The H∞ norm of a discrete-time SYSTEM can be calculated using the command
dhfnorm. Its calling sequence is
out = dhfnorm(sys)
Commands to Design H∞ Output Feedback Controllers

The generalized plant P is assumed to have the partitioned realization

$$P = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} = C(sI - A)^{-1}B + D$$
The H∞ output feedback control design problem is: Does there exist a linear
controller, K, with internal structure
$$K = \begin{bmatrix} A_K & B_K \\ C_K & D_K \end{bmatrix}$$

such that the closed-loop system

[Figure: lower LFT interconnection of P and K, with exogenous input d, error e, measurement y, and control u]
is stable and the ∞-norm of FL(P, K) is less than γ? Note that the above block
diagram represents a linear fractional transformation (LFT). LFTs are
described in more detail in the “Representing Uncertainty” section in Chapter
4. The LFT equation FL(P,K) is given by P11 + P12K(I – P22K)–1P21.
The standard state-space technique to calculate H∞ output feedback
controllers is to select a value of γ and determine if there exists a controller K
such that ||FL(P,K)||∞ < γ. This value of γ is updated based on a modified bisection method.
(A3) $\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix}$ has full column rank for all ω.

(A4) $\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix}$ has full row rank for all ω.
hinfsyn and hinfsyne return the H∞ controller, the closed-loop system, and
the γ level achieved.
The hinfsyn and hinfsyne programs provide a γ iteration using a modified
bisection method. You select a high and low value of γ, gamma_max and
gamma_min. The bisection method iterates on the value of γ in an effort to
approach the optimal H∞ control design. If the value gamma_max equals
gamma_min, only one γ value is tested. The bisection algorithm stops when the
difference between the smallest value of γ that has passed and the largest value
of γ that has failed is less than tol.
H∞ Design Example
The objective is to design an H∞ (sub)optimal control law for SYSTEM
interconnection structure given by the block diagram in Figure 3-6. The
HIMAT plant model and weightings are described in more detail in the
“HIMAT Robust Performance Design Example” section in Chapter 7.
During the γ iteration, information is displayed denoting whether each tested γ value passed or failed. Upon finishing,
hinfsyn and hinfsyne print out the lowest γ value achieved. A # sign is used in
the printout to denote which of the five conditions for the existence of an H∞
(sub)optimal controller failed.
nmeas = 2;
ncont = 2;
gmn = 1;
gmx = 10;
tol = 0.1;
mkhimat
himatic
minfo(himat_ic)
system: 8 states 6 outputs 6 inputs
p = himat_ic;
[k,clp] = hinfsyn(p,ncont,nmeas,gmn,gmx,tol);
Test bounds: 1.0000 < gamma<=10.0000
H∞ Optimal Control Theory

Two norms of a transfer matrix P are of interest:

$$\|P\|_2 := \left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{trace}\left[P(j\omega)^*P(j\omega)\right]d\omega\right)^{1/2}$$

$$\|P\|_\infty := \sup_{\omega}\bar\sigma\left[P(j\omega)\right] \qquad (\bar\sigma = \text{maximum singular value})$$
The former arises when the exogenous signals either are fixed or have a fixed
power spectrum; the latter arises from (weighted) balls of exogenous signals.
H2-optimal control theory was heavily studied in the 1960’s as the Linear
Quadratic Gaussian (LQG) optimal control problem; H∞-optimal control theory
is continuing to be developed. We assume the reader either is familiar with the
engineering motivation for these problems, or is interested in the results of this
chapter for some other reason.
The basic block diagram used in this chapter is
[Figure: lower LFT interconnection — the generalized plant P with inputs d (exogenous) and u (control), outputs e (errors) and y (measurements), and the controller K mapping y to u]
where P is the generalized plant and K is the controller. Only finite
dimensional linear time-invariant (LTI) systems and controllers will be
considered. The generalized plant P contains what is usually called the plant
in a control problem plus all weighting functions. The signal d contains all
external inputs, including disturbances, sensor noise, and commands, the
output e is an error signal, y is the measured variables, and u is the control
input. The diagram is also referred to as a linear fractional transformation
(LFT) on K and P is called the coefficient matrix for the LFT. The resulting
closed loop transfer function from d to e is denoted by Ted = FL(P,K).
The main H∞ output feedback results are presented in the “H∞ Output Feedback” section. The proofs of these results exploit the separation structure
of the controller. If perfect measurements of the states (x) and the disturbances
(d) are available (this is defined as the Full Information problem), then the
central controller is simply a gain matrix F∞, obtained through finding a
certain stable invariant subspace of a Hamiltonian matrix. Also, the optimal
output estimator is an observer whose gain is obtained in a similar way from a
dual Hamiltonian matrix. These special cases are described in the “H∞ Full Information and Full Control Problems” section. In the general output
feedback case the controller can be interpreted as an optimal estimator for F∞x.
Furthermore, the two Hamiltonians involved in this solution can be associated
with full information and output estimation problems.
As mentioned, this material is taken primarily from [GloD2], which is a direct
generalization of [DoyGKF], and contains a substantial repetition of material.
Roughly speaking, [GloD2] proves those results in [GloD] which were stated
without proof, using [DoyGKF] machinery, which considered a less general
problem. An alternative approach in relaxing some of the assumptions in
[DoyGKF] is to use loop-shifting techniques as in [ZhouK], [GloD], and more
completely in [SafLC]. We also consider some aspects of generalizations to the
≤ case, primarily to indicate the problems encountered in the optimal case. A
detailed derivation of the necessity of the generalized conditions for the Full
Information problem is given. In keeping with the style of [GloD] and
[DoyGKF], we don’t present a complete treatment of the ≤ case. Complete
derivations of the optimal output feedback case can be found in [GlovM] using
different techniques.
Historical Perspective
This section is not intended as a review of the literature in H∞ theory, but
rather an attempt to outline some of the work that led up to and most closely
touches on [DoyGKF], [GloD], and [GloD2]. For a more
extensive bibliography and review of earlier literature, the interested reader
might see [Fran1] and [FranD].
Zames’ [Zame] original formulation of H∞ optimal control theory was in an
input-output setting. Most solution techniques available at that time involved
analytic functions (Nevanlinna-Pick interpolation) or operator-theoretic
methods [Sara], [AdAK], and [BallH]. An earlier state-space solution was
presented in [Doy1], in which the steps were as follows: parametrize all
internally stabilizing controllers via Youla [YouJB]; obtain realizations of the
closed-loop transfer matrix; convert the resulting model-matching problem into
the equivalent 2 × 2-block general distance or best approximation problem
involving mixed Hankel-Toeplitz operators; reduce to the Nehari problem
(Hankel only); and solve the Nehari problem by the procedure of [Glo1]. Both
[Fran1] and [FranD] give expositions of this approach, which will be referred
to as the “1984” approach.
In a mathematical sense, the 1984 procedure solved the general rational H∞
optimal control problem and much of the subsequent work in H∞ control theory
focused on the 2 × 2-block problems, either in the model-matching or general
distance forms. Unfortunately, the associated complexity of computation was
substantial, involving several Riccati equations of increasing dimension, and
formulae for the resulting controllers tended to be very complicated and have
high state dimension. Encouragement came from [LimH] who showed, for
problems transformable to 2 × 1-block problems, that a subsequent minimal
realization of the controller has state dimension no greater than that of the
generalized plant G. This suggested the likely existence of similarly low
dimension optimal controllers in the general 2 × 2 case.
Additional progress on the 2 × 2-block problems came from [BallC], who gave a
state-space solution involving three Riccati equations. [JonJ] showed a
connection between the 2 × 1-block problem. [FoisT] developed an interesting
class of operators called skew Toeplitz to study the 2 × 2-block problem. Other
approaches have been derived by [Hung] using an interpolation theory
approach, [Kwak] using a polynomial approach, and [Kim] using a method
based on conjugation.
Notation
The notation is fairly standard. The Hardy spaces H₂ and H₂⊥ consist of
square-integrable functions on the imaginary axis with analytic continuation
into, respectively, the right and left half-plane. The Hardy space H∞ consists
of bounded functions with analytic continuation into the right half-plane. The
Lebesgue spaces L2 = L2(–∞,∞), L2+ = L2[0,∞), and L2– = L2(–∞,0] consist
respectively of square-integrable functions on (–∞,∞), [0,∞), and (–∞,0], and
L∞ consists of bounded functions on (–∞,∞). As interpreted in this chapter, L∞
will consist of functions of frequency, L2+ and L2– functions of time, and L2 will
be used for both.
We will make liberal use of the Hilbert space isomorphism, via the Laplace
transform and the Paley-Wiener theorem, of L2 = L2+ ⊕ L2– in the time-domain
⊥
with L2 = H2 ⊕ H 2 in the frequency-domain and of L2+ with H2 and L2– with
⊥
H 2 . In fact, we will normally not make any distinction between a time-domain
signal and its transform. Thus we may write d ∈ L 2+ and then treat d as if
d ∈ H 2 . This style streamlines the development, as well as the notation, but
when any possibility of confusion could arise, we will make it clear whether we
are working in the time- or frequency-domain.
All matrices and vectors will be assumed to be complex. A transfer matrix in terms of state-space data is denoted

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix} := C(sI - A)^{-1}B + D$$

For a matrix M ∈ C^{p×r}, M′ denotes its conjugate transpose, σ̄(M) = ρ(M′M)^{1/2} denotes its maximum singular value, ρ(M) denotes its spectral radius (if p = r), and M† denotes the Moore-Penrose pseudo-inverse of M. Im denotes image, ker denotes kernel, and P~(s) := P(−s)′. For operators, Γ* denotes the adjoint of Γ. The prefix B denotes the open unit ball and the prefix Rc denotes complex-rational.
The orthogonal projections P₊ and P₋ map L₂ to, respectively, H₂ and H₂⊥ (or
L2+ and L2–). For P ∈ L ∞ , the Laurent or multiplication operator M P : L 2 → L 2
for frequency-domain d ∈ L 2 is defined by MPd = Pd. The norms on L∞ and L2
in the frequency-domain were defined in the “Performance as Generalized
Disturbance Rejection” section. Note that both norms apply to matrix or
vector-valued functions. The unsubscripted norm || • || will denote the standard
Euclidean norm on vectors. We will omit all vector and matrix dimensions
throughout, and assume that all quantities have compatible dimensions.
Problem Statement
Consider the system described by the block diagram
[Figure: lower LFT interconnection of P and K, with exogenous input d, error output e, measurement y, and control u]
Both P and K are complex-rational and proper, and K is constrained to provide
internal stability. We will denote the transfer functions from d to e as Ted in
general and for a linear fractional transformation feedback connection as above
we also write Ted = FL(P,K). This section discusses the assumptions on P that
will be used. In our application we have state models of P and K. Then internal
stability will mean that the states of P and K go to zero from all initial values
when d = 0.
Since we will restrict our attention exclusively to proper, complex-rational
controllers that are stabilizable and detectable, these properties will be
assumed throughout. Thus the term controller will be taken to mean a
controller that satisfies these properties. Controllers that have the additional
property of being internally stabilizing will be said to be admissible. Although
we are taking everything to be complex, in the special case where the original
data is real (e.g., P is real-rational) then all of the results (such as K) will also
be real.
The problem to be considered is to find all admissible K(s) such that
||Ted||∞ < γ(≤ γ). The realization of the transfer matrix P is taken to be of the form
$$P(s) = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$$

compatible with the dimensions e(t) ∈ C^{p₁}, y(t) ∈ C^{p₂}, d(t) ∈ C^{m₁}, u(t) ∈ C^{m₂}, and the state x(t) ∈ C^n. The following assumptions are made:
(A1) (A,B2) is stabilizable and (C2,A) is detectable.
(A2) D₁₂ is full column rank with [D₁₂  D⊥] unitary, and D₂₁ is full row rank with [D₂₁; D̃⊥] unitary.

(A3) $\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix}$ has full column rank for all ω.

(A4) $\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix}$ has full row rank for all ω.
If K internally stabilizes $P - \begin{bmatrix} 0 & 0 \\ 0 & D_{22} \end{bmatrix}$ and

$$\left\|F_L\!\left(P - \begin{bmatrix} 0 & 0 \\ 0 & D_{22} \end{bmatrix},\,K\right)\right\|_\infty < \gamma$$

then

$$F_L\big(P,\,K(I + D_{22}K)^{-1}\big) = P_{11} + P_{12}K(I + D_{22}K - P_{22}K)^{-1}P_{21} = F_L\!\left(P - \begin{bmatrix} 0 & 0 \\ 0 & D_{22} \end{bmatrix},\,K\right)$$

Thus a controller K designed for $P - \begin{bmatrix} 0 & 0 \\ 0 & D_{22} \end{bmatrix}$ yields a controller K̃ = K(I + D₂₂K)⁻¹ for P. The µ-Tools commands hinfsyn and hinfsyne handle the nonzero D₂₂ case.
When D22 ≠ 0 there is a possibility of the feedback system becoming ill-posed
due to det(I +D22 K̃ ( ∞ ) ) = 0 (or more stringent conditions if we require
well-posedness in the face of infinitesimal time delays [Will1]). Such
possibilities need to be excluded.
It can be assumed, without loss of generality, that γ = 1 since this is achieved
by the scalings γ–1D11, γ–1/2B1, γ–1/2C1, γ1/2B2, γ1/2C2, and γ–1K. This will be
done implicitly for many of the statements of this chapter.
Preliminaries
This section reviews some mathematical preliminaries, in particular the
computation of the various norms of a transfer matrix P. Consider the transfer
matrix
$$P(s) = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$$ (3-4)
||P||∞ is the induced norm of the multiplication operator M_P, as well as the Toeplitz operator P₊M_P : H₂ → H₂.
Consider a Hamiltonian matrix

$$H := \begin{bmatrix} A & R \\ Q & -A' \end{bmatrix}$$

and let χ₋(H) denote its stable invariant subspace,

$$\chi_-(H) = \mathrm{Im}\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$ (3-5)

where X₁, X₂ ∈ C^{n×n}, and

$$H\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}T_X, \qquad \mathrm{Re}\,\lambda_i(T_X) < 0\ \ \forall i$$ (3-6)
If the two subspaces

$$\chi_-(H), \qquad \mathrm{Im}\begin{bmatrix} 0 \\ I \end{bmatrix}$$ (3-7)

are complementary, X₁ is invertible and we can set X := Ric(H) := X₂X₁⁻¹. Then:

(a) X is Hermitian

(b) X satisfies the Riccati equation A′X + XA + XRX − Q = 0

(c) A + RX is stable

A Hamiltonian of particular interest is

$$H = \begin{bmatrix} A & -BB' \\ -C'C & -A' \end{bmatrix}$$
$$H := \begin{bmatrix} A + BR^{-1}D'C & BR^{-1}B' \\ -C'(I - DD')^{-1}C & -(A + BR^{-1}D'C)' \end{bmatrix}$$ (3-8)

$$\phantom{H :} = \begin{bmatrix} A & 0 \\ -C'C & -A' \end{bmatrix} + \begin{bmatrix} B \\ -C'D \end{bmatrix}R^{-1}\begin{bmatrix} D'C & B' \end{bmatrix}$$ (3-9)
where R = I – D´D. The following lemma is essentially from [And], [Will1], and
[BoyBK].
Lemma 3.4. Let σ̄(D) < 1. Then the following conditions are equivalent:

(a) ||P||∞ < 1

(b) H has no eigenvalues on the imaginary axis

(c) H ∈ dom(Ric)

(d) H ∈ dom(Ric) and Ric(H) ≥ 0 (Ric(H) > 0 if (C,A) is observable)
H∞ Full Information and Full Control Problems
The special problems considered in this section all use the same feedback diagram as the output feedback problem,

[Figure: lower LFT interconnection of P and K]
but with different structures for P. The problems are labeled
FI Full information
FC Full control
DF Disturbance feedforward (to be considered in the “Disturbance
Feedforward and Output Estimation” section)
OE Output estimation (to be considered in the “Disturbance
Feedforward and Output Estimation” section)
FC and OE are natural duals of FI and DF, respectively. The DF solution can
be easily obtained from the FI solution, as shown in the “Disturbance
Feedforward and Output Estimation” section. The output feedback solutions
will be constructed out of the FI and OE results. A dual derivation could use
the FC and DF results.
The FI and FC problems are not, strictly speaking, special cases of the output
feedback problem, as they do not satisfy all of the assumptions. Each of the four
problems inherit certain of the assumptions A1–A4 from the “Problem
Statement” section as appropriate. The terminology and assumptions will be
discussed in the subsections for each problem. In each of the four cases, the
results are necessary and sufficient conditions for the existence of a controller
such that ||Ted||∞ < γ and the family of all controllers such that ||Ted||∞ < γ. In all
cases, K must be admissible.
The H∞ solution involves two Hamiltonian matrices, H∞ and J∞, which are
defined as follows:
$$R := D_{1\bullet}'D_{1\bullet} - \begin{bmatrix} \gamma^2 I_{m_1} & 0 \\ 0 & 0 \end{bmatrix}, \qquad \text{where } D_{1\bullet} := \begin{bmatrix} D_{11} & D_{12} \end{bmatrix}$$

$$\tilde R := D_{\bullet 1}D_{\bullet 1}' - \begin{bmatrix} \gamma^2 I_{p_1} & 0 \\ 0 & 0 \end{bmatrix}, \qquad \text{where } D_{\bullet 1} := \begin{bmatrix} D_{11} \\ D_{21} \end{bmatrix}$$

$$H_\infty := \begin{bmatrix} A & 0 \\ -C_1'C_1 & -A' \end{bmatrix} - \begin{bmatrix} B \\ -C_1'D_{1\bullet} \end{bmatrix}R^{-1}\begin{bmatrix} D_{1\bullet}'C_1 & B' \end{bmatrix}$$ (3-10)

$$J_\infty := \begin{bmatrix} A' & 0 \\ -B_1B_1' & -A \end{bmatrix} - \begin{bmatrix} C' \\ -B_1D_{\bullet 1}' \end{bmatrix}\tilde R^{-1}\begin{bmatrix} D_{\bullet 1}B_1' & C \end{bmatrix}$$ (3-11)

The stable invariant subspaces of H∞ and J∞ give X₁, X₂ and Y₁, Y₂ satisfying

$$H_\infty\begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}T_X, \qquad X_1'X_2 = X_2'X_1, \qquad \mathrm{Re}\,\lambda_i(T_X) \le 0\ \ \forall i$$ (3-12)

$$J_\infty\begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix}T_Y, \qquad Y_1'Y_2 = Y_2'Y_1, \qquad \mathrm{Re}\,\lambda_i(T_Y) \le 0\ \ \forall i$$ (3-13)

Also define

$$X_\infty := X_2X_1^{-1}, \qquad Y_\infty := Y_2Y_1^{-1}$$ (3-14)
$$F := \begin{bmatrix} F_1 \\ F_2 \end{bmatrix} := -R^{-1}\left[D_{1\bullet}'C_1 + B'X_\infty\right]$$ (3-15)

$$L := \begin{bmatrix} L_1 & L_2 \end{bmatrix} := -\left[B_1D_{\bullet 1}' + Y_\infty C'\right]\tilde R^{-1}$$ (3-16)

In the Full Information problem the controller measures both the state and the disturbance, so

$$P(s) = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ \begin{bmatrix} I \\ 0 \end{bmatrix} & \begin{bmatrix} 0 \\ I \end{bmatrix} & \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{bmatrix}$$ (3-17)
The assumptions relevant to the FI problem, which are inherited from the
output feedback problem, are
(A1) (A,B2) is stabilizable.
(A2) D₁₂ is full column rank with [D₁₂  D⊥] unitary.

(A3) $\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix}$ has full column rank for all ω.
All admissible controllers K(s) satisfying ||Ted||∞ < γ are generated from the state-feedback gains F₁ and F₂, the matrices T₁ and T₂, and a free stable transfer matrix parameter Q(s).
The Full Control problem has

$$P(s) = \begin{bmatrix} A & B_1 & \begin{bmatrix} I & 0 \end{bmatrix} \\ C_1 & D_{11} & \begin{bmatrix} 0 & I \end{bmatrix} \\ C_2 & D_{21} & \begin{bmatrix} 0 & 0 \end{bmatrix} \end{bmatrix}$$ (3-18)
and is the dual of the Full Information case: the P for the FC problem has the
same form as the transpose of P for the FI problem. The term Full Control is
used because the controller has full access to both the state through output
injection and to the output e. The only restriction on the controller is that it
must work with the measurement y. The assumptions that the FC problem
inherits from the output feedback problem are just the dual of those in the FI
problem:
(A1) (C2,A) is detectable.
(A2) D₂₁ is full row rank with [D₂₁; D̃⊥] unitary.

(A4) $\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix}$ has full row rank for all ω.
Necessary and sufficient conditions for the FC case are given in the following
corollary. The family of all controllers can be obtained from the dual of
Theorem 3.5 but these controllers will not be required in the sequel and are
hence omitted.
Corollary 3.6. Suppose P is given by (3-18) and satisfies A1, A2 and A4. Then,
H∞ Output Feedback
The solution to the Full Information problem of the “H∞ Full Information and Full Control Problems” section is used in this section to solve the output
feedback problem. First in Theorem 3.8 a so-called disturbance feedforward
problem is solved. In this problem one component of the disturbance, d2, can be
estimated exactly from y using an observer, and the other component of the
disturbance, d1, does not affect the state or the output. The conditions for the
existence of a controller satisfying a closed-loop H∞-norm constraint is then
identical to the FI case.
The solution to the general output feedback problem can then be derived from
the transpose of Theorem 3.7 (Corollary 3.9) by a suitable change of variables
which is based on X∞ and the completion of the squares argument (see
[GloD2]).
The main result is now stated in terms of the matrices defined in the “H∞ Full Information and Full Control Problems” section involving the solutions of the
X∞ and Y∞ Riccati equations together with the state feedback and output
injection matrices F and L. Assume that unitary changes of coordinates on ω
and z have been carried out to give the following partitions of D, F1 and L1.
       |  F′11    F′12    F′2
  L′11 |  D1111   D1112    0
  L′12 |  D1121   D1122    I
  L′2  |    0       I      0          (3-19)
(b) Given that the conditions of part (a) are satisfied, then all rational
internally stabilizing controllers K(s) satisfying ||FL(P,K)||∞ < γ are given
by
Ka = [Â  B̂1  B̂2;  Ĉ1  D̂11  D̂12;  Ĉ2  D̂21  0]

where

D̂11 = −D1121D′1111(γ²I − D1111D′1111)⁻¹D1112 − D1122,

D̂12 ∈ C^(m2×m2) and D̂21 ∈ C^(p2×p2) are any matrices (e.g., Cholesky factors)
satisfying

D̂12D̂′12 = I − D1121(γ²I − D′1111D1111)⁻¹D′1121,

D̂′21D̂21 = I − D′1112(γ²I − D1111D′1111)⁻¹D1112,

and

B̂2 = Z∞⁻¹(B2 + L12)D̂12,

Ĉ2 = −D̂21(C2 + F12),

B̂1 = −Z∞⁻¹L2 + B̂2D̂12⁻¹D̂11,

Ĉ1 = F2 + D̂11D̂21⁻¹Ĉ2,

Â = A + BF + B̂1D̂21⁻¹Ĉ2,

where

Z∞ = (I − γ⁻²Y∞X∞).
The proof of this main result is via some special problems that are simpler
special cases of the general problem and can be derived from the FI and FC
problems. A separation type argument can then give the solution to the general
problem from these special problems. It can be assumed, without loss of
generality, that γ = 1 since this is achieved by the scalings γ–1D11, γ–1/2B1,
γ–1/2C1, γ1/2B2, γ1/2C2, γ-1X∞, γ-1Y∞ and γ–1K. All the proofs will be given for the
case γ = 1.
In this case,

Y∞ = 0,   Z = I,   L = −[0   B1D′21]

The transpose of Theorem 3.8 can now be stated to obtain another special case
of Theorem 3.7.

D′⊥C1 = 0,   A − B2D′12C1 is stable.

In this case

X∞ = 0,   Z = I,   F = −[0; D′12C1]
‖e‖2² − ‖d‖2² = ‖v‖2² − ‖r‖2²

where

v = u + T2d − [T2  I]Fx
r = T1(d − F1x)

so that

ẋ = (A + B1F1)x + B1T1⁻¹r + B2u
v = u + T2T1⁻¹r − F2x
y = (C2 + D21F1)x + D21T1⁻¹r

Pvyru(s) := [A + B1F1   B1T1⁻¹   B2;   −F2   T2T1⁻¹   I;   C2 + D21F1   D21T1⁻¹   0]          (3-21)

Similarly, substituting v for u in the equations for P gives that the transfer
function from [d; v] to [e; r] is H as defined by
H = [AF   B1 − B2T2   B2;   C1F   D11 − D12T2   D12;   −T1F1   T1   0]          (3-22)
It can be shown that H~H = I (since ‖e‖2² − ‖d‖2² = ‖v‖2² − ‖r‖2²) and that AF is
stable.
We can show with a little algebra the equivalence of the first two of the
following block diagrams, with Tvr = FL(Pvyru,K) given by the third one.
(Block diagrams: (1) K in feedback with P from y to u; (2) the same loop redrawn with H mapping (d, v) to (e, r) and K closing the loop of Pvyru; (3) H with the loop from r to v closed by −Tvr.)
Lemma 3.10. Let P satisfy A1–A4, and assume that X∞ exists and X∞ ≥ 0. Then
the following are equivalent:
a  K internally stabilizes P and ||FL(P,K)||∞ < 1,

b  K internally stabilizes Pvyru and ||FL(Pvyru,K)||∞ < 1,

c  K internally stabilizes Ptmp and ||FL(Ptmp,K)||∞ < 1,
Ptmp := [A + B1F1   B1   B2;   −D12F2   D11   D12;   C2 + D21F1   D21   0].
The importance of the above constructions for Pvyru and Ptmp is that they
satisfy the assumptions for the output estimation problem (Corollary 3.9) since
A + BF is stable.
Relaxing A3 and A4
Suppose that

P = [0  0  1;  0  0  1;  1  1  0]
Relaxing A1
If assumption A1 is violated, then it is obvious that no admissible controllers
exist. Suppose A1 is relaxed to allow unstabilizable and/or undetectable modes
on the jω axis, and internal stability is also relaxed to allow closed-loop jω-axis
poles, but A2–A4 are still satisfied. It can be easily shown that under these
Relaxing A2
In the case that either D12 is not full column rank or D21 is not full row rank,
improper controllers can give a bounded H∞ norm for Ted, although these will not
be admissible as defined in the “Problem Statement” section. Such singular
filtering and control problems have been well-studied in H2 theory and many
of the same techniques go over to the H∞ -case (e.g., [Will2], [WilKS] and
[HauS]). In particular the structure algorithm of [Silv] could be used to make
the terms D12 and D21 full rank by the introduction of suitable differentiators
in the controller.
Discrete-Time and Sampled-Data H∞ Control
x_{k+1} = Ax_k + Bu_k
y_k = Cx_k + Du_k

‖H‖∞ := sup_θ σ̄( H(e^(jθ)) ) < γ
for any γ sufficiently large. This can be accomplished either directly in terms of
the original data (A,B,C,D) or via the bilinear transformation,
z = (1 + ½sh) / (1 − ½sh),

which gives a one-to-one correspondence between the open unit disk in the z-plane and the open left half of the s-plane for any
h > 0. The µ-Tools command to synthesize discrete-time H∞ controllers,
dhinfsyn, uses this bilinear transformation and the corresponding
continuous-time µ-Tools commands.
An additional consideration is which of the controllers Kd that make ||H||∞ < γ
should be chosen. The controller that maximizes the entropy integral,
I = (γ²/2π) ∫_{−π}^{π} log det( I − γ⁻²H(e^(jθ))*H(e^(jθ)) ) · (1 − |zo|⁻²)/|e^(jθ) − zo⁻¹|² dθ
for any |zo| > 1 can be calculated. The usual central controller, the default for
hinfsyne, is taken as the one corresponding to zo = ∞ and gives a measure of
how far H(e^(jθ)) is less than γ.
Sampled-Data Systems
Continuous-time systems where measurements are sampled and then the
control signal calculated by a discrete-time controller followed by a hold are
termed sampled-data systems. Two possible H∞-type approaches to sampled
data control law design are available in µ-Tools software.
Consider the system in Figure 3-7, where P is the continuous-time generalized
plant, d is a continuous-time disturbance signal, e is a continuous-time error
signal, y is the measurement to be sampled by the sampler S, with sampling
period h, and u is the control signal, which is the output of the hold device, H,
and is constant between sampling points.
Figure 3-7: Sampled Data System Block Diagram
sup_{0≠d∈L2} ‖e‖_{L2} / ‖d‖_{L2} < γ
This will handle both the above difficulties and has been studied in detail by
Bamieh and Pearson [BamP] along with a number of other researchers
F1(s) = 1/(1 + τ1 s)

F2(s) = ωo² / (s² + 2cωo s + ωo²)

F3(s) = 1/(1 + τ3 s)
Figure 3-8: Sampled Data System Block Diagram for Simple Example
P = [P11(s)  P12(s);  P21(s)  P22(s)] = [F1F2  F1;  F3  0]
and the closed-loop is trying to match the output of F2 by the controller output
based on the sampled input to F2 filtered by F3. P can be defined as follows.
h = 0.1;
tau1 = 0.001;
om_o = 2*pi; c = 0.05;
tau3 = 0.01;
F1 = nd2sys(1,[tau1 1]);
F2 = nd2sys(1,[om_o^(-2) 2*c/om_o 1]);
F3 = nd2sys(1,[tau3 1]);
p_ic = abv(mmult(F1,sbs(F2,1)),sbs(F3,0));
ncon = 1; nmeas = 1;
minfo(p_ic)
system: 4 states 2 outputs 2 inputs
The zero controller will result in a purely continuous-time system with induced
norm given by ||P11(s)||∞.
hinfnorm(sel(p_ic,1,1))
norm between 10.01 and 10.02
achieved near 6.267
Now let us design a controller for this sampled-data system using the
corresponding sample and hold discrete-time system. The variable delay
corresponds to the number of sample delays in the controller.
gmin =.001; delay = 0;
gmax = 1;
tol = 0.001; tol2 = 0.001;
p_ic_sh = samhld(p_ic,h);
if delay>0,
   p_ic_sh = mmult(daug(1,nd2sys(1,eye(1,delay+1))),p_ic_sh);
end
[k_d,g_d,gfin_d] = ...
dhfsyn(p_ic_sh,nmeas,ncon,gmin,gmax,tol,h,inf,-1,-2);
dhfnorm(g_d,tol2,h)
norm between 0.1835 and 0.1837
achieved near 31.42
[k_sd,gfin_sd]=sdhfsyn(p_ic,1,1,gmin,gamu_d,tol,h,delay,-2);
Test bounds:0.0010 < gamma <=20.2670
The initial design gave very low estimates of the possible gain in the system.
The latter design indicates that no controller can give a low gain with this
sampled-data problem. The main difficulty with this particular problem is that
the filter F3(s) has too high a bandwidth and this gives a high potential gain for
F3 followed by the sampler. In contrast to the continuous-time case, the
calculation of a worst-case disturbance in the sampled-data case is not
straightforward. However the time domain simulation of the system is now
performed to illustrate the reason for its high gain.
tfinal = 10;
t1 = (0:h/100:tfinal)';
[nr,nc] = size(t1);
w1 = zeros(nr,nc);
for i = 1:length(t1)/200,
   w1(100*i-20:100*i) = ...
      cos(i*pi/5-pi/6)*exp(h*(-20:1:0)/(100*tau3))';
end
w = vpck(w1,t1);
[z_d,y_d,u_d] = sdtrsp(p_ic,k_d,w,h,tfinal,h/100);
vplot(z_d,'-',y_d,'.')
gain_d = norm(vunpck(z_d),2)/norm(vunpck(w),2)
gain_d =
1.2924e+01
The time responses are given in Figure 3-9 and the high gain achieved by the
disturbance being large just before the sampling instant and zero elsewhere,
hence having a relatively low total energy.
The suboptimal sampled-data controller can be simulated with the same input
as follows:
[z_sd,y_sd,u_sd] = sdtrsp(p_ic,k_sd,w,h,tfinal,h/100);
vplot(z_sd,'-',y_sd,'.')
gain_sd = norm(vunpck(z_sd),2)/vnorm(vunpck(w),2)
gain_sd =
6.3332e-01
The time responses are given in Figure 3-10. The gain in this instance is much
lower for this particular input. However other inputs can give a gain of close to
nine.
The same example can be repeated for different values of the sampling period,
h, and the controller delay, and for variations in the time constants. The two
controllers often give very similar results, however, the discrete-time results
obtained from samhld and dhfsyn can give optimistic gain estimates when
compared with those obtained by sdhfsyn.
Loop Shaping Using H∞ Synthesis
Figure 3-11: H∞ Loop Shaping Standard Block Diagram
The first step is to design a pre-compensator W1(s), so that the gain of
W2(s)G(s)W1(s) is sufficiently high at frequencies where good disturbance
attenuation is required and is sufficiently low at frequencies where good robust
stability is required. The second step is to design a feedback controller, K∞, so
that
1/b(W2GW1, K∞) := ‖ [I; K∞] (I − W2GW1K∞)⁻¹ [W2GW1  I] ‖∞ ≤ 1/ε

which will also give robust stability of the perturbed weighted plant

(N + ∆1)(M + ∆2)⁻¹   for   ‖[∆1; ∆2]‖∞ < b(W2GW1, K∞)
(Block diagram: the shaped plant W2GW1 in feedback with the controller K∞, driven by the reference r.)
where
b(G, K) = ‖ [I; K] (I − GK)⁻¹ [G  I] ‖∞⁻¹ .
The nugap is always less than or equal to the gap, so its predictions using the
above robustness results are tighter. To make use of the gap metrics in robust
Model Reduction
It is often desirable to approximate a state-space representation of a system
with a lower order state-space representation. This procedure is referred to as
a model reduction. The µ-Analysis and Synthesis Toolbox (µ-Tools) provides
several commands to aid in reducing the order of a system. These are discussed
in this section along with an example to illustrate their use.
Given a SYSTEM matrix [A B; C D], the simplest method of model reduction
is to truncate a part of the SYSTEM A matrix and remove the corresponding
columns and rows of the B and C matrices. The command strunc performs this
function. You should be careful to order the modes of the A matrix and truncate
modes that do not significantly affect the system response. The command
strans is useful in this context as it transforms the A matrix into block
diagonal form with 1 × 1 or 2 × 2 blocks corresponding, respectively, to the real
and complex poles in order of increasing magnitude. This is often done prior to
truncating high frequency modes.
Truncating high frequency modes will also affect the low frequency response of
the various transfer functions. The command sresid can be used to residualize
the truncated modes and compensate for the zero frequency contribution of
each truncated mode with an additional D matrix term in the resulting reduced
order SYSTEM matrix.
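As a minimal illustration of these two commands (this is not the manual's example; the system below is arbitrary):

sys  = nd2sys(1,[1 2 200 200]);   % an arbitrary stable third-order example
sysr = strans(sys);               % A in block-diagonal form, poles ordered by magnitude
sysred = sresid(sysr,1);          % keep 1 state; the rest folded into the D matrix
minfo(sysred)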
More advanced model reduction techniques for stable systems can be
performed with the µ-Tools commands sysbal and hankmr. sysbal performs a
balanced realization on the input SYSTEM matrix, which entails balancing the
observability and controllability Grammians (for a more detailed discussion
see [Enn], [Glo1] and [Moo]). In its simplest form, this command will remove
all unobservable and/or uncontrollable modes. sysbal also returns a vector of
the Hankel singular values of the system, which can be used to further
truncate the modes of the SYSTEM.
The following example illustrates how sysbal can be used to remove
unobservable and uncontrollable modes. Two systems, P and C, are created and
then interconnected with unity gain negative feedback. The closed-loop system,
clp, is given by
clp = PC / (1 + PC)
Suppose that
P = nd2sys(1,[1,1,1]);
C = nd2sys(1,[1,1]);
The closed-loop SYSTEM matrix clp has three states. However, suppose
instead the closed-loop system is formed as follows.
clp = mmult(P,C,minv(madd(1,mmult(P,C))));
minfo(clp)
system: 6 states 1 outputs 1 inputs
rifd(spoles(clp))
The closed-loop system, clp, contains the open-loop poles of P as well as the
closed-loop poles. Interconnecting systems with the commands mmult and madd
often leads to nonminimal realizations. You can see that the open-loop poles are
unobservable and/or uncontrollable by using sysbal with its second input, the
truncation tolerance, set to zero. The output gives the Hankel singular values
that are strictly greater than this tolerance together with a truncated balanced
realization of this order. (A strictly positive default tolerance is also available.)
Note that only five values are returned, the sixth being calculated as being
identically zero, and the fourth and fifth are both zero to machine accuracy.
strunc is then run to remove these two modes. Finally the H∞-norm of the
error between the system with the nonminimal modes truncated and the
system formed using starp is calculated as being essentially zero.
[clpr,hanksv] = sysbal(clp,0);
disp(hanksv')
6.7732e-01  4.7580e-01  4.8484e-02  8.0919e-17  1.5746e-18
clpr = strunc(clpr,3);
minfo(clpr)
system: 3 states 1 outputs 1 inputs
rifd(spoles(clpr))
hinfnorm(msub(clpc,clpr))
norm between 4.163e-16 and 4.167e-16
achieved near 0
The system sysh is stable with k poles, sysu is unstable (or anticausal), and
||sys - sysh - sysu||∞ = sig(k+1). This answer is optimal. If the 'd' option of
hankmr is used, then the following bound is guaranteed:

sig(k+1) ≤ ||sys - sysh||∞ ≤ sig(k+1) + sig(k+2) + ...
The lower bound holds for any sysh of degree k and for truncated balanced
realizations the upper bound needs to be doubled.
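A minimal sketch of these calls (this is not the manual's code, and the example system and the two-output form of hankmr are assumptions used only to illustrate the bounds):

sys = nd2sys([1 2],[1 3 3 1]);
[sysb,sig] = sysbal(sys);
k = 1;
[sysh,sysu] = hankmr(sysb,sig,k);    % ||sys - sysh - sysu||inf = sig(k+1)
syshd = hankmr(sysb,sig,k,'d');      % sig(k+1) <= ||sys - syshd||inf <= sum(sig(k+1:end))
hinfnorm(msub(sys,syshd))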
The following example creates a 15-state system and examines three, 7-state
reduced-order models generated by sysbal, hankmr, and sresid. First generate
the system, which consists of 10 real poles, a resonant pole pair, and first-order
and second-order high frequency all-pass terms.
a = -diag([.03 .05 .1 .2 .3 .4 1 3 5 10]);
b = [.03 .05 .1 .2 .3 .4 1 3 5 10]'; c = ones(1,10);
d = 0.001;
sys1 = pck(a,b,c,d);
sys2 = nd2sys([1 .1 .4],[1 .1 .1]);
sys3 = nd2sys([1 -3 1000],[1 3 1000]);
sys4 = nd2sys([1 -20],[1 20]);
sys = mmult(sys1,sys2,sys3,sys4);
minfo(sys)
system: 15 states 1 outputs 1 inputs
Next the balanced realization is formed and truncated to seven states. Its
H∞-norm error is compared with the upper and lower bounds.
The Bode plot of the original system and the seven state, balanced realization
model sysb7 is shown in Figure 3-14.
[sysb,sig] = sysbal(sys);
[mattype,p,m,n] = minfo(sysb);
disp(sig')
Columns 1 through 5
3.6422e+01 2.4906e+01 6.1381e+00 2.0261e+00 7.1689e-01
Columns 6 through 11
6.9421e-01 6.0281e-01 4.2291e-01 1.1884e-01 3.4276e-02
Columns 12 through 15
9.0070e-03 2.4834e-03 4.0909e-04 1.4544e-04 3.9050e-06
k = 7;
sysb7 = strunc(sysb,k);
sysb7_g = frsp(sysb7,omega);
vplot('bode',sys_g,sysb7_g)
title('Model reduction example: Frequency domain')
tmp = hinfnorm(msub(sys_g,sysb7_g));
disp(['H-inf error = ' num2str(tmp)])
H-inf error = 0.8553
disp(['lower bound = ' num2str(sig(k+1))])
lower bound = 0.4229
disp(['upper bound = ' num2str(2*sum(sig(k+1:n)))])
upper bound = 1.176
Now obtain a truncated residualization. It turns out that the first seven poles
with smallest modulus also have the largest H∞-norms and hence no
reordering of the poles after strans is required.
sysr = strans(sys);
sysrt7 = sresid(sysr,7);
sysrt7_g = frsp(sysrt7,omega);
tmp = hinfnorm(msub(sys_g,sysrt7_g));
disp(['H-inf error = ' num2str(tmp)])
H-inf error = 7.782
vplot('bode',sys_g,sysb7_g,sysha7_g,sysrt7_g)
title(['Four different model reduction techniques'])
The four different model Bode plots are shown in Figure 3-16. The output from
hankmr is nearly H∞-optimal and that from sysbal has a similar error and does
not change the D matrix. The frequency responses are plotted below with
sys-solid, sysbal-dashed, hankmr-dotted, and resid-dash/dot. Note that as
expected sresid matches well at low frequency but not at high frequency.
The time responses of the four systems, shown in Figure 3-17, are now
compared in response to a 1 second pulse. The same line types are used for the
display of the time domain responses. The response of sysrt7 does not match
sys well over the first 2 seconds but after 2 seconds the match is good.
pulse = siggen('t<1',[0:.1:10]);
ysys = trsp(sys,pulse);
integration step size: 0.003162
interpolating input vector (zero order hold)
ysysb7 = trsp(sysb7,pulse);
integration step size: 0.004127
interpolating input vector (zero order hold)
ysysha7 = trsp(sysha7,pulse);
integration step size: 0.003548
interpolating input vector (zero order hold)
ysysrt7 = trsp(sysrt7,pulse);
integration step size: 0.1
vplot(ysys,'-',ysysb7,':',ysysha7,'--',ysysrt7,'-.');
xlabel('Time: seconds')
title('Model reduction example: time domain')
Figure 3-17: Time Response of the Original System and the Three Reduced
Order Models
Note You should not draw general conclusions from this one example as to
the relative merits of the different schemes.
The H∞ norm of the error is not always appropriate, for example, in the system
above none of the methods accurately matches the Bode diagram at high
frequencies. It is therefore desirable to generate reduced-order models whose
frequency weighted error is small. Two methods are available in µ-Tools to
assist in this. The first method is based on frequency weighted Hankel norm
approximation as proposed in Latham and Anderson [LatA] and finds Ĝ of
degree k to minimize the Hankel norm of the stable part of W1~⁻¹(G − Ĝ)W2~⁻¹.
Note that W1(s)~ is defined as W1(−s)′. This is implemented in the functions
sfrwtbal and sfrwtbld. G is required to be stable and the weights need to be
square, stable, and minimum phase. sfrwtbal then finds a balanced
realization of the stable part of W1~⁻¹GW2~⁻¹ together with its Hankel singular
values, which in this case also provide lower bounds on the achievable error.
The resulting balanced system is approximated using hankmr (or another
method if preferred) and Ĝ constructed using sfrwtbld. This is illustrated on
the 15-state example with a sixth order approximation and with the relative
error criterion, although it is not restricted to this case.
wt1 = mmult(sys1,sys2); wt2 = 1; k = 6; n = 15;
[sysfrwtbal,sigfrwt] = sfrwtbal(sys,wt1,wt2);
disp(sigfrwt')
Columns 1 through 5
1.0000e+00 1.0000e+00 1.0000e+00 9.9697e-01 9.3961e-01
Columns 6 through 11
9.1022e-01 2.2177e-01 5.9077e-02 1.7448e-02 4.5526e-03
Columns 12 through 15
1.2419e-03 2.1046e-04 8.1357e-05 1.2386e-05 2.9765e-07
sysfrwtk = hankmr(sysfrwtbal,sigfrwt,k,'d');
sysfrwthat = sfrwtbld(sysfrwtk,wt1,wt2);
sysfrwthat_g = frsp(sysfrwthat,omega);
disp(hinfnorm(msub(1,vrdiv(sysfrwthat_g,sys_g))))
3.2401e-01
vplot('bode',sys_g,'-',sysfrwthat_g,':')
title('sys_g (-) and sysfrwthat_g (:)')
must not have fewer columns than rows. The resulting realization can be
truncated as before.
[sysrelbal,sigrel] = srelbal(sys);
sysrelbalk = strunc(sysrelbal,k);
sysrelbalk_g = frsp(sysrelbalk,omega);
disp(hinfnorm(msub(1,vrdiv(sysrelbalk_g,sys_g))));
4.7480e-01
vplot('bode',sys_g,'-',sysrelbalk_g,'--',sysfrwthat_g,':')
title('model reduction example: relative error')
Figure 3-19: Frequency Response of the Original System and Reduced Order
Models Using Relative Error Methods
It can be checked that sigrel equals sigfrwt above in this case. Both methods
perform well with results close to the lower bound and similar frequency
responses as seen in Figure 3-19. Glover [Glo2] suggests a combination of
additive and relative error by performing relative error model reduction of the
system augmented with the constant αI, so that Ĝ = (I + ∆r)G + α∆a where
‖[∆r  ∆a]‖∞ has been made small. This is easily
implemented using srelbal as follows to give a seventh order fit with a
performance between the two extremes.
k = 7; alpha = 15;
[sysrelbal,sigrel] = srelbal(abv(sys,alpha*eye(1)));
sysrelbalk = sel(strunc(sysrelbal,k),1,1);
You can evaluate the error in the same manner as was done in the frequency
weighted balanced reduction case.
References
[AdAK:] Adamjan, V.M., D.Z. Arov, and M.G. Krein, “Infinite block Hankel
matrices and related extension problems,” AMS Transl., vol. 111, pp. 133-156,
1978.
[And:] Anderson, B.D.O., “An algebraic solution to the spectral factorization
problem,” IEEE Transactions on Automatic Control, vol. AC-12, pp. 410-414,
1967.
[BallC:] Ball, J.A. and N. Cohen, “Sensitivity minimization in an H∞ norm:
Parametrization of all suboptimal solutions,” International Journal of Control,
vol. 46, pp. 785-816, 1987.
[BallH:] Ball, J.A. and J.W. Helton, “A Beurling-Lax theorem for the Lie group
U(m,n) which contains most classical interpolation theory,” J. Op. Theory, vol.
9, pp. 107-142, 1983.
[BamP:] Bamieh, B.A. and J.B. Pearson, “A general framework for linear
periodic systems with applications to H∞ sampled-data control,” IEEE
Transactions on Automatic Control, vol. AC-37, pp. 418–435, 1992.
[BoyBk:] Boyd, S., V. Balakrishnan, and P. Kabamba, “A bisection method for
computing the H∞ norm of a transfer matrix and related problems,” Math.
Control, Signals, and Systems, vol. 2, no. 3, pp. 207-220, 1989.
[ClemG:] Clements, D.J. and K. Glover, “Spectral Factorization via Hermitian
Pencils,” Linear Algebra and its Applications, Linear Systems Special Issue,
pp. 797-846, Sept. 1989.
[DesP:] Desai, U.B. and D. Pal, “A transformation approach to stochastic model
reduction,” IEEE Transactions on Automatic Control, vol. AC-29, pp.
1097-1100, 1984.
[Doy1:] Doyle, J.C., “Lecture notes in advances in multivariable control,”
ONR/Honeywell Workshop, Minneapolis, 1984.
[DoyFT:] Doyle, J.C. , B.A. Francis, and A.R. Tannenbaum, Feedback Control
Theory, Macmillian Publishing Company, New York, Toronto, 1992.
[DoyGKF:] Doyle, J.C., K. Glover, P. Khargonekar, and B. Francis,
“State-space solutions to standard H2 and H∞ control problems,” IEEE
Transactions on Automatic Control, vol. AC-34, no. 8, pp. 831-847, August
1989.
[Enn:] Enns, D., “Model reduction for control system design,” Ph.D.
dissertation, Stanford University, June, 1984.
[FoisT:] Foias, C., and A. Tannenbaum, “On the four-block problem, I,” in
Operator Theory: Advances and Applications, vol. 32, pp. 93-122, 1988 and “On
the four-block problem, II: The singular system,” Int. Equations Operator
Theory, vol. 11, pp. 726-767, 1988.
[Fran1:] Francis, B.A., A course in H∞ control theory, Lecture Notes in Control
and Information Sciences, vol. 88, Springer-Verlag, Berlin, 1987.
[FranD:] Francis, B.A. and J.C. Doyle, “Linear control theory with an H∞
optimality criterion,” SIAM J. Control and Opt., vol. 25, pp. 815-844, 1987.
[GeoS:] T. Georgiou and M. Smith, “Robust stabilization in the gap metric:
Controller design for distributed plants,” IEEE Transactions on Automatic
Control, pp. 1133-1143, vol. AC-37., no. 8, Aug. 1992.
[Glo1:] Glover, K., “All optimal Hankel-norm approximations of linear
multivariable systems and their L∞ error bounds,” International Journal of
Control, vol. 39, pp. 1115-1193, 1984.
[Glo2:] K. Glover, “Multiplicative approximation of linear multivariable
systems with L∞ error bounds,” Proceedings of the American Control
Conference, Seattle, pp. 1705-1709, 1986.
[Glo3:] Glover, K., “Tutorial on Hankel-norm approximation,” in From Data to
Model (J.C. Willems ed.), Springer-Verlag, pp. 26-48, 1989.
[GloD:] Glover, K. and J.C. Doyle, “State-space formulae for all stabilizing
controllers that satisfy an H∞ norm bound and relations to risk sensitivity,”
Systems and Control Letters, vol. 11, pp. 167-172, August 1989.
[GloD2:] Glover, K. and J. Doyle, “A state-space approach to H∞ optimal
control,” in Three Decades of Mathematical Systems Theory: A Collection of
Surveys at the Occasion of the 50th Birthday of Jan C. Willems, H. Nijmeijer
and J.M. Schumacher (Eds.), Springer-Verlag Lecture Notes in Control and
Information Sciences vol. 135, 1989.
[GloM:] Glover, K. and D. Mustafa, “Derivation of the Maximum Entropy
H∞-controller and a State-space formula for its Entropy,” International
Journal of Control, vol. 50, no. 3, pp. 899-916, Sept. 1989.
[GloLDKS:] Glover, K., D.J.N. Limebeer, J.C. Doyle, E.M. Kasenally, and M.G.
Safonov, “A characterization of all solutions to the four block general distance
problem,” SIAM J. Control and Opt., vol. 29, no. 2, pp. 283-324, March, 1991.
[GohLR:] Gohberg, I., P. Lancaster and L. Rodman, “On the Hermitian
solutions of the symmetric algebraic Riccati equation,” SIAM J. Control and
Opt., vol. 24, no. 6, pp. 1323-1334, 1986.
[Gre:] Green, M., “A relative error bound for balanced stochastic truncation,”
IEEE Transactions on Automatic Control, vol. AC-33, pp. 961-965, 1988.
[GGLD:] Green, M., K. Glover, D. Limebeer, and J.C. Doyle, “A J-spectral
factorization approach to H∞ control,” SIAM J. Control and Opt., Vol. 28(6), pp.
1350-1371, 1990.
[HarK:] Hara, S. and P.T. Kabamba, “On computing the induced norm of
sampled data feedback systems,” Proceedings of the American Control
Conference, Green Valley, AZ, pp. 319-320, 1990.
[HauS:] Hautus, M.L.J. and L.M. Silverman, “System structure and singular
control.” Linear Algebra Applications, vol. 50, pp. 369-402, 1983.
[Hung:] Hung, Y.S., “RH∞-optimal control-Part I: Model matching, Part II:
Solution for controllers,” International Journal of Control, vol. 49, pp. 675-684,
1989.
[JonJ:] Jonckheere, E.A., and J.C. Juang, “Fast computation of achievable
performance in mixed sensitivity H∞ design,” IEEE Transactions on Automatic
Control, vol. AC-32, pp. 896-906, 1987.
[KabH:] Kabamba, P.T. and S. Hara, “Worst case analysis and design of
sampled data control systems,” IEEE Transactions on Automatic Control, vol.
AC-38, no. 9, pp. 1337-1357, Sept. 1993.
[KhaPZ:] Khargonekar, P.P., I.R. Petersen, and M.A. Rotea, “H∞ optimal
control with state-feedback,” IEEE Transactions on Automatic Control, vol.
AC-33, pp. 786-788, 1988.
[Kim:] Kimura, H., “Synthesis of H∞ controllers based on conjugation,”
Proceedings of IEEE Conference on Decision and Control, Austin, TX., pp.
1207-1211, 1988.
[Kwak:] Kwakernaak, H., “A polynomial approach to minimax frequency
domain optimization of multivariable feedback systems,” International
Journal of Control, pp. 117-156, 1986.
[Sara:] Sarason, D., “Generalized interpolation in H∞,” AMS. Trans., vol. 127,
pp. 179-203, 1967.
[Silv:] Silverman, L.M., “Inversion of multivariable linear systems,” IEEE
Transactions on Automatic Control, vol. AC-14, pp. 270-276, 1969.
[Toi:] Toivonen, H.T., “Sampled-data control of continuous-time systems with
an H∞ optimality criterion,” Report 90-1, Dept. of Chemical Engineering, Abo
Akademi, Finland, Jan., 1990.
[VanD:] Van Dooren, P., “A generalized eigenvalue approach for solving Riccati
equations,” SIAM J. Sci. Comput., vol. 2, pp. 121-135, 1981.
[Vin:] Vinnicombe, G., “Measuring Robustness of Feedback Systems,” PhD
dissertation, Department of Engineering, University of Cambridge, 1993.
[WanS:] Wang, W. and M.G. Safonov, “A tighter relative error bound for
balanced stochastic truncation,” Systems and Control Letters , vol. 14, pp.
307-317, 1990.
[Will1:] Willems, J.C., “Least-squares stationary optimal control and the
algebraic Riccati equation,” IEEE Transactions on Automatic Control, vol.
AC-16, pp. 621-634, 1971.
[Will2:] Willems, J.C., “Almost invariant subspaces: an approach to high gain
feedback design – Part I: almost controlled invariant subspaces.” IEEE
Transactions on Automatic Control, vol. AC-26, pp. 235-252, 1981.
[WilKS:] Willems, J.C., A. Kitapci and L.M. Silverman, “Singular optimal
control: a geometric approach.” SIAM J. Control and Opt., vol. 24, pp. 323-337,
1986.
[Wonh:] Wonham, W.M., Linear Multivariable Control: A Geometric Approach,
third edition, Springer-Verlag, New York, 1985.
[YouJB:] Youla, D.C., H.A. Jabr, and J.J. Bongiorno, “Modern Wiener-Hopf
design of optimal controllers: part II,” IEEE Transactions on Automatic
Control, vol. AC-21, pp. 319-338, 1976.
[Yam:] Yamamoto, Y., “A new approach to sampled-data control systems—
A function space viewpoint with applications to tracking problems,”
Proceedings of the 29th Conference on Decision and Control, pp. 1882-1887,
1990.
4
Modeling and Analysis of
Uncertain Systems
The advanced features of the µ-Analysis and Synthesis Toolbox (µ-Tools) are
aimed at
Representing Uncertainty
In this section, we describe the linear fractional representation of uncertainty
that is used in µ-Tools. The basic idea in modeling an uncertain system is to
separate what is known from what is unknown in a feedback-like connection,
and bound the possible values of the unknown elements. This is a direct
generalization of the notion of a state-space realization, where a linear
dynamical system is written as a feedback interconnection of a constant matrix
and a very simple dynamic element made up of a diagonal matrix of delays or
integrators. This realization greatly facilitates manipulation and computation
of linear systems, and linear fractional transformations provide the same
capability for uncertain systems.
(Diagram: a block M with input r and output v, so that v = Mr.)
If r and v are partitioned into a top part and bottom part, then we can draw the
relationship in more detail, explicitly showing the partitioned matrix M.
(Diagram: the same relationship drawn with r and v partitioned into top and bottom parts, explicitly showing the blocks of M.)
The linear fractional transformation of M by ∆ interconnects these two
elements, as follows,
(Diagram: the partitioned matrix M = [M11  M12; M21  M22] with its lower loop closed by ∆.)

Eliminating v2 and r2 leaves the relationship between r1 and v1:

v1 = [M11 + M12∆(I − M22∆)⁻¹M21] r1 =: FL(M,∆) r1
The notation FL indicates that the lower loop of M is closed with ∆. It is more
traditional to write a block diagram with the arrows reversed, as in
(Diagram: the same LFT drawn with the arrows reversed.)
This still represents the same formula, v1 = FL(M,∆)r1, and the choice of
directions is a matter of taste. We prefer to write as much as is convenient of a
block diagram with the arrows going right to left to be consistent with matrix
and operator composition, which goes the same way. This simple convention
reduces the confusion in going between block diagrams and equations,
particularly when blocks have multiple inputs.
If the upper loop of M is closed with Ω, then we have
v2 = FU(M,Ω) r2,

where

FU(M,Ω) := M22 + M21Ω(I − M11Ω)⁻¹M12.
Parametric Uncertainty
How do we use LFTs to represent an uncertain parameter? Suppose c is a
parameter, and it is known to take on values between 2.0 and 2.8. Write this as
c = 2.4 + 0.4δc where δc ∈ [–1,1]. This is a linear fractional
transformation. Indeed, check that
c = FL( [2.4  0.4; 1  0], δc )
If the gain c–1 also appears, the LFT representation can still be used, because
inverses of LFTs are LFTs (on the same δ). Note that
c⁻¹ = 1 / (2.4 + 0.4δc)
    = (1/2.4) [ 1 + (−(1/6)δc) / (1 − (−1/6)δc) ]
    = FL( [1/2.4  −1/6; 1/2.4  −1/6], δc )
The general case for inverses can be solved with the matrix inversion lemma.
Specifically, given a matrix H, there exist matrices HLI and HUI such that for
all ∆ and Ω

[FL(H,∆)]⁻¹ = FL(HLI,∆),   [FU(H,Ω)]⁻¹ = FU(HUI,Ω)

In fact, with H = [H11  H12; H21  H22], the formulas for HLI and HUI are just

HLI = [H11⁻¹         −H11⁻¹H12;
       H21H11⁻¹       H22 − H21H11⁻¹H12]

HUI = [H11 − H12H22⁻¹H21    H12H22⁻¹;
       −H22⁻¹H21            H22⁻¹]
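As a quick numerical check of these formulas (not from the manual), the LFTs can be evaluated with the starp command described later in this chapter:

H11 = randn(2,2); H12 = randn(2,3);
H21 = randn(3,2); H22 = randn(3,3);
H   = [H11 H12; H21 H22];
HLI = [inv(H11) -inv(H11)*H12; H21*inv(H11) H22-H21*inv(H11)*H12];
Delta = randn(3,3);
norm(inv(starp(H,Delta)) - starp(HLI,Delta))   % should be near machine precision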
with –1 ≤ δm, δc, δk ≤ 1. Note that this represents 50% uncertainty in m, 30%
uncertainty in c, 40% uncertainty in k. A block diagram is shown in Figure 4-1.
Figure 4-1: Second Order Mass/Damper/Spring System
Define matrices

Mmi := [−0.5  1/m; −0.5  1/m],   Mc := [c  0.3c; 1  0],   Mk := [k  0.4k; 1  0]
Since we will eventually separate what is known (Mmi, Mc, MK, and
integrators) from what is unknown (δm, δc, δk), redraw the original block
diagram with the LFT representation of the uncertain elements, leaving out
the δs, but label the signals which go to and from the δs. This is shown in
Figure 4-3.
Figure 4-3: Known Part of Uncertain System
Let Gmck be the four-input (wm, wc, wk, u), four-output (zm, zc, zk, y), two-state
system shown in Figure 4-3 and depicted in Figure 4-4.
Figure 4-4: Macro View of Known System
Note that Gmck depends only on m, c, k, 0.5, 0.4, 0.3, and the original
differential equation which relates u to y. Hence, Gmck is known. Also, the
uncertain behavior of the original system is characterized by an upper linear
fractional transformation, FU, of Gmck with a diagonal uncertainty matrix as
shown in Figure 4-5.
y = FU( Gmck, diag[δm, δc, δk] ) u

(Figure 4-5: Gmck with the diagonal uncertainty matrix diag[δm, δc, δk] closing its upper loop.)
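One way to build Gmck is with the interconnection program sysic. This sketch is not the manual's code, and the nominal values of m, c, and k are placeholders, since they are not given in this excerpt:

mbar = 3; cbar = 1; kbar = 2;              % assumed nominal values of m, c, k
Mmi = [-0.5 1/mbar; -0.5 1/mbar];
Mc  = [cbar 0.3*cbar; 1 0];
Mk  = [kbar 0.4*kbar; 1 0];
int1 = nd2sys(1,[1 0]);                    % integrator: xddot -> xdot
int2 = nd2sys(1,[1 0]);                    % integrator: xdot  -> x
systemnames = 'Mmi Mc Mk int1 int2';
inputvar = '[wm; wc; wk; u]';
outputvar = '[Mmi(1); Mc(2); Mk(2); int2]';    % [zm; zc; zk; y]
input_to_Mmi = '[wm; u - Mc(1) - Mk(1)]';      % force balance: m*xddot = u - c*xdot - k*x
input_to_Mc  = '[int1; wc]';
input_to_Mk  = '[int2; wk]';
input_to_int1 = '[Mmi(2)]';
input_to_int2 = '[int1]';
sysoutname = 'Gmck';
cleanupsysic = 'yes';
sysic
minfo(Gmck)      % two states, four inputs, four outputs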
µ-Tools Commands for LFTs

Suppose T and B are partitioned as

T = [T11  T12; T21  T22],   B = [B11  B12; B21  B22]

such that the matrix product T22B11 is well defined and, in fact, square.
If I – T22B11 is invertible, define the star product of T and B to be
S(T,B) := [FL(T,B11)               T12(I − B11T22)⁻¹B12;
           B21(I − T22B11)⁻¹T21    FU(B,T22)]
(Diagram: T stacked above B, with n1 signals fed from T to B and n2 signals fed from B to T, forming the star product S.)
Here n1 is the row dimension of [T21  T22] (and the column dimension of [B11; B21]),
which is the number of signals which are fed from T to B. Similarly, n2 is the
row dimension of [B11  B12] (and the column dimension of [T12; T22]), which is the
number of signals which are fed from B to T. The remaining inputs and outputs
appear in the output matrix S in the same order as they are in T and B. If the
dimension arguments n1 and n2 are omitted, then the following takes place:
• n1 = min([ynum(T) unum(B)])
• n2 = min([unum(T) ynum(B)])
so all possible loop closures are made. Hence, LFTs (which are special cases of
star products) are easily computed using starp without dimension arguments.
M = crandn(10,8);
Delta = crandn(3,4);
Omega = crandn(6,3);
flmd = starp(M,Delta);
fumo = starp(Omega,M);
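As a quick check (not from the manual) that starp agrees with the LFT formula derived earlier, partition the constant matrix M from the example above and compare:

M11 = M(1:6,1:5);  M12 = M(1:6,6:8);
M21 = M(7:10,1:5); M22 = M(7:10,6:8);
formula = M11 + M12*Delta*inv(eye(4) - M22*Delta)*M21;
norm(flmd - formula)        % should be of the order of machine precision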
Gmck_g = frsp(Gmck,logspace(-1,1,100));
delnom = diag([0;0;0]);
rifd(spoles(starp(delnom,Gmck)))
delnpn = diag([-1;1;-1]);
rifd(spoles(starp(delnpn,Gmck)))
delpnp = diag([1;-1;1]);
rifd(spoles(starp(delpnp,Gmck)))
vplot('bode',starp(delnom,Gmck_g),'-',...
starp(delnpn,Gmck_g),'.-',starp(delpnp,Gmck_g),'--')
(Bode plots: frequency response magnitudes and phases of the nominal and perturbed MCK systems.)
Interconnections of LFTs
By now, you probably have noticed an extremely important property of LFTs
— typical algebraic operations such as frequency response, cascade
connections, parallel connections, and feedback connections preserve the LFT
structure. This means that normal interconnections of LFTs are still in the
form of an LFT. Hence, the LFT is an excellent choice for a general hierarchical
representation of uncertainty. For illustrative purposes, we consider a few
additional examples in this section.
Consider a cascade connection of FL(M,∆) with FU(G,Ω), so that y =
FL(M,∆)FU(G,Ω)u. This is shown below.
Draw a box around M and G, isolating them from ∆ and Ω, calling the boxed
items Q.
Q is made up of the elements of M and G, and relates the variables (u, w∆, wΩ)
to (y, z∆, zΩ)
[y; z∆; zΩ] = [M11G22  M12  M11G21;  M21G22  M22  M21G21;  G12  0  G11] [u; w∆; wΩ]

Since diag[∆, Ω] is the matrix that relates [z∆; zΩ] to [w∆; wΩ], it is clear that the cascade
connection of FL(M,∆) and FU(G,Ω) is yet another LFT, namely

FL(M,∆)FU(G,Ω) = FL( Q, diag[∆, Ω] )
as shown below.

(Diagram: the cascade redrawn as the single block Q with diag[∆, Ω] closing the loop.)
(Block diagram: an interconnection of G1, G2, and G3, each wrapped with its own perturbation, with external inputs d1, d3, u and outputs including e1 and y3.)
can be drawn as a single LFT on the diagonal matrix containing the three
individual perturbations,
Here, P depends only on G1, G2, G3, and the interconnection diagram and is
easy to calculate with the interconnection program sysic. For instance, if every
line in the diagram represents a scalar signal, then correct sysic commands to
create P are as follows:
G1 = sysrand(8,3,4);
G2 = sysrand(7,2,2);
G3 = sysrand(6,3,3);
systemnames = 'G1 G2 G3';
sysoutname = 'P';
inputvar = '[w1;w2;w3;d1;d3;u]';
input_to_G1 = '[d1;G2(1);u;w1]';
input_to_G2 = '[G3(1);w2]';
input_to_G3 = '[d3;G1(2);w3]';
outputvar = '[G1(3);G2(2);G3(3);G1(1);G3(2)]';
cleanupsysic = 'yes';
sysic
K · [1/(τs + 1)] · [(−γs + 1)/(γs + 1)]          (4-1)
        (lag)            (Pade)
Assume that each of the terms K, γ, and τ is uncertain, with K ∈ [1 3], γ ∈ [0.05
0.15], and τ ∈ [1 2]. Further assume that K and γ are linearly related, so that
as K takes on values from 1 → 3, γ simultaneously takes on values from 0.05 →
0.15. Represent these variations with two uncertainties, δ1 and δ2, with
K = 2 + δ1, γ = 0.1 + 0.05δ1, τ = 1.5 + 0.5δ2
where –1 ≤ δ1, δ2 ≤ 1.
Writing block diagrams of (−γs + 1)/(γs + 1) and of 1/(τs + 1) in terms of δ1 and δ2
leads to the matrices
Mk := [0  1; 1  2],   MγI := [−1/2  10; −1/2  10],   MτI := [−1/3  2/3; −1/3  2/3]
The first-order Pade system is of the form FU(GP,δ1), and the first-order lag is
of the form FU(GL,δ2), where GP and GL are known, two-input, two-output,
one-state systems
GP is shown in detail in Figure 4-6 and can be built easily using sysic.
For instance
mgammai = [-0.5 10; -0.5 10];
int = nd2sys([1],[1 0]);
systemnames = 'mgammai int';
sysoutname = 'GP';
inputvar = '[w_gamma;u1]';
input_to_mgammai = '[w_gamma;2*u1-int]';
input_to_int = '[mgammai(2)]';
outputvar = '[mgammai(1);int-u1]';
sysic;
Similar calculations are necessary for GL, whose internal structure is shown in
Figure 4-7.
mtaui = [-1/3 2/3; -1/3 2/3];
int = nd2sys([1],[1 0]);
systemnames = 'mtaui int';
sysoutname = 'GL';
inputvar = '[w_tau;u2]';
input_to_mtaui = '[w_tau;u2-int]';
input_to_int = '[mtaui(2)]';
outputvar = '[mtaui(1);int]';
sysic;
Figure 4-8: Known Part of Uncertain Gain/Lag/Pade

Figure 4-9: Cascade of Pade, Lag, and Gain LFTs
y = FU( Gproc, [δ1I2  0; 0  δ2] ) u
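A sketch (not the manual's code) of one way to assemble Gproc from GP, GL, and the gain LFT data, with the two δ1 channels grouped first:

mk = [0 1; 1 2];                         % gain LFT data, K = F_U(mk, delta_1)
systemnames = 'GP GL mk';
inputvar = '[w_gamma; w_k; w_tau; u]';   % delta_1 channels first, then delta_2
outputvar = '[GP(1); mk(1); GL(1); mk(2)]';
input_to_GP = '[w_gamma; u]';
input_to_GL = '[w_tau; GP(2)]';
input_to_mk = '[w_k; GL(2)]';
sysoutname = 'Gproc';
cleanupsysic = 'yes';
sysic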
[ẋ(t); y(t)] = [A0 + Σ_{i=1}^{m} δiAi    B0 + Σ_{i=1}^{m} δiBi;
                C0 + Σ_{i=1}^{m} δiCi    D0 + Σ_{i=1}^{m} δiDi] [x(t); u(t)]

             = ( [A0  B0; C0  D0] + Σ_{i=1}^{m} δi [Ai  Bi; Ci  Di] ) [x(t); u(t)]          (4-2)
[Ai  Bi; Ci  Di] ∈ R^((n+ny)×(n+nu))

Let

ri := rank [Ai  Bi; Ci  Di]

and factor

[Ai  Bi; Ci  Di] = [Ei; Fi] [Gi  Hi]

where

[Ei; Fi] ∈ R^((n+ny)×ri),   [Gi  Hi] ∈ R^(ri×(n+nu))
Now, define a linear system Gss, with extra inputs and outputs via the state
equations as shown in Figure 4-10.
(Figure 4-10: The system Gss, with inputs u, w1, …, wm and outputs y, z1, …, zm.)
[ẋ; y; z1; …; zm] = [A0  B0  E1  …  Em;
                     C0  D0  F1  …  Fm;
                     G1  H1   0  …   0;
                      ⋮   ⋮   ⋮       ⋮;
                     Gm  Hm   0  …   0] [x; u; w1; …; wm]

∆ = {diag[δ1Ir1, …, δmIrm] : δi ∈ R}
This approach, developed in [MorM], has its roots in the Gilbert realization,
which is discussed in [Kai].
As an example, consider a two-state, single-input, single-output system with a
single parameter dependence.
[A(δ)  B(δ); C(δ)  D(δ)] := [0  1  0; −16  −0.16  1; 16  0  0] + δ [0  0  0; 6.4  0  0; 0  0  0]

[0  0  0; 6.4  0  0; 0  0  0] = [0; 6.4; 0] [1  0  0]
[ẋ1; ẋ2; y; z1] = [0  1  0  0;  −16  −0.16  1  6.4;  16  0  0  0;  1  0  0  0] [x1; x2; u; w1]
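A quick check (not from the manual) that closing the (w1, z1) channel of this system with a scalar δ recovers the δ-dependent dynamics:

A = [0 1; -16 -0.16]; B = [0 0; 1 6.4];
C = [16 0; 1 0];      D = zeros(2,2);
Gss = pck(A,B,C,D);              % inputs [u; w1], outputs [y; z1]
delta = 0.5;
rifd(spoles(starp(Gss,delta)))   % poles of A + delta*[0 0; 6.4 0]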
Unmodeled Dynamics
Models of uncertainty are not limited to parametric uncertainty. Often, a low
order, nominal model, which suitably describes the low-mid frequency range
behavior of the plant is available, but the high-frequency plant behavior is
uncertain. In this situation, even the dynamic order of the actual plant is not
known, and something richer than parametric uncertainty is needed to
represent this uncertainty. One common approach for this type of uncertainty
is to use a multiplicative uncertainty model. Roughly, this allows you to specify
a frequency-dependent percentage uncertainty in the actual plant behavior.
In order to specify the uncertainty set, we need to choose two things:
M(G, Wu) := { G̃ : |G̃(jω) − G(jω)| / |G(jω)| ≤ |Wu(jω)| for all ω }
with the additional restriction that the number of right-half plane (RHP) poles
of G̃ be equal to the number of right-half plane poles of G. At each frequency,
|Wu(jω)| represents the maximum potential percentage difference between all
of the plants represented by M(G,Wu) and the nominal plant model G. In that
sense, M(G,Wu) represents a ball of possible plants, centered at G. On a
Nyquist plot, a disk of radius |Wu(jω)G(jω)|, centered at G(jω) is the set of
possible values that G̃ ( jω ) can take on, due to the uncertainty description.
G := 1/(s − 1)

Wu := (1/4)·((1/2)s + 1) / ((1/32)s + 1)
G = nd2sys([1],[1 -1]);
Wu = nd2sys([0.5 1],[0.03125 1],0.25);
omega = logspace(-2,2,80);
vplot('liv,lm',frsp(G,omega),frsp(Wu,omega))
It is instructive to consider sets of models that are similar to the nominal model
G, and see to what extent the sets are contained in M(G,Wu). Consider the
following problem. Determine the smallest β̄ such that

{ G·β/(s + β) : β > β̄ } ⊂ M(G, Wu)
One approach is to plot

| G̃(jω) − G(jω) | / | G(jω)Wu(jω) |

for various values of β, and determine the lower limit β̄ by comparing the plot's
magnitude relative to 1. Using the command ex_unc, it is easy to carry out this
procedure. For instance,
beta = 1;
Gtilde = mmult(nd2sys(beta,[1 beta]),G)
ex_unc(G,Gtilde,Wu,omega); % above 1
beta = 10;
Gtilde = mmult(nd2sys(beta,[1 beta]),G)
ex_unc(G,Gtilde,Wu,omega); % below 1
file: ex_unc.m
function ex_unc(G,Gtilde,Wu,omega)
Gg = frsp(G,omega);
Gtildeg = frsp(Gtilde,omega);
Wug = frsp(Wu,omega);
percdiff = vabs(vrdiv(msub(Gtildeg,Gg),mmult(Gg,Wug)));
vplot('liv,m',percdiff,1);
A few more trials reveal β ≈ 6.1. As an exercise, carry out similar calculations
on the following examples:
• Define δ̲ > 0 and δ̄ > 0 as the smallest and largest numbers such that

  { (1 + δ)/(s − 1 − δ) : δ̲ ≤ δ ≤ δ̄ } ⊂ M(G, Wu)

• Define τ̄ > 0 as the largest number such that

  { G·(−τs + 1)/(τs + 1) : τ ≤ τ̄ } ⊂ M(G, Wu)

• Define ξ̲ > 0 and ξ̄ > 0 as the smallest and largest numbers such that

  { G·70²/(s² + 2ξ·70·s + 70²) : ξ̲ ≤ ξ ≤ ξ̄ } ⊂ M(G, Wu)
We return to this example, and these specific extreme plants later in the
“Analysis of Controllers for an Unstable System” section in Chapter 7.
Now, by defining ∆ := (G̃ − G)/(G·Wu), each G̃ can be drawn as in Figure 4-11.
(Figure 4-11: G with the multiplicative perturbation Wu∆ wrapped around it, covering the behavior of plants in M(G, Wu).)
∆ := [((1/32)s + 1)·δ·s] / [(1/4)((1/2)s + 1)(s − 1 − δ)]          (4-3)

G(1 + Wu∆) = (1/(s − 1)) · [ 1 + ((1/4)((1/2)s + 1)/((1/32)s + 1))·∆ ]
           = [(s − 1)(1 + δ)] / [(s − 1)(s − 1 − δ)]
           = (1 + δ)/(s − 1 − δ)

max_{ω∈R} |∆(jω)| ≤ 1
(Block diagram: the multiplicative uncertainty model redrawn as a known two-input, two-output system Hmult, with ∆ closing the upper loop.)
The uncertain component is now represented as y = FU(Hmult,∆)u, with Hmult
having the value
[z; y] = Hmult [w; u],   Hmult = [0  Wu; G  G]
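One way (though not necessarily the manual's) to form Hmult with the abv and sbs commands; note that the duplicated copy of G gives a nonminimal (three-state) realization:

G  = nd2sys([1],[1 -1]);
Wu = nd2sys([0.5 1],[0.03125 1],0.25);
Hmult = abv(sbs(0,Wu),sbs(G,G));
minfo(Hmult)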
A(G, Wu) := { G̃ : |G̃(jω) − G(jω)| ≤ |Wu(jω)| }

(Block diagram: the additive uncertainty model, G̃ = G + Wu∆, covering the behavior of plants in A(G, Wu).)
Mixed Uncertainty
Uncertainty may be mixed, including both parametric uncertainty and
unmodeled dynamics. For an example, take the uncertain second order system
from “Parametric Uncertainty” section, with parametric uncertainty in the
parameters m, c, and k and additional high-frequency unmodeled dynamics
using the multiplicative uncertainty model. The block diagram is
(Block diagram: Gmck with the parametric uncertainty diag[δm, δc, δk] closing its upper loop and an additional unmodeled-dynamics perturbation ∆(s), weighted by Wu, around the plant; the known part is collected into the system Hmix.)
Analyzing the Effect of LFT Uncertainty
Figure 4-13: General Diagram for Robust Stability Analysis
Figure 4-14: Uncertain Closed-Loop System
Note that the ordering of the uncertainty elements is (and must be) consistent
with the ordering of the input/output channels of the known systems (Gmck,
Gproc, and Hmix).
As a notational convention, once the uncertainty structure has been defined for
any given problem, we will use the symbol ∆ to represent all perturbation
matrices with the appropriate structure. For example, in the mixed
uncertainty example of the “Mixed Uncertainty” section, we have
∆ := { diag[δ1, δ2, δ3, δ4(s)] : δ1, δ2, δ3 ∈ R, δ4(s) a stable transfer function }
Now that the uncertainty structure has been represented, we can compute the
size of perturbations to which the system is robustly stable. We need to
calculate a frequency response of M, and then compute the structured singular
value (µ) of M with respect to the uncertainty set ∆. At each frequency, the
matrix M(jω) is passed to the µ algorithm, and bounds for µ(M(jω)) are
computed, giving upper and lower bound functions of frequency, which are
plotted. The notation for µ will be µ∆(M(jω)), to emphasize the dependency of
the function not only on M, but also on the uncertainty set ∆.
Suppose the peak (across frequency ω) of µ∆(M(jω)) is β. This means that
for all perturbation matrices ∆ with the appropriate structure (i.e., any ∆ ∈ ∆)
satisfying maxω σ̄[∆(jω)] < 1/β, the perturbed system is stable. Moreover,
there is a particular perturbation matrix ∆ ∈ ∆ satisfying maxω σ̄[∆(jω)] = 1/β
that causes instability. Hence, we think of

1 / ( maxω µ∆(M(jω)) )

as a robust stability margin for the uncertain system.
However, the software does not compute µ exactly, but upper and lower bounds. Let βu be the peak (across
frequency) of the upper bound for µ, and βl be the peak of the lower bound for
µ. Then:

• For all perturbation matrices ∆ ∈ ∆ satisfying maxω σ̄[∆(jω)] < 1/βu, the perturbed system is stable.

• There is a particular perturbation matrix ∆ ∈ ∆ satisfying maxω σ̄[∆(jω)] = 1/βl that causes instability.
[AH  BH; CH  DH] := [0  1  0  0;  −16  −0.16  1  1;  6.4  0  0  0;  16  0  0  0]
(Figure 4-16: The uncertain plant FU(H, δ1) with multiplicative uncertainty weighted by Wu (perturbation δ2), disturbance d, noise n, error e, and feedback controller K.)
F U ( H, δ 1 ) = ---------------------------------------------------------------------
2
s + 0.16s + 16 ( 1 + 0.4δ 1 )
7s + 8.5
W u = ---------------------
s + 42
2
– 12.56 s + 17.32s + 67.28
K = ------------------------------------------------------------------------------------- .
3 2
s + 20.37s + 136.74s + 179.46
Let M(s) in Figure 4-17 denote the closed-loop transfer function matrix from
Figure 4-16, after omitting δ1 and δ2. The dimensions of M are six states, four
inputs and three outputs.
(Figure 4-17: The closed-loop system M(s), with inputs w1, w2, d, n and outputs z1, z2, e.)
In the robust stability analysis, we are only concerned about the stability of the
perturbed closed-loop system, and in that case, only the transfer function that
the perturbation matrix ∆ := diag[δ1, δ2(s)] sees is important. For notational
purposes, drop the s from M(s), and partition M into

M = [M11  M12; M21  M22]          (4-4)
Figure 4-18: Example: Robust Stability
Hence, for robust stability calculations we only need the submatrix M11 (for
robust performance calculations, presented in the “Robust Performance”
section, we will need the entire matrix M).
The computational steps for analysis are given below:
2 Calculate frequency response, and select the first two input and output
channels for the robust stability test.
omega = logspace(-2,2,200);
M_g = frsp(M,omega);
M11_g = sel(M_g,1:2,1:2);
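The remaining steps are not shown in this excerpt; the µ calculation would typically look like the following sketch, where the assumed block structure describes the two 1 × 1 perturbation blocks:

rs_deltaset = [1 1; 1 1];
[rsbnds,rsdvec,rssens,rspvec] = mu(M11_g,rs_deltaset);
vplot('liv,m',rsbnds)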
(Plot: µ upper and lower bounds for robust stability versus frequency, rad/s.)
The peak is about 0.89 (upper and lower bounds are very close in this example)
and occurs at ω = 5.4 radians/second. Hence, stability is guaranteed for all
perturbations with appropriate structure satisfying maxω σ̄[∆(jω)] < 1/0.89 ≈ 1.12.
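The construction of the destabilizing perturbation pert used in step 7 is not shown here; it would typically be built from the µ lower-bound data along the lines of this sketch:

pert = dypert(rspvec,rs_deltaset,rsbnds);
hinfnorm(pert)      % approximately 1/0.89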
7 Verify that the perturbed system is indeed unstable. Note the location of the
perturbed closed-right-half-plane pole and its relationship with the peak of
the µ-lower bound plot.
pertM = starp(pert,M);
spoles(pertM)
Using µ to Analyze Robust Performance

Figure 4-19: Robust Performance Formulation

‖T‖∞ := max_{ω∈R} σ̄(T(jω)) ≤ 1
(Diagram: T in feedback with a fictitious perturbation ∆F; the loop is stable for all ‖∆F‖∞ < 1.)

Figure 4-20: Performance as Robustness
Figure 4-21: Robust Stability with Augmented Uncertainty
∆P = [∆  0; 0  ∆F]
Suppose the peak of the µ-plot is β. This means that for all perturbation
matrices ∆ ∈ ∆ satisfying maxω σ̄[∆(jω)] < 1/β, the perturbed system is stable
and ||FU(M,∆)||∞ ≤ β. Moreover, there is a particular perturbation ∆ ∈ ∆
satisfying maxω σ̄[∆(jω)] = 1/β that causes either ||FU(M,∆)||∞ = β, or instability.
However, the software does not compute µ exactly, but upper and lower
bounds. Let βu be the peak (across frequency) of the upper bound for µ, and βl
be the peak (across frequency) of the lower bound for µ. Then:

• For all perturbation matrices ∆ ∈ ∆ satisfying maxω σ̄[∆(jω)] < 1/βu, the perturbed system is stable and ||FU(M,∆)||∞ ≤ βu.

• There is a particular perturbation ∆ ∈ ∆ satisfying maxω σ̄[∆(jω)] = 1/βl that causes either ||FU(M,∆)||∞ ≥ βl, or instability.
output of the plant. The (weighted) open-loop transfer functions satisfy 2.6 ≤
Ted ≤ 4.0 depending on the value of 1 ≥ δ1 ≥ –1, and of course Ten = 0. Hence the
objective of control is to satisfy
maxω [ |Ted(jω)|² + |Ten(jω)|² ]^(1/2) ≤ 1

The perturbed closed-loop transfer function matrix is

T = FU( M, diag[δ1, δ2(s)] )
Now, the nominal value of T is simply FU(M,02×2), which is just M22. Plot the
magnitude (versus frequency) of these elements to see the nominal closed-loop
transfer function. In the next command, we extract M22 in two different but
equivalent manners.
vplot('liv,lm',sel(M_g,3,3:4),starp(zeros(2,2),M_g))
The robust performance µ calculation gives information about how much these
transfer functions are affected by the linear fractional perturbations δ1and δ2.
The calculation is carried out on the entire matrix M, using the augmented
uncertainty structure.
[rpbnds,rpdvec,rpsens,rppvec,rpgvec] = mu(M_g,aug_deltaset);
vplot('liv,m',rpbnds);
(Plot: robust performance µ upper and lower bounds versus frequency, rad/s.)
The peak value (of both lower and upper bounds) is about 1.02. This implies
that robust performance is not quite achieved. In other words, for every
perturbation ∆ = diag[δ1, δ2(s)] satisfying maxω σ̄[∆(jω)] < 1/1.02, we are
guaranteed stability and ||FU(M,∆)||∞ ≤ 1.02. Moreover, there is a perturbation
∆ = diag[δ1, δ2(s)] with maxω σ̄[∆(jω)] ≈ 1/1.02 < 1 such that ||FU(M,∆)||∞ ≥ 1.02.
This can be constructed with dypert, and put into the closed-loop system to
verify the degradation of performance.
pert = dypert(rppvec,aug_deltaset,rpbnds,[1;2]);
spoles(pert)
seesys(pert)
hinfnorm(pert)
hinfnorm(starp(pert,M))
vplot('liv,lm',vnorm(starp(pert,M_g)))
vplot('liv,lm',starp(pert,M_g))
W(M, ∆, α) := max { ||FU(M,∆)||∞ : ∆ ∈ ∆, maxω σ̄[∆(jω)] ≤ α }
0.8, and compute the performance degradation curves with at least 10 points is
given below. The degradation curves are shown in Figure 4-22.
alpha = 0.8;
npts = 10;
[deltabad,lowbnd,uppbnd] = wcperf(M_g,deltaset,alpha,npts);
seesys(deltabad)
hinfnorm(deltabad)
hinfnorm(starp(deltabad,M))
vplot(lowbnd,uppbnd)
grid
(Figure 4-22: Performance norm bounds, i.e., worst-case performance degradation versus the size of ∆.)
Because µ can only be bounded above and below, the worst-case performance
degradation can also only be bounded. The VARYING matrices lowbnd and
uppbnd bound the worst-case performance degradation.
Summary
At this point, you should be well suited to explore the examples involving
µ-analysis, namely the “Analysis of Controllers for an Unstable System”,
“MIMO Margins Using µ” and the “Space Shuttle Robustness Analysis”
sections in Chapter 7. The rest of this chapter is a theoretical development of
the properties of µ, and is not necessary reading at this point. Later, as you
work with the µ software, and get comfortable with the interpretation of µ as a
robust stability and robust performance measure, we encourage you to
complete reading this chapter and learn more about this powerful framework.
Structured Singular Value Theory
∆ = { diag[δ1Ir1, …, δSIrS, ∆1, …, ∆F] : δi ∈ C, ∆j ∈ C^(mj×mj) }          (4-5)

where

Σ_{i=1}^{S} ri + Σ_{j=1}^{F} mj = n.

Often, we will need norm bounded subsets of ∆, and we introduce the following
notation

B∆ = { ∆ ∈ ∆ : σ̄(∆) ≤ 1 }          (4-6)
Note that in (4-5) all of the repeated scalar blocks appear first. This is just
to keep the notation as simple as possible; in fact they can come in any order.
Also, the full blocks do not have to be square, but restricting them as such saves
a great deal in terms of notation. The software will handle nonsquare full
blocks, as well as any order to the blocks.
Definition 4.1: For M ∈ Cn×n, µ∆(M) is defined
µ∆(M) := 1 / min{ σ̄(∆) : ∆ ∈ ∆, det(I − M∆) = 0 }          (4-7)
[Figure: the feedback interconnection of M and ∆, with signals u and v]
u = Mv,    v = ∆u     (4-8)
Lemma 4.2: µ∆(M) = max_{∆ ∈ B∆} ρ(M∆)
σ̄(∆) = 1/σ̄(M), and I − M∆ is obviously singular. Hence, µ∆(M) ≥ σ̄(M).
From the definition of µ, and the two special cases above, we conclude that
ρ ( M ) ≤ µ∆ ( M ) ≤ σ ( M )
(4-10)
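As a quick numerical illustration (not from the original text), the ordering in equation (4-10) can be observed with the mu command on a random constant matrix, here assuming a block structure of four 1-by-1 complex blocks:
M = randn(4) + sqrt(-1)*randn(4);      % random complex 4-by-4 matrix
deltaset = ones(4,2);                  % four 1-by-1 complex blocks
bnds = mu(M,deltaset);                 % bnds = [upper bound, lower bound]
[max(abs(eig(M)))  bnds(1,2)  bnds(1,1)  norm(M)]   % rho(M), mu bounds, sigma_max(M)
With the standard options the lower bound is typically tight for complex structures, so these four values normally appear in nondecreasing order.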
These bounds alone are not sufficient for our purposes, because the gap
between ρ and σ can be arbitrarily large. They are refined by considering
transformations on M that do not affect µ∆(M), but affect ρ and σ . To do this,
define the following two subsets of Cn×n.
Q∆ = { Q ∈ ∆ : Q*Q = In }     (4-11)
D∆ = { diag[D1, …, DS, d1 I_{m1}, …, d_{F−1} I_{m_{F−1}}, I_{mF}] : Di = Di* > 0, dj ∈ R, dj > 0 }     (4-12)
For any ∆ ∈ ∆, Q ∈ Q∆, and D ∈ D∆,
Q* ∈ Q∆,   Q∆ ∈ ∆,   ∆Q ∈ ∆,   σ̄(Q∆) = σ̄(∆Q) = σ̄(∆)     (4-13)
D∆ = ∆D     (4-14)
Consequently,
Theorem 4.3: For all Q ∈ Q∆ and D ∈ D∆
µ∆(MQ) = µ∆(QM) = µ∆(M) = µ∆(DMD^{−1})     (4-15)
Furthermore, the bounds in (4-10) can be tightened to
max_{Q ∈ Q∆} ρ(QM) ≤ max_{∆ ∈ B∆} ρ(∆M) = µ∆(M) ≤ inf_{D ∈ D∆} σ̄(DMD^{−1})     (4-16)
where the equality comes from Lemma 4.2. Note that the last element in the D
matrices in equation (4-12) is normalized to 1, since for any nonzero scalar
γ, DMD^{−1} = (γD)M(γD)^{−1}.
Bounds
In this section we will concentrate on the bounds
max_{Q ∈ Q∆} ρ(QM) ≤ µ∆(M) ≤ inf_{D ∈ D∆} σ̄(DMD^{−1})
The above bounds are much more than just computational schemes, although
that is their primary role in this toolbox. They are also theoretically rich, and
can unify a number of apparently quite different results in linear systems
theory. There are several connections with Lyapunov stability, two of which
were hinted at above, but there are further connections between the upper
bound scalings and solutions to Lyapunov and Riccati equations. Indeed, many
major theorems in linear systems theory follow from the upper bounds and
some results for linear fractional transformations (see “Linear Fractional
Transformations”). The lower bound can be viewed as a natural generalization
of the maximum modulus theorem [BoyD]. While a complete exposition of these
ideas is beyond the scope of this tutorial, some of the more elementary concepts
will be explored in later sections.
For the purposes of this toolbox, the most important use of the upper bound is
as a computational scheme when combined with the lower bound. For reliable
use of the µ theory it is essential to have upper and lower bounds. The other
important feature of the upper bound is that it can be combined with H∞
controller synthesis methods to yield an ad-hoc µ-synthesis method. Note that
the upper bound, when applied to transfer functions, and maxed across
frequencies, is simply a scaled H∞ norm. This is exploited in the µ-synthesis
techniques in this toolbox.
Syntax for mu
[bnds,dvec,sens,pvec] = mu(M,deltaset);
Description
∆ := { diag[δ1, δ2, δ3, δ4] : δi ∈ C }
deltaset = [ 1 1; 1 1; 1 1; 1 1 ]
∆ := { diag[∆1, ∆2, δ3] : ∆1 ∈ C^{3×2}, ∆2 ∈ C^{4×5}, δ3 ∈ C }
deltaset = [ 3 2; 4 5; 1 1 ]
∆ := { diag[δ1 I3, ∆2, δ3 I2] : δ1, δ3 ∈ C, ∆2 ∈ C^{2×2} }
deltaset = [ 3 0; 2 2; 2 0 ]
entries, they are stored in dvec as a row vector. They can be unwrapped into
the block diagonal form using unwrapd.
[Dl,Dr] = unwrapd(dvec,deltaset);
For the most part, these two matrices are the same, in fact, if the block
structure deltaset has no nonsquare full blocks, then Dl = Dr. In any event,
the following are always true
Dl = Dl* > 0,   Dr = Dr* > 0,   Dr∆ = ∆Dl for all ∆ ∈ ∆,   and   µ∆(M) ≤ σ̄(Dl M Dr^{−1}).
Sensitivity (used in µ-synthesis): a sensitivity measure of the maximum
singular value of Dl M Dr^{−1} with respect to the values in Dl (and Dr). It is
calculated in an ad-hoc manner, and is mainly used when fitting frequency
varying D’s with rational functions via the routine fitsys. We will not make
use of it in this example.
Perturbation (gives lower bound): The matrix pvec contains the
perturbation matrix ∆ which makes I – M∆ singular. This perturbation
corresponds to the lower bound in bnds. It is of the same type as M. Since the
structured set usually contains many zero elements, the perturbation matrix ∆
is stored efficiently in pvec as a row vector. It can be unwrapped into the block
diagonal form using unwrapp.
delta = unwrapp(pvec,deltaset);
Note that σ̄(∆) is equal to the reciprocal of the lower bound (in bnds), ∆ ∈ ∆, and
det(I − M∆) = 0.
Consider the block structure defined by the array deltasete. Run the mu
command, and unwrap the ds and the perturbation.
[bnds,dvec,sens,pvec] = mu(M,deltasete);
[Dl,Dr] = unwrapd(dvec,deltasete);
delta = unwrapp(pvec,deltasete);
Verify that:
• ∆ ∈ ∆; print out the matrix delta, and check that its structure corresponds
to that given by the array deltasete.
• Compare the norm of the matrix delta with the lower bound from bnds.
bnds
norm(delta)
• Note that σ̄(∆) should equal the inverse of the lower bound (1/bnds(1,2)).
• Verify that det(I5 – M∆) = 0.
det(eye(5)-M*delta)
or
eig(M*delta) % should have an eigenvalue at 1
• Look at the scaling matrices Dl and Dr. Note that Dr∆ = ∆Dl for all ∆ ∈ ∆.
Check that the upper bound in bnds comes from these.
bnds(1,1)
norm(Dl*M/Dr)
Try the other examples, and verify the consistency of all aspects of the results.
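If you want to experiment beyond the supplied files, a minimal sketch along the same lines, with an assumed block structure of one repeated scalar block and one full block, is:
M = randn(5) + sqrt(-1)*randn(5);        % random complex 5-by-5 matrix
deltaset = [2 0; 3 3];                   % delta1*I2 (repeated scalar) and a full 3-by-3 block
[bnds,dvec,sens,pvec] = mu(M,deltaset);
[Dl,Dr] = unwrapd(dvec,deltaset);        % unwrap the D scalings
delta = unwrapp(pvec,deltaset);          % unwrap the perturbation
norm(delta)                              % should equal 1/bnds(1,2)
eig(M*delta)                             % one eigenvalue should be (essentially) 1
norm(Dl*M/Dr)                            % should be close to the upper bound bnds(1,1)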
Mixed Real/Complex Structured Singular Value
The general structured singular value theory also includes full real blocks, but
these are difficult to motivate from the physics of real problems, and
convenient upper and lower bounds for structures with these types of blocks
are not well developed. This is an ongoing area of research, and later versions
of µ-Tools may support these types of blocks.
The theory for a mixed real/complex upper bound is more complicated to
describe than the upper bound theory for complex µ. In addition to D matrices,
which exploit the block diagonal structure of the perturbation, there are G
matrices, which exploit the real structure of the perturbations. For illustrative
purposes, consider a specific block structure. Suppose that
∆ := { diag[δ1 I2×2, δ2 I4×4, ∆3] : δ1 ∈ R, δ2 ∈ C, ∆3 ∈ C^{3×3} }
For this block structure, the corresponding array is deltaset = [-2 0; 4 0; 3 3].
A zero (0) in the second column signifies a repeated scalar block (as before). The
negative sign (–) in the first column indicates a real block.
Associated with ∆, define the sets
D∆ = { diag[D1, D2, d3 I3×3] : D1 ∈ C^{2×2}, det(D1) ≠ 0, D2 ∈ C^{4×4}, det(D2) ≠ 0, d3 ∈ C, d3 ≠ 0 }
and
G∆ = { diag[ diag[g1, g2], 0_{4×4}, 0_{3×3} ] : gi ∈ R }
If there are matrices D ∈ D∆ and G ∈ G∆ satisfying
σ̄[ (I + G²)^{−1/4} ( (1/β) D M D^{−1} − jG ) (I + G²)^{−1/4} ] ≤ 1     (4-17)
then
µ∆(M) ≤ β
This bound, [YouND1], is a derivative of an earlier bound in [FanTD]. The
smallest β > 0 for which D and G matrices exist which satisfy this constraint is
what µ-Tools calls the mixed µ upper bound for M. Using manipulations that
are now standard in robust control theory, the computation of the best such β
is reformulated into an Affine Matrix Inequality (AMI) and solved with special
purpose convex programming techniques. For perturbation sets with multiple
blocks, the general structure of the sets D∆ and G∆ remains the same, with one
scaling block for each uncertainty block.
In mu, only the complex full blocks can be nonsquare. This causes the D scaling
on the left of M to be slightly different from the D scaling on the right. The
single d variable associated with the full block is repeated a certain number of
times on the left, and a different number of times on the right, leading to
nonsquare D scaling matrices (the Dl and Dr that we have already seen). Of
course, the G scaling comes in different sizes too. Note that for any complex full
blocks, the associated blocks of G are zeros, since G is only nonzero in the blocks
associated with the real uncertainties. However, the dimension of the zero
blocks of G must line up with the correct rows/columns of M. Hence, in equation
(4-17), there are three different Gs, all having exactly the same nonzero
elements, but different sizes of zero blocks associated with any nonsquare full
blocks. The different Gs are denoted Gl, Gm, and Gr. The sufficient condition
for µ∆(M) ≤ β is rewritten as
σ̄[ (I + Gl²)^{−1/4} ( (1/β) Dl M Dr^{−1} − jGm ) (I + Gr²)^{−1/4} ] ≤ 1     (4-18)
The lower bound generated in mu comes from a power iteration which finds
matrices ∆ ∈ ∆ that make I – M∆ singular. The power iteration for mu is a
generalization of the power iteration used in earlier versions of µ-Tools. The
generalization is described in detail in [YouD].
The combination of upper and lower bounds makes the mu software unique. The
upper bounds give a guarantee on the worst-case performance degradation
that can occur under perturbation. The lower bounds actually exhibit a
perturbation that causes significant performance degradation. This
perturbation can then be used in time-domain simulations to better
understand its effect.
Other functions associated with mu (such as unwrapd and unwrapp, which have
been illustrated already) are detailed in Chapter 8, “Reference” under mu.
∆ := { diag[δ1, δ2, δ3, δ4] : δi ∈ C, i = 1, 2; δj ∈ R, j = 3, 4 }
∆ = { diag[∆1, ∆2, δ3 I3×3] : ∆1 ∈ C^{3×2}, ∆2 ∈ C^{4×5}, δ3 ∈ R }
∆ := { diag[δ1 I3×3, ∆2, δ3 I2×2] : δ1 ∈ C, δ3 ∈ R, ∆2 ∈ C^{2×2} }
The correctness of the upper bound can easily be checked with the inequality
in equation (4-18). The correctness of the lower bound can be verified by
calculating the perturbation, ∆, that mu returns, verifying its block structure
and norm, and checking that the matrix M∆ has an eigenvalue exactly at 1 (which
is equivalent to I − M∆ being singular).
Try some examples on a constant 5 × 5 matrix.
simprmu
Verify that:
• ∆ ∈ ∆; print out delta, and check that its block structure corresponds to that
given by deltasete.
deltasete
delta
• Verify the upper bound by checking the structure of dl, dr, gl, gm, and gr,
and the inequality in equation (4-18).
deltasete
dl
dr
gl
gm
gr
oobdmdimjg = 1/bnds(1,1)*dl*mat/dr - sqrt(-1)*gm;   % (1/beta)*Dl*M/Dr - j*Gm
gscl_l = inv(sqrtm(sqrtm(eye(5) + gl*gl)));         % (I + Gl^2)^(-1/4)
gscl_r = inv(sqrtm(sqrtm(eye(5) + gr*gr)));         % (I + Gr^2)^(-1/4)
norm(gscl_l*oobdmdimjg*gscl_r)                      % should be <= 1, as in (4-18)
Try the other block structures, and verify the consistency of all aspects of the
bounds, scaling matrices, and perturbations.
Linear Fractional Transformations
M = [ M11  M12 ; M21  M22 ]     (4-19)
e = M 11 d + M 12 w
z = M 21 d + M 22 w
w = ∆2 z
(4-20)
This set of equations (4-20) is called well posed if for any vector d, there
exist unique vectors w, z, and e satisfying the loop equations. It is easy to see
that the set of equations is well posed if and only if the inverse of I – M22∆2
exists. If not, then depending on d and M, there is either no solution to the loop
equations, or there are an infinite number of solutions. When the inverse does
indeed exist, the vectors e and d must satisfy e = FL(M,∆2)d, where
FL(M, ∆2) = M11 + M12 ∆2 (I − M22 ∆2)^{−1} M21     (4-21)
[Figure: the lower LFT of M and ∆2, with input d, output e, and internal signals z and w]
• Determine whether the LFT is well posed for all ∆2 ∈ B2, and,
• If so, then determine how large FL(M,∆2) can get for this norm-bounded set
of perturbations.
The next section has three simple theorems which answer this problem.
M = [ M11  M12 ; M21  M22 ]     (4-22)
and suppose there are two defined block structures, ∆1 and ∆2, which are
compatible in size with M11 and M22 respectively. Define a third structure ∆ as
∆ = { diag[∆1, ∆2] : ∆1 ∈ ∆1, ∆2 ∈ ∆2 }     (4-23)
Now there are three structures with which we may compute µ. The notation we
use to keep track of this is as follows: µ1(⋅) is with respect to ∆1, µ2(⋅) is with
respect to ∆2, and µ∆(⋅) is with respect to ∆. In view of this, µ1(M11), µ2(M22), and
µ∆(M) all make sense, though, for instance, µ1(M) does not. Again, define the
norm-bounded perturbation sets as
Bi := { ∆i ∈ ∆i : σ̄(∆i) ≤ 1 }.
Theorem 4.5 (main loop theorem): µ∆(M) < 1 if and only if
µ2(M22) < 1   and   max_{∆2 ∈ B2} µ1(FL(M, ∆2)) < 1,
which in turn holds if and only if
µ1(M11) < 1   and   max_{∆1 ∈ B1} µ2(FU(M, ∆1)) < 1.
∆ := diag[∆1, ∆2]
Obviously, ∆ ∈ ∆, and σ ( ∆ ) ≤ 1 .
Now
det(I − M∆) = det [ I − M11∆1   −M12∆2 ; −M21∆1   I − M22∆2 ]
            = det(I − M11∆1) · det( I − M22∆2 − M21∆1 (I − M11∆1)^{−1} M12∆2 )
Frequency Domain µ Review
Robust Stability
The most well-known use of µ as a robustness analysis tool is in the frequency
domain. Suppose M(s) is a stable, multi-input, multi-output transfer function
of a linear system, M. For clarity, assume M has nz inputs and nw outputs. Let
∆ be a block structure, as in equation (4-5), and assume that the dimensions
are such that ∆ ⊂ C^{nz×nw}. We want to consider feedback
perturbations to M which are themselves dynamical systems, with the
block-diagonal structure of the set ∆.
In this section, we outline the proofs for the situation where the perturbations
are assumed to be stable. This is not a restriction with parametric real
uncertainty, as constant parameters are clearly stable. However, when using a
multiplicative or additive unmodeled dynamics perturbation to model
uncertainty in an unstable component (see the example in the “Unmodeled
Dynamics” section), it is useful to allow unstable perturbations, with the
restriction that the number of right-half-plane poles of the component remains
constant. An alternate approach is to allow a block of the perturbation matrix
to be an unstable transfer function. However, we restrict it to only take on
values that preserve the number of right-half-plane poles of the perturbed
component with which it is associated. In this case, the theorems we state are
still correct, though the proofs must be modified. In fact, even more
sophisticated assumptions about the perturbed systems can be made, including
structured coprime factor uncertainty and gap metric uncertainty, but these
are beyond the scope of this tutorial.
Let S denote the set of real-rational, proper (no poles at s = ∞), stable, transfer
matrices. Associated with any block structure ∆, let S∆ denote the set of all
block diagonal, stable rational transfer functions, with block structure like ∆.
S ∆ := { ∆ ∈ S : ∆ ( s o ) ∈ ∆ for all s o ∈ C + }
Theorem 4.6: Let β > 0. The loop shown in Figure 4-23 is well-posed and
internally stable for all ∆ ∈ S∆ with ‖∆‖∞ < 1/β if and only if
‖M‖∆ := sup_{ω ∈ R} µ∆(M(jω)) ≤ β
Figure 4-23: Robust Stability
Robust Performance
Stability is not the only property of a closed-loop system that must be robust to
perturbations. Typically there are exogenous disturbances acting on the
system (wind gusts, sensor noise) which result in tracking and regulation
errors. Under perturbation, the effect that these disturbances have on error
signals can greatly increase. In most cases, long before the onset of instability,
the closed-loop performance will degrade to the point of unacceptability. Hence
the need for a robust performance test. Such a test will indicate the worst-case
level of performance degradation associated with a given level of perturbations.
Assume M is a stable, real-rational, proper transfer function, with nz + nd
inputs, and nw + ne outputs. Partition M in the obvious manner, so that M11
has nz inputs and nw outputs, and so on. Let ∆ ⊂ C^{nw×nz} be a block structure,
as in equation (4-5). Define an augmented block structure
∆P := { diag[∆, ∆F] : ∆ ∈ ∆, ∆F ∈ C^{nd×ne} }
The loop shown in Figure 4-24 is well-posed, internally stable, and satisfies
‖FU(M, ∆)‖∞ ≤ β for all ∆ ∈ S∆ with ‖∆‖∞ < 1/β if and only if
‖M‖∆P := sup_{ω ∈ R} µ∆P(M(jω)) ≤ β
Figure 4-24: Robust Performance
The proof of this is exactly along the lines of the earlier proof, but also applying
Theorem 4.5. See [DoyWS] and [PacD] for details.
Real vs. Complex Parameters
The real parameter c can instead be modeled as a complex parameter ĉ, whose value at
each frequency is restricted to the disk shown in Figure 4-25.
Figure 4-25: Complex Disc Covering Real Interval: Restriction of the Nyquist
Plot of ĉ
Using complex parameters, the uncertain model for c represents a stable linear
system whose characteristics are similar to an uncertain real gain, but deviate
in a manner quantified by the disc-shaped constraint on its frequency response.
In general, using disks instead of intervals leads to more conservative
robustness properties. With mu, it is easy and fast to explore the differences in
the robustness properties as the uncertainty model changes.
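For example, a minimal comparison on a constant matrix (not one of the manual's examples) treats the same two scalar uncertainties first as real and then as complex:
M = randn(2) + sqrt(-1)*randn(2);
blk_real    = [-1 0; -1 0];      % two real scalar blocks
blk_complex = [ 1 1;  1 1];      % the same parameters treated as complex
bnds_r = mu(M,blk_real);
bnds_c = mu(M,blk_complex);
[bnds_r; bnds_c]                 % complex treatment typically gives bounds at least as large
Since the complex set contains the real one, the complex µ is never smaller, which quantifies the conservatism introduced by the disk model.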
Figure 4-26: Robust Stability with Real Uncertainty
While the upper bound from mu will be effective, the lower bound potentially
will have convergence problems, yielding little information in terms of bad
perturbations. A fix, which has both engineering and mathematical
justification, is available.
What does all of this mean? The block structure has been expanded (to twice
the original size) by including complex blocks which are exactly the same
dimension as the original real blocks. The matrix has been expanded (to twice
the original size) by the multiplication on the left and right. The scale factors
of α = 0.1 imply that the input/output channels, which the complex uncertainty
affects, are each scaled down by a factor of 10, giving an overall scaling of the
complex blocks of 0.01. The µ calculation determines upper and lower bounds
for robust stability in the uncertain system shown in Figure 4-27.
Figure 4-27: Replacing Real Uncertainty with Real+Complex Uncertainty
In this process, each real parameter δR has been replaced by a real parameter
plus a smaller complex parameter (α2δC). Rather than computing robustness
margins to purely real parameters, the modified problem determines the
robust stability characteristics of the system with respect to predominantly
real, but partially complex, uncertainty.
load shut_rs                                   % load saved closed-loop data for this example
minfo(clp_muRS)                                % closed-loop SYSTEM matrix
blkrs_R = [-1 0;-1 0;-1 0;-1 0;-1 0;-1 0;-1 0;-1 0;-1 0];   % nine real scalar blocks
clp_muRSg = frsp(clp_muRS,logspace(-2,2,40));  % frequency response
fix1 = [eye(9) ; 0.1*eye(9)];                  % alpha = 0.1 (1% complex)
fix2 = [eye(9) ; 0.2*eye(9)];                  % alpha = 0.2 (4% complex)
fix3 = [eye(9) ; 0.3*eye(9)];                  % alpha = 0.3 (9% complex)
blk_RC = [blkrs_R ; abs(blkrs_R)];             % augmented real+complex block structure
m1 = mmult(fix1,clp_muRSg,fix1');
m2 = mmult(fix2,clp_muRSg,fix2');
m3 = mmult(fix3,clp_muRSg,fix3');
[rbnd,rp] = mu(clp_muRSg,blkrs_R);             % purely real perturbations
[bnd1,rd1,s1,rp1] = mu(m1,blk_RC);             % mixed real/complex perturbations
[bnd2,rd2,s2,rp2] = mu(m2,blk_RC);
[bnd3,rd3,s3,rp3] = mu(m3,blk_RC);
allbnds = abv(rbnd,bnd1,bnd2,bnd3);            % stack the bounds for plotting
vplot('liv,d',sel(allbnds,':',1))
title('UPPER BOUNDS: 0%, 1%, 4%, 9% COMPLEX')
xlabel('FREQUENCY, RAD/SEC')
ylabel('UPPER BOUND')
vplot('liv,d',sel(allbnds,':',2))
title('LOWER BOUNDS: 0%, 1%, 4%, 9% COMPLEX')
xlabel('FREQUENCY, RAD/SEC')
ylabel('LOWER BOUND')
[Plot: UPPER BOUNDS: 0%, 1%, 4%, 9% COMPLEX; upper bound versus frequency (rad/sec); 0% solid, 1% dashed, 4% dotted, 9% dashed/dotted]
[Plot: LOWER BOUNDS: 0%, 1%, 4%, 9% COMPLEX; lower bound versus frequency (rad/sec); 0% solid, 1% dashed, 4% dotted, 9% dashed/dotted]
For each case, the upper bound shows a slight increase as the percentage of
allowable complex perturbation is increased. This is expected. The lower bound
behaves similarly, though the introduction of very small complex terms has a
more dramatic effect. For 0%, the lower bound from mu is zero — the program
is simply unable to find purely real perturbations which cause singularity.
However, upon introducing a 1% complex term in each perturbation, the lower
bound becomes nonzero, and mu finds perturbations which cause singularity, with
|δR| ≤ 1/0.76,    |δC| ≤ 1/0.76.
Generalized µ
Generalized µ allows us to put additional constraints on the directions in which
I − ∆M becomes singular. Given a matrix M ∈ C^{n×n} and C ∈ C^{m×n}, find the
smallest ∆ ∈ ∆ (measured by σ̄(∆)) such that the stacked matrix [I − ∆M; C] loses
column rank. Define
µ∆(M, C) := 1 / min{ σ̄(∆) : ∆ ∈ ∆, rank([I − ∆M; C]) < n }
This quantity can be bounded above easily, using standard µ ideas. Suppose
µ∆(M, C) ≥ β.
Then there is a ∆ ∈ ∆ with σ̄(∆) ≤ 1/β and a nonzero vector x such that
(I − ∆M)x = θn,    Cx = θm
Hence, for every matrix Q ∈ Cn×m, it follows that
(I – ∆(M + QC))x = θn
so that for every matrix Q ∈ Cn×m, µ∆(M + QC) ≥ β.
By contrapositive, if there exists a matrix Q such that
µ∆(M + QC) < β
then µ∆(M,C) < β.
Hence, we have
µ∆(M, C) ≤ min_{Q ∈ C^{n×m}} µ∆(M + QC)
References
Additional information on the structured singular value, linear fractional
transformations and their use in robustness analysis on uncertain linear
systems, as well as a historical account of the development of the theory can be
found in the following references.
[AndAJM:] B. Anderson, P. Agathoklis, E. Jury, and M. Mansour, “Stability
and the matrix Lyapunov equation for discrete 2-dimensional systems,” IEEE
Transactions on Circuits and Systems, Vol. 33, no. 3, pp. 261-267, March 1986.
[BarKST:] B. Barmish, P. Khargonekar, Z. Shi, and R. Tempo, “Robustness
margin need not be a continuous function of the problem data,” Systems and
Control Letters, Vol. 15, pp. 91-98, 1989.
[BoyD:] S. Boyd and C. Desoer, “Subharmonic functions and performance
bounds on linear time-invariant feedback systems,” IMA Journal of
Mathematical Control and Information, Vol. 2, pp. 153-170, 1985.
[BoyE:] Boyd, S., and El Ghaoui, L., “Method of centers for minimizing
generalized eigenvalues,” Linear Algebra and Its Applications, special issue on
Numerical Linear Algebra Methods in Control, Signals and Systems, vol. 188,
pp. 63-111, July 1993.
[CheD:] M.J. Chen and C.A. Desoer, “Necessary and sufficient condition for
robust stability of linear distributed feedback systems,” International Journal
of Control, Vol. 35, no. 2, pp. 255-267, 1982.
[Doy:] J.C. Doyle, “Analysis of feedback systems with structured
uncertainties,” IEEE Proceedings, Vol. 129, Part D, no. 6, pp. 242-250, Nov.
1982.
[DoyPZ:] J. Doyle, A. Packard, and K. Zhou, “Review of LFTs, LMIs and µ,”
Proceedings of the 30th IEEE Conference on Decision and Control, pp.
1227-1232, 1991.
[DoyS:] J.C. Doyle and G. Stein, “Multivariable feedback design: Concepts for
a classical/modern synthesis,” IEEE Transactions on Automatic Control, Vol.
AC-26, pp. 4-16, Feb. 1981.
[DoyWS:] J.C. Doyle, J. Wall and G. Stein, “Performance and robustness
analysis for structured uncertainty,” Proceedings of the 21st IEEE Conference
on Decision and Control, pp. 629-636, Dec. 1982.
5
Control Design via µ Synthesis
The structured singular value, µ, is the appropriate tool for analyzing the
robustness (both stability and performance) of a system subjected to
structured, LFT perturbations. This is evident from the discussions and
examples in Chapter 4. In this section, we cover the mechanics of a controller
design methodology based on structured singular value objectives. We rely
heavily on the upper bound for µ.
Problem Setting
In order to apply the general structured singular value theory to control system
design, the control problem has been recast into the linear fractional
transformation (LFT) setting as in Figure 5-1.
Figure 5-1 LFT Description of Control Problem
The system labeled P is the open-loop interconnection and contains all of the
known elements including the nominal plant model and performance and
uncertainty weighting functions. The ∆pert block is the uncertain element from
the set ∆pert , which parametrizes all of the assumed model uncertainty in the
problem. The controller is K. Three sets of inputs enter P: perturbation inputs
w, disturbances d, and controls u. Three sets of outputs are generated:
perturbation outputs z, errors e, and measurements y.
The set of systems to be controlled is described by the LFT
{ FU(P, ∆pert) : ∆pert ∈ ∆pert, max_ω σ̄[∆pert(jω)] ≤ 1 }.
The design objective is to find a stabilizing controller K, such that for all such
perturbations ∆pert, the closed-loop system is stable and satisfies
‖FL[FU(P, ∆pert), K]‖∞ ≤ 1.
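As a rough check (not part of the manual's procedure), for one particular perturbation delta_pert and a candidate controller K (both assumed to exist in the workspace with dimensions compatible with P), this objective can be evaluated with star products:
clp      = starp(P,K);                 % F_L(P,K)
clp_pert = starp(delta_pert,clp);      % F_U(F_L(P,K),delta_pert) = F_L(F_U(P,delta_pert),K)
hinfnorm(clp_pert)                     % should be <= 1 if the objective is met for this delta_pert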
[Figure 5-2: the perturbed plant FU(P, ∆pert) and the closed-loop system FL(P, K)]
∆ := { diag[∆pert, ∆F] : ∆pert ∈ ∆pert, ∆F ∈ C^{nd×ne} }
The structured singular value provides the correct test for robust performance.
We know from the discussion in Chapter 4 that K achieves robust performance
if and only if
max_ω µ∆(FL(P, K)(jω)) < 1.
The goal of µ synthesis is to minimize over all stabilizing controllers K, the peak
value of µ∆(⋅) of the closed-loop transfer function FL(P,K). More formally,
min_{K stabilizing} max_ω µ∆(FL(P, K)(jω))     (5-1)
Figure 5-3: µ Synthesis
Replacing µ With Its Upper Bound
µ∆(M) ≤ inf_{D ∈ D∆} σ̄(DMD^{−1})
Recall that D∆ is the set of matrices with the property that D∆ = ∆D for every
D ∈ D∆, ∆ ∈ ∆.
Using this upper bound, the optimization in equation (5-1) is
reformulated as
min_{K stabilizing} max_ω min_{Dω ∈ D∆} σ̄[ Dω FL(P, K)(jω) Dω^{−1} ]     (5-2)
min_{K stabilizing} min_{D(·), Dω ∈ D∆} max_ω σ̄[ Dω FL(P, K)(jω) Dω^{−1} ]     (5-3)
min_{K stabilizing} min_{D(·), Dω ∈ D∆} ‖ D FL(P, K) D^{−1} ‖∞     (5-4)
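As a sketch of what this objective means computationally (P, K, and a rational scaling Dhat are assumed to exist in the workspace; Dhat is an illustrative name for a square, stable, minimum-phase scaling with a stable inverse, structured as in D∆):
clp    = starp(P,K);                      % F_L(P,K)
scaled = mmult(Dhat,clp,minv(Dhat));      % Dhat * F_L(P,K) * Dhat^(-1)
hinfnorm(scaled)                          % the scaled H-infinity norm in equation (5-4)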
So, replacing D by UD does not affect the upper bound. Using this freedom in
the phase of each block of D, we can restrict the frequency-dependent scaling
matrix Dω of equation (5-4) to be a real-rational, stable, minimum-phase
transfer function, D̂(s), and not affect the value of the minimum.
Hence the new optimization is
min_{K stabilizing} min_{D̂(s) ∈ D∆, stable, min-phase} ‖ D̂ FL(P, K) D̂^{−1} ‖∞     (5-5)
Figure 5-4: Replacing µ with Upper Bound
A specific example clarifies some of the ideas. Assume for simplicity that the
uncertainty block ∆pert only has full, unmodeled dynamics (i.e., complex) blocks,
say, N of them. Then the set ∆pert is of the form
∆pert = { diag[∆1, ∆2, …, ∆N] : ∆i ∈ C^{ri×ci} }     (5-6)
The set ∆ has the additional fictitious block (for the robust performance
characterization)
∆ = { diag[∆1, ∆2, …, ∆N, ∆F] : ∆i ∈ C^{ri×ci}, ∆F ∈ C^{nd×ne} }     (5-7)
D ∆ = { diag [ d 1 I, d 2 I, …, d N I, I ] : d i > 0 }
(5-8)
The elements of D∆, which are defined in equation (5-8) to be real and positive, can
be allowed to take on any nonzero complex values and not change the value of
the upper bound, inf_{D ∈ D∆} σ̄(DMD^{−1}). Using this freedom in the phase of each
entry of D, we can restrict the frequency-dependent scaling matrix Dω of
equation (5-4) to be a real-rational, stable, minimum-phase transfer
function, d̂(s). The optimization is now
min_{K, d̂i} ‖ diag(d̂1 I, …, d̂N I, I) FL(P, K) diag(d̂1 I, …, d̂N I, I)^{−1} ‖∞
D-K Iteration: Holding D Fixed
min_{K stabilizing} ‖ D̂ FL(P, K) D̂^{−1} ‖∞     (5-9)
PD = D̂ P D̂^{−1}
Figure 5-5: Absorbing Rational D Scaling
min_{K stabilizing} ‖ FL(PD, K) ‖∞
The two-step procedure is a viable and reliable approach. The primary reason
for its success is the efficiency with which both of the individual steps are
carried out. The µ upper bound is based on a convex optimization problem. For
this problem, we have developed many heuristics, which when combined with
standard convex minimization tools leads to a fast, and accurate computation
of the upper bound.The fitting algorithm, using genphase and fitsys, is based
on FFT, least squares and again heuristics. It is extremely fast and works well
in most situations.
D-K Iteration: Holding K Fixed
min_{Dω ∈ D∆} σ̄[ Dω FL(P, K)(jω) Dω^{−1} ]
This minimization is done over the real, positive Dω from the set D∆ defined in
equation (5-8). This is carried out with µ, in the upper bound
optimization. Recall that the addition of phase to each di(ω) does not affect the
value of σ̄[ Dω FL(P, K)(jω) Dω^{−1} ]. In other words, the important aspect of the
scaling di is its magnitude, |di(jω)|.
Hence, each positive function, di, which is defined on a finite set of frequencies,
is fit (in magnitude) by a proper, stable, minimum-phase transfer function,
dˆ i ( s ) . This is accomplished as follows: Use the Bode integral formulae to
determine the phase θi(ω) of the stable, minimum-phase function Li that
satisfies
|Li(jω)| = di(ω)
for all ω. Then, use the transfer function fitting routine fitsys to construct a
real-rational transfer function d̂i(s) such that
d̂i(jω) ≈ e^{jθi(ω)} di(ω) = Li(jω)
(the exponential factor is the phase, and di(ω) is the magnitude). The resulting
rational scalings are then absorbed into the original open-loop generalized plant P
(to yield PD, as described earlier).
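A minimal sketch of this fitting step for a single scaling (dmag is an assumed VARYING record of magnitude data; the third-order fit is arbitrary):
resp   = genphase(dmag);            % complex response with minimum-phase phase attached
dhat   = fitsys(resp,3);            % fit a 3rd-order real-rational transfer function
dhat_g = frsp(dhat,getiv(dmag));    % evaluate the fit on the same frequency grid
vplot('liv,lm',dmag,dhat_g)         % compare magnitude data with the rational fit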
2 D(0) ∈ Rn×n
3 ∀ω ∈ R, D ( jω ) = D* ( jω ) > 0
4 D = DT > 0
1 Find a stable, minimum-phase rational function ĝi such that for all ω ∈ R,
|ĝi(jω)| ≈ Dii(jω).
2 Define the phase function θi(ω) by
e^{jθi(ω)} := ĝi(jω) / |ĝi(jω)|
3 For each k = 1, 2, …, i − 1, i + 1, …, n, find a rational ĝik (not necessarily stable
or minimum phase) such that for all ω ∈ R,
ĝik(jω) ≈ e^{jθi(ω)} Dik(jω)
Assemble these into the matrix
Ĝ(s) := [ ĝ1  ĝ12 … ĝ1n ; ĝ21  ĝ2 … ĝ2n ; … ; ĝn1  ĝn2 … ĝn ]
and define Φ(ω) := diag[θi(ω)]. If the rational fitting was done accurately, then, for all ω,
Ĝ(jω) = e^{jΦ(ω)} D(jω)
Also, Ĝ has no poles on the imaginary axis, and has nonzero determinant
everywhere on the imaginary axis. Hence, by spectral factorization techniques
(and the command extsmp), we can find a stable, minimum-phase D̂ and a
unitary matrix function Û such that
Ĝ(jω) = Û(jω) D̂(jω)
for all ω. The stable, minimum-phase rational matrix D̂ is an appropriate
scaling for the iteration.
Commands for D – K Iteration
1 Use the graphical user interface dkitgui for automated, adjustable, and
visual iterations that allow for easy monitoring of progress.
2 Use the script file dkit (improved syntax, algorithms and ease-of-use from
version 2.0) for automated but adjustable iterations.
3 Use the dkit command in the auto mode to run a specified number of
iterations in an automatic mode (requires no user intervention).
4 Write your own iteration loop, using commands such as hinfsyn, frsp, mu,
and msf. This approach is not recommended.
Discussion
There are two shortcomings with the D – K iteration control design procedure:
• We have approximated µ∆(⋅) by its upper bound. This is not a serious problem
since the value of µ and its upper bound are often close.
• The D – K iteration is not guaranteed to converge to a global, or even local
minimum [SteD]. This is a very serious problem, and represents the biggest
limitation of the design procedure.
Reference
[SteD:] Stein, G., and J. Doyle, “Beyond singular values and loopshapes,” AIAA
Journal of Guidance and Control, vol. 14, num. 1, pp. 5-16, January, 1991.
Auto-Fit for Scalings (Optional Reading)
D_{k,r}(ω) := diag[ d1(ω), …, d_{k−1}(ω), d̂k(jω), d_{k+1}(ω), …, I ]
σ̄[ D(ω) FL(P, K)(jω) D^{−1}(ω) ]   and   σ̄[ D_{k,r}(ω) FL(P, K)(jω) D_{k,r}^{−1}(ω) ]
If these are close, then r is deemed a suitable order for the kth scaling function
dk. The measure of closeness can be chosen. Define
β := max_{ω ∈ R} σ̄[ D(ω) FL(P, K)(jω) D^{−1}(ω) ]
At frequencies where
σ̄[ D(ω) FL(P, K)(jω) D^{−1}(ω) ] ≥ 0.5β
we require that
σ̄[ D_{k,r}(ω) FL(P, K)(jω) D_{k,r}^{−1}(ω) ] ≤ 1.03 σ̄[ D(ω) FL(P, K)(jω) D^{−1}(ω) ]
The quantity 1.03 is the AutoTol parameter in dkit, which can be modified
easily. In dkitgui, the Auto-Fit Tolerance, which is adjustable in the
Parameter window, varies this parameter from 1.01 (tight) to 1.06 (loose).
For other frequencies, we require that
σ̄[ D_{k,r}(ω) FL(P, K)(jω) D_{k,r}^{−1}(ω) ] ≤ σ̄[ D(ω) FL(P, K)(jω) D^{−1}(ω) ] + 0.1β
Note that the order of d̂ k is chosen based on its performance as a scaling while
all of the other scalings are set to their optimal (dk(ω)). You can easily modify
the constants 0.5 and 0.1 to demand tighter tolerance on the auto-fitting
algorithm.
6
Graphical User Interface
Tools
Workspace User Interface Tool: wsgui
The scrollable table can be moved up/down one page by pressing above/below
the slider. Pressing the arrows at the end of the slider moves the table one line.
A filter is used to make viewing of a reduced number of selections easy. The
Prefix, Suffix, and matrix type filters are on the bottom of the scrollable
table. The matrix type filter is a pop-up menu to the right of Suffix. For
instance, let’s look at SYSTEM matrices whose names begin with w. Type a w
in the Prefix box, as shown in Figure 6-2. Note that the Apply Selection
button becomes enabled, yet all 20 matrices in the workspace are still displayed
(even those that don’t start with a w). The selection filter is only applied when
you press the Apply Selection button. Move to the right to the pop-up menu,
which currently displays All, and select System, as shown in Figure 6-2. The
Apply Selection remains enabled, and again, matrices that are not SYSTEM
matrices are still displayed. Press Apply Selection to apply the filter (first
letter = w, matrix type = SYSTEM). The scrollable table refreshes, leaving 7 (of
the original 20) matrices displayed, as seen in Figure 6-3.
Now, go into the MATLAB command window, and create a new SYSTEM
matrix, with first letter w
wnew = nd2sys([1 2],[1 2 3 4]);
Note that this does not immediately appear in the scrollable table, even though
it satisfies the selection criteria. This new variable will not appear until the
Refresh Variables button is pushed. Press the Refresh Variables button and
note that when the table is refilled, the system wnew appears as expected,
Figure 6-4.
Often, you want to create a more complicated selection criteria. The Custom
filter can be used to do this. Press the push button marked with a * (to the right
of the matrix type pop-up menu) to switch to the Custom filter. Let's find all
matrices with four or more outputs. In the Custom box, type
ynum(mdata)>=4
and press Apply Selection. The results are shown in Figure 6-5. This will
evaluate the expression in the Custom box, and select those workspace
variables for which the expression is true. In the Custom box, use mdata to
indicate the matrix’s value, and mname to substitute the matrix’s name. Hence,
selections can be based on the name and value of any workspace variable.
File Menu
The File menu at the top of the Workspace Manager window has three menu
items as seen in Figure 6-6.
The Clear Selected Matrices item allows you to clear the variable names
currently appearing in the Workspace Manager window from the MATLAB
workspace. Upon selecting the Clear Selected Matrices item you are
prompted with the box shown in Figure 6-7.
Pressing the Clear push button clears the selected variables from the
MATLAB workspace. Pressing Cancel cancels this command.
Similarly, the Save Selected Matrices item allows you to save the variable
names currently appearing in the Workspace Manager window to a MATLAB
MAT-file. Upon selecting Save Selected Matrices, you are prompted with the
box shown in Figure 6-8.
You must enter the name of the file in editable text frame in which to store
these variables. The default filename for the variables to be saved into is
savefile. Pressing the Save push button saves the selected variables from the
MATLAB workspace to the filename as defined by the editable text string.
Pressing Cancel cancels this command. The Quit item quits the workspace tool
and deletes the Workspace Manager window.
Options Menu
The Options menu at the top of the Workspace Manager window has three
menu items, as seen in Figure 6-9.
The CleanUp item redisplays the variable names and data appearing in the
Workspace Manager window. The other two menu items are Font and # of
Lines. These items correspond to the font type and number of items shown in
the workspace window. The Font menu item allows you to select a font size of
7 to 12 to display the data (see Figure 6-10). The # of Lines menu item allows
you to select the number of lines of data displayed in the workspace window.
You can select between [12 20 28 36 44] lines of data. This is especially useful
since the Workspace Manager window is resizable.
Note You must select the CleanUp item from the Options menu after
resizing.
Export Button
The Export button and editable text boxes at the bottom of the Workspace
Manager window allow you to export data from other µ-Tools user interfaces
to the MATLAB workspace. The Export button also allows you to copy
workspace variables, although it is just as easy to do this at the MATLAB
command line.
Consider the following example of how to copy variables. Select all the
SYSTEM matrices currently in your MATLAB workspace. To copy the wp
SYSTEM matrix to the variable TEMP, type wp in the editable text box to the
right of the Export button and TEMP in the editable text box to the right of As.
Your Workspace Manager window should correspond to Figure 6-11.
Pressing the Export button copies wp to the variable TEMP in your MATLAB
workspace. The text display in the message bar shows the MATLAB command
and the time and date it was executed. Your Workspace Manager window
should look like Figure 6-12 after the Refresh Variables button is pressed.
The drop box to the right of the Export button provides another manner to
deposit information to be copied into the workspace. For more information
about how to use µ-Tools drop boxes see the “Dragging and Dropping Icons”
section of this chapter.
Figure 6-11: wsgui Main Window with SYSTEMs Selected and Export Data
Spinning Satellite Example: Problem Statement
d/dt [θx; θy] = [ 0  10 ; −10  0 ] [θx; θy] + [ 1  0 ; 0  1 ] [u1; u2]
[y1; y2] = [ 1  10 ; −10  1 ] [θx; θy]
Therefore, the nominal state-space model for the spinning satellite is defined as
G = [ AG  BG ; CG  DG ] =
    [   0   10   1   0
      −10    0   0   1
        1   10   0   0
      −10    1   0   0 ]
The spinning satellite has the following characteristics that need to be included
in the control problem formulation:
• The model of the channel 1 actuator has uncertainty or error of 20% at low
frequency, below 1 rad/sec. This modeling error reaches 100% uncertainty at
20 rad/sec with very large potential errors in this actuator model above 200
rad/sec. We choose to model this in the µ framework as a multiplicative input
uncertainty. A frequency domain weight is constructed to describe the
percentage modeling error as a function of frequency. The multiplicative
uncertainty weight associated with the first actuator is
Wdel1 := 10(s + 4) / (s + 200)
[Plot: uncertainty weight magnitude versus frequency (rad/sec)]
• The channel 2 actuator has 40% uncertainty at low frequency and the
uncertainty in this model reaches 100% at 58 rad/sec. The multiplicative
uncertainty weight associated with this actuator is
Wdel2 := 10(s + 24) / (3(s + 200))
• The noise on the measurements is modeled with a weighting function wn,
wn := 12(s + 25) / (5(s + 6000))
Based on this weight, at low frequency, below 20 rad/sec, the noise signal has
a magnitude of ±0.01 radians; at high frequency the noise level reaches a
magnitude of ±2.4 radians.
• The desired closed-loop performance is to achieve 100:1 disturbance rejection
at DC. This can also be interpreted as desiring 1% tracking error at DC. The
performance objective is the same in each channel. Hence it is represented
by a diagonal performance weighting function Wp, Wp = wpI2×2. The scalar
transfer function wp is defined as
wp = (s + 4) / (2(s + 0.02))
[Plot: weighting function magnitude versus frequency (rad/sec)]
The control design block diagram for the spinning satellite is shown in
Figure 6-15. The uncertainty weights, Wdel1 and Wdel2, and the performance
weights, Wp and Wn, are design parameters that you, the control engineer, can
manipulate. These weights are used to incorporate information about the
physical system into the control design.
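For reference, a sketch of how the weights described above could be built with nd2sys (the supplied ssic file may construct the interconnection differently):
wdel1 = nd2sys([1 4],[1 200],10);       % 10(s+4)/(s+200)
wdel2 = nd2sys([1 24],[1 200],10/3);    % 10(s+24)/(3(s+200))
wn    = nd2sys([1 25],[1 6000],12/5);   % 12(s+25)/(5(s+6000))
wp    = nd2sys([1 4],[1 0.02],1/2);     % (s+4)/(2(s+0.02))
Wp    = daug(wp,wp);                    % Wp = wp * I2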
Let P denote this open-loop interconnection. Suppose we order the inputs and
outputs in P, as shown in Figure 6-16 (each signal represents a vector-valued
signal with two components).
Figure 6-15: Spinning Satellite Interconnection Structure
[Figure 6-16: Input/output ordering of the interconnection P_ss, with inputs w, d, n, u and outputs z, e, y]
To create and load the P_ss interconnection structure shown in Figure 6-16,
type
ssic
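Once ssic has run, you can confirm the interconnection's dimensions (these match the sizes reported later in this section):
minfo(P_ss)     % should report a SYSTEM matrix with 6 outputs, 8 inputs, and 8 states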
D-K Iteration User Interface Tool: dkitgui
• Main Iteration window, which is the main interface for the user during the
iteration.
• Setup window, where initial data is entered.
• Parameter window, which is occasionally used to modify properties of the D
– K iteration, such as H∞ parameters, and to select the variables that are
automatically exported to the workspace each iteration.
• Frequency Response window, where the plots of µ and σ of the closed-loop
transfer function matrix are displayed.
• Scaling window, where the rational fits of the frequency-dependent D-scale
data are shown, and can be modified.
The spinning satellite control design example from the previous section is used
to illustrate the basic features of dkitgui. Start the tool by typing
dkitgui
The main Iteration window appears, with a message displayed in its lower left
corner, as shown in Figure 6-17. This is the location of the message bar where
information about the D – K iteration is displayed.
Press the SETUP button as instructed. This brings the Setup window to the
foreground. The Setup window is shown in Figure 6-18.
You can enter this information in any order, except that the dimensions of the
uncertainty blocks can only be specified after entering the number of
uncertainty blocks. All data entry points are uicontrol editable text objects,
and operate in a machine dependent manner with which you should be
familiar. As before, when we instruct you to enter data in an editable text
object, this implicitly means enter the text, and complete the action (by
pressing Return, by pressing the mouse on another object, by moving the
mouse pointer out of the text object, etc.). See the MATLAB Function Reference
online for more details on completing a text entry.
Notice that three items in the Setup window — Controller, Iteration
Suffix, and Iteration Name — are enclosed in < >. This notation denotes a
variable that is optional, and no action is necessary.
Enter P_ss (the open-loop interconnection) in the open-loop editable text frame
(see Figure 6-19). Press the Open-Loop IC checkbox to load the data into the
Open-Loop IC variable. The pointer turns into an hourglass while MATLAB
loads the data. Upon loading the data, the pointer turns back into an arrow,
and the matrix type (S for System, C for Constant, and V for Varying) of the
Open-Loop IC variable and its dimensions are displayed to the right of the
editable text. In this example the variable P_ss is a SYSTEM matrix with six
outputs, eight inputs, and eight states. The <Controller> data is optional.
This allows you to load an initial controller to start the D – K iteration or during
the D – K iteration you may want to load a reduced order controller.
Since there are two uncertain actuators in the spinning satellite problem, the
uncertainty structure has 2, 1-by-1 uncertainty blocks. Enter a 2 in the
Uncertainty Structure # of Blocks text field. This opens a 2-by-3 editable
matrix, as seen in Figure 6-20, where the dimensions of the uncertainty blocks
are entered. Enter a 1 for each row and column dimension. The third column,
labeled Fac, is used for scaling the size of the uncertainty during the iteration.
The value of Fac may be varied from 0.1 to 10, effectively reducing or increasing
the size of the uncertainty by a factor of 10. Leave these factors as 1 for the time
being.
In the spinning satellite problem (Figure 6-16), there are four exogenous
disturbances (two disturbance torques and two measurement noises) and two
penalized errors (two tracking errors). Enter a 2 in the # of Errors editable text
and a 4 into the # of Disturbances editable text in the Performance
Structure frame. The spinning satellite has two measured variables for
feedback and two control actuators. Enter these in the # of Measurements and
# of Controls editable text, respectively, in the Feedback Structure frame.
Entering data with these push buttons is optional. These push buttons are
used to enter the same data as SIGNAL DIMENSIONS and Frequency Range
data. To the left of each push button is a drop box and to the right of each push
button is an editable text box. Data that is dropped into the drop boxes or
entered in the editable text overwrites the data in the left column of the Setup
window. (See the “Dragging and Dropping Icons” section at the end of this
chapter for more details.) For example, instead of entering a 2 into the # of
Blocks text and 1, 1, 1, and 1 into each block structure, you could type
ones(2,2) in the Uncertainty editable text and press the Uncertainty push
button. This action overloads this data into the Uncertainty Structure by
redrawing the Uncertainty Structure frame and filling out the corresponding
editable text locations. Similarly the Performance data overwrites the
Performance Structure data, Feedback overwrites the Feedback Structure
data, and Omega overwrites the Frequency Range data.
To continue with the D – K iteration, first hide the Setup window. This is done
by pulling down the Window menu and selecting Hide Setup from the menu
and returning to the main window. Now select the Parameters window from
the Window menu.
The HinfSyn Parameters and Riccati Solver settings, shown in the table
below, correspond to the inputs of the H∞ control design program hinfsyn.
Gamma Min and Gamma Max are the minimum and maximum γ values.
The Suboptimal Tol is how close to the optimal γ value is desired. The
Imaginary Tol and Positive Def Tol are epr and epp in the hinfsyn program.
They correspond to the measure of when the real part of an eigenvalue of the
Hamiltonian matrices is zero and determination of the positive definiteness of
the X∞ and Y∞ solutions. The current default value for each parameter is shown
in parentheses, ( ), to the right of each label. The Riccati Solver has a mutually
exclusive set of buttons for selecting either the Schur or Eigenvalue method to
be used to solve the H∞ Riccati equations.
The HinfSyn Parameters frame also allows you to deselect the measurements
and controls used during the control design process. For the spinning satellite
example there are two controls and two measurements. The Measurements
Utilized frame indicates that all measurements are currently being used. You
can input to the Measurements Utilized editable text a standard MATLAB
vector to denote the measurements that are to be used. Similarly, you can
select in the Controls Utilized frame which control inputs are to be used. The
resulting control design will have zeros in the state space B or C matrix of the
controller corresponding to the measurements inputs or control outputs that
have been deselected.
The Structured Singular Value (Mu) settings frame has two sets of options.
These options correspond to calculation of the structured singular value (µ)
using the mu program during D – K iteration. The first set of buttons is
mutually exclusive. You can either select to use greatest accuracy or a less
accurate but faster technique for calculating µ. (The Optimal method calls the
mu program with the option 'c'; the Fast method calls mu with the 'f' option.) You
can also select to calculate only an upper bound for µ, which calls mu with the 'U'
option. Selecting this option speeds up the µ calculation, but you will be unable
option. Selecting this option speeds up the µ calculation, but you will be unable
to see how different the upper and lower µ bounds are when they are calculated
and plotted in the Mu/SVD Plot window.
The D-Scale Prefit frame contains settings for calculation of the rational
D-scales. The Max Auto-Order defines the maximum D-scale state order to be
used to fit an individual D-scaling during the prefitting part of the D-scales
fitting routine. The Max Auto-Order default is five states. The Auto-Fit
Tolerance scroll bar allows you to define how close the rational scaled µ upper
bound is to approximate the actual µ upper bound in a norm sense. Selecting
Loose will result in lower order D-scales being used in the D-scale prefitting,
whereas selecting Tight will likely result in higher order D-scales during the
D-scale prefit computation. This setting can play an important role in
determining which minimum of the D – K iteration is achieved. Currently this
is done by trial and error.
The Each Iteration: Export... frame shows data in the form of radio buttons
that is available to be exported to the MATLAB workspace. In the following
list, a subscript i denotes that the integer iteration number is added to the
variable’s name. These variables are:
The Iteration Suffix string, which was input in the Setup window, is appended
to the end of all the output variables selected in the Each Iteration: Export...
frame. In this example, the Controller and the Mu Analysis data is selected
for output. Therefore after the Control Design button executes for the first
time, the variable K1ss will be in the workspace.
Hide the Parameter window and return to the main D – K iteration window to
continue this example. This can be done by simply pulling down the Window
menu and first selecting the Iteration option. This moves the main Iteration
window into the foreground. The main window is now shown in Figure 6-24. Go
back to the Parameter window, pull-down the Window menu, and select Hide
Parameter to hide the Parameter window.
Figure 6-24: Main D – K Iteration Window After All the Data Is Specified
D-K Iteration
The main window has (at this point) five significant items:
• Five push buttons, whose actions constitute the D – K iteration. Recall, the
D – K iteration pertains to the picture shown below, and is
[Diagram: the scaled plant PD = D̂ P D̂^{−1} in feedback with the controller K]
Khinf H∞ controller
Kuse Controller K used in the current
D – K iteration.
Blk Block structure for µ calculation
Ydim Output dimension of the controller
Udim Input dimension of the controller
IC Open-loop interconnection structure
These variables can be dragged and dropped at anytime into other µ-Tools user
interface commands.
The Control Design push button is enabled, and is the first step of the
iteration. At this point, the D matrices are set to identity (of appropriate
dimension). Design the first H∞ controller by pressing the Control Design
button. The standard gamma iteration data from hinfsyn is displayed in the
MATLAB command window. The DK Iteration Summary table is updated at
the end of the control design, and the Form Closed-Loop button is enabled.
The next two steps are simple — form the closed-loop system, and calculate
closed-loop frequency response. Press the Form Closed-Loop and Frequency
Response buttons as they are enabled. As the frequency response of the
closed-loop system is calculated, a running tab of the number of frequency
points calculated is shown in the message frame of the main window. The norm
of the closed-loop transfer function as a function of frequency is plotted in the
Frequency Response window as seen in Figure 6-25. (Note that Figure 6-25
includes the µ plot, which is not accurate at this point in the D – K iteration.)
In general, this transfer function has the D-scalings from the previous iteration
(for the first iteration it’s just identity) and the controller which was just
designed. Upon completing the frequency response the Compute Mu button is
enabled.
Now, compute the structured singular value, µ, of the closed-loop frequency
response by pressing the Compute Mu button. The block structure was defined
in the Setup window, in the Block Structure field. Recall (see Chapter 4) that
the upper bound for µ is computed by determining the optimal D-scalings as a
function of frequency. Like the frequency response, a running tab of the µ
calculation is shown in the message frame of the main window. The results are
shown in Figure 6-25.
Figure 6-25: µ Upper Bound and Maximum Singular Value for First D – K
Iteration
You have now completed one D – K iteration. The DK Iteration Summary table
is completely updated for the first iteration, as shown in Figure 6-26. The Next
Iteration button is highlighted. Once the Next Iteration button is pushed you
cannot effectively return to the previous iteration.
Suppose you desire a more refined frequency response for the µ calculation.
You can do this before you move to the second iteration by changing the data
in the Frequency Range frame in the Setup window. Changing the number of
frequency response points from 60 to 80 results in the <Frequency Response>
button being enabled. Note that the sideways carrots < > around the button
name denote that you may select the frequency response or go to the next
iteration. Continuing to the next iteration without pressing the Frequency
button will not change any of the data calculated during the first iteration.
Pressing the Next Iteration button results in the D-scale data output from the
µ calculation being fit with rational D-scales. In the spinning satellite example,
there are two uncertainty blocks and one performance block, therefore, there
are two D-scales to be fit. A table entitled D Scaling Order appears in the main
D – K iteration window, as shown in Figure 6-27. The table contains the scaling
number and the order of each scaling. Each D-scale data is prefit with up to a
maximum state order transfer function in an attempt to minimize the
difference between the scaled µ upper bound and the µ upper bound with the
rational D-scales. The maximum order of the prefit D-scales is specified in the
D-Scale Prefit field, with the Max Auto-Order data in the Parameters
window.
The D-scaling information for the first D-scaling is shown in graphical form in
the Scaling window, Figure 6-28. Note that the first D-scaling was fit with a
first order transfer function (see Figure 6-27). There are three plots shown in
this window. The top plot shows the µ upper bound, which contains the
D-scaling data, and a plot of the scaled upper bound. The scaled upper bound
is calculated with the rational D-scales wrapped in to the original closed-loop
frequency response. The middle plot shows the D-scale magnitude data and the
rational fit for the first D-scale. The D-scale magnitude data is the variable
being fit. The bottom plot shows the sensitivity of the µ upper bound to changes
in the D-scale. The larger the sensitivity, the more important it is to fit the
D-scale well in that frequency range.
You can change which D-scale data is shown in the Scaling window by pressing
the ’- -’ or ’++’ buttons to the left of the Scaling title. This will cycle through
each of the D-scalings. The ’- -’ or ’++’ buttons to the left of the Order title
decrement or increment the order of the D-scale fit by 1. You can also change
the order of the D-scale fit by editing the D-scale order directly. Changing the
D fit order will affect the middle plot, which shows the magnitude data (solid
line) and the rational fit (dashed line) and will also affect the current scaled
upper bound (dashed line) shown in the top figure of the Scaling window. Note
that the goal of D – K iteration is to reduce the µ upper bound. It is usually
important that the current scaled upper bound, which incorporates the rational
D-scalings, closely matches the calculated µ upper bound. This is especially
true in the frequency range where µ is large.
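The same comparison can be made at the command line. The lines below are only a sketch, not part of dkitgui: they assume clpg is the closed-loop frequency response and blk is the block structure entered in the Setup window, and they wrap the frequency-by-frequency D-scale data (rather than a rational fit) around the closed loop, so the two curves essentially coincide; substituting rational fits for dl and dr shows the quality of the fit.

% Sketch: compare the mu upper bound with the D-scaled maximum singular value.
% clpg and blk are assumed to be the closed-loop frequency response and the
% block structure from the Setup window.
[bnds,dvec,sens,pvec] = mu(clpg,blk);
[dl,dr] = muunwrap(dvec,blk);           % frequency-by-frequency D-scales
sclclpg = mmult(dl,mmult(clpg,minv(dr)));
vplot('liv,m',vnorm(sclclpg),'--',sel(bnds,1,1),'-')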
The D-scale data for this example are both fit with first order transfer
functions. These fits appear to be sufficient, therefore we will go on to the
second control design. Pressing the Control Design button wraps the rational
D-scalings into the original interconnection structure, P_ss, and designs a new
H∞ controller. For this second D – K iteration, a γ value of 7.89 is achieved.
We now want to run the next three buttons in sequence. This can be done by
pulling down the Iteration menubar in the main Iteration window. Select
Auto Steps and drag the mouse to the right of Auto Steps to choose the menu
item Next 3 Steps, as shown in Figure 6-29. This automatically runs
the next three steps of the D – K iteration. Selecting either the Auto Steps or
Auto Iterate will result in the appearance of a Stop button in the main
Iteration window below the DK Iteration Summary table. Pressing the Stop
button terminates the automated D – K iteration after the current button
running has completed.
Under the Iteration menu, the Restart option allows you to restart a D – K
iteration at the very beginning while leaving the Setup window data intact.
This is often useful if the incorrect weights were selected and you would like to
reload the system interconnection structure and start over.
Note that after two complete D – K iterations we have achieved a µ value of 2.11
(Figure 6-30). The objective is to achieve a µ value less than 1. Therefore,
several more D – K iterations may be required. User interaction can be
eliminated from the D – K iteration by selecting the Auto Iterate menu item
from the DK Iteration window, Iteration menubar, as shown in Figure 6-31.
Dragging the mouse to the right of the Auto Iterate menu item allows you to
select from one to eight automated D – K iterations. During each iteration,
the rational D-scale order is selected automatically by the dkitgui program.
Again, the Stop button allows you to terminate the Auto Iterate option at any
time. After three complete D – K iterations, a µ value of 1.03 is achieved.
Selecting one more automated D – K iteration results in a µ value of 0.91 after
four iterations (Figure 6-32).
Based on the setting of the Each Iteration Export radio buttons in the
Parameter window, the controller designed at each iteration has been exported
to the MATLAB workspace. Therefore, after four D – K iterations, controllers
K1ss, K2ss, K3ss, and K4ss are present in your workspace. We also have data
from the µ analysis, ddataiss, desnsiss, mubndiss, and pertiss in the
workspace. If you have your Workspace Manager open, press the Refresh
Variables button. The results are shown in Figure 6-33.
Options Menu
The Options menu in the main dkitgui window allows you to perform two
specialized operations. Selecting the Auto_Refresh K item from the Options
menu will refresh the <Controller> editable text in the Setup window after
successful completion of the Control Design button. Any valid MATLAB
expression can be typed in the <Controller> editable text space. This allows
you, for example, to have an automated controller reduction scheme executed
after the design of the H∞ controller.
Selecting the Auto_Refresh Olic item from the Options menu will refresh the
Open-Loop IC editable text in the Setup window after successful completion
of fitting the D-scale data. Any valid expression can be typed in the
Open-Loop IC editable text space. This allows you to have an automated
program that modifies the open-loop interconnection weightings based on the
value of µ from the previous iteration.
LFT Time Simulation User Interface Tool: simgui
The simgui linear fractional transformation (LFT) time simulation tool is made
up of several types of windows:
• Main Simulation Tool window, which is the main interface for the
simulation.
• Parameter window, which is used to modify properties of the time
simulation, such as the final time, integration step size, initial conditions,
and variables automatically exported to the workspace.
• Plot windows, where the plots of time responses are displayed. You can open
up to six of these windows.
[Figure 6-35: Closed-loop interconnection structure Ptss for the spinning satellite example, with perturbation channels w/z, disturbance d, noise n, control u, and outputs e, y, d, and u]
Two controllers to be analyzed are K1ss and K4ss. K1ss is the controller design
from the first D – K iteration and K4ss was designed in the fourth D – K
iteration. They are either present in your current workspace or they can be
loaded by typing
load ss_cont
at the command line. The function simgui will be used to analyze the time
response of these controllers. Typing the command tssic creates and loads the
Ptss interconnection structure.
tssic
Setting Up simgui
Start the linear fractional transformation (LFT) time simulation tool by typing
simgui
A message showing the progress of the setup appears in the lower left corner of
the main window. This is the location of the
message bar where information about the time simulation is displayed. Upon
completion of the simgui setup, the main window displays the message
Done with setup
The simgui tool simulates linear fractional models and plots their responses.
simgui is based on the standard linear fractional model shown in Figure 6-37.
-
w z
P
e d
y u
- K
Figure 6-37: Standard Linear Fractional Model
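At the command line, the responses that simgui produces correspond to closing the controller and perturbation loops of this LFT model with starp and simulating with trsp. The lines below are only a sketch; the names P, K, delta, and u stand for whatever plant, controller, perturbation, and input signal have been entered, and the dimension values are examples.

% Sketch of the simulations simgui performs (names and dimensions are examples).
nmeas = 2; nctrl = 2;       % measurements fed to K and controls returned by K
npert = 2;                  % size of the perturbation channel
clpnom = starp(P,K,nmeas,nctrl);           % nominal closed loop
clppert = starp(delta,clpnom,npert,npert); % perturbed closed loop
ycln = trsp(clpnom,u,10,0.01);             % closed-loop nominal response
yclp = trsp(clppert,u,10,0.01);            % closed-loop perturbed response
vplot(ycln,'-',yclp,'--')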
If this is the only data entered, the input dimension of the Plant matrix must
match the row dimension of the Input Signal VARYING or CONSTANT
matrix. In addition you can enter a:
• Controller
• Perturbation
For this example, enter K1ss in the Controller text entry and press the
Controller push button. Then enter a constant perturbation matrix (for
instance, the 2 × 2 matrix diag([0.1, -0.1])) in the Perturbation text entry and
press the Perturbation push button.
Plant, Perturbation, and Controller correspond directly to P, ∆, and K in
Figure 6-37. Based on the dimensions of these matrices, the Input Signal must
have four rows of signals for the system to have the correct dimensions. An
error message will be displayed if the dimensions of the interconnected system
are incorrect. The resulting closed-loop, perturbed system has a total of eight
outputs. They are e, y, d, and u, as shown in Figure 6-35.
Let’s create an input signal for the spinning satellite example. The first two
inputs are disturbances, and the third and fourth inputs are sensor noise. For
this example, input a unit step command into channel 1, zero input to channel
2, and a normally distributed random noise signal into channels 3 and 4 which
is scaled by 0.05. Typing the following commands at the MATLAB prompt will
generate this input signal.
u1 = step_tr(0,1,.01,5);
u2 = mscl(u1,0);
t = getiv(u1);
u34 = siggen('0.05*randn(2,1)',t);
u = abv(u1,u2,u34);
Enter u in the Input Signal text entry and press its push button. The
successful entry of the input signal results in the appearance of the Plots and
Line Types scroll table in the main simulation window, as shown in
Figure 6-38.
With the Grouped pull-down menu selected, as seen in Figure 6-40, pressing the
third output checkbox of the Closed-Loop Nominal response selects the third
output of all four responses. In this example, the third output of the Open-Loop
and Closed-Loop Nominal responses would therefore be plotted on Plot Page #1.
The Free Form option decouples the responses: pressing an output channel
checkbox only selects or deselects that checkbox.
The text in the checkbox corresponds to a MATLAB color type, next to a
MATLAB line type. For example, the default line types and colors for a color
monitor are: Open-Loop Nominal outputs are yellow (y) and solid lines (–).
Closed-Loop Nominal outputs are magenta (m) and dotted lines (:).
The LineStyle menu provides a way of modifying the line color and type
(Figure 6-41). Selecting edit from the LineStyle menu changes the checkboxes
into uicontrol editable text objects, and operates in a machine dependent
manner with which you should be familiar. As before, when we instruct you to
enter data in an editable text object, this implicitly means enter the text, and
complete the action (by pressing Return, by pressing the mouse on another
object, by moving the mouse out of the text object, etc.). See the online
MATLAB Function Reference for more details on completing a text entry.
You can modify the first output of the Closed-Loop Nominal channels to be
white (w) and dashed (--), as shown in Figure 6-42.
Press the Done Edit button in the top right corner of the Plots and Line Types
table to return the table to checkboxes for output selection. The Default
selection of the LineStyle menu provides four different line type defaults:
• Color
1 Open-Loop Nominal outputs: yellow, solid (y-)
2 Open-Loop Perturbed outputs: red, dashed (r--)
3 Closed-Loop Nominal outputs: magenta, dotted (m:)
4 Closed-Loop Perturbed outputs: white, dashed-dotted (w-.)
• Color Symbols
1 Open-Loop Nominal outputs: yellow, x (yx)
2 Open-Loop Perturbed outputs: red, star (r*)
3 Closed-Loop Nominal outputs: magenta, plus (m+)
4 Closed-Loop Perturbed outputs: white, circle (wo)
The Plot Figure frame, Plot Fig#, corresponds to the plotting window number
displayed. There can be a maximum of six plotting windows. Pressing the
increment ++ and decrement -- buttons changes the page number and brings
the new plot window to the foreground. If the page number is incremented to a
page that previously didn’t exist, a new page is created with one plot axes. You
can also change the Plot Fig# editable text to reflect the plot page desired. The
input to this editable text must be a positive integer between 1 and 6.
The row number, Row# frame, has two editable texts and a decrement -- and
increment ++ button, as seen in Figure 6-44. The second editable text, after
of, corresponds to the number of subplot axes rows desired for the current page
(Plot Fig#). The first editable text, after Row#, indicates the row number of the
corresponding subplot on the given Plot Fig#; its minimum value is 1 and its
maximum is the value shown in the second editable text. The increment and
decrement buttons increase or decrease this row number by 1.
Similarly, the column number, Col# frame, has two editable texts and a
decrement -- and increment ++ button (see Figure 6-46). The second editable
text, after of, corresponds to the number of subplot axes columns desired for
the current page (Plot Fig#). The first editable text, after Col#, indicates the
column number of the corresponding subplot on the given Plot Fig#. The
increment and decrement buttons increase or decrease by 1 the column number
shown in the first Col# editable text. The minimum column number is 1 and
the maximum is the value of the second Col# editable text.
For example, setting the second editable text of Row# to 2 and of Col# to 3
would result in six subplots, two rows of three columns. Changing either of
these entries causes a push button labeled Apply to appear to the right of the
two frames, as shown in Figure 6-44. The new subplot layout is applied when
this button is pushed. As before, the inputs to the editable text frames must be
positive, nonzero integers.
The Plot Fig# and the second editable text frames of the Row#/Col# frames
indicate the layout of the current simulation plot page. The Plot Fig#, and first
editable text of the Row#, and Col# frames indicate which plot data is
currently in the Plots and LineTypes scroll table and their corresponding plot
labels and fonts. The first editable text in the Row# and Col# frames indicate
the row and column number of the corresponding subplot in the Plot Fig#. The
minimum first editable text values of Row#, and Col# are 1, and their
maximum values correspond to the values shown in the second editable text
frames in Row# and Col# respectively. In this example, let’s include two plots
on the first page, Plot Fig# 1, one above the other. Enter a 2 in the # of rows
editable text frame and press the Apply button, as seen in Figure 6-45.
Figure 6-45: Current Plotting Figure Information with Apply Button Enabled
Increment the Row# to 2, to indicate the bottom plot (subplot 212 of Plot Page
#1). Enter the title of Plant Error Output 2, an x-axis label of Time and a
y-axis label of Degrees. Select the font size to be 11 for the text and the axes,
grid off, and select output 2 Open-Loop Nominal and Closed-Loop Nominal
responses and the second output of these two responses to be plotted in the
Plots and LineType scroll table. Your first simulation plot page should look
the same as Figure 6-47.
Increment the Plot Fig# to 2. This creates a new page with one plot axes and
brings it to the foreground. The Window pull-down menu in any simgui
window shows all the current simgui windows and allows you to hide the
current window or bring any of the other windows to the foreground. For Plot
Fig# 2, enter the title of Closed-loop plant outputs and controls, an x-axis
label of Time and a y-axis label of Degrees - Newtons. Select the font size to be
9 for the title, and 10 for all the text and the axes, grid off, and select the
Closed-Loop Nominal responses. Change the Grouped button in the Plots
and LineType scroll table to Free Form and select the y and u outputs
(outputs 3, 4, 5, and 6) to be plotted. Change the line type of output 3 to white,
solid (w-), output 4 to white, dashed (w--), output 5 to white, dashed-dotted
(w-.), and output 6 to white, dotted (w:). You have now finished setting up the
plotting data and labeling the plots. It’s now time to calculate the
continuous-time simulation.
Pull down the Window menu from any simgui window and select the
Parameters window. Select an integration time of 0.01 second by entering
0.01 into the Integration Step Size editable text, as seen in Figure 6-48.
Pull down the Window menu in the Parameter window and hide the
Parameter window. Press the Compute button in the main window to initiate
the simulations. Immediately the message Computing appears in the message
bar and the label on the Compute button changes to Stop. By pressing the
Stop button, the time simulation will terminate at the next available execution
of a break. After the Computing message appears in the message bar, and
assuming that the Stop button was not pressed, one of two pieces of
information will appear in the message bar. If each simulation is estimated to
take less than 3 minutes to compute, then a running tab of the simulation as it
progresses is shown. If a simulation is estimated to take more than 3 minutes
to compute, then the message
Simulation will take approx. X seconds:
check Integration Step Size
is displayed instead.
Let’s compare these results to those with controller K4ss implemented. Return
to the Main Simulation window and enter K4ss in the Controller editable
text frame. Press the Controller push button. This loads K4ss into the
Controller variable and enables the Compute button. The previous
simulation data with K1ss implemented is deleted from the plot windows, since
it no longer corresponds to the data currently entered.
Printing Menu
Each Simulation Plot Page has a Printing menu for sending that figure to a
printer or saving it to a file. Selecting the Print option from the Printing menu
will pop up a Print dialog box, as shown in Figure 6-51. This window has three
editable text boxes across from Device, Options, and Filename, and a Print
and Cancel button. Device, Options, and Filename correspond to the exact
same inputs you would provide to the standard MATLAB print command.
Therefore, the exact string -dps and a filename have to be entered into the
Device and Filename editable text respectively, to output the plot to a
postscript file.
Leaving the three editable text boxes empty and pressing the Print button will
send the current figure to the printer. This is similar to the MATLAB print
command. Pressing the Cancel button will not execute any print command.
After filling in any or all of the three editable text boxes, pressing the Print
button will execute the MATLAB print command. Pressing either Print or
Cancel executes that command and hides this window.
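For reference, the entries map directly onto MATLAB print syntax. Leaving everything empty corresponds to print with no arguments, while a Device of -dps together with a file name (the name below is only an example) corresponds to the second call:

print                    % current figure to the default printer
print -dps sim_page1     % current figure to the PostScript file sim_page1.ps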
Selecting the Load Setup from the menu will display a Load Setup dialog box,
as shown in Figure 6-53. The Load Setup option is used to load plot setup
information that was previously saved using the Save Setup option. You can
enter a Variable name, (the default is SAVESET), and a Filename. If the
Filename editable text is left empty, the variable is loaded from the current
workspace. Pressing the Load button loads the data from the location
described by the Variable and Filename data and hides the window. Pressing
Cancel hides the window and loads no data. The data loaded includes all the
Plots and LineTypes Table information, plotting information and labels, final
time, integration time, sample time, initial conditions, export suffix, and
simulation name if available. If an error occurs during the load operation, an
error message will appear in the main window message bar.
Selecting Save Setup from the File menu will display a Save Setup dialog box,
very similar to Figure 6-53 except with the Save button replacing the Load
button. This option saves all the current plot and line type data along with the
labels, final time, integration time, sample time, initial conditions, export
suffix, and simulation name, if available, to the Variable string name.
The Variable and Filename editable text and the Cancel button operate in the
exact same manner as in the Load Setup dialog box. Pressing the Save
button saves the data to the current workspace in the Variable editable text
string. If this is empty, the default variable name is set to SAVESET. The data is
saved to the filename defined by the Filename editable text, or to the filename
SAVESET if there is no Filename string.
The Load Setup and Save Setup options are extremely useful. They allow you
to customize simulation plots for use with many plants, controllers,
perturbations, and input signals.
The Quit button of the File menu exits the simgui tool and deletes all the data
and windows opened by simgui.
Simulation Type
You can select three different time simulations from the Options menu on the
main window, as seen in Figure 6-54.
For a sampled-data simulation, the default final time and integration step size
are the same as in the
continuous-time case. The default sample time is the same as the discrete-time
case. See sdtrsp for more information.
Progress of all these simulations is shown in the message bar of the main
window during their computation.
and their meaning. See “Dragging and Dropping Icons” for more information
on how to drag and drop µ-Tools linkage variables.
The Export to Workspace frame contains variables that you can export to the
MATLAB workspace. Each variable is a radio button, which you can select or
deselect. If a variable is selected, the variable is exported to the MATLAB
workspace each time that simulation is performed. The default setting is to
export YOLN, YOLP, YCLN, and YCLP every time they are calculated. The Export
to Workspace names and variables are:
The Iteration Suffix string is appended to the end of all the output variables
selected. The default is to export the time simulation data YOLN, YOLP, YCLN,
and YCLP as they are computed.
Note These exported variables are overwritten with their new output
responses after the individual time responses are calculated.
Figure 6-56: dkitgui (left) and simgui (right) Linkable Variables Icon Tables for
Dragging
To drag an icon, which is a variable name, position the mouse arrow over the
variable name to be dragged. Press the left mouse button and hold the button
down. You have now selected the variable name for dragging. For example,
select the idmod variable for dragging from the wsgui scroll table and move the
mouse button slightly. (Note: the wsgui workspace has just loaded the mk_wts
file from the MATLAB workspace.) Your screen should look similar to
Figure 6-57.
There are several important facts regarding the dragging and dropping of icons
that should be noted:
• Icons being dragged are only visible in windows, and they are not visible
when the icons are over a frame or other MATLAB uicontrol objects.
• Dragging an icon may be slow on networked computers. This is due to the
overhead of transferring the information about the icon being dragged.
• The drag and drop operation is only completed if the icon is deposited in a
drop box.
For example, you can select to drag the fourth D – K iteration controller from
the spinning satellite example in the “D-K Iteration User Interface Tool:
dkitgui” section to the LFT time simulation tool, simgui. Select the Khinf link
variable in the dkitgui main window and move the mouse button slightly
(Figure 6-57). Drag the Khinf link variable over to the main simgui window
(Figure 6-58) and drop it in the Controller drop box, as shown in Figure 6-59.
Upon successfully dragging and dropping the data, the string
grabvar(1,'Khinf') appears in the main simgui window Controller editable
text frame.
In simgui and dkitgui the linkable variables are extracted from a large
storage matrix. The function grabvar performs this operation. The inputs to
the grabvar function are the µ-Tools GUI figure number and the variable name
in the originating function. In this example, the variable dragged from dkitgui
to simgui is Khinf. The dkitgui figure number is 1, which corresponds to the 1
in the simgui editable text: grabvar(1,'Khinf'), as seen in Figure 6-60. The
drag and drop operation is responsible for entering the required input data and
requires no additional input from you.
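You can also call grabvar yourself from the command line. Assuming, as in this example, that the dkitgui main window is MATLAB figure 1, the following retrieves the current H∞ controller into an ordinary workspace variable (the variable name Kdrag is arbitrary):

Kdrag = grabvar(1,'Khinf');   % the same call the drop operation enters for you
minfo(Kdrag)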
7
Robust Control Examples
This chapter has a number of examples to show how to apply the µ-Analysis
and Synthesis Toolbox (µ-Tools) to robust control problems. The examples
include SISO and MIMO gain and phase margin analysis, robustness analysis
of two controllers for an unstable system, MIMO margins computed with µ,
and a robustness analysis of a space shuttle flight control system.
SISO Gain and Phase Margins
Consider the plant

$$G(s) = \frac{s-2}{2s-1}$$

and the four controllers

$$K_1(s) = 1$$

$$K_2(s) = \frac{s+\beta}{\beta s+1} \qquad (\beta > 0,\ \text{as } \beta \to 2)$$

$$K_3(s) = \frac{\beta s+1}{s+\beta} \qquad (\beta > 0,\ \text{as } \beta \to 2)$$

$$K_4(s) = \frac{s+2.5}{2.5s+1}\cdot\frac{1.7s^2+1.5s+1}{s^2+1.5s+1.7}$$
[Figure 7-1: Feedback interconnection of the plant G and controller K used to form the closed-loop system]
For the problem at hand, form the closed-loop system and calculate the poles of
the closed-loop system.
K2 = nd2sys([1 1.9],[1.9 1]);
clp2 = formloop(G,K2);
spoles(clp2)
k3 = nd2sys([1.9 1],[1 1.9]);
clp3 = formloop(G,k3);
spoles(clp3)
k4 = nd2sys([1 2.5],[2.5 1]);
k4 = mmult(k4,nd2sys([1.7 1.5 1],[1 1.5 1.7]));
clp4 = formloop(G,k4);
spoles(clp4)
For gain and phase margins, we are interested in the loop transfer function L(s)
:= G(s) K(s). Select a frequency range of 0.01 rad/sec to 100 rad/sec with 100
points. Calculate the frequency response of the loop transfer function and plot
the Bode and Nyquist diagrams of L. The command add_disk adds a unit disk
to the Nyquist plot. The results are shown in Figure 7-2.
omega = logspace(-2,2,100);
G_g = frsp(G,omega);
k1_g = frsp(k1,omega);L1 = mmult(G_g,k1_g);
K2_g = frsp(K2,omega);L2 = mmult(G_g,K2_g);
k3_g = frsp(k3,omega);L3 = mmult(G_g,k3_g);
k4_g = frsp(k4,omega);L4 = mmult(G_g,k4_g);
vplot('nyq',L1,'-',L2,'--',L3,'-.',L4,'.'), add_disk
axis([-4 1 -1 1])
vplot('bode',L1,'-',L2,'--',L3,'-.',L4,'.'), grid
Figure 7-2: Nyquist and Bode Plots of the Loop Gain with k1, k2, k3, and k4
MIMO Loop-at-a-Time Margins
[Figure 7-3: MIMO feedback loop with plant G and controller K]

$$G := \frac{1}{s^2+\alpha^2}\begin{bmatrix} s-\alpha^2 & \alpha(s+1) \\ -\alpha(s+1) & s-\alpha^2 \end{bmatrix}, \qquad K = I_2$$
[Figure 7-4: Perturbed closed-loop system, with multiplicative perturbations 1 + δ1 and 1 + δ2 in the two feedback channels]
Break the loop at the place where one perturbation is, and compute the
open-loop transfer function. This transfer function will be a function of the
remaining perturbation. For instance, to check the margins in the first channel
with perturbations in the second channel, consider the diagram in Figure 7-5.
[Figure 7-5: Closed-loop system with the loop broken in channel 1 (signals w1, z1) and a perturbation in channel 2 (signals w2, z2)]
In this particular example, K is the identity, so the loop in Figure 7-5 can be
redrawn, as shown in Figure 7-6. This figure is easily constructed using starp.
Figure 7-6: Redrawn Closed-Loop System with Loop Broken in Channel 1 and
Perturbation in Channel 2
A Bode or Nyquist plot of the transfer function from z1 to w1 reveals the margin
(nearness to the +1 point in this case, since the negative sign is embedded in
the loop).

[Figure: Bode plots of the loop-at-a-time open-loop transfer functions in each channel]
The previous Bode plots show that each channel can tolerate large,
loop-at-a-time, gain variations. Now though, consider a simultaneous 10%
variation in each loop. Specifically, check the internal stability of the perturbed
closed-loop shown in Figure 7-4, redrawn in Figure 7-8, with δ1 = –1/√101
and δ2 = 1/√101.
To do this, note that since K = I2, the perturbed loop in Figure 7-4 is simply the
star product (starp) of –G and the matrix [1+delta1 0; 0 1+delta2].
Note When two SYSTEM matrices are connected using starp, and all of the
inputs of the top system are all of the outputs of the bottom system, and all of
the inputs of the bottom system are all of the outputs of the top system, then
the output of starp is the “A” matrix of the interconnection, stored as a
regular MATLAB matrix.
The following code creates two perturbations of size 1/√101, creates the
perturbation matrix, forms the closed-loop system and calculates the
closed-loop eigenvalues.
delta1 = -1/sqrt(101);
delta2 = 1/sqrt(101);
delta = [1+delta1 0 ; 0 1+delta2];
minfo(G)
minfo(delta)
clpAmat = starp(mscl(G,-1),delta,2,2)
minfo(clpAmat)
eig(clpAmat)
Notice that the eigenvalues of clpAmat are unstable. Hence, this small (10%),
simultaneous perturbation to both channels causes instability (slightly larger
values for δ1 and δ2 push the eigenvalues further into the right-half-plane).
We can see the effect of simultaneous perturbations with the help of a Nyquist
plot. Refer back to Figure 7-6 and calculate the transfer function obtained by
breaking the loop in the first channel, with a perturbation of δ2 = 0.001. The
single-loop margin in channel 1 is still quite large, since the Nyquist plot, solid
line in Figure 7-9 intersects the real axis around 11. Try a slightly larger value
for δ2 (again, note that the +1 point is of interest due to the extra minus (–) sign
on G).
delta2 = 0.001;
pert_ch1_lb1 = starp(mscl(Gg,-1),1+delta2,1,1);
vplot('nyq',pert_ch1_lb1);
delta2 = 0.01;
pert_ch1_lb2 = starp(mscl(Gg,-1),1+delta2,1,1);
vplot('nyq',pert_ch1_lb1,'-',pert_ch1_lb2,'--');
add_disk
It is easy to write a for loop (as in ex_ml1) that calculates the open-loop
frequency response in channel 1 for several values of δ2. Six δ2 values are
constructed from 0.001 to 0.1. A for loop calculates the perturbed loop transfer
function in channel 1 for the six values of δ2 and their responses are plotted on
a Nyquist diagram.
file:ex_ml1.m
delta2values = logspace(-3,-1,6);
pert_ch1_lb = [];
for i=1:length(delta2values)
delta2 = delta2values(i);
pert_ch1_lb = ...
sbs(pert_ch1_lb,starp(mscl(Gg,-1),1+delta2,1,1));
end
xa = vpck([-2; 12],[1 2]);
ya = vpck([-sqrt(-1); 6*sqrt(-1)],[1 2]);
vplot('nyq',pert_ch1_lb,xa,ya);
title('Loop Broken in Channel 1, Delta2 in [.001 .1]')
xlabel('Real')
ylabel('Imag')
Typing
ex_ml1
at the command line generates Figure 7-10. Note that the margin in channel 1
rapidly disappears due to small perturbations in channel 2.
Exactly analogous results hold for breaking the loop in channel 2, and
considering small perturbations in channel 1. This can be verified by closing
the upper loop of –G with (1 + δ1). For example,
delta1 = -0.01;
pert_ch2_lb = starp(1+delta1,mscl(Gg,-1),1,1);
vplot('nyq',pert_ch2_lb);
add_disk
Hence, in a multivariable system the single loop margins may be good though
simultaneous interaction of perturbations in each loop may lead to instability,
even for small perturbations. The structured singular value, µ, can be used to
detect the nonrobustness in this feedback system. To see this we will reanalyze
this example using µ in the “MIMO Margins Using µ” section in this chapter.
Analysis of Controllers for an Unstable System
Consider the nominal plant model and uncertainty weight

$$G := \frac{1}{s-1}, \qquad W_u := \frac{\frac{1}{4}\left(\frac{1}{2}s+1\right)}{\frac{1}{32}s+1}$$

which generate the set of possible plant models

$$\mathcal{M}(G,W_u) := \left\{\, G\,(1+\Delta_G W_u) \;:\; \max_\omega \left|\Delta_G(j\omega)\right| \le 1 \,\right\} \tag{7-1}$$
with the restriction that the number of right-half plane poles of the perturbed
plant equals the number of right-half plane poles of G. The nominal model G,
weighting function Wu, and the unknown transfer function ∆G are used to
parameterize all the possible models.
We are interested in the stability and performance of the closed-loop system for
all possible plant models in the set M(G, Wu). In this example, we choose the
performance objective to be a stable, closed-loop system, and output
disturbance rejection up to 0.6 rad/sec, with at least 100:1 disturbance rejection
at DC.
This objective can approximately be represented as a weighted H∞ norm
constraint on the sensitivity function,
$$\left\| \frac{W_p}{1-\tilde{G}K} \right\|_\infty \le 1$$

where the performance weight is

$$W_p := \frac{\frac{1}{4}s+0.6}{s+0.006}$$
Grouping G, Wu, and Wp together (as P), the uncertain closed-loop system is
redrawn in Figure 7-13.
Figure 7-13: General Interconnection for Uncertain System
Type
ex_unic
file: ex_unic.m
G = nd2sys(1,[1 -1]);
Wu = nd2sys([0.5 1],[0.03125 1],.25);
WP = nd2sys([.25 0.6],[1 0.006]);
systemnames = 'G Wu WP';
sysoutname = 'P';
inputvar = '[ z; d; u ]';
input_to_G = '[ z + u ]';
input_to_Wu = '[ u ]';
input_to_WP = '[ G + d ]';
outputvar = '[ Wu; WP; G + d ]';
cleanupsysic = 'yes';
sysic
Next, consider two controllers, K1 and K2 which stabilize the nominal plant
model G.
$$K_1 = -10\,\frac{0.9s+1}{s} \qquad\qquad K_2 = -1\,\frac{2.8s+1}{s}$$
The actual goal is to achieve the performance objective for every plant model
as defined by the dashed-box in Figure 7-12. This objective is defined as robust
performance. We can use the structured singular value, µ, to analyze the
closed-loop system to determine if robust performance is achieved. Achieving
robust performance is mathematically equivalent to

$$\max_{\omega \in \mathbf{R}} \mu_{\Delta_P}(M(j\omega)) \le 1$$
Note that while K1 achieves better nominal performance than K2, the closed-
loop system with K2 has better robust performance properties than with K1. In
fact, the controller K2 does, just barely, achieve robust performance. Hence, for
every plant G̃ ∈ M(G, Wu), the closed-loop system with K2 is stable, and
moreover

$$\left\| \frac{W_p}{1-\tilde{G}K_2} \right\|_\infty < 1$$
This is not true for the closed-loop system with K1, as the robust performance
test, using µ, is not satisfied for FL(P,K1).
The script-file ex_usrp, listed below, performs these calculations and generates
the plots.
file: ex_usrp.m
K1 = nd2sys([.9 1],[1 0],-10);
K2 = nd2sys([2.8 1],[1 0],-1);
om = logspace(-2,2,80);
M1 = starp(P,K1);
M2 = starp(P,K2);
M1g = frsp(M1,om);
M2g = frsp(M2,om);
uncblk = [1 1];
fictblk = [1 1];
deltaset = [uncblk;fictblk];
bnds1 = mu(M1g,deltaset);
bnds2 = mu(M2g,deltaset);
vplot('liv,m',sel(M1g,2,2),'-',sel(M2g,2,2),'--')
xlabel('Frequency (rad/sec)')
ylabel('M22');
title('Nominal Performance (K1 solid, K2 dashed)');
vplot('liv,m',sel(M1g,1,1),'-',sel(M2g,1,1),'--')
xlabel('Frequency (rad/sec)')
ylabel('M11');
title('Robust Stability (K1 solid, K2 dashed)');
vplot('liv,m',sel(bnds1,1,1),'-',sel(bnds2,1,1),'--')
xlabel('Frequency (rad/sec)')
ylabel('mu(M)');
title('Robust Performance (K1 solid, K2 dashed)');
How does the weighted sensitivity transfer function degrade with uncertainty?
This can be answered easily using the worst-case performance analysis, which
shows the worst-case degradation of performance as a function of the size of the
uncertainty. We also construct, for each closed-loop system, the worst-case
perturbation of size 1 (i.e., the worst-case plant from M(G, Wu)) and use this
later in time-domain simulations. The script file
ex_wcp
file: ex_wcp.m
[deltabad1,wcp_l1,wcp_u1] = wcperf(M1g,uncblk,1,8);
[deltabad2,wcp_l2,wcp_u2] = wcperf(M2g,uncblk,1,8);
vplot(wcp_l1,wcp_u1,wcp_l2,'--',wcp_u2,'--')
xlabel('Size of Delta_G')
ylabel('Weighted Sensitivity')
title('Performance Degradation (K1 solid, K2 dashed)')
Note that for perturbations satisfying max_ω |∆G(jω)| ≥ 0.76,
the robust closed-loop performance using K2 is better than that obtained using
K1. The perturbations deltabad1 and deltabad2 are the worst-case
perturbations ∆G, of size 1. They each cause the weighted sensitivity to degrade
a maximal amount, over all perturbations of size 1. In the case of K1, the
perturbation deltabad1 causes the weighted sensitivity to degrade to
approximately 3.5, while in the case of K2, deltabad2 causes the weighted
sensitivity to degrade to approximately 1.0. We will use these worst-case
perturbations in time-domain simulations.
Based on the robust performance analysis results, K2 will stabilize the nominal
plant and the seven extreme plant models discussed in the “Unmodeled
Dynamics” section in Chapter 4. Also, the performance degradation, in terms
of the weighted sensitivity function, should be small. Using controller K1,
stability is also guaranteed. However, since FL(P,K1)(jω) has a peak µ-value of
about 1.2, it is clear that K1 does not have robust performance over the set
M(G,Wu) and we expect the performance degradation over the seven extreme
plants to be worse than that using K2.
These seven plants, all of which are members of the set M(G,Wu) and the
worst-case plant from M(G,Wu), are
$$G_1 = \frac{1}{s-1}\cdot\frac{6.1}{s+6.1} \qquad\qquad G_2 = \frac{1.425}{s-1.425}$$

$$G_3 = \frac{0.67}{s-0.67} \qquad\qquad G_4 = \frac{1}{s-1}\cdot\frac{-0.07s+1}{0.07s+1}$$

$$G_5 = \frac{1}{s-1}\cdot\frac{70^2}{s^2+2(0.15)(70)s+70^2} \qquad\qquad G_6 = \frac{1}{s-1}\cdot\frac{70^2}{s^2+2(5.6)(70)s+70^2}$$

$$G_7 = \frac{1}{s-1}\cdot\left(\frac{50}{s+50}\right)^6 \qquad\qquad G_{wc} = G\,(1+\Delta_{wc}W_u)$$

The M-file ex_mkpl constructs these plants, together with the two worst-case
plants formed from deltabad1 and deltabad2.
file: ex_mkpl.m
G1 = mmult(G,nd2sys([6.1],[1 6.1]));
G2 = nd2sys([1+0.425],[1 -1-0.425]);
G3 = nd2sys([1-0.33],[1 -1+0.33]);
G4 = mmult(G,nd2sys([-0.07 1],[0.07 1]));
G5 = mmult(G,nd2sys([70^2], [1 2*0.14*70 70^2]));
G6 = mmult(G,nd2sys([70^2], [1 2*5.6*70 70^2]));
Gt = nd2sys([50],[1 50]);
G7 = mmult(G,Gt,Gt,Gt,Gt,Gt,Gt);
Gwc1 = mmult(G,madd(1,mmult(deltabad1,Wu)));
Gwc2 = mmult(G,madd(1,mmult(deltabad2,Wu)));
The closed-loop system shown in Figure 7-18 is used for time simulations.
Twenty time simulations are computed, for all combinations of K1 and K2, with
the nominal plant G, the seven extreme plants G1,. . .,G7, and the two
worst-case plants Gwc1 and Gwc2. Rather than simulate an output disturbance,
we manipulate the diagram, and simulate a unit-step reference command.
Typing
ex_mkclp
file: ex_mkclp.m
clpg_k1 = sel(formloop(G,K1,'pos','neg'),2,1);
clpg1_k1 = sel(formloop(G1,K1,'pos','neg'),2,1);
clpg2_k1 = sel(formloop(G2,K1,'pos','neg'),2,1);
clpg3_k1 = sel(formloop(G3,K1,'pos','neg'),2,1);
clpg4_k1 = sel(formloop(G4,K1,'pos','neg'),2,1);
clpg5_k1 = sel(formloop(G5,K1,'pos','neg'),2,1);
clpg6_k1 = sel(formloop(G6,K1,'pos','neg'),2,1);
clpg7_k1 = sel(formloop(G7,K1,'pos','neg'),2,1);
clpgwc1_k1 = sel(formloop(Gwc1,K1,'pos','neg'),2,1);
clpgwc2_k1 = sel(formloop(Gwc2,K1,'pos','neg'),2,1);
clpg_k2 = sel(formloop(G,K2,'pos','neg'),2,1);
clpg1_k2 = sel(formloop(G1,K2,'pos','neg'),2,1);
clpg2_k2 = sel(formloop(G2,K2,'pos','neg'),2,1);
clpg3_k2 = sel(formloop(G3,K2,'pos','neg'),2,1);
clpg4_k2 = sel(formloop(G4,K2,'pos','neg'),2,1);
clpg5_k2 = sel(formloop(G5,K2,'pos','neg'),2,1);
clpg6_k2 = sel(formloop(G6,K2,'pos','neg'),2,1);
clpg7_k2 = sel(formloop(G7,K2,'pos','neg'),2,1);
clpgwc1_k2 = sel(formloop(Gwc1,K2,'pos','neg'),2,1);
clpgwc2_k2 = sel(formloop(Gwc2,K2,'pos','neg'),2,1);
file: ex_ustr.m
stepref = step_tr([0 0.5 10],[0 1 1],.01,10);
Tfinal = 10;
tinc = 0.01;
yg_k1 = trsp(clpg_k1,stepref,Tfinal,tinc);
yg1_k1 = trsp(clpg1_k1,stepref,Tfinal,tinc);
yg2_k1 = trsp(clpg2_k1,stepref,Tfinal,tinc);
yg3_k1 = trsp(clpg3_k1,stepref,Tfinal,tinc);
yg4_k1 = trsp(clpg4_k1,stepref,Tfinal,tinc);
yg5_k1 = trsp(clpg5_k1,stepref,Tfinal,tinc);
yg6_k1 = trsp(clpg6_k1,stepref,Tfinal,tinc);
yg7_k1 = trsp(clpg7_k1,stepref,Tfinal,tinc);
ygwc1_k1 = trsp(clpgwc1_k1,stepref,Tfinal,tinc);
ygwc2_k1 = trsp(clpgwc2_k1,stepref,Tfinal,tinc);
yg_k2 = trsp(clpg_k2,stepref,Tfinal,tinc);
yg1_k2 = trsp(clpg1_k2,stepref,Tfinal,tinc);
yg2_k2 = trsp(clpg2_k2,stepref,Tfinal,tinc);
yg3_k2 = trsp(clpg3_k2,stepref,Tfinal,tinc);
yg4_k2 = trsp(clpg4_k2,stepref,Tfinal,tinc);
yg5_k2 = trsp(clpg5_k2,stepref,Tfinal,tinc);
yg6_k2 = trsp(clpg6_k2,stepref,Tfinal,tinc);
yg7_k2 = trsp(clpg7_k2,stepref,Tfinal,tinc);
ygwc1_k2 = trsp(clpgwc1_k2,stepref,Tfinal,tinc);
ygwc2_k2 = trsp(clpgwc2_k2,stepref,Tfinal,tinc);
subplot(211)
vplot(vdcmate(yg_k1,5),'+',yg1_k1,yg2_k1, yg3_k1,yg4_k1,...
yg5_k1,yg6_k1,yg7_k1,ygwc1_k1,ygwc2_k1);
xlabel('Time (seconds)')
title('Closed-loop responses using k1')
subplot(212)
vplot(vdcmate(yg_k2,5),'+',yg1_k2,yg2_k2, yg3_k2,yg4_k2,...
yg5_k2,yg6_k2,yg7_k2,ygwc1_k2,ygwc2_k2);
xlabel('Time (seconds)')
title('Closed-loop responses using K2')
Return to the main Iteration window, as shown in Figure 7-21, and pull down
the Iteration menu and select 3 from the Auto Iterate submenu. Figure 7-21
shows the main Iteration window after the third iteration has completed.
Figure 7-22: Time Response with K1dk (top) and K3dk (bottom) Implemented
Near the H∞ optimal solution, the controller tends to have high bandwidth,
providing some argument against using the H∞ norm as the sole measure of
performance. For this reason, we often back off from optimality, leading to a
lower bandwidth, and more reasonable controller. In the Parameter window,
change Gamma Min to 0.78, and change Gamma Max to 0.78. This fixes the next
control design to be at γ = 0.78, which is approximately 10% above the optimal
value of 0.71. Then, in the Iteration window, press Control Design to produce
K4dk. This controller will sacrifice some (about 10%) H∞ performance, but have
much lower bandwidth. You should repeat the time-domain simulations for
this controller.
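A sketch of one way to rerun the nominal step-response simulation with the new controller is shown below. It assumes K4dk has been exported to the workspace and that G and stepref are defined as in ex_unic and ex_ustr:

clpg_k4dk = sel(formloop(G,K4dk,'pos','neg'),2,1);  % nominal closed loop with K4dk
yg_k4dk = trsp(clpg_k4dk,stepref,10,0.01);          % step response
vplot(yg_k4dk)
xlabel('Time (seconds)')
title('Closed-loop response using K4dk')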
You can also do simple model reduction on this controller, using sysbal and
strunc. In the Setup window, type
reducek(
in the <Controller> data entry box. Then, drag, from the Iteration window,
the linkable variable Khinf into the Setup window drop box for the controller.
Finally, type
,3)
after the dropped variable, so that the <Controller> entry becomes a call to
reducek with a requested state order of 3.
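Note that reducek is not a µ-Tools function; it is a short user-written file. A minimal sketch of such a function, built from sysbal and strunc as suggested above, is:

function sysred = reducek(sys,ord)
% REDUCEK  sketch of a simple controller-reduction function: balance the
% realization with sysbal (which assumes a stable system) and keep the
% first ord states with strunc.
[sysb,hanksv] = sysbal(sys);
sysred = strunc(sysb,ord);

With a file like this on your path, the completed <Controller> entry evaluates to a 3-state approximation of the H∞ controller.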
MIMO Margins Using µ
[Figure: plant model with multiplicative uncertainty at the plant input — the plant G and weights Wδ1, Wδ2 inside a dashed box, with perturbation channels (w1, z1) and (w2, z2) pulled out]
Each perturbation δi(s) is assumed to
• be stable
• satisfy max_{ω∈R} |δi(jω)| < 1,
but otherwise be unknown. Everything inside the dashed box, in particular, the
transfer functions G, Wδ1 and Wδ2, along with the specific interconnection, are
assumed given and known. Since the uncertainty represented by the δi is
structured, the structured singular value, µ, is the appropriate tool for
assessing the robustness of the closed-loop system.
To use the µ theory, we need to isolate the perturbations from the known
components. We do this by constructing a four input, four output system, with
two fictitious input/output pairs (w,z), as well as two actual input/output pairs
(u,y). The fictitious inputs and outputs are used to model the multiplicative
uncertainty at the plant input. The system is called P, and is shown below.
Figure 7-26: Plant Model with Multiplicative Uncertainty Blocks Pulled Out
The SYSTEM matrix P can be constructed easily from the components, using
the SYSTEM interconnection program, sysic.
systemnames = 'G wdel1 wdel2';
inputvar = '[w{2}; u{2}]';
input_to_G = '[u(1) + wdel1; u(2) + wdel2]';
input_to_wdel1 = '[w(1)]';
input_to_wdel2 = '[w(2)]';
outputvar = '[u; G]';
sysoutname = 'P';
sysic;
The feedback controller for this problem will measure the variables y, and
generate control signals, u. In a block diagram, the perturbed closed-loop
system appears as shown in Figure 7-27. In terms of P, the perturbed
closed-loop system appears as Figure 7-28.
Next, close the lower loop of P with the controller (Ka or Kb) yielding M, which
casts the robust stability problem as that depicted in Figure 4-13. Use starp to
compute the closed-loop system M. A block diagram of this is shown in
Figure 7-29.
Figure 7-29: Closed-Loop Interconnection Block Diagrams
Ma = starp(P,ka,2,2);
Mb = starp(P,kb,2,2);
spoles(Ma)
spoles(Mb)
Loop-at-a-Time Robustness
The loop-at-a-time robustness measures are computed by separately plotting
the transfer function that each δi sees. This corresponds to the (1,1) and (2,2)
entries of M. Recall that, if δ2 ≡ 0, then the perturbed closed-loop system is
stable for all δ1(s) satisfying
$$\left\|\delta_1\right\|_\infty < \frac{1}{\left\|M_{11}\right\|_\infty}$$
Moreover, there is a perturbation δ1 with norm arbitrarily close to 1/||M11||∞
that causes instability. This is the robustness test for perturbations in channel
1, with no uncertainty in channel 2. Similar comments apply for the robustness
test for perturbations in channel 2, with no uncertainty in channel 1.
The results for Ma_g (solid) and Mb_g (dashed) are computed below, and shown
in Figure 7-30.
Ma_g = frsp(Ma,omega);
Mb_g = frsp(Mb,omega);
vplot('liv,m',sel(Ma_g,1,1),'-',sel(Mb_g,1,1),'--');
vplot('liv,m',sel(Ma_g,2,2),'-',sel(Mb_g,2,2),'--');
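The single-loop margins can also be read off numerically. The lines below are a sketch using the frequency responses just computed; they evaluate the peak of |M11| across frequency and the corresponding bound 1/||M11||∞ for each closed loop:

marg1a = 1/pkvnorm(sel(Ma_g,1,1));   % channel 1 margin bound, controller ka
marg1b = 1/pkvnorm(sel(Mb_g,1,1));   % channel 1 margin bound, controller kb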
For simultaneous perturbations in both channels, the uncertainty structure is

$$\Delta := \left\{ \begin{bmatrix} \delta_1 & 0 \\ 0 & \delta_2 \end{bmatrix} \,:\, \delta_1 \in \mathbf{C},\ \delta_2 \in \mathbf{C} \right\}$$
The µ-Tools representation of the uncertainty set, along with the µ calculation
syntax are given as follows.
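A sketch of this representation and calculation, using the same variable names that appear in the commands later in this section, is:

deltaset = [1 1; 1 1];   % two 1-by-1 complex uncertainty blocks
[bnds_a,dvec_a,sens_a,pvec_a] = mu(Ma_g,deltaset);
[bnds_b,dvec_b,sens_b,pvec_b] = mu(Mb_g,deltaset);
vplot('liv,m',bnds_a,'-',bnds_b,'--')
xlabel('Frequency (rad/sec)'), ylabel('mu')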
Plots of µ∆(M(jω)) for both closed-loop systems are shown in Figure 7-31. Note
that bnds_a and bnds_b each contain upper and lower bounds for µ, hence there
are a total of four plots in Figure 7-31. In the case of two complex uncertainties,
the upper and lower bound for µ are guaranteed to be equal, so it appears that
there are only 2 plots in the figure.
The peak associated with controller ka is much larger than the peak associated
with controller kb. Again, the size of the smallest block-diagonal perturbation
which causes instability is equal to
$$\frac{1}{\max_\omega \mu_\Delta(M(j\omega))}$$
perta = dypert(pvec_a,deltaset,bnds_a);
pertb = dypert(pvec_b,deltaset,bnds_b);
[pkvnorm(sel(bnds_a,1,2)) 1/pkvnorm(sel(bnds_a,1,2))]
ans =
1.4657 0.6823
[pkvnorm(sel(bnds_b,1,2)) 1/pkvnorm(sel(bnds_b,1,2))]
ans =
0.6435 1.5540
hinfnorm(perta)
norm between 0.6823 and 0.6829
achieved near 0
hinfnorm(pertb)
norm between 1.554 and 1.556
achieved near 2.182
The destabilizing perturbations, perta and pertb, have the same structure as
∆, block-diagonal. Therefore, the diagonal entries of perta and pertb will have
norms of 0.68 and 1.55, respectively, with zero off diagonal entries. The
following commands verify this fact. The output of these functions is not
shown.
hinfnorm(sel(perta,1,1))
hinfnorm(sel(perta,1,2))
hinfnorm(sel(perta,2,1))
hinfnorm(sel(perta,2,2))
hinfnorm(pertb)
hinfnorm(sel(pertb,1,1))
hinfnorm(sel(pertb,1,2))
hinfnorm(sel(pertb,2,1))
hinfnorm(sel(pertb,2,2))
The perturbed closed-loop can be formed with starp. Each perturbation results
in a pair of imaginary axis eigenvalues at the frequency associated with the
peak (across frequency) of µ∆(M(jω)).
pertclpa = starp(perta,Ma,2,2);
pertclpb = starp(pertb,Mb,2,2);
rifd(eig(pertclpa))
rifd(eig(pertclpb))
Space Shuttle Robustness Analysis
[Block diagram: shuttle rigid body model AC, with inputs θele, θrud, dgust and outputs p, r, ny, φ]

The inputs to the aircraft model are

$$u = \begin{bmatrix} \theta_{ele}\ \mathrm{(rad)} \\ \theta_{rud}\ \mathrm{(rad)} \\ d_{gust}\ \mathrm{(ft/sec)} \end{bmatrix}$$
The first input is the actual angular deflection of the elevon surface. The second
is the actual deflection of the rudder surface. Finally, there is a lateral wind
gust disturbance input, due to the winds that occur at this altitude.
There are four output variables of the aircraft. Three of these are states, while
the fourth is the lateral acceleration at the pilot’s location, denoted ny (units of
ny are ft/sec2).
$$y = \begin{bmatrix} p \\ r \\ n_y \\ \phi \end{bmatrix}$$
The aerodynamic forces and moments acting on the vehicle depend on the
aerodynamic coefficients c••:

$$\begin{bmatrix} \text{side force} \\ \text{yawing moment} \\ \text{rolling moment} \end{bmatrix} = \begin{bmatrix} c_{y\beta} & c_{ya} & c_{yr} \\ c_{\eta\beta} & c_{\eta a} & c_{\eta r} \\ c_{l\beta} & c_{la} & c_{lr} \end{bmatrix} \begin{bmatrix} \beta \\ \theta_{ele} \\ \theta_{rud} \end{bmatrix}$$

These coefficients are modeled as nominal values plus scaled perturbations,

$$\begin{bmatrix} c_{y\beta} & c_{ya} & c_{yr} \\ c_{\eta\beta} & c_{\eta a} & c_{\eta r} \\ c_{l\beta} & c_{la} & c_{lr} \end{bmatrix} = \begin{bmatrix} \bar{c}_{y\beta} & \bar{c}_{ya} & \bar{c}_{yr} \\ \bar{c}_{\eta\beta} & \bar{c}_{\eta a} & \bar{c}_{\eta r} \\ \bar{c}_{l\beta} & \bar{c}_{la} & \bar{c}_{lr} \end{bmatrix} + \begin{bmatrix} r_{y\beta}\delta_{y\beta} & r_{ya}\delta_{ya} & r_{yr}\delta_{yr} \\ r_{\eta\beta}\delta_{\eta\beta} & r_{\eta a}\delta_{\eta a} & r_{\eta r}\delta_{\eta r} \\ r_{l\beta}\delta_{l\beta} & r_{la}\delta_{la} & r_{lr}\delta_{lr} \end{bmatrix}$$

where the overbarred coefficients are the nominal values, the r•• are fixed scalings,
and the perturbations δ•• are assumed to be fixed, unknown, real parameters,
with each satisfying |δ••| ≤ 1. We use the notation r••. * δ•• to denote the 3 ×
3 perturbation matrix in the model for the aero coefficients, c••.
The aircraft model acnom has the nominal aerodynamic coefficients absorbed
into the state-space data. In addition to the inputs µ and outputs y described
earlier, acnom has three fictitious inputs and outputs such that the uncertain
behavior of the aircraft AC is given by the linear fractional transformation in
Figure 7-33.
The state-space model for acnom is created by the M-file mk_acnom. A listing of
state-space model acnom is given in “Shuttle Rigid Body Model” at the end of
this section.
Actuator Models
The aircraft has two controlled inputs, rudder command, and elevon command.
Each actuator is modeled with a second order transfer function, as well as a
second order delay approximation to model the effects of the digital
implementation.
The model for the rudder is
[Block diagram: rudder channel — the command urud passes through Wdelay and the second order actuator dynamics to produce the surface deflection θrud]
Here, urud is the electrical command that the controller will generate to move
the rudder. The transfer function Wdelay is a second order approximation of a
delay, to model the effects of the digital implementation of the control system.
In particular
$$W_{delay}(s) = \frac{1 - 2\xi_{del}(s/\omega_{del}) + (s/\omega_{del})^2}{1 + 2\xi_{del}(s/\omega_{del}) + (s/\omega_{del})^2}$$
with ωdel = 173 rad/s, and ξdel = 0.866. The transfer function
$$\frac{\omega_{rud}^2}{s^2 + 2\xi_{rud}\,\omega_{rud}\,s + \omega_{rud}^2}$$
models the physical devices (motors, inertias, etc.) involved in actually moving
the rudder. The variable θrud is the actual deflection of the rudder surface,
while urud represents the command to the rudder system. The values of the
parameters are ξrud = 0.75, ωrud = 21 rad/sec.
A similar model is used for the elevon actuation system. The parameters in
that case are ξele = 0.72, ωele = 14 rad/sec, with an identical second order delay
model.
The state-space models for the actuators are created by the M-file mk_act.
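A rough sketch of how the rudder channel could be assembled from these pieces with nd2sys and mmult is shown below. The actual models, including the rate and acceleration outputs, come from mk_act; this sketch only produces the surface deflection.

wrud = 21; xirud = 0.75;     % rudder actuator parameters
wdel = 173; xidel = 0.866;   % delay approximation parameters
actrud = nd2sys(wrud^2,[1 2*xirud*wrud wrud^2]);
wdelay = nd2sys([1/wdel^2 -2*xidel/wdel 1],[1/wdel^2 2*xidel/wdel 1]);
rudmod = mmult(actrud,wdelay);   % urud -> Wdelay -> actuator -> theta_rud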
Since the closed-loop performance objectives include penalties on the actuator
deflections, rates, and accelerations, each actuator model also provides the
deflection, rate, and acceleration of its surface as outputs.
Exogenous Signals
There are three types of exogenous signals entering the closed-loop system:
• Wind gusts
• Sensor noise
• Pilot bank-angle command
In the H∞ framework, all time domain signals are modeled as the unit ball in
L2, filtered by problem dependent weighting functions which reflect typically
occurring signals in the application. In addition to the L2 gain, the H∞ norm
also has an interpretation in terms of gain from sinusoids to sinusoids. Now,
suppose h represents one of the exogenous signals, and Wh is the associated
stable weighting function. Then, the signal h is assumed to be any signal from
the set
h ∈ {Whηh : ||ηh||2 ≤ 1}
By choosing the form of Wh(s), the spectral content of such signals h can be
shaped.
• Wind Gusts: The wind gust disturbance is modeled as

$$d_{gust} \in \left\{ W_{gust}\,\eta_{gust} \,:\, W_{gust} = 30\,\frac{1+s/2}{1+s},\ \left\|\eta_{gust}\right\|_2 \le 1 \right\}$$
The set on the right-hand side of the equation models the typical wind gusts
that the shuttle will encounter at this flight condition.
• Sensor Noise: The measurement noise weights are

$$W_p = W_r = 0.0003\,\frac{1+s/0.01}{1+s/0.5}, \qquad W_\phi = 0.0007\,\frac{1+s/0.01}{1+s/2}, \qquad W_{n_y} = 0.25\,\frac{1+s/0.05}{1+s/10}$$

so that the measured variables are

$$p_{meas} = p + W_p\,\eta_p, \quad r_{meas} = r + W_r\,\eta_r, \quad \phi_{meas} = \phi + W_\phi\,\eta_\phi, \quad n_{y,meas} = n_y + W_{n_y}\,\eta_{n_y}$$
• Pilot Bank-Angle Command: In this problem, the pilot (or autopilot) takes
the shuttle through a series of sweeping “S” turns to slow the vehicle down.
The bank-angle command is weighted by

$$W_{\phi_{cmd}} := 0.5\,\frac{1+s/2}{1+s/0.5}$$
The particular choice roughly implies that the bank-angle commands are
dominated by low frequency signals, with a maximum magnitude of
approximately 0.5 radians.
The noise weighting functions are denoted by Wnoise = diag{Wp, Wr, Wφ, Wny}
in the control block diagram.
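A sketch of how these weights could be built with nd2sys and stacked with daug is given below; the variable names are illustrative, not those used by the example's own M-files.

wp = nd2sys([1/0.01 1],[1/0.5 1],0.0003);   % Wp = Wr
wr = wp;
wphi = nd2sys([1/0.01 1],[1/2 1],0.0007);
wny = nd2sys([1/0.05 1],[1/10 1],0.25);
Wnoise = daug(wp,wr,wphi,wny);              % Wnoise = diag{Wp,Wr,Wphi,Wny}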
Errors
There are several variables that are to be kept small in the face of the
exogenous signals listed in the previous section. In this context, these variables
will be considered errors.
Actuator signal levels: the angular position, angular rates, and angular
accelerations of the rudder (⋅ rud) and elevon (⋅ ele) surfaces should remain
reasonably small in the face of the exogenous signals. The signals are weighted
to give an actuator error vector of
$$e_{act} = \begin{bmatrix} e_e \\ e_{\dot e} \\ e_{\ddot e} \\ e_r \\ e_{\dot r} \\ e_{\ddot r} \end{bmatrix} := \begin{bmatrix} 4\,\theta_{ele} \\ \dot\theta_{ele} \\ 0.005\,\ddot\theta_{ele} \\ 2\,\theta_{rud} \\ 0.2\,\dot\theta_{rud} \\ 0.009\,\ddot\theta_{rud} \end{bmatrix}$$

which is written compactly as

$$e_{act} = W_{act}\begin{bmatrix} \theta_{ele} \\ \dot\theta_{ele} \\ \ddot\theta_{ele} \\ \theta_{rud} \\ \dot\theta_{rud} \\ \ddot\theta_{rud} \end{bmatrix}$$
• Performance variables:
- The ideal bank angle response (φideal) of the shuttle to a bank-angle
command (φcmd) is
$$\phi_{ideal} := \frac{1}{1 + 2\xi(s/\omega) + (s/\omega)^2}\,\phi_{cmd}$$
where ω = 1.2 rad/sec, and ξ = 0.7. The bank-angle tracking error is defined
as φ – φideal.
- Turn coordination: in an ideal turn, the bank angle, and the yaw rate are
related. For this aircraft, a turn coordination error is defined as
rp := r – 0.037φ
- In a turn, it is desired that the pilot feel very little lateral acceleration,
hence, the lateral acceleration variable, ny, is an error.
These error signals are weighted by frequency dependent weights to give
a performance error vector as
$$e_{perf} := \begin{bmatrix} 0.8\,\dfrac{1+s}{1+s/0.1} & 0 & 0 \\ 0 & 500\,\dfrac{1+s}{1+s/0.01} & 0 \\ 0 & 0 & 250\,\dfrac{1+s}{1+s/0.01} \end{bmatrix} \begin{bmatrix} n_y \\ r - 0.037\,\phi \\ \phi - \phi_{ideal} \end{bmatrix}$$

which is written compactly as

$$e_{perf} = W_{perf}\begin{bmatrix} p \\ r \\ n_y \\ \phi \\ \phi_{ideal} \end{bmatrix}$$
The error weight on the lateral acceleration indicates a tolerance for low
frequency accelerations of 1.25 ft/sec2, which is relaxed at high frequency,
allowing accelerations up to 12.5 ft/sec2. Again, these specifications
correspond to ny errors produced by the exogenous signal set (wind gusts,
measurement noises, and bank angle commands). Similar interpretation
is given to the other performance variables.
To pull the nine uncertain parameters out of the aerodynamic coefficient model,
matrices WL and WR are chosen so that

$$W_L \cdot \mathrm{diag}\left[\delta_{y\beta},\delta_{\eta\beta},\delta_{l\beta},\delta_{ya},\ldots,\delta_{lr}\right] \cdot W_R = \begin{bmatrix} r_{y\beta}\delta_{y\beta} & r_{ya}\delta_{ya} & r_{yr}\delta_{yr} \\ r_{\eta\beta}\delta_{\eta\beta} & r_{\eta a}\delta_{\eta a} & r_{\eta r}\delta_{\eta r} \\ r_{l\beta}\delta_{l\beta} & r_{la}\delta_{la} & r_{lr}\delta_{lr} \end{bmatrix}$$

for all δ••. This is easily done with the matrices WL and WR; the matrix WR is

$$W_R = \begin{bmatrix} 1&1&1&0&0&0&0&0&0 \\ 0&0&0&1&1&1&0&0&0 \\ 0&0&0&0&0&0&1&1&1 \end{bmatrix}^T$$
[Block diagram: open-loop interconnection structure — acnom with the aero-coefficient uncertainty pulled out through Wl and Wr (pertin{1–9}, pertout{1–9}), the actuator models, the performance and actuator weights Wperf and Wact, and the ideal bank-angle response model]
The M-file mk_olic uses the sysic command to create a SYSTEM matrix
description of the open-loop interconnection structure. In the workspace, the
open-loop system is denoted by olic, and has 23 states, 23 outputs, and 17
inputs.
mk_olic;
minfo(olic)
A schematic diagram, with the specific input/output ordering for olic, is shown
in Figure 7-35.
[Figure 7-35: Input/output ordering for olic. Outputs: 1–9 pertout{1–9}; 10–12 weighted performance errors eperf (weighted ny, weighted r, weighted φ error); 13–18 weighted actuator errors eact (elevon acceleration, rate, position; rudder acceleration, rate, position); 19–23 measurements (noisy p, noisy r, noisy ny, noisy φ, φcmd). Inputs: 1–9 pertin{1–9}; 10–15 exogenous disturbances (sensor noises, wind gust, φcmd); 16 elevon cmd; 17 rudder cmd.]
Controllers
In this section, the robustness properties of three different controllers are
analyzed using µ. The controllers receive four sensor measurements along with
the φ command signal and produce two control signals for the elevon and
rudder commands. The controller block diagram is shown below.
[Block diagram: controller K, with inputs noisy p, noisy r, noisy ny, noisy φ, and φcmd, and outputs elevon cmd and rudder cmd]
The two other controllers have already been designed and stored in the file
shutcont.mat.
load shutcont
minfo(k_x)
minfo(k_mu)
In the closed-loop system, there are six exogenous signals (the six η signals:
four sensor noises, wind gust, bank angle command) and nine errors (weighted
performance error vector and the weighted actuator error vector). The nominal
performance objective is that this multivariable transfer function matrix
should have an H∞ norm less than 1. Using µ-Tools, it is easy to evaluate this
performance criterion. Simply form the closed-loop system, calculate its
frequency response, and plot the norm of the appropriate transfer function
versus frequency.
omega = logspace(-2,3,30);
clp_h = starp(olic,k_h,5,2);
clp_hg = frsp(clp_h,omega);
clp_x = starp(olic,k_x,5,2);
minfo(clp_x)
clp_xg = frsp(clp_x,omega);
minfo(clp_xg)
clp_mu = starp(olic,k_mu,5,2);
clp_mug = frsp(clp_mu,omega);
Note that the closed-loop systems have additional inputs and outputs from the
nine aero-perturbation channels. The relevant exogenous signals and errors
are selected (using sel) before calculating the maximum singular value
(vnorm).
np_hg = sel(clp_hg,[10:18],[10:15]);
np_xg = sel(clp_xg,[10:18],[10:15]);
np_mug = sel(clp_mug,[10:18],[10:15]);
vplot('liv,m',vnorm(np_hg),vnorm(np_xg),vnorm(np_mug))
title('NOMINAL PERFORMANCE: ALL CONTROLLERS')
Figure 7-36: Nominal performance of all controllers — H∞ norm vs. frequency (rad/sec); k_h solid, k_x dashed, k_mu dotted.
Note that the best nominal performance is achieved by controller k_h, as seen
in Figure 7-36. This is not surprising, since it was designed specifically with
these disturbances and errors in mind. By comparison, the performance with k_mu
is poor, though it does meet the nominal performance objective. In later
calculations, it will become clear that the degradation in nominal performance
is offset by a much greater insensitivity to variations in the aerodynamic
coefficients.
Robust Stability
Using µ, the robust stability characteristics of each closed-loop system can be
evaluated. The uncertain parameters (δyβ,. . .,δlr) can be assumed to be real,
representing uncertainty in the constant aerodynamic coefficients. However,
the flow around the vehicle is very complex, and the quasi-steady implication
of constant aerodynamic coefficients is somewhat simplistic. Consequently, for
a more conservative analysis, the uncertain parameters can be treated as
complex. In this section, both models of uncertainty will be analyzed, and
compared. Refer to Chapter 4, “Modeling and Analysis of Uncertain Systems”
for more detail on the interpretations.
This motivates two separate representations of the uncertainty set,
∆C := { diag[δ1, δ2, …, δ9] : δi ∈ C }
∆R := { diag[δ1, δ2, …, δ9] : δi ∈ R }
Here, the lower case rs refers to robust stability (as opposed to robust
performance, rp, which will be addressed later).
The perturbation inputs/outputs from the frequency responses are selected for
a robust stability µ test. The input/output channels associated with the
performance criterion are not used in the robust stability µ test. A diagram of
the closed-loop system is shown in Figure 7-37.
Figure 7-37: Robust stability interconnection — the nine perturbation outputs pertout{1–9} of the closed-loop system clp are fed back to the nine perturbation inputs pertin{1–9}.
clp_hgRS = sel(clp_hg,1:9,1:9);
clp_xgRS = sel(clp_xg,1:9,1:9);
clp_mugRS = sel(clp_mug,1:9,1:9);
Calculate µ across frequency, and look at µ plots. Start with the complex
uncertainty structure.
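The block-structure matrices delsetrs_C and delsetrs_R used below were created earlier in the example and are not shown here. A minimal sketch of how they might be defined, assuming the mu convention that a 1 × 1 full complex block is entered as [1 1] and a 1 × 1 real scalar block is entered with a negative first entry, [-1 0], is:
delsetrs_C = ones(9,1)*[1 1];     % nine 1-by-1 complex perturbation blocks
delsetrs_R = ones(9,1)*[-1 0];    % nine 1-by-1 real perturbation blocks (sketch)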
[bnds_h,dv_h,sens_h,rp_h]=mu(clp_hgRS,delsetrs_C);
[bnds_x,dv_x,sens_x,rp_x]=mu(clp_xgRS,delsetrs_C);
[bnds_mu,dv_mu,sens_mu,rp_mu]=mu(clp_mugRS,delsetrs_C);
vplot('liv,d',bnds_h,'-',bnds_x,'--',bnds_mu,'-.')
title('ROBUST STABILITY OF CLOSED-LOOP: COMPLEX')
Figure 7-38: Complex Robust Stability µ Analysis of k_h, k_x, and k_mu
According to Figure 7-38, the k_mu controller has the best robust stability
properties when the perturbations are treated as complex (dynamic). The peak
of the lower bound, 0.9, implies that there is a diagonal complex perturbation
of size 1/0.9 that causes instability. The peak of the upper bound, approximately
0.99, implies that for diagonal perturbations smaller than 1/0.99, the closed-loop
system remains stable. The gap between the upper and lower bound can be
reduced by using the “c” option in the mu command. Without this option, the
upper bound from mu is a computational approximation to
inf_{D ∈ 𝒟} σ̄(D M D⁻¹)
that can be refined (option “c”) at the expense of slower execution. Using the “c”
option reduces the upper bound peak to 0.9, so that the complex µ analysis
gives a tight estimate on the size of the smallest destabilizing perturbation.
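A hedged sketch of this refinement for the k_mu closed-loop system, reusing the variables defined above, is:
[bnds_mu_c,dv_mu_c,sens_mu_c,rp_mu_c] = mu(clp_mugRS,delsetrs_C,'c');
pkvnorm(sel(bnds_mu_c,1,1))   % peak of the refined upper bound, approximately 0.9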
Similar interpretations are possible for the closed-loop systems with
controllers k_h and k_x, though, since the µ plots have larger peaks, the bound
on allowable perturbations is smaller.
Figure 7-39: Real Robust Stability µ Analysis of k_h, k_x, and k_mu
The k_h controller has the best robust stability properties, when the
perturbations are treated as real, as seen in Figure 7-39. This is in contrast to
the robust stability analysis with complex perturbation where k_mu exhibited
the best properties. The peak of the upper bound, approximately 0.66, implies
that for diagonal, real perturbations smaller than 1/0.66, the closed-loop system
remains stable. The lower bound in Figure 7-39 is often 0 and does not converge
for all values of frequency, leading to a large gap between the upper and lower
bound. This gap can be reduced by adding a small amount of complex
perturbation to the pure real perturbation. A detailed discussion of this technique is given in the “Specifics About Using the mu Command with Mixed Perturbations” section in Chapter 4.
Robust Performance
Using µ, the robust performance characteristics of each closed-loop system can
be evaluated. The uncertain parameters are treated as real parameters in this
analysis. These parameters can also be treated as complex perturbations,
though this is not done in this section.
The appropriate block structure for the robust performance test is
∆P := {diag[δ1,δ2,. . .,δ9,∆10] : δi ∈ R, ∆10 ∈ C6×9}
which is simply an augmentation of the original real robust stability
uncertainty set, ∆R, with a complex 6 × 9 full block to include the performance
objectives. Recall from the “Using µ to Analyze Robust Performance” section
in Chapter 4: H∞ performance objectives are always represented with a full,
complex block. Hence,
delsetrp_R = [delsetrs_R;6 9]
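The robust performance µ curves shown in Figure 7-40 can then be computed from the closed-loop frequency responses already in the workspace. The exact commands are not shown in this example; a minimal sketch is:
[rpbnds_h,rpdv_h,rpsens_h,rpp_h] = mu(clp_hg,delsetrp_R);
[rpbnds_x,rpdv_x,rpsens_x,rpp_x] = mu(clp_xg,delsetrp_R);
[rpbnds_mu,rpdv_mu,rpsens_mu,rpp_mu] = mu(clp_mug,delsetrp_R);
vplot('liv,d',rpbnds_h,'-',rpbnds_x,'--',rpbnds_mu,'-.')
title('ROBUST PERFORMANCE OF CLOSED-LOOP: REAL')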
Figure 7-40: Robust performance µ of the closed-loop systems with k_h (solid), k_x (dashed), and k_mu (dotted) vs. frequency (rad/sec).
The axis is selected in Figure 7-40 to show a comparison of controllers k_x and
k_mu. At low frequency, the closed-loop robust performance with k_h
implemented gets as bad as 14. The closed-loop system using controller k_x
achieves a robust performance µ value of 1.56, while controller k_mu achieves a
robust performance µ value of 1.22.
Worst-case Perturbations
Using a µ calculation, we have seen that all controllers achieve robust stability
with respect to the 9 × 9 real, diagonal uncertainty matrix which represents uncertainty in the
aero-coefficients. However, the performance of each closed-loop system
degrades differently under LFT real, diagonal perturbations. We use wcperf to
compute the worst-case performance degradation as well as the worst-case,
norm 1, perturbation. The worst-case perturbation of norm 1 will be used in the
next section for uncertain time-domain simulations.
[deltabadh,wcp_lowh,wcp_upph] = wcperf(clp_hg,delsetrs_R,.05,4);
[deltabadx,wcp_lowx,wcp_uppx] = wcperf(clp_xg,delsetrs_R,.05,10);
[deltabadmu,wcp_lowmu,wcp_uppmu]= wcperf(clp_mug,delsetrs_R,.05,10);
vplot(wcp_lowh,wcp_upph,wcp_lowx,wcp_uppx,wcp_lowmu,wcp_uppmu)
(Plot: worst-case performance vs. size of uncertainty for k_h, k_x, and k_mu.)
Using k_h, it is clear that the closed-loop performance degrades rapidly and
severely. It would not be an acceptable controller in the real aircraft.
Note Optimizing the H∞ norm of some closed-loop transfer function does not,
in any way, guarantee robustness to perturbations at other points in the
feedback loop.
Using k_x and k_mu, reasonable robustness properties (on the order of the
original specifications) are attained. The controller k_x achieves better
nominal performance (i.e., at ||∆|| = 0), at the expense of more rapid potential
performance degradation under uncertainty. Both closed-loop systems
potentially degrade to unacceptable (performance norm > 1) performance with
less than one-half of the original modeled uncertainty. At that level of
uncertainty, the closed-loop system with k_mu degrades more gracefully. This
type of tradeoff curve illustrates some of the differences between the two
controllers, and can be helpful in understanding the tradeoffs involved.
Time Simulations
The open-loop simulation interconnection, Figure 7-42, is similar to olic, but
contains none of the weighting functions. It is used exclusively for nominal and
perturbed time-domain simulations, where unweighted time signals will be
calculated and plotted.
mk_olsim;
minfo(olsim)
Figure 7-42: Input/Output Ordering for olsim
Outputs: 1–9 are pertout{1–9}; 10–13 are the unweighted performance signals (φ, ny, r − cφ, and φ − φideal); the remaining outputs are the unweighted elevon and rudder actuator signals and the signals fed to the controller.
Inputs: 1–9 are pertin{1–9}; 10 is the wind gust; 11 is the bank angle command; 12 is the elevon command; 13 is the rudder command.
For more information on setting up and running the simulation, see the “LFT Time Simulation User Interface Tool: simgui” section in Chapter 6.
The main performance objective is bank angle tracking, so the response to a 0.5
radian step input for φcmd is investigated. The gust input is set to zero in these
simulations. This data is entered into the simgui Main window Input Signal
editable text. Note that φcmd is the 11th input of olsim and the second
non-perturbation input. The output signals of interest are φ, ny, r – cφ, and φ –
φideal, which are outputs 10 through 13 of olsim or the first through fourth
outputs after the perturbation has been included. In the simgui Main window,
input olsim into the Plant editable text. Figure 7-43 shows the main
simulation window for the nominal and perturbed response of controller k_x.
badpertx is used in the perturbed response for controller k_x and badpertmu is
used as the worst-case real perturbation for controllers k_h and k_mu. This data
is input into the Perturbation editable text in the simgui main window.
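The same responses can also be generated from the command line with the LFT machinery used earlier in this section. The following is only a sketch: the 5-measurement/2-control split assumed for olsim, the 5 second horizon, and the perturbation name badpertx are assumptions carried over from the surrounding text.
clpsim_x = starp(olsim,k_x,5,2);          % close the controller loop
pclpsim_x = starp(badpertx,clpsim_x,9,9); % close the nine perturbation channels
phicmd = step_tr(0,0.5,0.02,5);           % 0.5 radian bank angle command
gust = mscl(phicmd,0);                    % wind gust input set to zero
ysim = trsp(pclpsim_x,abv(gust,phicmd),5,0.02);  % remaining inputs are [gust; phicmd]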
The controllers are implemented in discrete time on the shuttle, at a sample rate of 20 Hz. To replicate this implementation, a sampled-data time simulation is performed. This simulation is available under the Options menu in the Main simulation window. The continuous-time controllers, k_x, k_h, and k_mu, must therefore be discretized for the sampled-data time simulation. The continuous-time plant, olsim, is simulated at 200 Hz and the controllers at 20 Hz, as seen in the simgui parameters window, Figure 7-44.
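If the controllers are discretized by hand rather than through the simgui parameters window, one possibility (assuming the µ-Tools tustin bilinear-transform function) is:
k_x_dt = tustin(k_x,1/20);    % discretize at the 20 Hz controller rate (sketch)
k_h_dt = tustin(k_h,1/20);
k_mu_dt = tustin(k_mu,1/20);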
The nominal and perturbed closed-loop responses with k_x, k_h, and k_mu
implemented are shown in Figures 7-45 and 7-46. As expected, the time domain
simulations reinforce the conclusions that were reached in the frequency
domain analysis. The nominal performance associated with k_h is superb, but
degrades significantly with the aerodynamic uncertainty. In that respect, the
controller k_mu performs the best, nearly achieving all of the robust
performance objectives. The nominal and perturbed time response of other
performance variables can also be easily investigated.
Figure 7-45: Closed-Loop Nominal and Perturbed Time Response, k_x (top)
and k_h
Conclusions
This exercise illustrated the use of the µ-Tools software to analyze the robust
stability and robust performance objectives on a complicated, uncertain plant
model.
There is an important feature of the mu software that cannot be overlooked or
overemphasized. These algorithms calculate both upper and lower bounds for
µ, and produce worst-case perturbations which provide the lower bound. The
perturbations, and their effects, can be analyzed in both the frequency domain
and time domain. In practice, the bad perturbations are also used in high
fidelity, nonlinear simulations of the closed-loop system to discover limitations
and unforeseen problems.
Although this problem did not have repeated uncertain parameters (each δ••
appeared only once), the algorithms and software do handle these cases, and
the reader is referred back to the “Complex Structured Singular Value” section
in Chapter 4 for details.
C_acnom = [  1        0        0        0
             0        0        0        0
             0        0        0        0
             0        1        0        0
             0        0        1        0
            -6.8e+1  -1.7e+0  -4.1e+0  -3.7e-5
             0        0        0        1.0e+0 ]

D_acnom = [  0        0        0        0        0        1.2e-6
             0        0        0        1        0        0
             0        0        0        0        1        0
             0        0        0        0        0        0
             0        0        0        0        0        0
             1.1e+1  -1.1e+1  -1.1e+1   2.7e+1  -3.0e+0  -7.8e-5
             0        0        0        0        0        0      ]
HIMAT Robust Performance Design Example
The dashed box represents the true airplane, with associated transfer function
G. Inside the box is the nominal model of the airplane dynamics, Gnom, and two
elements, wdel and ∆G, which parametrize the uncertainty in the model. This
type of uncertainty is called multiplicative uncertainty at the plant input, for
obvious reasons. The transfer function wdel is assumed known, and reflects the
amount of uncertainty in the model. The transfer function ∆G is assumed to be
stable and unknown, except for the norm condition, ||∆G||∞ < 1. The performance
objective is that the transfer function from d to e be small, in the ||⋅ ||∞ sense, for
all possible uncertainty transfer functions ∆G. The weighting function WP is
used to reflect the relative importance of various frequency ranges for which
performance is desired.
The control design objective is to design a stabilizing controller K such that for
all stable perturbations ∆G(s), with ||∆G||∞ < 1, the perturbed closed-loop system
remains stable, and the perturbed weighted sensitivity transfer function,
S(∆G) := WP(I + Gnom(I + ∆GWdel)K)–1
has ||S(∆G)||∞ < 1 for all such perturbations. Recall that these mathematical
objectives exactly fit in the structured singular value framework.
Uncertainty Models
The airplane model we consider has two inputs: elevon command (δe) and
canard command (δc); and two measured outputs: angle-of-attack (α) and pitch
angle (θ).
A first principles set of uncertainties about the aircraft model would include:
• Uncertainty in the canard and the elevon actuators. The electrical signals
that command deflections in these surfaces must be converted to actual
mechanical deflections by the electronics and hydraulics of the actuators.
This is not done perfectly in the actual system, unlike the nominal model.
• Uncertainty in the forces and moments generated on the aircraft, due to
specific deflections of the canard and elevon. As a first approximation, this
arises from the uncertainties in the aerodynamic coefficients, which vary
with flight conditions, as well as uncertainty in the exact geometry of the
airplane. An even more detailed view is that surface deflections generate the
forces and moments by changing the flow around the vehicle in very complex
ways. Thus there are uncertainties in the force and moment generation that
go beyond the quasi-steady uncertainties implied by uncertain aerodynamic
coefficients.
• Uncertainty in the linear and angular accelerations produced by the
aerodynamically generated forces and moments. This arises from the
uncertainty in the various inertial parameters of the airplane, in addition to
neglected dynamics such as fuel slosh and airframe flexibility.
• Other forms of uncertainty that are less well understood.
The simple model of the airplane has four states: forward speed (v),
angle-of-attack (α), pitch rate (q) and pitch angle (θ); two inputs: elevon
command (δe) and canard command (δc); and two measured outputs:
angle-of-attack (α) and pitch angle (θ).
mkhimat;
minfo(himat)
seesys(himat,'%9.1e')
The partitioned matrix represents the [A B; C D] state space data. Given this
nominal model himat (i.e., Gnom(s)) we also specify a stable, 2 × 2 transfer
matrix Wdel(s), called the uncertainty weight. These two transfer matrices
parametrize an entire set of plants, 𝒢, which must be suitably controlled by the
robust controller K.
𝒢 := {Gnom(I + ∆GWdel) : ∆G stable, ||∆G||∞ ≤ 1}.
All of the uncertainty in modeling the airplane is captured in the normalized,
unknown transfer function ∆G. The unknown transfer function ∆G(s) is used to
parametrize the potential differences between the nominal model Gnom(s), and
the actual behavior of the real airplane, denoted by G. The dependence on
frequency of the uncertainty weight indicates that the level of uncertainty in
the airplane’s behavior depends on frequency.
In this example, the uncertainty weight Wdel is of the form Wdel(s) := wdel(s)I2,
for a given scalar valued function wdel(s). The fact that the uncertainty weight
is diagonal, with equal diagonal entries, indicates that the modeled
uncertainty is in some sense a round ball about the nominal model Gnom. The
scalar weight associated with the multiplicative input uncertainty is
constructed using the command nd2sys. The weight chosen for this problem is
w_del = 50(s + 100)/(s + 10000).
wdel = nd2sys([1 100],[1 10000],50);
𝒢 := { Gnom ( I2 + (50(s + 100)/(s + 10000)) ∆G(s) ) : ∆G(s) stable, ||∆G||∞ ≤ 1 }
The particular uncertainty weight chosen for this problem indicates that at low
frequency, there is potentially a 50% modeling error, and at a frequency of 173
rad/sec, the uncertainty in the model is up to 100%, and could get larger at
higher and higher frequencies. A frequency response of wdel is shown in
Figure 7-48.
Figure 7-48: Frequency response magnitude of the uncertainty weight wdel vs. frequency (rad/s).
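The commands that produce Figure 7-48 are not listed above; a minimal sketch is shown below. The frequency vector omega_del is an assumption chosen to cover the plotted range, and the scalar performance weight wp used in the next set of commands is repeated here from the mkhicn listing that appears later in this section.
omega_del = logspace(0,5,50);             % assumed frequency range for Figure 7-48
wdel_g = frsp(wdel,omega_del);
vplot('liv,lm',wdel_g)
title('Multiplicative Uncertainty Weighting Function')
xlabel('Frequency (rad/s)')
wp = nd2sys([0.5 0.9],[1 0.018]);         % performance weight (also defined in mkhicn)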
omega = logspace(-3,2,50);
wp_g = frsp(wp,omega);
vplot('liv,lm',minv(wp_g))
title('Inverse of Performance Weighting function')
xlabel('Frequency (rad/s)')
This sensitivity weight indicates that at low frequency, the closed-loop (both
nominal and perturbed) should reject disturbances at the output by a factor of
50-to–1. Expressed differently, steady-state tracking errors in both channels,
due to reference step-inputs in either channel should be on the order of 0.02 or
smaller. This performance requirement gets less and less stringent at higher
and higher frequencies. The closed-loop system should perform better than
(Plot: inverse of the performance weighting function wp vs. frequency (rad/s).)
Robust Stability. The closed-loop system achieves robust stability if the closed-loop system is internally stable for all of the possible plant models G ∈ 𝒢. In this problem, that is equivalent to a simple norm test on a particular nominal closed-loop transfer function:
Robust Stability ⇔ ||WdelKGnom(I + KGnom)–1||∞ < 1
(Diagram: himat_ic with inputs pertin, dist, and control, and outputs z, e, and y.)
has internal structure shown in Figure 7-50. The variables control, pertin,
dist, and y are two element vectors.
This can be produced with nine MATLAB commands, listed below. The first
eight lines describe the various aspects of the interconnection, and may appear
in any order. The last command, sysic, produces the final interconnection. The
commands can be placed in an M-file, or executed at the command line.
systemnames = ' himat wp wdel ';
inputvar = '[ pertin{2} ; dist{2} ; control{2} ]';
outputvar = '[ wdel ; wp ; himat + dist ]';
input_to_himat = '[ control + pertin ]';
input_to_wdel = '[ control ]';
input_to_wp = '[ himat + dist ]';
sysoutname = 'himat_ic';
cleanupsysic = 'yes';
sysic;
Since the system himat_ic is still open-loop, its poles are simply the poles of
the various components that make up the interconnection.
minfo(himat_ic)
spoles(himat_ic)
spoles(himat)
spoles(wdel)
spoles(wp)
∆ := { diag[∆1, ∆2] : ∆1 ∈ C2×2, ∆2 ∈ C2×2 } ⊂ C4×4.
The first block of this structured set corresponds to the full-block uncertainty
∆G used earlier in this section to model the uncertainty in the airplane’s behavior. The
second block, ∆2, is a fictitious uncertainty block, used to incorporate the H∞
performance objectives on the weighted output sensitivity transfer function
into the µ-framework.
Using theorem 4.5 from the “Robust Performance” section in Chapter 4, a
stabilizing controller K achieves closed-loop, robust performance if and only if
for each frequency ω ∈ [0, ∞], the structured singular value
µ∆[FL(P,K)(jω)] < 1
Using the upper bound for µ (recall that in this case, with two full blocks, the upper bound is exactly equal to µ), we can attempt to minimize the peak closed-loop µ value by posing the optimization problem

min (over stabilizing K) min (over stable, minimum-phase d(s)) || diag(d(s)I2, I2) FL(P,K) diag(d(s)–1I2, I2) ||∞
Finding the value of this minimum and constructing controllers K that achieve
levels of performance arbitrarily close to the optimal level is called µ-synthesis.
A more detailed discussion of D – K iteration is given in Chapter 5.
Before plunging into the D – K iteration design procedure, we begin with a
controller designed via basic MIMO loop shaping methods.
(Plot: singular values of the loop gain GKloop vs. frequency (rad/s).)
The open-loop gain plot satisfies both the low frequency performance objective
and the high frequency robustness goals. We have only plotted the singular
values of GKloop, but KloopG looks similar. Hence, you would expect the
controller to satisfy the robust stability and nominal performance
requirements.
The two 2 × 2 transfer functions associated with robust stability and nominal
performance can be evaluated for the loop shaping controller. Simply close the
open-loop interconnection P (himat_ic) with the loop shaping controller, Kloop
(kloop) and evaluate the pertinent transfer functions using the command sel.
In using sel, the desired outputs (or rows) are specified first, followed by the
desired inputs (or columns). The results are seen in Figure 7-52.
clp = starp(himat_ic,kloop,2,2);
spoles(clp)
rs_loop = sel(clp,1:2,1:2);
np_loop = sel(clp,3:4,3:4);
rs_loopg = frsp(rs_loop,omega);
np_loopg = frsp(np_loop,omega);
vplot('liv,m',vnorm(rs_loopg),vnorm(np_loopg))
tmp1 = 'ROBUST STABILITY (solid) and';
tmp2 = ' NOMINAL PERFORMANCE (dashed)';
title([tmp1 tmp2])
xlabel('Frequency(rad/s)')
Figure 7-52: Robust Stability and Nominal Performance Plots for the Loop
Shaping Controller
Properties of Controller
minfo(k1)
omega = logspace(-1,4,50);
spoles(k1)
k1_g = frsp(k1,omega);
vplot('bode',k1_g)
subplot(211), title('Frequency Response of k1')
(Plot: frequency response of k1 — log magnitude and phase vs. frequency (radians/sec).)
Figure 7-55 shows the singular values of the closed-loop system clp1. Although
clp1 is 4 × 4, at each frequency it only has rank equal to 2, hence only two
singular values are nonzero.
Closed-Loop Properties
rifd(spoles(clp1))
clp1g = frsp(clp1,omega);
clp1gs = vsvd(clp1g);
vplot('liv,m',clp1gs)
title('Singular Value Plot of clp1')
xlabel('Frequency (rad/s)')
Figure 7-55: Singular value plot of clp1 vs. frequency (rad/s).
The two 2 × 2 transfer functions associated with robust stability and nominal
performance may be evaluated separately, using the command sel. Recall that
the robust stability test is performed on the upper 2 × 2 transfer function in
clp1, and the nominal performance test is on the lower 2 × 2 transfer function
in clp1. Since a frequency response of clp1 is already available, (in clp1g) we
simply perform the sel on the frequency response, and plot the norms.
rob_stab = sel(clp1g,[1 2],[1 2]);
nom_perf = sel(clp1g,[3 4],[3 4]);
minfo(rob_stab)
minfo(nom_perf)
vplot('liv,m',vnorm(rob_stab),vnorm(nom_perf))
tmp1 = 'ROBUST STABILITY (solid) and';
tmp2 = ' NOMINAL PERFORMANCE (dashed)';
title([tmp1 tmp2])
xlabel('Frequency (rad/s)')
(Plot: robust stability (solid) and nominal performance (dashed) transfer function norms for the H∞ design vs. frequency (rad/s).)
∆ := { diag[∆1, ∆2] : ∆1 ∈ C2×2, ∆2 ∈ C2×2 }
µ Analysis of H∞ Design
The H∞ design is analyzed with respect to structured uncertainty using µ.
First, the density of points in the frequency response is increased from 50 to
100 to yield smoother plots. Then the upper and lower bounds for µ are
calculated on the 4 × 4 closed-loop frequency response clp_g1. The upper and lower bounds for µ are plotted (in this example they lie on top of one another), along with the maximum singular value, in Figure 7-57.
deltaset=[2 2; 2 2];
omega1 = logspace(-1,4,100);
clp_g1 = frsp(clp1,omega1);
[bnds1,dvec1,sens1,pvec1] = mu(clp_g1,deltaset);
vplot('liv,m',vnorm(clp_g1),bnds1)
title('Maximum Singular Value and mu Plot')
xlabel('Frequency (rad/s)')
text(.15,.84,'max singular value (solid)','sc')
text(.3,.4,'mu bounds (dashed)','sc')
text(.2,.15,'H-infinity Controller','sc')
Figure 7-57: Maximum singular value (solid) and µ bounds (dashed) for the H∞ design vs. frequency (rad/s).
Hence, the controlled system (from H∞) does not achieve robust performance.
This conclusion follows from the µ plot in Figure 7-57, which peaks to a value
of 1.69, at a frequency of 73.6 rad/sec. This means that there is a perturbation
matrix ∆G, with ||∆G||∞ = 1/1.69, for which the perturbed weighted sensitivity gets large,
||WP(I + Gnom(I + Wdel∆G)K)–1||∞ = 1.69
This perturbation, ∆G, can be constructed using dypert. The input variables to
the command dypert consist of two outputs from µ, the perturbation matrix
and the bounds, along with the block structure, and the numbers of the blocks
for which the rational matrix construction should be carried out. Often, some of the blocks correspond to performance blocks and therefore need not be constructed. Here, only the first block is an actual perturbation, so the construction is done only for this 2 × 2 perturbation (the fourth argument of dypert).
delta_G = dypert(pvec1,deltaset,bnds1,1);
minfo(delta_G) % 2 by 2
rifd(spoles(delta_G)) % stable
hinfnorm(delta_G) % 1/1.69
clp_pert = starp(delta_G,clp1,2,2); % close top loop with delta
minfo(clp_pert)
rifd(spoles(clp_pert)) % stable, since RS passed
hinfnorm(clp_pert) % degradation of performance
(Diagram: clp_pert — the perturbation delta_G wrapped around the top channels of clp1, leaving the transfer function from d to e.)
However, the closed-loop system with the loop shaping controller does not
achieve robust performance. In fact, µ reaches a peak value of 11.7 at a
frequency of 0.202 rad/sec, as seen in Figure 7-58. This means that there is a
perturbation matrix ∆G, with ||∆G||∞ = 1/11.7, for which the perturbed weighted sensitivity gets large,
||WP(I + Gnom(I + Wdel∆G)K)–1||∞ = 11.7
Notice that this perturbation is 8.2 times smaller than the perturbation
associated with the H∞ control design, but that the subsequent degradation in
closed-loop performance is 8.2 times worse. Therefore, the loop shaping
controller will most likely perform poorly on the real system.
Figure 7-58: Robust performance µ with the loop shaping controller vs. frequency (rad/s).
The structured singular value µ is large in the low frequency range due to the
off-diagonal elements of clpg being large. One can see this using the command
blknorm, which outputs the individual norms of the respective blocks. The
coupling between the off-diagonal terms associated with 0.202 rads/sec point to
the problem — the upper right entry is 0.14, somewhat small, but not small
enough to counteract the large (nearly 1000) lower left entry. As expected,
µ ≈ sqrt(0.14 × 959) ≈ 11.6.
blkn_cl = blknorm(clpg,deltaset);
see(xtract(blkn_cl,.15,.3))
2 rows 2 columns
iv = 0.159986
4.9995e-01 1.4127e-01
5.6525e+02 1.6402e-01
iv = 0.202359
4.9991e-01 1.4193e-01
9.5950e+02 1.6520e-01
iv = 0.255955
4.9985e-01 1.4294e-01
7.5635e+02 1.6607e-01
Recapping Results
Let’s summarize what has been done so far:
shows that the µ analysis improves on the σ ( . ) bound at most frequencies, but
there is no improvement at the frequency of 73.6 rads/sec.
Since the peak value of the µ plot is as high as the peak value on the singular value plot, the µ analysis seems to have been of little use. However, at most frequencies µ is smaller than σ̄, so in the next synthesis iteration the controller can essentially focus its efforts at the problem frequency and lower the peak of the µ plot.
Now, the open-loop interconnection structure is the eight input, six output
linear system, shown below
(Diagram: himat_ic with inputs pertin, dist, and control, and outputs z, e, and y.)
The M-file mkhicn creates the plant model, the weighting functions, and the interconnection structure shown in Figure 7-60. The MATLAB commands it contains are listed below.
(Diagram: control and pertin enter himat and wdel; dist(1:2) is added at the plant output and weighted by wp to form the errors e1 and e2; dist(3:4) passes through the noise weight wn and is added to the measurement y; wdel produces the perturbation outputs w1 and w2.)
Figure 7-60: HIMAT Open-Loop Interconnection Structure
mkhicn
file: mkhicn.m
mkhimat;
wdel = nd2sys([50 5000],[1 10000]);
wp = nd2sys([0.5 0.9],[1 0.018]);
poleloc = 320;
wn = nd2sys([2 0.008*poleloc],[1 poleloc]);
wdel = daug(wdel,wdel);
wp = daug(wp,wp);
wn = daug(wn,wn);
systemnames = ' himat wp wdel wn ';
inputvar = '[ pertin{2} ; dist{4} ; control{2} ]';
outputvar = '[ wdel ; wp ; himat + dist(1:2) + wn ]';
input_to_himat = '[ control + pertin ]';
input_to_wdel = '[ control ]';
input_to_wp = '[ himat + dist(1:2) ]';
input_to_wn = '[ dist(3:4) ]';
sysoutname = 'himat_ic';
cleanupsysic = 'yes';
sysic;
The dkit file himat_dk has been set up with the necessary variables to design
robust controllers for HIMAT using D – K iteration. A listing of the himat_dk
file follows. You can copy this file into your directory from the µ-Tools
subroutines directory, mutools/subs, and modify it for other problems, as
appropriate.
% himat_dk
%
% This script file contains the USER DEFINED VARIABLES for the
% mutools dkit script file. The user MUST define the 5
% variables below.
%------------------------------------------%
%     REQUIRED USER DEFINED VARIABLES
%------------------------------------------%
% Nominal plant interconnection structure
NOMINAL_DK = himat_ic;
% Number of measurements
NMEAS_DK = 2;
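% The remaining required variables are not shown in this excerpt. The
% lines below are a sketch; the values are inferred from how these
% variables are used later in this example (the block structure
% [2 2; 4 2] and a frequency range of 10^-3 to 10^3 rad/sec).
% Number of control inputs
NCONT_DK = 2;
% Block structure for the mu calculation
BLK_DK = [2 2; 4 2];
% Frequency response range (number of points is assumed)
OMEGA_DK = logspace(-3,3,60);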
%----------------------end of himat_dk-------------------------%
After the himat_dk.m file has been set up, you need to let the dkit program
know which setup file to use. This is done by setting the string variable
DK_DEF_NAME in the MATLAB workspace equal to the setup filename. Typing
dkit at the MATLAB prompt will then begin the D – K iteration procedure.
DK_DEF_NAME = 'himat_dk';
dkit
starting mu iteration #: 1
Iteration Number: 1
-------------------
------------CHANGING # of Points--------------
Iteration Summary
------------------------------------
Iteration #              1
Controller Order         10
Total D-Scale Order      0
Gamma Achieved           2.152
Peak mu-Value            2.075
Proceeding with the D – K iteration, we must fit the D-scaling variable that was
calculated in the µ upper-bound computation. This rational D-scaling will then
be absorbed into the open-loop interconnection.
A plot appears showing the µ upper bound, the first frequency-dependent D-scale data (this is the curve we want to fit), and the sensitivity of the µ upper bound. The sensitivity measure roughly shows (across frequency) the relative importance of the accuracy of the curve fit. It is used in the curve fit optimization to weight some frequency ranges differently than others.
(Plot: µ upper bound and frequency-dependent D-scale data (top), and curve-fit sensitivity (bottom), vs. frequency.)
You are prompted to enter your choice of options for fitting the D-scaling data.
Press return to see your options.
Enter Choice (return for list):
Choices:
• nd and nb allow you to move from one D-scale data to another. nd moves to
the next scaling, whereas nb moves to the next scaling block. For scalar
D-scalings, these are identical operations, but for problems with full
D-scalings, (perturbations of the form δI) they are different. In the (1,2)
subplot window, the title displays the D-scaling Block number, the row/
column of the scaling that is currently being fit, and the order of the current
fit (with d for data, when no fit exists).
• The order of the current fit can be incremented or decremented (by 1) using
i and d.
• apf automatically fits each D-scaling data. The default maximum state order
of individual D-scaling is 5. The mx variable allows you to change the
maximum D-scaling state order used in the automatic pre-fitting routine. mx
must be a positive, nonzero integer. at allows you to define how closely the rational, scaled µ upper bound must approximate the actual µ upper bound in a norm sense. Setting at to 1 would require an exact fit of the D-scale data, and is not allowed. Allowable values are greater than 1.
Done
Block 1: 4
Block 2: 0
Auto PreFit Fit Tolerance: 1.03
Auto PreFit Maximum Order: 5
(Plot: µ upper bound with the D-scale data and with the rational D-scale fit (top), and curve-fit sensitivity (bottom), vs. frequency.)
In this case, the µ upper bound with the D-scale data is very close to the µ upper
bound with the rational D-scale fit. The fourth order fit is quite adequate in
scaling the closed-loop transfer function. The curve fitting procedure for this
scaling variable is concluded by entering e at the Enter Choice prompt.
Enter Choice (return for list): e
In this problem, the block structure consists of two complex full blocks: the 2 ×
2 block associated with the multiplicative uncertainty model for the aircraft,
and the 4 × 2 performance block. Since there are two blocks, there is only one
D-scaling variable, and we are completely done with the curve fitting in this
iteration.
Finally, before the next H∞ synthesis procedure, we get the option of changing
the parameters used in the hinfsyn routine. This is useful to change the lower
bound in the γ-iteration. In this example, we make no changes, and simply
continue.
The iteration proceeds by computing the H∞ optimal controller for the scaled
(using the rational scalings from the curve fitting) open-loop system.
Iteration Number: 2
--------------------
Iteration Summary
--------------------------------------------
Iteration #              1        2
Controller Order         10       26
Total D-Scale Order      0        16
Gamma Achieved           2.152    1.073
Peak mu-Value            2.075    1.073
This fifth order fit works well in scaling the transfer function, so we exit the
curve fitting routine.
Enter Choice (return for list):e
Iteration Number: 3
--------------------
Iteration Summary
----------------------------------------------------------
Iteration #              1        2        3
Controller Order         10       26       30
Total D-Scale Order      0        16       20
Gamma Achieved           2.152    1.073    0.970
Peak mu-Value            2.075    1.073    0.973
At this point, we have achieved the robust performance objective, and we end
the D – K iteration. We have designed a 30 state controller using D – K
iteration which achieves a µ value less than 1.
In this example, it is also possible to reduce the controller order to 12, using
truncated balanced realizations, and still maintain closed-loop stability and
robust performance.
max(real(spoles(k_dk3)))
ans =
-1.6401e-02
[k_dk3bal,hsv] = sysbal(k_dk3);
[k_dk3red] = strunc(k_dk3bal,12);
clpred_12 = starp(himat_ic,k_dk3red);
max(real(spoles(clpred_12)))
ans =
-6.9102e-03
clpred_12g = frsp(clpred_12,OMEGA_DK);
[bnds] = mu(clpred_12g,[2 2;4 2],'c');
pkvnorm(sel(bnds,1,1))
ans =
9.9910e-01
Design Precompensator
First form the HIMAT system and plot its maximum singular values across
frequency (see Figure 7-61).
mkhimat
[type,p,m,n] = minfo(himat);
om = logspace(-2,4,100);
himatg = frsp(himat,om);
vplot('liv,lm',vsvd(himatg),1);
title('SINGULAR VALUES OF HIMAT')
ylabel('SINGULAR VALUES'); xlabel('FREQUENCY (RAD/SEC)');
Figure 7-61: Singular values of HIMAT vs. frequency (rad/sec).
The singular values of himat are plotted in Figure 7-61, and although the unity
gain cross over frequency is approximately correct, the low frequency gain is
too low. We therefore introduce a proportional plus integral (P+I) precompensator with transfer function (1 + s–1)I2×2 to boost the low frequency gain and give zero steady-state errors. The singular values of himat and himat
augmented with the P+I compensator are shown in Figure 7-62.
sysW1 = daug(nd2sys([1 1],[1 0]),nd2sys([1 1],[1 0]));
sysGW = mmult(himat,sysW1);
sysGWg = frsp(sysGW,om);
vplot('liv,lm',vsvd(himatg),'-.',vsvd(sysGWg),'-',1,'--')
title('SINGULAR VALUES OF HIMAT AND AUGMENTED HIMAT')
ylabel('SINGULAR VALUES');
xlabel('FREQUENCY (RAD/SEC)');
Figure 7-62: Singular values of HIMAT and the P+I-augmented HIMAT vs. frequency (rad/sec).
Now form the closed-loop and evaluate the robust performance µ with the H∞
loop shaping compensator implemented (see Figure 7-65).
clp1 = starp(himat_ic,sysKloop,2,2);
clp_g1 = frsp(clp1,om);
deltaset = [2 2; 2 2];
[bnds1,dvec1,sens1,pvec1] = mu(clp_g1,deltaset);
vplot('liv,m',bnds1);
title('ROBUST PERFORMANCE MU WITH LOOPSHAPE CONTROLLER')
ylabel('MU');
xlabel('FREQUENCY (RAD/SEC)');
disp(['mu value is ' num2str(pkvnorm(sel(bnds1,1,1)))])
mu value is 1.323
Figure 7-65: Robust performance µ with the loop shaping controller vs. frequency (rad/sec).
The plot of µ is shown in Figure 7-65 (solid line), the µ-value is close to that
required, giving a satisfactory design without exploiting the details of the
performance and uncertainty weights. This substantiates the claim that this
design method can give a very robust initial design which does not require
detailed trade-offs between weights to be studied.
First reduce the weighted plant model order and measure the resulting gap.
[sysGW_cf,sigGW]=sncfbal(sysGW);
sigGW
sigGW =
8.9996e-01
7.1355e-01
3.3542e-01
7.9274e-02
8.5314e-04
2.1532e-04
sysGW_4 = cf2sys(hankmr(sysGW_cf,sigGW,4,'d'));
gapGW_4 = nugap(sysGW,sysGW_4)
gapGW_4 =
8.6871e-04
4.3597e-01
This three state controller can be reduced to two states using Hankel model
reduction techniques (hankmr).
[sysK1_3_cf,sigK1_3] = sncfbal(sysK1_3);
sigK1_3
sigK1_3 =
3.1674e-01
2.7851e-01
6.9959e-02
sysK1_2 = cf2sys(hankmr(sysK1_3_cf,sigK1_3,2,'d'));
gapK_2=nugap(sysK1_3,sysK1_2)
gapK_2 =
6.9959e-02
3.7114e-01
and this can be compared with the actual stability margin with the reduced
order controller as follows.
cl_red = starp(p_ic,sysK1_2);
tmp = hinfnorm(cl_red);
e_act=1/tmp(1)
e_act =
4.0786e-01
It is seen that the actual robustness is about half way between the optimal and
this lower bound. The important use of the bounds is that they indicate what
level of reduction is guaranteed not to degrade robustness significantly.
This gives a third-order controller together with the second-order P+I term.
The µ-value for this controller, not shown here, turns out to be essentially the same as that of the closed-loop system with the full order controller.
When the ncfsyn option ref is specified, the controller includes an extra set of
reference inputs. The second input argument to ncfsyn is 1.1. This implies we
are designing a suboptimal controller with 10% less performance than at the
optimal. In practice, a 10% suboptimal design often performs better in terms of
robust performance than the optimal controller on the actual system.
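The exact ncfsyn call is not shown here; a hedged sketch, using the controller name sysK3 that appears later in the text and assuming the full weighted plant sysGW is used, is:
[sysK3,emax3] = ncfsyn(sysGW,1.1,'ref');   % 10% suboptimal design with reference inputs
emax3                                      % achieved robustness margin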
The last two inputs to cl_ref correspond to the reference signals, the first two
outputs are the outputs of the controller and the last two outputs are the inputs
to the controller (plant output plus observation noise). This design makes the
closed-loop transfer function from reference to plant output the numerator of a
normalized coprime factorization of sysGW. An external reference compensator
could also be added to improve the command response and there are many
possibilities. Here we first diagonalize the closed-loop reference to output
transfer function and then insert some phase advance to increase the speed of
response.
cl_ref_yr=sel(cl_ref,3:4,5:6);
P0 = transp(mmult([0 1; -1 0],cl_ref_yr,[0 1; -1 0]));
P1 = nd2sys([10 50],[1 50]);
P2 = daug(P1,P1);
sysQ = mmult(P0,P2);
Now reduce the order of sysQ to four states using the balanced realization
technique (sysbal), and incorporate into the controller.
[sysQ_b,sig_Q] = sysbal(sysQ);
sig_Q
sig_Q =
3.9665e+00
2.9126e+00
7.2360e-01
4.5915e-01
2.3600e-02
1.0016e-02
1.2526e-06
5.2197e-07
sysQ4 = strunc(sysQ_b,4);
sysK_ref = mmult(sysK3,daug(eye(2),sysQ4));
The step responses are plotted in Figure 7-66. The first output (solid) tracks the
command well with a rise time of less than 0.1 second and no overshoot. The
output of the second channel (dashed) is zero, indicating that there is no cross
coupling between the output channels in the nominal closed-loop system. The
controller output commands (dotted and dashed-dotted lines) are also plotted.
This is just the nominal step response and further tests are needed to check the
sensitivity of the closed-loop to the plant uncertainty.
Figure 7-66: Nominal closed-loop step responses — tracking output (solid), second output channel (dashed), and controller output commands (dotted and dash-dotted) vs. time (seconds).
F–14 Lateral-Directional Control Design
• Decoupled response of the lateral stick, δlstk, to roll rate, p, and rudder
pedals, δrudp, to side-slip angle, β. The lateral stick and rudder pedals have
a maximum deflection of ± 1 inch. Therefore, they are represented as
unweighted signals in Figure 7-67.
The aircraft handling quality (HQ) response from the lateral stick to roll rate
should be a first order system, 5 · 2/(s + 2) (deg/sec)/inch. The aircraft handling quality
The stabilizer actuators have ±20 degs and ±90 degs/sec deflection and
deflection rate limits. The rudder actuators have ±30 degs and ±125 degs/sec
deflection and deflection rate limits.
• The three measurement signals — roll rate, yaw rate, and lateral
acceleration — are passed through second order, anti-aliasing filters prior to
being fed to the controller.
Anti-aliasing filter = ω² / (s² + 2ζωs + ω²)
• The natural frequency, ω, and damping, ζ, values for the yaw rate and lateral
acceleration filters are 12.5Hz and 0.5, respectively, and 4.1 Hz and 0.7 for
the roll rate filter. The anti-aliasing filters have unity gain at DC (see
Figure 7-67). These signals are also corrupted by noise prior to entering the
controller.
freq = 12.5;
fr = freq*2*pi;
zeta = 0.5;
antiaf_yaw = nd2sys(fr^2,[1 2*zeta*fr fr^2]);
antiaf_lata = nd2sys(fr^2,[1 2*zeta*fr fr^2]);
freq = 4.1; fr = freq*2*pi;
zeta = 0.7;
antiaf_roll = nd2sys(fr^2,[1 2*zeta*fr fr^2]);
antia_filt = daug(antiaf_lata,antiaf_roll,antiaf_yaw);
• The desired δlstk-to-p and δrudp-to-β responses of the aircraft are formulated
as a model matching problem in the µ-framework. The difference between
the ideal response of the transfer functions, δlstk filtered through the roll rate
HQ model and δrudp filtered through the side-slip angle HQ model, and the
aircraft response, p and β, is used to generate an error that is to be
minimized. The Wp transfer function (see Figure 7-67) weights the difference
between the idealized roll rate response and the actual aircraft response, p.
Wp = 0.5 · Wβ = (0.05s⁴ + 2.90s³ + 105.93s² + 6.17s + 0.16) / (s⁴ + 9.2s³ + 30.80s² + 18.33s + 3.95)
All the weighted performance objectives are scaled to have an H∞ norm less
than 1 when they are achieved. The performance of the closed-loop system is
evaluated by calculating the maximum singular value of the weighted
transfer functions from the disturbance and command inputs to the error
outputs, as shown in Figure 7-68.
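These weights are created in the M-file ex_f14wt, described later in this section. A minimal sketch of how they could be built with nd2sys, assuming the variable names W_P and W_beta used in the sysic listings below, is:
W_P = nd2sys([0.05 2.90 105.93 6.17 0.16],[1 9.2 30.80 18.33 3.95]);
W_beta = mscl(W_P,2);      % Wp = 0.5*Wbeta, so Wbeta is twice Wp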
side-slip angle (β). Note that β is not a measured variable but is used as a
performance measure. The lateral-directional F–14 model, F14nom, has four
states: lateral velocity (v), yaw rate (r), roll rate (p) and roll angle (φ). These
variables are related by the state-space equations
[ v̇ ; ṙ ; ṗ ; φ̇ ; β ; p ; r ; yac ] = [ A B ; C D ] [ v ; r ; p ; φ ; δdstab ; δdrud ]

B = [  6.22e-02   1.01e-01
      -5.25e-03  -1.12e-02
      -4.67e-02   3.64e-03
       0.00e+00   0.00e+00 ],

D = [  0.00e+00   0.00e+00
       0.00e+00   0.00e+00
       0.00e+00   0.00e+00
       2.89e-03   2.27e-03 ],
Typing
load F14_nom
will load the nominal, F–14 plant model into the workspace. The dashed box in
Figure 7-67 represents the true airplane, which corresponds to a set of F–14
plant models defined by 𝒢. Inside the box is the nominal model of the airplane
dynamics, F14nom, models of the actuators, AS and AR, and two elements, Win
and ∆G, which parameterize the uncertainty in the model. This type of
uncertainty is called multiplicative plant input uncertainty. The transfer
function Win is assumed known, and reflects the amount of uncertainty in the
model. The transfer function ∆G is assumed to be stable and unknown, except
for the norm condition, ||∆G||∞ ≤ 1. The aircraft uncertainty is modeled as a
complex full-block, multiplicative uncertainty at the input of the rigid body
aircraft nominal model. This is the same type of uncertainty description that
was used in the previous section entitled “HIMAT Robust Performance Design
Example”.
The stabilizer and rudder actuators, AS and AR, are modeled as first order
transfer functions, 25/(s + 25). The actuator outputs are their respective rates and
deflections.
A_S = pck(-25,25,[-25;1],[25;0]);
A_R = pck(-25,25,[-25;1],[25;0]);
Given the actuator and aircraft nominal models (denoted by Gnom(s)) we also
specify a stable, 2 × 2 transfer function matrix Win(s), called the uncertainty
weight. These transfer matrices parameterize an entire set of plants, 𝒢, which
must be suitably controlled by the robust controller K.
𝒢 := {Gnom(I + ∆GWin) : ∆G stable, ||∆G||∞ ≤ 1}.
All of the uncertainty in modeling the airplane is captured in the normalized,
unknown transfer function ∆G. The unknown transfer function ∆G(s) is used to
parameterize the potential differences between the nominal model, Gnom(s),
and the actual behavior of the real airplane, denoted by G.
In this example, the uncertainty weight Win is of the form
Win(s) := diag( w1(s), w2(s) )
for particular scalar valued functions w1(s) and w2(s). The w1(s) weight
associated with the differential stabilizer input is selected to be
w1(s) = 2(s + 4)/(s + 160). The w2(s) weight associated with the differential rudder input is selected to be w2(s) = 1.5(s + 20)/(s + 200).
Hence the set of plants that is represented by this uncertainty weight is

𝒢 := F14nom diag( 25/(s + 25), 25/(s + 25) ) ( I + Win(s)∆G(s) )
with ∆G(s) stable and ||∆G||∞ ≤ 1. Note that the weighting functions are used to
normalize the size of the unknown perturbation ∆G. At any frequency ω, the
value of |w1(jω)| and |w2(jω)| can be interpreted as the percentage of
uncertainty in the model at that frequency. The dependence on frequency of the
uncertainty weight indicates that the level of uncertainty in the airplane’s
behavior depends on frequency. Frequency response plots of weights w1 and w2
are shown in Figure 7-69.
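The uncertainty weight itself is built in ex_f14wt; a minimal sketch, assuming the variable name W_in used in the plotting commands below, is:
w1 = nd2sys(2*[1 4],[1 160]);      % 2(s+4)/(s+160), differential stabilizer channel
w2 = nd2sys(1.5*[1 20],[1 200]);   % 1.5(s+20)/(s+200), differential rudder channel
W_in = daug(w1,w2);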
om = logspace(-1,3,120);
W_ing=frsp(W_in,om);
vplot('liv,lm',sel(W_ing,1,1),'-',sel(W_ing,2,2),'--')
xlabel('Frequency (rad/sec)')
ylabel('Magnitude')
Figure 7-69: Magnitude of the uncertainty weights w1 (solid) and w2 (dashed) vs. frequency (rad/sec).
Figure 7-70: Unit Step Responses of the Nominal Model (+) and 15 Perturbed Models from 𝒢
The M-file ex_f14tp generates the family of perturbed time responses shown
in Figure 7-70.
file: ex_f14tp.m
Gnom = mmult(F14_nom,daug(sel(A_S,2,1),sel(A_R,2,1)));
u = step_tr(0,1,.02,2);
ydstab = trsp(Gnom,abv(u,0),4,.05);
ydrud = trsp(Gnom,abv(0,u),4,.05);
for i=1:15
delta = randn(2,2);
delta = delta/norm(delta);
p = mmult(Gnom,madd(eye(2),mmult(W_in,delta)));
y1 = trsp(p,abv(u,0),4,.05);
y2 = trsp(p,abv(0,u),4,.05);
ydstab = sbs(ydstab,y1);
ydrud = sbs(ydrud,y2);
end
cold = ynum(ydrud);
index = 2:cold;
subplot(221)
vplot(sel(ydstab,2,1),'+',sel(ydstab,2,[index]))
title('Diff. Stabilizer to Roll Rate')
xlabel('Time (seconds)'), ylabel('p (degrees/sec)')
subplot(222)
vplot(sel(ydrud,1,1),'+',sel(ydrud,1,[index]))
title('Diff. Rudder to Beta')
xlabel('Time (seconds)'), ylabel('Beta (degrees)')
subplot(223)
vplot(sel(ydstab,4,1),'+',sel(ydstab,4,[index]))
title('Diff. Stabilizer to Lat. Acceleration')
xlabel('Time (seconds)'), ylabel('ac_y (g''s)')
subplot(224)
vplot(sel(ydrud,3,1),'+',sel(ydrud,3,[index]))
title('Diff. Rudder to Yaw Rate')
xlabel('Time (seconds)'), ylabel('r (degrees/sec)')
The control design objective is to design a stabilizing controller K such that, for
all stable perturbations ∆G(s), with ||∆G||∞ ≤ 1, the perturbed closed-loop system
remains stable, and the perturbed weighted performance transfer function has
an H∞ norm less than 1 for all such perturbations. These mathematical
objectives exactly fit in the structured singular value framework.
Controller Design
The control design block diagram shown in Figure 7-67 is redrawn as F14IC
shown in Figure 7-71. F14IC is the 25-state, six-input, six-output open-loop
transfer matrix used for control design. The M-file ex_f14ic contains the sysic
commands to generate the F14IC interconnection structure. The M-file
ex_f14wt, called from ex_f14ic, creates the weighting functions (Wact, Win, Wn,
Wp, and Wβ), the handling qualities models (hqmod_beta, hqmod_p),
the anti-aliasing filters (antia_filt), the actuator models (AS and AR), and loads the
nominal F–14 plant model.
ex_f14ic
file: ex_f14ic.m
ex_f14wt
systemnames = 'W_in A_S A_R antia_filt hqmod_p hqmod_beta';
systemnames = [systemnames ' F14_nom W_act W_n W_P W_beta'];
inputvar = '[in_unc{2}; sn_nois{3}; roll_cmd; beta_cmd; ';
inputvar = [inputvar ' delta_dstab; delta_rud]' ];
outputvar = '[ W_in; W_P; W_beta; W_act; roll_cmd; ';
outputvar = [outputvar 'beta_cmd; antia_filt + W_n ]' ];
input_to_W_in = '[ delta_dstab; delta_rud ]';
input_to_A_S = '[ delta_dstab + in_unc(1) ]';
input_to_A_R = '[ delta_rud + in_unc(2) ]';
input_to_W_act = '[ A_S; A_R ]';
input_to_F14_nom = '[ A_S(1); A_R(1) ]';
input_to_antia_filt = '[ F14_nom(4); F14_nom(3); F14_nom(2) ]';
input_to_hqmod_beta = '[ beta_cmd ]';
input_to_hqmod_p = '[ roll_cmd ]';
input_to_W_beta = '[ hqmod_beta - F14_nom(1) ]';
input_to_W_P = '[ hqmod_p - F14_nom(3) ]';
input_to_W_n = '[ sn_nois ]';
sysoutname = 'F14IC';
cleanupsysic = 'yes';
sysic
The new generalized plant used in the second iteration has 29 states, 4 more
states than the original 25-state generalized plant, F14IC. These extra states
are due to the D-scale data being fit with a rational function, and absorbed into
the generalized plant for the next iteration. Four D – K iterations are
performed until µ reaches a value of 1.02. Information about the D – K
iterations is shown in Table 7-1.
Table 7-1:
Iteration Number     1     2     3     4
Controller Order     25    29    29    29
To replicate these results using D – K iteration, start up dkitgui and press the
SETUP button in the main window. The data required in the DK Iteration
Setup window should be filled in to duplicate the Setup window shown in
Figure 7-72. The message “Mu-Synthesis Problem Specification
Complete...” will appear in the message bar upon correctly entering the
required data. Return to the main dkitgui window and press the Control
Design button. This will synthesize controller K1. To run 4 automated D – K
iterations, pull down the Iteration menu and select the number 4 from the
Auto Iterate menu.
Make sure that the original generalized plant, F14IC, and the controller SYSTEM matrices, K1 and K4, are in your MATLAB workspace.
ex_f14mu
file: ex_f14mu.m
om = logspace(-2,2,60);
clp1 = starp(F14IC,K1);
clp4 = starp(F14IC,K4);
clp1g = frsp(clp1,om);
clp4g = frsp(clp4,om);
deltaset = [2 2; 5 6];
mubnds1 = mu(clp1g,deltaset);
mubnds4 = mu(clp4g,deltaset);
vplot('liv,lm',mubnds1,'-',mubnds4,'--')
xlabel('Frequency (rad/sec)')
(Plot: robust performance µ for controllers K1 (solid) and K4 (dashed) vs. frequency (rad/sec).)
(Plot: nominal and perturbed closed-loop responses — side-slip angle (degrees) and roll rate (degrees/sec) vs. time (seconds).)
The M-file ex_f14s1 contains the sysic commands to generate the F14IC
simulation interconnection structure. ex_f14s2 contains the commands to
calculate the worst-case perturbation of size 1 for K4, the closed-loop time
response of the nominal and perturbed systems, and plots the results.
ex_f14s1
ex_f14s2
Figure 7-74 validates the frequency domain results showing that the controller
synthesized via D – K iteration, K4, is insensitive to changes in the model. You
will notice that the roll-rate response of the F–14 tracks the roll-rate command
well initially and then departs from the command. This is due to a right-half
plane zero in the aircraft model at 0.024 rad/sec.
file: ex_f14s1.m
systemnames = 'W_in A_S A_R F14_nom antia_filt hqmod_p ';
systemnames = [systemnames ' hqmod_beta '];
inputvar = '[in_unc{2}; roll_cmd; beta_cmd; ';
inputvar = [inputvar ' delta_dstab; delta_rud]' ];
outputvar = '[ W_in; hqmod_p; F14_nom(2); hqmod_beta; ';
outputvar = [outputvar ' F14_nom(1); roll_cmd; beta_cmd; '];
outputvar = [outputvar 'antia_filt ]' ];
input_to_W_in = '[ delta_dstab ; delta_rud ]';
input_to_A_S = '[ delta_dstab + in_unc(1) ]';
input_to_A_R = '[ delta_rud + in_unc(2) ]';
input_to_F14_nom = '[ A_S(1); A_R(1) ]';
input_to_antia_filt = '[F14_nom(4); F14_nom(3); F14_nom(2)]';
input_to_hqmod_beta = '[ beta_cmd ]';
input_to_hqmod_p = '[ roll_cmd ]';
sysoutname = 'F14SIM';
cleanupsysic = 'yes';
sysic
file: ex_f14s2.m
om = logspace(-2,2,40);
delta = wcperf(frsp(clp4,om),deltaset,1,1);
sclp4_nom = starp(zeros(2,2),starp(F14SIM,K4));
sclp4_pert = starp(delta,starp(F14SIM,K4));
ustk = step_tr([0 1 4],[0 1 0],.02,10);
upedal = step_tr([0 1 4 7 ],[0 1 -1 0],.02,10);
input = abv(ustk,upedal);
y4nom = trsp(sclp4_nom,input,14,0.02);
y4pert = trsp(sclp4_pert,input,14,0.02);
subplot(211), vplot(sel(y4nom,3,1),'-',sel(y4nom,4,1),...
'-.',sel(y4pert,4,1),'--')
xlabel('Time (seconds)')
ylabel('Side-slip angle (degrees)')
title('beta: ideal (solid), actual (dashed-dot), perturbed (dashed)')
subplot(212), vplot(sel(y4nom,1,1),'-',sel(y4nom,2,1),...
'-.',sel(y4pert,2,1),'--')
xlabel('Time (seconds)')
ylabel('Roll rate (degrees/sec)')
title('roll-rate: ideal (solid), actual (dashed-dot), perturbed (dashed)')
A Process Control Example: Two Tank System
Experimental Description
The system consists of two water tanks in cascade and is shown schematically
in Figure 7-75. The upper tank (tank 1) is fed by hot and cold water via
computer controllable valves. The lower tank (tank 2) is fed by water from an
exit at the bottom of tank 1. A constant level is maintained in tank 2 by means
of an overflow. A cold water bias stream also feeds tank 2 and enables the tanks
to have different steady-state temperatures.
Tank 1 is 5 3/8 inches in diameter and 30 inches in height. Tank 2 is 7 1/2 inches
in diameter and the overflow maintains the water level at 7 1/4 inches. This
configuration maintains the water level in tank 2 at 4 3/4 inches below the base
of tank 1. Flow control is obtained via linear electropneumatic actuators with
a CV of 1.0. One hundred inches of 1/2-inch piping runs from each valve to the
top of tank 1. Approximately 36 inches of pipe connect the tanks, from the base
of tank 1 to the base of tank 2. The tank 2 cold water bias stream is manually
adjustable between 0.015 and 0.3 gpm. Thermocouples are mounted 1/4 inch
above the base of each tank. A pressure sensor (0 to 5 psig) provides a
measurement of the water level in tank 1.
All measured signals are filtered with fourth order Butterworth filters, each
with a cutoff frequency of 2.25 Hz. Twelve bit resolution is used for the A/D and
D/A conversions. In digital implementations of the controllers a sample period
of 0.1 seconds has been used.
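Where an explicit model of these filters is needed, the sketch below shows one way to build it; the use of the Signal Processing Toolbox butter command and the variable name anti_filt are assumptions for illustration, not part of the original tankdemo files.

% Sketch (assumed names and commands): 4th-order Butterworth
% anti-aliasing filter with a 2.25 Hz cutoff as a SYSTEM matrix.
wc = 2*pi*2.25;                    % cutoff frequency in rad/sec
[num,den] = butter(4,wc,'s');      % analog Butterworth prototype
anti_filt = nd2sys(num,den);       % pack into a mu-Tools SYSTEM matrix
minfo(anti_filt)                   % 4 states, 1 output, 1 input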
[Figure 7-75: Two tank system schematic — hot (th) and cold (tc) supplies feed tank 1 through actuated valves (fh and fc commands); the flow f1 from tank 1, together with a bias stream fd supplied at temperature td, feeds tank 2, which holds level h2]
[Table of notation (partial): t1 — temperature of tank 1; t2 — temperature of tank 2]
$$\frac{d}{dt}(A_1 h_1) = f_h + f_c - f_1 \qquad (7\text{-}2)$$
It is assumed that the flow out of tank 1 (ƒ1) is a memoryless function of the
height (h1). As the exit from tank 1 is a pipe with a large length to diameter
ratio, the flow is proportional to the pressure drop across the pipe and thus to
the height in the tank. With a constant correction term for the flow behavior at
low tank levels the height and flow can be reasonably approximated by an
affine function,
$$\dot f_1 = \frac{-1}{A_1\alpha}\, f_1 + \frac{1}{A_1\alpha}\, f_h + \frac{1}{A_1\alpha}\, f_c \qquad (7\text{-}4)$$

$$h_1 = \alpha f_1 - \beta \qquad (7\text{-}5)$$

$$E_1 = h_1 t_1 \qquad (7\text{-}6)$$

$$\dot E_1 = \frac{-1}{A_1\alpha}\left(1 + \frac{\beta}{h_1}\right) E_1 + \frac{t_h}{A_1}\, f_h + \frac{t_c}{A_1}\, f_c \qquad (7\text{-}7)$$

$$t_1 = \frac{1}{h_1}\, E_1 \qquad (7\text{-}8)$$
Note that for a fixed h1 the above equations are linear. This will aid in the
identification.
In tank 2 the height (h2) is fixed and the input flow from tank 1 (ƒ1) is of
temperature t1. This gives only one equation.
$$\frac{d}{dt}(A_2 h_2 t_2) = f_1 t_1 + f_b t_b - (f_1 + f_b)\, t_2 \qquad (7\text{-}9)$$
We will develop a model which has t1 and h1 as inputs and t2 as an output. This
will allow us to concatenate the tank 1 and tank 2 models to give a model for
the full system. A more physically motivated model might have t1 and ƒ1 as
inputs. The linearizations will differ only by the factor α, so the difference is
not significant.
We can rearrange equation 7-9 to give,
$$\dot t_2 = -\frac{h_1 + \beta + \alpha f_b}{\alpha A_2 h_2}\, t_2 + \frac{h_1 + \beta}{\alpha A_2 h_2}\, t_1 + \frac{f_b t_b}{A_2 h_2} \qquad (7\text{-}10)$$
Normalization Units
To quantify the model we define a system of normalized units as follows.
The first two definitions are sufficient to define all of the other units in the
problem. The input flows range from 0 to 2.0 gallons/minute and it is
convenient to define a flow unit at the input by 2.0 gpm = 1.0 funit. Using the
above units the system dimensions are now,
A1 = 0.0256 hunits²
A2 = 0.0477 hunits²
h2 = 0.241 hunits
tb = 0.0 tunits
ƒs = 0.00028 hunits³/sec/funit
The variable ƒs is a flow scaling factor which converts the input (0 to 1 funits)
to flow in hunits3/second. This is used in the tankdemo script.
Operating Range
The physical system imposes constraints on the operating region. The most
obvious of these is that as the bias stream is cold, the temperature of tank 2 (t2)
must be less than that of tank 1 (t1). Saturation in the actuators prevents tank
1 from being completely full of either hot or cold water. The relationship
between ƒ1 and h1 can only be modeled by equation 7-3 for h1 in the range:
0.15 ≤ h1 ≤ 0.75.
The operating range is accordingly restricted to 0.25 ≤ h1 ≤ 0.75 and 0.25 ≤ t1 ≤ 0.75.
Actuator Model
There are significant dynamics and saturations associated with the actuators
and a model of these is included. In the frequency range of interest the
actuators can be modeled as a single pole system with rate and magnitude
saturations. The rate saturation has been estimated from observing the effect
of triangle waves of different frequencies and magnitudes. The following model
will be used for the actuators.
$$f_h = \frac{1}{1 + 0.05s}\, f_{hc} \qquad (7\text{-}11)$$
with a magnitude limit of 1.0 funits and a rate limit of 3.5 funits/sec. It is the
rate limit, rather than the pole location, that limits the actuator performance
for most signals. For a linear model some of the effects of rate limiting can be
included in a perturbation model.
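A minimal sketch of the linear part of this actuator model is shown below; the variable names are illustrative, and the saturations are nonlinear effects that the SYSTEM matrix does not capture.

% Sketch: linear actuator dynamics of equation 7-11, fh = fhc/(1 + 0.05s).
% The 1.0 funit magnitude limit and the 3.5 funits/sec rate limit are not
% represented here; their effect is covered by the perturbation model.
act_hot  = nd2sys(1,[0.05 1]);   % hot valve actuator
act_cold = nd2sys(1,[0.05 1]);   % cold valve actuator (same model)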
Figure 7-76: Transfer Function Between ƒhc + ƒcc and h1. Experimental Data and Theoretical Model (solid line)
[Plot: magnitude and phase in degrees vs. frequency in Hz]
Figure 7-77: Transfer Function Between ƒhc – ƒcc and t1. Experimental Data and Models, h1 = 0.15 and h1 = 0.75
[Plot: magnitude and phase in degrees vs. frequency in Hz]
The nominal model/physical system discrepancies are far more significant for
the t1 case. A larger t1 perturbation weight is required to cover these
discrepancies. A more complete discussion on this issue is given in the
“Perturbation Model” section.
Similar analyses have been performed on tank 2. The open-loop data is less
conclusive because the system is observed through tank 1 and the noise level is higher.
The available data does confirm the theoretical model but to a lower frequency
than that for tank 1. More definitive results have been obtained for tank 2 with
the closed-loop experiments.
Closed-Loop Experiments
We will look at a closed-loop method of estimating a suitable uncertainty level
for the model. This involves using a relay to induce limit cycling and is based
on an auto-tuning method proposed by Åström and Hägglund [AstH]. More
detail on using such approaches for estimating uncertainty levels is described
by Smith and Doyle [SmD2].
Applying a relay in a feedback loop may drive the closed-loop system into stable
limit cycles. This technique works for a large class of systems including the two
tank system. In the experiments performed here a decoupled controller (into
height and temperature loops) was used with a relay in the temperature
control loop. This allowed stable control of the tank height (h1) and produced
limit cycles in t1. These experiments were performed at fixed heights (h1 = 0.15,
0.25, 0.47, and 0.75).
With a simple relay the closed-loop system will limit cycle at the frequency
where the response has a phase of 180 degrees. The gain at this frequency can
also be estimated from the input/output data. This experiment will identify the
system at a single point. Using this information, a new controller is designed
to introduce some lead into the closed-loop system. This new closed-loop system
has limit cycles at a higher frequency giving an additional point at which the
plant can be identified. In practice this technique can be repeated until the
nonlinear and/or inconsistent effects dominate and the closed-loop system no
longer limit cycles consistently. This also provides information on the
frequency at which uncertainty should dominate in the model.
Details of the application of this approach to tank 1 are given in [SmD2]. To
illustrate the concept, the configuration used to induce limit cycles in t1 is
shown in Figure 7-78.
[Figure 7-78: Relay feedback configuration used to induce limit cycles in t1]
Height, h, is controlled by the proportional controller, Kh. R denotes the relay and Ci denotes one of a series of lead controllers used to adjust the limit cycle frequency.
[Plot: limit cycle experiment — level and temperature (upper plot) and hot and cold flows (lower plot) vs. time in seconds]
Figure 7-80: Limit Cycle Identification Experiments for Tank 2. Tank 1 Height Held Fixed at h1 = 0.15, 0.25, 0.47, and 0.75
[Plot: identified magnitude and phase points vs. frequency in Hz, compared with models for h1 = 0.15 and h1 = 0.75]
The t1 and t2 reference signals are denoted by t1cmd and t2cmd, respectively.
Noise is assumed to enter within the tank system block.
$$\begin{bmatrix}\dot f_1\\ \dot E_1\end{bmatrix} =
\begin{bmatrix}\frac{-1}{\alpha A_1} & 0\\[4pt] \frac{\beta \hat t_1}{A_1 \hat h_1} & \frac{-(\beta + \hat h_1)}{\alpha A_1 \hat h_1}\end{bmatrix}
\begin{bmatrix}f_1\\ E_1\end{bmatrix} +
\begin{bmatrix}\frac{1}{\alpha A_1} & \frac{1}{\alpha A_1}\\[4pt] \frac{t_h}{A_1} & \frac{t_c}{A_1}\end{bmatrix}
\begin{bmatrix}f_h\\ f_c\end{bmatrix} \qquad (7\text{-}12)$$
$$\begin{bmatrix} h_1\\ t_1\end{bmatrix} =
\begin{bmatrix} \alpha & 0\\[2pt] \frac{-\alpha \hat t_1}{\hat h_1} & \frac{1}{\hat h_1}\end{bmatrix}
\begin{bmatrix} f_1\\ E_1\end{bmatrix} \qquad (7\text{-}13)$$

$$\dot t_2 = \frac{-(\hat h_1 + \beta + \alpha f_b)}{\alpha A_2 h_2}\, t_2 +
\begin{bmatrix}\frac{\hat t_1 - \hat t_2}{\alpha A_2 h_2} & \frac{\hat h_1 + \beta}{\alpha A_2 h_2}\end{bmatrix}
\begin{bmatrix} h_1\\ t_1\end{bmatrix} \qquad (7\text{-}14)$$
As above, t̂2 denotes the steady state value of t2, and can be calculated from
$$\hat t_2 = \frac{\hat f_1 \hat t_1 + f_b t_b}{\hat f_1 + f_b}, \qquad \hat f_1 = \frac{\hat h_1 + \beta}{\alpha}.$$
Perturbation Model
We must now select a perturbation model structure for our robust control
model. This involves determining the manner in which the perturbations enter
the model, and selecting appropriate frequency dependent weights to
normalize the perturbations. The guiding approach is to make the model
To complete this model we must specify the perturbation weights, Wh1, Wt1,
and Wt2. As we might expect, it is not possible to precisely determine the
amount of uncertainty in the system. At best, we look for a rough frequency
dependent bound. We discuss some of the ad-hoc means for getting suitable
weights below.
It is not surprising that there is less uncertainty associated with tank 2; most
of the temperature uncertainty arises from the mixing dynamics and tank 2 is
somewhat smaller than tank 1. The perturbation weights are illustrated
graphically in Figure 7-84.
Figure 7-84: Perturbation Weights for the Robust Control Design Problem
[Plot: magnitude of Wh1, Wt1, and Wt2 vs. frequency in Hertz]
The output uncertainties Wh1 and Wt1 are given by (note that here we express
the t1 perturbation weight as a function of the steady state height),
$$W_{h1} = 0.01 + \frac{0.5s}{0.25s + 1}, \qquad W_{t1} = 0.1 + \frac{20\hat h_1 s}{0.2s + 1}, \qquad W_{t2} = 0.1 + \frac{100s}{s + 21}.$$
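A sketch of how these weights could be entered as SYSTEM matrices is shown below; the madd/nd2sys construction and the numerical value chosen for the steady-state height are illustrative assumptions, not taken from tankdemo.m.

% Sketch: perturbation weights as SYSTEM matrices.
h1hat = 0.47;                                    % assumed operating height
Wh1 = madd(0.01, nd2sys([0.5 0],[0.25 1]));      % 0.01 + 0.5s/(0.25s+1)
Wt1 = madd(0.1,  nd2sys([20*h1hat 0],[0.2 1]));  % 0.1 + 20*h1hat*s/(0.2s+1)
Wt2 = madd(0.1,  nd2sys([100 0],[1 21]));        % 0.1 + 100s/(s+21)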
Wh1noise = 0.01
Wt1noise = 0.03
Wt2noise = 0.03
The design considered in tankdemo.m uses measurements of only t1 and t2. The
weight Wh1noise is unnecessary for this case but is included so that you can
investigate different control configurations.
There are additional disturbances or noises associated with the measurements
that are not, strictly speaking, sensor noise. For example, in tank 2 the
imperfect mixing of the bias stream causes variations in the temperature
measurement. The inclusion of such noises here does not affect the
performance requirements of the controller.
The error weights are first order low pass filters, with Wt1perƒ weighting the t1
tracking error and Wt2perƒ weighting the t2 tracking error. These are given by,
$$W_{t1perf} = \frac{100}{400s + 1}, \qquad W_{t2perf} = \frac{50}{800s + 1}.$$
$$\begin{bmatrix} t_{1cmd}\\ t_{2cmd}\end{bmatrix} =
\begin{bmatrix} 1 & 0\\ 1 & 1\end{bmatrix}
\begin{bmatrix} W_{t1cmd} & 0\\ 0 & W_{tdiffcmd}\end{bmatrix}
\begin{bmatrix} w_1\\ w_2\end{bmatrix}.$$
In this case, W_t1cmd = 0.1 and W_tdiffcmd = 0.01.
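These weights can be formed in the same way; the sketch below is illustrative and the variable names are assumptions.

% Sketch: performance weights and the command-shaping map.
Wt1perf = nd2sys(100,[400 1]);    % t1 tracking error weight
Wt2perf = nd2sys(50, [800 1]);    % t2 tracking error weight
Wt1cmd    = 0.1;
Wtdiffcmd = 0.01;
% Map from the normalized inputs (w1,w2) to (t1cmd,t2cmd)
Wcmd = mmult([1 0; 1 1], daug(Wt1cmd, Wtdiffcmd));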
Figure 7-85: Performance Weights for the Robust Control Design Problem
[Plot: magnitude of Wt1perf, Wt2perf, Wt1cmd, and Wtdiffcmd vs. frequency in Hertz]
In the case of the actuation weights we would like to weight both the amplitude
and the rate of the actuator. This can be done by weighting ƒhc (and ƒcc) with a
function that rolls up at high frequencies. An alternative approach can be used
when we have a first order actuator model.
Using ƒhc as an example, the approach is to create an actuator model with ƒh
and dƒh/dt as outputs. These can then be separately weighted with constant
weights. Note that this approach has the advantage of reducing the number of
states in the interconnection structure, and hence in the final design.
Figure 7-86 illustrates the form of such a weighted actuator model.
Figure 7-86: Model of the Flow Valve Actuator Including Magnitude and Rate
Weightings
Whact = 0.01
Wcact = 0.01
Whrate = 50
Wcrate = 50
Note that each weighted actuator model contributes only one state to the
interconnection structure, and allows independent weighting of ƒh and dƒh/dt
(and ƒc and dƒc/dt).
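A sketch of such a one-state weighted actuator model is given below; the state-space realization follows directly from equation 7-11, while the variable names are assumptions.

% Sketch: weighted actuator model in the spirit of Figure 7-86.
% With x the actuator state, xdot = -20x + 20*fhc, so fh = x and
% dfh/dt = -20x + 20*fhc are both available as outputs.
A = -1/0.05;   B = 1/0.05;
C = [1; -1/0.05];   D = [0; 1/0.05];
hot_act   = pck(A,B,C,D);                    % outputs: [fh ; dfh/dt]
w_hot_act = mmult(daug(0.01,50), hot_act);   % apply Whact and Whrate
minfo(w_hot_act)                             % 1 state, 2 outputs, 1 input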
Design Issues
One issue regarding the design approach is worth noting. We begin by
designing for the nominal case. This is done by selecting out of the
interconnection structure the perturbation inputs and outputs (v and z). This
design achieved a closed-loop H∞ norm of γ = 0.9082. This is a lower bound on
the achievable value of µ for robust performance.
The response of the nominal controller (k0 in the script) is simulated, and is
shown in Figure 7-87. This is a check on the applicability of our performance
weights. In general, the inclusion of the robustness in the design problem will
have the effect of trading off this nominal performance in order to improve the
stability robustness. Simulating the nominal design gives a rough idea of the
time domain performance implied by the weights.
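A hedged sketch of this nominal design step follows; the interconnection name, the channel indices, and the γ bisection limits are illustrative assumptions rather than the values used in tankdemo.

% Sketch: nominal H-infinity design on the interconnection with the
% perturbation channels (z and v) removed.  tank_ic and the index
% ranges below are placeholders.
nmeas = 2;  ncon = 2;
nom_ic = sel(tank_ic, 4:9, 4:7);   % keep only disturbance/error and
                                   % measurement/control channels
[k0,g0,gfin] = hinfsyn(nom_ic, nmeas, ncon, 0.5, 2.0, 0.01);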
[Figure 7-87: Nominal design (k0) simulation — measurements h1, t1, t2 (upper plot) and actuator commands fhc, fcc (lower plot) vs. time in seconds]
The reference signal ramps (from 80 to 100 seconds) t1 from 0.75 to 0.57, and t2 from 0.67 to 0.47.
The demo file, tankdemo, uses this interconnection structure for the robust
performance design. In the demo example three D – K iterations are performed.
In each of the D-scale fitting calls (performed with the function musynfit),
second order D-scale fits were selected. The final value of µ achieved is 1.73.
Further iterations may give further improvements; for this example it was
decided to stop at this point.
Closed-Loop Analysis
We will run through a typical series of analyses for our design and briefly
discuss the aspects of interest. The procedure guarantees closed-loop stability
(although it is still worth checking to make sure numerical problems have not
invalidated the design) but the controller need not be stable. It often is stable
and this is preferable for implementation purposes.
The frequency response of the controller is given in Figure 7-88. We note that
the controller rolls off around the frequency range where the uncertainties
start to become larger. This is what we would expect from a classical point of
view.
Note that there is not a great deal of low frequency gain. We will subsequently
see that this is because the noise level is high relative to the error weighting.
An identical simulation is run to give an idea of the loss of nominal
performance in the time domain. This is not intended as a comparison between
k0 and kmu. A more appropriate comparison would also include a simulation of
a perturbed system. The function dypert can be used to select the worst case
perturbation. The nominal response of kmu is shown in Figure 7-89. There is
little deterioration in the nominal response for controller kmu.
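A sketch of such a perturbed-system comparison is given below. The output argument order shown for mu and the signal names are assumptions; see the mu and dypert reference entries for the exact calling sequences.

% Sketch: worst-case perturbation and perturbed simulation (assumed
% names: clpg is the closed-loop frequency response, blk the block
% structure, tank_sim the simulation interconnection, u the command).
[bnds,dvec,sens,pvec] = mu(clpg,blk);      % mu analysis of the closed loop
pert = dypert(pvec,blk,bnds,[1 2 3]);      % rational worst-case perturbation
clp_pert = starp(pert, starp(tank_sim,kmu));
y_pert = trsp(clp_pert, u, 800, 1);        % perturbed time response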
[Figure 7-88: Frequency response of controller kmu — magnitude (upper plot) and phase in degrees (lower plot) vs. frequency in Hertz]
[Figure 7-89: Nominal response of controller kmu — measurements h1, t1, t2 (upper plot) and actuator commands fhc, fcc (lower plot) vs. time in seconds]
The reference signal ramps (from 80 to 100 seconds) t1 from 0.75 to 0.57, and t2 from 0.67 to 0.47.
[Figure 7-90: Robust performance, robust stability, and nominal performance vs. frequency in Hertz]
Figure 7-90 illustrates that the nominal performance limits the low frequency
robust performance. Robust stability does not become an issue except around
0.01 Hz. This is not surprising as the perturbation weights begin to increase
around this frequency, yet the controller has only begun to roll off.
These issues can be further investigated by examining smaller subblocks of the
D-scaled closed-loop system. This form of analysis has been referred to as
M-analysis in the aerospace robust control community, simply because the
closed-loop system was habitually designated by M. It allows us to determine
which inputs and outputs are limiting the achievable robust performance. This
is done by grouping the exogenous inputs into the temperature commands and
the sensor noises, and partitioning the error outputs into:
• Tracking errors
• Actuator penalties
• Actuator rate penalties
This gives six 2 × 2 transfer function blocks, and for each of these we calculate
the maximum singular value as a function of frequency.
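One way to carry out this calculation is sketched below; the name of the scaled closed-loop response and the channel indices are assumptions for illustration, and vnorm is assumed to return the default matrix norm (the maximum singular value) at each frequency.

% Sketch: maximum singular value of one 2x2 subblock of the D-scaled
% closed loop (dmdg), here the temperature commands (inputs 1:2) to the
% tracking errors (outputs 1:2).
sub_te = sel(dmdg, 1:2, 1:2);   % pick out one 2x2 subblock
sv_te  = vnorm(sub_te);         % maximum singular value at each frequency
vplot('liv,m', sv_te)
xlabel('Frequency: Hertz')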
The results for the temperature inputs are shown in Figure 7-91. The
calculated nominal performance (maximum singular value of the w to e
transfer function) is an upper bound for each of the subblock calculations and
serves as a comparison point.
The temperature reference command to tracking error transfer function
dominates those shown in Figure 7-91. However it is only really significant,
with respect to the nominal performance, in the frequency range 0.001 to 0.01
Hz. It also has some contribution at lower frequencies. By contrast, the
actuator penalty has almost no effect on the design. We could change this
weight significantly without affecting the design. The actuator rate penalty
starts to influence the design at frequencies above 0.01 Hz.
Figure 7-92 illustrates the subblocks corresponding to the noise inputs. It is
immediately clear that the noise to tracking error transfer function dominates
the design at low frequencies. To improve the low frequency nominal
performance (and the low frequency robust performance) we must reduce the
noise weights. In practical terms this means buying higher quality sensors.
[Figure 7-91: Maximum singular values of the temperature-command subblocks (tracking error, actuator, and actuator rate outputs) vs. frequency in Hertz]
[Figure 7-92: Maximum singular values of the noise-input subblocks vs. frequency in Hertz]
A similar analysis can be used to determine which aspects of the design are
limiting robust stability. In this case, there are three transfer functions to
examine; v1 to z1, etc. These are illustrated in Figure 7-93 and compared to
both robust performance and robust stability.
[Figure 7-93: Maximum singular values of the v to z subblocks vs. frequency in Hertz; robust performance and nominal performance are included for comparison]
Experimental Evaluation
We present an experimental comparison between a µ synthesis design and a
more standard loopshaping controller on the two tank system. The design
shown here is not identical to that given above as it was calculated and
implemented several years before µ-Tools was written. However, the method
and relative weightings were similar.
A simple SISO style loopshaping technique has been used to design controllers
(Kloop) for comparison purposes. One of the worst case plant conditions, (h1 =
0.75, t1 = 0.25), was selected as a nominal design point. The controller simply
inverts the plant to a diagonal loopshape. It is now well known that this
technique will not work well for high condition number plants, particularly
those with uncertainty at the input. Refer to the work of Skogestad et al.
[SkoMD] for further details on this point.
The loopshape chosen was
$$t_{1loop} = \frac{100}{1 + 3000s}, \qquad t_{2loop} = \frac{100}{1 + 5000s}.$$
[Plot: the chosen loopshapes — magnitude vs. frequency in Hz]
Both controllers have been tested over a wide range of commands. Figure 7-95
shows a typical command tracking response for Kloop and Figure 7-96 shows
the same response for Kmu. The oscillatory behavior of the loopshaping
controller cannot be alleviated by selecting loopshapes that have a lower
crossover frequency. These merely produce lower frequency and higher
amplitude oscillations. It is the method of inverting the plant that leads to this
problem.
[Figure 7-95: Command tracking response of the loopshaping controller Kloop — h1, t1, t2 (upper plot) and flow commands (lower plot) vs. time in seconds]
[Figure 7-96: Command tracking response of the µ controller Kmu — h1, t1, t2 (upper plot) and fh, fc commands (lower plot) vs. time in seconds]
Reference
Summary of Commands

[Table: command names and one-line descriptions]

Commands Grouped by Function

[Tables of commands grouped by function, including Modeling Functions]
abv, daug, sbs
Description abv places the matrix mat1 above the matrix mat2. daug places the input
matrices on the diagonal of the output matrix. sbs places the input matrices
next to one another. All these commands, abv, daug, and sbs, allow the use of
multiple input arguments (up to nine). CONSTANT, SYSTEM, and VARYING
matrices can be combined with one another based on the following table.
[Table: allowable combinations of mat1 (CONSTANT, SYSTEM, VARYING) with mat2 (CONSTANT, SYSTEM, VARYING)]
The input matrices must be compatible in the respective dimension in order for
the function to be performed. abv requires the same number of columns (inputs
for SYSTEM matrices) and sbs requires the input matrices to have the same
number of rows (outputs for SYSTEM matrices).
Examples Create two CONSTANT matrices a and b along with two SYSTEM matrices p1
and p2. Examples of manipulation using abv, daug, and sbs are shown in the
following examples.
a = [1 2 3; 4 5 6];
b = [7 7 7; 8 8 8 ];
p1 = pck(-10,1,10,0);
p2 = pck(-3,2,4,.1);
seesys(abv(p1,p2))
out = abv(a,b,b)
out =
1 2 3
4 5 6
7 7 7
8 8 8
7 7 7
8 8 8
out = daug(a,b)
out =
1 2 3 0 0 0
4 5 6 0 0 0
0 0 0 7 7 7
0 0 0 8 8 8
out = sbs(p1,p2);
seesys(out)
minfo(out)
system: 2 states 1 outputs 2 inputs
out = sbs(a,b,a)
out =
1 2 3 7 7 7 1 2 3
4 5 6 8 8 8 4 5 6
blknorm
Purpose
Create a matrix which is made up of norms of subblocks. Used in conjunction
with mu (µ)
Description blknorm computes the maximum singular value of each subblock of matin,
using the information in the perturbation block structure, blk. The output
matout has these subblock norms as its elements. This helps to show which
parts of the matrix are contributing to making µ large.
description of the perturbation block structure, blk, can be found with the
command mu and in Chapter 4. blknorm is best used on scaled matrices from
the upper bound for µ. Repeated δI blocks are treated the same way as full
blocks. The function blknorm can be applied to both CONSTANT and
VARYING matrices.
Examples Create a 4 × 3 random matrix and determine its subblock norms for two
different block structures. The first block structure consists of a two element
repeated block and a 1 × 2 full block. The second block structure consists of a 1
× 1 block, a 1 × 2 full block, and a 1 × 1 block.
m = crand(4,3);
disp(m)
disp(blknorm(m,[2 0; 1 2]))
1.8498 1.5272
1.6656 0.6923
Algorithm The maximum singular value of each block associated with the blk structure
is calculated via the MATLAB norm.
cjt, transp, vcjt, vtp
Description cjt forms the complex conjugate transpose of the input matrix mat and transp
forms the transpose of mat. transp outputs similar results to the MATLAB
command .′. These commands also work on SYSTEM and VARYING matrices.
For consistency in our naming convention, vcjt and vtp are the same
commands as cjt and transp, but work on just CONSTANT and VARYING
matrices.
For a SYSTEM matrix mat, transp and cjt are defined as
$$\mathrm{mat} = \begin{bmatrix} A & B\\ C & D\end{bmatrix}, \qquad
\mathrm{transp(mat)} = \begin{bmatrix} A^T & C^T\\ B^T & D^T\end{bmatrix}, \qquad
\mathrm{cjt(mat)} = \begin{bmatrix} -A^* & -C^*\\ B^* & D^*\end{bmatrix}$$
Examples Create a SYSTEM matrix and calculate its transpose and conjugate transpose
using cjt and transp.
A = [-10 0; 0 3];
B = [1 0 3; 0 2 -9];
C = [10 0; 0 4];
D = [0 -.2 -45; .82 0 .1];
out = pck(A,B,C,D);
seesys(out,'%5.2g')
-10 0 | 1 0 3
0 3 | 0 2 -9
--------------------------------------
10 0 | 0 -.2 -45
0 4 | .82 0 .1
x = transp(out);
seesys(x,'%5.2g')
-10 0 | 10 0
0 3 | 0 4
---------------------------
1 0 | 0 .82
0 2 | -.2 0
3 -9 | -45 .1
x = cjt(out);
seesys(x,'%5.2g')
10 0 | -10 0
0 -3 | 0 -4
---------------------------
1 0 | 0 .82
0 2 | -.2 0
3 -9 | -45 .1
Algorithm These functions call the MATLAB commands ′ and .′ consistent with the type
of input matrices.
cmmusyn
cmmusyn approximately solves the constant-matrix µ-synthesis minimization
$$\min_{Q \in \mathbf{C}^{r \times t}} \mu_\Delta(R + UQV)$$
Algorithm This works for CONSTANT or VARYING data in R, U, and V. If two or more
matrices are VARYING, the independent variable values of these matrices
must be the same.
The approximation to solving the constant matrix µ synthesis problem is
two-fold: only the upper bound for µ is minimized, and the minimization is not
convex, hence the optimum is generally not found. If U is full column rank, or
V is full row rank, then the problem can (and is) cast as a convex problem,
[PacZPB], and the global optimizer (for the upper bound for µ) is calculated.
The upper bound is returned in bnd, and the optimizing Q is returned in qopt.
The scaling matrices associated with the upper bound are in dvec and gvec and
may be unwrapped into block diagonal form using muunwrap.
cos_tr, sin_tr, step_tr
Description cos_tr, sin_tr, and step_tr generate time signals for use with the trsp (time
response) command. The following are the input variables provided to cos_tr
and sin_tr.
[Tables of input arguments and outputs for cos_tr, sin_tr, and step_tr]
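The step_tr calls from the F-14 simulation example earlier in this chapter illustrate the typical usage:

% Input signals for trsp, built with step_tr and stacked with abv
% (taken from the F-14 example, ex_f14s2.m).
ustk   = step_tr([0 1 4],[0 1 0],.02,10);       % step up at t=1, down at t=4
upedal = step_tr([0 1 4 7],[0 1 -1 0],.02,10);  % doublet-like pedal command
u = abv(ustk,upedal);                           % two-channel input for trsp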
[Plot: example time signal — magnitude vs. time in seconds]
Algorithm cos_tr and sin_tr call the MATLAB commands cos and sin, respectively.
crand, crandn, sysrand, varyrand
Examples Create a CONSTANT, complex random matrix and a random SYSTEM matrix
using the commands crand and sysrand
crand(4,3)
ans =
0.4764 + 0.1622i 0.9017 + 0.1351i 0.4103 + 0.4523i
0.3893 + 0.0711i 0.4265 + 0.7832i 0.1312 + 0.8089i
0.2033 + 0.3653i 0.1420 + 0.4553i 0.8856 + 0.9317i
0.0284 + 0.2531i 0.9475 + 0.3495i 0.0922 + 0.6516i
sys=sysrand(2,4,1);
seesys(sys)
See Also minfo, pck, pss2sys, rand, randel, sys2pss, unpck, vpck, vunpck
csord
Purpose
Compute an ordered, complex Schur form matrix
Description The csord function produces an ordered, complex Schur form matrix of the
input CONSTANT square matrix m with
$$v' * m * v = t = \begin{bmatrix} t_{11} & t_{12}\\ 0 & t_{22}\end{bmatrix}$$
flgord=1 partial real part ordering, with real parts less than zero first,
then the jω axis eigenvalues and finally the real parts greater
than zero
flgjw=0 no exit condition on eigenvalue location (default)
The output flag flgout is nominally 0. flgout is set to 1 if there are jω-axis
eigenvalues, set to 2 if there are an unequal number of positive and negative
eigenvalues, or set to 3 if both conditions occur. The fourth output argument,
reig_min, is the minimum magnitude real part of the eigenvalues of m.
The ric_schr routine calls csord to solve for a stabilizing solution to a matrix
Riccati equation. In this case, the m matrix has a special structure, and failure
modes are flagged to avoid extra, unnecessary computations.
Algorithm The eigenvalues are reordered by iterating through each of them and
interchanging them via a bubble sort based on the input flag, flgord. The
subroutine cgivens exchanges the out of order eigenvalues.
Reference Golub, G.H. and C.F. Van Loan, Matrix Computations, The Johns Hopkins
University Press, 1983.
dhfnorm
Purpose
dhfnorm computes the H∞ gain of a stable, discrete-time SYSTEM matrix
The bilinear transformation
$$z = \frac{1 + sh/2}{1 - sh/2}, \qquad s = \frac{2}{h}\,\frac{z - 1}{z + 1}$$
is used to map the discrete-time problem to an equivalent continuous-time one.
Output arguments:
out a 1 × 3 vector giving a lower bound, upper bound and
the frequency where the lower bound occurs.
See Also hinfsyne, hinffi, hinfnorm, hinfsyn, h2syn, h2norm, ric_eig, ric_schr,
sdhfnorm, sdhfsyn
dhfsyn
Purpose
Compute an H∞ controller for a discrete-time SYSTEM interconnection matrix
Description dhfsyn calculates a discrete-time H∞ controller that achieves the infinity norm
gfin for the interconnection structure p. The controller, k, stabilizes the
discrete-time SYSTEM matrix p and has the same number of states as p. The
SYSTEM p is partitioned
$$p = \begin{bmatrix} A & B_1 & B_2\\ C_1 & D_{11} & D_{12}\\ C_2 & D_{21} & D_{22}\end{bmatrix}$$
where B1 are the disturbance inputs, B2 are the control inputs, C1 are the
errors to be kept small, and C2 are the output measurements provided to the
controller. B2 has column size (ncon) and C2 has row size (nmeas).
The closed-loop system is returned in g. The same bilinear transformation
method described for dhfnorm is used. The controller k is returned that
minimizes the entropy integral,
$$I = -\frac{\mathrm{gfin}^2}{2\pi}\int_{-\pi}^{\pi} \log\left|\det\left(I - \mathrm{gfin}^{-2}\, g(e^{j\theta})'\, g(e^{j\theta})\right)\right| \,\frac{1 - z_0^{-2}}{\left|e^{j\theta} - z_0^{-1}\right|^{2}}\, d\theta$$
Input arguments
p SYSTEM interconnection structure matrix, (stable, discrete
time)
nmeas number of measurements output to controller
ncon number of control inputs
gmin lower bound on γ
gmax upper bound on γ
tol relative difference between final γ values
h time between samples (optional)
z0 point at which entropy is evaluated (default ∞)
quiet controls printing on the screen
1. no printing
0. header not printed
–1. full printing (default)
ricmethod 1. Eigenvalue decomposition (with balancing)
–1. Eigenvalue decomposition (without balancing)
2. Schur decomposition (with balancing, default)
–2. Schur decomposition (without balancing)
epr measure of when a real part of an eigenvalue of the
Hamiltonian matrix is zero (default epr = 1e–10)
epp positive definite determination of the X∞ and Y∞ solution
(default epp = 1e–6)
Output arguments:
Note that the outputs ax, ay, hamx, and hamy correspond to the equivalent
continuous-time problems and can also be scaled and/or balanced.
The dhfsyn program outputs several variables, which can be checked to ensure
that the above conditions are being met. For each γ value the minimum
magnitude, real part of the eigenvalues of the X Hamiltonian matrices is
displayed along with the minimum eigenvalue of X∞, which is the solution to
the X Riccati equation. A # sign is placed to the right of the condition that failed
in the printout. This additional information can aid you in the control design
process.
Algorithm dhfsyn uses the above bilinear transformation to continuous-time and then the
formulae described in the Glover and Doyle paper for solution to the optimal
H∞ control design problem.
Subroutines called. hinfsyne, hinf_st, hinf_gam, hinfe_c:
hinf_gam calls ric_eig, ric_schr, csord, and cgivens
See Also hinfsyne, hinffi, hinfnorm, hinfsyn, h2syn, h2norm, ric_eig, ric_schr,
sdhfnorm, sdhfsyn
dkit
Syntax dkit
Description dkit is a µ-Tools script file for D–K iteration. The D–K iteration procedure is
an approximation to µ synthesis control design. It involves a sequence of
minimizations, first over the controller variable K (holding the D variable
associated with the scaled µ upper bound fixed), and then over the D variable
(holding the controller K variable fixed). The D–K iteration procedure is not
guaranteed to converge to the minimum µ value, but often works well in
practice. A detailed description of the D–K iteration can be found in Chapter 5.
dkit automates the D–K iteration procedure but requires the initialization of
several variables. The file dk_defin.m is an example of the information
required by dkit. You can copy this file from the µ-Tools subroutine directory
mutools/subs and modify it for your application. This file can also be renamed.
After renaming, assign the variable DK_DEF_NAME in the MATLAB workspace
to the (character string) name of the new file containing the user-defined
variables for dkit. For example, if the filename containing the setup data is
himat_def.m, then
DK_DEF_NAME = 'himat_def';
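A minimal sketch of such a setup file is shown below. NOMINAL_DK and OMEGA_DK appear in the variable descriptions later in this entry; the remaining names and all of the values follow the same convention but are assumptions and should be checked against the dk_defin.m template in mutools/subs.

% himat_def.m -- sketch of a dkit setup file (values are illustrative)
NOMINAL_DK = himat_ic;            % open-loop interconnection SYSTEM matrix
NMEAS_DK   = 2;                   % number of measurements      (assumed name)
NCONT_DK   = 2;                   % number of control inputs    (assumed name)
BLK_DK     = [2 2; 4 2];          % perturbation block structure (assumed name)
OMEGA_DK   = logspace(-3,3,60);   % frequency grid for the mu calculation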
1 Upon running dkit, the program prompts you for starting D–K iteration
number.
Starting mu iteration #:
Type 1 to indicate the first D–K iteration.
2 (In the 1st iteration, this step is skipped.) The µ calculation (from the
previous step) provides frequency-dependent scaling matrices, Df. The
fitting procedure is interactive (msf), and fits these scalings with rational,
stable transfer function matrices, D̂ ( s ) .
After fitting, plots of
$$\bar\sigma\left(D_f(j\omega)\, F_L(P,K)(j\omega)\, D_f^{-1}(j\omega)\right)$$
and
$$\bar\sigma\left(\hat D(j\omega)\, F_L(P,K)(j\omega)\, \hat D^{-1}(j\omega)\right)$$
are shown for comparison.
Subsequent iterations proceed along the same lines without the need to
re-enter the iteration number. A summary at the end of each iteration is
updated to reflect data from all previous iterations. This often provides a useful indication of how the iteration is progressing.
The following is a list of the optional variables that may be set (in either the
dk_defin file or the file defined by DK_DEF_NAME) and their meanings.
A number of variables are saved in the workspace after each iteration. Some of
these variables are required every iteration, hence, it doesn’t make sense to
recompute them. The other variables are outputs from the D–K iteration
procedure. The variables saved after each iteration are
bnds_dk(i) Frequency domain upper and lower bounds for µ
associated with the ith iteration. The (i) denotes the ith
iteration which is augmented to the name by the program
dkit.
dl_dk(i) Left state-space D-scale associated with ith iteration. The
(i) denotes the ith iteration which is augmented to the
name by the program dkit. Hence, dl_dk5 would be the
left state-space D-scale from the fifth iteration.
dr_dk(i) Right state-space D-scale associated with ith iteration.
Same notation.
Dscale_dk(i) D-scaling data output from mu associated with ith
iteration. Dscale_dk(i) is in compressed form. Same
notation.
gf_dk(i) The H∞ norm of the ith iteration closed-loop system. Same
notation.
k_dk(i) Controller from the ith iteration. Same notation.
nom_dk_g Frequency response of NOMINAL_DK using OMEGA_DK.
sens_dk(i) Sensitivity data output from the ith iteration mu
calculation. Same notation.
Fitting D-Scalings
The D-scale fitting procedure is interactive and uses the µ-Tools command msf.
During step 2 of the D–K iteration procedure, you are prompted to enter your
choice of options for fitting the D-scaling data. After pressing return, the
following is a list of your options.
Enter Choice (return for list):
Choices:
nd Move to Next D-Scaling
nb Move to Next D-Block
• nd and nb allow you to move from one D-scale data to another. nd moves to
the next scaling, whereas nb moves to the next scaling block. For scalar
D-scalings, these are identical operations, but for problems with full
D-scalings, (perturbations of the form δI) they are different. In the (1,2)
subplot window, the title displays the D-scaling Block number, the row/
column of the scaling that is currently being fit, and the order of the current
fit (with d for data, when no fit exists).
• The order of the current fit can be incremented or decremented (by 1) using
i and d.
• apf automatically fits each D-scaling data. The default maximum state order
of individual D-scaling is 5. The mx variable allows you to change the
maximum D-scaling state order used in the automatic prefitting routine. mx
must be a positive, nonzero integer. at allows you to define how closely the
rational, scaled µ upper bound approximates the actual µ upper bound in
a norm sense. Setting at 1 would require an exact fit of the D-scale data, and
is not allowed. Allowable values for at are greater than 1. This setting plays
a role (mildly unpredictable, unfortunately) in determining where in the
(D,K) space the D–K iteration converges.
• Entering a positive integer at the prompt will fit the current D-scale data
with that state order rational transfer function.
• e exits the D-scale fitting to continue the D–K iteration.
• The variable s will display a status of the current fits.
Examples An example of using dkit for D–K iteration is provided in the “HIMAT Robust
Performance Design Example” in Chapter 7.
Reference Balas, G.J. and J.C. Doyle, “Robust control of flexible modes in the controller
crossover region,” AIAA Journal of Guidance, Dynamics and Control, Vol. 17,
no. 2, pp. 370–377, March-April, 1994.
Balas, G.J., A.K. Packard and J.T. Harduvel, “Application of µ-synthesis
techniques to momentum management and attitude control of the space
station,” AIAA Guidance, Navigation and Control Conference, New Orleans,
August 1991.
Doyle, J.C., K. Lenz, and A. Packard, “Design examples using
µ-synthesis: Space shuttle lateral axis FCS during reentry,” NATO ASI Series,
Modelling, Robustness, and Sensitivity Reduction in Control Systems, vol. 34,
Springer-Verlag, 1987.
Packard, A., J. Doyle, and G. Balas, “Linear, multivariable robust control with
a µ perspective,” ASME Journal of Dynamic Systems, Measurement and
Control, 50th Anniversary Issue, vol. 115, no. 2b, pp. 310–319, June 1993.
Stein, G., and J. Doyle, “Beyond singular values and loopshapes,” AIAA
Journal of Guidance and Control, vol. 14, no. 1, pp. 5–16, January, 1991.
dkitgui
Syntax dkitgui
Description dkitgui is a graphical user interface (GUI) for D–K iteration. The D–K
iteration procedure is an approximation to µ synthesis control design. It
involves a sequence of minimizations, first over the controller variable K
(holding the D variable associated with the scaled µ upper bound fixed), and
then over the D variable (holding the controller K variable fixed). The D–K
iteration procedure is not guaranteed to converge to the minimum µ value, but
often works well in practice. A more detailed description of the D–K iteration
can be found in Chapter 5.
The GUI tool, dkitgui, has five windows. They are:
• Main Iteration window, which is the main interface for the user during the
iteration.
• Setup window, where initial data is entered.
• Parameter window, which is occasionally used to modify properties of the
D–K iteration, such as H∞ parameters, and to select the variables that are
automatically exported to the workspace each iteration.
• Frequency Response window, where the plots of µ and σ of the closed-loop
transfer function matrix are displayed.
• Scaling window, where the rational fits of the frequency-dependent D-scale
data are shown, and can be modified.
Examples An example of using dkitgui for D–K iteration is provided in the “HIMAT
Robust Performance Design Example” section in Chapter 7.
Reference Balas, G.J. and J.C. Doyle, “Robust control of flexible modes in the controller
crossover region,” AIAA Journal of Guidance, Dynamics and Control, vol. 17,
no. 2, pp. 370–377, March-April, 1994.
Balas, G.J., A.K. Packard and J.T. Harduvel, “Application of µ-synthesis
techniques to momentum management and attitude control of the space
station,” AIAA Guidance, Navigation and Control Conference, New Orleans,
August 1991.
drawmag
Purpose
Provide an interactive mouse-based log-log sketch and fitting tool
Description drawmag interactively uses the mouse in the plot window to create a VARYING
matrix pts and a stable, minimum-phase SYSTEM sysout, which
approximately fits, in magnitude, the frequency VARYING matrix in pts.
Input arguments:
Output arguments:
sysout SYSTEM matrix fitting pts.
pts VARYING matrix of points.
While drawmag is running, all interaction with the program is through the
mouse and/or the keyboard. The mouse, if there is one, must be in the plot
window. The program recognizes several commands:
• Clicking the mouse button adds a point at the crosshairs. If the crosshairs
are outside the plotting window, the points will be plotted when the fitting,
windowing, or replotting modes are invoked. Typing a is the same as clicking
the mouse button.
• Typing r removes the point with frequency nearest that of the crosshairs.
• Typing any integer between 0-9 fits the existing points with a transfer
function of that order. The fitting routine approximately minimizes the
maximum error in a log sense. The new fit is displayed along with the points,
and the most recent previous fit, if it exists.
• Typing w uses the crosshair location as the initial point in creating a window.
Moving the crosshairs and clicking the mouse or pressing any key then gives
a second point at the new crosshair location. These two points define a new
dypert, sisorat
Description The input to dypert consists of the perturbation pvec, the block structure blk,
and lower and upper bounds, bnds, produced from a VARYING matrix µ
calculation. (See the µ-Tools command mu for a more complete description of
these variables.) By searching in bnds, dypert finds the peak value of the lower
bound — for example γ, occurring at frequency ω0. dypert then extracts the
perturbation (call this matrix ∆0) from pvec at the frequency ω0. dypert
constructs a SYSTEM matrix, pert, which is stable, and has the block-diagonal
structure associated with blk, and also satisfies the equations
$$\mathrm{pert}(j\omega_0) = \Delta_0 \qquad \text{and} \qquad \|\mathrm{pert}\|_\infty = \frac{1}{\gamma}$$
The command dypert can also be called with four input arguments. In this
case, the last argument is a vector of integers. This vector, blkindex, specifies
which blocks in the perturbation structure specified by the blk vector, are to be
used to construct the rational perturbation. For instance, if the block structure
specified by blk is
$$\boldsymbol\Delta := \left\{\mathrm{diag}\left[\Delta_1, \delta_2 I_{4\times4}, \delta_3 I_{2\times2}, \Delta_4, \Delta_5, \delta_6 I_{3\times3}, \Delta_7\right] :\ \Delta_1 \in \mathbf{C}^{2\times3},\ \delta_2 \in \mathbf{C},\ \delta_3 \in \mathbf{C},\ \Delta_4 \in \mathbf{C}^{4\times2},\ \Delta_5 \in \mathbf{C}^{3\times3},\ \delta_6 \in \mathbf{C},\ \Delta_7 \in \mathbf{C}^{2\times1}\right\}$$
and the fourth argument, blkindex, is, for example, [1 3 4 6], then the constructed perturbation has the block diagonal form
$$\mathrm{diag}\left[\Delta_1(s),\ \delta_3(s) I_{2\times2},\ \Delta_4(s),\ \delta_6(s) I_{3\times3}\right].$$
This is useful when some of the perturbations are not physical perturbations,
but correspond to performance objectives, and hence do not need to be
constructed. Note that regardless of the order in which the numbers occur in
the fourth argument, the perturbations in the output SYSTEM pert are in
ascending order. When dypert is called with three arguments, all the
perturbations are constructed.
The command sisorat is essentially a scalar version of dypert, and is the main
subroutine for dypert. The input to sisorat is value, a 1 × 1, VARYING
matrix, with one independent variable value. The independent variable is
interpreted as a frequency, ω0, and the numerical value of value at that
frequency is denoted by γ. The output of sisorat is a single-input/single-output
stable, real SYSTEM matrix, ratfit, satisfying
$$\mathrm{ratfit}(j\omega_0) = \gamma \qquad \text{and} \qquad |\mathrm{ratfit}(j\omega)| = |\gamma| \ \text{for all } \omega.$$
Algorithm The main subroutine of dypert is sisorat, which operates on the following
fact. For any complex number γ, and any real frequency ω0 > 0, there is a real
number β > 0 such that by proper choice of sign, the equality
$$\left.\pm|\gamma|\,\frac{s - \beta}{s + \beta}\right|_{s = j\omega_0} = \gamma$$
holds.
Each full block of the lower-bound perturbation can in fact be chosen to be a dyad, and the program mu does this at each independent
variable value. Hence, the matrix ∆0, as described above, is a diagonal
augmentation of scalar blocks and dyads. Given that the ith block is a dyad,
write it as $\Delta_{0i} = y_i x_i^*$ for complex vectors yi and xi. Using several calls to
sisorat, it is possible to create two stable, rational vectors h(s) (column) and
r(s) (row), such that each element of the vectors is of the form generated by
sisorat (stable, and flat across frequency), and for each k and l
$$h_k(j\omega_0) = (y_i)_k, \qquad r_l(j\omega_0) = (x_i^*)_l.$$
If we define $\hat\Delta_i(s) := h(s) r(s)$, then it is stable, and
$$\|\hat\Delta_i\|_\infty = \bar\sigma(\Delta_{0i}), \qquad \hat\Delta_i(j\omega_0) = \Delta_{0i}.$$
fitmag, genphase, fitmaglp, magfit
Description fitmag fits a stable, minimum phase transfer function to magnitude data,
magdata, with a supplied frequency domain weighting function, weight. Both
of these are VARYING matrices, with identical independent variable values.
fitmag uses genphase to generate phase data, and fitsys to do the fit.
fitmag and fitmaglp have the additional input arguments dmdi, upbd, blk,
and blknum. These arguments are used exclusively with D–K iteration when
called by musynfit and musynflp. In this case, the magdata is the dvec output
of the mu program and corresponds to the blknum’th frequency varying D scale
to be fit. weight corresponds to a measure of the sensitivity of mu to changes in
the D scales at each frequency. This is the sens output from mu. heading is a
string variable denoting the title of the plot and oldfit is usually the D scalings
from the previous D–K iteration.
dmdi represents the VARYING matrix analyzed using mu. upbd is the upper
bound calculated using mu of dmdi with the perturbation block structure blk.
The last argument, blknum, corresponds to the current D scale in the block
structure being fit with fitmag or fitmaglp. Upon fitting the magnitude data,
magdata, the resulting transfer function sys is absorbed into the original
matrix dmdi and plotted along with the mu upper bound on the lower graph.
magfit is a batch version of fitmaglp that eliminates the user interaction. The
weight is optional, but dim is a required argument of parameters for the linear
program. dim has the form [hmax htol nmin nmax] where
Examples Create a second-order transfer function sys to test fitmag. Fit its magnitude
data with a first- and second-order transfer function via fitmag.
sys = nd2sys([1 -5 12],[1 2 7]);
w = logspace(-2,2,200);
sysg = frsp(sys,w);
wgt = 0.2;
wgtg = frsp(wgt,w);
sysfit = fitmag(vabs(sysg),wgtg);
ENTER ORDER OF CURVE FIT or 'drawmag' 1
[Plot: first-order fit (newfit) compared with the magnitude data]
ENTER NEW ORDER, ’drawmag’, or NEGATIVE NUMBER TO STOP
A first-order fit does not accurately represent the frequency data as shown in
the above figure. The solid line represents the curve fit and the dashed line
represents the original frequency data. You can try and fit the data again with
a second-order system.
ENTER ORDER, 'drawmag', or NEGATIVE TO STOP 2
CURVE FITTING, W/ORDER = 2
[Plot: second-order fit (newfit) compared with the magnitude data]
ENTER NEW ORDER, ’drawmag’, or NEGATIVE NUMBER TO STOP
The second-order fit lies directly on top of the original data, hence it is difficult
to distinguish the two plots. A -1 is entered at the end of this iteration
procedure to indicate satisfaction with the results.
Algorithm The algorithm for fitmag is as follows. On a log-log scale, the magnitude data
is interpolated linearly, with a very fine discretization. Then, using the
complex cepstrum algorithm, the phase, associated with a stable, minimum
phase, real, rational transfer function with the same magnitude as the magdata
variable is generated. This involves two fft’s, and logarithmic/exponential
conversions. With the new phase data, and the input magnitude data, the
MATLAB function invfreqs is used to find a real, rational transfer function
that fits the data. heading is an optional title and oldfit is an optional
previous fit that can be added to the graphs if they are included. These options
are used in the program musynfit.
The algorithm for magfit is as follows. The system sys is derived by solving a
linear program and searching over a parameter h according to the following
specification. Let m be the given magnitude data, g the transfer function of sys
and w the values of weight; then h is found such that at each frequency,
$$\frac{1}{r} < \frac{|g|}{m} < r, \qquad \text{where} \quad r = 1 + \frac{h}{w^2}.$$
The order of sys is increased until an h less than hmax is obtained or nmax is
reached. The minimum value of h at this order is then determined to an
accuracy of htol.
Problems For problems with very coarse data, fitmag may give incorrect answers, even
if the data was generated by taking the frequency response of a linear system.
The inaccuracy arises in the log-log interpolation step, which is used in the
phase calculation. This step is avoided in magfit. Hence, for coarse data sets,
magfit should be used. fitmaglp and magfit appear to be sometimes slower
than fitmag.
Reference Oppenheim, A.V., and R.W. Schafer, Digital Signal Processing, Prentice Hall,
New Jersey, 1975, p. 513.
fitsys
Purpose
Fits single-input/single-output (SISO), single-input/multi-output (SIMO) and
multi-input/single-output (MISO) frequency response data with a SYSTEM
matrix
Description fitsys fits frequency response (VARYING) data in resp with a transfer
function of order ord, using a frequency dependent weight in wt (optional). The
frequency response data may be either a row (SIMO) or column (MISO). The
optional frequency dependent weight is a VARYING matrix. This weight may
be a scalar (1 row, 1 column), or may be the same shape as resp.
The fourth argument, code, is optional. If set to 0 (default), then the fit is as
described. If code = 1, as in the µ-synthesis routines, it forces the fit to be stable,
minimum phase, simply by reflecting the poles and zeros if necessary. In this
case, the response resp comes from the program genphase and already
corresponds to a stable, minimum phase transfer function. fitsys is called by
fitmag and msf.
Examples An example of how to use fitsys to derive a SIMO transfer function model is
provided in the “More Sophisticated SYSTEM Functions” section in Chapter 2.
frsp
Purpose
Calculate the complex frequency response of a linear system
Description frsp calculates the complex frequency response of a given SYSTEM matrix
(sys) for a vector of frequency points (omega). The output matrix out is a
frequency dependent VARYING matrix containing the frequency response of
the input system sys at the frequency values contained in the vector omega .
For systems with multiple inputs and outputs, a multivariable frequency
response is returned.
Input arguments:
sys SYSTEM matrix
omega vector of frequencies at which the frequency response is calculated.
If another VARYING matrix is input here, then its independent variables are used
T 0 (default) indicates a continuous system. A nonzero value forces
discrete system evaluation with sample time T (optional)
balflg 0 (default) balances the SYSTEM A matrix prior to evaluation. A
nonzero value for balflg leaves the state-space data unchanged
(optional)
Output arguments
out VARYING frequency response matrix
The vector of frequency points is assumed to be real and can be generated from
the MATLAB command logspace or linspace. Given a continuous system sys,
of the form
$$\mathrm{sys} = \begin{bmatrix} A & B\\ C & D\end{bmatrix}$$
and an input vector, omega , with N frequencies, [ω1, ω2,. . .,ωN], frsp evaluates
the following equation
$$C\,(j\omega_i I - A)^{-1} B + D, \qquad i = 1, \ldots, N.$$
For a discrete-time evaluation (T nonzero), frsp instead evaluates
$$C\,(e^{j\omega_i T} I - A)^{-1} B + D, \qquad i = 1, \ldots, N.$$
Examples The SYSTEM matrix sys is constructed to have two inputs and two outputs
with poles at –2 and –10. A frequency vector omega is constructed with 30
points log spaced between .1 and 100 rad/s. The complex frequency response of
sys is calculated and its values between 3.5 and 4.6 rad/s are displayed.
a = [-2 0;0 -10];b = [.2 .12; -.3 .4];c = [.3 .7; 2 -1];
sys = pck(a,b,c);
omega = logspace(-1,2,30);
sysg = frsp(sys,omega);
see(xtract(sysg,3.5,4.6))
2 rows 2 columns
iv = 3.56225
iv = 4.52035
A frequency response is performed using frsp with the default variables set. A
plot of the frequency response is shown with the four line types corresponding
to the sysg(1,1), sysg(1,2), sysg(2,1), and the sysg(2,2) elements.
vplot('bode',sysg);
title('Complex frequency response example - continuous time')
[Plot: complex frequency response example, continuous time — log magnitude and phase in radians vs. frequency in radians/sec]
For digital filter design, you can examine the transfer function from 0 to π by
specifying T = 1. A Chebyshev type II filter is designed and its magnitude is
plotted to demonstrate this feature.
[a,b,c,d] = cheby2(11,30,0.3);
dfilt = pck(a,b,c,d);
omega = [0:pi/100:pi*99/100];
dsysg = frsp(dfilt, omega,1);
vplot('iv,lm',dsysg);
xlabel('frequency on unit circle')
ylabel('Magnitude')
title('Complex frequency response on the unit circle')
Algorithm The algorithm to calculate the complex frequency response involves a matrix
inverse problem, which is solved via a Hessenberg matrix. If balflg is set to 0,
the frequency response balances the SYSTEM A matrix (using the MATLAB
balance command) prior to calculation of the Hessenberg form.
Note Balancing the system may cause errors in the frequency response. If
the output of frsp is questioned, compare the results with balancing and
without balancing the SYSTEM prior to calculating the frequency response.
[Plot: complex frequency response on the unit circle - magnitude versus frequency on the unit circle (0 to π)]
gap, nugap
Description gap and nugap compute the gap and ν gap metrics between two SYSTEM
matrices. Both quantities give a numerical value δ(G0,G1) between 0 and 1 for
the distance between a nominal system sys1 (G0) and a perturbed system sys2
(G1). The gap metric was introduced into the control literature by Zames and
El-Sakkary, 1980, and exploited by Georgiou and Smith, 1990. The ν gap
metric was derived by Vinnicombe, 1993. For both of these metrics the
following robust performance result holds from Qiu and Davison, 1992, and
Vinnicombe, 1993
    arcsin b(G1,K1)  ≥  arcsin b(G0,K0) – arcsin δ(G0,G1) – arcsin δ(K0,K1)
where
    b(G,K)  =  || [ I ; K ] ( I – GK )⁻¹ [ G   I ] ||∞⁻¹
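As a minimal illustration (this assumes the two-argument calling forms gap(sys1,sys2) and nugap(sys1,sys2); the plants below are chosen arbitrarily):
sys1 = nd2sys(1,[1 1]);        % nominal plant 1/(s+1)
sys2 = nd2sys(1,[1 1.2]);      % perturbed plant 1/(s+1.2)
g  = gap(sys1,sys2);           % gap metric, a number between 0 and 1
ng = nugap(sys1,sys2);         % nu gap metric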
Algorithm Tryphon Georgiou and Malcolm Smith wrote the gap program.
The computation of the gap amounts to solving 2-block H∞-problems, Georgiou,
1988. The particular method used here for solving the H∞-problems is based on
Green et al., 1990. The computation of the nugap uses the method of
Vinnicombe, 1993.
Reference Georgiou, T.T., On the computation of the gap metric, Systems Control Letters,
vol. 11, pp. 253–257, 1988.
Georgiou, T.T., and M. Smith, “Optimal robustness in the gap metric,” IEEE
Transactions on Automatic Control, vol. 35, pp. 673–686, 1990.
Green, M., K. Glover, D. Limebeer, and J.C. Doyle, “A J-spectral factorization
approach to H∞ control,” SIAM J. of Control and Opt., 28(6), pp. 1350–1371,
1990.
Qiu, L., and E.J. Davison, “Feedback stability under simultaneous gap metric
uncertainties in plant and controller,” Systems Control Letters, vol. 18–1, pp.
9–22, 1992.
Vinnicombe, G., “Measuring Robustness of Feedback Systems,” PhD
dissertation, Department of Engineering, University of Cambridge, 1993.
Zames, G., and El-Sakkary, “Unstable systems and feedback: The gap metric,”
Proceedings of the Allerton Conference, pp. 380–385, Oct., 1980.
genmu
Purpose Compute upper bounds for the mixed (real and complex) generalized
structured singular value (referred to as generalized mixed µ) of a VARYING/
CONSTANT matrix
                              1
    µ∆(M,C)  :=  -----------------------------------------------------------
                 min { σ̄(∆) : ∆ ∈ ∆ , rank [ I – ∆M ; C ] < n }
This quantity can be bounded above, using standard µ ideas. If there exists a
matrix Q such that
    µ∆(M + QC) < β
then µ∆(M,C) < β. Hence,
    µ∆(M,C)   ≤    min        µ∆(M + QC)
               Q ∈ C^(n×m)
Reference Packard, A., K. Zhou, P. Pandey, and G. Becker, “A collection of robust control
problems leading to LMI’s,” 30th IEEE Conference on Decision and Control, pp.
1245–1250, Brighton, UK, 1991.
getiv, sortiv, tackon
Description getiv returns the independent variable values of the VARYING matrix mat. If
the input matrix mat is a VARYING matrix, the independent variable is
returned as a column vector, indv, and the output err is set to 0. If mat is not
a VARYING matrix, then indv is set to empty, and err is set to 1.
sortiv will reorder the independent variable and associated VARYING matrix
to be monotonically increasing or decreasing. The optional sortflg is set to 0
(default) for monotonically increasing sorting or nonzero for monotonically
decreasing sorting. sortiv can be used in conjunction with tackon to mesh
together two different VARYING matrices. The optional third input argument,
nored, is set to 0 (default) which does not reduce the number of independent
variables even if there are repeated ones. Setting nored to a nonzero value
causes repeated independent variables to be collapsed down if their
corresponding matrices are the same. If they are not, an error message is
displayed and only the first independent variable and corresponding matrix is
kept. The output argument err, which is nominally 0, is set to 1 if an error
message is displayed. The optional fourth input argument, epp, is a vector used
for checking closeness of two variables. If two independent variables are within
epp(1), and the norm of the difference between the two matrices at these points
is within epp(2), sortiv collapses these two independent variable values down
to one. If the two independent variables are within epp(1), and the norm
condition is not satisfied, an error message is displayed and out is set to the
null matrix. When nored is nonzero, the default value for epp is [1e – 9;1e – 9].
tackon strings together two VARYING matrices placing mat1 on top of mat2.
mat1 and mat2 must have the same row and column dimensions.
Examples The frequency response VARYING matrix from the frsp example is a
two-input/two-output matrix containing 30 points. These independent
variables vary from 0.1 rad/sec to 100 rad/sec.
minfo(sysg)
varying:  30 pts  2 rows  2 cols
seeiv(sysg)
Typing getiv without any arguments outputs a brief description of its calling
sequence. All µ-Tools commands have this feature. The xtract command
selects the independent variables between 1 and 5 rad/sec and the getiv
command returns these independent variables from sysg and stores them in
indv.
getiv
usage: [indv,err] = getiv(mat)
[indv] = getiv(xtract(sysg,1,5));
indv'
ans =
1.0826 1.3738 1.7433 2.2122 2.8072
3.5622 4.5204
The sortiv command (with an optional second argument) resorts the
independent variable of the frequency response of sys in decreasing order.
syslg = sortiv(xtract(sysg,1,5),1);
seeiv(syslg)
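tackon and sortiv are often used together to mesh two VARYING matrices; a minimal sketch (the data values are chosen only for illustration):
v1 = vpck([1;2],[1;3]);            % 1 x 1 VARYING matrix with values at iv = 1 and 3
v2 = vpck([5;6],[2;4]);            % values at iv = 2 and 4
vall = sortiv(tackon(v1,v2));      % merged data with a monotonically increasing iv
seeiv(vall)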
h2norm, hinfnorm
Description h2norm calculates the 2-norm of a stable, strictly proper SYSTEM matrix. The
output is a scalar, whose value is the 2-norm of the system.
The output from hinfnorm is a 1 × 3 vector, out, which is made up (in order) of
a lower bound for ||sys||∞, an upper bound for ||sys||∞, and a frequency, ωo, at
which the lower bound is achieved.
The ||⋅||∞ norm calculation is an iterative process and requires a test to stop. The
variable tol specifies the tolerance used to calculate the ||sys||∞. The iteration
stops when
(the current upper bound) ≤ (1 + tol) × (the current lower bound).
The default value of tol is 0.001.
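For example (the plant below is chosen only for illustration):
sys = nd2sys(1,[1 0.2 1]);     % lightly damped, stable, strictly proper system
out = hinfnorm(sys)            % [lower bound, upper bound, frequency of lower bound]
nrm2 = h2norm(sys)             % scalar 2-norm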
Algorithm The H2 norm of a SYSTEM follows from the solution to the Lyapunov equation
    AX + XA′ + BB′ = 0,
with ||sys||2 = ( trace(CXC′) )^(1/2).
Calculation of the H∞ norm requires checking for jω axis eigenvalues of a
Hamiltonian matrix, Hα, which depends on a parameter α. If Hα has no jω axis
eigenvalues, then the ||⋅||∞ norm of the SYSTEM matrix is less than α. If the
matrix Hα does have jω axis eigenvalues, then these occur at the frequencies
where the transfer matrix has a singular value (not necessarily the maximum)
equal to α. By iterating, the value of the ||⋅||∞ norm can be obtained.
Reference Boyd, S., K. Balakrishnan and P. Kabamba, “A bisection method for computing
the H∞ norm of a transfer matrix and related problems,” Math Control Signals
and Systems, 2(3), pp. 207–219, 1989.
Boyd, S., and K. Balakrishnan, “A regularity result for the singular values of a
transfer matrix and a quadratically convergent algorithm for computing its H∞
norm,” Systems and Control Letters, vol. 15–1, 1990.
Bruinsma, O., and M. Steinbuch, “A fast algorithm to compute the H∞ norm of
a transfer function matrix,” Systems and Control Letters, vol. 14, pp. 287–293,
1990.
h2syn
Description h2syn calculates the H2 optimal controller k and the closed-loop system g for
the linear fractional interconnection structure p. nmeas and ncon are the
dimensions of the measurement outputs from p and the controller inputs to p.
The optional fourth argument, ricmethod, determines the method used to solve
the Riccati equations. The interconnection structure, p, is defined by
    p  =  [ A    B1   B2
            C1   D11  D12
            C2   D21  D22 ]
Input arguments:
p          SYSTEM interconnection structure matrix
nmeas      number of measurement outputs from p
ncon       number of controller inputs to p
ricmethd   method used to solve the Riccati equations (optional)
Output arguments:
k H2 optimal controller
g closed-loop system with optimal controller
norms norms of four different quantities, full information control cost
(FI), output estimation cost (OEF), disturbance feedforward cost
(DFL) and full control cost (FC), norms = [FI OEF DFL FC];
kfi full information/state feedback control law
gfi full information/state feedback closed-loop system
The equations and corresponding nomenclature are taken from the Doyle, et
al., 1989, reference. The full information cost is given by the equation
( trace(B1′ X2 B1) )^(1/2). The output estimation cost is given by
( trace(F2 Y2 F2′) )^(1/2), where F2 := –(B2′ X2 + D12′ C1). The disturbance
feedforward cost is ( trace(L2′ X2 L2) )^(1/2), where L2 is defined by
–(Y2 C2′ + B1 D21′), and the full control cost is given by
( trace(C1 Y2 C1′) )^(1/2). X2 and Y2 are the
solutions to the X and Y Riccati equations, respectively.
The H2 solution provides an upper bound on γ for use in the hinfsyn program.
Examples Design an H2 optimal controller for a system matrix, himat_icn, with two
sensor measurements (nmeas), two error signals, two actuator inputs (ncont),
and eight states. himat_icn differs from the SYSTEM interconnection
structure himat_ic by the fact that the D11 term of himat_ic is set to be zero.
The Schur decomposition method, ricmethd = 2, will be used for solution of the
Riccati equations. The program outputs the minimum eigenvalue of X2 and Y2
during the computation.
nmeas = 2;
ncont = 2;
ricmethd = 2;
minfo(himat_icn)
system:  8 states  6 outputs  6 inputs
[k,g] = h2syn(himat_icn,nmeas,ncont,ricmethd);
minimum eigenvalue of X2: 2.260000e-02
minimum eigenvalue of Y2: 2.251670e-02
The H∞ and H2 norm of the resulting closed-loop system g can be calculated via
the commands hinfnorm and h2norm.
hinfnorm(g)
norm between 2.787 and 2.79
achieved near 29.9
h2norm(g)
1.594e+01
Algorithm h2syn is an M-file in µ-Tools that uses the formulae described in the Doyle, et
al., 1989, reference for solution to the optimal H2 control design problem. A
Hamiltonian is formed and solved via a Riccati equation (ric_eig and
ric_schr). The D matrix associated with the input disturbances and output
errors is restricted to be zero.
hinffi
Syntax [k,g,gfin,ax,hamx] =
hinffi(p,ncon,gmin,gmax,tol,ricmethd,epr,epp)
Description hinffi calculates an H∞ full information controller that achieves the infinity
norm gfin for the interconnection structure p. The controller, k, stabilizes the
SYSTEM matrix p and is constant gain. The system p is partitioned
    p  =  [ A    B1   B2
            C1   D11  D12 ]
where B1 are the disturbance inputs, B2 are the control inputs, and C1 are the
errors to be kept small. B2 has the column size ncon. Within the hinffi
program, the SYSTEM matrix p is augmented with state and disturbance
measurements; i.e., the identity matrix with size equal to the number of states
of p and the identity matrix with size equal to the number of disturbances. Be
careful when closing the loop with the full information controller since the
extra measurements are only augmented inside the command hinffi. The
internal system used for control design is
    [ A    B1   B2
      C1   D11  D12
      I    0    0
      0    I    0  ]
You can select either the eigenvalue or Schur method to solve
the Riccati equations. The eigenvalue method is faster but can have numerical
problems, while the Schur method is slower but generally more reliable.
The algorithm employed requires tests to determine whether a solution exists
for a given γ value. epr is used as a measure of when the Hamiltonian matrix
has imaginary eigenvalues and epp is used to determine whether the Riccati
solution is positive semi-definite. The selection of epr and epp should be based
on your knowledge of the numerical conditioning of the interconnection
structure p. The conditions checked for the existence of a solution are
Input arguments:
p SYSTEM interconnection structure matrix
ncon number of controller outputs
gmin lower bound on γ
gmax upper bound on γ
tol relative difference between final γ values, iteration stopping
criteria
ricmethd 1 Eigenvalue decomposition with balancing.
–1 Eigenvalue decomposition with no balancing
2 Schur decomposition with balancing (default)
–2 Schur decomposition with no balancing
epr measure of when a real eigenvalue of the Hamiltonian matrix
is zero (default epr = 1e–10, optional)
epp positive definite determination of the X∞ solution (default
epp = 1e–6, optional)
Output arguments:
k H∞ full information controller
g closed-loop system with H∞ full information controller
gfin final γ achieved
ax Riccati solution as a VARYING matrix with independent
variable γ
hamx Hamiltonian matrix as a VARYING matrix with independent
variable γ
Note that the outputs ax and hamx may correspond to scaled or
balanced data. The following assumptions are made in the implementation of
the hinffi algorithm and must be satisfied.
(i) (A,B2) is stabilizable
(ii) D12 is full column rank
(iii)  [ A – jωI   B2
         C1        D12 ]   has full column rank for all ω.
Examples Given an interconnection structure sys with one control input, it is desired to
synthesize a full information controller. The upper bound on γ is 1.0 and the
lower bound is specified as 0.1. A tolerance of 0.02 is selected for the stopping
condition for the γ iteration and the Schur method is used to solve the Riccati
equations. The command hinffi outputs the display shown for each value of γ.
The final γ value achieved is 0.2547.
ncont = 1;       % number of control inputs
gmin = .1;       % minimum gamma value to be tested
gmax = 1;        % maximum gamma value to be tested
tol = .02;       % tolerance on the gamma stopping value
ricmethd = 2;    % Riccati solution via the Schur method
seesys(p)        % plant interconnection structure
p=
3 1 | 4
0 0 | 1
-------------
1 0 | 0
[k,g,gf,ax,hx] = hinffi(p,1,.1,1,.02,2);
Test bounds:  0.1000 <  gamma  <=  1.0000
Algorithm hinffi uses formulas similar to the ones described in the Glover and Doyle,
1988, paper for solution to the H∞ control design problem. See the hinfsyn
command for more information.
hinfsyn
Syntax [k,g,gfin,ax,ay,hamx,hamy] =
hinfsyn(p,nmeas,ncon,gmin,gmax,tol,ricmethd,epr,epp)
Description hinfsyn calculates an H∞ controller, which achieves the infinity norm gfin for
the interconnection structure p. The controller, k, stabilizes the SYSTEM
matrix p and has the same number of states as p. The SYSTEM p is partitioned
    p  =  [ A    B1   B2
            C1   D11  D12
            C2   D21  D22 ]
where B1 are the disturbance inputs, B2 are the control inputs, C1 are the errors
to be kept small, and C2 are the output measurements provided to the
controller. B2 has column size (ncon) and C2 has row size (nmeas).
The closed-loop system is returned in g. The program provides a γ iteration
using the bisection method. Given a high and low value of γ, gmax and gmin, the
bisection method is used to iterate on the value of γ in an effort to approach the
optimal H∞ control design. If the value of gmax is equal to gmin, only one γ value
is tested. The stopping criteria for the bisection algorithm requires the relative
difference between the last γ value that failed and the last γ value that passed
be less than tol. You can select either the eigenvalue or Schur method to solve the
Riccati equations. The eigenvalue method is faster but can have numerical
problems, while the Schur method is slower but generally more reliable.
The algorithm employed requires tests to determine whether a solution exists
for a given γ value. epr is used as a measure of when the Hamiltonian matrix
has imaginary eigenvalues and epp is used to determine whether the Riccati
solutions are positive semi-definite. The conditions checked for the existence of
a solution are:
• The H and J Hamiltonian matrices (which are formed from the state-space data
of P and the γ level) must have no imaginary-axis eigenvalues.
• The stabilizing Riccati solutions X∞ and Y∞ associated with the Hamiltonian
matrices must exist and be positive semi-definite.
• The spectral radius of X∞Y∞ must be less than or equal to γ².
The selection of epr and epp should be based on your knowledge of the
numerical conditioning of the interconnection structure p. The following
assumptions are made in the implementation of the hinfsyn algorithm and
must be satisfied.
(i) (A,B2) is stabilizable and (C2,A) detectable.
(ii) D12 and D21 have full rank.
(iii)  [ A – jωI   B2
         C1        D12 ]   has full column rank for all ω ∈ R.
(iv)   [ A – jωI   B1
         C2        D21 ]   has full row rank for all ω ∈ R.
Input arguments:
p          SYSTEM interconnection structure matrix
nmeas      number of measurements output to controller
ncon       number of control inputs
gmin       lower bound on γ
gmax       upper bound on γ
tol        relative difference between final γ values, iteration stopping criteria
ricmethd   1  Eigenvalue decomposition with balancing
           –1 Eigenvalue decomposition with no balancing
           2  Schur decomposition with balancing (default)
           –2 Schur decomposition with no balancing
epr        measure of when a real eigenvalue of the Hamiltonian matrix is zero
           (default epr = 1e–10, optional)
epp        positive definite determination of the X∞ and Y∞ solutions (default
           epp = 1e–6, optional)
Output arguments:
k H∞ (sub) optimal controller
g closed-loop system with H∞ controller
gfin final γ achieved
ax X∞ Riccati solution as a VARYING matrix with independent
variable γ
ay Y∞ Riccati solution as a VARYING matrix with independent
variable γ
hamx H∞ Hamiltonian matrix as a VARYING matrix with
independent variable γ
hamy J∞ Hamiltonian matrix as a VARYING matrix with
independent variable γ
Note that the outputs ax, ay, hamx, and hamy correspond to scaled or balanced
data.
The hinfsyn program displays several variables, which can be checked to
ensure that the above conditions are being satisfied. For each γ value being
tested, the minimum magnitude, real part of the eigenvalues of the X and Y
Hamiltonian matrices are displayed along with the minimum eigenvalue of X∞
and Y∞, which are the solutions to the X and Y Riccati equations, respectively.
The maximum eigenvalue of X∞Y∞, scaled by γ–2,is also displayed. A # sign is
placed to the right of the condition that failed in the printout.
8-79
hinfsyn
Examples This example is taken from the “HIMAT Robust Performance Design Example”
section in Chapter 7. himat_ic contains the open-loop interconnection
structure. Design an H∞ (sub)optimal controller for the SYSTEM matrix,
himat_ic, with two sensor measurements, two error signals, two actuator
inputs, two disturbances, and eight states. The range of γ is selected to be
between 1.0 and 10.0 with a tolerance, tol, on the relative closeness of the final
γ solution of 0.1. The Schur decomposition method, ric_schr, is used for
solution of the Riccati equations. The program outputs at each iteration the
current γ value being tested, and eigenvalue information about the H and J
Hamiltonian matrices and X∞ and Y∞ Riccati solutions. At the end of each
iteration a (p) denoting the tested γ value passed or an (f) denoting a failure is
displayed. Upon finishing, hinfsyn prints out the γ value achieved.
nmeas = 2;       % number of sensor measurements
ncon = 2;        % number of control inputs
gmin = 1;        % minimum gamma value to be tested
gmax = 10;       % maximum gamma value to be tested
tol = .1;        % tolerance on the gamma stopping value
ric = 2;         % Riccati equation solved via the Schur method
minfo(himat_ic)  % SYSTEM interconnection structure
system:  8 states  6 outputs  6 inputs
[k,g] = hinfsyn(himat_ic,nmeas,ncon,gmin,gmax,tol,ric);
Test bounds:  1.0000 <  gamma  <=  10.0000
Algorithm hinfsyn uses the formulae described in the Glover and Doyle, 1988, paper for
solution to the optimal H∞ control design problem. There are a number of
research issues that need to be addressed for the “best” solution of the Riccati
equations but only two of the standard methods are included.
hinfsyne
hinfsyne synthesizes an H∞ (sub)optimal controller that also minimizes the
entropy integral
               gfin²     ∞
    I  =  –  --------  ∫     ln det ( I – γ⁻² g(jω)′ g(jω) )  ·  ( s0² / (s0² + ω²) )  dω
                2π      –∞
Output arguments:
k H∞ (sub) optimal controller
g closed-loop system with H∞ controller
gfin final γ value achieved
ax X∞ Riccati solution as a VARYING matrix with independent
variable γ
ay Y∞ Riccati solution as a VARYING matrix with independent
variable γ
hamx H∞ Hamiltonian matrix as a VARYING matrix with
independent variable γ
hamy J∞ Hamiltonian matrix as a VARYING matrix with
independent variable γ
Note that the outputs ax, ay, hamx, and hamy correspond to scaled or balanced
data.
The hinfsyne program outputs several variables, which can be checked to
ensure that the above conditions are being met. For each γ value the minimum
magnitude of the real parts of the eigenvalues of the X Hamiltonian matrix is
displayed along with the minimum eigenvalue of X∞, which is the solution to
the X Riccati equation. A # sign is placed to the right of the condition that failed
in the printout. This additional information can aid you in the control design
process.
Algorithm hinfsyne uses formulas similar to the ones described in the Glover and
Doyle paper for solution to the H∞ control design problem. See the hinfsyn
command for more information.
See Also dhfsyn, hinfsyn, hinffi, hinfnorm, h2syn, h2norm, ric_eig, ric_schr,
sdhfsyn
indvcmp
Purpose Compare the independent variable data of two VARYING matrices
Description indvcmp compares the data for two VARYING matrices. If the two sets of
independent variables are within a specified tolerance of one another, then the
VARYING matrices are assumed to have identical independent variables, and
the VARYING matrices can be combined (i.e., added, subtracted, multiplied,
etc.). The results are displayed if an output argument is not provided.
Input arguments:
mat1, mat2   matrices to be compared
errcrit      1 × 2 optional matrix containing the relative error and
             absolute error bounds. The relative error is used to test the
             error in independent variables whose magnitude is greater
             than 1e-9, while the absolute error bound is used for smaller
             independent variable values. Default values are 1e-6 and
             1e-13, respectively.
Output arguments:
code=0 independent variable data is different
code=1 independent variable data is identical
code=2 different number of points
code=3 at least one matrix isn’t a VARYING matrix
Examples Compare two frequency response matrices: mat has its independent
variables at 0.01 and 0.1, and mat2 has its independent variables at 0.011 and
0.1. Given the default comparison criteria, the independent variable data are
deemed different. Changing the tolerance leads indvcmp to treat the
independent variables as the same.
see(mat)
2 rows 3 columns
indep variable 0.01
1 2 3
4 5 6
indep variable 0.1
7 8 9
10 11 12
see(mat2)
2 rows 3 columns
indep variable 0.011
10 20 30
40 50 60
indep variable 0.1
70 80 90
100 110 120
indvcmp(mat,mat2)
code =
0
Changing the relative and absolute error bounds in indvcmp leads these two
independent variables to be deemed the same.
indvcmp(mat,mat2,[1 1])
varying data is the same
madd, msub
Description madd (msub) allows the addition or subtraction of matrices, regardless of their
type, as long as their dimensions are compatible. CONSTANT, SYSTEM, and
VARYING matrices can be added to or subtracted from one another based on
the following table.
[Table: result type of madd/msub for each combination of CONSTANT, SYSTEM, and VARYING operands]
For compatibility, the number of rows and columns of mat1 must equal the
number of rows and columns of mat2. In the case of SYSTEM matrices, the
number of inputs and outputs of mat1 must equal the number of inputs and
outputs of mat2. The same is true for VARYING matrices and in addition, the
independent variables of the VARYING matrices must be identical. Up to nine
matrices of compatible dimension can be added or subtracted by including
them as input arguments.
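For instance, three compatible CONSTANT matrices can be summed in a single call (the values below are chosen only for illustration):
m1 = [1 2; 3 4];
m2 = [10 20; 30 40];
out = madd(m1,m2,m1);          % equals m1 + m2 + m1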
-10 | 3
-----|----
10 | 0
seesys(p1,'3.2g')
-2 | 3
----|----
1 | .1
Adding two SYSTEM matrices returns a SYSTEM matrix with the same
number of inputs and outputs as p and p1.
out = madd(p,p1);
seesys(out,'3.2g')
-10 0 | 3
0 -2 | 3
---------|------
10 1 | .1
minfo(out)
system: 2 states 1 outputs 1 inputs
-10 | 3
------|-----
10 | -10
-10 | 3
------|-----
10 | -10
Algorithm madd and msub call the MATLAB + and – commands consistent with the type of
matrices.
massign
Purpose Matrix assignment for VARYING and SYSTEM matrices
Examples In the first example a VARYING matrix with two independent variables is
formed with identical (and obvious) data for each matrix.
tl = [11,12,13,14;
21,22,23,24;
31,32,33,34;
41,42,43,44];
vmat = vpck([tl;tl],[0.1,0.2]);
Now make a 2 × 2 data matrix and insert it into the VARYING matrix.
Changing the order of the row and column indices has the effect of permuting
the result. This is identical to the constant matrix case.
ri = [1,3];
ci = [4,2];
data = [0.001, 0.002; 0.003, 0.004];
vmatl = massign(vmat,ri,ci,data);
see(vmatl)
4 rows 4 columns
iv = 0.1
iv = 0.2
omega = logspace(-1,2,100);
sys_g = frsp(sys,omega);
vplot('liv,lm',sys_g,'-')
[Plot: log magnitude of sys_g versus frequency (radians/sec)]
mfilter
Purpose Generate SYSTEM representations of a Bessel, Butterworth, Chebyshev or RC
filter
The dc gain of each filter (except even order Chebyshev) is set to unity. The
argument psbndr specifies the Chebyshev passband ripple (in dB). At the cutoff
frequency, the magnitude is -psbndr dB. For even order Chebyshev filters the
DC gain is also -psbndr dB.
The Bessel filters are calculated using the recursive polynomial formula. This
is poorly conditioned for high order filters (order > 8).
[Plot: log magnitude and phase (degrees) of the example filters versus frequency (radians/sec)]
minfo
Purpose Provide matrix information
Syntax minfo(matin)
[systype,rowdata,coldata,pointdata] = minfo(matin)
Description minfo returns information about the data type and size of matin. With no
output assignment, minfo returns text output to the screen. The information is
determined from the data structure as defined in the “The Data Structures”
section in Chapter 2.
With output arguments, minfo returns four arguments. The first argument,
systype, is a string variable that can take one of four values. The
interpretation of the three additional output arguments is based on the
variable systype.
systype == 'vary' matin is a VARYING matrix. pointdata tells how
many independent variable values there are,
rowdata is the row dimension of a matrix, and
coldata is the column dimension
systype == 'syst' matin is a SYSTEM matrix. pointdata is the
number of states, rowdata is the number of outputs,
and coldata is the number of inputs.
systype == 'cons' matin is a regular MATLAB matrix, pointdata is set
to NaN, rowdata is the number of rows, and coldata
is the number of columns.
systype == 'empt' matin is an empty MATLAB matrix; also pointdata,
rowdata, and coldata are set to empty.
Examples minfo identifies the type of matrix being manipulated. Compare the displays
for a CONSTANT, SYSTEM, and VARYING matrix.
a = rand(2,2);b = rand(2,3);c = rand(1,2);
minfo(a)
constant: 2 rows 2 cols
sys = pck(a,b,c);
minfo(sys)
system:  2 states  1 outputs  3 inputs
sys_g = frsp(sys,[.1 .5 .9 1.4]);
minfo(sys_g)
varying:  4 pts  1 rows  3 cols
4 pts between 0.1 and 1.4
sys = sysrand(2,3,1);
[mtype,mrows,mcols,mnum] = minfo(sys);
mtype
mtype =
syst
[mrows,mcols,mnum]
ans =
1 3 2
minv, vinv
Description minv calculates the inverse of the input matrix. For VARYING matrices, minv
returns a VARYING matrix with the inverse of each independent variable
matrix. For SYSTEM matrices, the inverse is defined as
    mat  =  [ A   B          out  =  (mat)⁻¹  =  [ A – BD⁻¹C    –BD⁻¹
              C   D ] ,                            D⁻¹C          D⁻¹  ]
vinv is the same command as minv, but works only on CONSTANT and
VARYING matrices.
-3.2e+01 | -3.0e+01
----------|-----------
1.0e+01 | 1.0e+01
sysig = frsp(sysi,omega);
see(sbs(sysg,sysig,minv(sysig)))
1 row 3 columns
iv =
1.3000 - 0.6000i 0.6341 + 0.2927i 1.3000 - 0.6000i
iv = 10
0.1577 - 0.2885i 1.4591 + 2.6690i 0.1577 - 0.2885i
mmult
Purpose Multiply CONSTANT, SYSTEM, and VARYING matrices
Description mmult allows the multiplication of matrices, mat1 and mat2 regardless of their
type, provided their dimensions are compatible. CONSTANT, SYSTEM and
VARYING matrices can be multiplied by one another based on the following
table.
[Table: result type of mmult for each combination of CONSTANT, SYSTEM, and VARYING operands]
For compatibility, the number of columns of mat1 must equal the number of
rows of mat2. In the case of SYSTEM matrices, the number of inputs of mat1
must equal the number of outputs of mat2. (An alternative term for the
multiplication of two SYSTEM matrices is cascade.) Similarly restrictions
apply for VARYING matrices. Up to nine matrices of compatible dimension can
be multiplied via the same command by including them as input arguments.
Pictorial Representation of Function
-1.0e+01 | 1.0e+00
----------|----------
1.0e+01 | 1.0e+00
minfo(p1)
system: 1 states 1 outputs 1 inputs
seesys(p2)
-3.0e+00 | 2.0e+00
----------|----------
4.0e+00 | 1.0e-01
minfo(p2)
system: 1 states 1 outputs 1 inputs
out = mmult(p1,p2)
seesys(out,'%5.2g')
-10 4 | 0.1
0 -3 | 2
---------|------
10 0 | 0
Algorithm mmult uses the MATLAB “ * ” command when the multiplication does not
involve two SYSTEM matrices. The equation for the multiplication of two
subsystems is given by
    sys1  =  [ A1   B1         sys2  =  [ A2   B2
               C1   D1 ] ,                C2   D2 ] ,
                               [ A1   B1C2   B1D2
    mmult(sys1,sys2)   =       [ 0    A2     B2
                               [ C1   D1C2   D1D2 ]
mprintf
Purpose Format output of matrix to screen
Description mprintf displays a matrix in formatted form. The optional 'format' specifies
the format exactly as in the MATLAB function sprintf. If no 'format' is
specified the default is '%.1e'. This routine is primarily for use in seesys and
does not work well for f format when the minimum field width is too small.
There is no input checking, so you can wreak havoc if you use mprintf
incorrectly. See sprintf for more details. The optional 'end of line
characters' is exactly what it says. The default is the newline C escape
sequence (\n). To get no newline at the end of each line use
mprintf(matin,'format',[]).
Examples The mprintf command displays any type of matrix. An example of its use for
SYSTEM and VARYING matrices follows.
mprintf(m)
mprintf(m,'%6.2f ')
17 9 -6 -13
1 -14 6 -13
18 -7 -4 10
3 12 -1 -0
msf, msfbatch
Description msf fits the block diagonal, frequency-dependent matrices DL(ω) and DR(ω)
(contained in the VARYING matrix dvec, with block structure implied by the
entries of blk) with rational, stable, minimum-phase D̂ L ( s ) and D̂ R ( s ) such
that
    max σ̄ [ DL(ω) M(jω) DR⁻¹(ω) ]   ≈   || D̂L M D̂R⁻¹ ||∞
     ω
msf returns the stable, minimum phase system matrices dsysL and dsysR.
Note Typically, there is no need to call msf directly. The standard use of msf
is a subroutine within µ-synthesis. The programs dkit and/or dkitgui are
fully functional µ-synthesis routines.
Input arguments:
Mg is the frequency response upon which the µ calculation was
performed.
bnds is the upper bound from the µ calculation.
dvec is a frequency varying vector containing the Ds (obtained from
mu).
sens is the sensitivity of the upper bound in the µ calculation on the
Ds. The sensitivity sens is a frequency domain weight calculated
by mu.
blk is the uncertainty block structure. This should correspond with
the block structure used in the µ calculation (which produced bnds,
dvec, and sens).
Output arguments:
dsysL is the left (i.e., output) block diagonal scaling matrix. It is a
SYSTEM matrix (it may be CONSTANT)
dsysR is the right (i.e., input) block diagonal scaling matrix. It is the
same type as dsysL
mscl, sclin, sclout
Examples mscl scales the three input, two output VARYING matrix, matin, by –2.5.
minfo(matin)
varying:  2 pts  2 rows  3 cols
see(matin)
2 rows 3 columns
indep variable 0.2
3 13 23
4 14 24
indep variable 0.3
4 14 24
5 15 25
matout = mscl(matin,-2.5);
see(matout)
2 rows 3 columns
indep variable 0.2
-7.5000   -32.5000   -57.5000
-10.0000  -35.0000   -60.0000
indep variable 0.3
-10.0000  -35.0000   -60.0000
-12.5000  -37.5000   -62.5000
Use the sclin command to scale the first input of a SYSTEM matrix by –3.
sys = pck(ones(3,3),2*ones(3,2),3*ones(1,3),4*ones(1,2));
seesys(sys)
sysout = sclin(sys,1,-3);
seesys(sysout)
The sclout command can be used to scale the first and third outputs of a
SYSTEM matrix by the first-order transfer function 10/(s + 10).
sys = sysrand(2,3,1);
seesys(sys,'%11.2e')
matin = vpck([ones(3,2);2*ones(3,2);3*ones(3,2)],[1;2;3]);
seesys(matin)
3 rows 2 columns
iv = 1
1.0e+00  1.0e+00
1.0e+00  1.0e+00
1.0e+00  1.0e+00
iv = 2
2.0e+00  2.0e+00
2.0e+00  2.0e+00
2.0e+00  2.0e+00
iv = 3
3.0e+00  3.0e+00
3.0e+00  3.0e+00
3.0e+00  3.0e+00
fac = vpck([1;2;3],[1;2;3]);
sysout = sclout(matin,1,fac);
seesys(sysout)
3 rows 2 columns
iv = 1
1.0e+00  1.0e+00
1.0e+00  1.0e+00
1.0e+00  1.0e+00
iv = 2
4.0e+00  4.0e+00
2.0e+00  2.0e+00
2.0e+00  2.0e+00
iv = 3
9.0e+00  9.0e+00
3.0e+00  3.0e+00
3.0e+00  3.0e+00
mu, muunwrap, randel, unwrapd, unwrapp
The default value of options is 'lu', meaning that a lower bound will be
computed using the power method, Young and Doyle 1990 and Packard et al.
1988, and an upper bound will be computed, using the balanced/AMI
technique, Young et al., 1992, for computing the upper bound from Fan et al.,
1991.
Output arguments:
bnds A 1 × 2 vector. If matin is VARYING, so is bnds, whereas if
matin is a CONSTANT matrix, then bnds is CONSTANT.
The first column of bnds contains an upper bound to mixed
µ of matin, and the second column contains a lower bound
to mixed µ.
dvec and gvec Row vectors which contain the D and G scaling matrices
that have produced the upper bound in bnds. dvec and gvec
are the same data type as bnds and are stored as vectors to
save memory. They can be unwrapped into the appropriate
D and G matrices by using the command, muunwrap:
[dl,dr,gl,gm,gr] = muunwrap(dvec,gvec,blk);
The upper bound in bnds for a matrix M is a number β > 0 such that there are
scaling matrices Dl, Dr, Gl, Gm, Gr (see Young et al., 1992, for details) satisfying
    σ̄ (  (I + Gl²)^(–1/4)  (  (Dl M Dr⁻¹) / β  –  jGm  )  (I + Gr²)^(–1/4)  )   ≤   1
If the block structure contains only complex blocks, the G scaling matrices are zero and
this reduces to the standard complex µ upper bound σ̄ ( Dl M Dr⁻¹ ) ≤ β.
pert = unwrapp(pvec,blk);
Examples Suppose sys is a system matrix with four inputs and four outputs, and that it
is stable. sys_g is a frequency response of sys.
% ∆ is 4 1 × 1 perturbation blocks
blk = ones(4,2);
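The µ calculation on this frequency response then takes the standard four-output calling form (as used elsewhere in this chapter):
[bnds,dvec,sens,pvec] = mu(sys_g,blk);   % bounds, D data, sensitivity, perturbation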
% Form M∆
mdel = mmult(sys_g,actpert);
Looking at the same frequency response sys_g with a mixed real/complex block
structure.
% ∆ is 4 1 × 1 perturbation blocks
% with the first 2 real, and the last 2 complex
blk = [-1 0; -1 0; 1 0; 1 0];
% Form M∆
mdel = mmult(sys_g,pert);
Algorithm Peter Young and Matt Newlin helped write the mu program and supporting
routines.
The lower-bound power algorithm is from Young and Doyle, 1990, and Packard
et al. 1988.
The upper-bound is an implementation of the bound from Fan et al., 1991, and
is described in detail in Young et al., 1992. In the upper bound computation, the
matrix is first balanced using either a variation of Osborne’s method (Osborne,
1960) generalized to handle repeated scalar and full blocks, or a Perron
approach. This generates the standard upper bound for the associated complex
µ problem. The Perron eigenvector method is based on an idea of Safonov,
(Safonov, 1982). It gives the exact computation of µ for positive matrices with
scalar blocks, but is comparable to Osborne on general matrices. Both the
Perron and Osborne methods have been modified to handle repeated scalar and
full blocks. Perron is faster for small matrices but has a growth rate of n³,
compared with less than n² for Osborne. This is partly due to the MATLAB
implementation, which greatly favors Perron. The default is to use Perron for
simple block structures and Osborne for more complicated block structures. A
sequence of improvements to the upper bound is then made based on various
equivalent forms of the upper bound. A number of descent techniques are used
which exploit the structure of the problem, concluding with general purpose
AMI optimization (Boyd et al., 1993) to obtain the final answer.
musynfit, musynflp, muftbtch
Purpose Interactive D-scaling rational fit routines used in old µ-synthesis routines
Note These routines are included only for backwards compatibility with
versions 1.0 and 2.0. They will not be supported in future versions. They
should not be used by new users of the toolbox. All new routines should be
based on the new routine, msf, which is described on page ???.
Description musynfit fits the magnitude curve obtained by multiplying the old D frequency
response (from pre_dsysl) with the dvec data. musynfit returns stable,
minimum phase system matrices dsysL and dsysR, which can be absorbed into
the original interconnection structure. Once absorbed, a H∞ design is
performed with hinfsyn completing another D–K iteration of µ-synthesis.
For the first µ-synthesis iteration, set the variable pre_dsysl to the string
'first'. In subsequent iterations, pre_dsysl should be the previous (left)
rational D-scaling system matrix, dsysL. Essentially, the element-by-element
magnitudes of the matrices
mmult(unwrapd(dvec,blk),frsp(pre_dsysL,getiv(dvec))), and
frsp(dsysL,getiv(dvec)) are equal.
The (optional) variable clpg is the VARYING matrix that produced the dvec,
sens, and upbd data output from µ. The fitting procedure is interactive
(musynfit or musynflp), and fits (in magnitude) these scalings with rational,
stable transfer function matrices, D̂ ( s ) . After fitting the dvec data, plots of
    σ̄ ( Df(jω) · clpg(jω) · Df⁻¹(jω) )
and
    σ̄ ( D̂(jω) · clpg(jω) · D̂⁻¹(jω) )
are shown in the lower graph window for comparison. At this point, you have
the option of refitting the D data. If clpg and upbd are not provided, the default
is to plot the sens variable in the lower graph.
Note You are strongly discouraged from calling musynfit and musynflp
directly and are encouraged to use dkit or dkitgui to perform µ-synthesis
calculations.
Input arguments:
pre_dsysl is set to the character string 'first' for the first iteration.
As the iteration proceeds, it should be the previous dsysL.
sens is the sensitivity of the upper bound in the µ calculation on
the Ds. The sensitivity sens is a frequency domain weight,
which is obtained from mu.
dvec is a frequency varying row vector containing the Ds (from mu).
blk is the block structure, same block structure used in mu.
nmeas is the number of measurements in control problem.
ncntrl is the number of controls in control problem.
clpg is the frequency response upon which the calculation was
performed (optional).
upbd is the upper bound from the µ calculation (optional).
wt is a weight used to influence the frequency range in which the
data is to be fit more accurately (optional).
dim is the highest order fit to be used (only used in muftbtch).
Output arguments:
dsysL is the output (left) block diagonal, SYSTEM scaling.
dsysR is the input (right) block diagonal, SYSTEM scaling (needs to
be inverted before being absorbed into the interconnection
structure).
musynflp has the same inputs and outputs and user interaction as
musynfit but uses a linear programming routine to do the
fitting. muftbtch is a batch version of musynflp that has no
user interaction. The extra argument dim is a required
argument of parameters for the linear program. dim has the
form [hmax htol nmin nmax] where
• hmax is a measure of the allowable error in the fit.
• htol is a measure of the accuracy with which the optimization is carried out.
• nmin and nmax are the minimum and maximum orders considered in the problem.
For more detail about the role of hmax and htol, see the reference pages for
fitmaglp and magfit. Reasonable choices are hmax = .26 and htol = .1.
The musynflp and musynfit commands provide the option of fitting the
frequency varying D-scale data by hand using the µ-Tools drawmag command.
You can invoke this option with the string ’drawmag’ in response to the prompt
ENTER ORDER OF CURVE FIT or 'drawmag'
The mouse is used in the plot window to identify the data to be fit with a stable,
minimum-phase system. See the drawmag command for more information.
Examples musynfit is used within a D–K iteration (µ-synthesis) to fit the D-scales, which
are output from the mu command. The first step in the D–K iteration is to design
an H∞ control law. The closed-loop system is analyzed with mu based on the
block structure blk defined. The optimal D-scalings output from mu, which are
real coefficients, are fit with real, rational, minimum-phase, stable transfer
functions via musynfit. These fitted D-scales are wrapped back around the
original interconnection structure P. After absorbing the D-scales, another D–K
iteration is performed, starting with the design of an H∞ control law for the
modified plant. This process usually continues until the value of µ doesn’t
change significantly between control design iterations.
This example is taken from the “HIMAT Robust Performance Design Example”
section in Chapter 7. himat_ic contains the open-loop interconnection
structure. It has one multiplicative input perturbation, which is two by two,
two error signals, and two external disturbances. There are two
measurements, and two control inputs to the system. The block structure for
the µ-analysis problem is given by blk=[2 2; 2 2].
The first step in a D–K iteration is to design an H∞ controller and analyze the
closed loop system with µ.
mkhic
omega = logspace(0,4,40);
blk = [2 2; 2 2];
[k1,g1,gf1] = hinfsyn(himat_ic,2,2,0.8,6,0.05,2);
g1_g = frsp(g1,omega);
[bnds1,dvec1,sens1,rp1] = mu(g1_g,blk);
The D-scalings output from the µ-analysis problem need to be fit with real,
rational, stable, minimum-phase transfer functions. This is done with
musynfit. The first D-scale is fit with both a first and third-order transfer
function, with the third order transfer function selected.
[dsysL1,dsysR1] = musynfit('first',dvec1,sens1,blk,2,2);
FITTING D SCALING #1 of 1
[Plot: D-scale magnitude data and old fit (top); wt for fit (bottom)]
NOTE APPROXIMATE ORDER NECESSARY FOR FIT.....
[Plot: 1) data 2) newfit 3) oldfit]
ENTER NEW ORDER, 'drawmag', or NEGATIVE NUMBER TO STOP
[Plot: 1) data 2) newfit 3) oldfit]
ENTER NEW ORDER, 'drawmag', or NEGATIVE NUMBER TO STOP
Now the fitted D-scales are absorbed into the interconnection structure,
himat_ic, to generate himat_ic2.
mu_ic1 = mmult(dsysL1,himat_ic,minv(dsysR1));
The new D-scales are fit again using the previous D-scale information.
[dsysL2,dsysR2] = musynfit(dsysL1,dvec2,sens2,blk,2,2);
The graphs displayed with musynfit for this iteration are not included. Wrap
the new fitted D-scales around the original plant interconnection structure and
start D–K iteration again.
mu_ic2 = mmult(dsysL2,himat_ic,minv(dsysR2));
[k3,g3] = hinfsyn(mu_ic2,2,2,.9,1.3,.05,2);
musynfit can be called as before, with the frequency response of the closed-loop
system analyzed using mu, g1_g, and the mu upper bound, sel(bnds1,1,1), also
passed as arguments. The first D-scale is fit with both a first- and third-order
transfer function and the first-order transfer function is selected. As you can see
from the scaled upper bound plots (the lower graph), the first-order fit does a
better job minimizing the scaled upper bound.
[dsysL1,dsysR1] = ...
musynfit('first',dvec1,sens1,blk,2,2,g1_g,sel(bnds1,1,1));
FITTING D SCALING #1 of 1
[Plot: D-scale magnitude data and old fit (top); wt for fit (bottom)]
NOTE APPROXIMATE ORDER NECESSARY FOR FIT.....
[Plot: 1) mag data 2) newfit 3) previous D-K (top); 1) mu upper bnd 2) upper bnd with rational fit (bottom)]
[Plot: scaled transfer function, optimal and rational upper bound - 1) mag data 2) newfit 3) previous D-K (top); 1) mu upper bnd 2) upper bnd with rational fit (bottom)]
ENTER ORDER OF CURVE FIT or 'drawmag' -1
As before, you would absorb the fitted D-scales into the interconnection
structure, himat_ic, to generate himat_ic2.
Algorithm A frequency response is done on the previous rational D-scaling matrix. This is
multiplied by the current data in dvec, to produce the frequency varying
scaling that needs to be fit. The fit is only in magnitude, and the freedom in the
phase allows the rational function to be defined as stable, and minimum phase.
musynfit calls fitsys, which calls fitmag, flatten, and genphase. The curve
fitting is done by the fitsys command.
musynflp is an alternative program that uses linear programming to do the fit.
musynflp fits the data very well within the frequency response window at the
expense of perhaps large variations outside the data window. This may lead to
problems in D–K iteration. muftbtch is a batch version of musynflp.
Reference Doyle, J.C., K. Lenz, and A.K. Packard, “Design examples using µ-synthesis:
Space shuttle lateral axis FCS during reentry,” NATO ASI Series, Modelling,
Robustness and Sensitivity Reduction in Control Systems, vol. F34 R.F. Curtin,
Editor, Springer-Verlag, Berlin-Heidelberg, 1987.
See Also drawmag, fitmag, fitmaglp, fitsys, flatten, genphase, invfreqs, magfit, msf
ncfsyn, cf2sys, emargin
    1 / b(W2PW1, K∞)  :=  || [ I ; K∞ ] ( I – W2PW1K∞ )⁻¹ [ W2PW1   I ] ||∞   ≤   1/ε
which will also give robust stability of the perturbed weighted plant
    ( N + ∆1 )( M + ∆2 )⁻¹     for     || [ ∆1 ; ∆2 ] ||∞  <  b(W2PW1, K∞)
loop shape in frequencies where the gain of W2PW1 is either high or low, and
will guarantee satisfactory stability margins in the frequency region of gain
cross-over. In the regulator set-up, the final controller to be implemented is
W1K∞W2.
When the option 'ref' is specified, the controller includes an extra set of
reference inputs as proposed in Vinnicombe, 1993, and should be implemented
Output arguments
sysk H∞ loopshaping controller
emax Stability margin as an indication of robustness to unstructured
perturbations. emax is always less than 1 and values of emax
greater than 0.3 generally indicate good robustness margins.
sysobs H∞ loopshaping observer controller. This variable is created only
if factor>1 and opt = 'ref'
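A minimal design sketch is shown below; it assumes the calling form ncfsyn(sysgw,factor) and that the plant sysp and the loop-shaping weights w1 and w2 already exist.
sysgw = mmult(w2,sysp,w1);        % weighted plant W2*P*W1
[sysk,emax] = ncfsyn(sysgw,1.1);  % suboptimal design, 10% above the optimal margin
emax                              % values above 0.3 indicate good robustness margins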
Chapter 3. The default value for the tolerance tol supplied to the hinfnorm
computation is 0.001.
nd2sys, zp2sys
                 4s² + 5s + 1
    sys  =  ----------------------------
            7s⁴ + 3s³ + 6s² + 2s + 8
This system can be built with sys = nd2sys([4 5 1],[7 3 6 2 8]); the resulting
state-space data are
A matrix
   -0.4286   -0.8571   -0.2857   -1.1429
    1.0000    0         0         0
    0         1.0000    0         0
    0         0         1.0000    0
B matrix
1
0
0
0
C matrix
0 0.5714 0.7143 0.1429
D matrix
0
Algorithm nd2sys and zp2sys realize the transfer functions using the MATLAB
commands tf2ss and zp2ss.
negangle
Purpose Calculate the angle of the elements of a matrix; the result is always in the range [–2π, 0]
Syntax y = negangle(x)
Description negangle returns the phase angles, in radians, of a matrix with complex valued
elements. The returned value is always in the range 0 to –2π radians.
Examples a = [1+i,1-i];
see(angle(a))
0.7854   -0.7854
see(negangle(a))
-5.4978   -0.7854
pck, pss2sys, sys2pss, unpck
Description pss2sys translates a regular MATLAB matrix that is in packed form into a
SYSTEM matrix. mat contains [A B; C D] which describes the individual
components of a SYSTEM matrix with n being the number of states (size of A).
sys2pss returns the CONSTANT matrix, mat = [A B; C D], from the input
SYSTEM matrix sys.
pck takes consistent state-space data and forms a SYSTEM matrix with the
data structure defined. Consistent state-space data requires a square A matrix,
a B matrix with the same number of rows as A, a C matrix with the same
number of columns as A, and a D matrix with same number of columns of B and
rows of C. If the fourth input argument is omitted, then the D matrix is assumed
to be identically zero, of appropriate dimensions. unpck is the inverse operation
of pck, taking a SYSTEM matrix sys and converting it to A, B, C and D
CONSTANT matrices.
Note that based on the data structure definition, a -Inf in the bottom right
corner of a matrix denotes a SYSTEM matrix, with the top right corner
element of the matrix containing the number of states.
Examples Create a SYSTEM matrix from MATLAB CONSTANT matrices via pss2sys.
Define matrices A, B, C, and D as follows.
A = [-1 1; -1 -3]; B = [2 2; 2 2]; C = [3 3]; D = [4 4];
mat = [A B; C D];
sys = pss2sys(mat,2);
minfo(mat)
3 rows  4 cols: regular MATLAB matrix
mat =
    -1     1     2     2
    -1    -3     2     2
     3     3     4     4
minfo(sys)
system:  2 states  1 outputs  2 inputs
seesys(sys)
The same SYSTEM matrix can be constructed using the pck command.
sys = pck(A,B,C,D);
minfo(sys)
system: 2 states1 outputs2 inputs
seesys(sys)
pkvnorm, vnorm
Description pkvnorm sweeps through the independent variable, calculating the norm of
each matrix as specified by the input argument p, following the convention
from MATLAB’s norm command. The default for p is the largest singular value
of matin. The three output arguments all pertain to the peak and its location:
peak value, peak, the independent variable’s value, indv, and the independent
variable’s index, index.
vnorm is a VARYING matrix version of MATLAB’s norm command. The
operation of the norm command is identical to vnorm, except that vnorm also
works on CONSTANT and VARYING matrices, which produces a CONSTANT
or VARYING output. vnorm returns the matrix out with its norm at each
independent variable value.
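A minimal sketch (the matrix values below are chosen only for illustration):
matin = vpck([1 0;0 0;3 4;0 0],[0.2;0.6]);   % 2 x 2 VARYING matrix at iv = 0.2 and 0.6
out = vnorm(matin);                          % VARYING matrix of largest singular values
[peak,indv,index] = pkvnorm(matin);          % peak = 5 at indv = 0.6 (index = 2)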
1 row 1 column
indep variable 0.2
1
indep variable 0.6
6.4510
[peak,indv,index] = pkvnorm(matin);
peak
peak =
6.4510
indv
indv =
0.6000
index
index =
2
ric_eig
Purpose Solution of a Riccati equation via eigenvalue decomposition
Description ric_eig (along with a call to x=x2/x1) solves the Riccati equation,
A′X + XA + XRX – Q = 0
with the constraint that the matrix A + RX has all of its eigenvalues in open
left-half plane. The data matrices A, R and Q come from the input Hamiltonian
matrix, ham, in the form
    ham  =  [ A    R
              Q   –A′ ]
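A minimal scalar sketch (the data are chosen only for illustration; the two-output call below uses just the invariant-subspace blocks described above):
A = -1; R = -1; Q = -2;            % scalar Riccati data
ham = [A R; Q -A'];
[x1,x2] = ric_eig(ham);            % basis for the stable invariant subspace
X = x2/x1                          % X = sqrt(3) - 1, and A + R*X is stable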
Algorithm Under the assumption that the Hamiltonian matrix has a full set of
eigenvectors, the stable-invariant subspace is spanned by the eigenvectors
associated with the stable eigenvalues. Hence, an eigenvalue-eigenvector
decomposition can obtain the stable invariant subspace of the Hamiltonian
matrix, ham. Assuming there are no jω axis eigenvalues, and that there is a full
set of eigenvectors, the two components, x1 and x2, can be generated by
choosing the eigenvectors associated with the stable eigenvalues. The ric_eig
subroutine operates on the assumption that the Jordan form of the
Hamiltonian is diagonal, and returns the stable invariant subspace, as
spanned by the eigenvectors, in the two block form described above.
ric_schr
Purpose Solve a Riccati equation via Schur decomposition
Description ric_schr (along with a call to x=x2/x1) solves the Riccati equation,
A′X + XA + XRX – Q = 0
such that A + RX is stable. A real Schur decomposition can obtain the stable
invariant subspace of the Hamiltonian matrix, ham. The data matrices A, R,
and Q come from the input Hamiltonian matrix in the form
    ham  =  [ A    R
              Q   –A′ ]
Algorithm ric_schr calls csord to produce an ordered complex Schur form, which is
converted to a real Schur form, and yields a stable, invariant subspace of the
Hamiltonian. The csord command orders the eigenvalues with negative real
parts into the top half of the matrix and those with positive real parts on
the bottom, and returns the stable solution. The input matrix is assumed to be
a Hamiltonian matrix of size 2n with n stable eigenvalues and n unstable
eigenvalues. The minimum real part of the eigenvalues is output to reig_min.
epp is an optional argument and its default value is 1e-10.
rifd
Purpose Display the real, imaginary, frequency and damping ratios of a CONSTANT
input vector
Syntax rifd(vec)
Description rifd displays the real, imaginary, frequency, and damping ratios of a
CONSTANT input vector. The ith frequency is given by the
1
2 2 ---
( real(sys(i)) + imag(sys(i) ) 2
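For example, for a pair of complex values (chosen only for illustration):
vec = [-1+2i; -1-2i];
rifd(vec)     % real part -1, imaginary part +/-2, frequency sqrt(5), damping 1/sqrt(5)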
samhld
Purpose Create a sample-hold approximation of a continuous system
Description samhld applies a sample-hold to the input of the continuous-time systems sys,
and samples the output, to produce a discrete-time system, discout. The
sampling time is the same at the input and output, and is specified by T.
Examples Construct the system sys via the nd2sys command and verify that all of its
poles are in the closed left-half plane. Perform a sample hold of the system for
a 200 Hz sample rate. All the poles for the discretized system, discout, are
within the unit disk.
sys = nd2sys([1 2 4 5],[2 7 9 2]);
spoles(sys)
-1.6114 + 1.0049i
-1.6114 - 1.0049i
-0.2773
discout = samhld(sys,1/200);
seesys(discout,'%8.4f')
spoles(discout)
0.9920 + 0.0050i
0.9920 - 0.0050i
0.9986
Algorithm Let the continuous-time system sys have the state-space representation
    sys  =  [ A   B
              C   D ]
If u(t) is held constant over the interval [kT,(k + 1)T], then over that interval,
the state evolution is governed by the differential equation
    d/dt [ x ]   =   [ A        B      ] [ x ] ,     x(kT) = xk ,   u(kT) = uk
         [ u ]       [ 0nu×n    0nu×nu ] [ u ]
which captures the behavior of the continuous-time system, over one sample
period, while the input u(t) is held constant.
Let
    Ã  :=  [ A        B
             0nu×n    0nu×nu ]
Define W ∈ R^((n+nu)×(n+nu)) as W := e^(ÃT). Then W appears as
    W  =  [ W11   W12
            0     I   ]
Define Adisc := W11 and Bdisc := W12, and define the discretized system as
    discout  =  [ Adisc   Bdisc
                  C       D     ]
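The discretization above amounts to a single matrix exponential; a minimal sketch of the equivalent computation (sys and the sample time T are assumed to exist) is:
[a,b,c,d] = unpck(sys);                 % continuous-time state-space data
n = size(a,1); nu = size(b,2);
Atil = [a b; zeros(nu,n+nu)];           % augmented matrix
W = expm(Atil*T);                       % W = e^(Atil*T) for sample time T
Adisc = W(1:n,1:n);
Bdisc = W(1:n,n+1:n+nu);
discout = pck(Adisc,Bdisc,c,d);         % sample-hold approximation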
scliv
Purpose Scale the independent variable values of a VARYING matrix with an affine
transformation
Description scliv scales the independent variable of a VARYING matrix in the following
manner. Let indvi and newindvi denote the independent variable’s ith value,
before and after applying scliv. Then, for each i, they are related as
newindvi = (factor × indvi) + offset
Examples Scale the independent value of vin by a factor of 3 and offset it from its original
value by 0.5.
seeiv(vin)
1.000e+00  2.000e+00  3.000e+00  4.000e+00  5.000e+00
vout = scliv(vin,3,0.5)
seeiv(vout)
3.500e+00  6.500e+00  9.500e+00  1.250e+01  1.550e+01
sdhfnorm
Purpose sdhfnorm calculates the induced norm of a sampled-data system
Syntax [gaml,gamu] = sdhfnorm(p,k,h,delay,tol)
    p  =  [ A    B1   B2
            C1   0    0
            C2   0    0  ]
where the continuous-time disturbance inputs enter through B1, the outputs
from the controller are held constant between sampling instants and enter
through B2, the continuous-time errors to be kept small correspond to the C1
partition, and the output measurements that are sampled by the controller
correspond to the C2 partition. B2 has column size (ncon) and C2 has row size
(nmeas). Note that the D matrix is assumed to be zero.
sdhfnorm calculates the maximum gain from the L2 norm of the disturbance
inputs to the L2 norm of the error outputs.
Input arguments:
p SYSTEM interconnection structure matrix, (continuous-time)
k discrete-time controller
h sampling period
delay number of samples computational delay (integer ≥ 0, default = 0)
tol required relative accuracy
Output arguments:
gaml lower bound on the norm
gamu upper bound on the norm
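A minimal usage sketch (p and k are assumed to exist, with p a continuous-time interconnection and k a discrete-time controller):
h = 0.05;                               % sampling period
[gaml,gamu] = sdhfnorm(p,k,h,0,0.001);  % no computational delay, 0.1% relative accuracy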
Algorithm sdhfnorm uses variations of the formulae described in the Bamieh and Pearson
paper to obtain an equivalent discrete-time system. (These variations are done
to improve the numerical conditioning of the algorithms.) A preliminary step is
to determine whether the norm of the continuous-time system over one
sampling period without control is less than the given γ-value. This requires a
search and is, computationally, a relatively expensive step.
Reference Bamieh, B.A., and J.B. Pearson, “A General Framework for Linear Periodic
Systems with Applications to Sampled-Data Control,” IEEE Transactions on
Automatic Control, vol. AC–37, pp. 418–435, 1992.
See Also dhfsyn, hinfsyne, hinffi, hinfnorm, hinfsyn, h2syn, h2norm, ric_eig,
ric_schr
sdhfsyn
Description sdhfsyn synthesizes a discrete-time controller for the continuous-time SYSTEM
interconnection matrix
    p  =  [ A    B1   B2
            C1   0    0
            C2   0    0  ]
where the continuous-time disturbance inputs enter through B1, the outputs
from the controller are held constant between sampling instants and enter
through B2, the continuous-time errors to be kept small correspond to the C1
partition, and the output measurements that are sampled by the controller
correspond to the C2 partition. B2 has column size (ncon) and C2 has row size
(nmeas). Note that the D matrix is assumed to be zero.
sdhfsyn synthesizes a discrete-time controller to achieve a given norm (if
possible) or find the minimum possible norm to within some tolerance.
sdhfsyn provides a γ iteration using the bisection method. Given a high and low
value of γ, gmax and gmin, the bisection method is used to iterate on the value
of γ in an effort to approach the optimal H∞ control design. If gmax = gmin, only
one γ value is tested. The stopping criterion for the bisection algorithm requires
that the relative difference between the last γ value that failed and the last γ value
that passed be less than tol. You can select either the eigenvalue or Schur
method for solution of the Riccati equations with and without balancing. The
eigenvalue method is faster but can have numerical problems, while the Schur
method is slower but generally more reliable.
The algorithm employed calculates an equivalent purely discrete-time problem
for each value of γ and then calls dhfsyn with γ = 1. The screen printing is then
derived from the tests performed by dhfsyn.
8-145
sdhfsyn
Input arguments
p SYSTEM interconnection structure matrix
nmeas number of measurements output to controller
ncon number of control inputs
gmin lower bound on γ
gmax upper bound on γ
tol relative difference between final γ values
delay number of samples computational delay (default = 0)
h time between samples
ricmethod 1 Eigenvalue decomposition with balancing
–1 Eigenvalue decomposition with no balancing
2 Schur decomposition with balancing (default)
–2 Schur decomposition with no balancing
epr measure of when a real part of an eigenvalue of the
Hamiltonian matrix is zero (default epr = 1e–10)
epp positive definite determination of the X∞ and Y∞ solution
(default epp = 1e–6)
Output arguments
k H∞ (sub) optimal controller
gfin final γ value achieved
You might design a first controller using the dhfsyn function on the SYSTEM
samhld(p,h), and then use sdhfnorm to determine an upper bound gmax with which
to start this sampled-data control design iteration.
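That workflow might look like the following sketch. Here kd denotes an initial
discrete-time design (for example from dhfsyn applied to samhld(p,h)), and the
argument order in the sdhfsyn call is an assumption taken from the table above
(delay before h); consult the function's Syntax line for the authoritative order.

[gaml,gamu] = sdhfnorm(p,kd,h);     % sampled-data norm achieved by the initial design
gmin = 0;                           % lower end of the gamma search
gmax = gamu;                        % start the iteration from the upper bound
tol  = 0.01;
[k,gfin] = sdhfsyn(p,nmeas,ncon,gmin,gmax,tol,0,h);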
8-146
sdhfsyn
Algorithm sdhfsyn uses variations of the formulae described in the Bamieh and Pearson
paper to obtain an equivalent discrete-time system. (These variations are done
to improve the numerical conditioning of the algorithms.) A preliminary step is
to determine whether the norm of the continuous-time system over one
sampling period without control is less than the given γ-value. This requires a
search and is, computationally, a relatively expensive step.
Reference Bamieh, B.A., and J.B. Pearson, “A General Framework for Linear Periodic
Systems with Applications to Sampled-Data Control,” IEEE Transactions on
Automatic Control, vol. AC–37, pp. 418–435, 1992.
See Also dhfsyn, hinfsyne, hinffi, hinfnorm, hinfsyn, h2syn, h2norm, ric_eig,
ric_schr
8-147
see, seeiv
Syntax see(mat,iv_low,iv_high)
see(matin)
seeiv(mat)
Description see displays the A, B, C, and D matrices of matin for a SYSTEM matrix or the
independent variable and the matrix at that variable if matin is a VARYING
matrix. iv_low and iv_high are the optional range of the independent
variables to be displayed. see displays the matrix itself if the input is
CONSTANT.
seeiv displays only the independent variable of the input VARYING matrix
mat. An error message is displayed if the input matrix is not a VARYING
matrix.
Examples The see command displays any type of matrix. An example of its use for
SYSTEM and VARYING matrices follows.
see(sys)
A matrix
1 1
1 1
press any key to move to B matrix
B matrix
2 2
2 2
press any key to move to C matrix
C matrix
3 3
press any key to move to D matrix
D matrix
4 4
sysg = frsp(sys,[0.4 0.9]);
see(sysg)
1 row 2 columns
iv = 0.4
   -1.7692 - 1.1538i   -1.7692 - 1.1538i
iv = 0.9
   -0.9896 - 2.2453i   -0.9896 - 2.2453i
8-148
see, seeiv
8-149
seesys
Purpose 8seesys
Display a SYSTEM or VARYING matrix with sprintf formatting
Syntax seesys(matin,'format')
Examples The seesys command displays any type of matrix. An example of its use for
SYSTEM and VARYING matrices follows.
seesys(sys)
seesys(sys,'%1.0f')
1 1 | 2 2
1 1 | 2 2
-------|--------
3 3 | 4 4
8-150
sel, reordsys
Description sel selects desired rows and columns from a CONSTANT/VARYING matrix,
or outputs and inputs from a SYSTEM matrix. For CONSTANT and VARYING
matrices, the rows and cols input arguments are row vectors with the desired
rows/columns of mat specified. For SYSTEM matrices, outputs and inputs are
row vectors with the desired inputs/outputs specified. Use the string ':' to
specify all rows (outputs) and/or columns (inputs).
reordsys reorders the states of SYSTEM matrix sys as defined by the vector
of position variables, index. The index variable is restricted to be the same
length as the number of states of sys. This command can be used in conjunction
with strans and sresid to reduce the states of a SYSTEM matrix.
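For example, a typical use of reordsys before residualization might be sketched as
follows (the index vector and state ordering are illustrative; it assumes the third
state of a four-state SYSTEM matrix is the fastest).

index = [1 2 4 3];               % move state 3 to the end
sysord = reordsys(sys,index);    % same system, states reordered
sysred = sresid(sysord,3);       % residualize the last (fastest) state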
Examples You can use the sel command with any matrix type. First, construct and
display a one state, two output, three input SYSTEM matrix.
minfo(sys)
system: 1 states  2 outputs  3 inputs
seesys(sys)
8-151
sel, reordsys
Reorder the outputs of sys to be output 2, 1 and repeat output 2; also reorder
the inputs to be input 3, 1 and 2.
sys2 = sel(sys,[2 1 2],[3 1 2]);
minfo(sys2)
system: 1 states  3 outputs  3 inputs
seesys(sys2)
indep variable 1
Select the second and first outputs and the third and second inputs and display
them.
part = sel(sysg,[2 1],[3 2]);
see(part)
2 rows 2 columns
indep variable 0.1
8-152
sel, reordsys
indep variable 1
8-153
siggen
Purpose 8siggen
Generate VARYING matrix functions
Syntax y = siggen('function(t)',t)
Description siggen is a general purpose signal generator. You can provide a timebase with
the argument t, and the function to be evaluated with the first argument (a
string), function(t). The output, y, will be a VARYING matrix. t could also be
VARYING, in which case the timebase is the independent variables contained
in t.
function(t) is not necessarily dependent on t. In the cases where it doesn’t
depend on t, siggen can be slow. This is because function is evaluated with a
MATLAB eval call for every element in t. For example, consider generating a
random vector. The command
u = siggen('rand(size(t))',[0:100]);
builds the entire signal with a single evaluation, and is therefore much faster
than specifying the function string as 'rand', which forces a separate evaluation
at every time point.
Examples The first example illustrates what is perhaps the most common use of siggen.
A single-input single-output signal is created from MATLAB mathematical
functions. It is important to use t as the independent variable in the function
string.
timebase = [0:0.05:10];
y1 = siggen('exp(0.1*t) - sin(3*t)',timebase);
minfo(y1)
varying: 201 pts  1 rows  1 cols
vplot(y1)
title('siggen example: function depends on t')
8-154
siggen
[Plot: siggen example, function depends on t]
The second example illustrates that the second argument can make use of the
independent values of a VARYING matrix. Note also that the specified function
is independent of t, and is executed at each instance of t. This example is
included to illustrate that the function string need not depend on t. In practice
the string rand(size(t)) is orders of magnitude faster than rand.
y2 = siggen('rand',y1);
minfo(y2)
varying: 201 pts  1 rows  1 cols
vplot(y2)
title('siggen example: function independent of t')
8-155
siggen
[Plot: siggen example, function independent of t]
8-156
siggen
siggen cannot generate stair-step signals with user-specified values. You can
do this however (for single-input/single-output signals) using the command
vpck or the µ-Tools command step_tr. Vectors of signals can be created with
vpck and abv. The following example demonstrates the use of vpck. vinterp is
used to plot a meaningful representation of the signal.
y4 = vpck([0:10]',[0:2:20]');
minfo(y4)
varying: 11 pts  1 rows  1 cols
vplot(vinterp(y4,0.1))
title('Siggen example: step function')
8-157
siggen
[Plot: siggen example, step function]
8-158
simgui
Purpose 8simgui
A graphical user interface for time simulations of linear fractional
transformations
Syntax simgui
Description simgui provides the ability to simulate linear fractional models and plot their
responses. The standard linear fractional model considered is shown below.
• Main Simulation window, which is the main interface for the user.
• Parameter window, which is used to modify properties of the time
simulation, such as the final time, integration step size, initial conditions,
and which variables are automatically exported to the workspace.
• Plot windows, where the plots of time responses are displayed. You can open
up to six of these windows.
8-159
simgui
8-160
spoles
Purpose 8spoles
Calculate the eigenvalues of a SYSTEM A matrix
Description spoles returns the eigenvalues of the A matrix from the SYSTEM matrix sys.
Examples Find the poles of the two input, one output, three state SYSTEM matrix sys.
A = [1 1 1; 3 1 1; 1 1 -2];
B = 2*ones(3,2);
C = 3*ones(1,3);
D = 4*ones(1,2);
sys = pck(A,B,C,D);
minfo(sys)
system: 3 states  1 outputs  2 inputs
spoles(sys)
3.1474
-0.8186
-2.3289
eig(A)
ans =
3.1474
-0.8186
-2.3289
Algorithm spoles uses the MATLAB command schur to find the eigenvalues of the
SYSTEM A matrix. This is a more numerically reliable method than using the
eig function.
8-161
srelbal, sfrwtbal, sfrwtbld, sncfbal, sdecomp
where in the relative error case wt1 is the identity and wt2 = sysfact.
8-162
srelbal, sfrwtbal, sfrwtbld, sncfbal, sdecomp
[Nr; Mr] and are given by the column vector signcf. Model reduction for these
systems can then be performed using strunc or hankmr. The method is well
suited to plant or controller reduction in feedback systems.
sdecomp decomposes a system into the sum of two systems, sys =
madd(sysst,sysun). sysst has the real parts of all its poles < bord and sysun
has the real parts of all its poles ≥ bord. bord has default value 0. The D matrix
for sysun is zero unless fl = 'd' when that for sysst is zero.
srelbal, sfrwtbal, sfrwtbld, sncfbal, and sdecomp are restricted to be used
on continuous-time SYSTEM matrices.
Examples Given the system

    sys = (s + 1)(s + 10)(s + 90) / ((s + 2)(s + 91)(s + 100))

reduce the system to two and
one states, respectively. An approximate system of order 1 or 2 can be obtained
as follows.
sys = zp2sys([-1 -10 -90],[-2 -91 -100]);
[sysb,relsv,sysfact] = srelbal(sys);
disp(relsv')
8.5985e-01  2.0777e-01  2.1769e-04
sysrel1 = strunc(sysb,1);
sysrel2 = strunc(sysb,2);
The relative error in the second-order model will be negligible since relsv(3) is
very small; however, with a first-order model, it will be substantial.
8-163
srelbal, sfrwtbal, sfrwtbld, sncfbal, sdecomp
In this example the method nearly reaches the lower bound, but this cannot be
claimed in general.
Now consider approximating the unstable third order system,
    sys = 10 / ( s (s - 1) (s + 10) )
using sncfbal. First the balanced realization of the normalized left coprime
factors is calculated, then this is truncated to two states and the reduced-order
system recovered from these normalized coprime factors using starp.
sys = zp2sys([],[0 1 -10],10);
[sysnlcf,signcf] = sncfbal(sys);
disp(signcf')
9.6700e-01  5.2382e-01  2.3538e-02
sysnlcfr = strunc(sysnlcf,2);
sysr = starp(mmult([1;1],msub(sysnlcfr,[0 1])),-1,1,1)
8-164
srelbal, sfrwtbal, sfrwtbld, sncfbal, sdecomp
Algorithm The algorithms are based on the results in the following papers.
8-165
sresid, strunc
sysout = strunc(sys,ord)
Description sresid residualizes the last states of a SYSTEM matrix sys. sresid accounts
for the DC contribution of the last columns and rows of the SYSTEM A matrix
and the corresponding rows and columns of B and C. sresid assumes that the
SYSTEM matrix is ordered so that the last states are to be residualized. If the
original SYSTEM matrix is partitioned as

    p = [A11 A12 B1; A21 A22 B2; C1 C2 D]

then the residualization results in

    sysout = pss2sys([A11 B1; C1 D] - [A12; C2]*inv(A22)*[A21 B2], ord)
strunc truncates the states of the input system matrix sys, to a system with
state dimension equal to ord. strunc can be used in conjunction with the model
reduction routines sysbal and hankmr.
The resulting SYSTEM output matrix is
sysout = pss2sys ([A_11 B_1; C_1 D]);
8-166
sresid, strunc
Examples A two input, one output, four state SYSTEM is reduced down to a two input,
one output, three state SYSTEM via sresid and strunc. The only difference
between the two reduced-order systems is the value of their D matrices.
seesys(sys)
-1.2e-01  0.0e+00  0.0e+00  0.0e+00 |  9.1e-01  5.2e-01
 0.0e+00 -3.2e-01  0.0e+00  0.0e+00 |  6.1e-02  3.2e-01
 0.0e+00  0.0e+00 -4.3e+00  0.0e+00 |  9.1e-01  9.9e-01
 0.0e+00  0.0e+00  0.0e+00 -9.9e+01 |  5.1e-01  4.9e-01
---------------------------------------|-------------------
 2.7e-01  9.1e-02  9.5e-01  7.4e-02 |  0.0e+00  0.0e+00
sys_strunc = strunc(sys,3);
seesys(sys_strunc)
-1.2e-01  0.0e+00  0.0e+00 |  9.1e-01  5.2e-01
 0.0e+00 -3.2e-01  0.0e+00 |  6.1e-02  3.2e-01
 0.0e+00  0.0e+00 -4.3e+00 |  9.1e-01  9.9e-01
-------------------------------|---------------------
 2.7e-01  9.1e-02  9.5e-01 |  0.0e+00  0.0e+00
sys_resid = sresid(sys,3)
seesys(sys_resid)
8-167
starp
Purpose 8starp
Form the Redheffer star product of two VARYING/SYSTEM/CONSTANT
matrices. The star product is a generalization of a linear fractional
transformation
Description Connects the two matrices top and bot in the star product loop shown below.
The last dim1 outputs of top are fed to the first dim1 inputs of bot, and the first
dim2 outputs of bot are fed into the last dim2 inputs of top. The remaining
inputs and outputs constitute sysout. By this description, the dimensions must
satisfy
min(dim_out(top),dim_in(bot)) ≥ dim1
min(dim_out(bot),dim_in(top)) ≥ dim2
Further restrictions also arise
IF dim1 = dim_out(top) & dim2 = dim_out(bot)
THEN there are no outputs remaining in the interconnection
IF dim1 = dim_in(bot) & dim2 = dim_in(top)
THEN there are no inputs remaining in the interconnection
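A minimal sketch, assuming the calling sequence starp(top,bot,dim1,dim2) used
elsewhere in this chapter (P and K are illustrative SYSTEM matrices already in
the workspace):

% feed the last output of P to K's input, and K's output back to P's last input
clp = starp(P,K,1,1);      % dim1 = dim2 = 1 signal in the feedback path
minfo(clp)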
8-168
starp
Algorithm The “µ-Tools Commands for LFTs” section in Chapter 4 provides details of the
star product formulae.
8-169
statecc, strans
Description statecc applies a state coordinate transformation to the matrix, yielding a new
SYSTEM matrix with
sysout = pck(t\A*t, t\B, C*t, D)
Examples The strans command shows the individual contributions of the modes of the
SYSTEM matrix. In this example sys, which has four states, two inputs and
one output is transformed into bidiagonal form.
see(sys)
A matrix
8-170
statecc, strans
B matrix
0.6868 0.5269
0.5890 0.0920
0.9304 0.6539
0.8462 0.4160
C matrix
D matrix
0 0
sys = strans(sys);
see(sys)
A matrix
-0.0763 0 0 0
0 0.1082 –0.4681 0
0 0 0 1.4095
B matrix
-0.4731 -0.1839
0.5971 0.3199
0.2869 0.5542
-1.7033 -0.8132
C matrix
D matrix
0 0
8-171
sysbal, hankmr
8-172
sysbal, hankmr
Examples Given the system

    sys = (s + 10)(s + 90) / ((s + 2)(s + 91)(s + 10))

reduce the system to two and one states.
[Bode plot: magnitude (top) and phase in radians (bottom) vs. frequency (radians/sec) for the original and reduced-order systems]
8-173
sysbal, hankmr
Notice that there is virtually no difference between the three systems. Now we
will reduce the original system down to one state with sysbal and hankmr.
[syssb,sv] = sysbal(sys);
sys1s = strunc(syssb,1);
sys1sg = frsp(sys1s,w);
sys1h = hankmr(syssb,sv,1);
sys1hg = frsp(sys1h,w);
vplot('bode',sys_g,sys1sg,sys1hg)
tmp = 'Original 3 state system, 1 state Balanced ';
tmp1 = 'and Hankel Model Reduction';
title([tmp tmp1])
[Bode plot: original system (solid), one-state balanced realization (dashed), and one-state Hankel reduction (dotted); magnitude and phase vs. frequency (radians/sec)]
The original three state system corresponds to the solid line, the one state
balanced realization system corresponds to the dashed line, and the one state
Hankel model reduced system corresponds to the dotted line. There are
significant differences between the models produced by the two model reduction
techniques. Depending on the model reduction objectives, the one state models
may be inappropriate for use.
8-174
sysbal, hankmr
8-175
sysic
Purpose 8sysic
Form linear interconnections of CONSTANT and SYSTEM matrices (or
CONSTANT and VARYING matrices)
Syntax sysic
Description µ-Tools provides a simple linear system interconnection program called sysic.
It forms linear interconnections of CONSTANT and SYSTEM matrices (or
CONSTANT and VARYING matrices, though this can require a lot of memory),
by writing the loop equations of the interconnection.
Using sysic involves setting up several variables in the MATLAB workspace,
and then running the M-file sysic. The variables that are defined delineate the
details of the interconnection.
Variable Descriptions
A list and description of the variables required by sysic follow.
inputvar. This variable is a character string, with names of the various external
inputs that are present in the final interconnection. The input names are
separated by semicolons, and the entire list of input names is enclosed in
square brackets [ ]. Inputs can be multivariable signals; for instance a
windgust input, with three directions (x, y, and z) that can be specified by using
windgust{3}. This means that there is a three variable input to the
interconnection called windgust. Alternatively, this could be specified as three
separate, scalar inputs, say wingustx, windgusty, and windgustz. The order
that the input names appear in the variable inputvar is the order that the
inputs will be placed in the interconnection.
8-176
sysic
cleanupsysic. After running sysic, all of the above variables, which describe the
interconnection, are left in the workspace. These will be automatically cleared
if the optional variable cleanupsysic is set to the character string yes. The
default value of the variable is 'no' which does not result in any of the sysic
descriptions you defined to be cleared. The MATLAB matrices listed in the
variable systemnames are never automatically cleared.
Running sysic
If the variables systemnames, inputvar, and outputvar are set, and for each
name name_i appearing in systemnames, the variable input_to_name_i is set,
then the interconnection is created by running the M-file sysic. Depending on
the existence/nonexistence of the variable sysoutname, the resulting
interconnection SYSTEM matrix is either stored in the variable named by
sysoutname or left in a default workspace variable.
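A minimal sketch of such a session follows. The particular systems (himat, wdel,
wp) and signal paths are illustrative only, but the variable names follow the
description above; because sysoutname is set to 'clp', the interconnection ends up
in clp, with the two input sets pertin and dist and two output sets (the
perturbation output from wdel and the error output from wp), matching the
description that follows.

systemnames = ' himat wdel wp ';
inputvar = '[ pertin{2}; dist{2} ]';
outputvar = '[ wdel; wp ]';
input_to_himat = '[ pertin + dist ]';
input_to_wdel = '[ himat ]';
input_to_wp = '[ himat + dist ]';
sysoutname = 'clp';
cleanupsysic = 'yes';
sysic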
8-177
sysic
8-178
sysic
The final interconnection structure is located in clp with two sets of inputs,
pertin and dist, and two sets of outputs w and e, corresponding to the
perturbation and error outputs.
8-179
szeros
Purpose 8szeros
Transmission zeros of a SYSTEM matrix
Description szeros calculates the transmission zeros of the input SYSTEM matrix, sys.
The output veczeros contains the vector of transmission zeros.
epp is an optional input argument which is used to test the closeness of the
generalized eigenvalues of the randomly perturbed matrices. Its default value
is the machine epsilon. Occasionally zeros at infinity are displayed as very
large values due to numerical accuracy problems.
For a square SYSTEM matrix, [A B; C D], the generalized eigenvalue test
consists of finding the roots of
    det( [A - λI  B; C  D] ) = 0
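For example, the transmission zeros of the three-state SYSTEM matrix used on the
spoles reference page can be computed as follows (a sketch; the numerical values
returned are not reproduced here).

A = [1 1 1; 3 1 1; 1 1 -2];
B = 2*ones(3,2); C = 3*ones(1,3); D = 4*ones(1,2);
sys = pck(A,B,C,D);          % 3 states, 1 output, 2 inputs (nonsquare)
veczeros = szeros(sys)       % transmission zeros, found by random augmentation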
Algorithm For a square system, the transmission zeros are found via the generalized
eigenvalue problem described above. To solve for the transmission zeros of a
nonsquare SYSTEM matrix, additional random rows or columns are
augmented to the SYSTEM matrix to make it square and the corresponding
zeros are found. This is done twice, and the unchanged generalized
eigenvalues, where the difference between the eigenvalues is less than epp, are
considered to be the transmission zeros of the SYSTEM matrix.
Reference Laub, A.J., and B.C. Moore, “Calculation of transmission zeros using QZ
techniques,” Automatica, vol. 14, pp. 557–563, 1978.
8-180
trsp, dtrsp, sdtrsp
Syntax y = trsp(sys,u,tfinal,int,x0)
y = dtrsp(dsys,u,T,tfinal,x0)
[output,y,u] = sdtrsp(sys,k,input,T,tfinal,int,x0,z0)
Description trsp computes the time response of the continuous-time system, sys, with the
input, u. The input, u, is a VARYING matrix, which contains the input signal
vector at certain points in time. The input can be irregularly spaced in the
independent variable or a constant, in which case it is assumed to occur at t =
0.
The final time, tfinal, is an optional argument. If omitted, it defaults to the
maximum time in u. The time response is calculated as though the input is a
constant value between the points specified in u. If tfinal is greater than the
largest independent variable in u, the input is held at the last value in u.
For continuous-time evaluation (trsp), you can optionally specify an
integration time with the variable int. If this is omitted, or is equal to zero, an
appropriate value is calculated and displayed. The calculated integration time
depends on the minimum spacing in the input and the fastest dynamics in sys.
int will also be the independent variable step size in the regularly spaced
output, y. If a coarser output is adequate, it can be obtained with the function
vdcmate.
Initial conditions can optionally be specified with the argument, x0. This
specifies the state vector at the first time point in the input vector. If x0 is
omitted, or is a zero scalar, then it is assumed to be a zero vector.
trsp interpolates the input with a zero-order hold of step size equal to int,
discretizes the output at this same step size, and calculates the response from
the initial time to tfinal in steps of int.
dtrsp calculates the response for a discrete-time system, dsys. The time (for
the independent variable) between discrete indices is T. If the input is not
regularly spaced at intervals of time T, it is interpolated. tfinal and x0 behave
in the same manner as for trsp.
sdtrsp calculates a sampled-data time response for a closed-loop system with
a continuous generalized plant (sys) and a discrete controller (K). The
interconnection is illustrated below.
8-181
trsp, dtrsp, sdtrsp
Examples A simple SISO system illustrates the use of trsp. This example shows the
consequences of the input being assumed to be constant between time points.
sys = pck(-1,1,1);
minfo(sys)
system: 1 states  1 outputs  1 inputs
u = vpck([0:10:50]',[0:10:50]');
y = trsp(sys,u,60);
integration step size: 0.1
interpolating input vector (zero order hold)
minfo(y)
varying: 601 pts  1 rows  1 cols
vplot(u,'-.',y,'-')
xlabel('time: seconds')
text(10,20,'input')
text(25,10,'output')
8-182
trsp, dtrsp, sdtrsp
[Plot: input (dash-dot) and trsp output (solid) vs. time in seconds]
At first glance the output does not seem to be consistent with the plotted input.
Remember that trsp assumes that the input is held constant between specified
values. The vplot and plot commands display a linear interpolation between
points. This can be more clearly seen by displaying the input signal
interpolated to at least as small a step size as the default integration step (here
0.1 seconds).
vplot(u,'-.',vinterp(u,0.1),'--',y,'-')
xlabel('time: seconds')
text(5,44,'dash-dot: input')
text(5,40,'dashed: interpolated input')
text(5,36,'solid: output')
8-183
trsp, dtrsp, sdtrsp
[Plot: input (dash-dot), interpolated input (dashed), and output (solid) vs. time in seconds]
The staircase nature of the input is now evident. If you really want to have a
ramp input, the function vinterp also provides linear interpolation. A linearly
interpolated input is used in the following example.
uramp = vinterp(u,0.1,60,1);
minfo(uramp)
varying: 601 pts  1 rows  1 cols
yramp = trsp(sys,uramp);
integration step size: 0.1
vplot(uramp,'-.',yramp,'-')
xlabel('time: seconds')
text(20,15,'output')
text(12,20,'input')
8-184
trsp, dtrsp, sdtrsp
[Plot: linearly interpolated ramp input (dash-dot) and output (solid) vs. time in seconds]
Note that because the input is regularly spaced, with spacing less than or equal
to the default integration time, the input is not interpolated by trsp. Since no
final time was specified in the trsp argument list, and 60 seconds was specified
to vinterp as the final time, this becomes the last time in the input vector
uramp.
To illustrate the use of dtrsp, a bilinear transformation generates a digital
system. The sample time is chosen as 1 second. The output is plotted against a
1 second interpolation of the input.
T = 1;
dsys = tustin(sys,T);
ydig = dtrsp(dsys,u,T);
vplot(ydig,'-',vinterp(u,1),'-.')
xlabel('time: seconds')
8-185
trsp, dtrsp, sdtrsp
[Plot: dtrsp output (solid) and interpolated input (dash-dot) vs. time in seconds]
8-186
trsp, dtrsp, sdtrsp
ustep = step_tr(0,1,T,20*T);
y = dtrsp(dclp,ustep,T);
[Plot: discrete-time step response (samples marked with *) vs. time in seconds]
8-187
trsp, dtrsp, sdtrsp
8-188
trsp, dtrsp, sdtrsp
Algorithm trsp first calculates an integration time (or uses the specified integration time)
to determine the sample time at which to discretize the continuous-time
system. The integration time is taken to be the inverse of 10 times the fastest
mode of the input system. The input vector is interpolated at each sample time
via a zero-order hold, and then a sample-hold of the continuous input system is
performed. Finally the time response of the system is computed via a for loop
at each integration time step. dtrsp is provided a discrete time system and a
sample time. dtrsp first interpolates the input vector via a zero-order hold and
then determines the time response via a for loop at each sample time.
Caution Systems with fast dynamics lead to very small integration times.
This is both time consuming and requires a significant amount of storage. We
recommend you residualize the fastest modes of the system, which does not
affect the time response. This can be done with the µ-Tools command sresid.
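A sketch of this residualization step, assuming the fastest modes have already
been placed last (for example with strans or reordsys) and that two states are to
be removed:

nkeep = xnum(sys) - 2;          % number of states to retain (illustrative)
sysslow = sresid(sys,nkeep);    % residualize the two fastest modes
y = trsp(sysslow,u,tfinal);     % the integration step can now be much larger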
8-189
tustin
Purpose 8tustin
Create a discrete-time version of a continuous-time SYSTEM matrix using a
bilinear or prewarped tustin transformation
Description The packed continuous SYSTEM matrix, csys, is converted into a discrete-time
SYSTEM matrix, dsys, using a bilinear transformation with prewarping. The
argument T is the sample time, in seconds. prewarpf is the prewarp frequency
in rads/sec. prewarpf is an optional argument, and if omitted, or equal to zero,
a bilinear transformation is performed instead.
The resulting discrete system, dsys, has the same transfer function as the
continuous system, csys, at the prewarp frequency. Choosing a prewarp
frequency close to the crossover frequency is often appropriate for a control
system. Choosing a prewarp frequency too close to the Nyquist frequency (1/2T)
can result in severe distortion at the lower frequencies. In the extreme, if
prewarpf is greater than or equal to π/T, the discrete system can be unstable.
Note that a bilinear transformation preserves the transfer function at zero
frequency, so setting prewarpf to zero to indicate a bilinear transformation is
consistent with this interpretation.
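A minimal usage sketch (the sample period and prewarp frequency are illustrative):

T = 0.1;                    % sample period, seconds
dsys1 = tustin(csys,T);     % bilinear transformation (no prewarping)
dsys2 = tustin(csys,T,5);   % Tustin transformation prewarped at 5 rad/sec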
8-190
tustin
[Bode plot: magnitude and phase of the continuous and discrete systems vs. frequency (radians/sec)]
8-191
tustin
[Bode plot: magnitude and phase showing frequency distortion away from the prewarp frequency]
Note the distortion in the frequency of the lightly damped peak. At 5 rads/sec
both the continuous and discrete systems have the same transfer function.
sys_5 = vunpck(frsp(sys,5));
dsys2_5 = vunpck(frsp(dsys2,5,T));
err = abs(dsys2_5 - sys_5);
fprintf('error at %g rad/sec is : %g ',prewarpf,err);
error at 5 rad/sec is : 1.155158e-17
8-192
tustin
The packed continuous SYSTEM matrix

    csys = [A B; C D]

is mapped, using the prewarped Tustin transform, to

    dsys = [Adisc Bdisc; Cdisc Ddisc]

where

    α = prewarpf / tan(T*prewarpf/2)

    Adisc = (I + (1/α)A) * (I - (1/α)A)^(-1)
    Bdisc = sqrt(2/α) * (I - (1/α)A)^(-1) * B
    Cdisc = sqrt(2/α) * C * (I - (1/α)A)^(-1)
    Ddisc = D + (1/α) * C * (I - (1/α)A)^(-1) * B
Reference Oppenheim, A.V., and R.W. Schafer, Digital Signal Processing, Prentice Hall,
New Jersey, 1975.
8-193
unum, xnum, ynum
Description unum returns the input (column) dimension of SYSTEM, CONSTANT, and
VARYING matrices, and ynum returns the corresponding output (row) dimension.
xnum returns the state dimension of SYSTEM matrices.
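A brief illustration, using the SYSTEM matrix constructed on the spoles reference
page:

A = [1 1 1; 3 1 1; 1 1 -2]; B = 2*ones(3,2);
C = 3*ones(1,3); D = 4*ones(1,2);
sys = pck(A,B,C,D);
xnum(sys)    % 3 states
ynum(sys)    % 1 output (row)
unum(sys)    % 2 inputs (columns)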
8-194
vabs, vimag, vreal, vfloor, vceil
Examples Construct a complex VARYING matrix and find the magnitude of the entries
and the real parts.
see(matin)
1 row 2 columns
iv = .2
   0.2190 - 0.4379i   0.6789 - 1.3577i
iv = .7
   0.0470 - 0.0941i   0.6793 - 1.3586i
8-195
vabs, vimag, vreal, vfloor, vceil
see(vabs(matin))
1 row 2 columns
iv = .2
   0.4896   1.5180
iv = .7
   0.1052   1.5190
see(vreal(matin))
1 row 2 columns
iv = .2
   0.2190   0.6789
iv = .7
   0.0470   0.6793
8-196
vdet, vdiag, vexpm, vrcond
Description These commands operate on square, CONSTANT and VARYING matrices and
they are identical to the MATLAB commands det, diag, expm, and rcond on
CONSTANT matrices.
vdet of a square, VARYING matrix, returns matout, which is a VARYING
1 × 1 matrix, containing the value of the determinant of matin at each
independent variable value.
vdiag of a square, VARYING matrix, returns matout, which is a VARYING
matrix of size min(size(matin)) × 1, containing the diagonal elements of
matin at each independent variable.
Examples vdet and vrcond work similarly to their MATLAB counterparts, det and
rcond, but on square VARYING matrices as shown below.
see(matin)
2 rows 2 columns
iv = 2.3
   0.0475   0.3282
   0.7361   0.6326
iv = 5.6
   0.7564   0.3653
   0.9910   0.2470
matout = vdet(matin);
8-197
vdet, vdiag, vexpm, vrcond
see(matout)
1 row 1 column
iv = 2.3
-0.2116
iv = 5.6
-0.1752
see(vrcond(matin))
1 row 1 column
iv = 2.3
0.1907
iv = 5.6
0.0848
8-198
vebe
Purpose 8vebe
Perform element-by-element operations on CONSTANT and VARYING
matrices
Description The vebe function allows any single argument MATLAB element-by-element
arithmetic command to operate on a VARYING matrix. The first input
argument, oper, is the character string defining the MATLAB
element-by-element command and matin is the VARYING matrix on which the
command is applied. vebe calls the MATLAB eval command to execute the
string command. Some standard MATLAB commands compatible with vebe are
sin, abs, real, imag, and gamma.
Examples In this example of vebe, the real part of a matrix is found along with gamma of
each matrix element.
see(matin)
3 rows 3 columns
iv = 4.2
iv = 11.01
matout = vebe('real',matin);
see(matout)
3 rows 3 columns
8-199
vebe
iv = 4.2
1 1 1
2 2 2
3 3 3
iv = 11.01
4 4 4
5 5 5
6 6 6
see(vebe('gamma',matin))
3 rows 3 columns
iv = 4.2
1 1 1
1 1 1
2 2 2
iv = 11.01
6 6 6
24 24 24
120 120 120
8-200
veig
Purpose 8veig
Calculate eigenvalues and eigenvectors of CONSTANT and VARYING
matrices
8-201
veig
iv = 0.1
0.9304 0.5269
0.8462 0.0920
iv = 0.4
0.6539 0.7012
0.4160 0.9103
evals = veig(matin);
see(evals)
2 rows1 column
iv = 0.1
1.2996
-0.2772
iv = 0.4
0.2270
1.3372
Using the same matrix and creating another two by two VARYING matrix,
solve the generalized eigenvalue problem with these two matrices.
mat1 = matin;
mat2 = vpck([4*eye(2);3*eye(2)],[.1 .4]);
[evecs,evals] = veig(mat1,mat2);
see(evecs)
2 rows 2 columns
iv = 0.1
0.8190 -0.3999
0.5738 0.9166
iv = 0.4
0.8542 0.7162
-0.5200 0.6979
see(evals)
8-202
veig
2 rows 2 columns
iv = 0.1
0.3249 0
0 -0.0693
iv = 0.4
0.0757 0
0 0.4457
8-203
veval
Purpose 8veval
Evaluate general functions of CONSTANT, SYSTEM, and VARYING matrices
Description The veval function evaluates the command oper on the input matrices. veval
works like feval but on collections of VARYING, CONSTANT, and SYSTEM
matrices. 'oper' is a character string with the name of a MATLAB function
(user written, or MATLAB supplied). The function is applied to each input
argument at the independent variable’s values. Any CONSTANT or SYSTEM
matrix is held at its value while the sweep through the independent variable is
performed. veval is currently limited to 10 output arguments, and 13 input
arguments. These are both easily changeable. veval can be used to generate
and manipulate VARYING, SYSTEM matrices or VARYING matrices whose
elements are themselves VARYING matrices. Arbitrary nesting of VARYING
matrices is possible.
The veval function is very useful for rapid prototyping of customized
commands employing VARYING matrices.
Examples To show the flexibility of veval, two random SYSTEM matrices are
constructed. The poles of each SYSTEM are determined with the spoles
command.
sys1 = sysrand(2,1,1);
sys2 = sysrand(2,1,1);
spoles(sys1)
ans =
0.1577
0.7405
spoles(sys2)
ans =
0.6273
-0.5661
8-204
veval
These two SYSTEM matrices are combined to form a VARYING matrix, vsys.
The veval command is used to find poles of the VARYING matrix, which
consists of the two SYSTEM matrices. A SYSTEM matrix is associated with
each independent variable.
vsys = vpck([sys1;sys2],[1 2]);
vsyspoles = veval('spoles',vsys);
see(vsyspoles)
2 rows1 column
iv = 1
0.1577
0.7405
iv = 2
0.6273
-0.5661
8-205
vfft, vifft, vspect
Description vfft implements the MATLAB fft command on VARYING matrix structures.
A one-dimensional FFT of length n is performed on each element of the
VARYING matrix ytime. It is assumed that the independent variable is in
units of seconds. The independent variables are regularly spaced — only the
first interval is used to determine the frequency scale. yfreq is returned with
the independent variable, frequency, in radians/second.
vifft performs the inverse FFT. This is done with the MATLAB command
ifft(yfreq) for each element of the VARYING matrix.
8-206
vfft, vifft, vspect
The signal x must be scalar (i.e., a one row, one column VARYING matrix). y
can be a vector signal. The row dimension of p is the same as that of y. vspect
can do single-input, multiple-output (SIMO) identification. This is illustrated
in the following example. Refer also to the example in the Tutorial chapter.
vfft, vifft, and vspect have not been optimized for speed. The appropriate
row and column data is extracted from the VARYING matrices with the µ-Tools
commands, sel and xtract. sbs and abv are used to create the final output.
8-207
vfft, vifft, vspect
The signal u is the input to the system. siggen is used to generate some random
noise on the output signal, y.
y = madd(trsp(sys,u),...
    siggen('[0.01*rand(size(t));0.025*rand(size(t))]',t));
integration step size: 0.05
vplot(y)
title('vspect example: output waveform with noise')
xlabel('time: seconds')
[Plot: vspect example, output waveform with noise, vs. time in seconds]
The vspect command specifies a 1024 point window, with 512 points of
overlap. A Hamming window is applied to the data.
P = vspect(u,y,1024,512,'hamming');
3 hamming windows in averaging calculation
8-208
vfft, vifft, vspect
[Plot: magnitude of the estimated transfer function vs. frequency (rad/sec)]
Algorithm vfft, vifft, and vspect call the MATLAB commands fft and ifft.
Reference Ljung, L., System Identification: Theory for the User, Prentice Hall, New
Jersey, 1987.
Oppenheim, A.V., and R.W. Schafer, Digital Signal Processing, Prentice Hall,
New Jersey, 1975.
8-209
vfind
Purpose 8vfind
Unary find function across independent variable
Description vfind is a unary find function that searches across independent variable
values. The condition to be tested can be any valid MATLAB conditional
statement, using the string mat to identify the matrix, and iv as the
independent variable’s value. Both the values and indices of the applicable
independent variables are returned.
Examples Suppose that matin is a VARYING matrix. In order to find those entries for
which the product of the norm of the matrix, and the independent variable is
greater than 2, use vfind as follows.
[iv_value,iv_index] = vfind('iv*norm(mat)>2',matin);
matpropv = xtract(matin,iv_value); % extract by value
matpropi = xtracti(matin,iv_index); % extract by index
pkvnorm(msub(matpropv,matpropi)) % compare - both are the same
8-210
vinterp, vdcmate
Description In the first form, vinterp produces a regularly spaced interpolated version of
the input VARYING matrix. The input arguments are
stepsize independent variable stepsize
finaliv end value for independent variable (Optional: the default is
the final independent variable in the input)
order type of interpolation (optional, default = 0)
0 zero-order hold
1 linear interpolation
The end value for the independent variable may or may not be in the actual
output. This is consistent with the usual MATLAB treatment of regularly
spaced vectors. For example, consider
iv = [1:2:6];
disp(iv)
1 3 5
Note that the value of 6 does not appear in the vector.
In the second form, vinterp produces a VARYING matrix vout that is an
interpolated version of vin. The independent variables of vout are the same as
the independent variables of varymat. The input arguments are
varymat VARYING matrix with desired independent variables
order type of interpolation (optional, default = 0)
0 zero-order hold
1 linear interpolation
8-211
vinterp, vdcmate
Examples siggen creates a sinewave. This is effectively sampled by vdcmate and then
interpolated by vinterp. Note that the default interpolation is a zero-order
hold, giving a stair-step output, yi. If a linearly interpolated output were
specified, it would look identical to yd since the MATLAB plot command
displays a linear interpolation.
timebase = [0:0.005:20];
y = siggen('sin(2*pi*t)',timebase);
minfo(y)
varying:4001 pts1 rows1 cols
yd = vdcmate(y,210);
minfo(yd)
varying:20 pts1 rows1 cols
yi = vinterp(yd,0.005,20,0);
minfo(yi)
varying:4001 pts1 rows1 cols
axis([0,20,-1.5,1.5])
vplot(y,yd,yi)
title('vdcmate/vinterp example: undersampled sine wave')
xlabel('time: seconds')
[Plot: vdcmate/vinterp example, undersampled sine wave vs. time in seconds]
8-212
vldiv, vpinv, vrdiv
Description These commands operate on VARYING matrices and they are identical to the
MATLAB commands \, pinv, and / on CONSTANT matrices.
vldiv of a VARYING matrix returns out, which is a VARYING matrix,
containing the value of the left division (mat1(i)\mat2(i)) at each independent
variable value. vldiv is identical to the MATLAB command \ for CONSTANT
matrices.
vpinv of a VARYING matrix returns out, which is the pseudo-inverse of mat at
each independent variable. out is of the same dimension as vcjt(mat), and
satisfies mat = mmult(mat,out,mat). tol is used within the svd routine to
determine zero singular values. The default value of tol is 1e–12. vpinv is
identical to the MATLAB command pinv for CONSTANT matrices.
vrdiv of a VARYING matrix returns out, which is a VARYING matrix,
containing the value of the right division (mat1(i)/mat2(i)) at each
independent variable value. vrdiv is identical to the MATLAB command / for
CONSTANT matrices.
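For instance, the defining property mat = mmult(mat,out,mat) of the pseudo-inverse
can be checked directly (a sketch, assuming matin is a VARYING matrix already in
the workspace):

out = vpinv(matin);                         % pseudo-inverse at each independent variable
err = msub(matin,mmult(matin,out,matin));   % should vanish
pkvnorm(err)                                % peak norm over the independent variable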
8-213
vpck, vunpck, var2con
Description The data structure for a VARYING matrix consists of the sampled matrix
values stacked one upon each other, and the particular independent variable
values. vpck places the stacked data from the input variable, matin, and the
vector, indv, which represents the independent variable values, into a new
matrix, matout, with the correct structure and data structure of a VARYING
matrix.
The command vunpck performs the inverse operation; unpacking a VARYING
matrix into stacked data varydata, row pointers rowpoint, a vector of
independent variables indv, and an error flag err. The value of rowpoint(i)
points to the row of data that corresponds to the first row of the ith value of
matin. indv is a column vector with the independent variable values. The error
flag is normally 0 but it is set to 1 if the input matrix is a SYSTEM.
var2con converts VARYING matrices to CONSTANT matrices. If there is one
input argument, mat, and it is a VARYING matrix, then the output matout is
the CONSTANT matrix in mat associated with the independent variable’s first
value. The optional second output argument is this independent variable’s
value. If two input arguments are used, then the first is a VARYING matrix,
and the second is a desired independent variable’s value. The command finds
the matrix in mat whose independent variable’s value is closest to desiv, and
returns this matrix as a CONSTANT matrix.
8-214
vpck, vunpck, var2con
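The output below is consistent with packing a stack of three 2 x 3 matrices at the
independent variable values 0.1, 2.3, and 5.6; a call that reproduces it is sketched
here (the call itself is reconstructed from the displayed result and is not part of
the original example).

matdata = [1 1 1; 2 2 2; 3 3 3; 4 4 4; 5 5 5; 6 6 6];   % stacked 2x3 values
matout = vpck(matdata,[0.1 2.3 5.6]);
see(matout)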
iv = 0.1
1 1 1
2 2 2
iv = 2.3
3 3 3
4 4 4
iv = 5.6
5 5 5
6 6 6
8-215
vplot
Purpose 8vplot
Plot multiple VARYING matrices on the same graph
Syntax vplot('plot_type',vmat1,vmat2,...)
vplot('plot_type',vmat1,'linetype1',...)
vplot('bode_l',top_axis_limits,bottom_axis_limits,vmat1,vmat2,...)
Description The vplot command calls the standard MATLAB plot command for plotting.
The optional plot_type argument specifies the type of graph, and selects
between the various logarithmic or linear graph types. The plot_type
specification choices are
iv,d matrix (decimal) vs. independent variable
iv,m magnitude vs. independent variable
iv,lm log(magnitude) vs. independent variable
iv,p phase vs. independent variable
liv,d matrix vs. log(independent variable)
liv,m magnitude vs. log(independent variable)
liv,lm log(magnitude) vs. log(independent variable)
liv,p phase vs. log(independent variable)
ri real vs. imaginary (parametrized by independent variable)
nyq real vs. imaginary (parametrized by independent variable)
nic Nichols chart
bode Bode magnitude and phase plots
bode_g Bode magnitude and phase plots with grids
bode_l Bode magnitude and phase plots with axis limits
bode_gl Bode magnitude and phase plots with grids and axis limits
8-216
vplot
The remaining arguments of vplot take the same form as the MATLAB plot
command. Line types (for example,'+', 'g-.', or '*r') can be optionally
specified after any VARYING matrix argument.
There is a subtle distinction between CONSTANT and VARYING matrices
with only one independent variable. A CONSTANT is treated as such across all
independent variables, and consequently shows up as a line on any graph with
the independent variable as an axis. A VARYING matrix with only one
independent variable will always show up as a point. You may need to specify
one of the more obvious point types in order to see it (e.g., '+', 'x', etc.).
Examples Two SISO second-order systems are created, and their frequency responses are
calculated for each over different frequency ranges.
a1 = [-1,1;-1,-0.5];
b1 = [0;2]; c1 = [1,0]; d1 = 0;
sys1 = pck(a1,b1,c1,d1);
minfo(sys1)
system:2 states1 outputs1 inputs
a2 = [-.1,1;-1,-0.05];
b2 = [1;1]; c2 = [-0.5,0]; d2 = 0.1;
sys2 = pck(a2,b2,c2,d2);
minfo(sys2)
system:2 states1 outputs1 inputs
omega = logspace(-2,2,100);
sys1_g = frsp(sys1,omega);
omega2 = [ [0.05:0.1:1.5] [1.6:.5:20] [0.9:0.01:1.1] ];
omega2 = sort(omega2);
sys2_g2 = frsp(sys2,omega2);
The following plot uses the 'liv,lm' plot_type specification. Note that the
CONSTANT matrix is seen over all values of the independent variable. This is
only true because it is displayed as a line type. If it were displayed as a point,
8-217
vplot
then one would see points only on each of the side axes. The single valued
VARYING matrix (rspot) is shown only at the appropriate independent
variable value.
vplot('liv,lm',sys1_g,'b-.',[1+i;0.5-0.707*i],'g--',...
rspot,'r*',sys2_g2);
xlabel('log independent variable')
ylabel('log magnitude')
title('plot_type specification: liv,lm')
[Plot: plot_type specification liv,lm; log magnitude vs. log independent variable]
You can customize vplot to select the type of axes used for log magnitude and
phase plots. The default is to plot the log magnitude on a base 10 scale and plot
phase in radians. It is a simple modification to select a dB scale and phase in
degrees. Documentation of the modification is provided in the M-file vplot. You
can copy the command vplot to a private directory (for example, matlab/
toolboxes/mu_cmds on UNIX systems) and make the appropriate
modifications.
8-218
vplot
Several control design plot functions are also provided. These are bode, nic,
and nyq, for Bode, Nichols, and Nyquist, respectively. The following three plots
demonstrate each of these commands.
vplot('bode',sys1_g,'b',sys2_g2,'g+');
title('plot_type specification: bode')
[Bode plot: plot_type specification bode; magnitude and phase vs. frequency (radians/sec)]
The log magnitude and phase axes are labeled automatically. You can change
these labels. Documentation for doing this is in the Help facility for vplot.
vplot('nic',sys1_g,'b-.',[1+i;0.5-0.707*i],'go',...
rspot,'r*',sys2_g2);
title('plot_type specification: nic')
xlabel('phase (degrees)')
ylabel('log magnitude (dB)')
title('plot_type specification: nic (Nichols Chart)')
8-219
vplot
[Nichols chart: log magnitude (dB) vs. phase (degrees)]
The default axis scale selection for the Nichols plot is dB versus phase in
degrees. This corresponds to the usual choice for this plot and can be different
from the axis scale selection for bode, liv, lm, liv, p, etc. Again you can change
this if required.
vplot('nyq',sys1_g,'b-.',[1+i;0.5-0.707*i],'go',...
rspot,'r*',sys2_g2);
xlabel('nyquist diagram (real)')
ylabel('imaginary')
title('plot_type specification: nyq')
8-220
vplot
[Nyquist plot: imaginary vs. real]
8-221
vpoly, vroots
Description vpoly forms an n + 1 element VARYING row vector whose elements form the
coefficients of the characteristic polynomial, det(sI – matin(i)), if matin is an
n × n VARYING matrix. The coefficients are ordered in descending powers of s.
If the input is a column vector vecin containing the roots of a polynomial,
vpoly(vecin) returns a VARYING row vector whose elements are the
coefficients of the corresponding characteristic polynomial.
vroots returns as a VARYING column vector vecout whose elements are the
roots of the polynomial at each independent variable, if vecin is a VARYING
row vector containing the coefficients of a polynomial. vpoly and vroots are
identical to the MATLAB poly and roots commands, but also work on
VARYING matrices.
Examples Given a 3 × 3 VARYING matrix, find the characteristic polynomial and its
roots. Compare this to finding the eigenvalues of the input matrix via veig.
see(matin)
3 rows 3 columns
iv = 0.1
1 2 3
4 5 6
7 8 9
iv = 0.4
10 11 12
13 14 15
16 17 18
matout = vpoly(matin);
see(matout)
1 row 4 columns
iv = 0.1
1.0000e+00  -1.5000e+01  -1.8000e+01  -1.4483e-14
iv = 0.4
1.0000e+00  -4.2000e+01  -1.8000e+01   1.2818e-14
8-222
vpoly, vroots
vecout = vroots(matout);
see(vecout)
3 rows 1 column
iv = 0.1
1.6117e+01
-1.1168e+00
-8.0463e-16
iv = 0.4
4.2424e+01
-4.2429e-01
7.1212e-16
evals = veig(matin);
see(evals)
3 rows 1 column
iv = 0.1
1.6117e+01
-1.1168e+00
-8.0463e-16
iv = 0.4
4.2424e+01
-4.2429e-01
7.1212e-16
Algorithm vpoly and vroots call the MATLAB poly and roots commands.
8-223
vsvd, vrho, vschur
Syntax s = vsvd(matin)
[u,s,v] = vsvd(matin)
out = vrho(matin)
t = vschur(matin)
[u,t] = vschur(matin)
Examples Construct a random VARYING matrix and find its singular values.
see(matin)
2 rows 2 columns
iv = 0.1
   0.9304   0.5269
   0.8462   0.0920
iv = 0.4
   0.6539   0.7012
   0.4160   0.9103
[u,s,v] = vsvd(matin);
8-224
vsvd, vrho, vschur
see(u)
2 rows 2 columns
iv = 0.1
0.7884 0.6152
0.6152 -0.7884
iv = 0.4
0.6909 -0.7229
0.7229 0.6909
see(s)
iv = 0.1
1.3400 0.2689
iv = 0.4
1.3681 0.2219
see(v)
2 rows 2 columns
iv = 0.1
0.9359 -0.3522
0.3522 0.9359
iv = 0.4
0.5501 -0.8351
0.8351 0.5501
Algorithm vrho, vschur, and vsvd call the MATLAB commands eig, schur, and svd, respectively.
See Also eig, hess, pkvnorm, mu, qz, schur, svd, veig, vnorm
8-225
vzoom
Purpose 8vzoom
Freeze plot axes by clicking mouse twice in plot window
Syntax vzoom('axis')
Description vzoom uses the MATLAB functions ginput and axis to freeze the axes by
clicking the mouse twice in the plot window that defines minimum and
maximum values for x and y. The clicking may be done in any order.
The axis argument specifies the type of graph, and can select between the
various logarithmic or linear graph types, just as in vplot. Unlike vplot, the
axis argument is not optional. The axis specification choices are the same as
for vplot, with four additional choices. The function is not defined for 'bode'.
8-226
wsgui
Purpose 8wsgui
A graphical user interface for the MATLAB workspace
Syntax wsgui
Description wsgui is a graphical user interface (GUI) for the MATLAB workspace. It allows
you to view, delete, and save variables in the workspace, drag these variables
to the drop boxes of other µ-Tools GUIs (dkitgui and simgui), and export variables
from the µ-Tools GUI interfaces to the MATLAB workspace.
The wsgui Workspace Manager window appears as shown on the following
page.
8-227
wsgui
The scrollable table can be moved up/down one page by pressing above/below
the slider. Pressing the arrows at the end of the slider moves the table one line.
A filter is used to make viewing of a reduced number of selections easy. The
Prefix, Suffix and matrix type filters are on the bottom of the scrollable
table. The matrix type filter is a pop-up menu to the right of Suffix. The
Custom filter, which is shown when the * pushbutton is pressed, allows you to
create more complicated selection criteria. To switch to the custom filter, press
the pushbutton marked with an *, located to the right of the pop-up menu. A
detailed description of wsgui is provided in the “Workspace User Interface Tool:
wsgui” section in Chapter 6.
Examples An example of using wsgui is shown in the “Workspace User Interface Tool:
wsgui” section in Chapter 6.
8-228
wcperf
Purpose 8wcperf
Computes upper and lower bounds for the worst-case gain of a linear system
subjected to structured, bounded, LFT perturbations. Also computes
worst-case structured perturbation of a specified H∞ norm.
The linear system is written as an LFT FU(M,∆), where ∆ has block structure as
described via the matrix uncblk. The worst-case performance curve, ƒ(α), is
defined as

    ƒ(α) := max { || FU(M,∆) ||∞ : ∆ in the allowed block structure, max over ω of σ̄(∆(jω)) ≤ α }
Both lower and upper bounds for ƒ are returned as VARYING matrices in
lowbnd and uppbnd. Each VARYING matrix is guaranteed to have at least npts
values of the independent variable α, spread uniformly between 0 and the
stability limit.
The first output argument, delta_wc, is the “worst-case” perturbation from ∆
with norm equal to the value of alpha. delta_wc has the block-diagonal
structure associated with uncblk, and causes the LFT FU (M,∆wc) to have norm
equal to the value of lowbnd associated with the independent variable value α
= alpha.
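A usage sketch follows. The calling sequence shown is an assumption pieced together
from the argument names in this description (clpg denotes an illustrative
closed-loop SYSTEM matrix); consult the Syntax line for the authoritative order.

alpha = 0.8;                 % perturbation size of interest
npts  = 10;                  % minimum number of points on the performance curve
[delta_wc,lowbnd,uppbnd] = wcperf(clpg,uncblk,alpha,npts);
vplot('iv,d',lowbnd,uppbnd)  % lower/upper bounds on f(alpha) versus alpha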
8-229
xtract, xtracti
iv = 0.1
   2.9703e+00 - 2.9703e-01i   5.9406e+00 - 5.9406e-01i
   3.9604e+00 - 3.9604e-01i   7.9208e+00 - 7.9208e-01i
iv = 0.4
   2.5862e+00 - 1.0345e+00i   5.1724e+00 - 2.0690e+00i
   3.4483e+00 - 1.3793e+00i   6.8966e+00 - 2.7586e+00i
iv = 0.9
   1.6575e+00 - 1.4917e+00i   3.3149e+00 - 2.9834e+00i
   2.2099e+00 - 1.9890e+00i   4.4199e+00 - 3.9779e+00i
8-230
xtract, xtracti
matl = xtract(mat,.3,.8);
see(matl)
2 rows 2 columns
iv = 0.4
   2.5862e+00 - 1.0345e+00i   5.1724e+00 - 2.0690e+00i
   3.4483e+00 - 1.3793e+00i   6.8966e+00 - 2.7586e+00i
matl = xtracti(mat,2);
see(matl)
2 rows 2 columns
iv = 0.4
   2.5862e+00 - 1.0345e+00i   5.1724e+00 - 2.0690e+00i
   3.4483e+00 - 1.3793e+00i   6.8966e+00 - 2.7586e+00i
8-231
8-232