
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 1, NO. 1, MARCH 1990

Identification and Control of Dynamical Systems Using Neural Networks

KUMPATI S. NARENDRA, FELLOW, IEEE, AND KANNAN PARTHASARATHY

Abstract—The paper demonstrates that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis of the paper is on models for both identification and control. Static and dynamic back-propagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout the paper, and theoretical questions which have to be addressed are also described.

I. INTRODUCTION

MATHEMATICAL systems theory, which has in the past five decades evolved into a powerful scientific discipline of wide applicability, deals with the analysis and synthesis of dynamical systems. The best developed aspect of the theory treats systems defined by linear operators using well established techniques based on linear algebra, complex variable theory, and the theory of ordinary linear differential equations. Since design techniques for dynamical systems are closely related to their stability properties, and since necessary and sufficient conditions for the stability of linear time-invariant systems have been generated over the past century, well-known design methods have been established for such systems. In contrast to this, the stability of nonlinear systems can be established for the most part only on a system-by-system basis, and hence it is not surprising that design procedures that simultaneously meet the requirements of stability, robustness, and good dynamical response are not currently available for large classes of such systems.

In the past three decades major advances have been made in adaptive identification and control for identifying and controlling linear time-invariant plants with unknown parameters. The choice of the identifier and controller structures is based on well established results in linear systems theory. Stable adaptive laws for the adjustment of parameters in these cases, which assure the global stability of the relevant overall systems, are also based on properties of linear systems as well as stability results that are well known for such systems [1]-[3]. In this paper our interest is in the identification and control of nonlinear dynamic plants using neural networks. Since very few results exist in nonlinear systems theory which can be directly applied, considerable care has to be exercised in the statement of the problems, the choice of the identifier and controller structures, as well as the generation of adaptive laws for the adjustment of the parameters.

Two classes of neural networks which have received considerable attention in the area of artificial neural networks in recent years are: 1) multilayer neural networks and 2) recurrent networks. Multilayer networks have proved extremely successful in pattern recognition problems [2]-[5], while recurrent networks have been used in associative memories as well as for the solution of optimization problems [6]-[9]. From a systems theoretic point of view, multilayer networks represent static nonlinear maps, while recurrent networks are represented by nonlinear dynamic feedback systems. In spite of the seeming differences between the two classes of networks, there are compelling reasons to view them in a unified fashion. In fact, it is the conviction of the authors that dynamical elements and feedback will be increasingly used in the future, resulting in complex systems containing both types of networks. This, in turn, will necessitate a unified treatment of such networks. In Section III of this paper this viewpoint is elaborated further.

This paper is written with three principal objectives. The first and most important objective is to suggest identification as well as controller structures using neural networks for the adaptive control of unknown nonlinear dynamical systems. While major advances have been made in the design of adaptive controllers for linear systems with unknown parameters, such controllers cannot be used for the global control of nonlinear systems. The models suggested consequently represent a first step in this direction. A second objective is to present a prescriptive method for the dynamic adjustment of the parameters based on back propagation. The term dynamic back propagation is introduced in this context. The third and final objective is to state clearly the many theoretical assumptions that have to be made to have well posed problems. Block diagram representations of systems commonly used in systems theory, as well as computer simulations, are included throughout the paper to illustrate the various concepts introduced. The paper is organized as follows:

Manuscript received August 25, 1989; revised November 1989. This work was supported by the National Science Foundation under grant EET-8814747 and by Sandia National Laboratories under Contract 84-1791. The authors are with the Department of Electrical Engineering, Yale University, New Haven, CT 06520. IEEE Log Number 8933588.

1045-9227/90/0300-0004$01.00 © 1990 IEEE


NARENDRA AND PARTHASARATHY: IDENTIFICATION AND CONTROL OF DYNAMICAL SYSTEMS

Section II deals with basic concepts and notational details used throughout the paper. In Section III, multilayer and recurrent networks are treated in a unified fashion. Section IV deals with static and dynamic methods for the adjustment of parameters of neural networks. Identification models are introduced in Section V, while Section VI deals with the problem of adaptive control. Finally, in Section VII, some directions are given for future work.

II. PRELIMINARIES, BASIC CONCEPTS, AND NOTATION

In this section, many concepts related to the problem of identification and control are collected and presented for easy reference. While only some of them are directly used in the procedures discussed in Sections V and VI, all of them are relevant for a broad understanding of the role of neural networks in dynamical systems.

A. Characterization and Identification of Systems

System characterization and identification are fundamental problems in systems theory. The problem of characterization is concerned with the mathematical representation of a system; a model of a system is expressed as an operator P from an input space U into an output space Y, and the objective is to characterize the class 𝒫 to which P belongs. Given a class 𝒫 and the fact that P ∈ 𝒫, the problem of identification is to determine a class 𝒫̂ ⊂ 𝒫 and an element P̂ ∈ 𝒫̂ so that P̂ approximates P in some desired sense. In static systems, the spaces U and Y are subsets of Rⁿ and Rᵐ respectively, while in dynamical systems they are generally assumed to be bounded Lebesgue integrable functions on the interval [0, T] or [0, ∞). In both cases, the operator P is defined implicitly by the specified input-output pairs. The choice of the class of identification models 𝒫̂, as well as the specific method used to determine P̂, depends upon a variety of factors which are related to the accuracy desired, as well as analytical tractability. These include the adequacy of the model P̂ to represent P, its simplicity, the ease with which it can be identified, how readily it can be extended if it does not satisfy specifications, and finally whether the P̂ chosen is to be used off line or on line. In practical applications many of these decisions naturally depend upon the prior information that is available concerning the plant to be identified.

1. Identification of Static and Dynamic Systems: The problem of pattern recognition is a typical example of identification of static systems. Compact sets Uᵢ ⊂ Rⁿ are mapped into elements yᵢ ∈ Rᵐ (i = 1, 2, ⋯) in the output space by a decision function P. The elements of Uᵢ denote the pattern vectors corresponding to class yᵢ. In dynamical systems, the operator P defining a given plant is implicitly defined by the input-output pairs of time functions u(t), y(t), t ∈ [0, T]. In both cases the objective is to determine P̂ so that

    ‖P̂(u) − P(u)‖ ≤ ε,  u ∈ U   (1)

for some desired ε > 0 and a suitably defined norm (denoted by ‖·‖) on the output space. In (1), P̂(u) ≜ ŷ denotes the output of the identification model, and hence ŷ − y ≜ e is the error between the output generated by P̂ and the observed output y. A more detailed statement of the identification problem of dynamical systems is given in Section II-C.

2. The Weierstrass Theorem and the Stone-Weierstrass Theorem: Let C([a, b]) denote the space of continuous real valued functions defined on the interval [a, b], with the norm of f ∈ C([a, b]) defined by

    ‖f‖ = max over x ∈ [a, b] of |f(x)|.

The famous approximation theorem of Weierstrass states that any function in C([a, b]) can be approximated arbitrarily closely by a polynomial. Alternately, the set of polynomials is dense in C([a, b]). Naturally, Weierstrass's theorem and its generalization to multiple dimensions finds wide application in the approximation of continuous functions using polynomials (e.g., pattern recognition). A generalization of Weierstrass's theorem due to Stone, called the Stone-Weierstrass theorem, can be used as the starting point for all the approximation procedures for dynamical systems.

Theorem (Stone-Weierstrass [10]): Let U be a compact metric space. If 𝒫 is a subalgebra of C(U) which contains the constant functions and separates points of U, then 𝒫 is dense in C(U).

In the problems of interest to us we shall assume that the plant P to be identified belongs to the space 𝒫 of bounded, continuous, time-invariant and causal operators [11]. By the Stone-Weierstrass theorem, if 𝒫̂ satisfies the conditions of the theorem, a model belonging to 𝒫̂ can be chosen which approximates any specified operator P ∈ 𝒫.

A vast literature exists on the characterization of nonlinear functionals and includes the classic works of Volterra, Wiener, Barret, and Urysohn. Using the Stone-Weierstrass theorem it can be shown that a given nonlinear functional under certain conditions can be represented by a corresponding series such as the Volterra series or the Wiener series. In spite of the impressive theoretical work that these represent, very few have found wide application in the identification of large classes of practical nonlinear systems. In this paper our interest is mainly on representations which permit on-line identification and control of dynamic systems in terms of finite dimensional nonlinear difference (or differential) equations. Such nonlinear models are well known in the systems literature and are considered in the following subsection.

B. Input-State-Output Representation of Systems

The method of representing dynamical systems by vector differential or difference equations is currently well established in systems theory and applies to a fairly large class of systems. For example, the differential equations


    ẋ(t) = Φ[x(t), u(t)]
    y(t) = Ψ[x(t)]   (2)

where x(t) ≜ [x₁(t), x₂(t), ⋯, xₙ(t)]ᵀ, u(t) ≜ [u₁(t), u₂(t), ⋯, u_p(t)]ᵀ, and y(t) ≜ [y₁(t), y₂(t), ⋯, y_m(t)]ᵀ represent a p-input m-output system of order n, with uᵢ(t) representing the inputs, xᵢ(t) the state variables, and yᵢ(t) the outputs of the system. Φ and Ψ are static nonlinear maps defined as Φ: Rⁿ × Rᵖ → Rⁿ and Ψ: Rⁿ → Rᵐ. The vector x(t) denotes the state of the system at time t and is determined by the state at time t₀ < t and the input u defined over the interval [t₀, t). The output y(t) is determined completely by the state of the system at time t. Equation (2) is referred to as the input-state-output representation of the system. In this paper we will be concerned with discrete-time systems which can be represented by difference equations corresponding to the differential equations given in (2). These take the form

    x(k + 1) = Φ[x(k), u(k)]
    y(k) = Ψ[x(k)]   (3)

where u(·), x(·), and y(·) are discrete time sequences. Most of the results presented can, however, be extended to continuous time systems as well. If the system described by (3) is assumed to be linear and time invariant, the equations governing its behavior can be expressed as

    x(k + 1) = Ax(k) + Bu(k)
    y(k) = Cx(k)   (4)

where A, B, and C are (n × n), (n × p), and (m × n) matrices, respectively. The system is then parameterized by the triple {C, A, B}. The theory of linear time-invariant systems, when C, A, and B are known, is very well developed, and concepts such as controllability, stability, and observability of such systems have been studied extensively in the past three decades. Methods for determining the control input u(·) to optimize a performance criterion are also well known. The tractability of these different problems may be ultimately traced to the fact that they can be reduced to the solution of n linear equations in n unknowns. In contrast to this, the problems involving nonlinear equations of the form (3), where the functions Φ and Ψ are known, result in nonlinear algebraic equations for the solution of which similar powerful methods do not exist. Consequently, as shown in the following sections, several assumptions have to be made to make the problems analytically tractable.

Fig. 1. (a) Identification. (b) Model reference adaptive control.

C. Identification and Control

1. Identification: When the functions Φ and Ψ in (3), or the matrices A, B, and C in (4), are unknown, the problem of identification of the unknown system (referred to as the plant in the following sections) arises [12]. This can be formally stated as follows:

The input and output of a time-invariant, causal discrete-time dynamical plant are u(·) and y_p(·), respectively, where u(·) is a uniformly bounded function of time. The plant is assumed to be stable with a known parameterization but with unknown values of the parameters. The objective is to construct a suitable identification model (Fig. 1(a)) which, when subjected to the same input u(k) as the plant, produces an output ŷ_p(k) which approximates y_p(k) in the sense described by (1).

2. Control: Control theory deals with the analysis and synthesis of dynamical systems in which one or more variables are kept within prescribed limits. If the functions Φ and Ψ in (3) are known, the problem of control is to design a controller which generates the desired control input u(k) based on all the information available at that instant k. While a vast body of frequency and time-domain techniques exist for the synthesis of controllers for linear systems of the form described in (4), with A, B, and C known, similar methods do not exist for nonlinear systems, even when the functions Φ and Ψ are specified. In the last three decades there has been a great deal of interest in the control of plants when uncertainty exists regarding the dynamics of the plant [1]-[3]. To assure mathematical tractability, most of the effort has been directed towards the adaptive control of linear time-invariant plants with unknown parameters. Our interest in this paper lies primarily in the identification and control of unknown nonlinear dynamical systems.

Adaptive systems which make explicit use of models for control have been studied extensively. Such systems are commonly referred to as model reference adaptive control (MRAC) systems. The implicit assumption in the formulation of the MRAC problem is that the designer is sufficiently familiar with the plant under consideration so that he can specify the desired behavior of the plant in terms of the output of a reference model. The MRAC problem can be qualitatively stated as follows (Fig. 1(b)):

a. Model reference adaptive control: A plant P with an input-output pair {u(k), y_p(k)} is given. A stable reference model M is specified by its input-output pair {r(k), y_m(k)}, where r: N → R is a bounded function. The output y_m(k) is the desired output of the plant. The aim is to determine the control input u(k) for all k ≥ k₀ so that

    lim as k → ∞ of |y_p(k) − y_m(k)| ≤ ε

for some specified constant ε ≥ 0.

As described earlier, the choice of the identification model (i.e., its parameterization) and the method of adjusting its parameters based on the identification error eᵢ(k) constitute the two principal parts of the identification problem. Determining the controller structure, and adjusting its parameters to minimize the error between the


output of the plant and the desired output, represent the corresponding parts of the control problem. In Section II-C-3, some well-known methods for setting up an identification model and a controller structure for a linear plant, as well as the adjustment of identification and control parameters, are described. Following this, in Section II-C-4, the problems encountered in the identification and control of nonlinear dynamical systems are briefly presented.

3. Linear Systems: For linear time-invariant plants with unknown parameters, the generation of identification models is currently well known. For a single-input single-output (SISO) controllable and observable plant, the matrix A and the vectors B and C in (4) can be chosen in such a fashion that the plant equation can be written as

    y_p(k + 1) = Σ_{i=0}^{n−1} αᵢ y_p(k − i) + Σ_{j=0}^{m−1} βⱼ u(k − j)   (5)

where αᵢ and βⱼ are constant unknown parameters. A similar representation is also possible for the multi-input multi-output (MIMO) case. This implies that the output at time k + 1 is a linear combination of the past values of both the input and the output. Equation (5) motivates the choice of the following identification models:

Parallel model:

    ŷ_p(k + 1) = Σ_{i=0}^{n−1} α̂ᵢ(k) ŷ_p(k − i) + Σ_{j=0}^{m−1} β̂ⱼ(k) u(k − j)   (6)

Series-parallel model:

    ŷ_p(k + 1) = Σ_{i=0}^{n−1} α̂ᵢ(k) y_p(k − i) + Σ_{j=0}^{m−1} β̂ⱼ(k) u(k − j)   (7)

where α̂ᵢ (i = 0, 1, ⋯, n − 1) and β̂ⱼ (j = 0, 1, ⋯, m − 1) are adjustable parameters. The output of the parallel identification model (6) at time k + 1 is ŷ_p(k + 1) and is a linear combination of its past values as well as those of the input. In the series-parallel model, ŷ_p(k + 1) is a linear combination of the past values of the input and output of the plant. To generate stable adaptive laws, the series-parallel model is found to be preferable. In such a case, a typical adaptive algorithm has the form

    α̂ᵢ(k + 1) = α̂ᵢ(k) − η e(k + 1) y_p(k − i) / [1 + Σ_{i=0}^{n−1} y_p²(k − i) + Σ_{j=0}^{m−1} u²(k − j)]   (8)

where 0 < η < 2 determines the step size. In the following discussions, the constant vector of plant parameters [α₀, α₁, ⋯, α_{n−1}, β₀, ⋯, β_{m−1}]ᵀ will be denoted by θ and that of the identification model [α̂₀, ⋯, α̂_{n−1}, β̂₀, ⋯, β̂_{m−1}]ᵀ by θ̂.

Linear time-invariant plants which are controllable can be shown to be stabilizable by linear state feedback. This fact has been used to design adaptive controllers for such plants. For example, if an upper bound on the order of the plant is known, the control input can be generated as a linear combination of the past values of the input and output respectively. If θ(k) represents the control parameter vector, it can be shown that a constant vector θ* exists such that when θ(k) ≡ θ*, the plant together with the controller has the same input-output characteristics as the reference model. Adaptive algorithms for adjusting θ(k) in a stable fashion are now well known and have the general form shown in (8).

4. Nonlinear Systems: The importance of controllability and observability in the formulation of the identification and control problems for linear systems is evident from the discussion in Section II-C-3. Other well-known results in linear systems theory are also called upon to choose a reference model as well as a suitable parameterization of the plant, and to assure the existence of a desired controller. In recent years a number of authors have addressed issues such as controllability, observability, feedback stabilization, and observer design for nonlinear systems [13]-[16]. In spite of such attempts, constructive procedures similar to those available for linear systems do not exist for nonlinear systems. Hence, the choice of identification and controller models for nonlinear plants is a formidable problem, and successful identification and control has to depend upon several strong assumptions regarding the input-output behavior of the plant. For example, if a SISO system is represented by the equation (3), we shall assume that the state of the system can be reconstructed from n measurements of the input and output. More precisely,

    y_p(k) = Ψ[x(k)]
    y_p(k + 1) = Ψ[Φ[x(k), u(k)]]
    ⋯
    y_p(k + n − 1) = Ψ[Φ[⋯Φ[Φ[x(k), u(k)], u(k + 1)], ⋯, u(k + n − 2)]]

yield n nonlinear equations in the n unknowns x(k) if u(k), ⋯, u(k + n − 2), y_p(k), ⋯, y_p(k + n − 1) are specified, and we shall assume that for any set of values of u(k) in a compact region, a unique solution to the above problem exists. This permits identification procedures to be proposed for nonlinear systems along lines similar to those in the linear case.

Even when the function Φ is known in (3) and the state vector is accessible, the determination of u(·) for the plant to have a desired trajectory is an equally difficult problem. Hence, for the generation of the control input, the existence of suitable inverse operators has to be assumed. If a controller structure is assumed to generate the input u(·), further assumptions have to be made to assure the existence of a constant control parameter vector to achieve the desired objective. All these indicate that considerable progress in nonlinear control theory will be needed to obtain rigorous solutions to the identification and control problems.
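As a concrete illustration of the series-parallel model (7) and the normalized gradient law (8), the sketch below identifies a hypothetical first-order SISO plant (n = 1, m = 1). The plant coefficients, input signal, error sign convention, and step size are illustrative choices, not taken from the paper:

```python
import math

# Hypothetical first-order plant: y_p(k+1) = a0*y_p(k) + b0*u(k)
a0, b0 = 0.3, 0.7

# Adjustable parameters of the series-parallel model (7)
a_hat, b_hat = 0.0, 0.0
eta = 0.5                                     # step size, 0 < eta < 2

yp = 0.0
for k in range(200):
    u = math.sin(0.9 * k) + 0.5 * math.sin(0.23 * k + 1.0)  # rich, bounded input
    yp_next = a0 * yp + b0 * u                # plant output y_p(k+1)
    yhat_next = a_hat * yp + b_hat * u        # model regressor uses the plant output y_p(k)
    e = yhat_next - yp_next                   # identification error e(k+1) (sign convention assumed)
    norm = 1.0 + yp * yp + u * u              # normalization term of (8)
    a_hat -= eta * e * yp / norm              # update as in (8)
    b_hat -= eta * e * u / norm
    yp = yp_next
```

Because the regressor uses the measured plant output rather than the model's own past outputs, the update is a standard normalized gradient algorithm, which is what makes stable adaptive laws comparatively easy to obtain for the series-parallel configuration.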
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. I . NO. I . MARCH 1990
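The MRAC objective of Section II-C-2 is easiest to see when the plant is known. The sketch below uses a hypothetical first-order plant and reference model (all coefficients illustrative); with a and b known, an algebraic matching law makes the closed loop reproduce the reference model exactly, and the adaptive problem arises precisely because a and b are unknown:

```python
# Hypothetical first-order plant: y_p(k+1) = a*y_p(k) + b*u(k)
a, b = 0.8, 2.0

# Stable reference model: y_m(k+1) = a_m*y_m(k) + r(k), |a_m| < 1
a_m = 0.6

def control(yp, r):
    # Matching law (computable only because a and b are known here):
    # substituting u into the plant gives y_p(k+1) = a_m*y_p(k) + r(k).
    return ((a_m - a) * yp + r) / b

yp, ym = 5.0, 0.0            # plant starts away from the model state
errors = []
for k in range(60):
    r = 1.0                  # bounded reference input
    u = control(yp, r)
    yp = a * yp + b * u
    ym = a_m * ym + r
    errors.append(abs(yp - ym))
```

The tracking error obeys e(k+1) = a_m e(k) and decays geometrically, so |y_p(k) − y_m(k)| ≤ ε is met for any ε > 0 after finitely many steps.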

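The state-reconstruction assumption of Section II-C-4 can be illustrated for n = 1, where a single output measurement determines x(k). The output map Ψ below is a hypothetical strictly increasing function, and bisection stands in for whatever root-finding an application would actually use:

```python
import math

def psi(x):
    # Illustrative invertible output map (strictly increasing on R)
    return x + 0.5 * math.sin(x)

def reconstruct_state(y, lo=-10.0, hi=10.0, tol=1e-10):
    """Solve psi(x) = y for x by bisection on a compact region [lo, hi],
    relying on the assumed uniqueness of the solution there."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_true = 1.7
x_rec = reconstruct_state(psi(x_true))   # recover the state from the measurement
```

For n > 1 the same idea requires solving the n simultaneous nonlinear equations listed above, which is exactly where the uniqueness assumption on the compact region does its work.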
In spite of the above comments, the linear models de-


scribed in Section 11-C-3 motivate the choice of structures
for identifiers and controllers in the nonlinear case. It is
in these structures that we shall incorporate neural net-
works as described in Sections V and VI. A variety of
considerations discussed in Section I11 reveal that both
multilayer neural networks as well as recurrent networks,
which are currently being extensively studied, will feature hp"t Hidden Hidden
pattern layer layer layer
as subsystems in the design of identifiers and controllers
for nonlinear dynamical systems. Fig. 2. three layer neural network.

111. MULTILAYER AND RECURRENT NETWORKS


The assumptions that have to be made to assure well
posed problems using models suggested in Sections V and Fig. 3 . A block diagram representation of a three layer network.
VI are closely related to the properties of multilayer and
recurrent networks. In this section, we describe briefly the
two classes of neural networks and indicate why a unified sists of a single layer network N I , included in feedback
treatment of the two may be warranted to deal with more configuration, with a time delay (Figs. 4 and Such a
complex systems in the future. network represents a discrete-time dynamical system and
can be described by
A . Multilayer Networks
A typical multilayer network with an input layer, an
~ ( k NI[x(k)],
output layer, and two hidden layers is shown in Fig. 2. Given an initial value xo, the dynamical system evolves
For convenience we denote this in block diagram form as to an equilibrium state if NI is suitably chosen. The set of
shown in Fig. 3 with three weight matrices W ' , W 2 ,and initial conditions in the neighborhood of xo which con-
W 3 and a diagonal nonlinear operator r with identical sig- verge to the same equilibrium state is then identified with
moidal elements y [i.e., y(x) 1 e P x / l e-"] fol- that state. The term "associative memory" is used to de-
lowing each of the weight matrices. Each layer of the net- scribe such systems. Recently, both continuous-time and
work can then be represented by the operator discrete-time recurrent networks have been studied with
constant inputs [ 1 7 ] . The inputs rather than the initial
N ~ [ u ] r[wL] (9) conditions represent the patterns to be classified in this
and the input-output mapping of the multilayer network case. In the continuous-time case, the dynamic system in
can be represented by the feedback path has a diagonal transfer matrix with
identical elements l / ( s a) along the diagonal. The
y N[u] r [ W 3 r [ W 2 I ' [ W 1 u ] ] ]N 3 N 2 N I [ u ] . system is then represented by the equation
X N~[x] I
In practice, multilayer networks have been used success- so that x ( t ) E R" is the state of the system at time t , and
fully in pattern recognition problems The weights the constant vector I E W" is the input.
of the network W ' , W 2 ,and W 3are adjusted as described
in Section IV to minimize a suitable function of the error C. A Unijied Approach
e between the output y of the network and a desired output In spite of the seeming differences between the two ap-
This results in the mapping function N U ] realized by proaches to pattern recognition using neural networks, it
the network, mapping vectors into corresponding output is clear that a close relation exists between them. Recur-
classes. Generally a discontinuous mapping such as a rent networks with or without constant inputs are merely
nearest neighbor rule is used at the last stage to map the nonlinear dynamical systems and the asymptotic behavior
input sets into points in the range space corresponding to of such systems depends both on the initial conditions as
output classes. From a systems theoretic point of view, well as the specific input used. In both cases, this depends
multilayer networks can be considered as versatile nonlin- critically on the nonlinear map represented by the neural
ear maps with the elements of the weight matrices as pa- network used in the feedback loop. For example, when
rameters. In the following sections we shall use the terms no input is used, the equilibrium state of the recurrent
"weights" and "parameters" interchangeably. network in the discrete case is merely the fixed point of
the mapping N I . Thus the existence of a fixed point, the
B. Recurrent Networks conditions under which it is unique, the maximum num-
Recurrent networks, introduced in the works of Hop- ber of fixed points that can be achieved in a given network
field [6] and discussed quite extensively in the literature, are all relevant to both multilayer and recurrent networks.
provide an alternative approach to pattern recognition. Much of the current literature deals with such problems
One version of the network suggested by Hopfield con- and for mathematical tractability most of them as-

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:542 UTC from IE Xplore. Restricon aply.
I

NARENDRA A N D PARTHASARATHY: IDENTIFICATION A N D CONTROL O F DYNAMICAL SYSTEMS 9

Fig. 6. (a) Representation (b) Representation 2 . (c) Representation 3.


(d) Representation 4.

rT;T
multiplication by a constant and time delay, the class of
Fig. 4. The Hopfield network. nonlinear dynamical systems that can be generated using
We assume that recurrent networks contain only single layer networks (i.e., N¹). As mentioned earlier, inputs, when they exist, are assumed to be constant. Recently, two layer recurrent networks have also been considered [19], and more general forms of recurrent networks can be constructed by including multilayer networks in the feedback loop [20]. In spite of the interesting ideas that have been presented in these papers, our understanding of such systems is still far from complete. In the identification and control problems considered in Sections V and VI, multilayer networks are used in cascade and feedback configurations, and the inputs to such models are functions of time.

Fig. 5. Block diagram representation of the Hopfield network.

D. Generalized Neural Networks

From the above discussion, it follows that the basic element in a multilayer network is the mapping N: ℝⁿ → ℝᵐ, while the addition of the time delay element z⁻¹ in the feedback path results in a recurrent network. In fact, general recurrent networks can be constructed composed of only the basic operations of 1) delay, 2) summation, and 3) the nonlinear operator N[·]. In continuous-time networks, the delay operator is replaced by an integrator. In some cases (as in (11)), multiplication by a constant is also allowed. Hence such networks are nonlinear feedback systems which consist only of elements N[·] in addition to the usual operations found in linear systems. Since arbitrary linear time-invariant dynamical systems can be constructed using the operations of summation, multiplication by a constant, and time delay, generalized neural networks can be represented in terms of transfer matrices of linear systems [i.e., W(z)] and nonlinear operators N[·]. Fig. 6 shows these operators connected in cascade and feedback in four configurations which represent the building blocks for more complex systems. The superscript notation N^i is used in the figures to distinguish between different multilayer networks in any specific representation.

From the discussion of generalized neural networks, it follows that the mapping properties of the operators N_i[·], and consequently of N[·] (as defined in (10)), play a central role in all analytical studies of such networks. It has recently been shown in [21], using the Stone-Weierstrass theorem, that a two layer network with an arbitrarily large number of nodes in the hidden layer can approximate any continuous function f ∈ C(ℝⁿ, ℝᵐ) over a compact subset of ℝⁿ. This provides the motivation to assume that the class of generalized networks described is adequate to deal with a large class of problems in nonlinear systems theory. In fact, all the structures used in Sections V and VI for the construction of identification and controller models are generalized neural networks and are closely related to the configurations shown in Fig. 6. For ease of discussion in the rest of the paper, we shall denote the class of functions generated by a network containing N layers by the symbol 𝒩^N_{i1,i2,…,iN+1}. Such a network has i1 inputs, i_{N+1} outputs, and (N − 1) sets of nodes in the hidden layers, containing i2, i3, …, iN nodes, respectively.

IV. BACK PROPAGATION IN STATIC AND DYNAMIC SYSTEMS

In both static identification (e.g., pattern recognition) and dynamic system identification of the type treated in this paper, if neural networks are used, the objective is to determine an adaptive algorithm or rule which adjusts the parameters of the network based on a given set of input-output pairs. If the weights of the networks are considered as elements of a parameter vector θ, the learning process involves the determination of the parameter vector which optimizes a performance function J based on the output error. Back propagation is the most commonly used method for this purpose in static contexts.
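The forward map of a network in the class 𝒩^N just defined can be sketched directly. The network below belongs to 𝒩³_{1,20,10,1}; tanh is assumed for the squashing nonlinearity γ, and the random weights are purely illustrative:

```python
import numpy as np

def gamma(x):
    # Squashing nonlinearity gamma; tanh is an assumed choice of sigmoid-type function.
    return np.tanh(x)

class MultilayerNet:
    """Forward map of a member of the class N^N_{i1,...,iN+1}:
    y = gamma(W_N ... gamma(W_2 gamma(W_1 u)))."""

    def __init__(self, sizes, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        # One weight matrix per layer; sizes = [i1, i2, ..., iN+1].
        self.W = [rng.normal(scale=0.5, size=(m, n))
                  for n, m in zip(sizes[:-1], sizes[1:])]

    def __call__(self, u):
        for W in self.W:
            u = gamma(W @ u)
        return u

net = MultilayerNet([1, 20, 10, 1])   # a network in the class N^3_{1,20,10,1}
y = net(np.array([0.3]))
```

Because every layer ends in the squashing nonlinearity here, the output is bounded regardless of the weights; variants with a linear output layer are equally common.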

Authorized licensd use limted to: IE Xplore. Downlade on May 13,20 at 1:542 UTC from IE Xplore. Restricon aply.
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 1, NO. 1, MARCH 1990

The gradient of the performance function with respect to θ is computed as ∇_θ J, and θ is adjusted along the negative gradient as

θ = θ_nom − η ∇_θ J |_{θ = θ_nom}

where η, the step size, is a suitably chosen constant and θ_nom denotes the nominal value of θ at which the gradient is computed. In this section, a diagrammatic representation of back propagation is first introduced. Following this, a method of extending this concept to dynamical systems is described and the term dynamic back propagation is defined. Prescriptive methods for the adjustment of weight vectors are suggested which can be used in the identification and control problems of the type discussed in Sections V and VI.

In the early 1960's, when the adaptive identification and control of linear dynamical systems were extensively studied, sensitivity models were developed to generate the partial derivatives of the performance criteria with respect to the adjustable parameters of the system. These models were the first to use sensitivity methods for dynamical systems and provided a great deal of insight into the necessary adaptive system structure [22]-[25]. Since conceptually the above problem is identical to that of determining the parameters of neural networks in identification and control problems, it is clear that back propagation can be extended to dynamical systems as well.

A. A Diagrammatic Representation of Back Propagation

In this section we introduce a diagrammatic representation of back propagation. While the diagrammatic and algorithmic representations are informationally equivalent, their computational efficiency is different, since the former preserves information about topological and geometric relations. In particular, the diagrammatic representation provides a better visual understanding of the entire process of back propagation, lends itself to modifications which are computationally more efficient, and suggests novel modifications of the existing structure to include other functional extensions.

In the three layered network shown in Fig. 2, u^T = [u1, u2, …, u_n] denotes the input pattern vector while y^T = [y1, y2, …, y_m] is the output vector. v^T = [v1, v2, …, v_p] and z^T = [z1, z2, …, z_q] are the outputs at the first and the second hidden layers, respectively. W¹, W², and W³ are the weight matrices associated with the three layers as shown in Fig. 2. The vectors v̄, z̄, and ȳ are as shown in Fig. 2, with γ(v̄_i) = v_i, γ(z̄_k) = z_k, and γ(ȳ_l) = y_l, where v̄_i, z̄_k, and ȳ_l are elements of v̄, z̄, and ȳ, respectively. If y_d^T = [y_d1, y_d2, …, y_dm] is the desired output vector, the output error vector for a given input pattern u is defined as e = y − y_d. The performance criterion J is then defined as

J = (1/2) Σ e^T e

where the summation is carried out over all patterns in a given set. If the input patterns are assumed to be presented at each instant of time, the performance criterion J may be interpreted as the sum squared error over an interval of time. It is this interpretation which is found to be relevant in dynamic systems. In the latter case, the inputs and outputs are time sequences and the performance criterion J has the form J = (1/2T) Σ_{i=k−T+1}^{k} e²(i), where T is a suitably chosen integer.

While strictly speaking the adjustment of the parameters should be carried out by determining the gradient of J in parameter space, the procedure commonly followed is to adjust the parameters at every instant, based on the error at that instant and a small step size η. If θ_j represents a typical parameter, ∂e/∂θ_j has to be determined to compute the gradient as e(∂e/∂θ_j). The back propagation method is a convenient method of determining this gradient.

Fig. 7 shows the diagrammatic representation of back propagation for the three layer network shown in Fig. 2. The analytical method of deriving the gradient is well known in the literature and will not be repeated here. Fig. 7 merely shows how the various components of the gradient are realized. In our example, it is seen that the signals u, v, and z and γ′(v̄), γ′(z̄), and γ′(ȳ), as well as the error vector, are used in the computation of the gradient (where γ′(x) is the derivative of γ with respect to x). qm, pq, and np multiplications are needed to compute the partial derivatives with respect to the elements of W³, W², and W¹, respectively. The structure of the weight matrices in the network used to compute the derivatives is seen to be identical to that in the original network, while the signal flow is in the opposite direction, justifying the use of the term "back propagation." For further details regarding the diagrammatic representation, the reader is referred to [26] and [27]. The advantages of the diagrammatic representation mentioned earlier are evident from Fig. 7. More relevant to our purpose is that the same representation can be readily modified for the dynamic case. In fact, the diagrammatic representation was used extensively in all the simulation studies described in Sections V and VI.

B. Dynamic Back Propagation

In a causal dynamical system, the change in a parameter at time k will produce a change in the output y(t) for all t ≥ k. For example, given a nonlinear dynamical system x(k+1) = Φ[x(k), u(k), θ], y(k) = Ψ[x(k)], where θ is a parameter, u is the input, and x is the state vector defined in (3), the partial derivative of y(k) with respect to θ can be obtained by solving the linear state equations

z(k+1) = A(k)z(k) + v(k),  z(k₀) = 0
w(k) = C(k)z(k)    (12)

where z(k) = ∂x(k)/∂θ ∈ ℝⁿ, A(k) = ∂Φ/∂x(k) ∈ ℝ^{n×n}, v(k) = ∂Φ/∂θ(k) ∈ ℝⁿ, w(k) = ∂y(k)/∂θ ∈ ℝᵐ, and C(k) = ∂Ψ/∂x(k) ∈ ℝ^{m×n}.
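The sensitivity model (12) can be exercised on a one-dimensional example. The sketch below assumes a scalar plant x(k+1) = tanh(θx(k) + u(k)) with y(k) = x(k), so that Ψ is the identity and C(k) = 1; the result is checked against a central finite difference:

```python
import numpy as np

def simulate(theta, u, x0=0.0):
    # Nominal trajectory of the scalar plant x(k+1) = tanh(theta*x(k) + u(k)), y(k) = x(k).
    xs = [x0]
    for uk in u:
        xs.append(np.tanh(theta * xs[-1] + uk))
    return np.array(xs)

def sensitivity(theta, u, x0=0.0):
    # Sensitivity model (12): z(k+1) = A(k) z(k) + v(k), w(k) = C(k) z(k), with C(k) = 1 here.
    xs = simulate(theta, u, x0)
    z = 0.0                                    # z(k0) = 0: the initial state does not depend on theta
    zs = [z]
    for k in range(len(u)):
        sech2 = 1.0 - xs[k + 1] ** 2           # derivative of tanh at theta*x(k) + u(k)
        A = theta * sech2                      # A(k) = dPhi/dx along the nominal trajectory
        v = xs[k] * sech2                      # v(k) = dPhi/dtheta along the nominal trajectory
        z = A * z + v
        zs.append(z)
    return np.array(zs)

theta = 0.7
u = np.sin(np.linspace(0.0, 3.0, 30))
w = sensitivity(theta, u)

eps = 1e-6
fd = (simulate(theta + eps, u) - simulate(theta - eps, u)) / (2 * eps)
```

Here `w` agrees with the finite-difference estimate of ∂y(k)/∂θ at every instant, which is exactly the claim behind (12): the sensitivities are the output of a linear time-varying system driven by quantities computable along the nominal trajectory.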

NARENDRA AND PARTHASARATHY: IDENTIFICATION AND CONTROL OF DYNAMICAL SYSTEMS

A(k) and C(k) are Jacobian matrices, and the vector v(k) represents the partial derivative of Φ with respect to θ. Equation (12) represents the linearized equations of the nonlinear system around the nominal trajectory and input. If A(k), v(k), and C(k) can be computed, w(k), the partial derivative of y with respect to θ, can be obtained as the output of a dynamic sensitivity model.

In the previous section, generalized neural networks were defined, and four representations of such networks with dynamical systems and multilayer neural networks connected in series and feedback were shown in Fig. 6. Since complex dynamical systems can be expressed in terms of these four representations, the back-propagation method can be extended to such systems if the partial derivative of the outputs with respect to the parameters can be determined for each of the representations. In the following we indicate briefly how (12) can be specialized to these four cases. In all cases it is assumed that the partial derivative of the output of a multilayer neural network with respect to one of the parameters can be computed using static back propagation and can be realized as the output of the network in Fig. 7.

Fig. 7. Architecture for back propagation.

In representation 1, the desired output y_d(k) as well as the error e(k) ≜ y(k) − y_d(k) are functions of time. Representation 1 is the simplest situation that can arise in dynamical systems. This is because

∂e(k)/∂θ_j = W(z) ∂N[u(k)]/∂θ_j

where θ_j is a typical parameter of the network N. Since ∂N[u(k)]/∂θ_j can be computed at every instant using static back propagation, ∂e(k)/∂θ_j can be realized as the output of a dynamical system W(z) whose inputs are the partial derivatives generated.

In representation 2, the determination of the gradient is rendered more complex by the presence of a second neural network. If θ_j is a typical parameter of N², the partial derivative ∂e(k)/∂θ_j is computed by static back propagation. However, if θ_j is a typical parameter of N¹,

∂y_i/∂θ_j = Σ_l (∂y_i/∂v_l)(∂v_l/∂θ_j).

Since ∂v_l/∂θ_j can be computed using the method described in representation 1, and ∂y_i/∂v_l can be obtained by static back propagation, the product of the two yields the partial derivative of the signal y_i with respect to the parameter θ_j.

Representation 3 shows a neural network connected in feedback with a transfer matrix W(z). The input to the nonlinear feedback system is a vector u(k). If θ_j is a typical parameter of the neural network, the aim is to determine the derivatives ∂y_i(k)/∂θ_j for i = 1, 2, …, m and all k ≥ 0. We observe here for the first time a situation not encountered earlier, in that ∂y_i(k)/∂θ_j is the solution of a difference equation, i.e., ∂y_i(k)/∂θ_j is affected by its own past values:

∂̄y(k)/∂θ_j = (∂N[v]/∂v) W(z) ∂̄y(k)/∂θ_j + ∂N[v]/∂θ_j.    (13)

In (13), ∂̄y/∂θ_j is a vector, and ∂N[v]/∂v and ∂N[v]/∂θ_j are the Jacobian matrix and a vector, respectively, which are evaluated around the nominal trajectory. Hence (13) represents a linearized difference equation in the variables ∂̄y/∂θ_j. Since ∂N[v]/∂v and ∂N[v]/∂θ_j can be computed at every instant of time, the desired partial derivatives can be generated as the output of the dynamical system shown in Fig. 8(a) (the bar notation ∂̄y/∂θ_j is used in (13) to distinguish between ∂̄y/∂θ_j and ∂N[v]/∂θ_j).

In the final representation, the feedback system is preceded by a neural network N². The presence of N² does not affect the computation of the partial derivatives of the output with respect to the parameters of N¹. However, if θ_j is a typical parameter of N², it can be shown that ∂y/∂θ_j satisfies a linearized difference equation analogous to (13); alternately, it can be represented as the output of the dynamical system shown in Fig. 8(b), whose inputs can be computed at every instant of time.

In all the problems of identification and control that we will be concerned with in the following sections, the matrix W(z) is diagonal and consists only of elements of the form z^{-d_i} (i.e., a delay of d_i units). Further, since dynamic back propagation is considerably more involved than static back propagation, the structure of the identification models is chosen, wherever possible, so that the latter can be used. The models of back propagation developed here can be applied to general control problems where neural networks and linear dynamical systems are interconnected in arbitrary configurations and where static back propagation cannot be justified. For further details the reader is referred to [27].
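The recursion (13) can be sketched for the scalar case. The loop below assumes W(z) = z⁻¹ in the feedback path and a hypothetical one-parameter network N[v] = tanh(θv), so that v(k) = u(k) + y(k−1) and y(k) = N[v(k)]; the generated sensitivities are checked against a central finite difference:

```python
import numpy as np

def run(theta, u):
    # Representation 3 with W(z) = z^-1: v(k) = u(k) + y(k-1), y(k) = N[v(k)].
    # N[v] = tanh(theta*v) is a one-parameter stand-in for the multilayer network.
    y, dy = 0.0, 0.0
    ys, dys = [], []
    for uk in u:
        v = uk + y
        dv = dy                         # dv/dtheta inherits the delayed output sensitivity
        s = np.tanh(theta * v)
        sech2 = 1.0 - s * s
        # Eq. (13): dybar/dtheta = (dN/dv) * (delayed dybar/dtheta) + dN/dtheta,
        # both factors evaluated along the nominal trajectory.
        dy = sech2 * (v + theta * dv)
        y = s
        ys.append(y)
        dys.append(dy)
    return np.array(ys), np.array(dys)

theta = 0.5
u = np.cos(np.linspace(0.0, 4.0, 40))
_, dys = run(theta, u)

eps = 1e-6
fd = (run(theta + eps, u)[0] - run(theta - eps, u)[0]) / (2 * eps)
```

The agreement between the recursion and the finite difference illustrates the point of the section: inside a feedback loop, the parameter sensitivity is itself the state of a dynamical system rather than a quantity obtainable by static back propagation alone.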


Fig. 8. (a) Generation of gradient in representation 3. (b) Generation of gradient in representation 4.

A paper based on [27], but providing details concerning the implementation of the algorithms in practical applications, is currently under preparation.

V. IDENTIFICATION

As mentioned in Section III, the ability of neural networks to approximate large classes of nonlinear functions sufficiently accurately makes them prime candidates for use in dynamic models for the representation of nonlinear plants. The fact that static and dynamic back-propagation methods, as described in Section IV, can be used for the adjustment of their parameters also makes them attractive in identifiers and controllers. In this section four models for the representation of SISO plants are introduced, which can also be generalized to the multivariable case. Following this, identification models are suggested containing multilayer neural networks as subsystems. These models are motivated by the models which have been used in the adaptive systems literature for the identification and control of linear systems and can be considered as their generalization to nonlinear systems.

A. Characterization

The four models of discrete-time plants introduced here can be described by the following nonlinear difference equations:

Model I:
y_p(k+1) = Σ_{i=0}^{n−1} α_i y_p(k−i) + g[u(k), u(k−1), …, u(k−m+1)]

Model II:
y_p(k+1) = f[y_p(k), y_p(k−1), …, y_p(k−n+1)] + Σ_{i=0}^{m−1} β_i u(k−i)

Model III:
y_p(k+1) = f[y_p(k), y_p(k−1), …, y_p(k−n+1)] + g[u(k), u(k−1), …, u(k−m+1)]

Model IV:
y_p(k+1) = f[y_p(k), y_p(k−1), …, y_p(k−n+1); u(k), u(k−1), …, u(k−m+1)]    (14)

where [u(k), y_p(k)] represents the input-output pair of the SISO plant at time k, and m ≤ n. The block diagram representations of the various models are shown in Fig. 9. The functions f: ℝⁿ → ℝ in Models II and III, f: ℝ^{n+m} → ℝ in Model IV, and g: ℝᵐ → ℝ in Models I and III are assumed to be differentiable functions of their arguments.

In all the four models, the output of the plant at time k+1 depends both on its past n values y_p(k−i), i = 0, 1, …, n−1, and on the past m values of the input u(k−j), j = 0, 1, …, m−1. The dependence on the past values y_p(k−i) is linear in Model I, while in Model II the dependence on the past values of the input u(k−j) is assumed to be linear. In Model III, the nonlinear dependence of y_p(k+1) on y_p(k−i) and u(k−j) is assumed to be separable. It is evident that Model IV, in which y_p(k+1) is a nonlinear function of y_p(k−i) and u(k−j), subsumes Models I-III. If a general nonlinear SISO plant can be described by an equation of the form (3) and satisfies the stringent observability condition discussed in Section II-C-4, it can be represented by such a model. In spite of its generality, Model IV is, however, analytically the least tractable, and hence for practical applications some of the other models are found to be more attractive. For example, as will be apparent in the following section, Model II is particularly suited for the control problem.

From the results given in Section III, it follows that under fairly weak conditions on the functions f and/or g in (14), multilayer neural networks can be constructed to approximate such mappings over compact sets. We shall assume for convenience that f and/or g belong to a known class 𝒩^N_{i1,i2,…,iN+1} in the domain of interest, so that the plant can be represented by a generalized neural network as discussed in Section III. This assumption motivates the choice of the identification models and allows the statement of well posed identification problems. In particular, the identification models have the same structure as the plant but contain neural networks with adjustable parameters.

Let a nonlinear dynamic plant be represented by one of the four models described in (14). If such a plant is to be identified using input-output data, it must be further assumed that it has bounded outputs for the class of permissible inputs. This implies that the model chosen to represent the plant also enjoys this property. In the case of Model I, this implies that the roots of the characteristic equation zⁿ − α₀z^{n−1} − α₁z^{n−2} − ⋯ − α_{n−1} = 0 lie in the interior of the unit circle. In the other three cases no such simple algebraic conditions exist. Hence the study of the stability properties of recurrent networks containing multilayer networks represents an important area of research.
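The four model structures can be sketched as one-step predictors. In the sketch below, n = m = 2, and the particular choices of f, g, α_i, and β_i are placeholders for illustration, not the plant nonlinearities used in the paper:

```python
import numpy as np

# Placeholder nonlinearities and coefficients (illustrative assumptions only).
f = lambda *args: np.sin(sum(args))
g = lambda *args: sum(args) ** 3 / 10.0
alpha = [0.3, 0.6]
beta = [1.0, 0.2]

def model1(y, u):
    # Model I: linear in the past outputs, nonlinear in the past inputs.
    return alpha[0] * y[0] + alpha[1] * y[1] + g(u[0], u[1])

def model2(y, u):
    # Model II: nonlinear in the past outputs, linear in the past inputs.
    return f(y[0], y[1]) + beta[0] * u[0] + beta[1] * u[1]

def model3(y, u):
    # Model III: separable nonlinear dependence on past outputs and past inputs.
    return f(y[0], y[1]) + g(u[0], u[1])

def model4(y, u):
    # Model IV: fully coupled nonlinear dependence; subsumes Models I-III.
    return f(y[0], y[1], u[0], u[1])
```

Each function maps the regressor (y_p(k), y_p(k−1)) and (u(k), u(k−1)) to y_p(k+1); the progression from Model I to Model IV trades analytical tractability for generality, exactly as the text describes.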


Fig. 9. Representation of SISO plants. (a) Model I. (b) Model II. (c) Model III. (d) Model IV.

The models described thus far are for the representation of discrete-time plants. Continuous-time analogs of these models can be described by differential equations, as stated in Section II. While we shall deal exclusively with discrete-time systems, the same methods also carry over to the continuous-time case.

B. Identification

The problem of identification, as described in Section II-C, consists of setting up a suitably parameterized identification model and adjusting the parameters of the model to optimize a performance function based on the error between the plant and the identification model outputs. Since the nonlinear functions in the representation of the plant are assumed to belong to a known class 𝒩^N_{i1,i2,…,iN+1} in the domain of interest, the structure of the identification model is chosen to be identical to that of the plant. By assumption, weight matrices of the neural networks in the identification model exist so that, for the same initial conditions, both plant and model have the same output for any specified input. Hence the identification procedure consists in adjusting the parameters of the neural networks in the model, using the method described in Section IV, based on the error between the plant and model outputs. However, as shown in what follows, suitable precautions have to be taken to ensure that the procedure results in convergence of the identification model parameters to their desired values.

1. Parallel Identification Model: Fig. 10(a) shows a plant which can be represented by Model I with n = 2 and m = 1. To identify the plant, one can assume the structure of the identification model shown in Fig. 10(a), described by the equation

ŷ_p(k+1) = α̂₀ ŷ_p(k) + α̂₁ ŷ_p(k−1) + N[u(k)].

As mentioned in Section II-C-3, this is referred to as a parallel model.


Fig. 10. (a) Parallel identification model. (b) Series-parallel identification model.

Identification then involves the estimation of the parameters α̂_i as well as the weights of the neural network, using dynamic back propagation based on the error e(k) between the model output ŷ_p(k) and the actual output y_p(k).

From the assumptions made earlier, the plant is bounded-input bounded-output (BIBO) stable in the presence of an input (in the assumed class). Hence, all the signals in the plant are uniformly bounded. In contrast to this, the stability of the identification model as described here with a neural network cannot be assured and has to be proved. Hence if a parallel model is used, there is no guarantee that the parameters will converge or that the output error will tend to zero. In spite of two decades of work, conditions under which the parallel model parameters will converge, even in the linear case, are at present unknown. Hence, for plant representations using Models I-IV, the following identification model, known as the series-parallel model, is used.

2. Series-Parallel Model: In contrast to the parallel model described above, in the series-parallel model the output of the plant (rather than the identification model) is fed back into the identification model, as shown in Fig. 10(b). This implies that in this case the identification model has the form

ŷ_p(k+1) = α̂₀ y_p(k) + α̂₁ y_p(k−1) + N[u(k)].

We shall use the same procedure with all the four models described earlier. The series-parallel identification model corresponding to a plant represented by Model IV has the form shown in Fig. 11. TDL in Fig. 11 denotes a tapped delay line whose output vector has for its elements the delayed values of the input signal. Hence the past values of the input and the output of the plant form the input vector to a neural network whose output ŷ_p(k) corresponds to the estimate of the plant output at any instant of time k.

Fig. 11. Identification of nonlinear plants using neural networks.

The series-parallel model enjoys several advantages over the parallel model. Since the plant is assumed to be BIBO stable, all the signals used in the identification procedure (i.e., inputs to the neural networks) are bounded. Further, since no feedback loop exists in the model, static back propagation can be used to adjust the parameters, reducing the computational overhead substantially. Finally, assuming that the output error tends to a small value asymptotically so that ŷ_p(k) ≈ y_p(k), the series-parallel model may be replaced by a parallel model without serious consequences. This has practical implications if the identification model is to be used off line. In view of the above considerations, the series-parallel model is used in all the simulations in this paper.

C. Simulation Results

In this section, simulation results of nonlinear plant identification using the models suggested earlier are presented. Six examples are presented where the prior information available dictates the choice of one of the Models I-IV. Each example is chosen to emphasize a specific point. In the first five examples, the series-parallel model is used to identify the given plant, and static back propagation is used to adjust the parameters of the neural networks. A final example is used to indicate how dynamic back propagation may be used in identification problems. Due to space limitations, only the principal results are presented here.


Fig. 12. Example 1: (a) Outputs of the plant and identification model when adaptation stops at k = 500. (b) Response of plant and identification model after identification using a random input.

1. Example 1: The plant to be identified is governed by the difference equation

y_p(k+1) = 0.3 y_p(k) + 0.6 y_p(k−1) + f[u(k)]    (15)

where the unknown function has the form f(u) = 0.6 sin(πu) + 0.3 sin(3πu) + 0.1 sin(5πu). From (15), it is clear that the unforced linear system is asymptotically stable, and hence any bounded input results in a bounded output. In order to identify the plant, a series-parallel model governed by the difference equation

ŷ_p(k+1) = 0.3 y_p(k) + 0.6 y_p(k−1) + N[u(k)]

was used. The weights in the neural network were adjusted at every instant of time (T_i = 1) using static back propagation. The neural network belonged to the class 𝒩³_{1,20,10,1}, and the gradient method employed a step size of η = 0.25. The input to the plant and the model was a sinusoid u(k) = sin(2πk/250). As seen from Fig. 12(a), the output of the model follows the output of the plant almost immediately but fails to do so when the adaptation process is stopped at k = 500, indicating that the identification of the plant is not complete. Hence the identification procedure was continued for 50,000 time steps using a random input whose amplitude was uniformly distributed in the interval [−1, 1], at the end of which the adaptation was terminated. Fig. 12(b) shows the outputs of the plant and the trained model. The nonlinear function in the plant in this case is f[u] = u³ + 0.3u² − 0.4u. As can be seen from the figure, the identification error is small even when the input is changed to a sum of two sinusoids u(k) = sin(2πk/250) + sin(2πk/25) at k = 250.

2. Example 2: The plant to be identified is described by the second-order difference equation

y_p(k+1) = f[y_p(k), y_p(k−1)] + u(k)

where

f[y_p(k), y_p(k−1)] = y_p(k) y_p(k−1) [y_p(k) + 2.5] / [1 + y_p²(k) + y_p²(k−1)].    (16)

This corresponds to Model II. A series-parallel identifier of the type discussed earlier is used to identify the plant from input-output data and is described by the equation

ŷ_p(k+1) = N[y_p(k), y_p(k−1)] + u(k)    (17)

where N is a neural network with N ∈ 𝒩³_{2,20,10,1}. The identification process involves the adjustment of the weights of N using back propagation.

Some prior information concerning the input-output behavior of the plant is needed before identification can be undertaken. This includes the number of equilibrium states of the unforced system and their stability properties, the compact set to which the input belongs, and whether the plant output is bounded for this class of inputs. Also, it is assumed that the mapping N can approximate f over the desired domain.

a. Equilibrium states of the unforced system: The equilibrium states of the unforced system y_p(k+1) = f[y_p(k), y_p(k−1)], with f as defined in (16), are (0, 0) and (2, 2) in the state space. This implies that while in equilibrium without an input, the output of the plant is either the sequence {0} or the sequence {2}. Further, for any input |u(k)| ≤ 5, the output of the plant is uniformly bounded for initial conditions (0, 0) and (2, 2) and satisfies the inequality |y_p(k)| ≤ 13.

Assuming different initial conditions in the state space and with zero input, the weights of the neural network were adjusted so that the error e(k+1) = y_p(k+1) − N[y_p(k), y_p(k−1)] is minimized. When the weights converged to constant values, the equation ŷ_p(k+1) = N[ŷ_p(k), ŷ_p(k−1)] was simulated for initial conditions within a radius of 4. The identified system was found to have the same trajectories as the plant for the same initial conditions. The behavior of the plant and the identified model for different initial conditions is shown in Fig. 13. It must be emphasized here that in practice the initial conditions of the plant cannot be chosen at the discretion of the designer and must be realized only by using different inputs to the plant.

b. Identification: While the neural network realized above can be used in the identification model, a separate simulation was carried out using both inputs and outputs and a series-parallel model.
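Example 1's identification stage can be sketched in a runnable form. Because the identifier copies the known linear part 0.3 y_p(k) + 0.6 y_p(k−1), the output error reduces to f(u(k)) − N[u(k)], so static back propagation fits N to f directly. The sketch assumes a smaller 1-20-1 network in place of the paper's 𝒩³_{1,20,10,1}, and the step size and sample count are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda u: 0.6*np.sin(np.pi*u) + 0.3*np.sin(3*np.pi*u) + 0.1*np.sin(5*np.pi*u)

# One hidden layer of 20 tanh units, linear output (a simplification of the paper's network).
W1 = rng.normal(scale=1.0, size=(20, 1)); b1 = np.zeros((20, 1))
W2 = rng.normal(scale=0.5, size=(1, 20)); b2 = np.zeros((1, 1))
eta = 0.02

def N(u):
    return W2 @ np.tanh(W1 @ u + b1) + b2

def mse():
    u = np.linspace(-1.0, 1.0, 201).reshape(1, -1)
    return float(np.mean((f(u) - N(u)) ** 2))

err_before = mse()
for _ in range(30000):
    u = rng.uniform(-1.0, 1.0, size=(1, 1))   # random training input in [-1, 1]
    h = np.tanh(W1 @ u + b1)
    e = (W2 @ h + b2) - f(u)                  # output error e(k)
    back = (W2.T @ e) * (1 - h ** 2)          # error propagated back through the hidden layer
    W2 -= eta * (e @ h.T); b2 -= eta * e      # one step of static back propagation (SGD)
    W1 -= eta * (back @ u.T); b1 -= eta * back
err_after = mse()
```

As in the example, training on a sinusoid alone would not pin down f over the whole interval; the uniformly distributed random input is what makes the fit hold across [−1, 1].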


Fig. 13. Example 2: Behavior of the unforced system. (a) Actual plant. (b) Identified model of the plant.

The input u(k) was assumed to be an i.i.d. random signal uniformly distributed in the interval [−2, 2], and a step size of η = 0.25 was used in the gradient method. The weights in the neural network were adjusted at intervals of five steps using the gradient of Σ_{i=k−4}^{k} e²(i). Fig. 14 shows the outputs of the plant and the model after the identification procedure was terminated.

Fig. 14. Example 2: Outputs of the plant and the identification model.

3. Example 3: In Example 2, the input is seen to occur linearly in the difference equation describing the plant. In this example the plant is described by Model III and has the form

y_p(k+1) = f[y_p(k)] + g[u(k)].

Here f[y_p(k)] = y_p(k)/(1 + y_p²(k)) and g[u(k)] = u³(k). The series-parallel identification model consists of two neural networks N_f and N_g belonging to 𝒩³_{1,20,10,1} and can be described by the difference equation

ŷ_p(k+1) = N_f[y_p(k)] + N_g[u(k)].

The estimates f̂ and ĝ are obtained by using the neural networks N_f and N_g. The weights in the neural networks were adjusted at every instant of time using a step size of η = 0.1, and adaptation was continued for 100,000 time steps. Since the input was a random signal in the interval [−2, 2], ĝ approximates g only over this interval. Since this in turn results in the variation of y_p over the interval [−10, 10], f̂ approximates f over the latter interval. The functions f and g, as well as f̂ and ĝ over their respective domains, are shown in Fig. 15(a) and (b). In Fig. 15(c), the outputs of the plant as well as the identification model for an input u(k) = sin(2πk/25) + sin(2πk/10) are shown and are seen to be indistinguishable.

4. Example 4: The same methods used for the identification of plants in Examples 1-3 can be used when the unknown plants are known to belong to Model IV. In this example, the plant is assumed to be of the form

y_p(k+1) = f[y_p(k), y_p(k−1), y_p(k−2), u(k), u(k−1)]

where the unknown function f has the form

f[x1, x2, x3, x4, x5] = [x1 x2 x3 x5 (x3 − 1) + x4] / (1 + x2² + x3²).

In the identification model, a neural network N belonging to the class 𝒩³_{5,20,10,1} is used to approximate the function f. Fig. 16 shows the output of the plant and the model when the identification procedure was carried out using a random input signal uniformly distributed in the interval [−1, 1] and a step size of η = 0.25. As mentioned earlier, during the identification process a series-parallel model is used, but after the identification process is terminated the performance of the model is studied using a parallel model. In Fig. 16, the input to the plant and the identified model is given by u(k) = sin(2πk/250) for k ≤ 500 and u(k) = 0.8 sin(2πk/250) + 0.2 sin(2πk/25) for k > 500.

5. Example 5: In this example, it is shown that the same methods used to identify SISO plants can be used to identify MIMO plants as well.

Fig. 15. Example 3: (a) Plots of the functions f and f̂. (b) Plots of the functions g and ĝ. (c) Outputs of the plant and the identification model.

The plant, a two-input two-output system, is described by equations of the form

[y_p1(k+1), y_p2(k+1)]^T = f[y_p1(k), y_p2(k)] + [u1(k), u2(k)]^T.

This corresponds to the multivariable version of Model II. The series-parallel identification model consists of two neural networks N¹ and N² and is described by the equations

ŷ_p1(k+1) = N¹[y_p1(k), y_p2(k)] + u1(k)
ŷ_p2(k+1) = N²[y_p1(k), y_p2(k)] + u2(k).

The identification procedure was carried out using a step size of η = 0.1, with random inputs u1(k) and u2(k) uniformly distributed in the interval [−1, 1]. The responses of the plant and the identification model for a vector input [sin(2πk/25), cos(2πk/25)]^T are shown in Fig. 17.

Fig. 16. Example 4: Identification of Model IV.

Comment: In Examples 1, 3, 4, and 5, the adjustment of the parameters was carried out by computing the gradient of e(k) at instant k, while in Example 2 adjustments were based on the gradient of an error function evaluated over an interval of length 5. While from a theoretical point of view it is preferable to use a larger interval to define the error function, very little improvement was observed in the simulations. This accounts for the fact that in Examples 3, 4, and 5 adjustments were based on the instantaneous rather than an average error signal.

6. Example 6: In Examples 1-5, a series-parallel identification model was used, and hence the parameters of the neural networks were adjusted using the static back-propagation method. In this example, we consider a simple first-order nonlinear system which is identified using the dynamic back-propagation method discussed in Section IV.

Fig. 17. Example 5: Responses of the plant and the identification model.

Fig. 18. Example 6: (a) Outputs of plant and identification model. (b) f[u] and N[u] for |u| ≤ 1.

first order nonlinear system which is identified using the dynamic back-propagation method discussed in Section IV. The nonlinear plant is described by the difference equation

yp(k+1) = 0.8 yp(k) + f[u(k)]

where the function f[u] = (u - 0.8) u (u + 0.5) is unknown. However, it is assumed that f can be approximated to the desired degree of accuracy by a multilayer neural network.

The identification model used is described by the difference equation

ŷp(k+1) = 0.8 ŷp(k) + N[u(k)]

and the neural network belonged to the class N³(1, 20, 10, 1). The model chosen corresponds to representation 1 in Section IV (refer to Fig. 6(a)). The objective is to adjust a total of 261 weights in the neural network so that e(k) ≜ ŷp(k) - yp(k) → 0 asymptotically. Defining the performance criterion to be minimized as

J = (1/2T) Σ_{i=k-T+1}^{k} e²(i)

the partial derivative of J with respect to a weight θj in the neural network can be computed as

∂J/∂θj = (1/T) Σ_{i=k-T+1}^{k} e(i) (∂e(i)/∂θj).

The quantity ∂e(i)/∂θj can be computed in a dynamic fashion using the method discussed in Section IV and used in the following rule to update θj:

Δθj = -η (∂J/∂θj)

where η is the step size in the gradient procedure.

Fig. 18(a) shows the outputs of the plant and the identification model when the weights in the neural network were adjusted after an interval of 10 time steps using a step size of 0.01. The input to the plant (and the identification model) was u(k) = sin(2πk/25). In Fig. 18(b), the function f(u) = (u - 0.8) u (u + 0.5), as well as the function realized by the three layer neural network after 50 000 steps for u ∈ [-1, 1], are shown. As seen from the figure, the neural network approximates the given function quite accurately.
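The update procedure of Example 6 can be sketched in code. As a simplification (an assumption made here for brevity, not the paper's setup), the multilayer network N is replaced by a linear-in-parameters model N(u) = θ·φ(u) with polynomial features φ(u) = [1, u, u², u³], so the dynamic gradient reduces to the explicit sensitivity recursion ∂ŷp(k+1)/∂θ = 0.8 ∂ŷp(k)/∂θ + φ(u(k)):

```python
import numpy as np

# Dynamic back-propagation sketch for the parallel model of Example 6:
#   plant:  yp(k+1)   = 0.8*yp(k)   + f[u(k)],  f(u) = (u - 0.8)*u*(u + 0.5)
#   model:  yhat(k+1) = 0.8*yhat(k) + N[u(k)]
# N is linear in its parameters here (polynomial features), an illustrative
# stand-in for the paper's multilayer network.  The sensitivity recursion
# treats the slowly adapting parameters as constant.

def f(u):
    return (u - 0.8) * u * (u + 0.5)

def phi(u):
    return np.array([1.0, u, u * u, u ** 3])

rng = np.random.default_rng(0)
theta = np.zeros(4)
T, eta = 10, 0.02                 # update interval and step size
yp = yhat = 0.0
s = np.zeros(4)                   # sensitivity d yhat / d theta
grad = np.zeros(4)

for k in range(50000):
    u = rng.uniform(-1.0, 1.0)
    s = 0.8 * s + phi(u)          # d yhat(k+1)/d theta = 0.8*s + phi(u)
    yp = 0.8 * yp + f(u)
    yhat = 0.8 * yhat + theta @ phi(u)
    e = yhat - yp
    grad += e * s                 # accumulate e(i) * (d e(i)/d theta)
    if (k + 1) % T == 0:
        theta -= eta * grad / T   # Delta theta = -eta * dJ/d theta
        grad[:] = 0.0

fit_err = max(abs(theta @ phi(u) - f(u)) for u in np.linspace(-1, 1, 21))
```

Since f is itself a cubic, the parameter vector can match it exactly, and `fit_err` should become small after training.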


NARENDRA AND PARTHASARATHY: IDENTIFICATION AND CONTROL OF DYNAMICAL SYSTEMS

VI. CONTROL OF DYNAMICAL SYSTEMS

As mentioned in Section II, for the sake of mathematical tractability, most of the effort during the past two decades in model reference adaptive control theory has been directed towards the control of linear time-invariant plants with unknown parameters. Much of the theoretical work in the late 1970's was aimed at determining adaptive laws for the adjustment of the control parameter vector θ(k) which would result in stable overall systems. In 1980 [30]-[33], it was conclusively shown that for both discrete-time and continuous-time systems, such stable adaptive laws could be determined provided that some prior information concerning the plant transfer function was available. Since that time much of the research in the area has been directed towards determining conditions which assure the robustness of the overall system under different types of perturbations.

In contrast to the above, very little work has been reported on the global adaptive control of plants described by nonlinear difference or differential equations. It is in the control of such systems that we are primarily interested in this section. Since, in most problems, very little theory exists to guide the analysis, one of the aims is to indicate precisely how the nonlinear control problem differs from the linear one and the nature of the theoretical questions that have to be answered.

Algebraic and Analytic Parts of Adaptive Control Problems: In conventional adaptive control theory, two stages are generally distinguished in the adaptive process. In the first, referred to as the algebraic part, it is shown that the controller has the necessary degrees of freedom to achieve the desired objective. More precisely, if some prior information regarding the plant is given, it is shown that a controller parameter vector θ* exists for every value of the plant parameter vector p, so that the output of the plant together with the controller approaches the output of the reference model asymptotically. The analytic part of the problem is then to determine stable adaptive laws for adjusting θ(k) so that lim_{k→∞} θ(k) = θ* and the output error tends to zero.

Direct and Indirect Control: For over 20 years, two distinct approaches have been used [1] to control a plant adaptively: 1) direct control and 2) indirect control. In direct control, the parameters of the controller are directly adjusted to reduce some norm of the output error. In indirect control, the parameters of the plant are estimated as the elements of a vector p̂(k) at any instant k, and the parameter vector θ(k) of the controller is chosen assuming that p̂(k) represents the true value p of the plant parameter vector. Even when the plant is assumed to be linear and time invariant, both direct and indirect adaptive control result in overall nonlinear systems. Figs. 19 and 20 represent the structure of the overall adaptive system using the two methods for the adaptive control of a linear time-invariant plant [1].

Fig. 19. Direct adaptive control.

Fig. 20. Indirect adaptive control.

A. Adaptive Control of Nonlinear Systems Using Neural Networks

For a detailed treatment of direct and indirect control systems, the reader is referred to [1]. The same approaches which have proved successful for linear plants can also be attempted when nonlinear plants have to be adaptively controlled. The structures used for the identification model as well as the controller are strongly motivated by those used in the linear case. However, in place of the linear gains, nonlinear neural networks are used.

Methods for identifying nonlinear plants using delayed values of both plant input and output were discussed in the previous section, and Fig. 11 shows a general identification model. Fig. 21 shows a controller whose output is the control input to the plant and whose inputs are the delayed values of the plant input and output, respectively.

Indirect Control: At present, methods for directly adjusting the control parameters based on the output error (between the plant and the reference model outputs) are not available. This is because the unknown nonlinear plant in Fig. 21 lies between the controller and the output error ec. Hence, until such methods are developed, adaptive control of nonlinear plants has to be carried out using indirect methods. This implies that the methods described in Section V have to be first used on line to identify the input-output behavior of the plant. Using the resulting identification model, which contains neural networks and linear dynamical elements as subsystems, the parameters of the controller are adjusted; this is shown in Fig. 22. It is this procedure of identification followed by control that is adopted in this section. Dynamic back propagation through a system consisting of only neural networks and linear dynamic elements was discussed in Section IV as a means of determining the gradient of a performance index with respect to the adjustable parameters of a system. Since identification of the unknown plant is carried out using only neural networks and tapped delay lines, the identification model can be used to compute the partial derivatives of a performance index with respect to the controller parameters.
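This computation can be illustrated with scalar stand-ins. In the sketch below everything is an illustrative assumption, not the paper's networks: the already-trained identification model is replaced by the known linear map N̂(ŷ, u) = 0.5ŷ + u, and the controller is linear with parameters w1, w2. The controller gradient is obtained by forward sensitivity propagation through the identification model only, never through the plant:

```python
import numpy as np

# Indirect adjustment of controller parameters: the gradient of the output
# error is computed through the identification model, never through the
# unknown plant.  Scalar stand-ins (illustrative assumptions):
#   identification model (assumed trained):  yhat(k+1) = 0.5*yhat(k) + u(k)
#   controller:                              u(k) = w1*yhat(k) + w2*r(k)
#   reference model:                         ym(k+1) = 0.6*ym(k) + r(k)
# Matching therefore requires w1 = 0.1 and w2 = 1.  The sensitivities
# s = d yhat / d [w1, w2] are propagated forward, treating the slowly
# adapting parameters as constant:
#   s(k+1) = 0.5*s(k) + du/dw,  du/dw = [yhat, r] + w1*s(k)

rng = np.random.default_rng(0)
w = np.zeros(2)                  # controller parameters [w1, w2]
eta = 0.02
yhat, ym = 0.0, 0.0
s = np.zeros(2)

for k in range(20000):
    r = rng.uniform(-1.0, 1.0)
    u = w[0] * yhat + w[1] * r
    dudw = np.array([yhat, r]) + w[0] * s
    yhat = 0.5 * yhat + u        # step the identification model
    ym = 0.6 * ym + r            # step the reference model
    s = 0.5 * s + dudw
    ec = yhat - ym
    w -= eta * ec * s            # gradient step on (1/2)*ec^2
```

With a persistently exciting reference input, the parameters drift to the matching values, after which the output error decays to zero.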



Fig. 21. Direct adaptive control of nonlinear plants using neural networks.

Fig. 22. Indirect adaptive control using neural networks.

B. Simulation Results

The procedure adopted to adaptively control a nonlinear plant depends largely on the prior information available regarding the unknown plant. This includes knowledge of the number of equilibrium states of the unforced system, their stability properties, as well as the amplitude of the input for which the output is also bounded. For example, if the plant is known to have a bounded output for all inputs u belonging to some compact set, then the plant can be identified off line using the methods outlined in Section V. During identification, the weights in the identification model can be adjusted at every instant of time or at discrete time intervals Ti.

Once the plant has been identified to the desired level of accuracy, control action can be initiated so that the output of the plant follows the output of a stable reference model. It must be emphasized that even if the plant has bounded outputs for bounded inputs, feedback control may result in unbounded solutions. Hence, for on-line control, identification and control must proceed simultaneously. The time intervals Ti and Tc, respectively, over which the identification and control parameters are to be updated, have to be judiciously chosen in such a case.

Five examples, in which nonlinear plants are adaptively controlled, are included below and illustrate the ideas discussed earlier. As in the previous section, each example is chosen to emphasize a specific point.

1. Example 7: We consider here the problem of controlling the plant discussed in example 2, which is described by the difference equation

yp(k+1) = f[yp(k), yp(k-1)] + u(k)

where the function

f[yp(k), yp(k-1)] = yp(k) yp(k-1) [yp(k) + 2.5] / [1 + yp²(k) + yp²(k-1)]

is assumed to be unknown. A reference model is described by the second-order difference equation

ym(k+1) = 0.6 ym(k) + 0.2 ym(k-1) + r(k)

where r(k) is a bounded reference input. If the output error ec(k) is defined as ec(k) ≜ yp(k) - ym(k), the aim of control is to determine a bounded control input u(k) such that lim_{k→∞} ec(k) = 0. If the function f is known, it follows directly that at stage k, u(k) can be computed from a knowledge of yp(k) and its past values as

u(k) = -f[yp(k), yp(k-1)] + 0.6 yp(k) + 0.2 yp(k-1) + r(k)   (20)

resulting in the error difference equation ec(k+1) = 0.6 ec(k) + 0.2 ec(k-1). Since the reference model is asymptotically stable, it follows that lim_{k→∞} ec(k) = 0 for arbitrary initial conditions. However, since f is unknown, it is estimated on line as f̂, as discussed in example 2, using a neural network N and the series-parallel method. The control input to the plant at any instant k is computed using N[·] in place of f as

u(k) = -N[yp(k), yp(k-1)] + 0.6 yp(k) + 0.2 yp(k-1) + r(k).   (21)

This results in the nonlinear difference equation

yp(k+1) = f[yp(k), yp(k-1)] - N[yp(k), yp(k-1)] + 0.6 yp(k) + 0.2 yp(k-1) + r(k)   (22)

governing the behavior of the plant. The structure of the overall system is shown in Fig. 23.

In the first stage, the unknown plant was identified off line using random inputs as described in example 2. Following this, (21) was used to generate the control input. The response of the controlled system with a reference input r(k) = sin(2πk/25) is shown in Fig. 24(b).

In the second stage, both identification and control were implemented simultaneously using different values of Ti and Tc. The asymptotic response of the system when identification and control start at k = 0 with Ti = Tc = 1 is shown in Fig. 25(a). Since it is desirable to adjust the control parameters at a slower rate than the identification parameters, the experiment was repeated with Ti = 1 and Tc = 3; the result is shown in Fig. 25(b). Since the identification process is not complete for small values of k, the control can be theoretically unstable. However, this was not observed in the simulations. If the control is initiated at time


Fig. 23. Example 7: Structure of the overall system.


Fig. 24. Example 7: (a) Response for no control action. (b) Response for r(k) = sin(2πk/25) with control.

Fig. 25. Example 7: (a) Response when control is initiated at k = 0 with Ti = Tc = 1. (b) Response when control is initiated at k = 0 with Ti = 1 and Tc = 3.

k = 0 using nominal values of the parameters of the neural network with Ti = Tc = 10, the output of the plant was seen to increase in an unbounded fashion, as shown in Fig. 26.

The simulations reported above indicate that for stable and efficient on-line control, the identification must be sufficiently accurate before control action is initiated, and hence Ti and Tc should be chosen with care.
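The certainty-equivalence structure used in Example 7 can be sketched as follows. For brevity the identification stage is omitted and the estimate f̂ is taken to be exact (an assumption; the nonlinearity f below is also only an illustrative stand-in for the unknown plant), so the output error obeys exactly ec(k+1) = 0.6 ec(k) + 0.2 ec(k-1) and decays for arbitrary initial conditions:

```python
import math

def f(a, b):                 # illustrative plant nonlinearity (assumption)
    return a * b * (a + 2.5) / (1.0 + a * a + b * b)

# plant:           yp(k+1) = f(yp(k), yp(k-1)) + u(k)
# reference model: ym(k+1) = 0.6*ym(k) + 0.2*ym(k-1) + r(k)
# control law:     u(k) = -fhat(yp(k), yp(k-1)) + 0.6*yp(k) + 0.2*yp(k-1) + r(k)
fhat = f                     # perfect estimate, so ec follows the stable
                             # error equation ec(k+1) = 0.6*ec(k) + 0.2*ec(k-1)

yp, yp1 = 1.0, 0.5           # plant starts away from the reference model
ym, ym1 = 0.0, 0.0
errs = []
for k in range(60):
    r = math.sin(2 * math.pi * k / 25)
    u = -fhat(yp, yp1) + 0.6 * yp + 0.2 * yp1 + r
    yp, yp1 = f(yp, yp1) + u, yp
    ym, ym1 = 0.6 * ym + 0.2 * ym1 + r, ym
    errs.append(abs(yp - ym))
```

When f̂ ≠ f, the same loop instead realizes the perturbed closed-loop equation (22), which is why identification accuracy matters before control is switched on.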


Fig. 26. Example 7: Response when control is initiated at k = 0 with Ti = Tc = 10.

Fig. 27. Example 8: Responses of the reference model and the plant when no control action is taken.

2. Example 8: The unknown plant in this case corresponds to Model II and can be described by a difference equation of the form

yp(k+1) = f[yp(k), ..., yp(k-n+1)] + Σ_{i=0}^{m-1} βi u(k-i).

It is assumed that the parameters βi (i = 0, 1, ..., m-1) are unknown, but that β0 is nonzero with a known sign. The specific plant used in the simulation study was

yp(k+1) = 5 yp(k) yp(k-1) / [1 + yp²(k) + yp²(k-1) + yp²(k-2)] + u(k) + 0.8 u(k-1).   (23)

The output of the stable reference model is described by

ym(k+1) = 0.32 ym(k) + 0.64 ym(k-1) - 0.5 ym(k-2) + r(k)

where r is the uniformly bounded reference input. The responses of the reference model and the plant when r(k) = u(k) = sin(2πk/25) are shown in Fig. 27. While the output of the reference model is also a sinusoid of the same frequency, the response of the plant is seen to contain higher harmonics. It is assumed that sgn β0 = +1 and that β0 ≥ 0.1. This enables a projection type algorithm to be used in the identification procedure so that the estimate β̂0 of β0 satisfies the inequality β̂0 ≥ 0.1. The control input at any instant of time k is generated as

u(k) = (1/β̂0) [-f̂[yp(k), yp(k-1), yp(k-2)] - β̂1 u(k-1) + 0.32 yp(k) + 0.64 yp(k-1) - 0.5 yp(k-2) + r(k)].   (24)

In Fig. 28, the plant is identified over a period of 50 000 time steps using an input which is random and distributed uniformly over the interval [-2, 2]. At the end of this interval, the control is implemented as given in (24). The responses of the plant as well as the reference model are shown in Fig. 28. In Fig. 28(a) the reference input is r(k) = sin(2πk/25), while in Fig. 28(b) the reference input is r(k) = sin(2πk/25) + sin(2πk/10). In both cases the control system is found to perform satisfactorily. Since the plant is identified over a sufficiently long time with a general input, the parameters β̂0 and β̂1 are found to converge to 1.005 and 0.8023, respectively, which are close to the true values of 1 and 0.8.

In Fig. 29 the response of the controlled plant to a reference input r(k) = sin(2πk/25) is shown, when identification and control are initiated at k = 0. Since the input is not sufficiently general, β̂0(k) and β̂1(k) tend to the values 4.71 and 3.59, so that the asymptotic values of the parameter errors are large. In spite of this, the output error is seen to tend to zero for values of k greater than 9900. This example reveals that good control may be possible without good parameter identification.

3. Example 9: In this case, the plant is described by the same equation as in (23) with 0.8u(k-1) replaced by 1.1u(k-1), and the same procedure is adopted as in example 8 to generate the control input. It is found that the output error is bounded and even tends to zero while the control input grows in an unbounded fashion (Fig. 30). This is a phenomenon which is well known in adaptive control theory and arises due to the presence of zeros of the plant transfer function lying outside the unit circle. In the present context, u(k) + 1.1u(k-1) can be zero even as |u(k)| = (1.1)^k tends to ∞ in an oscillatory fashion. The same phenomenon can also occur in systems where the dependence of yp on u is nonlinear.

4. Example 10: The control of the nonlinear multivariable plant with two inputs and two outputs, discussed in example 5, is considered in this example, and the plant is described by (18). The reference model is linear and is described by the difference equations

Fig. 28. Example 8: Identification followed by control.

Fig. 29. Example 8: (a) Initial response when control action is initiated at k = 0. (b) Asymptotic response.
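The projection step that keeps β̂0 ≥ 0.1 in Example 8 can be sketched in isolation. The toy plant below keeps only the linear-in-control part of (23) (an assumption made for brevity; the nonlinear term and its neural-network estimate are omitted):

```python
import numpy as np

# Projection-type gradient identification of [beta0, beta1] with the
# constraint bhat[0] >= 0.1 (sign of beta0 known).  Toy plant:
#   y(k+1) = beta0*u(k) + beta1*u(k-1)
rng = np.random.default_rng(1)
beta_true = np.array([1.0, 0.8])
bhat = np.array([0.1, 0.0])        # start on the constraint boundary
eta = 0.1
u_prev = 0.0
for k in range(2000):
    u = rng.uniform(-2.0, 2.0)
    x = np.array([u, u_prev])      # regressor [u(k), u(k-1)]
    e = bhat @ x - beta_true @ x   # identification error
    bhat -= eta * e * x            # gradient step
    bhat[0] = max(bhat[0], 0.1)    # projection: keep bhat[0] >= 0.1
    u_prev = u
```

With a sufficiently general random input the estimates converge to the true values; the projection guarantees that the division by β̂0 in the control law (24) stays well defined throughout.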

Fig. 30. Example 9: (a) Outputs of the reference model and the plant when control is initiated at k = 0. (b) The feedback control input u.

where r1 and r2 are bounded reference inputs. The plant is identified as in example 5, and control is initiated after the identification process is complete. The responses of the plant as compared to the reference model for the same inputs are shown in Fig. 31. The improvement in the responses, when the neural networks in the identification model are used to generate the control input to the plant, is evident from the figure. The outputs of the controlled plant and the reference model indicate that the output error is almost zero.

5. Example 11: In examples 7-10, the output of the plant depends linearly on the control input. This makes the computation of the latter relatively straightforward. In this example the plant is described by Model III and has

Fig. 31. Example 10: (a), (b) Outputs of the reference model and the plant when no control action is taken. (c), (d) Outputs of the reference model and the plant with feedback control.

the form

yp(k+1) = f[yp(k)] + g[u(k)]

which was identified successfully in example 3. Choosing the reference model as

ym(k+1) = 0.6 ym(k) + r(k)

the aim once again is to choose u(k) so that lim_{k→∞} |yp(k) - ym(k)| = 0. Since f[yp] = yp/(1 + yp²) and g[u] = u³, the control input in this case is chosen as

u(k) = ĝ⁻¹[-f̂[yp(k)] + 0.6 yp(k) + r(k)]   (25)

where f̂ and ĝ⁻¹ are the estimates of f and g⁻¹, respectively. The estimates f̂ and ĝ are obtained as described earlier using neural networks Nf and Ng. Since ĝ[u] has been realized as the output of a neural network Ng, the weights of a neural network Nc (shown in Fig. 32) can be adjusted so that ĝ[Nc(r)] ≈ r as r(k) varies over the interval [-4, 4]. The range [-4, 4] was chosen for r(k) since this assures that the input to the identification model varies over the same range for which the estimates f̂ and ĝ are valid. In Fig. 33, Ng[Nc(r)] is plotted against r and is seen to have unity slope over the entire range. The determination of Nc was carried out over 25 000 time steps using a random input uniformly distributed in the interval [-4, 4] and a step size of 0.01. Since the plant nonlinearities f and g, as well as g⁻¹, have been estimated using the neural networks Nf, Ng, and Nc, respectively, the control input to the plant can be determined using (25).

Fig. 33. Example 11: Plot of the function Ng[Nc(r)].


Fig. 34. Example 11: (a) Outputs of the reference model and plant without a feedback controller. (b) Outputs of the reference model and plant with a feedback controller.

The output of the plant for a reference input r(k) = sin(2πk/25) + sin(2πk/10) is shown in Fig. 34(a) when a feedback controller is not used; the response with a controller is shown in Fig. 34(b). The response in Fig. 34(b) is almost indistinguishable from that of the reference model. Hence, from this example we conclude that it may be possible in some cases to generate a control input to an unknown plant so that almost perfect model following is achieved.

VII. COMMENTS AND CONCLUSIONS

In this paper, models for the identification and control of nonlinear dynamic systems are suggested. These models, which include multilayer neural networks as well as linear dynamics, can be viewed as generalized neural networks. In the specific models given, the delayed values of relevant signals in the system are used as inputs to multilayer neural networks. Methods for the adjustment of parameters in generalized neural networks are treated, and the concept of dynamic back propagation is introduced in this context to generate partial derivatives of a performance index with respect to adjustable parameters on line. However, for many identifiers and controllers it is shown that, by using a series-parallel model, the gradient can be obtained with the simpler static back-propagation method.

The simulation studies on low order nonlinear dynamic systems reveal that identification and control using the methods suggested can be very effective. There is every reason to believe that the same methods can also be used successfully for the identification and control of multivariable systems of higher dimensions. Hence, they should find wide application in many areas of applied science.

Several assumptions were made concerning the plant characteristics in the simulation studies to achieve satisfactory identification and control. For example, in all cases the plant was assumed to have bounded outputs for the class of inputs specified. An obvious and important extension of the methods in the future will be to the control of unstable systems in some compact domain in the state space. All the plants were also assumed to be of relative degree unity (i.e., the input at instant k affects the output at k+1) and minimum phase (i.e., no unbounded input lies in the null space of the operator representing the plant), and Models II and III used in the control problems assumed that inverses of operators existed and could be approximated. Future work will attempt to relax some or all of these assumptions. Further, in all cases the gradient method is used exclusively for the adjustment of the parameters. Since it is well known that such methods can


lead to instability for large values of the step size, it is essential that efforts be directed towards determining stable adaptive laws for adjusting the parameters. Such work is currently in progress.

A number of assumptions were made throughout the paper regarding the plant to be controlled for the methods to prove successful. These include stability properties of recurrent networks with multilayer neural networks in the forward path; controllability, observability, and identifiability of the models suggested; as well as the existence of nonlinear controllers to match the response of the reference model. At the present stage of development of nonlinear control theory, few constructive methods exist for checking the validity of these assumptions in the context of general nonlinear systems. However, the fact that we are dealing with special classes of systems represented by generalized neural networks should make the development of such methods more tractable. Hence, concurrent theoretical research in these areas is needed to justify the models suggested in this paper.

ACKNOWLEDGMENT

The authors would like to thank the reviewers and the associate editor for their careful reading of the paper and their helpful comments.

REFERENCES

[1] K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Englewood Cliffs, NJ: Prentice-Hall, 1989.
[2] D. J. Burr, "Experiments on neural net recognition of spoken and written text," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, no. 7, pp. 1162-1168, July 1988.
[3] R. P. Gorman and T. J. Sejnowski, "Learned classification of sonar targets using a massively parallel network," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, no. 7, pp. 1135-1140, July 1988.
[4] T. J. Sejnowski and C. R. Rosenberg, "Parallel networks that learn to pronounce English text," Complex Syst., vol. 1, pp. 145-168, 1987.
[5] B. Widrow, R. G. Winter, and R. A. Baxter, "Layered neural nets for pattern recognition," IEEE Trans. Acoust., Speech, Signal Processing, vol. 36, no. 7, pp. 1109-1118, July 1988.
[6] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Nat. Acad. Sci. U.S., vol. 79, pp. 2554-2558, Apr. 1982.
[7] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biolog. Cybern., vol. 52, pp. 141-152, 1985.
[8] D. W. Tank and J. J. Hopfield, "Simple 'neural' optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. Circuits Syst., vol. CAS-33, no. 5, pp. 533-541, May 1986.
[9] H. Rauch and T. Winarske, "Neural networks for routing communication traffic," IEEE Control Syst. Mag., vol. 8, no. 2, pp. 26-30, Apr. 1988.
[10] N. B. Haaser and J. A. Sullivan, Real Analysis. New York: Van Nostrand Reinhold, 1971.
[11] P. G. Gallman and K. S. Narendra, "Identification of nonlinear systems using a Uryson model," Becton Center, Yale University, New Haven, CT, tech. rep. CT-38, Apr. 1971; also Automatica, Nov. 1976.
[12] L. Ljung and T. Soderstrom, Theory and Practice of Recursive Identification. Cambridge, MA: M.I.T. Press, 1985.
[13] S. N. Singh and W. J. Rugh, "Decoupling in a class of nonlinear systems by state variable feedback," Trans. ASME, vol. 94, pp. 323-324, 1972.
[14] E. Freund, "The structure of decoupled nonlinear systems," Int. J. Contr., vol. 21, pp. 651-654, 1975.
[15] A. Isidori, A. J. Krener, C. Gori Giorgi, and S. Monaco, "Nonlinear decoupling via feedback: A differential geometric approach," IEEE Trans. Automat. Contr., vol. AC-26, pp. 331-345, 1981.
[16] S. S. Sastry and A. Isidori, "Adaptive control of linearizable systems," IEEE Trans. Automat. Contr., vol. 34, no. 11, pp. 1123-1131, Nov. 1989.
[17] F. J. Pineda, "Generalization of back propagation to recurrent networks," Phys. Rev. Lett., vol. 59, no. 19, pp. 2229-2232, Nov. 1987.
[18] J. H. Li, A. N. Michel, and W. Porod, "Qualitative analysis and synthesis of a class of neural networks," IEEE Trans. Circuits Syst., vol. 35, no. 8, pp. 976-986, Aug. 1988.
[19] B. Kosko, "Bidirectional associative memories," IEEE Trans. Syst., Man, Cybern., vol. 18, no. 1, pp. 49-60, Jan./Feb. 1988.
[20] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems. Part I: A gradient approach to Hopfield networks," Center Syst. Sci., Dept. Electrical Eng., Yale University, New Haven, CT, tech. rep. 8820, Oct. 1988.
[21] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Dept. Economics, University of California, San Diego, CA, discussion pap., June 1988.
[22] K. S. Narendra and L. E. McBride, Jr., "Multiparameter self-optimizing system using correlation techniques," IEEE Trans. Automat. Contr., vol. AC-9, pp. 31-38, 1964.
[23] P. V. Kokotovic, "Method of sensitivity points in the investigation and optimization of linear control systems," Automat. Remote Contr., vol. 25, pp. 1512-1518, 1964.
[24] L. E. McBride, Jr. and K. S. Narendra, "Optimization of time-varying systems," IEEE Trans. Automat. Contr., vol. AC-10, no. 3, pp. 289-294, 1965.
[25] J. B. Cruz, Jr., Ed., System Sensitivity Analysis, Benchmark Papers in Electrical Engineering and Computer Science. Stroudsburg, PA: Dowden, Hutchinson and Ross, 1973.
[26] K. S. Narendra and K. Parthasarathy, "A diagrammatic representation of back propagation," Center Syst. Sci., Dept. Electrical Eng., Yale University, New Haven, CT, tech. rep. 8815, Aug. 1988.
[27] K. S. Narendra and K. Parthasarathy, "Back propagation in dynamical systems containing neural networks," Center Syst. Sci., Dept. Electrical Eng., Yale University, New Haven, CT, tech. rep. 8905, Mar. 1989.
[28] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems. Part II: Identification," Center Syst. Sci., Dept. Electrical Eng., Yale University, New Haven, CT, tech. rep. 8902, Feb. 1989.
[29] K. S. Narendra and K. Parthasarathy, "Neural networks and dynamical systems. Part III: Control," Center Syst. Sci., Dept. Electrical Eng., Yale University, New Haven, CT, tech. rep. 8909, May 1989.
[30] K. S. Narendra, Y. H. Lin, and L. S. Valavani, "Stable adaptive controller design-Part II: Proof of stability," IEEE Trans. Automat. Contr., vol. 25, pp. 440-448, June 1980.
[31] A. S. Morse, "Global stability of parameter adaptive systems," IEEE Trans. Automat. Contr., vol. 25, pp. 433-439, June 1980.
[32] G. C. Goodwin, P. J. Ramadge, and P. E. Caines, "Discrete time multivariable adaptive control," IEEE Trans. Automat. Contr., vol. 25, pp. 449-456, June 1980.
[33] K. S. Narendra and Y. H. Lin, "Stable discrete adaptive control," IEEE Trans. Automat. Contr., vol. 25, pp. 456-461, June 1980.

Kumpati S. Narendra (S'55-M'60-SM'63-F'79) received the B.S. degree with honors in electrical engineering from Madras University in 1954 and the M.S. and Ph.D. degrees from Harvard University in 1955 and 1959, respectively. From 1961 to 1965 he was an Assistant Professor in the Division of Applied Physics at Harvard. He joined the Department of Engineering and Applied Science at Yale in 1965 and was made Professor in 1968. He was the Chairman of the Department of Electrical Engineering from 1984


to 1987, and currently he is the Director of the Center for Systems Science at Yale. He is the author of numerous technical publications in the area of systems theory. He is the author of the books Frequency Domain Criteria for Absolute Stability (coauthor J. H. Taylor), published by Academic Press in 1973; Stable Adaptive Systems (coauthor A. M. Annaswamy); and Learning Automata-An Introduction (coauthor M. A. L. Thathachar), published by Prentice-Hall in 1989. He is also the editor of the books Applications of Adaptive Control (coeditor R. V. Monopoli), published by Academic Press in 1980, and Adaptive and Learning Systems (Plenum Press, 1986). He is currently editing a reprint volume entitled Recent Advances in Adaptive Control (coeditors R. Ortega and P. Dorato), which will be published by the IEEE Press. His research interests are in the areas of stability theory, adaptive control, learning automata, and the control of complex systems using neural networks. He has served on various national and international technical committees as well as several editorial boards of technical journals.

Dr. Narendra is a member of Sigma Xi and the American Mathematical Society and a Fellow of the American Association for the Advancement of Science and the IEE (U.K.). He was the recipient of the 1972 Franklin V. Taylor Award of the IEEE Systems, Man, and Cybernetics Society and the George S. Axelby Best Paper Award of the IEEE Control Systems Society in 1988.

Kannan Parthasarathy was born in India on August 31, 1964. He received the B.Tech. degree in electrical engineering from the Indian Institute of Technology, Madras, in 1986 and the M.S. and M.Phil. degrees in electrical engineering from Yale University in 1987 and 1988, respectively. At present he is a doctoral candidate at Yale University. His research interests include adaptive and learning systems, neural networks, and their application to adaptive control problems.

Mr. Parthasarathy is the recipient of the Connecticut High Technology Graduate Scholarship for the year 1989-1990.

