FRACTAL PHYSIOLOGY
AND CHAOS IN MEDICINE
2nd Edition
Bruce J West
Army Research Off ice, USA
World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI
For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.
ISBN 978-981-4417-79-2
Contents

Preface ix
1 Introduction 1
1.1 What is Linearity? . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Why Uncertainty? . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 How Does Nonlinearity Change Our View? . . . . . . . . . 12
1.4 Complex Networks . . . . . . . . . . . . . . . . . . . . . . . 19
1.5 Summary and a Look Forward . . . . . . . . . . . . . . . . 21
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
References 281
Index 313
This book is concerned with the application of fractals and chaos (as well as other concepts from nonlinear dynamical systems theory) to biomedical phenomena. In particular, I have used biomedical data sets and modern mathematical concepts to argue against the outdated notion of homeostasis. It seems to me that health is at least a homeodynamic process with multiple steady states — each being capable of survival. This idea was developed in collaboration with my friend and colleague A. Goldberger during long discussions in which we attempted to learn each other's disciplines. This book is not restricted to our own research, however, but draws
from the research of a large number of investigators. Herein we seek breadth
rather than depth in order to communicate some of the excitement being
experienced by scientists making applications of these concepts in the life
sciences. I have tried in most cases to motivate a new mathematical con-
cept using a biomedical data set and have avoided discussing mathematics
for its own sake. Herein the phenomena to be explained take precedence
over the mathematics and therefore one will not find any proofs, but some attempt has been made to provide references to where such proofs can be found.
I wish to thank all those who have provided help and inspiration over the
years, in particular L. Glass, A. Goldberger, A. Mandell and M. Shlesinger;
with special thanks to A. Babloyantz for a critical reading of an early
version of the manuscript. I also wish to thank Ms. Rosalie Rocher for her
expert word processing of the manuscript and W. Deering for making the
time to complete this work available to me.
Bruce J. West
Denton, TX
July 4, 1990
Second printing
I am gratified that this small book is going into a second printing. If time had allowed I might have updated the examples given in the last chapter and winnowed out some of the more speculative comments sprinkled throughout. However, much that was tentative and speculative ten years ago has since become well documented, if not universally accepted. So if I had started down the road of revision I could easily have written an entirely new book. Upon rereading the text I decided that it accomplished its original purpose rather well, that being to communicate to a broad audience the recent advances in modeling that have had, and are continuing to have, a significant influence on physiology and medicine. So I decided to leave well enough alone.
Bruce J. West
Research Triangle Park, NC
January 1, 2000
Second Edition
In the first edition of this book fractal physiology was a phrase intended
to communicate what I and a few others thought, along with nonlinear
dynamics and chaos, was a dominant feature of phenomena in physiology
and medicine. In the nearly quarter century since its publication fractal
physiology has matured into an active area of research on its own. In a
similar way nonlinear dynamic models have replaced earlier, less inclusive
and more restrictive, linear models of biomedical phenomena. Much of the
content of the earlier book was preliminary and tentative, but has withstood
the test of time and settled into a life of its own. But rather than taking
victory laps I have elected to leave relatively unchanged those sections that
were correct and useful and to supplement the text with discussion of and
reference to the breakthroughs that have occurred in the intervening years.
The driver dozing behind the wheel of a car speeding along the highway, the momentary lapse in concentration of the air-traffic controller, or the continuous activity of an airplane pilot could all benefit from a diagnostic tuned to the activity of the brain associated with wakeful attentiveness. As systems become more complex and operators are required to handle ever-increasing amounts of data and rapidly make decisions, the need for such a diagnostic becomes increasingly clear. New techniques to assess the state of the operator in real time have now become available, so that the capability exists for alerting the operator, or someone in the command structure, to the possibility of impending performance breakdown. This is an example of how new ideas for understanding dynamic networks have been used in biomedical and social systems.
Clinicians with their stethoscopes poised over the healthy heart, radiolo-
gists tracking the flow of blood and bile, and physiologists probing the ner-
vous system are all, for the most part unknowingly, exploring the frontiers
of chaos and fractals. The related topics of chaos and fractals are central
concepts in the discipline of nonlinear dynamics developed in physics and
mathematics over the past quarter century. Perhaps the most compelling
applications of these concepts are not in the physical sciences but rather in
physiology and medicine where fractals and chaos have radically changed
long-held views about order and variability in health and disease. One of
the things I attempt to document here is that a healthy physiological net-
1
tury and Cybernetics by N. Wiener just after World War Two. The systems
approach has gained currency in the past quarter century particularly as
regards the importance of nonlinear dynamics [381]. This contemporary
perspective supports the notion that homeostasis is overly restrictive and
one might more reasonably associate a principle of homeodynamics with
the present day understanding of biomedical processes. Such a principle
would require the existence of multiple metastable states for any physio-
logical variable rather than a single steady state.
In the physical sciences erratic fluctuations are often the result of the
phenomenon of interest being dynamically coupled to an unknown and of-
ten unknowable environment. This is how the phenomenon of diffusion is
understood, as we subsequently discuss. However in the life sciences this
model is probably unsuitable. The erratic behavior of healthy physiological networks should not be interpreted solely as transient perturbations produced by a fluctuating environment, but should also include the normal 'chaotic' behavior associated with a new paradigm of health. The articulation of such a principle was probably premature in the first edition of this book twenty-odd years ago, but the evidence garnered in the intervening years supports its acceptance.
Herein I review such mathematical concepts as strange attractors, the generators of chaos in many situations, and fractal statistics, arguing that far from being unusual, these still somewhat unfamiliar mathematical constructs may be the dynamical maps of healthy fluctuations in the heart,
brain and other organs observed under ordinary circumstances. Broad in-
verse power-law spectra of time series representing the dynamic behavior
of biological systems appear to be markers of physiological information,
not ‘noise’. Thus, rather than neglecting such irregular behavior scientists
now attempt to extract the information contained therein when assessing
physiologic networks.
The activity of cardiac pulses and brain waves is quite similar to a wide
variety of other natural phenomena that exhibit irregular and apparently
unpredictable or random behavior. Examples that immediately come to
mind are the changes in the weather over a few days' time, the height of the
next wave breaking on the beach as I sit in the hot sun, shivering from a
cold wind blowing down my back, and the infuriating intermittency in the
time intervals between the drips from the bathroom faucet just after I crawl
into bed at night. In some cases such as the weather, the phenomenon ap-
pears to be always random, but in other cases such as the dripping faucet,
sometimes the dripping is periodic and other times each drip appears to be
independent of the preceding one, thereby forming an irregular sequence in
time [316]. The formal property that all these phenomena share is nonlin-
earity, so that my initial presentations focus on how nonlinear models differ
from linear ones. In particular I examine how simple nonlinearities can gen-
erate aperiodic processes, and consequently apparently random phenomena
in a brainwave context [32], in a cardiac environment [127], and in a broad
range of other biomedical situations [381].
R = αF + β  (1.1)

R = α_1 F_1 + α_2 F_2  (1.2)

R = α · F = Σ_{j=1}^{N} α_j F_j.  (1.3)
In this last equation, we see that the total response of the system, here
a scalar, is a sum of the independent applied forces Fj each weighted by
its own sensitivity coefficient αj . These ideas carry over to more general
systems where F is a generalized time-dependent force vector and R is the
generalized scalar response.
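To make the algebra of Eq. (1.3) concrete, the following minimal numerical sketch (the sensitivity coefficients and forces are hypothetical, and NumPy is assumed) verifies the defining property of linearity: the response to the combined forces equals the sum of the responses to each force applied alone.

    import numpy as np

    # Hypothetical sensitivity coefficients alpha_j and applied forces F_j.
    alpha = np.array([0.5, 1.5, 2.0])
    F = np.array([1.0, -2.0, 0.25])

    # Response to all forces applied together: R = sum_j alpha_j F_j, Eq. (1.3).
    R_combined = np.dot(alpha, F)

    # Superposition: add up the responses to each force applied in isolation.
    R_separate = sum(alpha[j] * F[j] for j in range(len(F)))

    print(R_combined, R_separate)  # identical: for a linear system responses add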
As discussed by Lavrentév and Nikol’skii [195], one of the most fruitful
and brilliant ideas of the second half of the seventeenth century was the
concept that a function and the geometric representation of a line are re-
lated. Geometrically the notion of a linear relation between two quantities
implies that if a graph is constructed with the ordinate denoting the values
of one variable and the abscissa denoting the values of the other then the
relation in question appears as a straight line. In systems of more than
two variables, a linear relation defines a higher order ‘flat’ surface. For ex-
ample, three variables can be realized as a three-dimensional coordinate
space, and the linear relation defines a plane in this space. One often sees
this notion employed in the analysis of data by first transforming one or
more of the variables to a form in which the data is anticipated to lie on
a straight line. Thus, one often searches for a representation in which lin-
ear ideas may be valid since the analysis of linear systems is completely
understood, whereas that for nonlinear systems of various kinds is still rel-
atively primitive [364, 385]. Of course nothing is free, so the difficulty of
the original problem reasserts itself in properly interpreting the nonlinear
transformation.
The two notions of linearity that we have expressed here, algebraic and
geometric, although equivalent, have quite different implications. The latter
use of the idea is a static graph of a function expressed as the geometrical
locus of the points whose coordinates satisfy a linear relationship. The
former expression in our examples has to do with the response of a system
to an applied force which implies that the system is dynamic, that is,
the physical observables change over time even though the force-response
relation may be independent of time. This change of the observable in
time is referred to as the evolution of the system and for only the simplest
systems is the relation between the dependent and independent variables
a linear one. Even the relation between the position and time of a falling
object is nonlinear, even though the force law of gravity is linear. We have
ample opportunity to explore the distinction between the above static and
dynamic notions of linearity. It should be mentioned that if the axes for
the graphical display exhaust the independent variables that describe the
system, then the two interpretations dovetail.
In the child’s swing example, specifying the height of the swing or equiv-
alently its angular position, completely determines the instantaneous con-
figuration of the swing. The swing after all is constrained by the length of
the chains and is therefore one-dimensional. As time moves on the point
(swing seat) traces out a curve, called an orbit or trajectory, that describes
the history of the system’s evolution. Each point in phase space is a state
of the system. Thus, an orbit gives the sequence of states occupied by the
system through time, but does not indicate how long the system occupies
a particular state. The state of an individual’s health, to give a ‘simple’
example, consists of their age, weight, height, blood pressure, and all the
sundry measures physicians have come to rely on as the various technolo-
gies have become available. The ‘space’ of health has an axis for each of
these variables and one’s life might be viewed as a trajectory in this high-
dimensional space. Height, weight and other factors change as life unfolds
but the trajectory never strays too far from a region we associate with
health. Such details are discussed subsequently.
This geometrical representation of dynamics is one of the more useful
tools in dynamic systems theory for analyzing the time-dependent prop-
erties of nonlinear systems. By nonlinear we now know that we mean the
output of the system is not proportional to the input. One implication of
this is the following: If the system is linear, then two trajectories initiated
at nearby points in phase space would evolve in close proximity, so that
at any point in future time the two trajectories (and therefore the states
of the system they represent) would also be near one another. If the sys-
tem is nonlinear then two such trajectories could diverge from one another
and at subsequent times (exactly how long is discussed subsequently) the
two trajectories become arbitrarily far apart, that is, the distance between
the orbits does not evolve in a proportionate way. Of course this need not
necessarily happen in a nonlinear system; it is a question of stability.
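This divergence of nearby orbits is easy to exhibit numerically. The sketch below uses the logistic map, taken up in Chapter Three, as a stand-in nonlinear system; the initial condition, separation, and parameter value are illustrative only.

    # Sensitive dependence in the logistic map x -> r x (1 - x) at r = 4.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.3, 0.3 + 1e-10          # two trajectories, tiny initial separation
    for n in range(51):
        if n % 10 == 0:
            print(n, abs(x - y))      # separation grows roughly exponentially
        x, y = logistic(x), logistic(y)

After some fifty iterations the separation is of the order of the attractor itself, and the two states are effectively unrelated.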
The accepted criteria for understanding a given phenomenon vary as
one changes from discipline to discipline since different disciplines are at
different levels of scientific maturity. In the early developmental stage of a
discipline one is often satisfied with characterizing a phenomenon by means
of a detailed verbal description. This stage of development reaches maturity
when general concepts are introduced which tie together observations by
means of one or few basic principles, for example, Darwin [73] did this for
biological evolution through the introduction of: (1) the principle of uni-
versal evolution, (2) the law of natural selection, and (3) the law of survival
of the fittest. Freud [106] did this with human behavior through the intro-
duction of concepts such as conversion hysteria and the gross properties of
the systems examined. As observational techniques became more refined
additional detailed structures associated with these gross properties were
uncovered. In the examples cited the genetic structure of the DNA molecule
has for some replaced Darwin’s notion of ’survival of the fittest’ and causal
relations for social behavior are now sought at the level of biochemistry
[75]. The schism between Freud’s vision of a grand psychoanalytic theory
and microbiology is even greater. The criteria for understanding the latter
stages of development are quite different from those in the first stage. At
these ‘deeper’ levels the underlying principles must be universal and tied
to the disciplines of mathematics, physics, and chemistry. This is no less
true for medicine as we pass from the clinical diagnosis of an ailment to its
laboratory cure. Thus, concepts such as energy and entropy appear in the
discussion of microbiological processes and are used to guide the progress
of research in these areas.
The mathematical models that have historically developed throughout
Natural Philosophy have followed the paradigms of physics and chemistry.
Not just in the search for basic postulates that are universally applicable
and from which one can draw deductions, but more restrictively at the op-
erational level the techniques that have been adopted, with few exceptions,
have been linear. One example of this, the implications of which prove to
be quite important in physiology, has to do with the ability to isolate and
measure, that is, to operationally define a variable. In Natural Philosophy
this operational definition of a variable becomes intertwined with the con-
cept of linearity and therein lies the problem. To unambiguously define a
variable it must be measured in isolation, that is, in a context in which
the variable is uncoupled from the remainder of the universe. This situa-
tion can sometimes be achieved in the physical sciences (leaving quantum
mechanical considerations aside), but not so in the social and life sciences.
Thus, one must assume that the operational definition of a variable is suf-
ficient for the purposes of using the concept in the formulation of a model.
This assumption presumes that the interaction of the variable with other
‘operationally defined’ variables constituting the system is sufficiently weak
that for some specified conditions the interactions may be neglected. In the
physical sciences one has come to call such effects ‘weak interactions’ and
perturbation theories have been developed to describe successively stronger
interactions between a variable and the physical system of interest. Not only
is there no a priori reason why this should be true in general, but in point
of fact there is a great deal of experimental evidence that it is not true.
Consider the simple problem of measuring the physical dimensions of a
tube, when that tube is part of a complex physiological structure such as the
lung or the cardiovascular system. Classical measuring theory tells us how
we should proceed. After all, the diameter of the tube is just proportional
to a standard unit of length with which the measurement is taken. Isn’t it?
The answer to this question may be no. The length of a cord or a tube is
not necessarily given by the classical result. In a number of physical and
biomedical systems there may in fact be no fundamental scale of length (be
it distance or time) with which to measure the properties of the system,
the length may depend on how we measure it. The experimental evidence
for and implications of this remark are presented in Chapter Two where
we introduce and discuss the concept of fractal introduced by Mandelbrot
(1924-2010) [217, 219] and first discussed quantitatively in a physiologic
context by West and Goldberger [367] and followed by a parade of others.
In 1733 Jonathan Swift wrote: “So, Nat'ralists observe, a Flea Hath smaller Fleas that on him prey, And these have smaller Fleas to bite 'em, And so proceed ad infinitum.”; and some 129 years later de Morgan modified these verses to: “Great fleas have little fleas upon their backs to bite
’em and little fleas have lesser fleas, and so ad infinitum.” These couplets
capture an essential feature of what is still one of the more exciting con-
cepts in the physical, social and life sciences. This is the notion that the
dynamical activity observed in many natural phenomena is related from
one level to the next by means of a scaling relation. These poets observed
a self-similarity between scales, small versions of what is observed on the
largest scales repeat in an ever decreasing cascade of activity at smaller
and smaller scales. Processes possessing this characteristic are known as
geometric fractals. There is no simple compact definition of a fractal, but all attempts at one incorporate the idea that the whole is made up of
parts similar to the whole in some way. For example, those processes de-
scribed by fractal time manifest their scale invariance through their spectra in which the various frequencies contributing to the dynamics are tied together through an inverse power law of the form 1/f^a, where f is the frequency.

... in the vicinity of the average value (most probable value; predicted value), so that the largest fraction of the experimental results is concentrated at the center of the distribution. The farther a value is from the peak the fewer times it is observed in the experimental data.
FIGURE 1.1. The universal Normal distribution is obtained by subtracting the average
value from each data element and dividing by the standard deviation. The peak is the
most probable value and the width of the distribution is unity since the variable has
been normalized by the standard deviation.
Four assumptions are needed to prove the CLT: 1) the errors are independent; 2) the errors are additive; 3) the statistics of each error are the same; and 4) the width of the distribution is finite. These four assumptions were either explicitly or implicitly made by Gauss, Adrain and Laplace and result in the Normal distribution. I emphasize that this distribution requires linearity.
The Normal distribution has been used as the backbone for describ-
ing statistical variability in the physical, social and life sciences well into
the twentieth century. The entire nineteenth and much of the twentieth century were devoted to experimentally verifying that the statistical fluctuations observed in naturally occurring phenomena are Normal. It was disconcerting when such careful experiments began to reveal that complex phenomena have fluctuations that are not Normal, such as the distribution of income determined using the data collected by the Marquis Vilfredo Federico Damaso Pareto (1848–1923) at the end of the nineteenth century.
The income distribution was determined by Pareto to be an inverse power
law, with a long tail, and now bears his name. There is a great deal to
say about such distributions and where they are found in medicine [381].
In subsequent chapters I show that the properties necessary to prove the
CLT are violated by complex phenomena, particularly the assumptions of
independence and additivity and their violations give rise to inverse power
laws.
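The distinction can be checked in a few lines. In the sketch below (NumPy assumed; sample sizes and the tail index are arbitrary choices) sums of independent, additive, finite-width errors are approximately Normal, while samples drawn from a Pareto law display the inverse power-law tail that no such averaging produces.

    import numpy as np

    rng = np.random.default_rng(1)

    # CLT: sums of many independent, additive, finite-width errors look Normal.
    sums = rng.uniform(-1.0, 1.0, size=(10000, 100)).sum(axis=1)
    z = (sums - sums.mean()) / sums.std()
    print("fraction within one standard deviation:", np.mean(np.abs(z) < 1))  # ~0.68

    # Pareto: survival probability P(X > x) falls off as x^(-mu).
    mu = 1.5                                    # illustrative tail index
    x = np.sort(rng.pareto(mu, size=100_000) + 1.0)
    survival = 1.0 - np.arange(1, x.size + 1) / x.size
    keep = (x > 2.0) & (survival > 1e-4)
    slope = np.polyfit(np.log(x[keep]), np.log(survival[keep]), 1)[0]
    print("estimated tail exponent:", -slope)   # close to mu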
... if complete information were available about the initial state of the system, then in principle the evolution of the system would be predictable.

As Crutchfield et al. [65] and others have pointed out, this viewpoint has been altered by the discovery that simple deterministic systems with only a few degrees of freedom can generate random behavior. They emphasize that
the random aspect is fundamental to the system dynamics and gathering
more information does not reduce the degree of uncertainty. Randomness
or uncertainty generated in this way is now called chaos. The distinction
between the ‘traditional’ view and the ‘modern’ view of randomness is
captured in the quotation from Henri Poincaré (1854–1912) [275]:
A very small cause which escapes our notice determines a
considerable effect that we cannot fail to see, and then we say
that the effect is due to chance. If we knew exactly the laws of
nature and the situation of the universe at the initial moment,
we could predict exactly the situation of that same universe
at a succeeding moment. But even if it were the case that the
natural laws had no longer any secret for us, we could still only
know the initial situation approximately. If that enabled us to
predict the succeeding situation with the same approximation,
that is all we require, and we should say that the phenomenon
had been predicted, that it is governed by laws. But it is not
always so; it may happen that small differences in the initial
conditions produce very great ones in the final phenomena. A
small error in the former will produce an enormous error in the
latter. Prediction becomes impossible, and we have the fortu-
itous phenomenon.
Laplace believed in strict determinism and to his mind this implied
complete predictability. Uncertainty for him is a consequence of impre-
cise knowledge, so that probability theory is necessitated by incomplete
and imperfect observations. Poincaré on the other hand sees an intrinsic
inability to make predictions due to a sensitive dependence of the evolution
of the system on the initial state of the system. This sensitivity arises from
an intrinsic instability of the system as first explained in a modern context
by Lorenz [205].
Recall the notion of a phase space and of a trajectory to describe the
dynamics of a system. Each choice of an initial state produces a different
trajectory. If however there is a limiting set in phase space to which all
trajectories are drawn after a sufficiently long time, we say that the sys-
tem dynamics are described by an attractor. An attractor is the geometric
limiting set on which all the trajectories eventually find themselves, that
is, the set of points in phase space to which the trajectories are attracted.
Attractors come in many shapes and sizes, but they all have the property
of occupying a finite volume of phase space. Initial points off the attractor
initiate trajectories that are drawn to it if they lie in the attractor’s basin
of attraction. As a system evolves it sweeps through the attractor, going
through some regions rather rapidly and others quite slowly, but always
staying on the attractor. Whether or not the system is chaotic is determined
by how two initially adjacent trajectories cover the attractor over time. As
Poincaré stated, a small change in the initial separation (error) of any two
trajectories produces an enormous change in their final separation (error).
The question is how this separation is accomplished on an attractor of
finite size. The answer has to do with the layered structure necessary for
an attractor to be chaotic.
Rössler [298] described chaos as resulting from the geometric operations
of stretching and folding, often called the baker’s transformation. The con-
ceptual baker in this transformation takes some dough and rolls it out on
a floured bread board. When thin enough he folds the dough back onto
itself and rolls it out again. To transform this image into a mathematically
precise statement we assume that the baker rolls out the dough until it is
twice as long as it is wide (the width remains constant during this opera-
tion) and then folds the extended piece back reforming the initial square.
For a cleaner image we may assume that the baker cuts the dough before
neatly placing the one piece atop the other. Arnol’d gave a memorable illustration of this process using the head of a cat (cf. Arnol’d and Avery [13]).
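A minimal sketch of the cut-and-stack version of this transformation acting on points of the unit square follows; the particular points and the number of iterations are illustrative.

    # Baker's transformation: stretch to double length and half height,
    # then cut at x = 1/2 and stack the right half on top of the left.
    def baker(x, y):
        if x < 0.5:
            return 2.0 * x, 0.5 * y
        return 2.0 * x - 1.0, 0.5 * y + 0.5

    p, q = (0.2000, 0.40), (0.2001, 0.40)   # two nearby points in the square
    for _ in range(10):
        p, q = baker(*p), baker(*q)
    print(p, q)   # horizontal separation roughly doubles at each iteration

The horizontal stretching is the source of the divergence of nearby states, while the vertical contraction and the stacking keep the dynamics confined to the square.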
In Figure 1.2 a cross section of the square of dough is shown with the head of a cat inscribed. After the first rolling operation the head is flattened and
stretched, that is, it becomes half its height and twice its length. It is then
cut in the center and the segment of dough to the right is set above the one
on the left to reform the initial square, as depicted in the center frame. The
operation is repeated again and we see that at the right the cat’s head is now ...
The interpretation of chaotic dynamics as a source of information has been championed by Shaw [316, 317] and Nicolis [248, 249]. One can view
the preparation of the initial state of the system as initializing a certain
amount of information. The more precisely the initial state can be speci-
fied, the more information one has available. This corresponds to localizing
the initial state of the system in phase space, the amount of information is
inversely proportional to the volume of state space localized by measure-
ment. In a regular attractor, trajectories initiated in a given local volume
stay near to one another as the system evolves, so the initial information is
preserved in time and no new information is generated. Thus, the initial in-
formation can be used to predict the final state of the system. On a chaotic
attractor the stretching and folding operations smear out the initial vol-
ume, thereby destroying the initial information as the system evolves and
the dynamics create new information. As a result the initial uncertainty in
the specification of the system is eventually spread over the entire attractor
and all predictive power is lost, that is, all causal connection between the
present and the future is lost. This is referred to as sensitive dependence
on initial conditions.
Let us denote the region of phase space initially occupied by the system as V_i (initial volume) and the final region by V_f. The change in the observable information I is then determined by the change in value from the initial to the final state [248, 316]

δI = log_2 (V_f/V_i).  (1.4)
The rate of information creation or dissipation is given by

dI/dt = (1/V) dV/dt  (1.5)
where V is the time-dependent volume over which the initial conditions are spread. In non-chaotic systems the sensitivity of the flow to the initial conditions grows with time at most as a polynomial; for example, let ω(t) be the number of distinguishable states at time t, so that

ω(t) ∝ t^n.  (1.6)
The relative size of the volume and the relative number of states in this case remain the same,

V_f/V_i = ω_f/ω_i  (1.7)
so that the rate of change in the information is [316]

dI/dt ∼ n/t.  (1.8)
Thus, the rate of information generation converges to zero as t → ∞ and the final state is predictable from the initial information. On the other hand, in chaotic systems the sensitivity of the flow to initial conditions grows exponentially with time,

ω(t) ∝ e^{nt},  (1.9)

so that

dI/dt ∼ n.  (1.10)
This latter system is therefore a continuous source of information, the at-
tractor itself generates the information independently of the initial condi-
tions. This property of chaotic dynamic systems was used by Nicolis and
Tsuda [248] to model cognitive systems. The concepts from chaotic at-
tractors are used for information processing in neurophysiology, cognitive
psychology and perception [249]. To pursue these latter applications in any detail would take us too far afield, but we mention the existence of such applications where appropriate.
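The spreading of initial information can be followed numerically. In this sketch (NumPy assumed; the ensemble size and the partition of phase space are arbitrary) a cloud of initial conditions confined to a single cell of a 1000-cell partition is iterated under the chaotic logistic map, and the growth of δI of Eq. (1.4) is tracked through the number of occupied cells.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.300, 0.301, size=10_000)   # initial volume: one cell in 1000

    cells = 1000
    for n in range(26):
        occupied = np.unique(np.minimum((x * cells).astype(int), cells - 1)).size
        if n % 5 == 0:
            print(n, np.log2(occupied))          # delta-I relative to one cell, Eq. (1.4)
        x = 4.0 * x * (1.0 - x)

The printed value rises from zero and saturates near log_2(1000) ≈ 10 bits once the ensemble covers the attractor, after which no memory of the initial cell remains.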
The final measure of the degree of chaos associated with an attractor with
which I am concerned is the set of Lyapunov exponents. These exponents
quantify the average exponential convergence or divergence of nearby trajectories in the phase space of the dynamical system. Wolf [403] believed
the spectrum of Lyapunov exponents provides the most complete qualita-
tive and quantitative characterization of chaotic behavior. A system with
one or more positive Lyapunov exponents is defined to be chaotic. The local
stability properties of a system are determined by its response to pertur-
bations; along certain directions the response can be stable whereas along
others it can be unstable. If we consider a d-dimensional sphere of initial conditions and follow the evolution of this sphere in time, then in some directions the sphere will contract, whereas in others it will expand, thereby forming a d-dimensional ellipsoid. Thus, a d-dimensional system can be characterized by d exponents where the j-th Lyapunov exponent quantifies the expansion or contraction of the flow along the j-th ellipsoidal principal axis. The sum of the Lyapunov exponents is the average divergence, which
for a dissipative system (possessing an attractor) must always be negative.
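For a one-dimensional map the single Lyapunov exponent can be estimated as the long-time average of ln |f′(x)| along a trajectory. A minimal sketch for the logistic map, whose exponent at r = 4 is known to be ln 2 ≈ 0.693, is given below; the parameter values and iteration counts are illustrative.

    import math

    def lyapunov_logistic(r, n=200_000, x0=0.3, transient=1000):
        # lambda = <ln |f'(x)|> with f(x) = r x (1 - x) and f'(x) = r (1 - 2x)
        x, total = x0, 0.0
        for i in range(n + transient):
            if i >= transient:
                total += math.log(abs(r * (1.0 - 2.0 * x)))
            x = r * x * (1.0 - x)
        return total / n

    print(lyapunov_logistic(4.0))   # ~0.693 > 0: chaotic
    print(lyapunov_logistic(3.2))   # < 0: stable period-2 cycle, not chaotic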
Consider a three-dimensional phase space in which the attractor can be
characterized by the triple of Lyapunov exponents (λ1 , λ2 , λ3 ). The quali-
tative behavior of the attractor can be specified by determining the signs of
the Lyapunov exponents only, that is, (sign λ_1, sign λ_2, sign λ_3), as shown in Figure 1.3.

FIGURE 1.3. The signs of the Lyapunov exponents of different attractor types in a three-dimensional phase space: (−, −, −) fixed point; (0, −, −) limit cycle; (0, 0, −) torus; (+, 0, −) strange attractor. From the upper left, going clockwise, we have a fixed point, a van der Pol limit cycle, a two-dimensional torus, and a two-dimensional projection of a Rössler oscillator.
The triple (0, 0, −) has two neutral directions and one that is contracting
so that the attractor is the 2-torus depicted in Figure 1.3c. The surface of
the torus is neutrally stable and trajectories off the surface are drawn onto
it asymptotically.
Finally, (+, 0, −) corresponds to a chaotic attractor in which the trajecto-
ries expand in one direction, are neutrally stable in another and contracting
in a third. In order for the trajectories to continuously expand in one di-
rection and yet remain on a finite attractor, the attractor must undergo
stretching and folding operations in this direction. Much more is said about
this stretching and folding operation on such attractors in Chapter 3.
It should be emphasized that the type of attractor describing a system’s
dynamics is dependent on certain parameter values. I review the relation
between parameter values and some forms of the dynamic attractor in
Chapter 3 and show therein how a system can undergo transitions from
simple periodic motion to apparently unorganized chaotic dynamics. It is
therefore apparent that the Lyapunov exponents are dependent on these
control parameters.
The notion of making a transition from periodic to chaotic dynamics led Mackey and Glass [212] to introduce the term dynamical disease to
denote pathological states of physiological systems over which control has
been lost. Rapp et al. [286] as well as Goldberger and West [127] make the
general observation that chaotic behavior is not inevitably pathological.
That is to say that, for some physiological processes, chaos may be the
normal state of affairs and transitions to and from the steady state and
periodic behavior may be pathological. Experimental support for this latter
point of view is presented subsequently.
P(z, t) = (1/t^μ) F_z(z/t^μ),  (1.15)

as discussed in the sequel. Note that in a standard diffusion process Z(t) is the displacement of the diffusing particle from its initial position at time t, μ = 1/2 and the functional form of F_z(·) is a Normal distribution. However, for general complex phenomena there is a broad class of distributions for which the functional form of F_z(·) is not Normal and the scaling index μ ≠ 1/2. All this is made clear subsequently.
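The scaling form can be verified directly for ordinary diffusion. In the sketch below (NumPy assumed; the ensemble size and times are arbitrary) Z(t) is the position of a random walker after t unit steps; dividing the displacements by t^μ with μ = 1/2 collapses the statistics at different times onto one t-independent shape.

    import numpy as np

    rng = np.random.default_rng(2)
    walks = rng.choice([-1.0, 1.0], size=(5_000, 1600)).cumsum(axis=1)

    mu = 0.5
    for t in (100, 400, 1600):
        z = walks[:, t - 1] / t**mu      # rescaled displacement z = Z(t) / t^mu
        print(t, z.std())                # ~1.0 at every t: one universal F_z

For a complex phenomenon with μ ≠ 1/2, or with a non-Normal F_z, the same rescaling test diagnoses anomalous diffusion.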
Chapter Three introduces a number of concepts that must be developed for later use, and this is done through various worked-out examples. The whole idea of modeling physiological networks by continuous differential equations is discussed in the context of bio-oscillators, which are nonlinear oscillators capable of spontaneous excitation, and strange attractors, which are sets of dissipative nonlinear equations capable of generating aperiodic time series. The distinction between limit cycle attractors and strange attractors is basic to the understanding of the biomedical time series data taken up subsequently.
Not only are continuous differential equations of interest in Chapter Three, but so too are discrete equations. Discrete dynamical models ap-
pear in a natural way to describe the time evolution of biosystems in which
successive time intervals are distinct, for example, to model changes in pop-
ulation levels between successive generations where change occurs between
generations and not within a given generation. These discrete dynamical
models are referred to as mappings and may be used directly to model
the evolution of a network or they may be used in conjunction with time
series data to deduce the underlying dynamical structure of a biological
process. As in the continuum case the discrete dynamic equations can have
both periodic and aperiodic solutions, that is to say the maps also generate
chaos in certain parameter regimes. Since such physiological processes as
the interbeat interval of the mammalian heart can be characterized as a
mapping, that is, one beat is mapped into the next beat by the ‘cardiac map’, it is of interest to know how the intervals between beats are related to
the map. We discuss how a map can undergo a sequence of period doubling
bifurcations to make a transition from a periodic to a chaotic solution. The
latter solution has been used by some to describe the normal dynamic state
of the human heart.
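The period-doubling sequence is easy to see numerically. The sketch below (parameter values and tolerances are illustrative) iterates the logistic map past its transient and counts the distinct values visited, showing the cycle length double from 1 to 2 to 4 to 8 before becoming irregular.

    # Count the number of distinct long-time states of the logistic map.
    def attractor_size(r, transient=5000, samples=256):
        x = 0.5
        for _ in range(transient):
            x = r * x * (1.0 - x)
        orbit = set()
        for _ in range(samples):
            x = r * x * (1.0 - x)
            orbit.add(round(x, 6))        # coarse rounding merges converged values
        return len(orbit)

    for r in (2.8, 3.2, 3.5, 3.56, 3.7, 4.0):
        print(r, attractor_size(r))       # 1, 2, 4, 8, then many values (chaos)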
As we mentioned earlier, one indicator of the qualitative dynamics of a
system, whether it is continuous or discrete, is the Lyapunov exponent. In
either case its sign determines whether nearby orbits exponentially sepa-
rate from one another in time. Chapter Three presents the formal rules
for calculating this exponent in both simple systems and for general N-
dimensional maps. Of particular concern is how to relate the Lyapunov
exponents to the information generated by the dynamics. This question is
particularly important in biological networks because it provides one of the
measures of a strange attractor. Other measures that are discussed include
the power spectrum of a time series, that is, the Fourier transform of the
two-point correlation function; the correlation dimension (a bound on the
fractal dimension) obtained from the two-point correlation function on a
dynamical attractor; and the phase space portrait of the attractor recon-
structed from the data. These latter two measures are shown to be essential in determining the structure of the attractor underlying the brain wave activity. First we examine
normal brain wave activity and find that one can both construct the phase
space portraits of the attractors and determine the fractional dimension of
the attractors. A number of difficulties associated with the data processing
techniques are uncovered in these analyses and ways to improve the effi-
ciency of these methods are proposed. One result that clearly emerges from
the calculations is that the dimension of the ‘cognitive attractor’ decreases
monotonically as a subject changes from quiet, awake and eyes open to
deeper stages of sleep.
On the theoretical side the model of Freeman [105], which he developed to
describe the dynamics of the olfactory system in a rat, is briefly discussed.
It is found that the basal olfactory EEG signal is not sinusoidal, but is
irregular and aperiodic. This intrinsic unpredictability is captured by the
model in that the solutions are chaotic attractors for certain classes of
parameter values. These theoretical results are quite in keeping with the
experimental observations of normal EEG records.
One of the more dramatic results that has been obtained is the precip-
itous drop in the correlation dimension of the EEG time series when an
individual undergoes an epileptic seizure. The brain’s attractor seems to
have a dimensionality on the order of 4 or 5 in deep sleep and to have
the much lower dimensionality of approximately 2 in the epileptic state.
This sudden drop in dimensionality was successfully captured in Freeman’s
model in which he calculated the EEG time series for a rat undergoing a
seizure.
The closing chapter attempts to loosely weave together the strings of
chaos theory, fractal geometry and statistics, complexity theory and a num-
ber of the other techniques developed in this revision in the context of the
nascent discipline of Network Science. A brief introduction to complex networks, a field that has blossomed in the past decade, is presented in Chapter Six, particularly as the ideas apply to physiologic networks. In order to make the discussion concrete the decision making model (DMM) [340, 341] is used to
develop a number of the theoretical concepts such as synchronization and
criticality. The inverse power laws of connectivity and the time intervals
between events are shown to be emergent properties of network dynamics
and do not require separate assumptions regardless of whether physical,
social or physiological networks are under investigation. Some additional
detail is given on the network theory explanation of neuronal avalanches
[35] in normal cognitive behavior and the new disease of multiple organ dysfunction syndrome (MODS) [49].
FIGURE 2.1. The strength of a bone increases with the cross-sectional area, A ∝ l², whereas its weight increases as the volume, W ∝ l³. The intersection of the two curves yields A = W. Beyond this point the structure becomes unstable and collapses under its own weight.
FIGURE 2.2. The photograph shows a rubber cast of the human bronchial tree, from
the trachea to the terminal bronchioles. The mammalian lung has long been a paradigm
of natural complexity, challenging scientists to reduce its structure and growth to simple
rules.
FIGURE 2.3. Interbeat-interval time series for a healthy individual at rest (upper panel) and during normal activity (lower panel). The contrast in the heart rate variability between the resting state and that of normal activity is quite dramatic. Any model that is to successfully describe cardiovascular dynamics must be able to explain both the order of the resting state and the variability of the active state.
If, for example, I feel my pulse while resting, my heart rate appears rela-
tively regular. However, if I were to record the activity of my heart during
a vigorous day’s activity, a far different impression of the normal heart-
beat would be obtained. Instead of exclusively observing some apparently
regular steady state, the record would show periods of sharp fluctuations
interspersed between these apparently regular intervals, see Figure 2.3. Any
useful model of lung anatomy would explain both its variability and order.
The same criteria can be adopted for judging the success of any model of ...
In applying the new scaling ideas to physiology scientists have seen that
irregularity, when admitted as fundamental rather than treated as a patho-
logical deviation from some classical ideal, can paradoxically suggest a more
powerful unifying theory. To describe the advantage of the new concepts I
must first review some classical theories of scaling.
FIGURE 2.5. Sketch of a branching structure, such as a blood vessel or bronchial airway, with the parameters used in a bifurcating network model: a tube of radius r_k and length l_k at level k branches into tubes of radius r_{k+1} and length l_{k+1} at level k + 1.
Ω_k = Δp_k/Q_k = 8νl_k/(πr_k⁴).  (2.3)

The total resistance for a network branch with m identical tubes in parallel is 1/m the resistance of each individual tube. Thus, in this oversimplified case we can write the total network resistance as

Ω_T = 8νl_0/(πr_0⁴) + (8ν/π) Σ_{j=1}^{N} l_j/(N_j r_j⁴).  (2.4)
In order to minimize the resistance for a given mass Rashevsky first expressed the initial radius r_0 in terms of the total mass of the network. The optimum radii for the different branches of the bifurcation network having the total mass M are then determined such that the total resistance is a minimum, ∂Ω_T/∂r_j = 0, yielding the equality r_k = N_k^{-1/3} r_0. The ratio of the radii at successive generations is therefore

r_{k+1}/r_k = (N_k/N_{k+1})^{1/3}.  (2.5)
This is the classic ‘cube law’ branching of Thompson [336] in which he used
the ‘principle of similitude’. The value 2^{-1/3} was also obtained by Weibel and Gomez [358] for the reduction in the diameter of bronchial airways for
the first ten generations of the bronchial tree. However they noted a sharp
deviation away from this constant fractional reduction beyond the tenth
generation as shown in Figure 2.6.
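In other words, for a symmetric tree with N_k = 2^k branches the cube law predicts the exponential form d(z) = d_0 2^{−z/3}, a straight line on semi-logarithmic paper. A minimal sketch (the generation-0 diameter is a hypothetical value, not Weibel and Gomez's datum):

    import math

    d0 = 18.0                       # hypothetical tracheal diameter, mm
    ratio = 2.0 ** (-1.0 / 3.0)     # cube-law reduction per generation, ~0.794

    for z in range(11):
        d = d0 * ratio**z           # d(z) = d0 * 2^(-z/3)
        print(z, round(d, 3), round(math.log(d), 3))   # log d falls linearly in z

It is this straight-line prediction that matches the data only out to roughly the tenth generation in Figure 2.6.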
Theodore Wilson [399] subsequently offered an alternate explanation for
the proposed exponential decrease in the average diameter of a bronchial
tube with generation number by demonstrating that this is the functional
form for which a gas of a given composition can be provided to the alveoli
with minimum metabolism or entropy production in the respiratory mus-
culature. His hypothesis was that the characteristics of the design of physi-
ologic networks take values for which a given function can be accomplished
with minimum total entropy production. This principle was articulated
in great detail somewhat later by Glansdorff and Prigogine [113] in a much
broader physical context that includes biological systems as a special appli-
cation. Rather than minimum entropy production Rashevsky believed that
the optimal design is accomplished with minimum of material used and
energy expended. Each of these principles takes the form of minimizing the
variation of the appropriate quantity between successive generations. The
relative merits of which quantity is to be minimized and why this is a reasonable modeling strategy are not taken up here, but rather we stress that
the anatomic data apparently suggest an underlying principle that guides
the morphogenesis of the bronchial tree. We return to the question of the
possible relation between scaling and morphogenesis in due course.
FIGURE 2.6. The human lung cast data of Weibel and Gomez [358] for 23 generations are indicated by the circles, and the prediction using the exponential form for the average diameter is given by the straight line; the logarithm of the average diameter is plotted against generation z. The fit is quite good until z = 10, after which there is a systematic deviation of the anatomic data from the theoretical curve.
Note that the analyses up to this point are consistent with the data for
ten generations of the bronchial tree. However, when we examine Weibel
and Gomez’s data for the entire span of the bronchial tree data (more than
twenty generations) a remarkable systematic deviation from the exponen-
tial behavior appears as depicted in Figure 2.6. Weibel and Gomez [358]
attributed this deviation to a change in the flow mechanism in the bronchial
tree from that of minimum resistance to that of molecular diffusion. I con-
tend that the observed change in the average diameter can equally well be
explained without recourse to such a change in flow properties. Recall that the arguments reviewed neglect the variability in the linear scales at each generation and use only average values for lengths and diameters. The distribution of linear scales at each generation accounts for the deviation.
FIGURE 2.7. A Cantor set can be generated by removing the middle third of a line segment at each generation z. The set of points remaining in the limit z → ∞ is called a Cantor set. The line segments are distributed more and more sparsely with each iteration, and the resulting set of points is both discontinuous and inhomogeneous.
Suppose that the mass points are initially distributed along a line of unit length. In
cutting out the middle third of the line, we redistribute the mass along the
remaining two segments so that the total mass of the set remains constant.
At the next stage, where the middle third is cut out of each of the two
line segments, we again redistribute the mass so that none is lost. We now
define the parameter a as the ratio of the total mass to the mass of each
segment after one trisecting operation. Thus, after z trisections the number
of line segments is
N(z) = a^z.  (2.7)
We also define a second parameter b as the ratio of the length of the orig-
inal line to the length of each remaining segment, which for the case of
trisections gives
η(z) = b^z.  (2.8)
The fractal dimension of the resulting Cantor set is

D = ln N(z)/ln η(z) = ln a/ln b.  (2.9)
Note that the dimension is independent of z and therefore is equal to the
asymptotic fractal dimension. In this example, since each segment receives
half the mass of its parent, a = 2 and since we are cutting out the middle
third b = 3 so that the fractal dimension is ln 2/ ln 3 = 0.6309.
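The same dimension is recovered by direct box counting. The sketch below (the construction depth is arbitrary) generates the intervals of the middle-third Cantor construction and counts occupied boxes of size η = 3^{−z}:

    import math

    def cantor_intervals(z):
        # Intervals remaining after z middle-third removals.
        intervals = [(0.0, 1.0)]
        for _ in range(z):
            refined = []
            for a, b in intervals:
                third = (b - a) / 3.0
                refined += [(a, a + third), (b - third, b)]
            intervals = refined
        return intervals

    z = 8
    N = len(cantor_intervals(z))               # occupied boxes of size 3^-z
    eta = 3.0 ** (-z)
    print(math.log(N) / math.log(1.0 / eta))   # 0.6309... = ln 2 / ln 3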
The measured lengths of the coastline boundaries fall on straight lines with slopes given by (d − 1). From these data we find that d ≈ 1.3 for the coast of Britain and d = 1 for a circle, as expected. Thus, it is evident that ...
FIGURE 2.8. Fractal plots of various coastlines in which the apparent length L(η) is graphed versus the measuring unit η: plotted as log10 [total length (km)] versus log10 [length of scale (km)] for the Australian coast, a circle, the west coast of Britain, and the land frontier of Portugal. [216]
FIGURE 2.9. On a line segment of unit length a kink is formed, giving rise to four line segments, each of length 1/3. The total length of this line is 4/3. On each of these line segments a kink is formed, giving rise to 16 line segments, each of length 1/9. The total length of this curve is (4/3)². This process is continued as shown through n = 5 for the triadic Koch curve.
η = 1/3^n.  (2.14)

L(η) = (4/3)^{−log η/log 3} = e^{−(ln η/ln 3)(ln 4 − ln 3)} = η^{1−d}.  (2.16)
FIGURE 2.10. Here we schematically represent how a given mass can be non-uniformly distributed in a given volume in such a way that the volume occupied by the mass has a fractal dimension D = ln a/ln b. The parameter b gives the scaling from the original sphere of radius r and the parameter a gives the scaling from the original total mass M, assumed to be uniformly distributed in a volume r³, to that non-uniformly distributed in the volume r^D.
d = ln 4/ln 3 ≈ 1.2619  (2.17)
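The equivalence of the generation count and the scaling form is easily checked. In this short sketch the exact length (4/3)^n of the n-th Koch generation is compared with the power law L(η) = η^{1−d} of Eq. (2.16):

    import math

    d = math.log(4.0) / math.log(3.0)       # Koch dimension, ~1.2619

    for n in range(1, 7):
        eta = 3.0 ** (-n)                   # ruler size after n kinks, Eq. (2.14)
        L_exact = (4.0 / 3.0) ** n          # total length after n generations
        L_scaling = eta ** (1.0 - d)        # Eq. (2.16)
        print(n, round(L_exact, 4), round(L_scaling, 4))   # the columns agree

The measured length grows without bound as the ruler η shrinks, which is the signature of a fractal curve.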
In the second decade of the last century Felix Hausdorff determined that
one could generally classify such a set as the one described above by means
of a fractional dimension [217, 219]. An application of Hausdorff’s reason-
ing can be made to the distribution of mass points in a volume of space of
radius R, where a mass point is again used to denote an indivisible unit of
physical mass (or probability mass) at a mathematical point in space. Any
observable quantity is then built up out of large numbers of these idealized
mass points. One way of picturing a distribution having a fractional dimen-
sion is to imagine approaching a mass distribution from a great distance.
At first, the mass seems to be in a single cluster. As one gets closer, it is
observed that the cluster is really composed of smaller clusters such that
upon approaching each smaller cluster, they are seen to be composed of a
set of still smaller clusters, etc. It turns out that this apparently contrived
example in fact describes the distribution of stars in the heavens, and the
Hausdorff dimension has been determined by astronomical observations to
be approximately 1.23 [266]. Figure 2.10 depicts how the total mass of such
a cluster is related to its Hausdorff (fractal) dimension.
The total mass M (R) of a distribution of mass points in Figure 2.10a
is proportional to Rd , where d is the dimension of space occupied by the
masses. In the absence of other knowledge it is assumed that the point
masses are uniformly distributed throughout the volume and that d is equal
to the Euclidean dimension E of the space, for example in three spatial di-
mensions d = E = 3. Let us suppose, however, that on closer inspection
we observe that the mass points are not uniformly distributed, but instead
are clumped in distinct spheres of size R/b each having a mass that is 1/a
smaller than the total mass as depicted in Figure 2.10b. Thus, what was
initially visualized as a beach ball filled uniformly with sand turns out to
resemble one filled with basketballs, each of the basketballs being filled uni-
formly with sand. Now examine one of these smaller spheres (basketballs)
only to find that instead of the mass points being uniformly distributed in
this reduced region it consists of still smaller spheres, each of radius R/b² and each having a mass 1/a² smaller than the total mass, as shown in Fig-
ure 2.10c. Now again the image changes so that the basketballs appear to
be filled with ping-pong balls, and each ping-pong ball is uniformly filled
with sand. If we assume that this procedure of constructing spheres within
spheres can be telescoped indefinitely we obtain
M(R) = lim_{N→∞} [M(R/b^N) a^N].  (2.19)
This relation yields a finite value for the total mass in the limit of N becom-
ing infinitely large only if D = ln a/ ln b, where D is the fractal dimension
of the distribution of mass points dispersed throughout the topological vol-
ume of radius R. The index of the power-law distribution of mass points ...
FIGURE 2.11. Fractals are a family of shapes containing infinite levels of detail, as
observed in the Cantor set and in the infinitely clustering spheres. In the fractals repro-
duced here, the tip of each branch continues branching over many generations, on smaller
and smaller scales, and each magnified, smaller scale structure is similar to the larger
form, a property called self-similarity. As the fractal (Hausdorff) dimension increases
between one and two (left to right in the figure), the tree sprouts new branches more and
more vigorously. The organic, treelike fractals shown here bear a striking resemblance
to many physiological structures. (From [367] with permission.)
FIGURE 2.12. Here we show the harmonic terms contributing to the Weierstrass func-
tion: (a) a fundamental with frequency ω_0 and unit amplitude; (b) a second periodic term of frequency bω_0 with amplitude 1/a; and so on until one obtains (c) a superposi-
tion of the first 36 terms in the Fourier series expansion of the Weierstrass function. We
choose the values a = 4 and b = 8, so that the fractal dimension is D = 2 − 2/3 = 4/3,
close to the value used in Figure 2.7.
F(z) = Σ_{n=0}^{∞} (1/a^n) cos[b^n ω_0 z],   a, b > 1  (2.20)
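A truncated version of the series is easy to evaluate, and the renormalization property derived below in Eq. (2.22) can be checked on it directly. A sketch assuming NumPy, with the a = 4, b = 8 values used in Figure 2.12:

    import numpy as np

    def weierstrass(z, a=4.0, b=8.0, n_terms=30, omega0=2.0 * np.pi):
        # Partial sum of F(z) = sum_n a^-n cos(b^n omega0 z), Eq. (2.20)
        n = np.arange(n_terms)
        return np.sum(np.cos(np.outer(z, b**n) * omega0) / a**n, axis=1)

    z = np.linspace(0.01, 0.02, 5)
    # Check Eq. (2.22): F(z) - (1/a) F(bz) = cos(omega0 z), up to truncation error
    residual = weierstrass(z) - weierstrass(8.0 * z) / 4.0
    print(np.allclose(residual, np.cos(2.0 * np.pi * z), atol=1e-6))   # True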
FIGURE 2.13. We reproduce here the Weierstrass curve constructed in Figure 2.12 in
which we superpose smaller and smaller wiggles; so that the curve looks like the irregular
line on a map representing a very rugged seacoast. Inset (b) is a magnified picture of the
boxed region of inset (a). We see that the curve in (b) appears qualitatively the same as
the original curve but note the change in scale. We now magnify the boxed region in (b)
to obtain the curve in (c) and again obtain a curve that is qualitatively indistinguishable
from the first two, again note the change in scale. This procedure can in principle be
continued indefinitely because of the fractal dimension of the curve.
F(z) = Σ_{n=1}^{∞} (1/a^n) cos[b^n ω_0 z] + cos[ω_0 z]  (2.21)

or equivalently

F(z) = (1/a) F(bz) + cos[ω_0 z].  (2.22)
The solution to Eq. (2.22) is worked out in detail elsewhere [364] in terms of an analytic part F_a and a singular part F_s, such that F = F_a + F_s. Thus, if we drop the harmonic term on the right hand side of Eq. (2.22) we obtain a functional scaling relation for F_s(z). The dominant behavior of the Weierstrass function is then expressed by the functional relation
F_s(z) = (1/a) F_s(bz).  (2.23)
The interpretation of this relation is that if one examines the properties of
the function on the magnified scale bz what is seen is the same function
observed at the smaller scale z but with an amplitude that is scaled by
a. This is self-similarity and an expression of the form Eq. (2.23) is called
a renormalization group relation. The mathematical expression for this
self-similarity property predicts how the function F (z) varies with z. The
renormalization group transformation can be solved to yield ...

The complex coefficients {A_n} in the Fourier expansion Eq. (2.27) are determined from data.
X(z) = Σ_{n=−∞}^{∞} (1/a^n) [1 − e^{ib^n ω_0 z}] e^{iφ_n}  (2.28)
where the phase φ_n is arbitrary. This function was first examined by Lévy and later used by Mandelbrot [217]. The fractal dimension D of the curve generated by the real part of Eq. (2.28) with φ_n = 0 is given by 2 − D = α, so that

D = 2 − ln a/ln b,  (2.29)

which for the parameters a = 4 and b = 8 is D = 4/3. Mauldin and Williams [227] examined the formal properties of such functions and concluded that for b > 1 and 0 < a ≤ b the dimension is in the interval [2 − α − C/ln b, 2 − α], where C is a positive constant and b is sufficiently large.
The set of phases {φ_n} may be chosen deterministically as done above, or randomly as is done now. If φ_n is a random variable uniformly distributed on the interval (0, 2π), then each choice of the set of values {φ_n} constitutes
a member of an ensemble for the stochastic function X(z). If the phases are chosen randomly, we consider the increment ΔX(Z, z) of the function over an interval Z and assume that the φ_n are independent random variables uniformly distributed on the interval (0, 2π). The mean-square increment is

Q(Z) = ⟨|ΔX(Z, z)|²⟩_φ = Σ_{n=−∞}^{∞} b^{−2n(2−D)} 2[1 − cos(b^n ω_0 Z)]  (2.31)
where the notation is changed to explicitly account for the scale parame-
ter γ(= ln(1/q) > 0) in the contracting process of the bronchial tree. In
Eq.(2.33) there is a single value for γ, but in the bronchial tree there are
a number of such scales present at each generation. The fluctuations in
d(z, γ) could then be characterized by a distribution of the γ’s, that is,
P (γ)dγ is the probability that a particular scale in the interval (γ, γ + dγ)
is present in the measured diameter. The average diameter of an airway at the z-th generation is then formally given by

⟨d(z)⟩ = ∫_0^∞ d(z, γ) P(γ) dγ.  (2.34)
If P(γ) is a delta function centered on γ_0, then Eq. (2.34) reduces to Eq. (2.33) with γ restricted to the single value γ_0. However, from the data in Figure 2.6 it is clear that the measured average diameter ⟨d(z)⟩ is not of the exponential form for the entire range of z values.
Rather than prescribing a particular functional form for the probabil-
ity density, West, Bhargava and Goldberger [366] constructed a model, the
WBG or fractal model of the lung, based on the scaling of the parame-
ter γ. Consider a distribution P(γ) having a finite central moment, say a
mean value γ_0. Now, following Montroll and Shlesinger [241], WBG apply
a scaling mechanism such that P(γ) is replaced by a scaled copy with the
new mean value γ_0/b, and assume this occurs with relative frequency 1/a.
WBG apply the scaling again, so that the scaled mean is again scaled, the
new mean being γ_0/b² and occurring with relative frequency 1/a². This
amplification process is applied repeatedly and eventually generates the
unnormalized distribution

G(\xi) = P(\xi) + \frac{1}{a} P(b\xi) + \frac{1}{a^2} P(b^2 \xi) + \cdots   (2.37)
The normalization of Eq. (2.37) is

\int_0^{\infty} G(\xi) \, d\xi = 1 + \frac{1}{ab} + \frac{1}{a^2 b^2} + \cdots = \frac{1}{N(ab)}   (2.38)

where N(ab) is the normalization constant; the geometric series converges
for ab > 1 and in fact

N(ab) = 1 - \frac{1}{ab}   (2.39)
for an infinite series. WBG use the distribution Eq. (2.37) to evaluate the
observed average diameter, denoted by an overbar, and obtain

\bar{d}(z) = N(ab) \left[ \langle d \rangle(z) + \frac{1}{a} \langle d \rangle(z/b) + \frac{1}{a^2} \langle d \rangle(z/b^2) + \cdots \right]

normalized using Eq. (2.38). This series can be written in the more
compact form

\bar{d}(z) = \frac{1}{a} \bar{d}(z/b) + N(ab) \langle d \rangle(z)   (2.40)

as the number of terms in the series becomes infinite.
Note the renormalization group relation that results from this argument
when the second term on the rhs of Eq. (2.40) is dropped. Here again
we restrict our attention to the dominant behavior of the solution to this
renormalization group relation. If we separate the contributions to \bar{d}(z)
into a part due to singularities, denoted by d_s(z), and a part that is analytic,
denoted by d_a(z), then the singular part satisfies the functional equation

d_s(z) = \frac{1}{a} d_s(z/b).   (2.41)
The solution to this equation is

d_s(z) = \frac{A(z)}{z^{\alpha}}   (2.42)

where by direct substitution the power-law index is found to be

\alpha = \frac{\ln a}{\ln b}   (2.43)

and the periodic coefficient is

A(z) = A(z/b) = \sum_{n=-\infty}^{\infty} A_n \, e^{2\pi n i \ln z / \ln b}.   (2.44)
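A minimal numerical sketch of the WBG construction (with illustrative parameter values, not fits to airway data): take the single-scale average ⟨d⟩(z) = d₀e^{−γ₀z}, build the observed average from the series form of Eq. (2.40), and check that its log-log slope approaches −α = −ln a/ln b as Eqs. (2.42)-(2.43) predict.

```python
import numpy as np

# WBG average diameter: scaled copies of a single exponential scale gamma0,
# weighted by 1/a^n and normalized by N(ab) = 1 - 1/(ab) from Eq. (2.39).
a, b, gamma0, d0 = 4.0, 8.0, 1.0, 1.0      # illustrative values only
n = np.arange(0, 60)

def d_bar(z):
    return (1 - 1 / (a * b)) * np.sum(d0 * np.exp(-gamma0 * z / b**n) / a**n)

# Dominant behavior, Eqs. (2.42)-(2.43): d_bar(z) ~ A(z)/z^alpha with
# alpha = ln a / ln b and A(z) log-periodic in z.
z = np.logspace(0.5, 4, 30)
slope = np.polyfit(np.log(z), np.log([d_bar(x) for x in z]), 1)[0]
print(-slope, np.log(a) / np.log(b))       # both near 2/3 for a = 4, b = 8
```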
FIGURE 2.14. We plot the data of Weibel and Gomez [358] on log-log graph paper and
see that the dominant character of the functional dependence of the average bronchial
diameter on generation number is indeed an inverse power law. Thus on log-log graph
paper the relationship yields a straight line. In addition to this inverse power-law depen-
dence of the average diameter on z there appears to be a periodic variation of the data
about this power-law behavior. This harmonic variation is not restricted to the data
sets of humans but also appears in data obtained for dogs, rats and hamsters derived
from Raabe [281] and his colleagues. The harmonic variation is at least as pronounced
in these latter species as it is in humans. (From West et al. [366] with permission.)
To describe this network Cohn [59] introduced the notion of an ‘equivalent bi-
furcation system’. The equivalent bifurcation systems were examined to de-
termine the set of rules under which an idealized bifurcating system would
most completely fill space. The analogy was based on the assumption that
the branchings of the arterial system should be guided by some general
morphogenetic laws enabling blood to be supplied to the various parts of
the body in some optimally efficient manner. The branching rule in the
mathematical system is then to be interpreted in the physiological context.
This was among the first physiological applications of the self-similarity
idea, predating the formal definition of fractals, and was subsequently in-
dependently found and applied by West et al. [366].
FIGURE 2.15. The variation in diameter of the bronchial airways is depicted as a func-
tion of generation numbers for humans, rats, hamsters, and dogs. The modulated inverse
power law observed in the data of Raabe et al. [281] is readily captured by the function
F(z) = \left[ A_0 + A_1 \cos\left( 2\pi \ln z / \ln b \right) \right] / z^{\alpha}. (From Nelson et al. [246] with permission.)
connective tissue, called chordae tendineae, anchor the mitral and tricuspid
valves, and the electrical impulse is conducted by a fractal neural network,
the His-Purkinje system, embedded within the muscle.
I examined the fluctuation-tolerance of the growth process of the lung
and found that its fractal nature does in fact have a great deal of survival
potential [370]. In particular fractal structures were shown to be much
more error-tolerant than those produced by classical scaling; an observation
subsequently made by others as well [360, 388]. Such error tolerance is
important in all aspects of biology, including the origins of life itself [79].
The success of the fractal model of the lung suggests that nature may prefer
fractal structures to those generated by more traditional scaling. I suggested
that the reason may be related to the tolerance
that fractal structures (processes) seem to possess over and above that of
classical structures (processes). Said differently, fractal processes are more
adaptive to internal changes and to changes in the environment than are
classical ones. Let us review the construction of a simple quantitative model
of error response to illustrate the difference between the classical and fractal
models.
Consider a classical (exponential) scaling characterization of a network
property at level z,

F_z = F_0 \, e^{-\lambda z}   (2.45)

compared with a fractal scaling characterization of the same property at
level z > 1,

G_z = \frac{G_0}{z^{\lambda}}.   (2.46)
The network property of interest at generation z could be the diameter of
a tube, the number of branches, the length of a tube and so on. The two
functional forms are presented here somewhat abstractly but what is of
significance is the different functional dependence on the parameter λ. The
exponential Eq. (2.45) has emerged from a large number of optimization
arguments whereas the inverse power-law form Eq. (2.46) results from the
fractal arguments I first expressed with some colleagues [366].
I [371, 381] assumed the parameter λ is made up of two pieces: a constant
part λ_0 that dominates in the absence of fluctuations and a random part ξ.
The random part can arise from unpredictable changes in the environment
during morphogenesis, non-systematic errors in the code generating the
physiologic structure or any of a number of other causes of irregularity.
The average diameter of a classically scaling airway is given by

\langle F_z \rangle_{\xi} = F_0 \, e^{-\lambda_0 z} \left\langle e^{-\xi z} \right\rangle_{\xi}   (2.47)

where the brackets denote an average over the statistics of the fluctuations.
Taking the ξ-fluctuations to be Normal with zero mean,

\langle \xi \rangle = \int_{-\infty}^{\infty} \xi \, P(\xi) \, d\xi = 0   (2.49)

and variance

\langle \xi^2 \rangle = \int_{-\infty}^{\infty} \xi^2 \, P(\xi) \, d\xi = \sigma^2   (2.50)

the average in Eq. (2.47) can be evaluated exactly, so that

\langle F_z \rangle_{\xi} = F_0 \, e^{-\lambda_0 z} \, e^{\sigma^2 z^2 / 2} = F_z^0 \exp\left[ \sigma^2 z^2 / 2 \right].   (2.52)
Note that the choice of Normal statistics has no special significance here
except to provide closed form expressions for the error that can be used to
compare the classical and fractal models.
The error in the classical scaling model of the lung thus grows as exp[σ²z²/2].
In the fractal model of the lung the same assumptions are made and the
average over the ξ−fluctuations is
\langle G_z \rangle_{\xi} = \frac{G_0}{z^{\lambda_0}} \left\langle \frac{1}{z^{\xi}} \right\rangle_{\xi}   (2.53)

using the Normal distribution yields

\left\langle \frac{1}{z^{\xi}} \right\rangle_{\xi} = \int_{-\infty}^{\infty} e^{-\xi \ln z} \, P(\xi) \, d\xi = \exp\left[ \sigma^2 (\ln z)^2 / 2 \right]   (2.54)

resulting in

\langle G_z \rangle_{\xi} = \frac{G_0}{z^{\lambda_0}} \, e^{\sigma^2 (\ln z)^2 / 2} = G_z^0 \exp\left[ \sigma^2 (\ln z)^2 / 2 \right].   (2.55)
Consequently the error in the fractal model grows as exp[σ²(ln z)²/2].
The relative error generated by the fluctuations is given by the ratio of
the average value to the function in the absence of fluctuations, and for the
two models we have the relative errors

\varepsilon_z = \exp\left[ \sigma^2 z^2 / 2 \right] \ \text{(classical)}, \qquad \varepsilon_z = \exp\left[ \sigma^2 (\ln z)^2 / 2 \right] \ \text{(fractal)}.   (2.56)
The two error functions are graphed in Fig. 2.16 for fluctuations with
strength σ²/2 = 0.01. At z = 15 the error in classical scaling is 9.5.
This enormous relative error means that the perturbed average property
at generation 15 differs by nearly an order of magnitude from what it would
be in an unperturbed network. A biological network with this sensitivity
to error would not survive for very long in the wild. For example, the
diameter of a bronchial airway in the human lung could not survive this
level of fluctuation.
FIGURE 2.16. The error between the model prediction and the prediction averaged over
a noisy parameter is shown for the classical model (upper curve) and the fractal model
(lower curve).
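The two error curves of Fig. 2.16 are elementary to reproduce; this sketch evaluates Eq. (2.56) at the fluctuation strength σ²/2 = 0.01 quoted in the text and recovers the factor of 9.5 at generation z = 15.

```python
import numpy as np

# Relative errors of Eq. (2.56) with sigma^2/2 = 0.01, as in Figure 2.16.
s2_half = 0.01
z = np.arange(1, 21)
eps_classical = np.exp(s2_half * z**2)           # grows with z^2
eps_fractal = np.exp(s2_half * np.log(z)**2)     # grows with (ln z)^2
print(eps_classical[14], eps_fractal[14])        # at z = 15: ~9.5 vs ~1.08
```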
where on log-log graph paper a is the intercept with the vertical axis and
b is the slope of the line segment. Mammalian neocortical quantities X
have subsequently been empirically determined to change as a function
of neocortical gray matter volume Y as an AR. The neocortical allome-
try exponent was first measured by Tower [339] for neuron density to be
approximately −1/3. The total surface area of the mammalian brain was
found to have an allometry exponent of approximately 8/9 [160, 173, 280].
Changizi [56] points out that the neocortex undergoes a complex transfor-
mation covering the five orders of magnitude from mouse to whale but the
ARs persist; those mentioned here along with many others.
Y = aX b (2.58)
and by convention the variable on the right is the measure of size, such as
the TBM. Allometry laws in the life sciences, as stressed by Gould [134],
fall into two distinct groups. The intraspecies AR relates a property of
an organism within a species to its TBM. The interspecies AR relates a
property across species such as the basal metabolic rate (BMR) to TBM
[52, 309].
Equation (2.58) looks very much like the scaling relations that have be-
come so popular in the study of complex networks over the last two decades
[4, 47, 247, 356, 383]. Historically the nonlinear nature of Eq.(2.58) has
precluded the direct fitting of the equation to data. A logarithmic trans-
formation is traditionally made and a linear regression to the data on the
equation
ln Y = ln a + b ln X (2.59)
yields estimates of the parameters a and b.
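A sketch of this traditional procedure, using synthetic (hypothetical) mass-metabolism pairs rather than real measurements; the "true" values a = 3 and b = 0.71 are arbitrary choices that the log-log regression of Eq. (2.59) should recover.

```python
import numpy as np

# Fit the AR of Eq. (2.58) by linear regression on the log form, Eq. (2.59).
rng = np.random.default_rng(0)
M = np.logspace(0, 6, 200)                                     # hypothetical TBM
B = 3.0 * M**0.71 * np.exp(0.1 * rng.standard_normal(M.size))  # noisy BMR

b_fit, ln_a_fit = np.polyfit(np.log(M), np.log(B), 1)          # slope, intercept
print(np.exp(ln_a_fit), b_fit)                                 # near 3 and 0.71
```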
To clarify the discussion distinguishing between the intraspecies and in-
terspecies metabolic allometry relations West and West [387] introduce the
index i to denote a species and a second index j to denote an individual
within that species. In this way the TBM is denoted X = M_{ij} and the
BMR by Y = B_{ij}, so that the intraspecies metabolic AR is
B_{ij} = a M_{ij}^{b}.   (2.60)
Using the above notation the average size of the species i, such as the
average TBM, is

\langle M \rangle_i \equiv \frac{1}{N} \sum_{j=1}^{N} M_{ij}   (2.61)

and the average BMR is

\langle B \rangle_i \equiv \frac{1}{N} \sum_{j=1}^{N} B_{ij}   (2.62)

in terms of which the interspecies metabolic AR is written

\langle B \rangle_i = a \langle M \rangle_i^{b}.   (2.63)
These two kinds of AR are distinctly different from one another and the
models developed to determine the theoretical forms of the allometry coef-
ficient a and allometry exponent b in the two cases are quite varied. Note
that both ARs are traditionally expressed with the indices suppressed, so
that both M_{ij} and ⟨M⟩_i are usually written as M or m, occasionally
resulting in confusion between the two forms of the ARs.
Another quantity of interest is the time; not the chronological time mea-
sured by a clock but the intrinsic time of a biological process first called
biological time by Hill [157]. Hill reasoned that, since so many properties of
an organism change with size, time itself may scale with TBM. Lind-
stedt and Calder [199, 200] develop this concept further and determine
experimentally that biological time, such as species longevity, satisfies an
AR with Y being the biological time. Lindstedt et al. [201] clarify that
biological time τ is an internal mass-dependent time scale
τ = αM β (2.64)
FIGURE 2.17. The linear regression to Eq. (2.59) for Heusner's (1991) data, plotted as
ln BMR versus ln TBM, is indicated by the line segment. The slope of the dashed line
segment is 0.71 ± 0.008. (From West and West [386] with permission.)
they partition the data into 52 bins of size 0.1 on the logarithmic scale and
average the data in each bin. The resulting 52 average data points are
uniformly distributed on the logarithmic scale and are fit to a straight line
segment with slope b = 0.737, for which the 95% confidence interval includes
3/4 but excludes 2/3. They accept this latter result as support of the allometry exponent
of 3/4 over 2/3. However they also point out that there is considerable
variation in the data around 3/4, which they attribute to sample size,
range of variation in mass, experimental methods and other such procedural
sources. Using the data of Peters [270] for biological rates they construct
a histogram that is seen to peak at 3/4. However they do not explore the
consequences of treating the allometry parameters themselves as stochastic
quantities.
White and Seymour [392] argue that contamination of BMR data with
non-basal measurements is likely to increase the allometry exponent even
if the contamination is randomly distributed with respect to the TBM.
They conclude that the allometry exponent for true BMR is statistically
indistinguishable from 2/3 and that the higher measured exponents may
well be the result of such contamination. Another interesting observation
they make is that the calculation of the AR regression line conceals the
adaptive variation in the BMR.
Now focus attention on modeling rather than fitting the allometry param-
eters. We begin with the deterministic fractal model of nutrient transport
constructed by West et al. [388] and follow with the statistical fractal model
of West and West [385]. It is probably worth pointing out that the Geoff
West in the first reference is not related to the Bruce (me) and Damien
(my son) West in the second reference.
Since the number of terminal units in the WBE network scales with TBM
as N_T ∝ M^b, and a network with n branches per node has N_T = n^N
terminal units, the number of branching generations N grows logarithmically
with the mass:

N = b \, \frac{\ln(M/M_0)}{\ln n}.   (2.67)
WBE introduce two parameters to characterize the network branching
process, one determines the reduction in the radii of tubes with generation
as was done in the energy minimization arguments [287] and the other
determines the reduction in tube length:
\beta_k \equiv \frac{r_{k+1}}{r_k} \qquad \text{and} \qquad \gamma_k \equiv \frac{l_{k+1}}{l_k}.   (2.68)
Moreover WBE use Rashevsky’s energy minimization argument that the
transport of nutrients in complex networks is maximally efficient when the
ratio parameter βk is independent of generation number and refer to this
as the fractal scaling assumption. They assert that the total fluid volume
is proportional to TBM as a consequence of energy minimization so that
b = -\frac{\ln n}{\ln\left( \gamma \beta^2 \right)}.   (2.69)
The estimates of the ratio parameters are made using two separate
assumptions. To estimate the ratio of lengths WBE assume that the volume
of a tube at generation k can be replaced by a spherical volume of diameter
lk and in this way implement the space-filling assumption. The conservation
of volume between generations therefore leads to
\gamma = \gamma_k = n^{-1/3}.   (2.70)
WBE maintain that Eq. (2.70) is a generic property of all the space-filling
fractal networks they consider. A separate and distinct assumption is made
to estimate β using the classic rigid-pipe model to equate the cross-sectional
areas between successive generations to obtain
\pi r_j^2 = n \pi r_{j+1}^2   (2.71)
so that using Eq. (2.68)
\beta = \beta_k = n^{-1/2}.   (2.72)
Note that this differs from the ratio parameter obtained using energy min-
imization, that is Murray’s law or the Hess-Murray law, which WBE main-
tain plays only a minor role in allometric scaling. Inserting Eqs. (2.70) and
(2.72) into Eq. (2.69) yields the sought after exponent
b = -\frac{\ln n}{\ln\left( n^{-1/3} \cdot n^{-2/2} \right)} = \frac{3}{4}   (2.73)
and the metabolic AR becomes
B = aM 3/4 . (2.74)
Note that this is the intraspecies AR expressed in terms of the BMR and
TBM and is not related to the interspecies AR expressed in terms of
the average BMR and average TBM given by Eq. (2.63).
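The WBE arithmetic is compactly summarized in a few lines; this sketch evaluates Eq. (2.69) with the space-filling value of γ from Eq. (2.70) for both choices of β discussed in the text (the rigid-pipe value of Eq. (2.72) and the Hess-Murray value discussed below).

```python
import numpy as np

# WBE exponent, Eq. (2.69): b = -ln n / ln(gamma * beta^2), with the
# space-filling length ratio gamma = n^(-1/3) of Eq. (2.70).
n = 2.0                                  # branches per bifurcation (illustrative)
gamma = n**(-1.0 / 3.0)

beta_area_preserving = n**(-1.0 / 2.0)   # Eq. (2.72), rigid-pipe model
beta_hess_murray = n**(-1.0 / 3.0)       # energy-minimization scaling

for beta in (beta_area_preserving, beta_hess_murray):
    print(-np.log(n) / np.log(gamma * beta**2))
# area-preserving branching gives b = 3/4; Hess-Murray branching gives b = 1
```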
WBE point out that Eq. (2.74) is a consequence of a strictly geometri-
cal argument applying only to those networks that exhibit area-preserving
branching. Moreover the fluid velocity is constant throughout the network
and is independent of size. They go on to say that these features are a
natural consequence of the idealized vessel-bundle structure of plant vas-
cular networks in which area-preserving branching arises automatically because each
branch is assumed to be a bundle of n^{N−k} elementary vessels of the same
radius. They recognized that this is not the situation with vascular blood
flow where the beating of the heart produces a pulsating flow that gen-
erates a very different kind of scaling. Area-preserving is also not true in
the mammalian lung where there is a distribution of radii at each level of
branching as we discussed.
A physical property violated by the area-preserving condition is that
blood slows down in going from the aorta to the capillary bed. Here WBE
return to the principle of energy minimization and as stated by West [389]
assert that to sustain a given metabolic rate in an organism of fixed mass,
with a given volume of blood, the cardiac output is minimized subject to a
space-filling geometry. This variation is essentially equivalent to minimizing
the total impedance since the flow rate is constant and again yields the
Hess-Murray law β = n−1/3 corresponding to area-increasing branching
[302, 320]. This change in scaling from the area-preserving n−1/2 to the
area-increasing n−1/3 solves the problem of slowing down blood flow to
accommodate diffusion at the capillary level. Moreover, the variation also
leads to an allometry exponent b = 1. Such an isometric scaling suggests
that plants and animals follow different allometry scaling relations as was
found [288, 320].
A detailed treatment of pulsatile flow is not straightforward and will not
be presented here, but see Savage et al. [302], Silva et al. [320] and Apol
et al. [11] for details and commentary in the context of the WBE model.
We do note that for blood flow the walls of the tubes are elastic and con-
sequently the impedance is complex, as is the dispersion relation that de-
termines the velocity of the wave and its frequency. Consequently pulsatile
flow is attenuated [53, 108] and WBE argue that the impedance changes its
r-dependence from r^{−4} for large tubes to r^{−2} for small tubes. The varia-
tion therefore changes from area-preserving flow β = n^{−1/2} for large vessels
to dissipative flow β = n^{−1/3} for small vessels, where blood flow is forced to
slow. Thus, β is k-dependent in the WBE model for pulsatile flow, and at
an intermediate value of k the scaling changes; this change-over value
is species dependent. These results are contradicted in the more extensive
analysis of pulsatile flow by Apol et al. [11], who conclude that Kleiber's law
b = 3/4 remains theoretically unexplained.
Although the WBE model reinvigorated the discussion of metabolic al-
lometry over the past decade that model has not been universally accepted.
Kozlowski and Konarzewski [190] (KK1) critique the apparent limitations
of the WBE model assumptions. The size-invariance assumption regard-
ing the terminal branch of the network made in the WBE model has been
interpreted in KK1 to mean that NT ∝ M , that is, the terminal number
scales isometrically with size. This scaling causes the number of levels in
the network to be a function of body size since more levels are required
to fill a larger volume with the same density of final vessels. KK1 main-
tain that the size-invariance assumption leads to a contradiction within the
WBE model.
In rebuttal Brown, West and Enquist [46] (BWE) assert that KK1 make
a fundamental error of interpretation of the size-invariant assumption. The
gist of the error is that N_T V_T ∝ M so that N_T ∝ M^{3/4} and V_T ∝ M^{1/4}, and
not the isometric scaling discussed in KK1. BWE go on to say that: “Having
got this critical part wrong, they went on to make incorrect calculations
and to draw erroneous conclusions about scaling...”
Of course, in their response to the rebuttal Kozlowski and Konarzewski
[191] (KK2) contend that BWE had not addressed the logical inconsisten-
cies they had pointed out. Rather than conceding, KK2 refine their argu-
ments and emphasize that choosing N_T ∝ M^{3/4} is an arbitrary assumption
on the part of WBE and is not proven. Cyr and Walker [69] refer to this as
the illusion of mechanistic understanding and maintain that after a century
of work the jury is still out on the magnitude of the allometric exponents.
A quite different critique comes from Savage et al. [302] who emphasize
that the WBE model is only valid in the limit N → ∞, that is, for infinite
network size (body mass) and that the actual allometric exponent predicted
depends on the sizes of the organisms considered. The allometric relation
between BMR and TBM with corrections for finite N in the WBE model
is given by
M = a_1 B + a_2 B^{4/3}   (2.75)

from which it is clear that b = 3/4 only occurs when a_2 B^{1/3} ≫ a_1, which
is not the case for finite size bodies. In their original publication WBE ac-
knowledged the potential importance of such finite size effects, especially
for small animals, but the magnitude of the effect remained unclear. Using
explicit expressions for the coefficients in Eq. (2.75) from the WBE model
Savage et al. [302] show that when accounting for these corrections over a
size range spanning the eight orders of magnitude observed in mammals a
scaling exponent b = 0.81 is obtained. Moreover in addition to this strong
deviation from the desired value of 3/4 there is a curvilinear relation be-
tween the TBM and the BMR in the WBE model given by
\ln M = \ln a_2 + \frac{4}{3} \ln B + \ln\left[ 1 + \frac{a_1}{a_2} B^{-1/3} \right]   (2.76)
that behaves in the opposite direction to that observed in the data. Conse-
quently they conclude that the WBE model needs to be amended and/or
the data analysis needs reassessment to resolve this discrepancy. A start
in this direction has been made by Kolokotrones et al. [187]. Agutter and
Tuszynski [2] also review the evidence that the fractal network theory for
the two-variable AR is invalid.
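The finite-size correction is easy to explore numerically. Differentiating Eq. (2.75) gives the local exponent b_eff = d ln B/d ln M = (a₁ + a₂B^{1/3})/(a₁ + (4/3)a₂B^{1/3}), which this sketch evaluates for arbitrary illustrative coefficients a₁ and a₂ (the values are not those of the WBE model).

```python
import numpy as np

# Local allometry exponent implied by Eq. (2.75): M = a1*B + a2*B^(4/3).
a1, a2 = 1.0, 0.05                   # illustrative coefficients only

B = np.logspace(0, 8, 5)
M = a1 * B + a2 * B**(4.0 / 3.0)
b_eff = (a1 + a2 * B**(1.0 / 3.0)) / (a1 + (4.0 / 3.0) * a2 * B**(1.0 / 3.0))
for m, beff in zip(M, b_eff):
    print(f"M = {m:10.3e}   b_eff = {beff:.3f}")
# b_eff -> 1 for small bodies and approaches 3/4 only as B -> infinity
```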
Another variation on this theme was made by Price et al. [278] who re-
lax the fractal scaling assumptions of WBE and show that allometry expo-
nents are highly constrained and covary according to specific quantitative
functions. Their results emphasize the importance of network geometry
in determining the allometry exponents and support the hypothesis that
natural selection minimizes hydrodynamic resistance.
Prior to WBE there was no unified theoretical explanation of quarter-
power scaling. Banavar et al. [26] show that the 3/4 exponent emerges
naturally as an upper bound for the scaling of metabolic rate in the ra-
dial explosion network and in the hierarchical branching networks models
and they point out that quarter-power scaling can arise even when the
underlying network is not fractal.
Finally, Weibel [361] presents a simple and compelling argument on the
limitations of the WBE model in terms of transitioning from BMR to the
maximal metabolic rate (MMR) induced by exercise. The AR for MMR
has an exponent b = 0.86 rather than 3/4, so that a different approach
to determining the exponent is needed. Painter [265] demonstrates that
the empirical allometry exponent for MMR can be obtained in the manner
pioneered by WBE by using the Hess-Murray law for the scaling of branch
sizes between levels.
Weibel [361] argues that a single cause for the power function arising
from a fractal network is not as reasonable as a model involving multiple
causes, see also Agutter and Tuszynski [2]. Darveau et al. [71] propose
such a model recognizing that the metabolic rate is a complex property
resulting from a combination of functions. West et al. [391] and Banavar et
al. [24] demonstrate that the mathematics in the distributed control model
of Darveau et al. is fundamentally flawed. In their reply Darveau et al. [72]
do not contest the mathematical criticism and instead point out consistency
of the multiple-cause model of metabolic scaling with what is known from
biochemical [324] and physiological [174] analysis of metabolic control. The
notion of distributed control remains an attractive alternative to the single-cause models.

2.3.3 WW model

A logarithmic transformation is traditionally made on Eqs. (2.60) and
(2.63), resulting in a linear regression on intraspecies data of the equation

\ln B_{ij} = \ln a + b \ln M_{ij}   (2.77)

and on interspecies data of the equation

\ln \langle B \rangle_i = \ln a + b \ln \langle M \rangle_i.   (2.78)
Over the past 15 years there has been an avalanche of theory [69, 85, 278,
302, 350, 388] and statistical analyses [44, 77, 110, 135, 184, 301] in biology
and ecology attempting to pin down the value of the allometry exponent. The
most prevalent deterministic theories of metabolic allometry argue either
for b = 2/3, based on the geometry of body cooling, or b = 3/4, based
on some variation of fractal nutrient transport networks. Selected data
sets have been used by various investigators to support either of these two
values. White and Seymour [392] and Glazier [117] review the empirical
evidence for and against universal scaling in mammalian metabolism, that
is, having a specific value for b, and conclude that the empirical evidence
does not support the existence of a universal metabolic allometry exponent.
On the other hand, a number of theoretical studies [25, 84, 389] maintain
that living networks ought to have universal scaling laws.
Recently the debate has shifted away from the allometry exponent hav-
ing a single value to whether it has a continuum of values in the interval
2/3 ≤ b ≤ 1 and why [117, 119]. The most recent arguments point out that
allometry relations themselves may be only a first-order approximation to
relations involving nonlinearities beyond simple scaling [187]. We do not
address this last point here except to note that the additional nonlinearity
explains only an additional 0.3% of the total variation [187] and does not
directly influence the theory presented here.
Statistics involves the frequency of the occurrence of events and statisti-
cal methods were used by Glazier [119] to analyze the metabolic allometry
relations and he determined that the metabolic exponent (b) and metabolic
level (log a) co-vary. He posited idealized boundary constraints: the surface-
area limits fluxes of metabolic resources, waste, or heat (scaling allometri-
cally with b = 2/3); volume limits energy use or power production (scaling
isometrically with b = 1). He presents a logical argument for the relative
influence of these boundary constraints on the metabolic level (log a). The
resulting form of the co-variation function is V or U shaped, with maxima
at b = 1 and a minimum at b = 2/3.
Probability theory involves predicting the likelihood of future events and
is used in this section to determine the form of the function relating b and
log a entailed by the phenomenological metabolism probability densities.
Using the probability calculus we show that although the statistical anal-
ysis of the metabolic data of Heusner [153] and of McNab [233] yield an
allometry exponent in the interval 2/3 ≤ b ≤ 3/4 the corresponding proba-
bility densities entail a linear relation between the metabolic exponent and
metabolic level. Consequently, we derive a V-shaped functional form of the
co-variation of the allometry parameters that had been phenomenologically
obtained by Glazier [119].
The data for the average BMR and average TBM across species are typ-
ically related through the logarithm of the interspecies allometry relation
Eq. (2.63) and are most often fit by minimizing the mean square error of
the linear regression of Eq. (2.78) to data. Warton et al. [353] emphasize
that there are two sources of error in the allometry relation: measurement
error and equation error. Equation error, also called natural variability, has
to do with the fact that the AR is a mathematical model, so this error is not
physical and cannot be directly measured. Moreover there is no correct way
to partition equation error in the B and M directions. In particular the
causality implicit in choosing M as the independent variable and B as the
dependent variable as is often done in applying linear regression analysis is
unwarranted. The lines fitted to Eq. (2.78) by linear regression analysis are
not predictive, they merely provide a symmetric summary of the relation
between B and M [353].
The natural variability and measurement error in the metabolic allom-
etry data is manifest in fluctuations in the (B, M )−plane and linear re-
gression analysis determines the fitted values of the allometry parameters
a = α and b = β. West and West [386] argue that since one cannot
uniquely attribute fluctuations to one variable or the other a proper the-
ory must achieve consistency between the extremes, that is, yield the same
results if all the fluctuations were in one variable or the other.
One may also consider the fluctuations to reside in the allometry pa-
rameters in the (a, b)−plane instead of the average physiologic variables
in the (B, M )−plane. In this parametric representation we interpret the
variations in measurements to be given by fluctuations in the allometry
coefficient

a' = \frac{a}{\alpha} = \frac{B}{\alpha M^{\beta}}   (2.79)

or in the allometry exponent

b - \beta = \frac{\ln\left( B / \alpha M^{\beta} \right)}{\ln M}.   (2.80)
Using Heusner’s data [153] it is possible to construct histograms of the
probability density functions (pdf ) for both allometry parameters. The pdf
for the allometry exponent b with the allometry coefficient held fixed at
a = α is determined to be that of Laplace [386]:
G(b; \alpha) = \frac{\gamma}{2} \, e^{-\gamma |b - \beta|}   (2.81)
as depicted in Figure 2.18. The parameters empirically fit to the histogram
are γ = 12.85 and β = 0.71, with quality of fit r² = 0.97.
FIGURE 2.18. The deviations from the prediction of the AR using Heusner's data
[153] partitioned into 20 equal-sized bins on a logarithmic scale; the vertical axis is
log P(Δb). The solid line segment is the best fit of Eq. (2.81), with Δb ≡ b − β, to the
twenty histogram numbers, and the quality of the fit is measured by the correlation
coefficient r² = 0.97 with γ = 12.85. (From [386] with permission.)
Using the same data it is also possible to determine the pdf for the
allometry coefficient a with the allometry exponent held fixed at b = β to
obtain a Pareto-like distribution. The probability density, in terms of the
normalized variable a′ = a/α [386], is:
P(a'; \beta) = \frac{\mu}{2} \begin{cases} (a')^{\mu - 1} & \text{for } a' \le 1 \\ (a')^{-\mu - 1} & \text{for } a' \ge 1 \end{cases}   (2.82)
as depicted in Figure 2.19. The empirically fit parameter is μ = 2.79, with
quality of fit r² = 0.98. It should be mentioned that the same distributions,
with slightly different parameter values, are obtained using the avian
data of McNab [233]. Dodds et al. [77] considered a similar shift in per-
spective and examined the statistics of the allometry coefficient, obtaining
a log-normal distribution.
A given fluctuation in the (a′, b)−plane is equally likely to be the result of
a random variation in the allometry coefficient or in the allometry exponent
and therefore the probability of either occurring should be the same:

G(b; \alpha) \, db = P(a'; \beta) \, da'.   (2.83)

This equality is satisfied by assuming the allometry parameters to be
functionally related,

b = \beta + f(a').   (2.84)
FIGURE 2.19. The deviations from the prediction of the AR, a′ = a/α, using Heusner's
data [153] partitioned into 20 equal-sized bins on a logarithmic scale; the vertical axis is
the histogram count N(log a′). The solid line segment is the linear regression on Eq. (2.82)
to the twenty histogram numbers, which yields the power-law index μ = 2.79 with the
quality of the fit measured by the correlation coefficient r² = 0.98.
\frac{db}{da'} = \frac{df(a')}{da'} = \frac{P(a'; \beta)}{G(b; \alpha)}.   (2.85)
Equation (2.85) defines a relation between the allometry parameters
through the function f(a′) in terms of the empirical pdf's. Inserting the
empirical distributions into Eq. (2.85) and using Eq. (2.84) to obtain
G(β + f(a′); α), the resulting differential equation

\frac{df(a')}{da'} = \frac{\mu}{\gamma} \exp\left[ \gamma |f(a')| \right] \begin{cases} (a')^{\mu - 1} & \text{for } 0 < a' \le 1 \\ (a')^{-\mu - 1} & \text{for } a' \ge 1 \end{cases}   (2.86)

must be solved.
By inspection the values of f(a′) in Eq. (2.86), including a constant of
integration C, in the indicated domains are

f(a') = C \begin{cases} -\ln a' & \text{for } a' \le 1 \\ \ln a' & \text{for } a' \ge 1 \end{cases}.   (2.87)
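The inspection solution can be confirmed by integrating Eq. (2.86) numerically on the branch a′ ≥ 1 with f(1) = 0; the result coincides with Eq. (2.87) for the constant of integration C = μ/γ. This sketch uses the empirical values γ = 12.85 and μ = 2.79 quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate Eq. (2.86) on a' >= 1 and compare with Eq. (2.87), f = C ln a'.
gamma, mu = 12.85, 2.79

def rhs(ap, f):
    return (mu / gamma) * np.exp(gamma * np.abs(f)) * ap**(-mu - 1)

ap = np.linspace(1.0, 3.0, 50)
sol = solve_ivp(rhs, (ap[0], ap[-1]), [0.0], t_eval=ap, rtol=1e-9)

C = mu / gamma            # constant of integration fixed by f(1) = 0
print(np.max(np.abs(sol.y[0] - C * np.log(ap))))   # agrees to solver tolerance
```

With C = μ/γ ≈ 0.22, Eq. (2.84) gives b = β + C|ln a′|, a linear rise in |ln a′| on either side of a′ = 1, which is the V-shaped co-variation referred to above.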
Tailoring the solution to the metabolic level boundaries discussed by Glazier
[119], we introduce constraints on the solution such that the maximum value
of the allometry exponent is b = 1 and the unknown function has the value
cardiac network, the respiratory network and the motor control network,
have all been shown to be fractal and/or multifractal statistical time se-
ries as we subsequently explain [381]. Consequently, the fractal dimension
turns out to be a significantly better indicator of health than the more
traditional measures, such as heart rate, breathing rate and stride rate; all
average quantities. Fractal Physiology, as this field has come to be called
since the first edition of this book, focuses on the complexity of the human
body and the characterization of that complexity through fractal measures
and the dynamics of such measures, see for example the new journal Fron-
tiers in Fractal Physiology. These new measures reveal that the traditional
interpretation of disease as the loss of regularity is not adequate and a bet-
ter interpretation of disease is the loss of variability, or more accurately,
the loss of complexity [130].
A physiologic signal is a time series whose irregularities contain patterns
characteristic of the complex phenomenon being interrogated. The inter-
pretation of complexity in this context incorporates the recent advances
in the application of concepts from fractal geometry, fractal statistics and
nonlinear dynamics, to the formation of a new kind of understanding in
the life sciences. However, as was pointed out in the encyclopedia arti-
cle on which this section is based [382], physiological time series are quite
varied in their structure. In auditory or visual neurons, for example, the
measured quantity is a time series consisting of a sequence of brief electri-
cal action potentials with information regarding the underlying dynamical
system residing in the spacing between spikes, not in the pulse amplitudes.
A very different kind of physiologic signal is contained in an electrocardio-
gram (ECG), where the analogue trace of the ECG pulse measured with
electrodes attached to the chest, reproduces the stages of the heart pump-
ing blood. The amplitude and shape of the ECG analogue recording carries
information in addition to the spacing between heartbeats.
A third kind of physiological time series is an electroencephalogram
(EEG), where the output of the channels attached at various contact points
along the scalp, recording the brain’s electrical potential, appears at first
sight to be random. Information about the operation of the brain is as-
sumed to be buried deep within the erratic fluctuations measured at each
of these points along the scalp. Thus, a physiological signal (time series)
can have both a regular part and a fluctuating part; the challenge is how
to best analyze the time series data from a given physiologic network to
extract the maximum amount of information.
Physiologic time series have historically been fit to the engineering
paradigm of signal plus noise. The signal is assumed to be the smooth,
continuous, predictable, large-scale undulation in a time series. The ideas
of signal and predictability go together, in that signals imply information.
FIGURE 2.20. We select frequencies that are integer multiples of a fundamental fre-
quency ω0 and the amplitudes decrease according to a scaling rule such that the nth
amplitude is a factor 1/a smaller than the (n−1)st. The spectrum consists of the har-
monics of ω0, with the nth harmonic having a spectral strength 1/a^{2n}. The shape of a
time trace having this spectrum is quite variable. If we choose all the phases to have a
constant value, zero say, the result is curve (A): the time trace is given by a single
pulse of height a/(a − 1). If we choose the phases to be random variables, uniformly
distributed on the interval (0, 2π), the result is curve (B): the time trace appears to be
a random function of time.
C(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} dt \sum_{n=0}^{\infty} \sum_{n'=0}^{\infty} \frac{1}{a^{n+n'}} \cos\left[ b^n \omega_0 t \right] \cos\left[ b^{n'} \omega_0 (t + \tau) \right]   (2.92)
Using the trigonometric identity for the product of cosines and the integral
relation
\frac{1}{2\pi} \int_{-\pi}^{\pi} d\theta \, \cos m\theta \, \cos n\theta = \delta_{m,n}
we obtain
C(\tau) = \sum_{n=0}^{\infty} \frac{1}{a^{2n}} \cos\left[ b^n \omega_0 \tau \right]   (2.93)
so that the scaling analysis given earlier yields the dominant behavior of
the autocorrelation function, C(τ) ≈ A(τ)τ^{2α}, with A(τ) slowly varying
(log-periodic in ln τ). Due to the slow time variation of A(τ) the asymptotic
spectrum is estimated using a Tauberian Theorem [396] to be

S(\omega) \approx \frac{A}{\omega^{2\alpha + 1}}   (2.96)

for small frequencies, which is an inverse power law in frequency.
The above argument indicates that a fractal time series is associated
with a power spectrum in which the higher the frequency component, the
lower its power. Furthermore, if the spectrum is represented by an inverse
power law, then a plot of log(power) versus log(frequency) should yield
a straight-line graph of slope −(2α + 1). Since the frequency output of
physiological networks can be determined using Fourier analysis, this
scaling hypothesis can be directly tested.
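One minimal way to run this test (a sketch, not the analysis pipeline of any particular study): compute the periodogram of a time series with a known inverse power-law spectrum and fit the log-log slope. A discrete random walk, whose spectrum goes as 1/ω², stands in for measured physiological data.

```python
import numpy as np

# Estimate the spectral slope of a time series from its periodogram.
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2**14))    # random walk: S(w) ~ 1/w^2

power = np.abs(np.fft.rfft(x - x.mean()))**2
freq = np.fft.rfftfreq(x.size)
keep = (freq > 0) & (freq < 0.1)             # low-frequency scaling region

slope = np.polyfit(np.log(freq[keep]), np.log(power[keep]), 1)[0]
print(slope)                                 # close to -2, i.e. -(2*alpha + 1)
```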
Let me now return to the example of the cardiac depolarization pulse.
Normally, each heartbeat is initiated by a stimulus from pacemaker cells
in the sinus node in the right atrium. The activation wave then spreads
through the atria to the AV junction. Following activation of the AV junc-
tion, the cardiac impulse spreads to the ventricular myocardium through
a ramifying conduction network, the His-Purkinje system. This branching
structure of the His-Purkinje conduction network is strongly reminiscent of
the bronchial fractal discussed previously. In both structures a self-similar
tree with finely-scaled details on a ‘microscopic’ level is seen. In the present
case the spread of the depolarization wave is represented on the body sur-
face by the QRS-complex of the electrocardiogram. Spectral analysis of the
QRS waveform (time trace) reveals a broadband frequency spectrum with
a long tail corresponding to an inverse power law in frequency. To explain
FIGURE 2.21. The branching His-Purkinje conduction network, showing the bundle of
His, the left bundle branch, the Purkinje fibers and the myocardium.
C(t) = \frac{b}{a} C(bt) + \frac{a - 1}{a} c(t).   (2.98)

The asymptotic solution to Eq. (2.98), where C(t) ≫ c(t), is given by

C(t) \approx A(t) \, t^{\alpha - 1}   (2.99)

where the scaling index and the coefficient function have the same defini-
tions as Eqs. (2.90) and (2.79).
If we assume that the above scaling law is a good representation of the
asymptotic correlation function for the QRS complex then the power spec-
trum S(ω) for the QRS pulse is
S(\omega) = 2 \int_0^{\infty} dt \, A(t) \, t^{\alpha - 1} \cos \omega t \propto \frac{1}{\omega^{\alpha}}   (2.100)
FIGURE 2.22. The normal ventricular depolarization (QRS) waveform (mean data of 21
healthy men) shows a broadband distribution with a long, high-frequency tail; the
vertical axis is log(amplitude)². The straight line segment is the linear regression to an
inverse power-law spectrum [S(ω) ∝ ω^{−α}] with a fundamental frequency of 7.81 Hz.
(From Goldberger et al. [123] with permission.)
when A(t) is slowly varying in time or is constant, so that the integral can
be evaluated using a Tauberian Theorem. In general the exponent α can de-
pend on other parameters such as temperature and pressure. Thus accord-
ing to this argument the QRS waveform should have an inverse power-law
spectrum.
The actual data fits this model quite well as shown in Figure 2.22. This
example, therefore, supports a connection between nonlinear structures,
represented by a fractal His-Purkinje system, and nonlinear function, re-
flected in the inverse power-law pulse [128]. Thus, just as nature selects
static anatomical structures with no fundamental length scale, s/he selects
structure for the His-Purkinje conduction system so as to have no funda-
mental time scale. Presumably the error-tolerance of the fractal structure
is as strong an influence on the latter as it is on the former.
In the case of the QRS-complex such power-law scaling could be related
to the fractal geometry of the His-Purkinje system. What is the ‘mech-
anism’ for self-similar scaling in the regulation of heart rate variability?
Fluctuations in heart rate are regulated by multiple control processes in-
cluding neurohumoral regulation (sympathetic and parasympathetic stimulation).
2.5 Summary
This has been a wonderful chapter to write and populate with some of the
exciting ideas that have been realized since the first edition of this book
was written twenty years ago. The notion of fractals in physiology that
seemed revolutionary then is not yet mainstream by any means, but it is
no longer dismissed out of hand as it once was. The utility of the fractal
concept has been established not just as a descriptive tool but as a measure
of diagnostic significance. In his tribute to the late Benoit Mandelbrot the
world class anatomist Ewald Weibel explained how Mandelbrot and fractals
changed the way he and the rest of the scientific community thinks about
biological forms [362]. This change in thinking is the basis for his 2000
book Symmorphosis [360] on how the size of parts of an organism must be
matched to the overall functional demands and the design principle that
accomplishes this goal is that of fractals.
It is interesting to consider how dynamics might be phenomeno-
logically incorporated into the description of physiologic processes using
the arguments from this chapter. One image that suggests itself is that
of a feedback system that induces a response on a time scale a factor of b
faster than the input time. When this scaled response is fed back as part
example it was observed that there was no fundamental time scale (period)
in the fractal process, resulting in a correlation function that increased
algebraically in time. This power-law correlation function resulted in a
predicted inverse power-law spectrum of the QRS-complex which was also
observed.
The physiological examples considered in the present chapter share the
common feature of being static. Even the time dependence of the correlation
function obtained from the QRS time series resulted from the static struc-
ture of the His-Purkinje conduction network rather than as a consequence
of any time varying aspect of the network. Subsequent chapters address
some of the dynamic aspects of physiology, including dynamical diseases, as
well as the other aspects of physiology and medicine that are intrinsically
time dependent.
FIGURE 3.1. The low frequency periodic fluctuations in the heart rate (beats/min) are
compared with two measures of respiration, arterial oxygen saturation and ventilatory
volume, over several minutes in very ill cardiac patients. The periodic phenomenon
is referred to as Cheyne-Stokes breathing.
FIGURE 3.2. The curve in A depicts a single trajectory spiraling into the origin, which
is a fixed point. The three curves in B depict how orbits starting from various initial
conditions are drawn into the origin if the initial states are in the basin of attraction for
the fixed point.
those initial states were in the basin of attraction of the focus as suggested
in Figure 3.2B.
If the flow field converges on a single closed curve this is called a limit
cycle and is depicted in Figure 3.3B. Such limit cycles appear as periodic
time series for the variables of interest. The basin of attraction for the limit
cycle can be geometrically internal to the cycle, in which case the orbit evolves
outward to intercept it. The basin can also be outside the cycle, in which
case the orbit is drawn inward to the cycle.
Nature abounds with rhythmic behavior that closely intertwines the
physical, biological and social sciences. The spinning earth gives rise to
periods of dark and light that are apparently manifest through the circa-
dian rhythms in biology. A large but incomplete list of such daily rhythms
is given by Luce [208]: the apparent frequency in fetal activity variations
in body and skin temperature, the relative number of red and white cells
in the blood along with the rate at which blood coagulates, the production
and breakdown of ATP (adenosine triphosphate), cell division in various
organs, insulin secretion in the pancreas, susceptibility to bacteria and in-
fection, allergies and pain tolerance. No attempt has been made here to
distinguish between cause and effect; the stress is on the observed peri-
odicity in each of these phenomena. The shorter periods associated with
the beating heart and breathing, for example, are also modulated by a
circadian rhythm.
FIGURE 3.3. A: The phase space is shown for a harmonic oscillator with a few typical
orbits. Each ellipse has a constant energy. The energy of the oscillator is increased as
the system jumps from an ellipse of smaller diameter to one of larger diameter. B: A
single limit cycle is depicted (solid curve). The dashed curves correspond to transient
trajectories that asymptotically approach the limit cycle for the nonlinear oscillator.
\frac{d^2 V(t)}{dt^2} + \left[ V(t)^2 - \varepsilon^2 \right] \frac{dV(t)}{dt} + \omega_0^2 V(t) = 0,   (3.3)
where V (t) is the voltage, ω0 is the natural frequency of the nonlinear os-
cillator, and ε is an adjustable parameter. In a linear oscillator of frequency
ω0 the constant coefficient of the first-order time derivative determines the rate at which the oscillation is damped.
Although the van der Pol oscillator given by Eq. (3.3) does not have
the broad range of application envisioned by its creators [344, 345], their
comments reveal they understood that these many and varied phenomena
are dominated by nonlinear mechanisms. In this sense their remarks are
prophetic. An example not mentioned by these authors, but one that with
a little thought they would surely have included in their list, is walking.
Experiments testing for the existence of a CPG have been done on animals with spinal cord transec-
tions. It has been shown that such animals are capable of walking under
certain circumstances. Walking, for example, by a cat with its brain stem
sectioned rostral to the superior colliculus, is very close to normal, on a flat,
horizontal surface, when a section of the midbrain is electrically stimulated.
Stepping continues as long as a train of electrical pulses is used to drive
the gait cycle. However this is not a simple linear response process since
the frequency of the stepping increases in proportion to the amplitude of
the stimulation and is insensitive to changes in the frequency of the driver.
FIGURE 3.4. Stride intervals (sec) versus time (min) for slow, normal and fast gaits
under the free walking and metronome-paced conditions. The duration of the data
collection for each free walking series was approximately one hour, and half that for
the metronome data. (From [304] with permission.)
It has been established that the nonlinear analysis of gait data supports
the conjecture made in biomechanics that the CPG in human locomotion
can be modeled as a correlated system of coupled nonlinear oscillators. If
the observed random variations in the stride intervals of normal walking
were related to the chaotic behavior of such nonlinear oscillators, this would
explain the type of multifractal behavior observed in the gait data. The gait
data studied by my colleagues and me [304, 376] depicted in Figure 3.4
were taken from the public domain archive PhysioNet [272] set up by my
colleague A. Goldberger and his coworkers.
FIGURE 3.5. Typical Hölder exponent histograms for the stride interval series in the
freely walking and metronome conditions for normal, slow and fast pace and for elderly
and for a subject with Parkinson’s disease. The average properties are discussed in the
text. (From [304] with permission.)
the oscillator during the jth cycle that is related to the intensity of the
jth fired neural impulse, and A and f0 are respectively the strength and
the frequency of the external driver. The frequency of the oscillator would
be f = f_j if A = 0. We notice that the nonlinear term, as well as the
driver, induces the oscillator to move on a limit cycle. The actual frequency
of each cycle may differ from the inner virtual frequency f_j. We assume
that at the conclusion of each cycle, a new cycle is initiated with a new
inner virtual frequency f_j produced by the SCPG model while all other
parameters are kept constant. However, the simulated stride interval is not
1/f_j but is given by the actual period of each cycle of the van der Pol
oscillator. We found this mechanism more interesting than that proposed
by Hausdorff et al. [147] and Ashkenazy et al. [16], who added noise to the
output of each node to mimic biological variability. In fact, we noticed that
the so-called biological noise is naturally produced by the chaotic solutions
of the nonlinear oscillators in the SCPG, here the forced van der
Pol oscillator fluctuating over its limit cycle.
We assume that the neural centers of the SCPG may fire impulses with
different voltage amplitudes that would induce virtual frequencies {fi } with
finite-size correlations. Therefore we model the time series of virtual fre-
quencies directly. If the reader is interested in the mathematical details of
SCPG they may be found in the literature [305, 376].
FIGURE 3.6. Histogram and probability density estimation of the Hölder exponents:
slow (star; h0 = 0.046, σ = 0.102), normal (triangle; h0 = −0.092, σ = 0.069), fast
(circle; h0 = −0.035, σ = 0.081) gaits. Each curve is an average over the ten members of
the ten cohorts in the experiment. The fitting curves are Normal functions with average
h0 and standard deviation σ. In changing the gait mode from slow to normal the Hölder
exponents h decrease, but from normal to fast they increase. There is also an increase
in the width of the distribution σ in moving from the normal to the slow or fast gait
modes. (From Scafetta et al. [304] with permission.)
The SCPG is used to simulate the stride interval of human gait under a
variety of conditions [305, 376]. We use the average experimentally deter-
mined value of the basic frequency, f_{0,n} = 1/1.1 Hz, so that the average
period of the normal gait is 1.1 seconds; the frequencies of the slow and fast
gaits are chosen to be f_{0,s} = 1/1.45 Hz and f_{0,f} = 1/0.95 Hz,
with average periods of 1.45 and 0.95 seconds, respectively, similar
to the experimentally realized slow and fast human gaits shown in Figure 3.4.
By using the random walk process to activate a particular frequency of the
short-time correlated frequency neural chain, we obtain the time series of
the frequencies {f_j} to use in the time evolution of the van der Pol oscilla-
tor. For simplicity, we keep constant the two parameters of the nonlinear
component of the oscillator (3.5), μ = 1 and p = 1. The only parameters
allowed to change in the model are the mean frequency f0, which changes
also the correlation length, and the intensity A of the driver of the van der
Pol oscillator (3.5).
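The following is a highly simplified sketch of the SCPG idea, not the published model: a van der Pol oscillator whose virtual frequency is redrawn at the start of each gait cycle, with the stride interval read off as the actual period between upward zero crossings. Uncorrelated Gaussian frequency draws replace the correlated neural chain, and the oscillator form is the textbook one rather than Eq. (3.5).

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
mu, f0, dt = 1.0, 1.0 / 1.1, 0.001        # normal gait: mean period 1.1 s

y, intervals = np.array([0.01, 1.0]), []
for _ in range(60):                        # one gait cycle per pass
    w = 2 * np.pi * (f0 + 0.05 * rng.standard_normal())   # virtual frequency
    t = np.arange(0.0, 5.0, dt)
    sol = solve_ivp(lambda s, y: [y[1], mu * (1 - y[0]**2) * y[1] - w**2 * y[0]],
                    (t[0], t[-1]), y, t_eval=t, rtol=1e-8)
    x = sol.y[0]
    up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]   # upward zero crossings
    intervals.append(t[up[0] + 1])                  # actual period of this cycle
    y = sol.y[:, up[0] + 1]                         # hand the state to next cycle

print(np.mean(intervals), np.std(intervals))  # mean near 1.1 s, with variability
```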
An important question this study raises is which aspects of bipedal lo-
comotion are passively controlled by the biomechanical properties of the
body and what aspects are actively controlled by the nervous system. It
is evident that the rhythmic movements are controlled by both feedfor-
ward and feedback [192]. Thus, there is not a simple answer to the above
question because both the biomechanical properties of the body and the
nervous system are closely entangled and both can contribute to the pe-
culiar variability patterns observed in the data. Whether some degree of
stride variability can occur also in an automated passive model for gait,
for example, a walking robot, is a realistic expectation, in particular if the
robot can adjust its movements according to the environment. However,
human locomotion may be characterized by additional peculiar properties
which emerge from its psychobiological origin that naturally generates 1/f
scaling and long-range power-law correlated outputs [210].
The stride interval of human gait presents a complex behavior that de-
pends on many factors. The interpretation of gait dynamics that emerges
from the SCPG model is as follows: the frequency of walking may be as-
sociated with a long-time correlated neural firing activity that induces a
virtual pace frequency, nevertheless the walking is also constrained by the
biomechanical motor control cycle that directly controls movement and
produces the pace itself. Therefore, we incorporate both the neural firing
activity given by a stochastic CPG and the motor control constraint that
is given by a nonlinear filter characterized by a limit cycle. Therefore, we
model our SCPG such that it is based on the coupling of a stochastic with a
hard-wired CPG model and depends on many factors. The most important
parameters of the model are the short-correlation size that measures the
correlation between the neuron centers of the stochastic CPG, and the
intensity A of the driver.
FIGURE 3.7. Typical Hölder exponent histograms for computer-generated stride inter-
val series using the SCPG model in the freely walking and metronome paced conditions
for normal, slow and fast pace. The parameters of the SCPG model were chosen in such
a way as to approximately reproduce the average behavior of the fractal and multifractal
properties of the phenomenological data. The histograms are fitted with Normal func-
tions. The results appear qualitatively similar to those depicted in Figure 3.6. (From
[305] with permission.)
FIGURE 3.8. Analog circuit described by Eqs. (3.6) and (3.7) with tunnel diodes, re-
sistors and inductors. The overall voltage is provided by the battery V0 with the total
current I. (From West et al. [364] with permission.)
The dynamics of the coupled circuit evolve in a four-dimensional phase space with coordinate axes (I_{D1}, I_{D2}, V_{D1}, V_{D2}). The
coupling results in a voltage drop VD1 − VD2 across Rc , producing a current
through each diode dependent on this voltage drop, and can result in in-
duced switching of one oscillator by the other. The time rates of change in
the current through the two diode branches of the circuit are determined
by Kirchhoff’s laws:
L_1 \frac{dI_1(t)}{dt} + (R + R_1) I_1(t) + R_2 I_2(t) = V_0 - V_{D1}   (3.6)

L_2 \frac{dI_2(t)}{dt} + (R + R_2) I_2(t) + R_1 I_1(t) = V_0 - V_{D2}   (3.7)

along with the various currents through the branches of the circuit,

I_{D1} = I_1 + I_c   (3.8)

I_{D2} = I_2 - I_c.   (3.9)
FIGURE 3.9. A typical voltage response curve across a diode is shown. The highest
current is IH , the lowest current is IL , the highest voltage is VH and the lowest voltage
is VL . The arrows indicate how the diode operation jumps discontinuously to VH at
constant IH , and to VL at constant IL .
At this point the current begins to decrease again with little or no change
in the voltage until the current reaches the value IL , at which point the
voltage switches back to VL . The cycle then repeats itself. The cycling of
the coupled system is depicted in Figure 3.10 which shows that the sharply
angled regions of the uncoupled hysteresis loops have been smoothed out by
means of the coupling. Because of its simplicity we use the model of Gollub
et al. [133], in which the transition between VL and VH on the upper branch
and between VH and VL on the lower branch of the hysteresis loop is
instantaneous. West et al. [364] have generalized this model to mimic the
smooth change from one branch of the hysteresis curve to the other that
is observed in physiological oscillators by replacing the above discontinu-
ity with a hyperbolic tangent function along with a voltage which linearly
increases in magnitude with time at the transition point IH and IL .
FIGURE 3.10. The hysteresis cycle of operation across the diode is depicted. The sharp
changes in voltage shown in Figure 3.9 are here smoothed out by the coupling between
diodes. (From West et al. [364] with permission.)
set of system parameter values. Basically we observe that all four of the
dynamic variables, the two voltages V1(t) and V2(t) and the two currents
I1(t) and I2(t), are strictly periodic with period T for all applied voltages
V0 at which oscillations in fact occur. A periodic solution to the dynamic
equations Eqs. (3.6) and (3.7) is a closed curve in the reduced phase space,
as shown in Figure 3.11. Here, for two periods in one oscillator we have
three in the other, so that the coupled frequencies are in the ratio of three
to two. A closed orbit with 2m turns along one direction and 2n turns in
the orthogonal direction indicates a phase locking between the two diodes
such that one diode undergoes n cycles and the other m cycles in a constant
time interval T for the coupled system. Figure 3.12 shows the time trace
of the voltage across diodes 1 and 2 for this case. We observe the 3:2 ratio
of oscillator frequencies over a broad range of values of V0.
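Phase-locking ratios like this 3:2 can be read off a pair of time traces simply by counting threshold crossings over a common window; this sketch does so for synthetic sinusoidal stand-ins for V1(t) and V2(t) rather than for the actual circuit solutions.

```python
import numpy as np
from math import gcd

# Count upward zero crossings of two periodic traces to identify m:n locking.
t = np.linspace(0.0, 20.0, 20001)
v1 = np.sin(2 * np.pi * 1.5 * t + 0.1)   # stand-in for V1(t): frequency 1.5
v2 = np.sin(2 * np.pi * 1.0 * t + 0.1)   # stand-in for V2(t): frequency 1.0

def upward_crossings(v):
    return int(np.sum((v[:-1] < 0.0) & (v[1:] >= 0.0)))

n1, n2 = upward_crossings(v1), upward_crossings(v2)
g = gcd(n1, n2)
print(f"{n1} vs {n2} cycles -> {n1 // g}:{n2 // g} phase locking")   # 3:2
```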
FIGURE 3.11. The current in diode 1 is graphed as a function of the current through
diode 2. We see that the trajectory forms a closed figure indicating the existence of a limit
cycle (R = 3.2Ω, V0 = 0.32V , R1 = 1.3Ω, L1 = 2.772μH, R2 = 1.4Ω, L2 = 3.732μH).
(From West et al. [364] with permission.)
For an externally applied voltage less than 0.225V the frequency ratio
of the two oscillators becomes phase locked (one-to-one coupling) at a fre-
quency that is lower than the intrinsic frequency of the SA node oscillator,
but faster than that of the AV junction oscillator. In Figure 3.13A the
output of both oscillators in the coupled system is depicted, with parameter
values such that the uncoupled frequencies are in the ratio of three to two.
In the coupled system, the SA and AV oscillators are clearly one-to-one
phase locked due to their dynamic interaction.
FIGURE 3.12. Voltage pulses are shown as a function of time (dimensionless units) for
SA (solid line) and AV (dashed line) oscillators with parameter values given in Figure
3.11. Note that there are two AV pulses for three SA pulses, that is, a 3:2 phase locking.
(From West et al. [364] with permission.)
FIGURE 3.13. Voltage pulses with the same parameter values as in Figure 3.11 and V0 =
0.18 V; (A) 1:1 phase locking persists when the SA node is driven by an external voltage
pulse train with pulse width 0.5 dimensionless time units and period 4.0. (B) Driver
period is reduced to 2.0 with emergence of 3:2 Wenckebach periodicity. (C) Driver period
reduced to 1.5, resulting in a 2:1 AV block. Closed brackets denote an SA pulse associated
with an AV response. Open brackets denote an SA pulse without an AV response ('non-conducted
beat'). (From West et al. [364] with permission.)
When the driving period is reduced, a Wenckebach-like pattern emerges such
that the interval between the SA pulse and the AV response becomes progressively
longer until one SA pulse is not followed by an AV pulse. This cycle
then repeats itself, analogous to AV Wenckebach periodicity, which is characterized
by progressive prolongation of the PR interval until a P-wave is
not followed by a QRS-complex. These AV Wenckebach cycles, which may
be seen under a variety of pathological conditions, are also a feature of
normal electrophysiological dynamics and can be induced by driving the
atria with an electronic pacemaker [176].
The findings of both phase-locking and bifurcation-like behavior are particularly
noteworthy in this two-oscillator model because they emerge without
any special assumptions regarding conduction time between oscillators,
refractoriness of either oscillator to repetitive stimulation, or the differential
effect of one oscillator on the other.
The observed dynamics support the contention that the AV junction may
be more than a passive conduit for impulses generated by the sinus node,
a view also suggested by Guevara and Glass [141]. The present model is consistent
with the alternative interpretation that normal sinus rhythm corresponds
to one-to-one phase locking (entrainment) of two or more active oscillators,
and does not require complete suppression of the slower pacemaker by the
faster one, as do the passive conduit models. It should be emphasized,
however, that when two active pacemakers become one-to-one phase locked,
the intrinsically slower one may be mistaken for a passive element because
of its temporal relation to the intrinsically faster one. Furthermore, the
model is of interest because it demonstrates marked qualitative changes
in system dynamics, characteristic of AV Wenckebach and 2:1 AV block,
occurring when a single parameter (the driving frequency) is varied over some
critical range of values.
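The behavior of two coupled nonlinear oscillators of this kind is easy to explore numerically. The sketch below is a loose illustration only: rather than the diode circuit of West et al. [364], it couples two van der Pol oscillators with different intrinsic frequencies; every name and parameter value is an illustrative assumption, not taken from the original model.

import numpy as np
from scipy.integrate import solve_ivp

def coupled_vdp(t, u, mu=1.0, w1=1.0, w2=1.5, k=0.3):
    # u = (x1, y1, x2, y2): two van der Pol oscillators with intrinsic
    # frequencies w1 and w2, coupled symmetrically with strength k.
    x1, y1, x2, y2 = u
    return [y1,
            mu*(1 - x1**2)*y1 - w1**2*x1 + k*(x2 - x1),
            y2,
            mu*(1 - x2**2)*y2 - w2**2*x2 + k*(x1 - x2)]

sol = solve_ivp(coupled_vdp, (0, 200), [0.1, 0.0, 0.0, 0.1],
                max_step=0.01, dense_output=True)
t = np.linspace(150, 200, 5000)        # examine the late-time behavior
x1, _, x2, _ = sol.sol(t)
# Counting the cycles of x1 against those of x2 gives the locking ratio.

Sweeping the coupling strength or the frequency ratio in such a sketch produces transitions between locking ratios analogous to the 1:1, 3:2 and 2:1 patterns of Figure 3.13.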
Up to this point we have been using the traditional concept of a limit
cycle to discuss one kind of dynamic process, that is, the beating of the
heart and the occurrence of certain cardiac pathologies. Zebrowski et al.
[409] consider a modification of the van der Pol oscillator by introducing
a potential function with multiple minima. Adjusting parameters in this
potential enables them to more closely model the heart's dynamic response
to the autonomic nervous system. They examine the oscillator response to
a single pulse as well as to a periodic square-wave, the former producing a
change in phase and the latter an irregular response. In this way they are
able to gain insight into the underlying dynamics of irregular heart rate,
systole, sinus pause and other cardiac phenomena.
Extending this discussion, consider models of the various other biorhythms
mentioned earlier. I explore certain of the modern concepts arising in nonlinear
dynamics and investigate how they may be applied in a biomedical
context, where they eventually prove of value in understanding both erratic ECG
and EEG time series. It is apparent that the classical limit cycle is too
well ordered to be of much assistance in that regard, and so I turn to an
attractor that is a bit strange.
The prototypical example is the Lorenz system:
$$\frac{dX}{d\tau} = -\sigma X + \sigma Y \tag{3.12}$$
$$\frac{dY}{d\tau} = -XZ + rX - Y \tag{3.13}$$
$$\frac{dZ}{d\tau} = XY - bZ \tag{3.14}$$
where σ, r and b are physical parameters. The solutions to this system
of equations can be identified with trajectories in phase space. What is of
interest here are the properties of non-periodic bounded solutions in this
three-dimensional phase space. A bounded solution is one that remains
within a restricted domain of phase space as time goes to infinity.
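A minimal sketch of how such trajectories are generated numerically, assuming SciPy and the conventional parameter values σ = 10, r = 28, b = 8/3 (an assumption; the text has not fixed them here):

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, r=28.0, b=8.0/3.0):
    X, Y, Z = u
    return [-sigma*X + sigma*Y,    # Eq. (3.12)
            -X*Z + r*X - Y,        # Eq. (3.13)
            X*Y - b*Z]             # Eq. (3.14)

sol = solve_ivp(lorenz, (0, 100), [1.0, 1.0, 1.0], max_step=0.01)
# sol.y rows are X(t), Y(t), Z(t): the trajectory remains bounded but
# never settles onto a fixed point or a periodic orbit.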
The phase space for the set of equations Eqs. (3.12)–(3.14), is three-
dimensional and the solution to them traces out a curve Γt (x, y, z) given
by the locus of values of X(t) = [X(t), Y (t), Z(t)] shown in Figure 3.14. We
can associate a small volume V0 (t) = X0 (t)Y0 (t)Z0 (t) with a perturbation
of the trajectory and investigate how this volume of phase space changes
with time. If the original flow is confined to a finite region R then the rate
of change of the small volume with time ∂V0 /∂t must be balanced by the
flux of volume J(t) = V0 (t)Ẋ(t) across the boundaries of R. The quantity
Ẋ(t) in the expression for the flux J represents the time rate of change of
the dynamical variables in the absence of the perturbations, that is, the
unperturbed flow field that can sweep the perturbation out of the region
R. The balancing condition is expressed by an equation of continuity and
in the physics literature is written
$$\frac{\partial V_0(t)}{\partial t} + \nabla \cdot \mathbf{J}(t) = 0 \tag{3.15}$$
or substituting the explicit expression for the flux into Eq.(3.15) and re-
ordering terms yields
$$\frac{1}{V_0(t)}\frac{dV_0(t)}{dt} = \partial_x \dot{X}(t) + \partial_y \dot{Y}(t) + \partial_z \dot{Z}(t) \tag{3.16}$$
where the total time derivative operator is
$$\frac{d}{dt} \equiv \frac{\partial}{\partial t} + \dot{\mathbf{X}}(t)\cdot\nabla \tag{3.17}$$
and is called the convective or total derivative of the volume. Using the
Lorenz equations of motion for the time derivatives in Eq.(3.16) we obtain
$$\frac{1}{V_0(t)}\frac{dV_0(t)}{dt} = -\left(\sigma + b + 1\right). \tag{3.18}$$
Equation (3.18) is interpreted to mean that as an observer moves along
with an element of phase space volume V0(t) associated with the flow field,
the volume contracts at the rate (σ + b + 1), that is, the solution to Eq. (3.18)
is
$$V_0(t) = V_0(0)\, e^{-(\sigma + b + 1)t}. \tag{3.19}$$
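The contraction rate in Eq. (3.18) can also be verified symbolically; a short sketch assuming the sympy library:

import sympy as sp

X, Y, Z, sigma, r, b = sp.symbols('X Y Z sigma r b')
flow = [-sigma*X + sigma*Y, -X*Z + r*X - Y, X*Y - b*Z]
# Divergence of the Lorenz flow field, i.e. the right side of Eq. (3.16):
div = sum(f.diff(v) for f, v in zip(flow, (X, Y, Z)))
print(sp.simplify(div))      # -> -b - sigma - 1, recovering Eq. (3.18)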
FIGURE 3.14. The attractor solution to the Lorenz system of equations is depicted in a
three-dimensional phase space (X, Y, Z). The attractor is strange in that it has a fractal
(noninteger) dimension.
The trajectory can be characterized by the finite-time Fourier transform of the
time series and the corresponding power spectral density:
$$X_T(\omega) \equiv \frac{1}{2\pi}\int_{-T/2}^{T/2} dt\, X(t)\, e^{-i\omega t} \tag{3.20}$$
$$S_{XX}(\omega) \equiv \lim_{T\to\infty}\frac{\left|X_T(\omega)\right|^2}{T}. \tag{3.21}$$
In Figure 3.15 are displayed the power spectral densities (PSD) SXX(ω) and
SZZ(ω) as calculated by Farmer et al. [91] using the trajectory shown in
Figure 3.14. It is apparent from the power spectral density of the X(t)
time series that there is no dominant periodic X-component in the dynamics
of the attractor, although lower frequencies are favored over higher
ones. The power spectral density for the Z(t) time series is much flatter
overall, but there are a few isolated frequencies at which
more energy is concentrated. This energy concentration would appear as
a strong periodic component in the time trace of Z(t). From these spectra
one would conclude that X(t) is non-periodic, but that Z(t) possesses
both periodic and non-periodic components. In fact, from the linearity of
the Fourier transform Eq. (3.20), one could say that Z(t) is a superposition
of these two parts.
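Spectra such as those of Figure 3.15 may be estimated directly from a sampled trajectory. A sketch of the periodogram estimate of Eq. (3.21), up to normalization; the names X_series and dt are placeholders for a uniformly sampled time series and its sampling interval:

import numpy as np

def periodogram(x, dt):
    # Finite-time Fourier transform, Eq. (3.20), of the centered series,
    # squared and scaled as in Eq. (3.21).
    xt = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(len(x), dt)
    return freqs, np.abs(xt)**2 * dt / len(x)

# e.g. freqs, Sxx = periodogram(X_series, dt)
# For the Lorenz attractor the X-spectrum is broadband, while the
# Z-spectrum shows isolated peaks riding on a broadband background.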
FIGURE 3.15. The power spectral densities SXX(ω) and SZZ(ω) are calculated using
the solution for the X- and Z-components of the Lorenz equations. (From Farmer et al. [91]
with permission.)
So what does this all mean? In part it means that the dynamics of a
complex network such as the brain or the heart might be random even if its
description can be 'isolated' to a few (three or more) degrees of freedom that
interact in a deterministic but nonlinear way. If the network is dissipative,
that is, information is on average extracted from it, while the
network remains open to the environment so that information is supplied
through the boundary conditions, then a strange attractor is not
only a possible manifold for the solutions to the dynamic equations; it, or
something like it, may even be probable.
The aperiodic or chaotic behavior of an attractor is subsequently shown
to be a consequence of sensitivity to initial conditions: trajectories that
are initially nearby separate exponentially as they evolve forward in time
on a chaotic attractor. Thus, as Lorenz observed, microscopic perturbations
are spread throughout the flow like a speck of dye kneaded into dough,
just as chaos thoroughly mixes the trajectories in phase space on the attractor.
The dynamic equations for Rössler's [297] three-degree-of-freedom system
are
$$\frac{dX}{dt} = -(Y + Z) \tag{3.24}$$
$$\frac{dY}{dt} = X + aY \tag{3.25}$$
$$\frac{dZ}{dt} = b + XZ - cZ \tag{3.26}$$
where a, b and c are constants. For one set of parameter values, Farmer
et al. [91] referred to the attractor as 'the funnel'; the obvious reason for
this name is seen in Figure 3.16. Another set of parameter values yields
the 'simple Rössler attractor' (cf. Figure 3.17d). Both of these chaotic attractors
have one positive Lyapunov exponent. As we mentioned earlier,
a Lyapunov exponent is a measure of the rate at which trajectories separate
from one another (cf. Section 3.2). A negative exponent implies the
orbits approach a common fixed point. A zero exponent means the orbits
maintain their relative positions; they are on a stable attractor. Finally, a
positive exponent implies the orbits separate exponentially; they are on a
chaotic attractor. In Figure 3.17 are depicted phase space projections of the
attractor for various values of the parameters.
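The sign of the largest Lyapunov exponent can be estimated with the standard two-trajectory renormalization recipe; this is a generic numerical method, not one attributed to [91] or [297], and the step sizes and tolerances below are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, u, a=0.2, b=0.2, c=5.0):
    X, Y, Z = u
    return [-(Y + Z), X + a*Y, b + X*Z - c*Z]   # Eqs. (3.24)-(3.26)

def max_lyapunov(u0, d0=1e-8, dt=1.0, steps=2000):
    # Follow a fiducial and a perturbed trajectory; after each interval
    # dt, record the growth of their separation and renormalize it.
    u = np.asarray(u0, dtype=float)
    v = u + np.array([d0, 0.0, 0.0])
    total = 0.0
    for _ in range(steps):
        u = solve_ivp(rossler, (0, dt), u, max_step=0.01).y[:, -1]
        v = solve_ivp(rossler, (0, dt), v, max_step=0.01).y[:, -1]
        d = np.linalg.norm(v - u)
        total += np.log(d / d0)
        v = u + (v - u) * (d0 / d)   # renormalize the separation
    return total / (steps * dt)

u0 = solve_ivp(rossler, (0, 100), [1.0, 1.0, 1.0], max_step=0.01).y[:, -1]
print(max_lyapunov(u0))   # a positive value signals a chaotic attractor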
FIGURE 3.16. The ‘funnel’ attractor solution to the Rössler equations with parameter
values a = 0.343, b = 1.82 and c = 9.75. (From [297] with permission.)
FIGURE 3.17. An X−Y phase plane plot of the solution to the Rössler equations with
parameter values a = 0.20 and b = 0.20 at four different values of c: 2.5, 3.5, 4 and 5.
FIGURE 3.18. Next amplitude maximum plot of the solution to the Rössler equations
for c = 5, a = 0.2 and b = 0.2. Each amplitude of the oscillation of X was plotted against
the preceding amplitude.
FIGURE 3.19. The spiral is an arbitrary orbit depicting a function y = f(x). The
intersection of the spiral curve with the x-axis defines a set of points x1, x2, ... that can
be obtained from a mapping determined by the mapping function f(x).
Such dynamics are captured by a one-dimensional mapping
$$X_{n+1} = f(X_n). \tag{3.29}$$
In genetics, for example, Xn could describe the change in the gene frequency
between successive generations; in epidemiology, the variable could
denote the fraction of the population infected at time n; in psychology,
certain learning theories can be cast in a form where Xn is interpreted
as the number of bits of information that can be remembered up to generation
n; in sociology, the iterate might be interpreted as the number of
people having heard a rumor at time n, and Eq. (3.29) would then describe
the propagation of rumors in societies of various structures; see, for example,
Kemeny and Snell [183]. The potential applications of such modeling
equations are therefore restricted only by our imaginations.
Consider the simplest mapping, also called a recursion relation, in which
a population Xn of organisms per unit area on a petri dish in the nth generation
is strictly proportional to the population in the preceding generation
with a proportionality constant μ:
$$X_{n+1} = \mu X_n. \tag{3.30}$$
Equation (3.30) is quite easy to solve. Suppose that the population has the
level X0 = N0 at the initial generation; then the recursion relation yields
the sequence
$$X_n = \mu^n N_0, \qquad n = 0, 1, 2, \ldots \tag{3.32}$$
This rather simple solution already exhibits a number of interesting properties.
First, if the net birth rate μ is less than unity, then we can write
μ^n = e^{−nβ} with β = −ln μ > 0, so that the population decreases exponentially
between successive generations. This is a reflection of the
fact that with μ < 1 the population of organisms fails to reproduce itself
from generation to generation and therefore exponentially approaches
extinction:
$$\lim_{n\to\infty} X_n = 0 \quad \text{if } \mu < 1. \tag{3.33}$$
On the other hand, if μ > 1, then we can write μ^n = e^{nβ} where β (= ln μ) >
0, so the population increases exponentially from generation to generation.
This is a reflection of the fact that with μ > 1 the population has an excess
at each generation, resulting in a population explosion. This is the Malthus
exponential population growth:
$$\lim_{n\to\infty} X_n = \infty \quad \text{if } \mu > 1. \tag{3.34}$$
The only value of μ for which the population does not have these extreme
tendencies is μ = 1 when, since the population reproduces itself exactly in
each generation, we obtain the unstable situation:
$$\lim_{n\to\infty} X_n = N_0 \quad \text{if } \mu = 1. \tag{3.35}$$
Of course this simple model is no more valid than the continuous growth
law of Malthus [225], which he used to describe the exponential growth
of human populations. It is curious that the modeling of such growth, although
attributed to Malthus, did not originate with him. In fact Malthus
was an economist and clergyman interested in the moral implications of
such population growth. His contribution to population dynamics was the
exploration of the consequences of the fact that a geometrically growing
population always outstrips a linearly growing food supply, resulting
in overcrowding and misery. Why the food supply should grow linearly
was never questioned by him. A more scientifically oriented investigator,
Verhulst [348], put forth a theory that mediated the pessimistic view of
Malthus. Verhulst noted that the growth of real populations is not unbounded.
He argued that such factors as the availability of food, shelter,
sanitary conditions, and so on, all restrict (or at least influence) the growth
of populations. He included these effects by making the growth rate μ a
function of the population level. His argument allows generalization of the
discrete model to include the effects of limited resources. In particular, he
assumed the birthrate to decrease with increasing population in a linear
way:
$$X_{n+1} = \mu X_n\left[1 - \frac{X_n}{\Theta}\right], \tag{3.37}$$
where Θ is the population's saturation level.
It is clear that when the population is very far from its saturation level,
Xn ≪ Θ, the population grows exponentially since the nonlinear
term is negligible. However, at some point the ratio Xn/Θ is of the order of
unity and the rate of population growth is retarded. When Xn = Θ there
are no more births. Biologically the regime Xn > Θ corresponds to a negative
birthrate, where the number of deaths exceeds the number of births, and
so we restrict the region of interpretation of this model to [1 − Xn/Θ] > 0.
Finally, we reduce the number of parameters from two, μ and Θ, to one
by introducing Yn = Xn/Θ, the fraction of the saturation level achieved by
the population at generation n. In terms of this ratio variable the recursion
relation Eq. (3.37) becomes the normalized logistic equation
$$Y_{n+1} = \mu Y_n\left(1 - Y_n\right). \tag{3.38}$$
The steady states of the mapping satisfy
$$Y_{ss} = \mu Y_{ss}(1 - Y_{ss}), \tag{3.39}$$
which has the two roots Yss = 0 and Yss = 1 − 1/μ. The Yss = 0
root corresponds to extinction, but we now have a second steady solution
to the mapping, Yss = 1 − 1/μ, which is positive for μ > 1. One of the
questions of interest in the more general treatment of this problem
is to determine to which of these steady states the population evolves as
the years go by: to extinction or to some finite constant level.
Before we examine the more general properties of Eq. (3.38) and equations
like it, let us use a more traditional tool of analysis and examine the stability
of the two steady states found above. Traditionally the stability of a system
in the vicinity of a given value is determined by perturbation theory. I use
that technique now and write
$$Y_n = Y_{ss} + \xi_n \tag{3.41}$$
where ξn ≪ Yss, so that Eq. (3.41) denotes a small change in the relative
population from its steady-state value. Substituting Eq. (3.41) into
Eq. (3.38) and keeping only terms linear in the perturbation yields
$$\xi_{n+1} = \mu\left(1 - 2Y_{ss}\right)\xi_n. \tag{3.42}$$
In the neighborhood of the steady state Yss = 0 this reduces to ξn+1 = μξn,
so that the extinction state is stable for μ < 1 and unstable for μ > 1. It is
also clear that in the unstable case the condition for perturbation theory
eventually breaks down as ξn increases. When this divergence occurs a
more sophisticated analysis than linear perturbation theory is required.
In the neighborhood of the steady state Yss = 1 − 1/μ the recursion
relation specifying the stability condition becomes
$$\xi_{n+1} = \left(2 - \mu\right)\xi_n. \tag{3.44}$$
The preceding analysis can again be repeated, with the result that if 1 >
2 − μ > −1 the fixed point Yss = 1 − 1/μ is stable, which implies that
the birthrate is in the interval 1 < μ < 3. The approach to the fixed point is
monotonic for 1 < μ < 2 but, because of the change in sign, oscillatory for
2 < μ < 3. Similarly, the fixed point is unstable for 0 < μ < 1 (monotonic)
and μ > 3 (oscillatory).
FIGURE 3.20. The solution to the logistic map is depicted for various choices of the
control parameter μ. (a) The solution Yn approaches a constant value asymptotically
for μ = 2.8. (b) The solution Yn is a periodic orbit, a 2-cycle, after the initial transient
dies out for μ = 3.2. (c) The solution Yn from (b) bifurcates to a 4-cycle for μ = 3.53.
(d) The solution Yn is chaotic for μ = 3.9.
Following Olsen and Degn [255] I examine the nature of the solutions to
the logistic equation as a function of the parameter μ a bit more closely.
This can be done using a simple computer code to evaluate the iterates Yn .
For 0 < μ ≤ 4 insert an initial value 0 ≤ Y0 ≤ 1 into Eq. (3.38) and generate
a Y1 , which is also in the interval [0, 1]. This second value of the iterate is
then inserted back into Eq. (3.38) and a third value Y2 is generated; here
again 0 ≤ Y2 ≤ 1. This process of generation and reinsertion constitutes
the dynamic process, which is a mapping of the unit interval into itself in a
two-to-one manner; that is, two values of the iterate at step n can be used
to generate a particular value of the iterate at step n + 1. In Figure 3.20a
we show Yn as a function of n for μ = 2.8 and observe that as n becomes
large (n > 10) the value of Yn becomes constant. This value is a fixed
point of the mapping equal to 1 − 1/μ = 0.643, and is approached from all
initial conditions 0 ≤ Y0 ≤ 1; it is an attractor. Quite different behaviors
are observed for the same initial points but different values of the control
parameter, say when μ = 3.2. In Figure 3.20b we see that after an initial
transient the process becomes periodic, that is to say the iterate alternates
between two values. This periodic orbit is called a 2-cycle. Thus, the fixed
point becomes unstable at the parameter value μ = 3 and bifurcates into
a 2-cycle. Here the 2-cycle becomes the attractor for the mapping. At a
slightly larger value of μ, say μ = 3.53, the mapping settles down into a
pattern in which the value of the iterate alternates between two large values
and two small values (cf. Figure 3.20c). Here again the existing orbit, a
2-cycle, has become unstable at μ = 3.444 and bifurcated into a 4-cycle.
Thus, we see that as μ is increased a fixed point changes into a 2-cycle, a
2-cycle changes into a 4-cycle, which in turn changes into an 8-cycle and so
on. This process of period doubling is called subharmonic bifurcation since
a cycle of a given frequency ω0 bifurcates into periodic orbits which are
subharmonics of the original orbit, that is, for k bifurcations the frequency
of the orbit is ω0 /2k . The attractor for the dynamic process can therefore
be characterized by the appropriate values of the control parameter μ.
As one might have anticipated, the end point of this period doubling
process is an orbit with an infinite period (zero frequency). An infinite
period implies that the system is aperiodic, that is to say, the pattern of the
values of the iterate does not repeat itself in any finite number of iterations,
or said differently it does not repeat itself in any finite time interval (cf.
Figure 3.20d). We have already seen that any process that does not repeat
itself as time goes to infinity is completely unique and hence is random.
It was this similarity of the mapping to discrete random sequences that
motivated the coining of the term chaotic to describe such attractors. The
deterministic mapping Eq. (3.38) can therefore generate chaos for certain
values of the parameter μ.
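The four behaviors of Figure 3.20 are reproduced by a few lines of code iterating Eq. (3.38); a minimal sketch:

def logistic_orbit(mu, y0=0.3, n=60):
    # Iterate the normalized logistic map Y_{n+1} = mu*Y_n*(1 - Y_n).
    orbit = [y0]
    for _ in range(n):
        orbit.append(mu * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

for mu in (2.8, 3.2, 3.53, 3.9):
    print(mu, [round(y, 3) for y in logistic_orbit(mu)[-6:]])
# mu = 2.8: the tail sits at the fixed point 1 - 1/mu = 0.643
# mu = 3.2: the tail alternates between two values (a 2-cycle)
# mu = 3.53: a 4-cycle; mu = 3.9: an aperiodic, chaotic sequence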
Returning now to the more general context, it may appear that limiting
the present analysis to one-dimensional systems is unduly restrictive;
however, recall that the system is pictured as a projection of a more
complicated dynamical system onto a one-dimensional subspace (cf., for
example, Figure 3.19). His plea was motivated by the recognition that the
traditional mathematical tools such as Fourier analysis, orthogonal functions,
etc. are all fundamentally linear. The quadratic mapping has been
intensively studied in the physical sciences, usually in the scaled
form Eq. (3.38), and when graphed versus Yn yields the parabolic curve depicted
in Figure 3.21.
FIGURE 3.21. A mapping function with a single maximum is shown. In (a), the iteration
away from the initial point Y0 is depicted. In (b), the convergence to the stationary
(fixed) point Y ∗ is shown.
FIGURE 3.22. The map f with a single maximum in Figure 3.21 yields an f 2 map with
a double maximum. The slope at the point Y ∗ is indicated by the dashed line and is
seen to increase as the control parameter μ is raised in the map from (a) to (b).
A point Y∗ satisfying
$$Y^* = f(Y^*) \tag{3.46}$$
is called a fixed point of the dynamic equation; that is, Y∗ is the Yss from
Eq. (3.39). The fixed points correspond to the steady-state solutions of the
discrete equation, and for Eq. (3.40) these are Y∗ = 1 − 1/μ (nontrivial) and
Y∗ = 0 (trivial). We can see in Figure 3.21b that the iterated points approach
the fixed point Y∗ and reach it as n → ∞. To determine if a
mapping approaches a fixed point asymptotically, that is, whether or not
the fixed point is stable, we examine the slope of the function at the fixed
point [60, 197, 228]. The function acts like a curved mirror, either focusing
the ray towards the fixed point under multiple reflections or defocusing
it away. The asymptotic direction (either towards or away from the
fixed point) is determined by the slope of the function at Y∗, which is
depicted in Figure 3.21 by the dashed line and denoted by f′(Y∗), that
is, the (tangent to the curve) derivative of f(Y) at Y = Y∗. As long as
|f′(Y∗)| < 1 the iterations of the map are attracted to the fixed point, just as
the perturbation ξ approaches zero in Eq. (3.44) near the stable fixed point.
FIGURE 3.23. The bifurcation of the solution to the logistic mapping as a function of
μ∞ − μ is indicated. The logarithmic scale was chosen to clearly depict the bifurcation
regions. [60]
and two new solutions to the quadratic mapping emerge. These are the two
intersections of the quadratic map with the diagonal having slopes of
magnitude less than unity, Y1∗ and Y2∗. By the chain rule of differentiation,
the derivative of f² at Y1∗ and Y2∗ is the product of the derivatives along
the periodic orbit,
$$\frac{d}{dY}f^2(Y)\Big|_{Y_1^*} = f'(Y_1^*)\,f'(Y_2^*),$$
so that the slope is the same at both points of the period-2 orbit [197]; in
fact the slope is the same at all k values of a period-k orbit. This is a
continuous process starting from the stable fixed point Y∗ when |f′|
< 1; as μ is increased this point becomes unstable at |f′| = 1 and generates
two new stable points with |(f²)′| < 1 for a period-2 orbit; as μ is increased
further these points become unstable at |(f²)′| = 1 and generate four new
stable points with |(f⁴)′| < 1 for a period-4 orbit. This bifurcation sequence
is tied to the value of the parameter μ. As this parameter is increased the
discrete equation undergoes a sequence of bifurcations from the fixed point
to stable cycles with periods 2, 4, 8, 16, 32, ..., 2^k. In each case the bifurcation
process is the same as that for the transition from the stable fixed point
to the stable period-2 orbit. A graph indicating the location of the stable
values of Y for a given μ is given in Figure 3.23.
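The data behind such a bifurcation diagram are generated by discarding a transient and recording the attractor at each value of μ; a sketch on a linear scale in μ (the transient and sample lengths are illustrative choices):

import numpy as np

mus = np.linspace(2.8, 4.0, 600)
points = []
for mu in mus:
    y = 0.3
    for _ in range(500):            # discard the transient
        y = mu * y * (1.0 - y)
    for _ in range(100):            # record points on the attractor
        y = mu * y * (1.0 - y)
        points.append((mu, y))
# A scatter plot of the (mu, y) pairs shows the fixed point splitting
# into a 2-cycle at mu = 3, a 4-cycle near mu = 3.45, and the hazy
# chaotic bands beyond the accumulation point mu ~ 3.5699.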
FIGURE 3.24. The same as Figure 3.23 but with a linear scale in μ∞ − μ so that the
hazy region denoting chaos is clearly observed. [60]
Here we follow Ott [260] and consider only invertible maps where Eq.(3.49)
can be solved uniquely for Xn and Yn as functions of Xn+1 and Yn+1 :
FIGURE 3.25. An arbitrary trajectory is shown and its intersections with a plane parallel
to the x1,x3-plane at x2 = constant are recorded. The points A, B, C, ... define a map
as in Figure 3.19. This is the Poincaré surface of section.
$$J \equiv \begin{vmatrix} \dfrac{\partial X_{n+1}}{\partial X_n} & \dfrac{\partial X_{n+1}}{\partial Y_n} \\[6pt] \dfrac{\partial Y_{n+1}}{\partial X_n} & \dfrac{\partial Y_{n+1}}{\partial Y_n} \end{vmatrix} \tag{3.55}$$
Inserting Eqs. (3.51) and (3.52) into Eq. (3.55) we find J = −β, so that
using the magnitude of the Jacobian the volume at consecutive times is
given by
$$V_{n+1} = \beta^n V_1. \tag{3.57}$$
FIGURE 3.26. Iterated points of the Hénon map for 10^4 iterations with the parameter
values c = 1.4 and β = 0.2. [260]
This combination of stretching and contraction gives rise to the aperiodic
behavior of strange attractors. There exists, however, a large variety of attractors
which are neither periodic orbits nor fixed points and which are not strange
attractors. All of these seem to present more or less pronounced chaotic
features [80]. Thus, there are attractors that are erratic but not strange.
I will not pursue this general class here.
As an example of a two-dimensional invertible mapping we first transform
the logistic equation into the family of maps X_{n+1} = 1 − cX_n², with the
parametric identification c = (μ/2 − 1)μ/2 and 0 < c ≤ 2, since 2 < μ ≤ 4,
so that Xn maps the interval [−1, 1] onto itself. Then using Eqs. (3.51) and
(3.52) we obtain the mapping first studied by Hénon [152].
FIGURE 3.27. (a) Enlargement of the boxed region in Figure 3.26, 10^4 iterations; (b)
enlargement of the square in (a), 10^6 iterations; (c) enlargement of the square in (b),
5×10^6 iterations. [260]
In Figure 3.26 we have copied the loci of points for the Hénon system, in
which 10^4 successive points from the mapping with the parameter values
c = 1.4 and β = 0.2 were initiated from a variety of choices of (X0, Y0). Ott [260]
points out that, as the map is iterated, points come closer and closer to the
attractor, eventually becoming indistinguishable from it. This, however, is
an illusion of scale.
If the boxed region of the figure is magnified one obtains Figure 3.27a,
from which a great deal of the structure of the attractor can be discerned. If
the boxed region in this latter figure is magnified, then what had appeared
as three unequally spaced lines appears in Figure 3.27b as three distinct
parallel intervals containing structure. Notice that the region in the box
of Figure 3.27a appears the same as that in Figure 3.27b. Magnifying the
boxed region in this latter figure we obtain Figure 3.27c, which aside from
resolution is a self-similar representation of the structure seen on the two
preceding scales. Thus, scale-invariant, Cantor-set-like structure transverse
to the linear structure of the attractor is observed. Ott [260] concludes that
because of this self-similar structure the attractor is probably strange. In
fact it has been verified by direct calculation that initially nearby points
separate exponentially in time [95, 67], thereby satisfying at least one
definition of a strange attractor.
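Because Eqs. (3.51) and (3.52) are not reproduced here, the sketch below assumes the standard Hénon form X_{n+1} = 1 − cX_n² + Y_n, Y_{n+1} = βX_n, which is consistent with the quoted Jacobian J = −β; repeatedly magnifying the output reproduces the self-similar structure of Figure 3.27.

import numpy as np

def henon_orbit(n, c=1.4, beta=0.2, x=0.1, y=0.0):
    # Assumed standard Henon form; parameter values follow Figure 3.26.
    pts = np.empty((n, 2))
    for i in range(n):
        x, y = 1.0 - c*x*x + y, beta*x
        pts[i] = (x, y)
    return pts

pts = henon_orbit(10**4)
# Selecting the points inside the box of Figure 3.27a and re-plotting
# with more iterations exposes the Cantor-like transverse structure.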
where Yn+1 = f(Yn). Shaw [316] has shown that σ is also the average
information change over the entire interval of iteration. He argues that a
map may be interpreted as a machine that takes a single input Y0 and
generates a string of numbers during the iteration process. If the string
has a pattern, such as would arise for an attractor that is a fixed point
or periodic orbit, then after a very short time the machine gives no new
information. On the other hand, if the orbit is chaotic so that the string of
numbers is random, then each iterate is new to the observer and gives a
new piece of information. Shaw convincingly demonstrated that a chaotic
process is a generator of information. He argues that a negative σ implies
a periodic orbit and that the magnitude of σ measures the degree of stability
of that orbit.
Consider now the evolution of a small perturbation u_n of the trajectory X_n,
$$u_{n+1} = f(X_n + u_n) - f(X_n) \equiv \frac{\partial f}{\partial X}\,u_n + \cdots, \tag{3.66}$$
which defines the linearized mapping when all the higher-order terms are
neglected,
$$u_{n+1} = A(X_n)\,u_n, \tag{3.67}$$
where
$$A(X_n) \equiv \frac{\partial f}{\partial X}\bigg|_{X = X_n}, \tag{3.68}$$
so that the map given by Eq. (3.67) is linearized along the trajectory X_n.
Following Nicolis [250], the solution to Eq. (3.67) for a given initial condition
ê0 at the nth iteration can be written as the product of n matrices,
$$u_n = U_n^{n-1} U_{n-1}^{n-2} \cdots U_1^0\,\hat e_0. \tag{3.69}$$
Applying the first matrix to ê0 yields a vector of length d1 and direction ê1,
$$U_1^0\,\hat e_0 = d_1 \hat e_1, \tag{3.70}$$
where ê1 has unit norm. Now we apply U_2^1 to ê1 and obtain a vector of
length d2 and direction ê2. Finally we can rewrite Eq. (3.69) as the product
of n numbers
$$u_n = d_1 d_2 \cdots d_n\,\hat e_n, \qquad \|\hat e_n\| = 1, \tag{3.71}$$
instead of a product of n matrices. The maximal Lyapunov exponent is
then defined as
$$\sigma = \lim_{n\to\infty}\frac{1}{n}\ln\|u_n\| = \lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n}\ln d_j. \tag{3.72}$$
FIGURE 3.28. Lyapunov exponents define the average stretching or contraction of trajectories
in characteristic directions. Here we show the effects of applying a two-dimensional
mapping to circles of initial conditions. A sufficiently small circle of radius ε(0) is transformed
after n iterations into an ellipse with major radius λ1^n ε(0) and minor radius λ2^n ε(0).
$$\lambda_j = \lim_{n\to\infty}\Big|\, j\text{th eigenvalue of } \left[A(X_n, Y_n)\cdots A(X_1, Y_1)\right]\Big|^{1/n} \tag{3.75}$$
where A(X, Y) is the Jacobian matrix of the map:
$$A(X, Y) = \begin{pmatrix} \dfrac{\partial f_1(X,Y)}{\partial X} & \dfrac{\partial f_1(X,Y)}{\partial Y} \\[6pt] \dfrac{\partial f_2(X,Y)}{\partial X} & \dfrac{\partial f_2(X,Y)}{\partial Y} \end{pmatrix}. \tag{3.76}$$
A complementary characterization of a time series X(t) is its autocorrelation
function
$$C_{xx}(\tau, T) = \frac{1}{T}\int_{-T/2}^{T/2} X(t)\,X(t + \tau)\,dt. \tag{3.77}$$
Note that for a finite sample length T the integral defines an estimate of
the autocorrelation function.
In Figure 3.29 a sample history of X(t) is given along with its displaced
time trace X(t + τ ). The point by point product of these two series is given
in Eq. (3.77) and then the average over the time interval (−T /2, T /2) is
taken. A sine wave, or any other harmonic deterministic data set, would
have an autocorrelation function which persists over all time displacements.
FIGURE 3.29. The time trace of a random function X(t) versus time t is shown as the
upper curve. The lower curve is the same time trace displaced by a time interval τ. The
product of these two functions, when averaged, yields an estimate of the autocorrelation
function Cxx(τ).
and m is the maximum lag number. Note that Cxx(rΔ, N) is analogous to
the estimate of the continuum autocorrelation function and becomes the
true autocorrelation function in the limit N → ∞. These considerations
were discussed at great length by Wiener [395] in his classic book on
time series analysis, which is still recommended today as a text from which
to capture a master's style of investigation.
The frequency content is extracted from the time series using the autocorrelation
function by applying a filter in the form of a Fourier transform.
This yields the power spectral density
$$S_{xx}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\omega t}\,C_{xx}(t)\,dt \tag{3.80}$$
of the time series X(t). Equation (3.80) relates the autocorrelation function
to the power spectral density and is known as the Wiener-Khinchine
relation, in agreement with Eq. (3.21). One example of its use is provided
in Figure 3.30, where the exponential autocorrelation function
Cxx(t) = e^{−t/τc} shown in Figure 3.30a yields the frequency spectrum of the
Cauchy form shown in Figure 3.30b:
$$S_{xx}(\omega) = \frac{1}{\pi}\,\frac{\tau_c}{1 + \omega^2\tau_c^2}. \tag{3.81}$$
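The transform pair of Eqs. (3.80) and (3.81) is easily verified numerically; a sketch with an illustrative decorrelation time:

import numpy as np

tau_c = 2.0
t = np.linspace(-200.0, 200.0, 8001)
dt = t[1] - t[0]
Cxx = np.exp(-np.abs(t) / tau_c)       # the correlation of Figure 3.30a

omegas = np.linspace(0.0, 3.0, 7)
# Eq. (3.80): Cxx is even, so the transform reduces to a cosine integral.
S_num = np.array([(Cxx * np.cos(w * t)).sum() * dt
                  for w in omegas]) / (2.0 * np.pi)
S_exact = tau_c / (np.pi * (1.0 + (omegas * tau_c)**2))   # Eq. (3.81)
print(np.allclose(S_num, S_exact, atol=1e-3))             # True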
FIGURE 3.30. (a) The autocorrelation function Cxx(τ) for the typical time traces depicted
in Figure 3.29, assuming the fluctuations are exponentially correlated in time
[exp(−t/τc)]. The constant τc is the time required for Cxx(t) to decrease by a factor of
1/e; this is the decorrelation time. (b) The power spectral density Sxx(ω) is graphed as
a function of frequency for the exponential correlation function with a central frequency
ω0.
FIGURE 3.31. A typical set of simultaneously recorded and selectively averaged evoked
potentials in different brain nuclei of chronically implanted cats, elicited during the
slow wave sleep stage by an auditory stimulation in the form of a step function. Direct
computer plottings; negativity upwards. [32]
FIGURE 3.32. Mean value curves of the power spectral density functions obtained from
16 experiments during the slow wave sleep stage. Along the abscissa is the logarithm of
frequency ω; along the ordinate is the power spectral density Sx(ω), normalized in such
a way that the power at 0 Hz is equal to one (or 10 log 1 = 0). [32]
dimension of the HRV time series. They also determined that the complex-
ity of HRV decreases with the onset of atrial fibrillation.
The relatively narrow-band spectrum of fibrillatory signals contrasts with
the spectrum of the normal ventricular depolarization (QRS) waveform,
which in man and animals shows a wide band of frequencies (0 to > 300
Hz) with 1/f-like scaling, that is, the power spectral density at frequency
f is proportional to 1/f^α, where α is a positive number. As discussed in Section
2.4, the power-law scaling that characterizes the spectrum of the normal
QRS waveform can be related to the underlying fractal geometry of the
branching His-Purkinje system. Furthermore, a broadband inverse power-law
spectrum has also been identified by analysis of interbeat interval variations
in a group of healthy subjects, indicating that normal sinus rhythm
is not a strictly periodic state. Important phasic changes in heart rate associated
with respiration and other physiologic control systems account for
only some of the variability in heartbeat interval dynamics; overall, the
spectrum in healthy subjects includes a much wider band of frequencies
with 1/f-like scaling. This behavior is also observed in EEG time
series data.
It has been suggested that fractal processes associated with scaled, broadband
spectra are 'information-rich'. Periodic states, in contrast, reflect
narrow-band spectra and are defined by monotonous, repetitive sequences,
depleted of information content. In Figure 3.33 is depicted the spectrum
of the time series X(t) obtained from the funnel attractor solution of the
equation set Eqs. (3.24)–(3.26). The attractor itself is shown in Figure 3.16.
The spectrum is clearly broadband, as was that of the Lorenz attractor, with
a number of relatively sharp spikes. These spikes are manifestations of
strong periodic components in the dynamics of the funnel attractor. Thus,
the dynamics could easily be interpreted in terms of a number of harmonic
components in a noisy background, but this would be an error. One way
to distinguish between these two interpretations is by means of the information
dimension of the time series. The dimension decreases as a system
undergoes a transition from chaotic to periodic dynamics. The transition
from healthy function to disease implies an analogous loss of physiological
information and is consistent with a transition from a wide-band to a
narrow-band spectrum. The dominance of relatively low-frequency periodic
oscillations might be anticipated as a hallmark of the dynamics of many
types of severe pathophysiological disturbances. As pointed out earlier, such
periodicities have already been documented in many advanced clinical settings,
including Cheyne-Stokes breathing patterns in heart failure, leukemic
cell production, sinusoidal heart rate oscillations in fetal distress syndrome,
and the 'swinging heart' phenomenon in cardiac tamponade. The highly
periodic electrical fibrillatory activity of the heart, which is associated with
FIGURE 3.33. The power spectral density for the X(t) time series for the ‘funnel’ at-
tractor depicted in Figure 3.16.
Grassberger and Procaccia introduced the correlation integral
$$C(r) = \lim_{M\to\infty}\frac{1}{M(M-1)}\sum_{i,j=1}^{M}\Theta\big(r - |\mathbf X_i - \mathbf X_j|\big) \tag{3.82}$$
$$\phantom{C(r)} = \int_0^{\infty} d^E r'\, c(r') \tag{3.83}$$
where
$$c(\mathbf r) = \lim_{M\to\infty}\frac{1}{M(M-1)}\sum_{i\neq j=1}^{M}\delta^E\big(\mathbf r - (\mathbf X_i - \mathbf X_j)\big). \tag{3.84}$$
The virtue of the correlation integral is that for a chaotic or strange attractor
it has the power-law form
$$C(r) \propto r^{\nu} \tag{3.85}$$
and moreover the 'correlation exponent' ν is closely related to the fractal
dimension D and the information dimension σ of the attractor. They argue
that
$$\nu \leq \sigma \leq D. \tag{3.86}$$
Thus, if the correlation integral obtained from an experimental data set
has the power-law form Eq. (3.85) with ν < E, they argue that the data
set arises from deterministic chaos rather than random noise, because noise
yields C(r) ∝ r^E for a constant correlation function over the interval r.
Note that for periodic sequences ν = 1; for random sequences ν should
equal the embedding dimension, while for chaotic sequences it is finite and
non-integer.
Grassberger and Procaccia establish Eq. (3.85) by the following argument:
If the attractor is a fractal, then the number of hypercubes of edge
length r needed to cover it, N(r), is
$$N(r) \propto \frac{1}{r^D} \tag{3.87}$$
as determined in Chapter Two. The number of points from the data set
which are in the jth non-empty cube is denoted nj, so that
$$C(r) \sim \lim_{M\to\infty}\frac{1}{M^2}\sum_{j=1}^{N(r)} n_j^2 = \frac{N(r)}{M^2}\,\overline{n^2}, \tag{3.88}$$
but
$$\sum_{j=1}^{N(r)} n_j = M,$$
so that with the mean occupancy $\overline{n} = M/N(r)$ and $\overline{n^2} \geq \overline{n}^2$,
$$C(r) \gtrsim \frac{N(r)}{M^2}\left(\frac{M}{N(r)}\right)^2 = \frac{1}{N(r)} \propto r^D. \tag{3.89}$$
Thus, comparing Eq. (3.89) with (3.85) they obtain the inequality
$$\nu \leq D \tag{3.90}$$
so that the correlation dimension is less than or equal to the fractal dimension.
Grassberger and Procaccia also point out that one of the main advantages
of the correlation dimension ν is the ease with which it can be measured. In
particular it can be measured more easily than either σ or D for cases when
the fractal dimension is large (≥ 3). Just as they anticipated, the measure
ν has proven to be most useful in experimental situations, where typically
high-dimensional systems exist. However, calculating the fractal dimension
in this way does not establish that the erratic time series is generated by a
chaotic attractor. It only proves that the time series is fractal.
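A direct, if naive, implementation of the correlation integral Eq. (3.82) requires only a few lines; the O(M²) sketch below is suitable for modest data sets only:

import numpy as np

def correlation_integral(X, rs):
    # C(r): the fraction of distinct pairs of points closer than r,
    # following Eq. (3.82); X has shape (M, E) for M points in E dims.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    mask = ~np.eye(len(X), dtype=bool)        # exclude i = j pairs
    return np.array([(d[mask] < r).mean() for r in rs])

# The slope of log C(r) against log r over the scaling region then
# estimates the correlation exponent nu of Eq. (3.85).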
To test their ideas they studied the behavior of a number of simple
models for which the fractal dimension is known. In Figure 3.34 are displayed
three of the many calculations they did. In each case the logarithm of the
correlation integral [ln C(r)] is plotted as a function of the logarithm of
a dimensionless length (ln r), which according to the power-law relation
Eq. (3.85) should yield a straight line of positive slope. The slope of the line
is the correlation dimension ν. From these examples it is clear that the
technique successfully predicts the correlation behavior both for chaotic
mappings and for differential equations having chaotic solutions.
FIGURE 3.34. (a) The correlation integral for the logistic map at the infinite bifurcation
point μ = μ∞ = 3.5699...; the starting point was Y0 = 1/2 and the number of points was
N = 3 × 10^4, yielding ν = 0.500 ± 0.005. (b) The correlation integral for the Hénon map
with c = 1.4, β = 0.3 and N = 1.5 × 10^4, yielding νeff = 1.21 ± 0.01. (c) Correlation
integrals for the Lorenz equations (dots), ν = 2.05 ± 0.01, and for the Rabinovich-Fabrikant
equations (open circles), ν = 2.19 ± 0.01. In both cases N = 1.5 × 10^4 and τ = 0.25.
[136, 137]
the three dynamic coordinates X(t), Y(t) and Z(t) is only one of the many
possibilities. They conjectured that "any such sets of three independent
quantities which uniquely and smoothly label the states of the attractor
are diffeomorphically equivalent." In English this means that an actual dynamic
system does not know of the particular representation chosen by
us, and that any other representation containing the same dynamic information
is just as good. Thus, an experimentalist sampling the values of a
single coordinate need not find the 'one' representation favored by nature,
since this 'one' in all probability does not exist.
Playing the role of experimentalists the Santa Cruz group sampled the
X(t) coordinate of the Rössler attractor. They then noted a number of
possible alternatives to the phase space coordinates (x, y, z) that could
give a faithful representation of the dynamics using the time series they had
obtained. One possible set was the X(t) time series itself plus two replicas
of it displaced in time by τ and 2τ , that is, X(t), X(t + τ ) and X(t + 2τ ).
Note that implicit in this choice is the idea that X(t) is so strongly coupled
to the other degrees of freedom that it contains dynamic information about
these coordinates as well as itself. A second representation set is obtained
by making the time interval τ an infinitesimal, so that by taking differences
between the variables we obtain X(t), Ẋ(t) and Ẍ(t).
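The delay-coordinate construction itself takes one line of array slicing; a sketch in which the series name, the embedding dimension and the lag are illustrative assumptions:

import numpy as np

def delay_embed(x, m, lag):
    # Build the vectors (x(t), x(t+lag), ..., x(t+(m-1)*lag)) from a
    # single scalar time series; requires len(x) > (m-1)*lag.
    n = len(x) - (m - 1) * lag
    return np.column_stack([x[i*lag : i*lag + n] for i in range(m)])

# e.g. emb = delay_embed(X_series, m=3, lag=25); plotting emb[:, 0]
# against emb[:, 1] reconstructs the geometry of the attractor.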
FIGURE 3.36. Attractor from a chemical oscillator. (a) The time series X(t) is the
bromide ion concentration in a Belousov-Zhabotinskii reaction. A time interval τ is
indicated. (b) Plot of X(t) versus X(t + τ). The dotted line indicates a cut through the
attractor. (c) Cross section of the attractor along the cut. (d) Poincaré return map of the
cut; P(N + 1) is the position at which the trajectory crosses the dotted line as a function
of the crossing position on the previous turn around the attractor. (From Roux et al. [299]
with permission.)
The measured data are the sequence of values {Xn} denoting the crossings,
in the positive direction, of the line shown in Figure 3.35B.
These data are used to construct a next amplitude plot in which each
amplitude Xn+1 is plotted as a function of the preceding amplitude Xn. It
is possible for such a plot to yield anything from a random spray of points to
a well defined curve. If in fact we find a curve with a definite structure, then
it may be possible to construct a return map for the attractor. For example,
the oscillating chemical reaction of Belousov and Zhabotinskii was shown
by Simoyi et al. [321] to be describable by such a one-dimensional map. In
Figure 3.36 we indicate the return map constructed from the experimental
data [321].
Simoyi et al. [321] point out that there are 25 or so distinct chemicals in
the Belousov-Zhabotinskii reaction, many more than can be reliably mon-
itored. Therefore there is no way to construct the twenty-five dimensional
phase space X(t) = {X1 (t), X2 (t), · · ·X25 (t)} from the experimental data.
Instead they use the embedding theorems of Whitney [393] and Takens
[333] to justify the monitoring of a single chemical species, in this case the
concentration of the bromide ion, for use in constructing an m−dimensional
phase portrait of the attractor {X(t), X(t + τ ), · · ·X[t + (m − 1)τ ]} for suf-
ficiently large m and for almost any time delay τ . They find that for their
experimental data m = 3 is adequate and the resulting one-dimensional
map as depicted in Figure 3.36, provided the first example of a physical
system with many degrees of freedom that can be so modeled in detail.
Let us now recap the technique. We assume that the system of interest
can be described by m variables, where m is large but unknown, so that
at any instant of time there is a point X(t) = [X1(t), X2(t), ···, Xm(t)] in
an m-dimensional phase space that completely characterizes the system.
This point moves around as the system evolves, in some cases approaching
a fixed point or limit cycle asymptotically in time. In other cases the motion
appears to be purely random and one must distinguish between a system
confined to a chaotic attractor and one driven by noise. In experiments one
often only records the output of a single detector, which selects one of the m
components of the system for monitoring. In general the experimentalist
does not know the size of the phase space, since the important dynamic
variables are usually not known, and therefore s/he must extract as much
information as possible from the single time series available, X1(t) say. For
sufficiently long times t one uses the embedding theorem to construct the
sequence of displaced time series {X1(t), X1(t+τ), ..., X1[t+(m−1)τ]}. This
set of variables has been shown to have the same amount of information as
the d-dimensional phase point provided that m ≥ 2d + 1. Thus, as time
goes to infinity, we can build from the experimental data a one-dimensional
phase space X1(t), a two-dimensional phase space with axes {X1(t), X1(t + τ)},
and so on. For the embedded vectors ξ(ti) the correlation integral becomes
$$C_m(r) = \lim_{M\to\infty}\frac{1}{M(M-1)}\sum_{i,j=1}^{M}\Theta\big(r - |\boldsymbol\xi(t_i) - \boldsymbol\xi(t_j)|\big), \tag{3.92}$$
which for a chaotic (fractal) time series again has the power-law form
$$C_m(r) \propto r^{\nu_m} \tag{3.93}$$
with
$$\lim_{m\to\infty} \nu_m = \nu \tag{3.94}$$
and ν is again the correlation dimension. In Figure 3.34 the results for the
Lorenz model with m = 3 are depicted, where X1(t) ≡ X(t); the power-law
is still satisfactory, being in essential agreement with the earlier result.
A random time series with a prescribed inverse power-law spectrum can be
constructed as
$$X(t_j) = \sum_{k=1}^{M/2}\left[S(\omega_k)\,\Delta\omega_k\right]^{1/2}\cos\left(\omega_k t_j + \phi_k\right), \qquad j = 1, 2, \cdots, M, \tag{3.95}$$
with random phases φk and the spectrum
$$S(\omega_k) = \frac{C}{\omega_k^{\alpha}}, \tag{3.96}$$
where C > 0 is chosen to yield a unit variance for the time series and
α > 0. Such time series are said to be 'colored' noise and have generated a
great deal of interest [242, 383].
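Time series of this kind are easily synthesized directly from Eq. (3.95) with uniformly random phases; in the sketch below the final normalization plays the role of the constant C:

import numpy as np

def colored_noise(M, alpha, dt=1.0, seed=0):
    # Superpose M/2 cosines with power-law amplitudes and random
    # phases, following Eq. (3.95) with S(omega) = C / omega**alpha.
    rng = np.random.default_rng(seed)
    t = np.arange(M) * dt
    omega = 2.0 * np.pi * np.arange(1, M//2 + 1) / (M * dt)
    domega = omega[0]
    phases = rng.uniform(0.0, 2.0*np.pi, size=M//2)
    amps = np.sqrt(domega / omega**alpha)
    x = (amps[:, None] * np.cos(omega[:, None]*t + phases[:, None])).sum(0)
    return x / x.std()               # unit variance, as in the text

x = colored_noise(2048, alpha=1.0)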
Osborne and Provenzale calculated X(t) = {X1(t), X2(t), ..., Xm(t)} for
fifteen different values of the embedding dimension, m = 1, 2, ..., 15, for specific
values of α. The correlation function Eq. (3.92) is then calculated for
each value of m and ln Cm(r) is graphed versus ln r in Figure 3.37. The
slope of these curves yields νm from Eq. (3.93).
FIGURE 3.37. (a) The fifteen correlation functions Cm (r) for a spectral exponent α =
1.0, and (b) the correlation dimension νm versus the embedding dimension m for this
case. No saturation is evident in this case.
To quantify the scaling of these colored-noise trajectories, define the increment
$$\Delta X(\tau) = X(t + \tau) - X(t) \tag{3.98}$$
and scale the time interval τ by a constant Γ:
$$\Delta X(\Gamma\tau) = \Gamma^H \Delta X(\tau) \tag{3.99}$$
where H is the scaling exponent, 0 < H ≤ 1. Now if we generate a self-
similar trajectory from the time series Eq. (3.95) in an m−dimensional
phase space, each component has the same scaling exponent H. The fractal
dimension of the trajectory generated by the colored noise is then given by
Mandelbrot [217],
d = min(1/H, m). (3.100)
FIGURE 3.39. The correlation dimension ν versus the spectral exponent α. The correlation
dimension turns out to be a well defined, monotonically decreasing function ν(α)
of the spectral exponent α for this class of random noises.
Thus, for 0 < H < 1 the trajectory is a fractal curve since its fractal
dimension strictly exceeds its topological dimension DT = 1.
Osborne and Provenzale numerically verify the relation Eq. (3.100) for
the colored noise time series. Using the scaling relation Eq. (3.99) they
evaluate the average of the absolute value of the increment in the process
X(t):
$$\left\langle \left|X(t + \lambda\Delta t) - X(t)\right| \right\rangle \propto \lambda^H \tag{3.101}$$
FIGURE 3.40. The three straight lines correspond to α = 1.0, 1.75 and 2.75, with the
slope values from Eq. (3.101) given by H = 0.1, 0.39 and 0.84, respectively.
FIGURE 3.41. The fractal dimension D, determined as the inverse of the scaling exponent,
versus the spectral exponent α. The solid and dashed lines are theoretical
relationships for 'perfect' and truncated power-law spectra, respectively.
on the overall fractal structure. The fractal nature of the network suggests
a basic variability in the way networks are coupled together. For example,
the interaction between cardiac and respiratory cycles is not constant, but
adapts to the physiologic challenges being experienced by the body.
Modeling the adaptation of gait to various conditions was considered by
extending the traditional central pattern generator (CPG) to include corre-
lated stochastic processes to produce the super or stochastic central pattern
generator (SCPG). Walking is thought to be a consequence of the two-way
interaction between the neural networks in the central nervous system plus
the intraspinal nervous system on one side and the mechanical periphery
consisting of bones and muscles on the other. That is, while the muscles
receive commands from the nervous system, they also send back sensory
information that modifies the activity of the central neurons. The coupling
of these two networks produces a complex stride interval time series that
is characterized by particular symmetries including fractal and multifrac-
tal properties that depend upon several biological and stress constraints.
It has been shown that: (a) the gait phenomenon is essentially a rhythmic cycle
that obeys particular phase symmetries in the synchronized movement
of the limbs; (b) the fractal and multifractal nature of the stride interval
fluctuations becomes slightly more pronounced under faster or slower paced
frequencies relative to the normal paced frequency of a subject; (c) the
randomness of the fluctuations is higher for children than for adults and
increases if subjects are asked to synchronize their gait with the frequency
of a metronome or if the subjects are elderly or suffering from neurodegen-
erative disease. In this chapter the SCPG model, which is able to reproduce
these known properties of walking, was briefly reviewed.
The description of a complex network consists of a set of dynamical ele-
ments, whatever their origin, together with a defining set of relations among
those elements; the dynamics and the relations are typically nonlinear. A
physiologic network may be identified as such because it performs a spe-
cific function, such as breathing or walking, but each of these functional
networks is part of a hierarchy that together constitutes the living human
body. Consequently the human body may be seen as a network of networks,
each separate network such as the beating heart is complex in its own right,
but also contributes to a complex network of interacting networks. This is
what the physician has chosen to understand, even though the neurosur-
geon specializes in the brain and the cardiologist focuses on the heart, they
both must interpret the signals from the network to determine the influence
of the other networks on their specialization.
The allometry relations discussed in Chapter Two have been tied to
chaos [111]. The connection was made by Ginzburg et al. [111], noting that
May and Oster [229] suggested a link between chaotic population dynamics
and the likelihood of population extinction.
Noise can neither be predicted nor controlled, except perhaps through the way
the system is coupled to the environment.
The above distinction between chaos and noise highlights one of the
difficulties of formulating unambiguous measures of the dynamic properties
of complex phenomena. Since noise cannot be predicted or controlled it
might be viewed as being complex; thus, systems with many degrees of
freedom manifest randomness and may be considered complex. On the
other hand, systems with only a few dynamical elements, when they are
chaotic, might be considered simple. In this way the idea of complexity
is ill posed, because very often we cannot distinguish between chaos and
noise, so it cannot be known whether the network has a few variables (simple?)
or many variables (complex?). Consequently, because noise and chaos are
often confused with one another in data, a new approach to the definition
of complexity is required.
In early papers on systems theory it was argued that the increasing com-
plexity of an evolving system can reach a threshold where the system is so
complicated that it is impossible to follow the dynamics of the individual
elements, see for example, Weaver [357]. At this point new properties often
emerge and the new organization undergoes a completely different type of
dynamics. The details of the interactions among the individual elements are
substantially less important, at this point, than is the ‘structure’, the ge-
ometrical pattern, of the new aggregate. This is self-aggregating behavior.
Increasing the number of elements beyond this point, or alternatively in-
creasing the number of relations among the existing elements, often leads to
a complete ‘disorganization’ and the stochastic approach becomes a viable
description of the system behavior. If randomness (noise) is now consid-
ered as something simple, as it is intuitively, one has to seek a measure of
complexity that increases as the number of variables increases, reaches a
maximum where new properties may emerge, and eventually decreases in
magnitude in the limit of the system having an infinite number of elements,
where thermodynamics properly describes the system. Thus, a system is
simple when its dynamics are regular and described by a few variables; sim-
ple again when its dynamics are random and described by a great many
variables; but somewhere between these two extremes its dynamics are
complex, being a mixture of regularity (order) and randomness (disorder).
As seen earlier such relations are referred to as scaling relations and can be solved
in the same way differential equations are solved. Practically, one usually
guesses the form of a solution, substitutes that guess into the equation and
sees if it works. I do that here and assume a trial solution of the form used
earlier, Z(t) = A(t) t^μ, where the power-law index is
$$\mu = \frac{\ln a}{\ln b} \tag{4.5}$$
and the real coefficient function is periodic in the logarithm of time with
period ln b, and consequently can be expressed in terms of a Fourier expansion
$$A(t) = \sum_{n=-\infty}^{\infty} A_n\, e^{i2\pi n \ln t/\ln b}. \tag{4.6}$$
Recall that this was the same function obtained in fitting the bronchial tube
data to the average diameter. In the literature Z(t) is called a homogeneous
function [374].
The homogeneous function Z(t) is now used to define the scaling observed
in the moments of an experimental time series with long-time memory. The
second moment of a homogeneous stochastic process X(t) having long-time
memory is given by
$$\left\langle X(bt)^2 \right\rangle = b^{2H} \left\langle X(t)^2 \right\rangle \tag{4.7}$$
where the brackets denote an average over an ensemble of realizations of the
fluctuations in the time series. Like the Weierstrass function which repeats
itself at smaller and smaller scales we see that the characteristic measure
of a time series, the second moment, has the same scaling property. This
implies that the scaling index can be related to the fractal dimension as
done in Section 2.2 so that
H =2−D (4.8)
relating the exponent H to the fractal dimension D.
This same process has a different scaling for the stationary covariance,
and the more familiar equivalent form is the scaling of the power spectral
density, in which the exponent H is a real constant, often called the Hurst
exponent, after Mandelbrot's identification of the civil engineer who first
used this scaling. In a simple random walk model of such a process the
steps of the walker need not be monofractal, but may have a fractal dimension
that changes over time; the time series are then multifractal and as such
have a spectrum of dimensions.
$$(1 - B)X_j = \xi_j, \tag{4.16}$$
where ξj is +1 or −1 and is selected according to the random process
of flipping a coin. The solution to this discrete equation is given by the
position of the walker after N steps, the sum over the sequence of steps
$$X(N) = \sum_{j=1}^{N} \xi_j, \tag{4.17}$$
and the total number of steps can be interpreted as the total time t over
which the walk unfolds, since we have set the time increment to one. For N
sufficiently large the central limit theorem determines the statistics of the
dynamic variable X(t) to be Normal
p_N(x) = \frac{1}{\sqrt{2\pi \langle X^2(N) \rangle}} \exp\left[ -\frac{x^2}{2 \langle X^2(N) \rangle} \right] .  (4.18)
Assuming the random steps are statistically independent, \langle \xi_j \xi_k \rangle = \langle \xi_j^2 \rangle \delta_{jk},
we have for the second moment of the diffusion variable in the continuum
limit

\langle X(t)^2 \rangle = \sum_{j,k=1}^{N} \langle \xi_j \xi_k \rangle = N \langle \xi^2 \rangle \longrightarrow 2Dt .  (4.19)
The probability density consequently satisfies the scaling relation

p(x,t) = \frac{1}{t^{\delta}} F\left( \frac{x}{t^{\delta}} \right) ,  (4.21)
with \delta = 1/2, so that the distribution for the random variable \lambda^{1/2} X(\lambda t)
is the same as that for X(t). This scaling relation establishes that the
random irregularities are generated at each scale in a statistically identical
manner; that is, if the fluctuations are known in a given time interval they
can be determined in a second larger time interval by scaling. This
property is used in the next section to construct a data processing method
for nonlinear dynamic processes.
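The scaling property just described is easy to check numerically. The following minimal sketch (Python, with illustrative ensemble sizes) generates coin-flip walks as in Eqs. (4.16)-(4.17) and verifies that the second moment of \lambda^{-1/2} X(\lambda t) matches that of X(t), the \delta = 1/2 scaling of Eq. (4.21):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of simple random walks: partial sums of +/-1 coin-flip steps.
n_walkers, n_steps = 5000, 4096
steps = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
X = np.cumsum(steps, axis=1)

# Self-similarity check: the second moment of lambda**(-1/2) * X(lambda*t)
# should match that of X(t) when delta = 1/2.
lam, t = 4, 512
m2 = np.mean(X[:, t - 1] ** 2)
m2_scaled = np.mean(X[:, lam * t - 1] ** 2) / lam
print(f"<X(t)^2>         = {m2:.1f}")
print(f"<X(lam t)^2>/lam = {m2_scaled:.1f}  (the two should agree)")
```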
The simplest generalization of this model is to make each step dependent
on the preceding step in such a way that the second moment given by
Eq. (4.14) has H \neq 1/2 and corresponds to anomalous diffusion. A value
of H < 1/2 is interpreted as an anti-persistent process in which a walker's
step in one direction is preferentially followed by a reversal of direction. A
value of H > 1/2 is interpreted as a persistent process in which a walker's
step in one direction is preferentially followed by another step in the same
direction. This interpretation of anomalous diffusion in terms of random
walks would be compatible with the concept of environmental noise where
the environment forces the step in each time interval.
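This distinction is also easy to demonstrate numerically. The sketch below uses a standard spectral-synthesis approximation to fractional Gaussian noise (a technique assumed here for illustration, not one used in the text) and prints the lag-one correlation of the steps, which is negative for H < 1/2, near zero for H = 1/2 and positive for H > 1/2:

```python
import numpy as np

def fgn(h, n, rng):
    # Spectral synthesis: shape white noise so that S(f) ~ f**(1 - 2h),
    # approximating fractional Gaussian noise with Hurst exponent h.
    f = np.fft.rfftfreq(n, d=1.0)
    f[0] = f[1]                                  # avoid dividing by zero
    amplitude = f ** ((1.0 - 2.0 * h) / 2.0)
    phase = np.exp(2j * np.pi * rng.random(f.size))
    return np.fft.irfft(amplitude * phase, n)

rng = np.random.default_rng(1)
for h in (0.3, 0.5, 0.7):
    x = fgn(h, 1 << 16, rng)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-one correlation
    print(f"H = {h}: lag-1 correlation = {r1:+.2f}")
```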
In a complex system the response X(t) is expected to depart from the
totally random condition of the simple random walk model, since such
fluctuations are expected to have memory and correlation. In the physics
literature anomalous diffusion has been associated with phenomena with
long-time memory such that the covariance is
C(t, t') = \langle X(t) X(t') \rangle \propto |t - t'|^{\beta} .  (4.22)
X_j = (1 - B)^{-\alpha} \xi_j  (4.24)

has the binomial series expansion

(1 - B)^{-\alpha} = \sum_{k=0}^{\infty} \binom{-\alpha}{k} (-1)^k B^k .  (4.25)
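The expansion Eq. (4.25) translates directly into a generator of long-memory time series. In the sketch below (illustrative parameter values) the binomial weights of (1 - B)^{-\alpha} are built by the recursion \psi_k = \psi_{k-1}(k - 1 + \alpha)/k and coin-flip noise is filtered through them; the slowly decaying autocorrelation anticipates the power law of Eq. (4.35):

```python
import numpy as np

# Weights of (1 - B)**(-alpha): psi_0 = 1, psi_k = psi_{k-1}*(k-1+alpha)/k.
alpha, n = 0.3, 4096
psi = np.empty(n)
psi[0] = 1.0
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + alpha) / k

rng = np.random.default_rng(2)
xi = rng.choice([-1.0, 1.0], size=n)             # coin-flip fluctuations
X = np.convolve(psi, xi)[:n]                     # X_j of Eq. (4.24)

x = X - X.mean()
r = np.correlate(x, x, mode="full")[n - 1:]
r /= r[0]
print("autocorrelation at lags 1, 10, 100:",
      round(r[1], 2), round(r[10], 2), round(r[100], 2))
```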
In terms of the Fourier-transformed variables the solution is

\tilde{X}_{\omega} = \Theta_{\omega} \tilde{\xi}_{\omega}  (4.29)

yielding the power spectrum

S(\omega) = \langle |\tilde{X}_{\omega}|^2 \rangle = \langle |\tilde{\xi}_{\omega}|^2 \rangle \, |\Theta_{\omega}|^2 .  (4.30)
The autocorrelation coefficient consequently decays as an inverse power law
in the lag k:

r_k = \frac{\langle X_{j+k} X_j \rangle}{\langle |X_j|^2 \rangle} \approx \frac{\Gamma(1-\alpha)}{\Gamma(\alpha)} \, k^{2\alpha-1} = \frac{\Gamma(1.5-H)}{\Gamma(H-0.5)} \, k^{2H-2}  (4.35)
where the memory kernel replaces the dissipation parameter and the
fluctuation-dissipation relation becomes generalized

{}_0D_t^{\alpha}[X(t)] + \lambda^{\alpha} X(t) = X(0) \frac{t^{-\alpha}}{\Gamma(1-\alpha)} + \xi(t)  (4.38)

where one form of the fractional operator can be interpreted in terms of
the integral

{}_0D_t^{\alpha}[g(t)] \equiv \frac{1}{\Gamma(\alpha)} \int_0^t \frac{g(t') \, dt'}{(t-t')^{1-\alpha}} .  (4.39)
This definition of a fractional operator is not unique, various forms are cho-
sen to emphasize different properties of the system being modeled [378].
Equation (4.38) is mathematically well defined, and strategies for solving
such equations have been developed by a number of investigators, particu-
larly in the book by Miller and Ross [236] that is devoted almost exclusively
to solving such equations when the index is rational and ξ(t) = 0. Here we
make no such restriction and consider the Laplace transform of Eq.(4.38)
to obtain
\hat{X}(s) = \frac{X(0) s^{\alpha-1}}{\lambda^{\alpha} + s^{\alpha}} + \frac{\hat{\xi}(s)}{\lambda^{\alpha} + s^{\alpha}}  (4.40)
whose inverse Laplace transform is the solution to the fractional differential
equation. Inverting Laplace transforms such as Eq. (4.40) is non-trivial, and
an excellent technique that overcomes many of these technical difficulties,
implemented by Nonnenmacher and Metzler [251], involves the use of Fox
functions. For our purposes fractional derivatives can be thought of as a
way of incorporating the influence of the past history of a process into
its present dynamics. There has been a rapidly growing literature on the
fractional calculus in the past decade or so, particularly in the description
of the fractal dynamical behavior of physiological time series. We do not
have the space to develop the mathematical background for this formalism
and its subsequent use in physiology so I merely give a few examples of its
use and refer the reader to the relevant literature [376].
The formal solution to the fractional Langevin equation is expressed in
terms of the Laplace transform which can be used to indicate how the mem-
ory influences the dynamics. Recall that the fluctuations were assumed to
be zero centered, so that taking the average over an ensemble of realizations
of the fluctuations yields
\langle \hat{X}(s) \rangle = \frac{X(0) s^{\alpha-1}}{\lambda^{\alpha} + s^{\alpha}} .  (4.41)
The solution to the average fractional relaxation equation is given by the
series expansion for the standard Mittag-Leffler function [376]
\langle X(t) \rangle = X(0) E_{\alpha}(-(\lambda t)^{\alpha}) = X(0) \sum_{k=0}^{\infty} \frac{(-1)^k (\lambda t)^{k\alpha}}{\Gamma(1 + k\alpha)}  (4.42)

which reduces to the exponential e^{-\lambda t} when \alpha = 1, as it should,
since under this condition Eq. (4.38) reduces to the ordinary Langevin
relaxation rate equation.
FIGURE 4.1. The solid curve is the Mittag-Leffler function, the solution to the frac-
tional relaxation equation. The dashed curve is the stretched exponential (Kohlrausch-
Williams-Watts Law) and the dotted curve is the inverse power law (Nutting Law).
At early times the Mittag-Leffler function is well approximated by the
Kohlrausch-Williams-Watts Law, exp[-(\lambda t)^{\alpha}/\Gamma(1+\alpha)],
also known as the stretched exponential. In the long-time limit it yields the
inverse power law, known as the Nutting Law,

\lim_{t \to \infty} E_{\alpha}(-(\lambda t)^{\alpha}) \propto \frac{1}{(\lambda t)^{\alpha}} .  (4.45)
Figure 4.1 displays the general Mittag-Leffler function as well as the two
asymptotes, the dashed curve being the stretched exponential and the dot-
ted curve the inverse power law. What is apparent from this discussion
is the long-time memory associated with the fractional relaxation process,
being inverse power law rather than the exponential of ordinary relaxation.
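The comparison drawn in Figure 4.1 can be reproduced with a short computation. The sketch below evaluates the series of Eq. (4.42) by brute force, which is numerically reliable only for moderate arguments, and prints the stretched-exponential and inverse power-law forms alongside it; the \Gamma(1-\alpha) prefactor of the long-time asymptote, usually absorbed into the proportionality, is made explicit:

```python
import math

def mittag_leffler(alpha, z, terms=150):
    # Brute-force series of Eq. (4.42); the alternating series loses
    # accuracy for large |z|, so keep the argument moderate.
    return sum(z ** k / math.gamma(1.0 + k * alpha) for k in range(terms))

alpha, lam = 0.75, 1.0
print("   t    E_alpha   stretched   power-law")
for t in (0.1, 0.5, 1.0, 2.0, 4.0):
    z = (lam * t) ** alpha
    ml = mittag_leffler(alpha, -z)
    kww = math.exp(-z / math.gamma(1.0 + alpha))     # short-time asymptote
    nutting = 1.0 / (z * math.gamma(1.0 - alpha))    # long-time asymptote
    print(f"{t:5.1f}  {ml:8.4f}  {kww:9.4f}  {nutting:9.4f}")
```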
Returning now to the Laplace transform of the solution to the generalized
Langevin equation we can express the inverse Laplace transform of the first
term on the rhs of Eq. (4.40) in terms of the Mittag-Leffler function as
just found for the homogeneous case. The inverse Laplace transform of the
second term is the convolution of the random force and a stationary kernel.
The kernel is given by the series for the generalized Mittag-Leffler function

E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\beta + k\alpha)} , \quad \alpha, \beta > 0  (4.46)

so that the complete solution is

X(t) = X(0) E_{\alpha}(-(\lambda t)^{\alpha}) + \int_0^t (t-t')^{\alpha-1} E_{\alpha,\alpha}(-(\lambda (t-t'))^{\alpha}) \xi(t') \, dt' .  (4.47)

When \alpha = 1 this reduces to the familiar solution of the ordinary Langevin
equation

X(t) = X(0) e^{-\lambda t} + \int_0^t e^{-\lambda(t-t')} \xi(t') \, dt' .  (4.48)
In the absence of dissipation the mean-square displacement is

\langle [X(t) - X(0)]^2 \rangle = \frac{1}{\Gamma(\alpha)^2} \int_0^t \int_0^t \frac{\langle \xi(t_1) \xi(t_2) \rangle \, dt_1 dt_2}{(t-t_1)^{1-\alpha} (t-t_2)^{1-\alpha}} .  (4.49)

Recalling that the fluctuations are delta correlated in time with strength
2D therefore yields

\langle [X(t) - X(0)]^2 \rangle = \frac{2D \, t^{2\alpha-1}}{(2\alpha-1) \, \Gamma(\alpha)^2} .  (4.50)
The time dependence of the second moment Eq. (4.50) agrees with that
obtained for anomalous diffusion if we make the identification 2H = 2\alpha - 1;
since the fractional index is less than one, the Hurst exponent falls in the
range 0 < H \leq 1/2. Consequently, the process described by the
dissipation-free fractional Langevin equation is anti-persistent.
This anti-persistent behavior of the time series was observed by Peng
et al. [267] for the differences in time intervals between heart beats. They
interpreted this result, as did a number of subsequent investigators, in terms
of random walks with H < 1/2. However, we can see from Eq. (4.50) that
the fractional Langevin equation without dissipation is an equally good,
or one might say an equivalent, description of the underlying dynamics.
The scaling behavior alone cannot distinguish between these two models,
what is needed is the complete statistical distribution and not just the
time-dependence of the central moments.
The formal solution to this fractional Langevin equation is

X(t) - X(0) = \frac{1}{\Gamma(\alpha)} \int_0^t \frac{\xi(t') \, dt'}{(t-t')^{1-\alpha}} \equiv \int_0^t K_{\alpha}(t-t') \xi(t') \, dt' ,  (4.51)
where the Hurst exponent is in the range 0 < H \leq 1. In a similar way the
kernel in Eq. (4.51) is easily shown to scale.
The scaling relation Eq. (4.55) determines the q-th order structure function
exponent \rho(q). Note that when \rho(q) is linear in q the underlying process is
monofractal, whereas when it is nonlinear in q the process is multifractal,
because the structure function can be related to the mass exponent [282]:
f(q) = 2 - H - (\mu - 1) b q^{\mu}  (4.58)
FIGURE 4.2. The singularity spectrum for q > 0 obtained through the numerical fit to
the human gait data. The curve is the average over the ten data sets obtained in the
experiment. (From [377] with permission.)
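The structure-function test for monofractality described above is easily sketched. Using a simple random walk as a test signal (an assumption made here for illustration), the fitted exponents \rho(q) should be linear in q with slope H = 1/2; curvature of \rho(q) in q would signal multifractality:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.cumsum(rng.standard_normal(1 << 15))   # monofractal test signal, H = 1/2

taus = 2 ** np.arange(1, 9)
for q in range(1, 6):
    # q-th order structure function: <|X(t+tau) - X(t)|^q> ~ tau**rho(q)
    Sq = [np.mean(np.abs(X[tau:] - X[:-tau]) ** q) for tau in taus]
    rho = np.polyfit(np.log(taus), np.log(Sq), 1)[0]
    print(f"q = {q}: rho(q) = {rho:.2f}  (monofractal prediction qH = {0.5 * q:.2f})")
```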
Napoleon. However, its etiology and pathomechanism have to date not been
satisfactorily explained. It was demonstrated [377] that the characteristics
of CBF time series differ significantly between normal healthy individuals
and migraineurs. Transcranial Doppler ultrasonography enables
high-resolution measurement of middle cerebral artery blood flow velocity.
Even though this technique does not allow us to directly determine CBF
values, it helps to clarify the nature and role of vascular abnormalities
associated with migraine. In particular we present the multifractal proper-
ties of human middle cerebral artery flow velocity, an example of which is
presented below in Figure 4.4.
FIGURE 4.3. (a) The mass exponent as a function of the q−moment obtained from
a numerical fit to the partition function using (3.4.10) for a typical walker. (b) The
singularity spectrum f (h) obtained from a numerical fit to the mass exponent and its
derivative using (3.4.9) for a typical walker. (From [377] with permission.)
FIGURE 4.4. Middle cerebral artery flow velocity time series for a typical healthy sub-
ject.
FIGURE 4.5. The average multifractal spectrum for middle cerebral blood flow time
series is depicted by f (h). (a) The spectrum is the average of ten time series measure-
ments from five healthy subjects (filled circles). The solid curve is the best least-squares
fit of the parameters to the predicted spectrum. (b) The spectrum is the average of 14
time series measurements of eight migraineurs (filled circles). The solid curve is the best
least-squares fit to the predicted spectrum. (From [377] with permission.)
The multifractal nature of CBF time series is here modeled using a frac-
tional Langevin model. The scaling properties of the random force are again
implemented in the memory kernel to obtain Eq. (4.54) as the scaling of the
solution to the fractional Langevin equation. Here the q−moment of the
solution is calculated and the statistics are assumed to be Normal rather
than the more general Lévy. Consequently the quadratic function for the
singularity spectrum becomes
f(h) = f(H) - \frac{(h-H)^2}{2\sigma}  (4.59)
and is obtained from Eq. (4.58) by setting μ = 2 and b = 2σ. The mode of
the spectrum is located at f (H) = 2 − H with h = H.
It seems that the changes in the cerebral auto-regulation associated with
migraine can strongly modify the multifractality of middle cerebral artery
blood flow. The constriction of the multifractal to monofractal behavior of
the blood flow depends on the statistics of the fractional derivative index.
As the distribution of this parameter narrows down to a delta function, the
nonlocal influence of the mechanoreceptor constriction disappears. On the
other hand, the cerebral auto-regulation does not modify the monofractal
properties characterized by the single global Hurst exponent, presumably
that produced by the cardiovascular system.
The aggregated time series is defined by adding adjacent data elements,

X_j^{(n)} = \sum_{k=0}^{n-1} X_{nj-k} ,  (4.60)

and for a monofractal series the power-law exponent b relating the aggregated
variance to the aggregated mean is

b = 2H .  (4.65)
It is well established [34] that the exponent in a scaling equation such as
Eq.(4.63) is related to the fractal dimension D of the underlying time series
by D = 2 − H, so that
D = 2 − b/2. (4.66)
A simple monofractal time series therefore satisfies the power-law relation
of the AR form with theoretically determined parameters.
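A minimal sketch of the aggregation procedure is given below, applied to uncorrelated surrogate data rather than the interbeat intervals analyzed in the text; the slope of log standard deviation versus log mean estimates H, and D = 2 - H then recovers the value 1.5 expected for an uncorrelated random process:

```python
import numpy as np

def aggregate_scaling(x, max_agg=20):
    # Aggregate adjacent data elements as in Eq. (4.60), then fit the
    # slope H of log(standard deviation) versus log(mean); the fractal
    # dimension follows as D = 2 - H.
    log_mean, log_std = [], []
    for n in range(1, max_agg + 1):
        m = len(x) // n
        agg = x[: m * n].reshape(m, n).sum(axis=1)   # the X_j^(n)
        log_mean.append(np.log(agg.mean()))
        log_std.append(np.log(agg.std()))
    H = np.polyfit(log_mean, log_std, 1)[0]
    return H, 2.0 - H

rng = np.random.default_rng(4)
x = 0.8 + 0.05 * rng.standard_normal(10_000)   # surrogate interval data
H, D = aggregate_scaling(x)
print(f"H = {H:.2f}, D = {D:.2f}  (~0.5 and ~1.5 for uncorrelated data)")
```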
FIGURE 4.6. The logarithm of the standard deviation is plotted versus the logarithm
of the average value for the heartbeat interval time series for a young adult male, using
sequential values of the aggregation number. The solid line is the best fit to the aggre-
gated data and yields a fractal dimension D = 1.24 midway between the curve for a
regular process and that for an uncorrelated random process as indicated by the dashed
curves. (From [381] with permission.)
infinite as the size of the ruler used to measure it goes to zero. The depen-
dence of the average and standard deviation on the ruler size, for a given
time series, implies that the statistical process is fractal and consequently
defines a fractal dimension for the HRV time series as given by Eq. (4.66).
These results are consistent with those first obtained by Peng et al. [267]
for a group of ten healthy subjects having a mean age of 44 years, using ten
thousand data elements for each subject. They concluded that the scaling
behavior observed in HRV time series is adaptive for two reasons: firstly
that the long-time correlations constitutes an organizing principle for highly
complex, nonlinear processes that generate fluctuations over a wide range
of time scales; secondly, the lack of a characteristic scale helps prevent ex-
cessive mode-locking that would restrict the functional responsiveness of
the organism.
The sinus node (the heart’s natural pacemaker) receives signals from the
autonomic (involuntary) portion of the nervous system which has two major
branches: the parasympathetic, whose stimulation decreases the firing rate
of the sinus node, and the sympathetic, whose stimulation increases the
firing rate of the sinus node pacemaker cells. These two branches are in
a continual tug-of-war on the sinus node, one decreasing and the other
increasing the heart rate and it is this tug-of-war that produces the HRV
time series in healthy subjects. We emphasize that the conclusions drawn
here are not from this single figure or set of data presented; these are
only representative of a much larger body of work. The conclusions are
based on a large number of similar observations [381] made using a variety
of data processing techniques, all of which yield results consistent with
the scaling of the HRV time series indicated in Figure 4.6. The heartbeat
intervals do not form an uncorrelated random sequence; instead the analysis
suggests that the HRV time series is a statistical fractal, indicating that
heartbeats have a long-time memory. The implications of this long-time
memory concerning the underlying physiological control system is taken
up later.
The global Hurst exponent determines the properties of monofractals,
but as previously stated there exists a more general class of heterogeneous
signals known as multifractals, which are made up of many interwoven sub-
sets with different local Hurst exponents h. The local and global exponents
are only equal for infinitely long time series, in general the Hurst expo-
nent h and the fractal dimension D = 2 − h are independent quantities.
The statistical properties of the interwoven subsets may be characterized
by the distribution of fractal dimensions f (h). In general, time series have
a local fractal exponent h that varies over its course. The multifractal or
singularity spectrum describes how the local fractal exponents contribute
to such time series. A number of investigators have used the singularity
spectrum to demonstrate that HRV time series are multifractal [171, 381].
The multifractal character of HRV time series further emphasizes the
non-homeostatic physiologic variability of heartbeats. Longer time series
than the one presented here clearly show a patchiness associated with the
fluctuations; a patchiness that is usually ignored in favor of average values
in traditional data analysis. This clustering of the fluctuations in time can
be symptomatic of the scaling with aggregation observed in Figure 4.6 or if
particularly severe it can be indicative of multifractality. However, due to
limitations of space, we do not further pursue the multifractal properties of
time series here, but refer the interested reader to the literature [374, 381].
cally fractal manner discussed. Nor is it good fortune alone that ties the
dynamics of our every breath to this biological structure. I argued that,
like the heart, the lung is made up of fractal processes, some dynamic and
others now static. However, both the static and dynamic processes lack a
characteristic scale and the simple argument given in Section 2.2 establishes
that such a lack of scale has evolutionary advantage.
Respiration is, in part, a function of the lungs, whereby the body takes in
oxygen and expels carbon dioxide. The smooth muscles in the bronchial tree
are innervated by sympathetic and parasympathetic fibers, much like the
heart, and produce contractions in response to stimuli such as increased
carbon dioxide, decreased oxygen and deflation of the lungs. Fresh air is
transported through some twenty generations of bifurcating airways of the
lung, during inspiration, down to the alveoli in the last four generations of
the bronchial tree. At this tiny scale there is a rich capillary network that
interfaces with the bronchial tree for the purpose of exchanging gases with
the blood.
Szeto et al. [332] made an early application of fractal analysis to fe-
tal lamb breathing. The changing patterns of breathing in seventeen fetal
lambs and the clusters of faster breathing rates, interspersed with periods
of relative quiescence, suggested to them that the breathing process was
self-similar. The physiological property of self-similarity implies that the
structure of the mathematical function describing the time series of inter-
breath intervals is repeated on progressively shorter time scales. Clusters
of faster rates were seen within the fetal breathing data, what Dawes et
al. [74] called breathing episodes. When the time series were examined on
even finer time scales, clusters could be found within these clusters, and the
signature of this scaling behavior emerged as an inverse power-law distribu-
tion of time intervals. Consequently, the fractal scaling was found to reside
in the statistical properties of the fluctuations and not in the geometrical
properties of the dynamic variable.
In parallel with heart rate, the variability of breathing rate using breath-
to-breath time intervals is called breathing rate variability (BRV). An ex-
ample of BRV time series data on which a scaling calculation is based is
shown in Figure 4.7. Because the heart rate is higher than the respiration
rate, in the same measurement epoch there is a factor of five more data for
HRV than there is for BRV time series. The BRV data were collected under
the supervision of Dr. Richard Moon, the Director of the Hyperbaric Lab-
oratory at Duke Medical Center. West et al. [379] applied the aggregation
method to the BRV time series and obtained the typical results depicted in
Figure 4.7. The logarithms of the aggregated standard deviation and aggre-
gated average were determined in the manner described earlier. Note that
we stop the aggregation at ten data elements because of the small number
of data in the breathing sequence. The solid curve is the best least-square
fit to the aggregated BRV data and has a slope of 0.86, the scaling index.
The scaling index and fractal dimension obtained from this figure are
consistent with the results obtained by other researchers.
FIGURE 4.7. The logarithm of the standard deviation is plotted versus the logarithm
of the average value for the breathing interval time series for a healthy senior citizen,
using sequential values of the aggregation number. The solid line is the best fit to the
aggregated data and yields a fractal dimension D = 1.14 between the curve for a regular
process and that for an uncorrelated random process as indicated by the dashed curves.
The stride interval is measured using successive maximum extensions of the knee of either
leg [375]. The stride interval time series for a typical subject has variation
on the order of 3-4%, indicating that the stride pattern is very stable. The
variation in the stride interval is called stride rate variability (SRV). It is the
statistical stability of SRV that historically led investigators to decide that
not much could go wrong by assuming the stride interval is constant and the
fluctuations are merely biological noise. The experimentally observed fluctuations
around the mean gait interval, although small, are non-negligible because
they indicate an underlying complex structure, and it was shown that these
fluctuations cannot be treated as uncorrelated random noise.
FIGURE 4.8. The logarithm of the standard deviation is plotted versus the logarithm
of the average value for the SRV time series for a young adult male, using sequential
values of the aggregation number. All the data elements are used for the graphed point
at the lower left and 20 data elements are aggregated in the last graphed point on the
upper right. The solid line is the best fit to the aggregated SRV data and yields a fractal
dimension D = 1.30 midway between the extremes for a regular process and that for an
uncorrelated random process as indicated by the dashed curves.
The scaling index shows the SRV time series to represent a random fractal process. In the SRV context, the
implied clustering indicated by a slope greater than the random dashed
line, means that the intervals between strides change in clusters and not in
a uniform manner over time. This result suggests that the walker does not
smoothly adjust his/her stride from step to step. Rather, there are a number
of steps over which adjustments are made followed by a number of steps over
which the changes in stride are completely random. The number of steps
in the adjustment process and the number of steps between adjustment
periods are not independent. The results of a substantial number of stride
interval experiments support the universality of this interpretation.
FIGURE 5.1. The monthly reported cases of measles, chicken pox and mumps in New
York and measles in Baltimore in the periods 1928 to 1972 inclusive. [204]
FIGURE 5.2. Epidemics of measles in New York and Baltimore. Left: The numbers of
cases reported monthly by physicians from 1928 to 1963. Right: Power spectra (from
[306] with permission).
the actual number. In the spectra given in Figure 5.2 a number of peaks are
evident superimposed on a noisy background. The most prominent peak co-
incides with a yearly cycle with most cases occurring during the winter. The
secondary peaks at 2 and 3 years are obtained by an appropriate smooth-
ing of the data. These data were also plotted using ART as phase plots of
N(t), N(t + τ), N(t + 2τ), where N is the number of cases per month and
τ is a two to three month shift in the time axis. Figures 5.3 and 5.4 show
the phase portraits obtained using the smoothed data. Schaffer and Kott
point out that for both New York and Baltimore most of the trajectory
traced out by the data lies on the surface of a cone with its vertex near
the origin. They conclude by inspection of these figures that the attrac-
tor is an essentially two-dimensional object embedded in three dimensions.
This estimate is made more quantitative using the method of Grassberger
and Procaccia [136, 137] to calculate the correlation dimension. Figure 5.5
shows the dimension asymptoting to a value of approximately 2.5 as the
embedding dimension is increased to five.
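The Grassberger-Procaccia procedure itself takes only a few lines. In the hypothetical sketch below a chaotic logistic-map time series stands in for the measles data: the scalar series is embedded with time delays, the correlation integral C(r) is estimated from pairwise distances, and the dimension is read off the log-log slope, which saturates as the embedding dimension grows, mimicking Figure 5.5:

```python
import numpy as np

def correlation_dimension(x, m, tau, radii, n_ref=800):
    # Time-delay embedding followed by the correlation integral C(r).
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    idx = np.random.default_rng(5).choice(n, size=min(n, n_ref), replace=False)
    d = np.linalg.norm(emb[idx, None, :] - emb[None, idx, :], axis=-1)
    d = d[np.triu_indices_from(d, k=1)]          # distinct pairs only
    C = np.array([np.mean(d < r) for r in radii])
    good = C > 0
    return np.polyfit(np.log(radii[good]), np.log(C[good]), 1)[0]

x = np.empty(5000)                               # fully chaotic logistic map
x[0] = 0.3
for i in range(4999):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

radii = np.logspace(-2, -0.5, 8)
for m in (1, 2, 3, 4, 5):
    print(f"embedding m = {m}: slope = {correlation_dimension(x, m, 1, radii):.2f}")
```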
Return now to the SEIR model of epidemics. For measles in the large
cities of rich countries, m^{-1} = 10^2, a^{-1} = 10^{-1}, and g^{-1} = 10^{-2}. As given
by Eqs. (5.1)-(5.3) the solution to the SEIR model is determined by the
value of the rate of infection Q:

Q = \frac{ba}{(m+a)(m+g)} .  (5.4)
If Q < 1 the disease dies out; if Q > 1, it persists at a constant level and
is said to be endemic. At long times neither of these solutions captures
the properties of the attractors shown in Figures 5.3 and 5.4, that is, the
observation of recurrent epidemics is at variance with the predictions of the
SEIR model as formulated above.
FIGURE 5.3. Reconstructed trajectory for the New York data (smoothed and interpo-
lated). The motion suggests a unimodal 1-D map in the presence of noise. a-d. The data
embedded in three dimensions and viewed from different perspectives. (From [306] with
permission.)
To study the effect of seasonality Schaffer [306] replaces the contact rate
b in Eq. (5.1) and (5.2) with the periodic function
b(t) = b_0 [1 + b_1 \cos(2\pi t)] .  (5.5)
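A sketch of the seasonally forced SEIR model follows. The parameter values are representative measles values drawn from the general chaos-in-epidemics literature, not quantities quoted in the text, and the integrator is an ordinary fourth-order Runge-Kutta:

```python
import numpy as np

# Seasonally forced SEIR with contact rate b(t) = b0*(1 + b1*cos(2*pi*t)).
m_rate, a, g = 0.02, 35.84, 100.0      # per year (assumed values)
b0, b1 = 1800.0, 0.28

def derivs(t, y):
    S, E, I = y
    b = b0 * (1.0 + b1 * np.cos(2.0 * np.pi * t))
    return np.array([m_rate * (1.0 - S) - b * S * I,
                     b * S * I - (m_rate + a) * E,
                     a * E - (m_rate + g) * I])

y = np.array([0.06, 0.001, 0.001])     # initial S, E, I fractions
t, dt = 0.0, 1.0 / 3650.0              # 0.1-day time step
infectives = []
while t < 50.0:                        # integrate over 50 years
    k1 = derivs(t, y)
    k2 = derivs(t + dt / 2, y + dt * k1 / 2)
    k3 = derivs(t + dt / 2, y + dt * k2 / 2)
    k4 = derivs(t + dt, y + dt * k3)
    y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    t += dt
    infectives.append(y[2])
print(f"infective fraction spans {min(infectives):.2e} .. {max(infectives):.2e}")
```

For sufficiently strong seasonality b_1 the infective fraction recurs in irregular multi-year epidemics rather than settling onto the endemic fixed point.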
For this form of the contact rate the solution to the SEIR-model has
period-doubling bifurcations leading to chaos [15, 311, 312]. Schaffer uses
ART along with the number of exposed and infectives to obtain the results
shown in the top row of Figure 5.6. In this figure the attractors generated
by the data are compared with that produced by the SEIR model. The
resemblance among them is obvious. The second row depicts the attractors
as seen from above. From this perspective the essential two-dimensional na-
ture of the flow field is most evident. Poincaré sections are taken by plotting
the intersection of the attractor with a transverse line drawn through each
of the three attractors. It is seen that these sections are V-shaped half lines,
demonstrating that the flow is confined to a nearly two-dimensional conical
surface. A one-dimensional map was constructed by plotting the sequential
intersecting points against one another yielding the nearly single humped
maps shown in the final row of Figure 5.6. These maps for the New York
and Baltimore measles data depict a strong dependence between consecu-
tive intersections. When a similar analysis was made of the chicken pox and
mumps data no such dependence was observed, that is, the plot yielded a
random spray of points.
FIGURE 5.4. Reconstructed trajectory for the Baltimore data. The 1-D map is very
steep and compressed. Order of photographs as in Figure 5.3.
The failure of the chicken pox and mumps data to yield a low-dimensional
attractor in phase space led Schaffer and Kott to investigate the effects of
uncorrelated noise on a known deterministic map. The measure they used
to determine the nature of the attractor was the one-dimensional map from
the Poincaré surface of section. They argued that the random distribution
FIGURE 5.5. Estimating the fractal dimension for measles epidemics in New York. Left:
The correlation integral C(r) plotted against the length scale r for different embeddings
m of the data. Right: Slope of the log-log plot against embedding dimension (from [306]
with permission).
of points observed in the data could be the result of a map of the form
Xn+1 = (1 + Zn )F (Xn ) (5.6)
where F (Xn ) is the mapping function and Zn is a discrete random variable
with Normal statistics of prescribed mean and variance. They showed that
the multiplicative noise Zn could totally obscure the underlying map F (Xn )
when the dynamics are periodic. However as the system bifurcates and
moves towards chaos the effect of the noise is reduced, becoming negligible
when chaos is reached. Thus they conclude: “...that whereas noise can easily
obscure the underlying determinism for systems with simple dynamics, this
turns out not to be the case if the dynamics are complex.” This result is at
variance with the earlier interpretation of Bartlett [29] that the observed
spectrum for measles resulted from the interaction between a stochastic
environment and weakly damped deterministic oscillations. Olsen and Degn
[255] support the conclusions of Schaffer and Kott, stating:
The conclusion that measles epidemics in large cities may
be chaotic due to a well defined, albeit unknown mechanism is
also supported by the analysis of measles data from Copenhagen
yielding a one-dimensional humped map almost identical to the
ones found from the New York and Baltimore data.
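The effect described by Schaffer and Kott is easy to reproduce from Eq. (5.6). In the sketch below a logistic map stands in for the unknown F(X_n), with an illustrative noise level; in the periodic regime the scatter within a cluster of the return plot is essentially all noise, while in the chaotic regime the same noise is small compared with the visible structure of the humped map:

```python
import numpy as np

rng = np.random.default_rng(6)

def noisy_logistic(r, sigma, n=4000):
    # The multiplicative-noise map of Eq. (5.6) with F(x) = r*x*(1-x).
    x, out = 0.5, np.empty(n)
    for i in range(n):
        x = (1.0 + sigma * rng.standard_normal()) * r * x * (1.0 - x)
        x = min(max(x, 1e-9), 1.0 - 1e-9)        # keep the orbit in (0, 1)
        out[i] = x
    return out

sigma = 0.05

# Periodic regime (r = 3.2, period two): the return plot collapses onto
# two clusters; x[1::2] samples the successors of one cluster, whose
# internal scatter is essentially pure noise.
x = noisy_logistic(3.2, sigma)
noise = np.std(x[1:] - 3.2 * x[:-1] * (1.0 - x[:-1]))
print(f"periodic: noise / cluster scatter = {noise / np.std(x[1::2]):.2f}")

# Chaotic regime (r = 3.99): the orbit traces the whole humped curve and
# the same noise is small relative to that structure.
x = noisy_logistic(3.99, sigma)
noise = np.std(x[1:] - 3.99 * x[:-1] * (1.0 - x[:-1]))
print(f"chaotic:  noise / map structure   = {noise / np.std(x[1:]):.2f}")
```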
Hence we have seen that ART is not only useful when the data yield
low-dimensional attractors, but also has utility when it does not. That is
to say that some of the ideas in nonlinear dynamics conjoined with the
older concepts of stochastic equations, can explain why certain data sets
do not yield one-dimensional maps. These insights become sharper through
subsequent examples.
In order not to leave the impression that this interpretation of the data
is uniformly accepted by the epidemiological community we mention the
criticism of Aron [15] and Schwartz [312]. Much of the debate centers on
the contact rate parameter, which because it varies through the year, must
be estimated indirectly. Aron contends that the models are extremely sen-
sitive to parameters such as the contact rate, and the variation in these pa-
rameters over 30 to 40 years could produce the fluctuations [276]. Schwartz
cautions against the over-use of such simplified models as the SEIR, since it
does not yield quantitative agreement with the real world situation. Pool’s
Science article gives a clear exposition of the state of the debate as it stood
two decades ago.
FIGURE 5.6. Measles epidemics real and imagined. Top row: Orbits reconstructed from
the numbers of infective individuals reported monthly with three-point smoothing and
interpolation with cubic splines [307]. Time lag for reconstructions indicated in photos.
Middle row: Orbits viewed from above (main part of the figures) and sliced with a plane
(vertical line) normal to the paper. Poincaré sections shown in the small boxes at upper
left. Bottom row. One of the Poincare sections magnified (left) and resulting 1-D map
(right). In each case, 36 years of data are shown. Left column: data from New York City.
Middle column: data from Baltimore. Right column: SEIR equations with parameters
as in Figure 5.3 save b1 = 0.28 (from [306] with permission).
The difficulty in testing the above ideas has to do in part with the lack
of experimentally controlled data to test the underlying parameters. Some
recent attempts to clarify one of the underlying issues focused on the fre-
quency of contacts between individuals in a social gathering, which is im-
portant since the spread of infectious diseases is strongly dependent on
these patterns of individual contacts. Stehle et al. [328] point out there
are few empirical studies available that provide estimates of the number
and duration of contacts between individuals. In their study the number
and duration of individual contacts at a two-day medical conference were
recorded using radiofrequency identification devices. The distribution of
the number of contacts versus contact duration is depicted in Figure 5.7. It
is clear from the figure that the duration of contact times has a long tail,
which is to say, that the average time does not characterize the distribution
very well.
FIGURE 5.7. Distribution of the contact duration between any two people at the con-
ference on a log-log scale. The mean duration was 49 seconds, with a standard deviation
of 112 seconds. From [328].
Stehle et al. [328] assessed the role of data-driven dynamic contact pat-
terns between the 405 individuals participating in the study in shaping the
spread of a simulated epidemic in a population using various extensions of
the SEIR model. They used both the dynamic network of contacts defined
by the collected data, and two aggregated versions of such networks, to
assess the role of the time varying aspects of the data. This is an exciting
application of the understanding of dynamic complex networks to epidemi-
ology. The broad distributions of the various network characteristics re-
ported in this study were consistent with those observed in the interaction
networks of two previous conferences [55, 169]. This study emphasizes the
effect of contact heterogeneity on the dynamics of communicable diseases
and showed that the rate of new contacts is a very important parameter
in modeling the spread of disease. However they found that increasing the
complexity of the model did not always increase the accuracy of the model.
Their analysis of a detailed contact network and a simplified version of the
same network generated very similar results. These results invite further
exploration to determine their generality.
FIGURE 5.8. Firing patterns of identified neurons in Aplysia's abdominal ganglion are
portrayed. R2 is normally silent, R3 has a regular beating rhythm, R15 a regular bursting
rhythm and L10 an irregular bursting rhythm. L10 is a command cell that controls other
cells in the system. (From [179] with permission.)
The rich dynamic structure of the neuron firing patterns has led to their
being modeled by nonlinear dynamical systems. In Figure 5.8 the normally
silent neuron R2 is displayed along with the regularly beating R3, the
regularly bursting R15 and the irregularly bursting L10.
For these last neurons we have the remarkable result that as few as three
or four variables may be sufficient to model the neuronal dynamics if in fact
the source of their fractal nature is a low-dimensional attractor. It would
have been reckless to anticipate this result, but we now see that in spite
of the profound complexity of the mammalian central nervous system the
dynamics of some of its components may be describable by low-dimensional
dynamic systems. Thus, even though we do not know what the dynamic
relations for these neurons systems might be, the fact that they do mani-
fest such relatively simple dynamical behavior, bodes well for the eventual
discovery of the underlying dynamic laws.
The next level of dynamic complexity still involving only a single neuron
is its response when subjected to stimulation. This is a technique that was
mature long before nonlinear dynamics was a defined concept in biology.
I review some of the studies here because it is clear that many neurons
capable of self-sustained oscillations are sinusoidally driven as part of the
hierarchal structure in the central nervous system. The dynamics of the
isolated neuron, whether periodic or chaotic, may well be modified through
periodic stimulation. This has been found to be the case.
Hayashi et al. [148] were the first investigators to experimentally show
evidence of chaotic behavior in the self-sustained oscillations of an excitable
biological membrane under sinusoidal stimulation. The experiments were
carried out on the giant internodal cell of the freshwater alga Nitella flexilis.
A sinusoidal stimulation, A cos(ω₀t) + B, was applied to the internodal
cell which was firing repetitively. The DC outward current B was applied
in order to stably maintain the repetitive firing which was sustained for
40 minutes. In Figure 5.9 the repetitive firing under the sinusoidal cur-
rent stimulation is shown. In Figure 5.9a the firing current is seen to be
one-to-one phase locked to the stimulating current.
The phase plot of segmented peaks is shown in Figure 5.10a, where the
stroboscopic mapping function is observed to converge on a point lying
along a line of unit slope. In Figure 5.9b we see that the firing of the neuron
has become aperiodic losing its entrainment to the stimulation. This in
itself is not sufficient to establish the existence of a low-dimensional chaotic
attractor. Additional evidence is required. The authors obtain this evidence
by constructing the mapping function between successive maxima of the
pulse train. For an uncorrelated random time series this mapping function
is just a random spray of points, whereas for a chaotic time series this
function is well defined. The mapping of sequential peaks depicted in Figure
5.10b reveals a single-valued mapping function. The slope of this function
is less than −1 at its intersection with the line of unit slope. The lines in
Figure 5.10b clearly indicate that the mapping function admits of a period
three solution. Hayashi et al. [148] then invoked a theorem due to Li and
FIGURE 5.9. Entrainment and chaos in the sinusoidally stimulated internodal cell of
Nitella. (a) Repetitive firing (upper curve) synchronized with the periodic current stim-
ulation (lower curve). (b) Non-periodic response to periodic stimulation. (From Hayashi
et al. [148] with permission.)
Yorke [197] that states: “period three implies chaos.” They subsequently
show that entrained, harmonic, quasiperiodic and chaotic responses of the
self-sustained firing of the Nitella internodal cell occur for different values
of the amplitude and frequency of the periodic external force [149]. These
same four categories of responses were obtained by Matsumoto et al. [226]
using a squid giant axon.
The above group [148] also investigated the periodic firing of the Onchid-
ium giant neuron under sinusoidal stimulation (the pacemaker neuron from
the marine pulmonate mollusk Onchidium verruculatum). The oscillatory
response does not synchronize with the sinusoidal stimulation, but is in-
stead aperiodic. The trajectory of the oscillation is shown in Figure 5.11
and it is clearly not a single closed curve but a filled region of phase space.
This region is bounded by the trajectory of the larger action potentials.
FIGURE 5.10. (a) and (b) are the stroboscopic transfer function obtained from Figure 5.9
(a) and (b) respectively. The membrane potential at each peak of the periodic stimulation
was plotted against the preceding one. Period three is indicated graphically by arrows
in (b). (From Hayashi et al. [149] with permission.)
FIGURE 5.11. The trajectory of the non-periodic oscillation. The trajectory is filling
up a finite region of the phase space. The oscillation of the membrane potential was
differentiated by a differentiating circuit whose phase did not shift in the frequency
region below 40 Hz. (From Hayashi et al. [148] with permission.)
The Hodgkin-Huxley equation for the membrane potential is

\frac{dV}{dt} = \left[ I - \bar{g}_{Na} m^3 h (V - V_{Na}) - \bar{g}_K n^4 (V - V_K) - \bar{g}_L (V - V_L) \right] / C  (5.7)
where the \bar{g}_j's are the maximal ionic conductances and the V_j's are the
reversal potentials for j =sodium (Na ), potassium (K) and leakage current
component (L); I is the membrane current density (positive outward); C is
the membrane capacitance; m is the dimensionless sodium activation; h is
the dimensionless sodium inactivation and n is the dimensionless potassium
activation. The functions m, h and n satisfy their own rate equations that
depend on V and the temperature, but there is no reason to write these
down here; see for example, Aihara et al. [1].
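For concreteness, a bare-bones Euler integration of Eq. (5.7) under sinusoidal forcing is sketched below, using the standard squid-axon rate functions and conductances; the drive amplitude A, offset B and frequency are illustrative stand-ins for the experimental values. The spike maxima collected here are exactly the quantities from which stroboscopic transfer functions such as those in Figures 5.10 and 5.12 are constructed:

```python
import math

# Standard squid-axon constants (V in mV measured from rest, t in ms).
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # mS/cm^2
V_NA, V_K, V_L = 115.0, -12.0, 10.6    # mV
C_M = 1.0                              # uF/cm^2

def ratio(x):
    # x / (exp(x) - 1), guarded near x = 0.
    return 1.0 if abs(x) < 1e-7 else x / (math.exp(x) - 1.0)

def rates(v):
    am = ratio((25.0 - v) / 10.0)
    bm = 4.0 * math.exp(-v / 18.0)
    ah = 0.07 * math.exp(-v / 20.0)
    bh = 1.0 / (math.exp((30.0 - v) / 10.0) + 1.0)
    an = 0.1 * ratio((10.0 - v) / 10.0)
    bn = 0.125 * math.exp(-v / 80.0)
    return am, bm, ah, bh, an, bn

A, B, w = 3.0, 8.0, 2.0 * math.pi * 0.05   # assumed drive, ~50 Hz

v, m, h, n = 0.0, 0.05, 0.6, 0.32
dt, t, prev, peaks = 0.01, 0.0, 0.0, []
while t < 300.0:                           # 300 ms of membrane activity
    am, bm, ah, bh, an, bn = rates(v)
    i_ion = (G_NA * m ** 3 * h * (v - V_NA) + G_K * n ** 4 * (v - V_K)
             + G_L * (v - V_L))
    v_next = v + dt * (A * math.cos(w * t) + B - i_ion) / C_M   # Eq. (5.7)
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
    n += dt * (an * (1.0 - n) - bn * n)
    if prev < v > v_next and v > 50.0:     # record the spike maxima
        peaks.append(v)
    prev, v, t = v, v_next, t + dt
print(f"{len(peaks)} spikes; first peaks:",
      ", ".join(f"{p:.1f}" for p in peaks[:5]))
```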
FIGURE 5.12. Stroboscopic transfer function of the chaotic response to periodic current
stimulation in the Onchidium giant neuron. The arrows indicate period three. (From
Hayashi et al. [149] with permission.)
There was good agreement found between the time series of the exper-
imental oscillations in the membrane potential of the periodically forced
squid axon by Matsumoto et al. [226] and those obtained in the numeri-
cal study by Aihara et al. The latter authors determined that there were
two routes to chaos followed by the Hodgkin-Huxley equations: succes-
sive period doubling bifurcations and the formation of the intermittently
chaotic oscillation from subharmonic synchronization. The former route
had previously been analyzed by Rinzel and Miller [293] for the autonomous
Hodgkin-Huxley equations, whereas the present discussion focuses on the
non-autonomous system. Aihara et al. [1] reach the conclusion:
Therefore, it is expected that periodic currents of various
forms can produce the chaotic responses in the forced Hodgkin-
Huxley oscillator and giant axon. This implies that neural sys-
tems of nonlinear neural oscillators connected by chemical and
electrical synapses to each other can show chaotic oscillations
and supply macroscopic fluctuations to the biological brain.
FIGURE 5.13. The Belousov-Zhabotinskii (BZ) reaction is the most fully understood
chemical reaction that exhibits chemical organization. The general behavior of this re-
action is shown as the concentrations of bromide and cerium ions oscillate. (From [97]
with permission.)
Thus, for sufficiently high values of the control parameter (flow rate) the
attractor becomes chaotic. In Figure 5.15a is depicted a two-dimensional
projection of the three-dimensional phase portrait of the attractor with the
third axis normal to the plane of the page. A Poincaré surface of section is
constructed by recording the intersection of the attractor with the dashed
line to obtain the set of data points {Xn }. The mapping function shown
in Figure 5.15b is obtained using these data points. The one-humped form
of the one-dimensional map clearly indicates the chaotic character of the
attractor. These observations were thought to provide the first example of a
physical system with many degrees of freedom that can be modeled in detail
by a one-dimensional map. However, Olsen and Degn [254] had observed
chaos in an oscillating enzyme reaction, the peroxidase-oxidase reaction in an
open system, some five years earlier. The next amplitude plot for this latter
reaction does not yield the simple one-humped mapping function shown
in Figure 5.15b, but rather has a “Cantor set-like” structure as shown in
Figure 5.16. Olsen and Degn [254] constructed a mathematical model con-
taining the minimal chemical expressions for quadratic branchings. The
results yielded periodic and chaotic oscillations closely resembling the ex-
perimental results. In Figure 5.16 the next amplitude plot of the chaotic
solutions for the data is overlaid on the numerical solutions. As pointed out
by Olsen [253], “The dynamic behavior of the peroxidase-oxidase reaction
may thus be more complex than the behavior previously reported for the
BZ reaction”.
FIGURE 5.14. Observed bromide-ion potential series with periods τ (115 s), 2τ, 2×2τ,
6τ, 5τ, 3τ, and 2×3τ; the dots above the time series are separated by one period. (From
Simoyi et al. [321] with permission.)
FIGURE 5.15. (a) Two-dimensional projection of the phase portrait of the attractor,
B(t_i + 53 s) plotted against B(t_i). (b) The one-dimensional map, X_{n+1} plotted
against X_n, obtained from the Poincaré surface of section.
FIGURE 5.16. Next amplitude plot of the oscillators observed in the peroxidase-oxidase
reaction. (a) 3000 maxima have been computed. The first of these maxima is preceded
by 100 maxima that were discarded. (b) Magnification of the square region shown in
(a). (From Olsen [253] with permission.)
FIGURE 5.17. Measured NADH fluorescence (upper curve) of yeast extract under sinu-
soidal glucose input flux (lower curve). (From Markus et al. [223] with permission.)
FIGURE 5.18. Next amplitude plots, F_{n+1} plotted against F_n, panels (a) and (b),
constructed from the maxima of the NADH fluorescence oscillations.
Denote the phase of the oscillator immediately before the i-th stimulus of
a periodic stimulation with period τ by φ_i. The recursion relation is

\phi_{i+1} = g(\phi_i) + \frac{\tau}{T_0}  (5.8)
FIGURE 5.19. The new phase of the cardiac oscillator following a stimulation is plotted
against the old phase, the resulting curve is called the phase transition curve. This is
denoted by g (φ) in the text. (From Glass et al. [115] with permission.)
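Iterating Eq. (5.8) is straightforward once g(φ) is specified. Since the experimental phase transition curve is measured rather than derived, the sketch below substitutes a hypothetical sine-circle form for g; depending on the ratio τ/T₀ the iterates either lock onto a small set of phases or wander over many:

```python
import numpy as np

def g(phi, k=1.2):
    # Hypothetical phase transition curve: a sine-circle form standing
    # in for the experimentally measured g of Figure 5.19.
    return phi + (k / (2.0 * np.pi)) * np.sin(2.0 * np.pi * phi)

def phases(tau_over_T0, n=600, burn=200):
    phi, out = 0.1, []
    for i in range(n):
        phi = (g(phi) + tau_over_T0) % 1.0    # Eq. (5.8), phase taken mod 1
        if i >= burn:
            out.append(round(phi, 4))
    return out

for r in (0.35, 0.5):
    print(f"tau/T0 = {r}: {len(set(phases(r)))} distinct phases after transients")
```

A handful of distinct phases signals entrainment; many distinct phases signal a quasiperiodic or chaotic response to the periodic stimulation.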
FIGURE 5.20. The height to the (N + 1)st T-wave is plotted against the height of the
N th T-wave during 48 beats of an episode of period doubling (from Oono et al. [257]
with permission).
Section 2.4 discussed the power spectrum of the QRS complex of a normal
heart and the hypothesis that the fractal structure of the His-Purkinje net-
work serves as a structural substrate for the observed broadband spectrum
[123], as depicted in Figure 2.22. Babloyantz and Destexhe [21] construct
the power spectrum of a four minute record of ECG, which shows an exponential decay at high frequencies (Figure 5.22).
FIGURE 5.21. A next amplitude plot of T-wave maximum yields a period three orbit
from a patient with an arrhythmia. (From Oono et al. [257] with permission.)
This is significant since the Tauberian theorem applied to Eq. (5.9) implies
that for small r the spectrum corresponding to this correlation function is

S(\omega) \sim \frac{1}{\omega^{2\nu+1}}  (5.10)

for large ω. If instead the power spectrum is exponential,

S(\omega) \sim e^{-\gamma\omega} ,

the corresponding correlation function is

C(r) \sim \frac{\gamma}{\gamma^2 + r^2} .
Thus, it would seem that the cardiac time series is not fractal, but fur-
ther measures suggest that it may in fact be chaotic or at least there is a
persistent controversy as we subsequently discuss.
A phase portrait of the ECG attractor may be constructed from the time
series using ART. Figure 5.23 depicts such a portrait in three-dimensional
phase space using two different delay times. The two phase portraits look
different; however, their topological properties are identical. It is clear that
these portraits depict an attractor unlike the closed curves of a limit cycle
describing periodic dynamics. Further evidence for this is obtained by cal-
culating the correlation dimension using the Grassberger-Procaccia cor-
relation function; this dimension is found to range from 3.5 to 5.2 using
four minute time segments of data or 6 × 104 data points.
FIGURE 5.22. Semi-logarithmic plot of a power spectrum from ECG showing exponen-
tial decay at high frequencies followed by a flat region at still higher frequencies (not
shown). The flat region accounts for instrumental noise. (Adapted from Babloyantz and
Destexhe [21] with permission.)
FIGURE 5.23. Phase portraits of human ECG time series constructed in three-
dimensional space. A two-dimensional projection is displayed for two values of the delay
time: (a) 12 ms and (b) 1200 ms. (c) represents the phase portrait constructed from the
three simultaneous time series taken from the ECG leads. These portraits are far from
the single closed curve that would describe a periodic activity. (From Babloyantz and
Destexhe [21] with permission.)
FIGURE 5.24. The Poincaré map of normal heart activity. Intersection of the phase
portrait with the Y-Z plane (X = const) in the region Q of Figure 5.23. The first
return map is constructed from the Y-coordinate of the previous section. We see that
there may be a simple non-invertible relationship between successive intersections. (From
Babloyantz and Destexhe [21] with permission.)
In any event there is no way that the ‘conventional wisdom’ of the ECG consisting
of periodic oscillations can be maintained in light of these results.
The above argument suggests that ‘normal sinus rhythm’ may be chaotic.
However in 2009 the journal Chaos initiated a new series on Controversial
Topics in Nonlinear Science. The first of the solicited topics was: Is the
Normal Heart Rate Chaotic? [116]; a question closely related to the above
discussion. One of the pioneers in the application of nonlinear dynamics to
biomedical phenomena is Leon Glass, who was a postdoctoral researcher at the
University of Rochester with Elliott Montroll at the same time I was. Glass
provided a history of this controversial topic as well as an overview of the
contributions [116].
Glass points out that his own research on the effects of periodic stimu-
lation on spontaneously beating chick heart cell aggregates yields chaotic
dynamics [114], as we discussed at the beginning of this section. In spite
of his own research results he concluded that normal heart rate variability
does not display chaotic dynamics. Moreover, the application of the insights
resulting from understanding the nonlinear dynamics of arrhythmias
to clinical situations has been more difficult than he had originally imagined.
I could not agree more.
random time series with contributions from throughout the spectrum ap-
pearing with random phases as depicted in Figure 5.26. This aperiodic
signal changes throughout the day and changes clinically with sleep, that
is, its high frequency random content appears to attenuate with sleep, leav-
ing an alpha rhythm dominating the EEG signal. The erratic behavior of
the signal is so robust that it persists, as pointed out by Freeman [105],
through all but the most drastic situations including near-lethal levels of
anesthesia, several minutes of asphyxia, or the complete surgical isolation
of a slab of cortex. The random aspect of the signal is more than appar-
ent, in particular, the olfactory EEG has a Normal amplitude histogram,
a rapidly attenuating autocorrelation function, and a broad spectrum that
resembles ‘1/f noise’ [103].
FIGURE 5.25. The complex ramified structure of typical nerve cells in the cerebral
cortex is depicted.
Here I review the applications of ART to EEG time series obtained un-
der a variety of clinical situations. This application enables us to construct
measures of the degree of irregularity of the time series, such as the
correlation dimension.
FIGURE 5.26. Typical episodes of the electrical activity of the human brain as recorded
in EEG time series together with the corresponding phase portraits. These portraits
are the two-dimensional projections of three-dimensional constructions. The EEG was
recorded on a FM analog tape and processed off-line (signal digitized in 12 bits, 250Hz
frequency, 4th order 120 Hz low pass filter). (From Babloyantz and Destexhe [20] with
permission.)
the attractor is not static, that is to say, it varies with the level of sleep.
Correspondingly, the correlation dimension has decreasing values as sleep
deepens.
Mayer-Kress and Layne [230] used the results obtained by a number of
investigators to reach the following conclusions:
(1) The ‘fractal dimension’ of the EEG cannot be deter-
mined regionally, due to non-stationarity of the signal and sub-
sequent limitations in the amount of acceptable data.
(2) EEG data must be analyzed in a comparative sense with
the subject acting as their own control.
(3) In a few cases (awake but quiet, eyes closed) with limited
time samples, it appears that the dimension algorithm converges
to finite values.
(4) Dimension analysis and attractor reconstruction could
prove to be useful tools for examining the EEG and complement
the more classical methods based on spectral properties.
(5) Besides being a useful tool in determining the optimal
delay-time for dimension calculations, the mutual information
content is a quantity which is sensitive to different brain states.
The data processing results suggest the existence of chaotic attractors
determining the dynamics of brain activity underlying the observed EEG
signals. This interpretation of the data would be strongly supported by
the existence of mathematical models that could reproduce the observed
behavior; such as in the examples shown earlier in this chapter. One such
model has been developed by Freeman [105] to describe the dynamics of
the olfactory system, consisting of the olfactory bulb (OB), anterior nucleus
(AON) and prepyriform cortex (PC). Each segment consists of a collection
of excitatory or inhibitory neurons which in isolation is modeled by a non-
linear second-order ordinary differential equation. The basal olfactory EEG
is not sinusoidal as one might have expected, but is irregular and aperiodic.
This intrinsic unpredictability is manifest in the approach to zero of the
autocorrelation function of the time series data. This behavior is captured
in Freeman’s dynamic model.
The model of Freeman generates a voltage time series from sets of cou-
pled nonlinear differential equations with interconnections that are spec-
ified by the anatomy of the olfactory bulb, the anterior nucleus and the
prepyriform cortex. When an arbitrarily small input pulse is received at
the receptor, the model system generates continuing activity that has the
statistical properties of the background EEG of resting animals. A compar-
ison of the model output with that of a rat is made in Figure 5.28. Freeman
[105] comments:
FIGURE 5.27. Two-dimensional phase portraits derived from the EEG of: (a) an awake
subject, (b) sleep stage two, (c) sleep stage four, (d) REM sleep. The time series x0 (t)
is made of N = 4000 equidistant points. The central EEG derivation C4-A1 according
to the Jasper system. Recorded with PDP11-44, 100Hz for 40 s. The value of the shift
from (a) to (d) is τ = 10Δt. (From Babloyantz [18] with permission.)
FIGURE 5.28. Examples of chaotic background activity generated by the model, simu-
lating bulbar unit activity and the EEGs of the OB, AON and PC. The top two traces
are representative records of the OB and PC EEGs from a rat at rest breathing through
the nose. (From Freeman [104] with permission.)
Freeman [104] was able to induce epileptiform seizures in the prepyriform
cortex of cat, rat and rabbit. The seizures closely resemble variants of
psychomotor or petit mal epilepsy in
humans. His dynamic model, discussed in the preceding section, enables
him to propose neural mechanisms for the seizures, and investigate the
model structure of the chaotic attractor in transition from the normal to
the seizure state. As I have discussed, the attractor is a direct consequence
of the deterministic nature of brain activity, and what distinguishes normal
activity from that observed during epileptic seizures is a sudden drop in the
dimensionality of the attractor. Babloyantz and Destexhe [18] determine
the dimensionality of the brain’s attractor to be 4.05 ± 0.5 in deep sleep
and to have the much lower dimensionality of 2.05 ± 0.09 in the epileptic
state.
Epileptic seizures are manifestations of a characteristic state of brain
activity that can and often does occur without apparent warning. The
spontaneous transition of the brain from a normal state to an epileptic state
may be induced by various means, but is usually the result of functional
disorders or lesions. Such a seizure manifests an abrupt, violent, usually
self-terminating disorder of the cortex; an instability induced by the break-
down of neural mechanisms that ordinarily maintain the normal state of
the cortex and thereby assure its stability. In the previous section evidence
indicated that the normal state is described by a chaotic attractor. Now
the seizure state is also modeled as a chaotic attractor, but with a lower
dimension. Babloyantz and Destexhe [18] were concerned with seizures of
short duration (approximately five seconds in length) known as ‘petit mal.’
This type of generalized epilepsy may invade the entire cerebral cortex and
shows a bilateral symmetry between the left and right hemispheres. As is
apparent in the EEG time series in Figure 5.29 there is a sharp transition
from the apparently noisy normal state to the organized, apparently pe-
riodic epileptic state. The transition from the epileptic state back to the
normal state is equally sharp.
A sequence of stimulations applied to the lateral olfactory tract (LOT)
induces seizures when the ratio of background activity to induced activity
exceeds a critical value [104]. In Figure 5.30 the regular spike train
of the seizure induced by the applied stimulation shown at the left is de-
picted. These data are used to define the phase space variables {x_0(t), x_0(t +
τ), ..., x_0[t + (m − 1)τ]} necessary to construct the phase portrait of the sys-
tem in both normal and epileptic states.
In Figure 5.31 is depicted the projection of the epileptic attractor onto a
two-dimensional subspace for four different angles of observation. Babloy-
antz and Destexhe [18] point out that the structure of this attractor is
reminiscent of the spiral or screw chaotic attractor of Rössler [298].
FIGURE 5.29. (a) EEG recording of a human epileptic seizure of petit mal activity.
Channel 1 (left) and channel 3 (right) measure the potential differences between frontal
and parietal regions of the scalp, whereas channel 2 (left) and channel 4 (right) cor-
respond to the measures between vertex and temporal regions. This seizure episode,
lasting 5 seconds is the longest and the least noise-contaminated EEG selected from a
24-hr recording on a magnetic tape of a single patient. Digital PDP 11 equipment was
used. The signal was filtered below 0.2 Hz and above 45 Hz and is sampled in 12 bits at
1200Hz. (b) One pseudocycle is formed from a relaxation wave. (From Babloyantz and
Destexhe [18] with permission.)
Freeman [104] did not associate the attractor he observed with any of the
familiar mathematical forms, but he was able to capture a number of the
qualitative features of the dynamics with calculations using his model. It
is clear in Figure 5.32 that the attractor for a rat during seizures is well
captured by his model dynamics. He acknowledged that the unpredictabil-
ity in the detail of the simulated and recorded seizure spike trains indicate
that they are chaotic, and in this regard he agreed with the conclusion
of Babloyantz and Destexhe. Note the similarity in the attractor depicted
in Figure 5.32 with that for the heart in Figure 5.23c. The latter authors
calculated the dimension of the reconstructed attractor using the limited
data sample available in the single realization of human epileptic seizure.
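Dimension estimates of this kind are conventionally obtained from the correlation integral of Grassberger and Procaccia; a compact sketch of that standard algorithm (function names and fitting range are mine), applied to an embedded data set X such as the one constructed above:

    import numpy as np
    from scipy.spatial.distance import pdist

    def correlation_integral(X, r):
        """C(r): fraction of pairs of embedded points closer than r."""
        return np.mean(pdist(X) < r)

    def correlation_dimension(X, r_values):
        """The slope of log C(r) versus log r estimates the correlation
        dimension of the reconstructed attractor."""
        C = np.array([correlation_integral(X, r) for r in r_values])
        keep = C > 0                          # avoid taking the log of zero
        slope, _ = np.polyfit(np.log(r_values[keep]), np.log(C[keep]), 1)
        return slope

    r_values = np.logspace(-1.5, 0.0, 12)
    print(correlation_dimension(X[::5], r_values))   # subsampled for speed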
FIGURE 5.30. The last 1.7 sec is shown of a 3 sec pulse train to the LOT (10 V, 0.08 ms, 10/sec), with decrement in response amplitudes beginning at 0.7 sec before the end of the train. Seizure spike trains begin uncoordinated in both structures and settle into a repetitive train at 3.4/sec with the PC spike leading both the OB spike and the EMG spike from the ipsilateral temporal muscle by 25 ms. (From Freeman [104] with permission.)
[Figure 5.31 shows four two-dimensional projections with axes V(t), V(t + T) and V(t + 2T), at rotation angles α = 45°, 135°, 225° and 315°.]
FIGURE 5.31. Phase portraits of human epileptic seizure. First, the attractor is repre-
sented in a three-dimensional phase space. The figure shows two-dimensional projections
after a rotation of an angle α around the V (t) axis. The time series is constructed from
the first channel of the EEG (n = 5000 equidistant points and τ = 19Δt). Nearly
identical phase portraits are found for all τ in the range from 17Δt to 25Δt and also in
other instances of seizure. (From Babloyantz and Destexhe [18] with permission.)
FIGURE 5.32. Comparison of the output of the trace from granule cells (G) in the
olfactory model with the OB seizure from a rat (below), each plotted against itself
lagged 30 ms in time. Duration is 1.0 sec; rotation is counterclockwise. (From Freeman
[104] with permission.)
One strategy for understanding the dynamic behavior of the brain has
been general systems analysis, or general systems theory. In this approach a
system is defined as a collection of components arranged and interconnected
in a definite way. As stated by Basar [30] the components may be physical,
chemical, biological or a combination of all three. From this perspective if
the stimulus applied to the system is known (measured) and the response of
the system to this stimulus is known (measured) then it should be possible
to estimate the properties of the system. This, of course, is not sufficient
to determine all the characteristics of the ‘black box’ but it is the first step in
formulating what Basar calls a ‘biological system analysis theory’ in which
special modes of thought, unique to the special nature of living systems,
are required. In particular Basar points out the non-stationary nature of living systems.
5.6 Retrospective
The material included in this chapter spans the realm of chaotic activity, from the social influence on epidemiology, through the internal dynamics of a single neuron, up to and including complex biochemical reactions. In all
these areas the rich dynamic structure of chaotic attractors is seen and
scientists have been able to exploit the concepts of nonlinear dynamics to
answer some of the fundamental questions that were left unanswered or
ambiguous using more traditional techniques.
Infectious diseases may be divided into those caused by microparasites
such as viruses, bacteria and protozoa and those caused by macroparasites
such as helminths and arthropods. Childhood epidemics of microparasitic
infections such as mumps and chicken pox show almost periodic yearly outbreaks and these cyclic patterns of infection have been emphasized
in a number of studies [8]. In Figure 5.1 the number of reported cases of
infection each month for measles, chicken pox and mumps in New York City
and measles in Baltimore is depicted. The obvious irregularities in these
data were explained historically in terms of stochastic models [8, 29], but the
subsequent applications of chaotic dynamics to these data have resulted in
a number of interesting results. In Section 5.1 we review the Schaffer and
Kot [306] analysis of the data in Figure 5.1.
A research news article in Science by Pool [276] points out that the
outbreaks of measles in New York City followed a curious pattern before
the introduction of mass vaccinations. When children returned to school
each winter there was a sudden surge of infections corresponding to the
periods the students remained indoors exchanging germs. Over and above
this yearly cycle there occurred a biyearly explosion in the number of cases
of measles with a factor of five to ten increase in the number of cases
reported – sometimes as many as 10,000 cases a month. He points out,
however, that this biennial cycle did not appear until after 1945. Prior to
this, although the yearly peak occurred each winter, there did not seem to
be any alternating pattern of mild and severe years. In the period 1928 to
1944 there was no organized pattern of mild and severe years; a relatively
severe winter might be followed by two mild ones, or vice versa. This is
the intermittency that is arguably described by means of chaos. It should
be pointed out that these dramatic yearly fluctuations were ended with the
implementation of a vaccination program in the early 1960’s.
If we attempt to model physiological structures as complex networks
arising out of the interaction of fundamental units, then it stands to reason
that certain clinically observed failures in physiological regulation occur
because of the failure of one or more of these fundamental units. One ex-
ample of such a system is the mammalian central nervous system, and
the question that immediately comes to mind is whether this system can
display chaotic behavior. Rapp et al. [284] present experimental evidence
that strongly suggests that spontaneous chaotic behavior does occur. In the
same vein Hayashi et al. [148] show that sinusoidal electrical stimulation
of the giant internode cell of the freshwater alga Nitella flexilis causes en-
trainment, quasiperiodic behavior and chaos just as did the two oscillator
model of the heart discussed previously. We review both of these examples
in Section 5.2.
The first dynamical system that was experimentally shown to manifest a
rich variety of dynamics involved nonequilibrium chemical reactions. Ar-
neodo et al. [14] comment that one of the most common features of these
chemical reactions is the alternating sequence of periodic and chaotic states,
the Belousov-Zhabotinskii reaction being the most thoroughly studied of
the oscillating chemical reactions. I briefly indicated some of the experi-
mental evidence for the existence of chaos in well-controlled nonequilibrium
reactions in Section 5.3.
There are a number of mathematical models of the heart with an imag-
inative array of assumed physical and biological characteristics. In Section
5.4 we display some of the laboratory data that suggest that the electrical
properties of the mammalian heart are manifestations of a chaotic attrac-
tor [123, 186]. One such indication comes from the time series of interbeat intervals, that is, the intervals between successive R waves in the electrocardiographic signal. The ordered set of RR intervals forms a suitable time series when the RR interval magnitude is plotted versus the interval number in the sequence of heart beats. I also indicated how to determine
the fractal dimension of this time series.
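For concreteness, constructing that series amounts to differencing the R-wave occurrence times; a brief sketch with fabricated beat times:

    import numpy as np

    # Fabricated R-wave occurrence times (seconds) detected from an ECG
    r_times = np.array([0.00, 0.81, 1.64, 2.43, 3.27, 4.05, 4.90])

    rr = np.diff(r_times)                  # the ordered set of RR intervals
    beat = np.arange(1, rr.size + 1)       # interval number in the sequence
    # Plotting rr against beat gives the interbeat-interval time series
    # whose fractal dimension is discussed in Section 5.4.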
The chapter closes with a brief discussion of the history and analysis
of EEG time series in Section 5.5. One point of interest was associating a
scaling index with the various stages of brain activity and in turn relating
the scaling index to a fractal dimension. The fractal dimension quantified
the 1/f variability depicted in the EEG power spectral density. The second
point of interest was the changing appearance of the electrical activity of
the brain as manifest in the differences in the phase portraits of the time
series. The differences between the brain activity in the awake state and
the four stages of sleep were found to be evident, with the fractal dimension
changing dramatically between the various states. But the change was most
significant during epileptic seizures when the dimension would be reduced
by at least a factor of two from its value in deep sleep.
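The scaling index in question is the slope of the power spectral density on doubly logarithmic axes. A sketch of the estimate, assuming the fractional-Brownian-motion relations β = 2H + 1 and D = 2 − H, so that D = (5 − β)/2 (function name and fitting band are mine):

    import numpy as np
    from scipy.signal import welch

    def spectral_exponent(x, fs, fmin, fmax):
        """Fit S(f) ~ 1/f**beta over the band [fmin, fmax] and return beta."""
        f, S = welch(x, fs=fs, nperseg=1024)
        band = (f >= fmin) & (f <= fmax)
        slope, _ = np.polyfit(np.log(f[band]), np.log(S[band]), 1)
        return -slope

    x = np.cumsum(np.random.randn(2**14))  # Brownian record: beta = 2, D = 1.5
    beta = spectral_exponent(x, fs=100, fmin=0.5, fmax=10)
    print(beta, (5 - beta) / 2)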
I hope this revision makes it clear that both chaos and fractals have revolutionized the way scientists think about complexity, and in particular the way they think about complex physiologic phenomena in medicine. Heart
beats, stride intervals and breathing intervals do not have normal statistics; they are inverse power-law processes. These long-tailed distributions imply
that the underlying dynamics cannot be specified by a single scale such
as a rate or frequency, but span multiple scales that are interconnected
through their nonlinear dynamics. However, the question of why has not been
answered. Why should chaos and fractals be so important in physiology
and medicine? These two generic properties of complexity have in the past
decade dovetailed into what is optimistically called Network Science. If such
a new science is to exist it would overarch the traditional disciplines of bi-
ology, chemistry, physics, etc. because the properties of a complex network
would not be dependent on the mechanisms of a particular context. In the
present context we refer to this as a Physiological Network. Let me explain.
In this final chapter I tie together a number of the formal concepts dis-
cussed in this revision. This synthesis is assembled from the perspective
of complex networks. I could have provided an overview of the area, but
there are a number of excellent reviews from a variety of perspectives start-
ing with that of nonequilibrium statistical physics [4], the mathematics of
inverse power-law distributions [247] and the dynamics of social networks
[355, 356] to name a few. My own efforts to accomplish such a synthe-
sis required a book [386], which I recommend to those who enjoy finding
out why the mathematics is necessary. Another approach would have been
to describe things for a less mathematical audience as done in general
[27, 47, 356] and in medicine [381]. But I have elected to follow a third
path here; one that is based on a particular model that manifests most if
not all the properties I wish to relate to chaos and fractals in physiology
and medicine.
This network point of view has been developed into what has been termed
Network Physiology [33] but personally I prefer the term Fractal Physiology;
half the title of the present book. I believe the intent of coining this new
name was to capture relations between the topological structure of networks
and physiological function and I use it in this final chapter. However, the
science is far from establishing that all fractal properties are a consequence
of dynamic networks and until that time I retain my preference.
The next step beyond the random network in which elements are either
connected or not, was the social network in which the links can be either
weak or strong. The strong links exist within a family and among the closest
of friends, for example those that are called in case of emergency. On the
other hand, there are the weak links, such as those that link me to friends of my friends, those I regularly meet at the grocery store, and so on. In a random
network clusters form in which everyone knows everyone else. These clusters
are formed from strong ties and can now be coupled together through weak
social interactions. Watts and Strogatz [355] were able to show that by
randomly coupling arbitrarily distant clusters together with weak links a
new kind of network was formed, the small world network. The connectivity
of small world networks is described by scale-free inverse power laws and
not Poisson distributions. In small world networks individuals are much
closer together than they are in random networks thereby explaining the
six degrees of separation phenomenon.
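The collapse of distances is easy to verify numerically; a sketch using the networkx library (parameter choices are mine):

    import networkx as nx

    n, k = 1000, 10                       # 1000 elements, 10 neighbors each
    ring  = nx.watts_strogatz_graph(n, k, p=0.0)            # regular lattice
    small = nx.connected_watts_strogatz_graph(n, k, p=0.1)  # 10% weak links

    # Rewiring a few links collapses the average distance, while the
    # clustering built from strong local ties survives almost intact.
    print(nx.average_shortest_path_length(ring))    # ~ n / (2k) = 50
    print(nx.average_shortest_path_length(small))   # ~ log(n), much smaller
    print(nx.average_clustering(ring), nx.average_clustering(small))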
Small world theory demonstrates that one of the more important prop-
erties of networks is distance. This concept of distance is related to the
abstract notion of a metric and changes from that in a social network, to
that in a transportation network, to that in a neural network; each network
has its own intrinsic metric. I was informally exposed to this idea when I
was a graduate student by my friend and mentor Elliott Montroll and as I
explained elsewhere [381]:
ψ(τ) ∝ τ^(−μ),    (6.2)
that is separate and distinct from the scale-free degree distribution. The
consensus time is the length of time the majority of the elements stay within
one of the two available states in the critical condition.
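For orientation, the survival probability plotted later in Figure 6.4 follows from Eq. (6.2) by a standard integration over the waiting times, assuming μ > 1 and that the quoted μ labels the waiting-time density:

    Ψ(t) = ∫_t^∞ ψ(τ) dτ ∝ t^(−(μ−1)),

so the inverse power-law index of the survival probability is one less than that of ψ(τ).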
Criticality is an emergent property of DMM and provides the dynamic
justification for its observed phase transition. This fundamental property
of criticality is phenomenologically explored in Section 6.3. Criticality is
shown to be a central mechanism in a number of complex physiologic phe-
nomena including neuronal avalanches [35] and multiple organ dysfunction
syndrome [49] as discussed subsequently.
The parameter M^(l) denotes the total number of nearest neighbors to element l, and M_1^(l)(t) and M_2^(l)(t) give the numbers of nearest neighbors in the decision states ‘yes’ and ‘no’, respectively.
We define the global variable in order to characterize the state of the network:

ξ(t) ≡ [N_1(t) − N_2(t)]/N,    (6.5)

where N is the total number of elements, and N_1(t) and N_2(t) are the numbers of elements in the states “yes” and “no” at time t, respectively. Individuals are not static; according to the master equation they randomly change their opinions over time, thereby making M_1^(l)(t) and M_2^(l)(t) vacillate in time as well. However, the total number of nearest neighbors is time independent: M_1^(l)(t) + M_2^(l)(t) = M^(l).
An isolated individual can be represented by a vanishing control param-
eter K = 0 and consequently that individual’s decision would randomly
oscillate between ‘yes’ and ‘no’, with Poisson statistics at the rate g. This
value of the control parameter would result in a collection of non-interacting
random opinions, such as that shown in the top panel of Figure 6.1. As the
control parameter is increased the coupling among the elements in the net-
work increases and consequently the behavior of the global variable reflects
this change. As the critical value Kc is approached the two states become
more clearly defined even in the case where all elements are coupled to
all other elements within the network. All-to-all coupling is often assumed
in the social sciences for convenience and we make that assumption tem-
porarily. Subsequently a more realistic assumption is made that restricts
the coupling of a given element to only its nearest neighbors.
The elements of the network are coupled when K > 0; an individual in the state ‘yes’ (‘no’) makes a transition to the state ‘no’ (‘yes’) faster or slower according to whether M_2 > M_1 (M_1 > M_2) or M_2 < M_1 (M_1 < M_2), respectively, where we have suppressed the superscript l. The quantity Kc
is the critical value of the control parameter K, at which point a phase-
transition to a self-organized, global majority state occurs. The efficiency of
a network in facilitating global cooperation can be expressed as a quantity
proportional to 1/Kc . Herein that self-organized state is identified as con-
sensus. On the other hand, expressing network efficiency through consensus
has the effect of establishing a close connection between network topology
and the ubiquitous natural phenomenon of synchronization. In this way a
number of investigators have concluded that topology plays an important
role in biology, ecology, climatology and sociology [12, 54, 273, 383].
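To make the dynamics concrete, here is a minimal sketch of an all-to-all DMM simulation, assuming the exponential form of the transition rates used by Turalska et al. [341]; for brevity the neighbor counts include the element itself, a negligible difference at these network sizes:

    import numpy as np

    rng = np.random.default_rng(0)

    def dmm_all_to_all(N=500, K=1.05, g=0.01, steps=100_000, dt=1.0):
        """Minimal all-to-all DMM: every element flips between 'yes' (+1)
        and 'no' (-1) at the cooperation-weighted Poisson rates of [341]:
        g*exp(-K*(N1 - N2)/N) for 'yes' elements, and the mirror-image
        rate g*exp(-K*(N2 - N1)/N) for 'no' elements."""
        s = rng.choice([-1, 1], size=N)
        xi = np.empty(steps)
        for t in range(steps):
            N1 = np.count_nonzero(s == 1)
            N2 = N - N1
            xi[t] = (N1 - N2) / N               # global variable, Eq. (6.5)
            # a like-minded majority slows an element's rate of changing opinion
            rate = np.where(s == 1,
                            g * np.exp(-K * (N1 - N2) / N),
                            g * np.exp(-K * (N2 - N1) / N))
            s = np.where(rng.random(N) < rate * dt, -s, s)
        return xi

    xi = dmm_all_to_all()                       # K just above Kc = 1

Running this sketch for K just above Kc = 1 and increasing N reproduces the qualitative size dependence described below for Figure 6.1.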
FIGURE 6.1. The variation of the mean field-global variable as a function of time. For
the network configuration: (top) N = 500, K = 1.05 and g = 0.01; (middle) N = 1500, K = 1.05 and g = 0.01; (bottom) N = 2500, K = 1.05 and g = 0.01.
Typical DMM calculations of the global variable for the control param-
eter greater than the critical value Kc = 1 in the all-to-all coupling con-
figuration for three sizes of the network are similar to those depicted in
Fig. 6.1. However, this figure also depicts a different dependence of the variability of the dynamics, namely on the size of the network. The three
panels display the global variable with the control parameter just above
the critical value. The dynamics appear random in the top panel for 500
elements. The dynamics in the central panel reveal two well-defined criti-
cal states with fluctuation for 1500 elements. Finally, the dynamics in the
lower panel indicate a clear decrease in the size of the fluctuations for 2500
elements. The variability in the time series resembles the thermal noise observed in physical processes, but there is no such mechanism in the DMM.
The erratic variations are the result of the finite number of elements in the
network. Moreover, the magnitude of the fluctuations is found to diminish with increasing network size. What distinguishes the DMM and Ising degree distributions is the value of the power-law index: in the DMM the index is near 1.0, whereas in the Ising model it is near 1.5.
FIGURE 6.2. The phase diagram for the global variable ξeq . The solid and dashed lines
are the theoretical predictions for the fully connected and two-dimensional regular lattice
network, respectively. In both cases N = ∞ and the latter case is the Onsager prediction
[256]. The circles are the DMM calculation for K = 1.70.
The resulting degree distribution is depicted in Figure 6.3. Turalska et al. [341] also evaluate the distribution density p(l) of the Euclidean distance l between two linked elements and find that the average
distance is of the order of 50, namely, of the order of the linear size of the two-dimensional grid 100 × 100. This average distance implies the emergence of long-range links that go far beyond nearest neighbor coupling and are essential to realizing the rapid transfer of information over a complex
network [43, 185].
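Incidentally, an average of 50 is also what one finds when link endpoints are placed uniformly at random on such a grid (the mean distance between two random points in a 100 × 100 square is about 52); a quick check with fabricated endpoints:

    import numpy as np

    rng = np.random.default_rng(1)

    # Fabricated links: pairs of endpoints on a 100 x 100 grid
    a = rng.uniform(0, 100, size=(5000, 2))
    b = rng.uniform(0, 100, size=(5000, 2))

    l = np.hypot(*(a - b).T)                     # Euclidean length of each link
    p, bins = np.histogram(l, bins=40, density=True)   # the density p(l)
    print(l.mean())                              # ~ 52, of the order of the grid size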
FIGURE 6.3. The degree distribution for the Dynamically Generated Complex Topology created by examining the dynamics of elements placed on a two-dimensional regular
lattice with the parameter values, N = 100 × 100, g = 0.01 and K = 1.69 in the DMM.
FIGURE 6.4. Consensus survival probability. Thick solid and dashed lines refer to the DMM implemented on a two-dimensional regular lattice with control parameter K = 1.70 and to dynamics of the ad hoc network evaluated for K = 1.10, respectively. In both cases g = 0.01. The thin dashed lines are visual guides corresponding to the scaling exponents μ = 1.55 and μ = 1.33, respectively. The thin solid line fitting the shoulder is an exponential.
The survival probability of the consensus state emerging from the ad hoc
network, with Kc = 1, is limited to the time region 1/g, and for N → ∞ is
expected [340] to be dominated by the shoulder depicted in Fig. 6.4. The
shoulder is actually a transition from the inverse power-law to an exponen-
tial distribution in time. The exponential is a signature of the equilibrium
regime of the network dynamics and is explained in detail elsewhere us-
ing a formal analogy to Kramers theory of chemical reactions [340]. It is
worth noting that this shoulder looks remarkably like the hump observed
in the sleep-wake studies of Lo et al. [203] in the last chapter. Their intu-
ition that the hump in the wake distribution was a consequence of a phase
transition in the underlying neural network that induced long-range order
into the network interactions is consistent with the explicit network calcu-
lation carried out here. The major difference is that the present calculation
did not require a separate assumption about self-organized criticality; the
phase transition emerged as a consequence of the network dynamics.
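The survival probability itself is straightforward to extract from a simulated global variable ξ(t); a sketch (the zero-crossing convention is mine) using the trajectory generated by the DMM sketch above:

    import numpy as np

    def consensus_survival(xi):
        """Empirical survival probability Psi(t) of the consensus state:
        the distribution of times between successive sign changes of xi."""
        change = np.nonzero(np.diff(np.sign(xi)) != 0)[0]
        waits = np.diff(change)                 # consensus lifetimes (time steps)
        t = np.sort(waits)
        psi = 1.0 - np.arange(1, t.size + 1) / t.size   # P(lifetime > t)
        return t, psi

    t, psi = consensus_survival(xi)             # xi from the DMM sketch above
    # the log-log slope of psi versus t is compared with the mu values of Fig. 6.4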
6.3 Criticality
Topology and criticality are the two central concepts that arise from the
application of dynamics to the understanding of the measurable properties
of the brain through the lens of complex networks. Topology is related
to the inverse power-law distributions of such newly observed phenomena
as neuronal avalanches [35, 36] and criticality [101] has to do with the
underlying dynamics that gives rise to the observed topology. Criticality
was first systematically studied in physics for systems undergoing phase
transitions as a control parameter is varied. For example, water transitions
from a liquid to a solid as temperature is lowered and to a gas as the
temperature is raised. The temperatures at which these transitions occur are called critical points or critical temperatures. Physical systems consist of a
large number of structurally similar interacting units and have properties
determined by local interactions. As a critical point, that is, the critical value of the control parameter, is reached the interactions suddenly change character.
In the case of the phase transition from water vapor to fluid what had been
the superposition of independent dynamical elements becomes dominated
by short-range interactions and on further temperature decrease the second
critical point is reached and one has long-range coordinated activity: ice.
The dynamical source of these properties was made explicit through the
development of DMM, which is related to but distinct from the Ising model
used by others in explaining criticality in the context of the human brain
[387].
Zemanova et al. [410] point out that investigators [325] have determined
that the anatomical connectivity of the animal brain has a number of prop-
erties similar to those of small-world and scale-free networks and organizes
into clusters (communities) [158, 159]. However, the topology of these networks remains largely unclear.
P(n) ∝ n^α    (6.7)
FIGURE 6.5. Probability distribution of neuronal avalanche size: (black) size measured using the total number of activated electrodes; (teal) size measured using the total local field potential (LFP) amplitude measured at all electrodes participating in the avalanche. (Adapted from [35].)
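The counting procedure behind such a distribution is simple; a sketch of the binning rule of Beggs and Plenz [35] (the width of the time bins is a free parameter chosen beforehand):

    import numpy as np

    def avalanche_sizes(active_counts):
        """Avalanche sizes in the sense of Beggs and Plenz [35]: an avalanche
        is a maximal run of nonempty time bins bracketed by empty bins, and
        its size is the total activity summed over the run."""
        sizes, current = [], 0
        for c in active_counts:
            if c > 0:
                current += c
            elif current > 0:
                sizes.append(current)
                current = 0
        if current > 0:
            sizes.append(current)
        return np.array(sizes)

    # bins of electrode counts 0 0 2 3 1 0 0 4 0 yield avalanches of size 6 and 4
    print(avalanche_sizes([0, 0, 2, 3, 1, 0, 0, 4, 0]))

Histogramming the resulting sizes on logarithmic axes yields the P(n) of Eq. (6.7).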
6.4 Finale
General systems theory, cybernetics, catastrophe theory, nonlinear dynam-
ics, chaos theory, synergetics, complexity theory, complex adaptive systems,
and fractal dynamics, have all contributed to our understanding of physi-
ology and medicine. Some have passed out of fashion whereas others have
proven to be foundational. As the title of this chapter suggests, network sci-
ence is the ‘modern’ strategy devised to understand the intermittent, scale-
invariant, nonlinear, fractal behavior of physiologic structure and function.
Part of what makes network science an attractor is that although it follows
a long tradition of theoretical methods and constructs it retains its intellec-
tual flexibility. Network Science engenders understanding of the complexity
of living networks through emergent behavior.
I was tempted to end this book with a review of what has been covered,
but on reflection that seemed to be ending on a sour note. After some
tossing and turning I decided that lists of outstanding problems that might
stimulate the interest of some bright cross-disciplinary researcher would be
of more value. One such list of questions for future research was compiled
by Sporns et al. [325]:
• How can scale-free functional networks arise from the structural or-
ganization of cortical networks?
It is interesting that since 2004 when this list was compiled a number of
partial answers to some questions have been obtained.
In a complex network many elements are interconnected but only a few
play a crucial role and are considered central for the network to carry out
its function. In the Network Science literature these elements are called
hubs and one of the questions concerned their role in scale-free functional brain networks. Various quantitative measures for identifying these central
elements had been developed but they did not readily lend themselves to
the study of the structure and function of the human brain. To rectify
this situation in 2010 Joyce et al. [175] developed an innovative centrality
measure (leverage centrality) that explicitly accounts for the local hetero-
geneity of an element’s connectivity within the network. This previously
neglected heterogeneous property determines how efficiently information is
locally transmitted and identifies elements that are highly influential within
a network. It is noteworthy that these elements that are the most influen-
tial are not necessarily the elements with the greatest connectivity; they
need not be hubs. The hierarchical structure of brain networks was a prime
candidate for use of the new centrality metric and fMRI data was used to
verify its utility.
In another investigation Hayasaka and Laurienti resolved inconsistencies across studies by others that had used networks deduced
from fMRI data. They did not merely apply the techniques of network
theory to the construction of cortical networks but showed that network
characteristics, such as the domain over which the connectivity distribu-
tion is inverse power law, depend sensitively on how the fMRI data are
averaged (by region or by voxel). They demonstrated that voxel-based net-
works, being more fine-grained, exhibit many desirable properties, such as
the co-locality of high connectivity and high efficiency within modules that
region-based networks do not share.
There is no natural end point for the discussion of the dovetailing
of chaos and fractals into physiologic networks and medicine, nor is there
a natural end to the present edition of this book. So in the tradition of
Sherlock Holmes let me bow out by saying “The game’s afoot,” and you
are invited to join in the chase.
[7] B. Andresen, J.S. Shiner and D.E. Uehlinger, “Allometric scaling and
maximum efficiency in physiological eigen time”, PNAS 99, 5822-
5824 (2002).
[11] M.E.F. Apol, R.S. Etienne and H. Olff, “Revisiting the evolutionary
origin of allometric metabolic scaling in biology”, Funct. Ecol. 22,
1070-1080 (2008).
[15] J. L. Aron and I.B. Schwartz, “Seasonality and period doubling bi-
furcations in an epidemic model,” J. Theor. Biol. 110, 665 (1984).
[17] I. Asimov, The Human Brain, Signet Science Lib., New York. (1963).
[21] A. Babloyantz and A. Destexhe, “Is the normal heart a periodic os-
cillator?” Biol. Cybern. 58, 203-211 (1988).
[22] A. Babloyantz, J.M. Salazar and C. Nicolis, “Evidence of chaotic dynamics during the sleep cycle,” Phys. Lett. 111A, 152-156 (1985).
[23] P. Bak, C. Tang and K. Wiesenfeld, “Self-organized criticality: an
explanation of the 1/f noise”, Phys Rev Lett 59,381–384 (1987).
[24] J.R. Banavar, J. Damuth, A. Maritan and A. Rinaldo, “Allometric
cascades”, Nature 421, 713 (2003).
[25] J.R. Banavar, J. Damuth, A. Maritan and A. Rinaldo, “Scaling in
Ecosystems and the Linkage of Macroecological Laws”, Phys. Rev.
Lett. 98, 068104 (2007).
[26] J.R. Banavar, M.E. Moses, J.H. Brown, J. Damuth, A. Rinaldo, R.M.
Sibly and A. Maritan, “A general basis for quarter-power scaling in
animals”, Proc. Natl. Acad. Sci. USA 107, 1516-1520 (2010).
[27] A.-L. Barabasi, A.-L., Linked: How Everything is Connected to Ev-
erything Else and What it Means for Business, Science, and Everyday
Life, Plume, NewYork (2003).
[28] G.I. Barenblatt and A.S. Monin, “Similarity principles for the biol-
ogy of pelagic animals”, Proc. Nat. Acad. Sci. USA 99, 10506-10509
(1983).
[29] M. S. Bartlett, Stochastic Population Models in Ecology and Epidemi-
ology, London, Methuen (1960).
[30] E. Basar, Biophysical and Physiological Systems Analysis, Addison-
Wesley, London (1976).
[31] E. Basar, H. Flohr, H. Haken and A. I. Mandell, eds. Synergetics of
the brain, Springer-Verlag, Berlin (1983).
[32] E. Basar, A. Gönder, C. Ozesmi and P. Ungan, “Dynamics of brain rhythmic and evoked potentials. III. Studies in the auditory pathway, reticular formation, and hippocampus during sleep,” Biol. Cybernetics 20, 161-169 (1975).
[33] A. Bashan, R.P. Bartsch, J.W. Kantelhardt, S. Havlin and P.Ch. Ivanov, “Network physiology reveals relations between network topology and physiological function”, Nature Comm. 3, 702, DOI: 10.1038/ncomms1705 (2012).
[34] J.B. Bassingthwaighte, L.S. Liebovitch and B.J. West, Fractal Phys-
iology, Oxford University Press, New York (1994).
[35] J.M. Beggs and D. Plenz, “Neuronal avalanches in neocortical cir-
cuits”, J. Neurosci. 23, 11167-77 (2003).
[36] J.M. Beggs and D. Plenz, “Neuronal avalanches are diverse and pre-
cise activity patterns that are stable for many hours in cortical slice
cultures”, J. Neurosci. 24, 5216-29 (2004).
[37] G. Benettin, L. Galgani and J. M. Strelcyn, “Kolmogorov entropy
and numerical experiments,” Phys. Rev. A 14, 2338 (1976).
[38] J. Beran, Statistics of Long-Memory Processes, Monographs on
Statistics and Applied Probability 61, Chapman & Hall, New York
(1994).
[39] M. Berry, “Diffractals,” J. Phys. A 12, 781-797 (1979).
[40] M. V. Berry and Z. V. Lewis, “On the Weierstrass-Mandelbrot fractal
function,” Proc. Roy. Soc. Lond. 370A, 459 (1980).
[41] S. Bianco, E. Geneston, P. Grigolini and M. Ignaccolo, Phys. Rev. E
387, 1387 (2008).
[42] J.W. Blaszcyk and W. Klonowski, “Postural stability and fractal dy-
namics”, Acta Neurobiol. Exp. 61, 105-112 (2001).
[43] M. Boguna et al., Nature Physics 5, 74 (2009); M. Boguna and D.
Krioukov, Phys. Rev. Lett. 102, 058701 (2009).
[44] F. Bokma, ”Evidence against universal metabolic allometry”, Func.
Eco. 18, 184-187 (2004).
[45] P. Bonifazi, M. Goldin, M.A. Picardo, I. Jorquera, A. Cattani, G.
Bianconi, A. Represa, Y. Ben-Ari, and R. Cossart, “GABAergic hub
neurons orchestrate synchrony in developing hippocampal networks”, Science 326, 1419-1424 (2009).
[46] Brown J.H., West G.B. and Enquist B.J., “Yes, West, Brown and
Enquist’s model of allometric scaling is both mathematically correct
and biologically relevant”, Funct. Ecol. 19, 735-738 (2005).
[47] M. Buchanan, Nexus, W.W. Norton, New York (2002).
[48] T.G. Buchman, J.P. Cobb, A.S. Lapedes and T.B. Kepler, “Complex
systems analysis: a tool for shock research”, SHOCK 16, 248-251
(2001).
[49] T.G. Buchman, “The Community of the Self”, Nature 420, 246-251
(2002).
[52] W.W. Calder III, Size, Function and Life History, Harvard University
Press, Cambridge, MA (1984).
[53] C.G. Caro, T.J. Pedley, R.C. Schroter and W.W. Seed, The Mechan-
ics of Circulation, Oxford University Press, Oxford (1978).
[54] C. Castellano, S. Fortunato and V. Loreto, Rev. Mod. Phys. 81, 591
(2009).
[59] D. L. Cohn, “Optimal systems. Parts I and II,” Bull. Math. Biophys. 16, 59-74 (1954); 17, 219-227 (1955).
[61] J.J. Collins and I. N. Stewart, “Coupled Nonlinear Oscillators and the
Symmetries of Animal Gaits”, J. Nonlinear Sci. 3, 349-392 (1993).
[62] J.J. Collins and C.J. De Luca, “Random walking during quiet stand-
ing”, Phys. Rev. Lett. 73, 764-767 (1994).
[63] M. Conrad, “What is the use of chaos?” in Chaos, ed. A.V. Holden,
Manchester University Press, Manchester UK (1986).
[66] Couzin, I.D. (2007) Collective minds. Nature 445, 715; Couzin, I.D.
(2009) Collective cognition in animal groups. TRENDS in Cognitive
Sciences 13, 36-43.
[71] C.A. Darveau, R.K. Suarez, R.D. Andrews and P.W. Hochachka,
“Allometric cascade as a unifying principle of body mass effects on
metabolism”, Nature 417, 166-170 (2002).
[72] C.A. Darveau, R.K. Suarez, R.D. Andrews and P.W. Hochachka,
“Darveau et al. reply”, Nature 421, 714 (2003).
[74] G.S. Dawes, H.E. Cox, M.B. Leduc, E.C. Liggins and R.T. Richards,
J. Physiol. Lond. 220, 119-143 (1972).
[75] R. Dawkins, The Selfish Gene, Oxford University Press, New York
(1976).
[77] Dodds P.S., Rothman D.H. and Weitz J.S., “Re-examination of the
“3/4-law” of Metabolism”, J. Theor. Biol. 209, 9-27 (2001).
[78] S.N. Dorogovtsev and J.F.F. Mendes, Adv. Phys. 51, 1079 (2002).
[111] L.R. Ginzburg, O. Burger and J. Damuth, “The May threshold and
life-history allometry”, Bio. Lett. doi:10.1098/rsbl.2010.0452
[117] D.S. Glazier, ”Beyond the ’3/4-power law’: variation in the intra-
and interspecific scaling of metabolic rate in animals”, Biol. Rev. 80,
611-662 (2005).
[118] D.S. Glazier, “The 3/4-power law is not universal: Evolution of isometric, ontogenetic metabolic scaling in pelagic animals”, BioScience 56, 325-332 (2006).
[146] J.M. Hausdorff, S.L. Mitchell, R. Firtion, C.K. Peng, M.E. Cud-
kowicz, J.Y. Wei and A.L. Goldberger, “Altered fractal dynamics
of gait: reduced stride-interval correlations with aging and Hunting-
ton’s disease”, J. Appl. Physiol. 82 (1997).
[147] J.M. Hausdorff, Y. Ashkenazy, C.-K. Peng, et al., “When human walk-
ing becomes random walking: fractal analysis and modeling of gait
rhythm fluctuations”, Physica A-Stat. Mech. and its Appl., 302: 138-
147 (2001).
[153] A.A. Heusner, “Energy metabolism and body size: I. Is the 0.75 mass
exponent of Kleiber’s equation a statistical artifact?”, Resp. Physiol.
48, 1-12 (1982).
[154] A.A. Heusner, “Size and power in mammals”, J. Exp. Biol. 160, 25-
54 (1991).
[157] A.V. Hill, “The dimensions of animals and their muscular dynamics”,
Sci. Prog. 38, 209-230 (1950).
[158] C.C. Hilgetag, G.A.P.C. Burns, M.A. O’Neill, J.W. Scannell, and
M.P. Young, “Anatomical Connectivity Defines the Organisation of
Clusters of Cortical Areas in Macaque Monkey and Cat”, Phil Trans
R Soc Lond B 355, 91-110 (2000).
[159] C.C. Hilgetag and M. Kaiser, “Clustered organization of cortical con-
nectivity”, Neuroinformatics 2, 353-360 (2004).
[160] M.A. Hofman, “Size and shape of the cerebral cortex in mammals. I.
The cortical surface”, Brain Behav. Evol. 27, 28-40 (1985).
[161] “Heart rate variability”, European Heart Journal 17, 354-381 (1996).
[162] J.T.M. Hosking, “Fractional Differencing”, Biometrika 68, 165-176
(1982).
[163] J. Hou, H. Zhao and D. Huang, “The Computation of Atrial Fibrilla-
tion Chaos Characteristics Based on Wavelet Analysis”, Lect. Notes
in Comp. Sci. 4681, 803-809 (2007).
[164] J. L. Hudson and J. C. Mankin, “Chaos in the Belousov-Zhabotinsky
reaction,” J. Chem. Phys. 74, 6171-6177 (1981).
[165] J.S. Huxley, Problems of Relative Growth, Dial Press, New York
(1931).
[166] R. E. Ideker, G. J. Klein and L. Harrison, Circ. Res. 63, 1371 (1981).
[167] N. Ikeda, ”Model of bidirectional interaction between myocardial
pacemakers based on the phase response curve,” Biol. Cybern. 43,
157-167 (1982).
[168] N. Ikeda, H. Tsuruta and T. Sato, ”Difference equation model of
the entrainment of myocardial pacemaker cells based on the phase
response wave,” Biol. Cybern. 42, 117-128 (1981).
[169] L. Isella, J. Stehlé, A. Barrat, C. Cattuto, J.F. Pinton, and W. Van
den Broeck, “What’s in a crowd? Analysis of face-to-face behavioral
networks”, J Theor Biol 271,166-180 (2010).
[170] T. M. Itil, “Qualitative and quantitative EEG findings in schizophre-
nia,” Schizophrenia Bulletin 3, 61-79 (1977).
[171] P. Ch. Ivanov, M.G. Rosenblum, C.-K. Peng, J. Mietus, S. Havlin,
H.E. Stanley and A.L. Goldberger, “Scaling behavior of heartbeat in-
tervals obtained by wavelet-based time-series analysis”, Nature 383,
323-327 (1996).
[173] H.J. Jerison, “Allometry, brain size, cortical surface, and convoluted-
ness” in Armstrong E. and Falk O. (Eds.) Primate Brain Evolution,
Plenum, New York, pp. 77-84 (1982).
[179] E.R. Kandel, “Small Systems of Neurons,” Mind and Behavior, R.L.
Atkinson and R.C. Atkinson eds. W.H. Freeman and Co., San Fran-
cisco (1979).
[184] Kerkhoff A.J. and B.J. Enquist, “Multiplicative by nature: why log-
arithmic transformation is necessary in allometry”, J. Theor. Biol.
257, 519-521 (2009).
[200] Lindstedt S.L. and W.A. Calder III, “Body size, physiological time,
and longevity of homeothermic animals”, Quart. Rev. Biol. 36, 1-16
(1981).
[201] Lindstedt S.L., B.J. Miller and S.W. Buskirk, “Home range, time and
body size in mammals”, Ecology 67, 413-418 (1986).
[202] C.-C. Lo, L.A. Nunes Amaral, S. Havlin, P.Ch. Ivanov, T. Penzel,
J.-H. Peter and H.E. Stanley, “Dynamics of sleep-wake transitions
during sleep”, Europhys. Lett. 57, 625-631 (2002).
[203] C.-C. Lo, T. Chou, T. Penzel, T.E. Scammell, R.E. Strecker, H.E.
Stanley and P.Ch. Ivanov, “Common scale-invariant patterns of sleep-
wake transitions across mammalian species”, PNAS 101, 17545-
17548 (2004).
[209] P.C. Ivanov, L.A.N. Amaral, A.L. Goldberger, S. Havlin, M.G. Rosen-
blum, Z.R. Struzik, H.E. Stanley, “Multifractality in human heart-
beat dynamics”, Nature 399, 461 (1999).
[216] B.B. Mandelbrot, “How Long is the Coast of Britain? Statistical Self-
Similarity and Fractal Dimension”, Science 156, 636-640 (1967).
[224] B.D. Malamud, G. Morein and D.L. Turcotte, “Forest fires: an ex-
ample of self-organized critical behavior”, Science 281,1840 –1842
(1998).
[227] R.D. Mauldin and S.C. Williams, “On the Hausdorff dimension of
some graphs,” Trans. Am. Math. Soc. 298, 793-803 (1986).
[229] R.M. May and G.F. Oster, “Bifurcations and dynamic complexity in
simple ecological models”, Am. Nat. 110, 573-599 (1976).
[232] T.A. McMahon and J.T. Bonner, On Size and Life, Sci. Am. Library,
New York (1983).
[235] Micheloyannis, S., Pachou, E., Stam, C.J., Vourkas, M., Erimaki, S.,
Tsirka, V. (2006) Using graph theoretical analysis of multi channel
EEG to evaluate the neural efficiency hypothesis. Neurosci. Lett. 402,
273-277; Stam, C.J., de Bruin, E.A. (2004) Scale-free dynamics of
global functional connectivity in the human brain. Hum. Brain Mapp.
22, 97-109.
[237] M. Mobilia, A. Peterson and S. Redner, J. Stat. Mech.: Th. and Exp.,
P08029 (2007).
[268] C.-K. Peng, J. Mietus, Y. Li, C. Lee, J.M. Hausdorff, H.E. Stanley,
A.L. Goldberger and L.A. Lipsitz, “Quantifying fractal dynamics of
human respiration: age and gender effects”, Ann. Biom. Eng. 30,
683-692 (2002).
[269] J.I. Perotti, O.V. Billoni, F.A. Tamarit, D.R. Chialvo and S.A. Can-
nas, Phys. Rev. Lett. 103, 108701 (2009).
[270] R.H. Peters, The Ecological Implications of Body Size, Cambridge
University Press, Cambridge (1983).
[271] S.M. Pincus, “Greater signal regularity may indicate increased system
isolation”, Math. Biosci. 122, 161-181 (1994).
[272] http://www.physionet.org/
[273] A. Pikovsky, M. Rosenblum and J. Kurths, Synchronization: A Uni-
versal Concept in Nonlinear Science, Cambridge University Press,
Cambridge, UK (2001).
[274] D. Plenz, “Neuronal avalanches and coherence potentials”, The Eu-
ropean Physical Journal-Special Topics 205, 259-301 (2012).
[275] H. Poincaré, Mémoire sur les courbes définies par les équations différentielles, I-IV, Oeuvre 1, Gauthier-Villars, Paris (1888).
[276] R. Pool, “Is it chaos, or is it just noise?” in Science 243, 25 (1989).
[277] I. Podlubny, Fractional Differential Equations, Academic Press, San
Diego, CA (1999).
[278] C.A. Price, B.J. Enquist and V.M. Savage, “A general model for
allometric covariation in botanical form and function”, PNAS 104,
13204-09 (2007).
[279] J. C. Principe and J. R. Smith, “Microcomputer-based system for
the detection and quantification of petit mal epilepsy,” Comput. Biol.
Med. 12, 87-95 (1982).
[280] J.W. Prothero, “Scaling of cortical neuron density and white matter
volume in mammals”, J. Brain Res. 38, 513-524 (1997).
[281] O.G. Raabe, H.D. Yeh, G.M. Schum and R.F. Phalen, Tracheo-
bronchial Geometry: Human, Dog, Rat, Hamster. Albuquerque:
Lovelace Foundation for Medical Education and Research (1976).
[282] B. Rajagopalan and D.G. Tarboton, Fractals 1, 6060 (1993).
[289] L.E. Reichl, A Modern Course in Statistical Physics, John Wiley &
Sons, New York (1998).
[292] J.P. Richter, Ed., The Notebooks of Leonardo da Vinci, Vol. 1, Dover,
New York (1970); unabridged edition of the work first published in
London in 1883.
[295] F. Rohrer, “Flow resistance in human air passages and the effect of irregular branching of the bronchial system on the respiratory process in various regions of the lungs,” Pflugers Arch. 162, 225-299 (1915).
[296] S. Rossitti and H. Stephensen, Acta Physio. Scand. 151, 191 (1994).
[301] V.M. Savage, J.P. Gillooly, W.H. Woodruff, G.B. West, A.P. Allen,
B.J. Enquist and J.H. Brown, “The predominance of quarter-power
scaling in biology”, Func. Ecol. 18, 257-282 (2004).
[304] N. Scafetta, L. Griffin and B.J. West, “Hölder exponent for human
gait”, Physica A 328, 561-583 (2003).
[313] M.F. Shlesinger and B.J. West, ”Complex Fractal Dimension of the
Bronchial Tree”, Phys. Rev. Lett. 67, 2106-2108 (1991).
[320] J.K.L. da Silva, G.J.M. Garcia and L.A. Barbosa, “Allometric scaling
laws of metabolism”, Phys. Life Reviews 3, 229-261 (2006).
[322] J. M. Smith and R. J. Cohen, Proc. Natl. Acad. Sci. 81, 233 (1984).
[331] M.P.H. Stumpf and M.A. Porter, “Critical Truths About Power
Laws”, Science 335, 665-666 (2012).
[332] H.H. Szeto, P.Y. Cheng, J.A. Decena, Y. Chen, Y. Wu and G. Dwyer,
“Fractal properties of fetal breathing dynamics”, Am. J. Physiol. 262
(Regulatory Integrative Comp. Physiol. 32) R141-R147 (1992).
[334] G. R. Taylor, The Great Evolution Mystery, Harper and Row (1983).
[337] N.L. Tilney, G.L. Bailey and A.P. Morgan, “Sequential system failure
after rupture of abdominal aortic aneurysms: an unsolved problem in
postoperative care”, Ann. Surg. 178, 117-122 (1973).
[341] M. Turalska, B.J. West and P. Grigolini, Phys. Rev. E 83, 061142
(2011).
[343] D.L. Turcotte, Fractals and chaos in geology and geophysics, Cam-
bridge University Press, Cambridge (1992).
[344] B. van der Pol and J. van der Mark., “The heartbeat considered as a
relaxation oscillator and an electrical model of the heart,” Phil. Mag.
6, 763 (1928).
[345] B. van der Pol and J. van der Mark., Extr. arch. neerl. physiol. de
l’homme et des animaux 14, 418 (1929).
[346] F. Vanni, M. Lukovic and P. Grigolini, Phys. Rev. Lett. 107, 078103
(2011).
[349] Vierordt, Ueber das Gehen des Menschen in gesunden und kranken Zustaenden nach selbstregistrirender Methoden, Tuebingen, Germany
(1881).
[350] M.O. Vlad, F. Moran, V.T. Popa, S.E. Szedlacsek and J. Ross, “Functional, fractal nonlinear response with application to rate processes with
memory, allometry, and population genetics”, Proc. Natl. Acad. Sci.
USA 104, 4798-4803 (2007).
[353] D.I. Warton, I.J. Wright, D.S. Falster and M. Westoby, “Bivariate
line fitting methods for allometry”, Biol. Rev. 85, 259-291 (2006).
[355] D.J. Watts and S.H. Strogatz, Nature (London) 393, 440 (1998).
[361] E.R. Weibel, “The pitfalls of power laws”, Nature 417, 131-132
(2002).
[391] G.B. West, V.M. Savage, J. Gillooly, B.J. Enquist, W.H. Woodruff
and J.H. Brown, “Why does metabolic rate scale with body size?”,
Nature 421, 712 (2003)
[392] C.R. White and R.S. Seymour, “Allometric scaling of mammalian
metabolism”, J. Exp. Biol. 208, 1611-1619 (2005).
[393] H. Whitney, Ann. Math. 37, 645 (1936).
[394] C. Wickens, A. Kramer, L. Vanasse and E. Donchin, “Performance
of concurrent tasks: a psychophysiological analysis of the reciprocity
of information-processing resources,” Science 221, 1080-1082 (1983).
[395] N. Wiener, Time Series, MIT press, Cambridge, Mass. (1949).
[396] N. Wiener, Cybernetics, MIT Press, Cambridge, Mass. (1963).
[397] N. Wiener, Harmonic Analysis, MIT Press, Cambridge Mass (1964).
[398] K. G. Wilson, “Problems in physics with many scales of length,” Sci.
Am. 241, 158-179 (1979).
[399] T. A. Wilson, “Design of the bronchial tree,” Nature Lond. 18, 668-
669 (1967).
[400] A. T. Winfree, J. Theor. Biol. 16, 15 (1977).
[401] A. T. Winfree, J. Theor. Biol. 249, 144 (1984).
[402] J.M. Winters and P. E. Crago, Biomechanics and Neural Control of Posture and Movements, Springer-Verlag, New York (2000).
[403] A. Wolf, J. B. Swift, H. L. Swinney and J. A. Vastano, “Determin-
ing Lyapunov exponents from a time series,” Physica D 16, 285-317
(1985).
[404] S. J. Worley, J. L. Swain and P. G. Colavita, Am. J. Cardiol.. 5, 813
(1985).
[405] C.A. Yates, r. Erban, C. Escudero, I.D. Couzin, J. Buhl, I.G.
Kevrekidis, P.K. Maini and D.J.T. Sumpter, PNAS 106, 5464 (2009).
[406] J. A. Yorke and E. D. Yorke, “Metastable Chaos: The transition to
sustained chaotic behavior in the Lorenz model,” J. Stat. Phys. 21,
263 (1979).
[407] J. Xie, S. Sreenivasan, G. Korniss, W. Zhang, C. Lim and B.K. Szy-
manski, Phys. Rev. E 84, 011130 (2011).
[408] R. Zhang, J.H. Zuckerman, C. Giller and B.D. Levine, Am. J. Physiol. 274, H233 (1999).
biochemical
  reactions, 258
biochemistry, 228
biological clock, 228
biological evolution, 7
biological time, 65
biology, 128, 178
biomechanics, 108, 205
biomedical
  processes, 3
bioreactor, 65
birth rate, 132
blood circulation time, 65
blood flow, 70
  to brain, 24
blood flow velocity, 195
body cooling, 74
bone, 27
Bonifazi P., 275
boundaries
  metabolic level, 78
boundary
  constraints, 74
bowel, 58
brain, 255
  dynamics, 275
  injury, 278
  waves, 3
brain wave, 4
branching process, 55
breath rate variability
  BRV, 202
breath time, 65
breathing, 97
  episodes, 203
broadband, 124
bromide ion, 166
bromide ions, 228
bronchi, 29
bronchial
  airway, 29, 61
  architecture, 29
  branchings, 39
  tube, 37
  tube sizes, 30
bronchial airways, 20
bronchial tree, 27
bronchial tube, 180
bronchioles, 29
Brown, J.H., 66, 71
Brownian motion, 52, 184
BRV, 203
Buchman T.G., 278
Buchman T.K., 2
bursting, 222
BZ reaction, 228
Calder III, W.A., 65
Cannon, W., 2
canonical surface, 216
Cantor set, 40, 123
Cantor, G., 40
capillary bed, 70
carbon dioxide, 203
cardiac
  conduction, 116
  depolarization pulse, 84
  output, 70
  pulses, 3
cardiac chaos, 156, 232
cardiac oscillator, 110
cardiac pulse, 32
cardiac system, 79
cascade, 276
catatonia, 94
CBF, 195
Central Limit Theorem, 9
central nervous system, 225
central pattern generator
  CPG, 101
cerebellum, 205
cerebral
  auto-regulation, 198
cerebral blood flow
narrowband, 94
spinal cord
  transection, 102
spontaneous excitation, 23
squid
  giant axon, 224
squirrel monkey
  brain, 222
stress, 106
SRV, 194, 206
  fluctuations, 105
stability, 135
  local, 17
stable equilibria, 123
stationary, 5, 243
statistical, 20
  artifact, 66
statistical fluctuations
  origins in AR, 66
statistical mechanics
  classical, 119
steady state, 31, 134, 155, 228
Stein, K.M., 157
Stephenson, P.H., 195
stimulation, 24, 223
Stirling’s approximation, 185
stochastic process, 52
strange, 124
  attractor, 148
strange attractor, 118
strange attractors, 152
stress, 2
  natural, 109
  psychophysical, 109
stress relaxation, 190
stretched exponential, 190
stretching, 126
stretching rate, 151
stride interval
  variability, 24
stride rate variability
  SRV, 105, 205
stroboscope
  transfer function, 231
stroboscopic map, 223
Strogatz S.H., 263
structure function, 172
Struzik, Z.R., 105
subharmonic bifurcation, 156
subharmonic synchronization, 227
super central pattern generator
  SCPG, 105
superior colliculus, 102
superior vena cava, 233
surface of section, 216, 229, 240
susceptibles, 212
swarms, 263
Swift, J., 8
swinging heart, 158
sympathetic, 201
synchronization, 262
synchronize, 103
syncopated, 101
system
  dissipative, 17
system response, 186
systems
  cognitive, 17
systems theory, 4
  nonlinear dynamics, 21
Szeto, H.H., 203
taco, 126
tail, 219
Takens, F., 24, 166
tangent
  vector, 149
Tauberian Theorem, 84
Tauberian theorem, 238
taxonomy, 243
temperature, 99, 270
temperature gradient, 120
temporal complexity, 273
tendrils, 220