
FRACTAL PHYSIOLOGY

AND CHAOS IN MEDICINE


2nd Edition



STUDIES OF NONLINEAR PHENOMENA IN LIFE SCIENCE*

Editor-in-Charge: Bruce J. West

Vol. 5  Nonlinear Dynamics in Human Behavior
        edited by W Sulis & A Combs

Vol. 6  The Complex Matters of the Mind
        edited by F Orsucci

Vol. 7  Physiology, Promiscuity, and Prophecy at the Millennium: A Tale of Tails
        by B J West

Vol. 8  Dynamics, Synergetics, Autonomous Agents: Nonlinear Systems Approaches
        to Cognitive Psychology and Cognitive Science
        edited by W Tschacher & J-P Dauwalder

Vol. 9  Changing Mind: Transitions in Natural and Artificial Environments
        by F F Orsucci

Vol. 10 The Dynamical Systems Approach to Cognition: Concepts and Empirical
        Paradigms based on Self-Organization, Embodiment, and Coordination Dynamics
        edited by W Tschacher & J-P Dauwalder

Vol. 11 Where Medicine Went Wrong: Rediscovering the Path to Complexity
        by B J West

Vol. 12 Mind Force: On Human Attractions
        by F Orsucci

Vol. 13 Disrupted Networks: From Physics to Climate Change
        by B J West & N Scafetta

Vol. 14 Fractal Time: Why a Watched Kettle Never Boils
        by S Vrobel

Vol. 15 Decision Making: A Psychophysics Application of Network Science
        edited by P Grigolini & B J West

Vol. 16 Fractal Physiology and Chaos in Medicine, 2nd Edition
        by B J West

*For the complete list of titles in this series, please go to
http://www.worldscientific.com/series/snpls



Studies of Nonlinear Phenomena in Life Science – Vol. 16

FRACTAL PHYSIOLOGY
AND CHAOS IN MEDICINE
2nd Edition

Bruce J West
Army Research Office, USA

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI



Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library.

Studies of Nonlinear Phenomena in Life Science — Vol. 16


FRACTAL PHYSIOLOGY AND CHAOS IN MEDICINE
2nd Edition
Copyright © 2013 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.

ISBN 978-981-4417-79-2

Contents

Preface

1 Introduction
1.1 What is Linearity?
1.2 Why Uncertainty?
1.3 How Does Nonlinearity Change Our View?
1.4 Complex Networks
1.5 Summary and a Look Forward

2 Physiology in Fractal Dimensions
2.1 Complexity and the Lung
2.2 The Principle of Similitude
2.2.1 Fractals, Self-similarity and Renormalization
2.2.2 Fractal Lungs
2.2.3 Why fractal transport?
2.3 Allometry Relations
2.3.1 Empirical Allometry
2.3.2 WBE model
2.3.3 WW model
2.4 Fractal Signals
2.4.1 Spectral decomposition
2.5 Summary

3 Dynamics in Fractal Dimensions
3.1 Nonlinear Bio-oscillators
3.1.1 Super Central Pattern Generator (SCPG) model of gait
3.1.2 The cardiac oscillator
3.1.3 Strange attractors (deterministic randomness)
3.2 Nonlinear Bio-mapping
3.2.1 One-dimensional maps
3.2.2 Two-dimensional maps
3.2.3 The Lyapunov exponent
3.3 Measures of Strange Attractors
3.3.1 Correlational dimension
3.3.2 Attractor reconstruction from data
3.3.3 Chaotic attractors and false alarms
3.4 Summary and perspective

4 Statistics in Fractal Dimensions
4.1 Complexity and Unpredictability
4.1.1 Scaling Measures
4.2 Fractal Stochastic Dynamics
4.2.1 Simple Random Walks
4.2.2 Fractional random walks and scaling
4.2.3 Physical/physiological models
4.3 Physiologic Time Series
4.3.1 Heart Rate Variability (HRV)
4.3.2 Breath rate variability (BRV)
4.3.3 Stride rate variability (SRV)
4.4 Summary and Viewpoint

5 Applications of Chaotic Attractors
5.1 The Dynamics of Epidemics
5.2 Chaotic Neurons
5.3 Chemical Chaos
5.4 Cardiac Chaos
5.5 EEG Data and Brain Dynamics
5.5.1 Normal activity
5.5.2 Epilepsy: reducing the dimension
5.5.3 Task-related scaling
5.6 Retrospective

6 Physiological Networks: The Final Chapter?
6.1 Introduction to Complex Networks
6.1.1 A little history
6.1.2 Inverse power laws
6.2 The Decision Making Model (DMM)
6.2.1 Topological Complexity
6.2.2 Temporal Complexity
6.3 Criticality
6.3.1 Neuronal Avalanches
6.3.2 Multiple Organ Dysfunction Syndrome (MODS)
6.4 Finale

References

Index




Preface

This book is concerned with the application of fractals and chaos (as well
as other concepts from nonlinear dynamical systems theory) to biomedical
phenomena. In particular, I have used biomedical data sets and modern
mathematical concepts to argue against the outdated notion of homeostasis.
It seems to me that health is at least a homeodynamic process with
multiple steady states, each capable of survival. This idea was developed
in collaboration with my friend and colleague A. Goldberger during long
discussions in which we attempted to learn each other's disciplines. This
book is not restricted to our own research, however, but draws from the
research of a large number of investigators. Herein we seek breadth rather
than depth in order to communicate some of the excitement being
experienced by scientists applying these concepts in the life sciences. I
have tried in most cases to motivate a new mathematical concept using a
biomedical data set and have avoided discussing mathematics for its own
sake. Herein the phenomena to be explained take precedence over the
mathematics; therefore one will not find any proofs, but some attempt has
been made to provide references to where such proofs can be found.


I wish to thank all those who have provided help and inspiration over the
years, in particular L. Glass, A. Goldberger, A. Mandell and M. Shlesinger;
with special thanks to A. Babloyantz for a critical reading of an early
version of the manuscript. I also wish to thank Ms. Rosalie Rocher for her
expert word processing of the manuscript and W. Deering for making the
time to complete this work available to me.
Bruce J. West
Denton, TX
July 4, 1990

Second printing
I am gratified that this small book is going into a second printing. If
time had allowed I might have updated the examples given in the last
chapter and winnowed out some of the more speculative comments sprinkled
throughout. However, much that was tentative and speculative ten years
ago has since become well documented, if not universally accepted. So if
I had started down the road of revision I could easily have written an
entirely new book. Upon rereading the text I decided that it accomplished
its original purpose rather well, that being to communicate to a broad
audience the recent advances in modeling that have had, and are continuing
to have, a significant influence on physiology and medicine. So I decided
to leave well enough alone.
Bruce J. West
Research Triangle Park, NC
January 1, 2000

Second Edition
In the first edition of this book, fractal physiology was a phrase
intended to communicate what I and a few others thought was, along with
nonlinear dynamics and chaos, a dominant feature of phenomena in
physiology and medicine. In the nearly quarter century since its
publication fractal physiology has matured into an active area of
research in its own right. In a similar way nonlinear dynamic models have
replaced earlier, less inclusive and more restrictive, linear models of
biomedical phenomena. Much of the content of the earlier book was
preliminary and tentative, but it has withstood the test of time and
settled into a life of its own. Rather than taking victory laps, however,
I have elected to leave relatively unchanged those sections that were
correct and useful and to supplement the text with discussion of and
reference to the breakthroughs that have occurred in the intervening years.


The list of colleagues to whom I am indebted for sharing their knowledge
and wisdom with me has only grown over time; in particular I wish to
thank P. Allegrini, M. Bologna, P. Grigolini, and N. Scafetta. With the
advent of the word processor I can no longer acknowledge a secretary for
expert typing; I have had to rely on my own meager skills to accomplish
what has been done, such as it is.
Bruce J. West
Research Triangle Park, NC
August 30, 2012



Chapter 1
Introduction

The driver dozing behind the wheel of a car speeding along the highway,
the momentary lapse in concentration of the air-traffic controller, and
the continuous activity of an airplane pilot could all benefit from a
diagnostic tuned to the activity of the brain associated with wakeful
attentiveness. As systems become more complex and operators are required
to handle ever-increasing amounts of data and rapidly make decisions, the
need for such a diagnostic becomes increasingly clear. New techniques to
assess the state of the operator in real time have now been developed, so
that the capability exists for alerting the operator, or someone in the
command structure, to the possibility of impending performance breakdown.
This is an example of how new ideas for understanding dynamic networks
have been used in biomedical and social systems.
Clinicians with their stethoscopes poised over the healthy heart, radiolo-
gists tracking the flow of blood and bile, and physiologists probing the ner-
vous system are all, for the most part unknowingly, exploring the frontiers
of chaos and fractals. The related topics of chaos and fractals are central
concepts in the discipline of nonlinear dynamics developed in physics and
mathematics over the past quarter century. Perhaps the most compelling
applications of these concepts are not in the physical sciences but rather in
physiology and medicine where fractals and chaos have radically changed
long-held views about order and variability in health and disease. One of
the things I attempt to document here is that a healthy physiological net-
work has a certain amount of intrinsic variability, and a transition to a


more ordered or less complicated configuration may be indicative of dis-
ease. For example, the healthy variability in the normal mammalian heart
is lost when an individual experiences heart failure, a state in which the
normal spectrum of the cardiac pulse narrows dramatically.
Goldberger et al. [130] pointed out that the conventional wisdom in
medicine holds disease and aging to arise from stress on an otherwise or-
derly machine-like network. It was believed that this stress decreases order
by provoking erratic responses or by upsetting the body’s normal periodic
rhythms. Investigators have, over the past quarter century or so, discov-
ered, modeled and verified that the heart and other physiological networks
behave most erratically when they are young and healthy. Counterintu-
itively, increasingly regular behavior accompanies aging and disease. For
over a century the normal operation of physiological networks has been
interpreted as reducing variability and maintaining a constant internal
function: the Principle of Homeostasis. This view of how the body works
was introduced into medicine in the nineteenth century by the French sci-
entist Claude Bernard (1813–1878). He developed the concept of stability
of the human body, which was popularized by the American physiologist
Walter Cannon (1871–1945) with the introduction of the term homeostasis.
Cannon argued that any physiological variable should return by means of
negative feedback to its ’normal’ steady-state operation (fixed point) af-
ter being perturbed. Subsequently the notion of homeostasis became the
guiding principle of western medicine.
The operation of the human eye can be used as a concrete example of
homeostasis. The size of the eye's pupil changes inversely with the
intensity of the light entering the retina. The greater the intensity of
the incoming illumination the smaller the pupil, and vice versa. This
physiological balancing occurs because too much light would destroy the
light-sensitive cones of the retina, blinding the person. This mechanism
and a myriad of others are the culmination of countless years of
evolution, by which the body is believed to react to every change in the
environment with an equilibrating response. The question remains whether
or not homeostasis, with its feedback mechanisms to stable fixed points
of operation, is as fundamental to our understanding of the operation of
the human body as is taught in medical schools throughout the world.
A very different perspective was developed over the same time by a col-
league of Cannon’s at Harvard University, L.J. Henderson. Buchman [49]
explains that for Henderson, the organization of physiologic systems and
their underlying mechanisms were not exclusive, but rather interdependent.
This view of physiology was consistent with the development of General
Systems Theory by L. von Bertalanffy in the first half of the twentieth cen-
tury and Cybernetics by N. Wiener just after World War Two. The systems
approach has gained currency in the past quarter century particularly as
regards the importance of nonlinear dynamics [381]. This contemporary
perspective supports the notion that homeostasis is overly restrictive and
one might more reasonably associate a principle of homeodynamics with
the present day understanding of biomedical processes. Such a principle
would require the existence of multiple metastable states for any physio-
logical variable rather than a single steady state.
In the physical sciences erratic fluctuations are often the result of the
phenomenon of interest being dynamically coupled to an unknown and of-
ten unknowable environment. This is how the phenomenon of diffusion is
understood, as we subsequently discuss. However in the life sciences this
model is probably unsuitable. The erratic behavior of healthy physiolog-
ical networks should not be interpreted solely as transient perturbations
produced by a fluctuating environment, but should also include the normal
’chaotic’ behavior associated with a new paradigm of health. The articula-
tion of such a principle was probably premature in the first edition of this
book twenty odd years ago, but the evidence garnered in the intervening
years supports its acceptance.
Herein I review such mathematical concepts as strange attractors, the
generators of chaos in many situations, and fractal statistics, arguing
that, far from being unusual, these still somewhat unfamiliar mathematical
constructs may be the dynamical maps of healthy fluctuations in the heart,
brain and other organs observed under ordinary circumstances.
verse power-law spectra of time series representing the dynamic behavior
of biological systems appear to be markers of physiological information,
not ‘noise’. Thus, rather than neglecting such irregular behavior scientists
now attempt to extract the information contained therein when assessing
physiologic networks.
The activity of cardiac pulses and brain waves is quite similar to a wide
variety of other natural phenomena that exhibit irregular and apparently
unpredictable or random behavior. Examples that immediately come to mind
are the changes in the weather over a few days' time, the height of the
next wave breaking on the beach as I sit in the hot sun, shivering from a
cold wind blowing down my back, and the infuriating intermittency in the
time intervals between the drips from the bathroom faucet just after I
crawl into bed at night. In some cases, such as the weather, the
phenomenon appears to be always random, but in other cases, such as the
dripping faucet, sometimes the dripping is periodic and other times each
drip appears to be independent of the preceding one, thereby forming an
irregular sequence in
time [316]. The formal property that all these phenomena share is nonlin-
earity, so that my initial presentations focus on how nonlinear models differ
from linear ones. In particular I examine how simple nonlinearities can
generate aperiodic processes, and consequently apparently random
phenomena, in a brainwave context [32], in a cardiac environment [127],
and in a broad range of other biomedical situations [381].

1.1 What is Linearity?


Nonlinearity is one of those strange concepts that is defined by what it
is not. As more than one physicist has put it: “It is like having a zoo of
non-elephants.” Thus, we need to clearly identify the properties of linear-
ity in order to specify which property a particular nonlinear process does
not share with its linear counterpart. Consider, for example, a
complicated system whose response is determined by multiple factors. One
property of linearity is that the response to the action of each separate
factor is proportional to its value. This is the property of
proportionality. Consider the response of a well-oiled swing to being
pushed. The height attained by the swing is directly proportional to how
hard it is pushed, assuming that it does not rotate about the support bar
and that the chains do not become slack as the swing returns. Each of
these extraordinary effects destroys the linearity of the swing. Therefore
we say the swing (or pendulum, to give a more rigid example) is linear
for gentle pushes but becomes increasingly nonlinear as the applied force
is increased.
A second property of linearity is that the total response of the system
to an action is equal to the sum of the results of the values of the separate
factors. This is the property of independence; see, for example, Faddeev
[88]. Pushing a stalled automobile on level ground exemplifies this
effect. The more individuals one can convince to push the vehicle, the
greater is its subsequent speed prior to releasing the clutch, assuming
you remember how a clutch and a standard transmission work.
Historically, systems theory has been used in the analysis of complicated
linear systems. This discipline has been used for a number of years in
the analysis of biomedical time series, including brain wave data, as
well as in interpreting the response of this activity to external
stimulations [30].
In the standard theory one asserts that a process (or system) is linear
if the output of an operation is directly proportional to the input. The
proportionality constant is a measure of the sensitivity of the system to
the input. Formally the response R of a physical system is linear when it is
directly proportional to the applied force F. This relation can be expressed
algebraically by the relation

R = αF + β (1.1)

where α and β are constants. If there is no response in the absence of the


applied force, then β = 0. This is the swing response to pushing mentioned
above. If Eq. (1.1) corresponds to time series data then F (t) is the inde-
pendent variable, R(t) is the dependent variable, α is a constant and β
denotes the steady-state response in the absence of a force.
In a linear system if two distinct forces F1 and F2 are applied the net
response would be the scalar

R = α1 F1 + α2 F2 (1.2)

where α1 and α2 are independent constants and it is assumed the system is


stationary in the absence of the forces. If there are N independent applied
forces denoted by the vector F = (F1 , F2 , ..., FN ) then the response of
the system is linear if there is a vector α = (α1 , α2 , ...αN ) of independent
constant components such that


R = α · F = Σ_{j=1}^{N} αj Fj .    (1.3)

In this last equation, we see that the total response of the system, here
a scalar, is a sum of the independent applied forces Fj each weighted by
its own sensitivity coefficient αj . These ideas carry over to more general
systems where F is a generalized time dependent force vector and R is the
generalized scalar response.
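
To make the superposition property concrete, the following minimal sketch
(my addition, not from the book; the numerical values are hypothetical)
checks Eq. (1.3) numerically: the response to several forces applied
together equals the sum of the responses to each force applied alone.

    import numpy as np

    # Hypothetical sensitivity coefficients alpha_j and applied forces F_j.
    alpha = np.array([0.5, 2.0, -1.0])
    F = np.array([1.0, 3.0, 2.0])

    # Total response of Eq. (1.3): the dot product alpha . F.
    R_total = alpha @ F

    # Superposition: apply each force separately and add the responses.
    R_sum = sum(alpha[j] * F[j] for j in range(len(F)))

    print(R_total, R_sum)  # identical for a linear system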
As discussed by Lavrentév and Nikol’skii [195] one of the most fruitful
and brilliant ideas of the second half of the seventeenth century was the
concept that a function and the geometric representation of a line are re-
lated. Geometrically the notion of a linear relation between two quantities
implies that if a graph is constructed with the ordinate denoting the values
of one variable and the abscissa denoting the values of the other then the
relation in question appears as a straight line. In systems of more than
two variables, a linear relation defines a higher order ‘flat’ surface. For ex-
ample, three variables can be realized as a three-dimensional coordinate
space, and the linear relation defines a plane in this space. One often sees
this notion employed in the analysis of data by first transforming one or
more of the variables to a form in which the data is anticipated to lie on
a straight line. Thus, one often searches for a representation in which lin-
ear ideas may be valid since the analysis of linear systems is completely
understood, whereas that for nonlinear systems of various kinds is still rel-
atively primitive [364, 385]. Of course nothing is free, so the difficulty of
the original problem reasserts itself in properly interpreting the nonlinear
transformation.


The two notions of linearity that we have expressed here, algebraic and
geometric, although equivalent, have quite different implications. The latter
use of the idea is a static graph of a function expressed as the geometrical
locus of the points whose coordinates satisfy a linear relationship. The
former expression in our examples has to do with the response of a system
to an applied force which implies that the system is dynamic, that is,
the physical observables change over time even though the force-response
relation may be independent of time. This change of the observable in
time is referred to as the evolution of the system and for only the simplest
systems is the relation between the dependent and independent variables
a linear one. Even the relation between the position and time of a falling
object is nonlinear, even though the force law of gravity is linear. We have
ample opportunity to explore the distinction between the above static and
dynamic notions of linearity. It should be mentioned that if the axes for
the graphical display exhaust the independent variables that describe the
system, then the two interpretations dovetail.
In the child’s swing example, specifying the height of the swing or equiv-
alently its angular position, completely determines the instantaneous con-
figuration of the swing. The swing after all is constrained by the length of
the chains and is therefore one-dimensional. As time moves on the point
(swing seat) traces out a curve, called an orbit or trajectory, that describes
the history of the system’s evolution. Each point in phase space is a state
of the system. Thus, an orbit gives the sequence of states occupied by the
system through time, but does not indicate how long the system occupies
a particular state. The state of an individual’s health, to give a ‘simple’
example, consists of their age, weight, height, blood pressure, and all the
sundry measures physicians have come to rely on as the various technolo-
gies have become available. The ‘space’ of health has an axis for each of
these variables and one’s life might be viewed as a trajectory in this high-
dimensional space. Height, weight and other factors change as life unfolds
but the trajectory never strays too far from a region we associate with
health. Such details are discussed subsequently.
This geometrical representation of dynamics is one of the more useful
tools in dynamic systems theory for analyzing the time-dependent prop-
erties of nonlinear systems. By nonlinear we now know that we mean the
output of the system is not proportional to the input. One implication of
this is the following: If the system is linear, then two trajectories initiated
at nearby points in phase space would evolve in close proximity, so that
at any point in future time the two trajectories (and therefore the states
of the system they represent) would also be near one another. If the sys-
tem is nonlinear then two such trajectories could diverge from one another
and at subsequent times (exactly how long is discussed subsequently) the
two trajectories become arbitrarily far apart, that is, the distance between
the orbits does not evolve in a proportionate way. Of course this need not
necessarily happen in a nonlinear system; it is a question of stability.
The accepted criteria for understanding a given phenomenon vary as one
changes from discipline to discipline, since different disciplines are at
different levels of scientific maturity. In the early developmental stage of a
discipline one is often satisfied with characterizing a phenomenon by means
of a detailed verbal description. This stage of development reaches maturity
when general concepts are introduced which tie together observations by
means of one or few basic principles, for example, Darwin [73] did this for
biological evolution through the introduction of: (1) the principle of uni-
versal evolution, (2) the law of natural selection, and (3) the law of survival
of the fittest. Freud [106] did this for human behavior through the
introduction of concepts such as conversion hysteria, concepts that
characterize the gross properties of the systems examined. As
observational techniques became more refined
additional detailed structures associated with these gross properties were
uncovered. In the examples cited the genetic structure of the DNA molecule
has for some replaced Darwin’s notion of ’survival of the fittest’ and causal
relations for social behavior are now sought at the level of biochemistry
[75]. The schism between Freud’s vision of a grand psychoanalytic theory
and microbiology is even greater. The criteria for understanding the latter
stages of development are quite different from those in the first stage. At
these ‘deeper’ levels the underlying principles must be universal and tied
to the disciplines of mathematics, physics, and chemistry. This is no less
true for medicine as we pass from the clinical diagnosis of an ailment to its
laboratory cure. Thus, concepts such as energy and entropy appear in the
discussion of microbiological processes and are used to guide the progress
of research in these areas.
The mathematical models that have historically developed throughout
Natural Philosophy have followed the paradigms of physics and chemistry.
Not just in the search for basic postulates that are universally applicable
and from which one can draw deductions, but more restrictively at the op-
erational level the techniques that have been adopted, with few exceptions,
have been linear. One example of this, the implications of which prove to
be quite important in physiology, has to do with the ability to isolate and
measure, that is, to operationally define a variable. In Natural Philosophy
this operational definition of a variable becomes intertwined with the con-
cept of linearity and therein lies the problem. To unambiguously define a
variable it must be measured in isolation, that is, in a context in which
the variable is uncoupled from the remainder of the universe. This situa-
tion can sometimes be achieved in the physical sciences (leaving quantum
mechanical considerations aside), but not so in the social and life sciences.


Thus, one must assume that the operational definition of a variable is suf-
ficient for the purposes of using the concept in the formulation of a model.
This assumption presumes that the interaction of the variable with other
‘operationally defined’ variables constituting the system is sufficiently weak
that for some specified conditions the interactions may be neglected. In the
physical sciences one has come to call such effects ‘weak interactions’ and
perturbation theories have been developed to describe successively stronger
interactions between a variable and the physical system of interest. Not only
is there no a priori reason why this should be true in general, but in point
of fact there is a great deal of experimental evidence that it is not true.
Consider the simple problem of measuring the physical dimensions of a
tube, when that tube is part of a complex physiological structure such as the
lung or the cardiovascular system. Classical measuring theory tells us how
we should proceed. After all, the diameter of the tube is just proportional
to a standard unit of length with which the measurement is taken. Isn’t it?
The answer to this question may be no. The length of a cord or a tube is
not necessarily given by the classical result. In a number of physical and
biomedical systems there may in fact be no fundamental scale of length (be
it distance or time) with which to measure the properties of the system;
the length may depend on how we measure it. The experimental evidence
for and implications of this remark are presented in Chapter Two where
we introduce and discuss the concept of fractal introduced by Mandelbrot
(1924-2010) [217, 219] and first discussed quantitatively in a physiologic
context by West and Goldberger [367] and followed by a parade of others.
In 1733 Jonathan Swift wrote: "So, Nat'ralists observe, a Flea Hath
smaller Fleas that on him prey, And these have smaller Fleas to bite 'em,
And so proceed ad infinitum."; and some 129 years later de Morgan mod-
ified these verses to: "Great fleas have little fleas upon their backs to bite
’em and little fleas have lesser fleas, and so ad infinitum.” These couplets
capture an essential feature of what is still one of the more exciting con-
cepts in the physical, social and life sciences. This is the notion that the
dynamical activity observed in many natural phenomena is related from
one level to the next by means of a scaling relation. These poets observed
a self-similarity between scales, small versions of what is observed on the
largest scales repeat in an ever decreasing cascade of activity at smaller
and smaller scales. Processes possessing this characteristic are known as
geometric fractals. There is no simple compact definition of a fractal,
but all attempts at one incorporate the idea that the whole is made up of
parts similar to the whole in some way. For example, those processes
described by fractal time manifest their scale invariance through their
spectra, in which the various frequencies contributing to the dynamics
are tied together through an inverse power law of the form 1/f^α, where f is the
frequency and α is a positive constant related to the fractal dimension, a


fractal dimension in general being non-integer as shown subsequently.
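
As an illustration of this scale invariance, the sketch below (my
addition, not the author's; the exponent value and the synthesis method
are assumptions chosen for demonstration) generates a time series whose
Fourier amplitudes fall off as f^(-α/2), so that its power spectrum is
the inverse power law 1/f^α, and then recovers α from the slope of the
log-log periodogram.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha_true = 1.0               # assumed spectral exponent
    n = 2 ** 14

    # Build random Fourier components with amplitude ~ f^(-alpha/2),
    # so that the power spectrum behaves as 1/f^alpha.
    f = np.fft.rfftfreq(n, d=1.0)
    amps = np.zeros_like(f)
    amps[1:] = f[1:] ** (-alpha_true / 2)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    x = np.fft.irfft(amps * np.exp(1j * phases), n=n)  # fractal time series

    # Recover alpha as minus the slope of the log-log periodogram.
    power = np.abs(np.fft.rfft(x)) ** 2
    slope, _ = np.polyfit(np.log(f[1:]), np.log(power[1:]), 1)
    print("estimated alpha:", -slope)  # close to alpha_true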

1.2 Why Uncertainty?


Science is built on the belief that the events in our lives are not capricious,
but follow from well ordered causal chains. The brick that barely misses a
walker’s head as she walks under a ladder does so not because she walked
under the ladder; the near accident is due to the fact that someone inadver-
tently kicked the brick over the edge in the upper floors of the building just
as she was passing beneath. Of course it is not good practice to walk under
a ladder at any time. Not because it is bad luck, but because the ladder
generally indicates ongoing work that could be dangerous. Accidents such
as this are by definition unpredictable and consequently cannot be com-
pletely avoided even though they can be minimized with a little thought.
However they cannot be eliminated altogether because life is intrinsically
uncertain. No matter how hard we try, how carefully we prepare for the fu-
ture, how circumspect we are in our dealings, we trip while walking, bump
into the total stranger, spill a glass of water, and are late for our son’s
ball game, are early for our wife’s surprise party and forget the doctor’s
appointment. Such small disruptions in our lives are easily explained after
the fact; a lapse of attention, a stacked agenda and an unconscious wish
to avoid the situation. These are all perfectly reasonable explanations for
why things did not turn out the way we planned due to the occurrence
of unpredicted events. This uncertainty is ubiquitous and deserves some
examination. We begin with the simplest kind of randomness and that is
the unpredictability of physical phenomena.
The Marquis Pierre Simon de Laplace (1749–1827) was one of the great
French mathematical physicists of the eighteenth and nineteenth centuries.
He was a product of his age and believed that, given sufficiently detailed
information about all the particles in the universe at a given instant of
time, he could predict the subsequent behavior of the universe, from the
planets in their orbits and the waves crashing on the beach down to the
movement of grains of sand. He did not believe humanity would ever be
able to do this, but he did believe that the reason mere mortals could
not predict the events of the world in their smallest detail was a matter
of incomplete information, not a matter of principle. He even developed a
mathematical proof of this belief called the Central Limit Theorem (CLT).
A quote from his seminal work clearly summarizes his perspective [193]:


The present state of the system of nature is evidently a


consequence of what it was in the preceding moment, and if we
conceive of an intelligence which at a given instant comprehends
all the relations of the entities of this universe, it could state
the respective positions, motions, and general effects of all these
entities at any time in the past or future.
Physical astronomy, the branch of knowledge which does
the greatest honor to the human mind, gives us an idea, albeit
imperfect, of what such an intelligence would be. The sim-
plicity of the law by which the celestial bodies move, and the
relations of their masses and distances, permit analysis to follow
their motions up to a certain point; and in order to determine
the state of the system of these great bodies in past or future
centuries, it suffices for the mathematician that their position
and their velocity be given by observation for any moment in
time. Man owes that advantage to the power of the instrument
he employs, and to the small number of relations that it em-
braces in its calculations. But ignorance of the different causes
involved in the production of events, as well as their complex-
ity, taken together with the imperfection of analysis, prevents
our reaching the same certainty about the vast majority of phe-
nomena. Thus there are things that are uncertain for us, things
more or less probable, and we seek to compensate for the impos-
sibility of knowing them by determining their different degrees
of likelihood. So it is that we owe to the weakness of the human
mind one of the most delicate and ingenious of mathematical
theories, the science of chance or probability.

Laplace believed that the calculus of probability is the mathematics of


science due to human’s limited ability to know and understand. In this he
drew in part from the German polymath Karl Fredrich Gauss (1777–1855)
who, along with his American counterpart Robert Adrian (1775–1843), in-
troduced a new view of the world at the turn of the nineteenth century.
The Gauss-Adler view explained why experimental results vacillate from
one experiment to the next, never yielding exactly the same result twice
and whose view resulted in the Law of the Relative Frequency of Errors. In
their analysis a physical experiment has a predictable result determined by
the Normal probability distribution that peaks at the most probable value
as depicted in Figure 1.1. The universe was understood as a clockwork me-
chanical process and therefore important variables ought to be quantifiable,
measurable and predictable, even those referring to an individual's life and
to society. The majority of experimental results are in the immediate vicin-
ity of the average value (most probable value; predicted value) so that the
largest fraction of the experimental results is concentrated at the center of
the distribution. The farther a value is from the peak the fewer times it is
observed in the experimental data.

FIGURE 1.1. The universal Normal distribution is obtained by subtracting the average
value from each data element and dividing by the standard deviation. The peak is the
most probable value and the width of the distribution is unity since the variable has
been normalized by the standard deviation.

The Normal distribution became the mathematical expression of the law


of error in which the measurement ought to have a proper value determined
by the underlying dynamics of the phenomena being measured. Gauss and
Adrain maintained that the majority of the measured values are close to the
predicted one where the bell-shaped curve peaks, with as many data points
above as there are below this value. The peak of the symmetric distribution
occurs at the average value, from which Gauss and Adrain independently
concluded that this value is the best representation of the collection of
measurements, all of which is contained in the universal curve given in
Figure 1.1. They posited that the deviations from the average value are
to be interpreted as errors in the measurements. The deviations are seen
as error because the average is interpreted as the predicted value of the
variable.
A couple of years after Gauss and Adrain introduced the Normal
distribution, Laplace presented a proof of the CLT establishing that the
validity and applicability of the Normal distribution is much broader than the law
of error. In his proof Laplace stipulated four conditions necessary to prove
the CLT: 1) the errors are independent; 2) the errors are additive; 3) the
statistics of each error are the same; and 4) the width of the
distribution is finite. These four assumptions were either explicitly or
implicitly made by Gauss, Adrain and Laplace and result in the Normal distribution. I
emphasize that this distribution requires linearity.
The Normal distribution has been used as the backbone for describ-
ing statistical variability in the physical, social and life sciences well into
the twentieth century. The entire nineteenth and most of the twentieth
century were devoted to experimentally verifying that the statistical fluc-
tuations observed in naturally occurring phenomena are Normal. It was
disconcerting when such careful experiments began to reveal that complex
phenomena have fluctuations that are not Normal, such as the distribu-
tion of income determined using the data collected by the Maquis Vilfredo
Frederico Damaso Pareto (1848–1923) at the end of the nineteenth century.
The income distribution was determined by Pareto to be an inverse power
law, with a long tail, and now bears his name. There is a great deal to
say about such distributions and where they are found in medicine [381].
In subsequent chapters I show that the properties necessary to prove the
CLT are violated by complex phenomena, particularly the assumptions of
independence and additivity, and that their violation gives rise to
inverse power laws.
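
A quick numerical sketch (my addition; the distributions and sample sizes
are chosen only for illustration) shows what is at stake: sums of errors
satisfying Laplace's four conditions settle onto the Normal curve, while
sums of inverse power-law (Pareto) variables with diverging variance,
which violate condition 4, do not.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sum, n_trials = 1000, 20000

    # Uniform errors satisfy all four of Laplace's conditions.
    uniform_sums = rng.uniform(-1, 1, size=(n_trials, n_sum)).sum(axis=1)

    # Pareto variables with tail index 1.5 have infinite variance,
    # violating the finite-width condition of the CLT.
    pareto_sums = rng.pareto(1.5, size=(n_trials, n_sum)).sum(axis=1)

    for name, s in (("uniform sums", uniform_sums), ("Pareto sums", pareto_sums)):
        z = (s - s.mean()) / s.std()
        excess_kurtosis = np.mean(z ** 4) - 3  # ~0 for a Normal distribution
        print(name, "excess kurtosis:", round(float(excess_kurtosis), 2))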

1.3 How Does Nonlinearity Change Our View?


Mathematical models of physiologic phenomena and those developed for
biomedical applications have traditionally relied on the paradigm of clas-
sical physics. The potency of this paradigm lies in the ability of physics to
relate cause and effect in physical phenomena, and thereby enable predic-
tions. Not all natural phenomena are predictable, however. As mentioned
earlier, weather is an example of a physical phenomenon that is so com-
plex that it eludes prediction. Scientists believe that they understand how
to construct the basic equations of motion governing the weather, and to
a greater or lesser extent they understand how to solve these equations.
But even with that the weather remains an enigma; predictions can only
be made in terms of probabilities [205]. The vulnerability of the traditional
physics paradigm is revealed in that these phenomena do not display a clear
cause/effect relation. A slight perturbation in the equations of motion can
generate an unpredictably large effect. Thus, the underlying process is said
to be random and the equations of motion are stochastic. A great deal of
scientific effort has gone into making this view consistent with the idea
that the random elements in the description would disappear if sufficient
information were available about the initial state of the system, so that in
principle the evolution of the system would be predictable.
As Crutchfield et al. [65] and others have pointed out, this viewpoint has
been altered by the discovery that simple deterministic systems with only a
few degrees of freedom can generate random behavior. They emphasize that
the random aspect is fundamental to the system dynamics and gathering
more information does not reduce the degree of uncertainty. Randomness
or uncertainty generated in this way is now called chaos. The distinction
between the ‘traditional’ view and the ‘modern’ view of randomness is
captured in the following quotation from Henri Poincaré (1854–1912) [275]:
A very small cause which escapes our notice determines a
considerable effect that we cannot fail to see, and then we say
that the effect is due to chance. If we knew exactly the laws of
nature and the situation of the universe at the initial moment,
we could predict exactly the situation of that same universe
at a succeeding moment. But even if it were the case that the
natural laws had no longer any secret for us, we could still only
know the initial situation approximately. If that enabled us to
predict the succeeding situation with the same approximation,
that is all we require, and we should say that the phenomenon
had been predicted, that it is governed by laws. But it is not
always so; it may happen that small differences in the initial
conditions produce very great ones in the final phenomena. A
small error in the former will produce an enormous error in the
latter. Prediction becomes impossible, and we have the fortu-
itous phenomenon.
Laplace believed in strict determinism and to his mind this implied
complete predictability. Uncertainty for him is a consequence of impre-
cise knowledge, so that probability theory is necessitated by incomplete
and imperfect observations. Poincaré on the other hand sees an intrinsic
inability to make predictions due to a sensitive dependence of the evolution
of the system on the initial state of the system. This sensitivity arises from
an intrinsic instability of the system as first explained in a modern context
by Lorenz [205].
Recall the notion of a phase space and of a trajectory to describe the
dynamics of a system. Each choice of an initial state produces a different
trajectory. If however there is a limiting set in phase space to which all
trajectories are drawn after a sufficiently long time, we say that the sys-
tem dynamics are described by an attractor. An attractor is the geometric
limiting set on which all the trajectories eventually find themselves, that
is, the set of points in phase space to which the trajectories are attracted.


Attractors come in many shapes and sizes, but they all have the property
of occupying a finite volume of phase space. Initial points off the attractor
initiate trajectories that are drawn to it if they lie in the attractor’s basin
of attraction. As a system evolves it sweeps through the attractor, going
through some regions rather rapidly and others quite slowly, but always
staying on the attractor. Whether or not the system is chaotic is determined
by how two initially adjacent trajectories cover the attractor over time. As
Poincaré stated, a small change in the initial separation (error) of any two
trajectories produces an enormous change in their final separation (error).
The question is how this separation is accomplished on an attractor of
finite size. The answer has to do with the layered structure necessary for
an attractor to be chaotic.
Rössler [298] described chaos as resulting from the geometric operations
of stretching and folding, often called the baker's transformation. The con-
ceptual baker in this transformation takes some dough and rolls it out on
a floured bread board. When thin enough he folds the dough back onto
itself and rolls it out again. To transform this image into a mathematically
precise statement we assume that the baker rolls out the dough until it is
twice as long as it is wide (the width remains constant during this opera-
tion) and then folds the extended piece back reforming the initial square.
For a cleaner image we may assume that the baker cuts the dough before
neatly placing the one piece atop the other. Arnol'd gave a memorable
illustration of this process using the image of the head of a cat (cf.
Arnol'd and Avery [13]).

FIGURE 1.2. Arnol'd's cat is decimated by the baker's transformation.

In Figure 1.2 a cross section of the square of dough is shown with the head
of a cat inscribed. After the first rolling operation the head is flattened and
stretched, that is, it becomes half its height and twice its length. It is then
cut in the center and the segment of dough to the right is set above the one
on the left to reform the initial square, as depicted in the center frame. The
operation is repeated again and we see that at the right the cat’s head is now

embedded in four layers of dough. Even after two of these transformations


the cat’s head is clearly decimated and unrecognizable. After twenty stages
of transformation the head is distributed across one million layers of dough
− impossible to identify. As so charmingly put by Ekeland [86]: “Arnol’d’s
cat has melted into the square, gradually disappearing from sight like the
Cheshire cat in Wonderland.”
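
A minimal sketch of the cut-and-stack version of the baker's
transformation described above (my own illustration, with arbitrarily
chosen initial points) shows the stretching at work: the horizontal
separation of two nearby points roughly doubles at each step until it
saturates at the size of the square itself.

    def baker(x, y):
        # Stretch by 2 in x, compress by 1/2 in y, then cut the right
        # half and stack it on top of the left half of the unit square.
        if x < 0.5:
            return 2 * x, y / 2
        return 2 * x - 1, (y + 1) / 2

    p = (0.2000000, 0.3)
    q = (0.2000001, 0.3)  # a nearby initial condition
    for step in range(1, 26):
        p = baker(*p)
        q = baker(*q)
        if step % 5 == 0:
            print(f"step {step:2d}: horizontal separation = {abs(p[0] - q[0]):.3e}")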
Two initially nearby orbits cannot rapidly separate forever on a finite
attractor; therefore the attractor must eventually fold over onto itself.
Once folded, the attractor is stretched and folded again. This process is re-
peated over and over yielding an attractor structure with an infinite number
of layers to be traversed by the various trajectories. The infinite richness of
the attractor structure affords ample opportunity for trajectories to diverge
and follow increasingly different paths. The finite size of the attractor in-
sures that these diverging trajectories eventually pass close to one another
again, albeit on different layers of the attractor, not unlike the folded lines
waiting for inspection at the airport. Crutchfield et al. [65] visualize these
orbits on a chaotic attractor as being shuffled by the dynamic process,
much as a deck of cards is shuffled by a dealer [86]. Thus, the randomness
of the chaotic orbits is a consequence of this shuffling process. This process
of stretching and folding creates folds within folds ad infinitum, resulting in
a fractal structure in phase space. I discuss the fractal concept in Chapter
Two; the essential fractal feature of interest here is that the greater the
magnification of a region of the attractor, the greater the degree of detail
that is revealed.
There are many measures of the degree of chaos of these attractors. One is
its ‘dimension’, integer values of which indicate a simple attractor, whereas
a non-integer dimension indicates a chaotic attractor in phase space. Part
of the task here is to understand the various definitions of dimension and
how each of them can be realized from experimental data sets. For this rea-
son I devote a great deal of space to a discussion of fractals and dimension
in Chapter Two. A large part of this discussion centers around static physi-
ological structure such as the lung and allometry relations [109], which here
serve as paradigms of physiological complexity. If the general idea of the
dimension of a static structure is understood, it makes the interpretation
of the non-integer dimension of a dynamic process that much easier. In
particular this geometric interpretation of a fractal is important because
the attractor set in phase space is just such a static structure.
A second measure of the degree of irregularity generated by a chaotic at-
tractor is the ‘entropy’ of the motion. Entropy is interpreted by Crutchfield
et al. [65] as the average rate of stretching and folding of the attractor, or
alternatively, as the average rate at which information is generated. The
application of the information concept in the dynamic systems context has

60709_8577 -Txts#150Q.indd 15 19/10/12 4:28 PM


16 Introduction

been championed by Shaw [316, 317] and Nicolis [248, 249]. One can view
the preparation of the initial state of the system as initializing a certain
amount of information. The more precisely the initial state can be speci-
fied, the more information one has available. This corresponds to localizing
the initial state of the system in phase space; the amount of information is
inversely proportional to the volume of state space localized by measure-
ment. In a regular attractor, trajectories initiated in a given local volume
stay near to one another as the system evolves, so the initial information is
preserved in time and no new information is generated. Thus, the initial in-
formation can be used to predict the final state of the system. On a chaotic
attractor the stretching and folding operations smear out the initial vol-
ume, thereby destroying the initial information as the system evolves and
the dynamics create new information. As a result the initial uncertainty in
the specification of the system is eventually spread over the entire attractor
and all predictive power is lost, that is, all causal connection between the
present and the future is lost. This is referred to as sensitive dependence
on initial conditions.
Let us denote the region of phase space initially occupied by Vi (the
initial volume) and the final region by Vf . The change in the observable
information I is then determined by the change in value from the initial
to the final state [248, 316]

δI = log2 (Vf /Vi ) .    (1.4)
The rate of information creation or dissipation is given by

dI/dt = (1/V ) dV /dt    (1.5)
where V is the time-dependent volume over which the initial conditions
are spread. In non-chaotic systems, the sensitivity of the flow to the initial
conditions grows with time at most as a polynomial, for example, let ω(t)
be the number of distinguishable states at time t so that

ω(t) ∝ t^n .    (1.6)

The relative size of the volume and the relative number of states in this
case remain the same

Vf /Vi = ωf /ωi    (1.7)
so that for the rate of change in the information [316]
dI/dt ∼ n/t .    (1.8)
Thus, the rate of information generation converges to zero as t → ∞ and
the final state is predictable from the initial information. On the other
hand, in chaotic systems the sensitivity of the flow to the initial
conditions grows exponentially with time,

ω(t) ∝ e^(nt)    (1.9)


so that the rate of information generation is constant

dI/dt ∼ n .    (1.10)
This latter system is therefore a continuous source of information; the
attractor itself generates the information independently of the initial
conditions. This property of chaotic dynamic systems was used by Nicolis
and Tsuda [248] to model cognitive systems. The concepts from chaotic
attractors are used for information processing in neurophysiology,
cognitive psychology and perception [249]. To pursue these latter
applications in any detail would take us too far afield, but we continue
to mention the existence of such applications where appropriate.
The final measure of the degree of chaos associated with an attractor with
which I am concerned is the set of Lyapunov exponents. These exponents
quantify the average exponential convergence or divergence of nearby tra-
jectories in the phase space of the dynamical systems. Wolf [403] believed
the spectrum of Lyapunov exponents provides the most complete qualita-
tive and quantitative characterization of chaotic behavior. A system with
one or more positive Lyapunov exponents is defined to be chaotic. The local
stability properties of a system are determined by its response to pertur-
bations; along certain directions the response can be stable whereas along
others it can be unstable. If we consider a d-dimensional sphere of initial
conditions and follow the evolution of this sphere in time, then in some di-
rections the sphere will contract, whereas in others it will expand, thereby
forming a d-dimensional ellipsoid. Thus, a d-dimensional system can be
characterized by d exponents where the j th Lyapunov exponent quantifies
the expansion or contraction of the flow along the j th ellipsoidal principal
axis. The sum of the Lyapunov exponents is the average divergence, which
for a dissipative system (possessing an attractor) must always be negative.
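
As a concrete, if greatly simplified, illustration, the sketch below estimates
the single Lyapunov exponent of the one-dimensional logistic map
x → rx(1 − x) as the orbit average of ln |f′(x)|, with f′(x) = r(1 − 2x); the
map and all numerical settings are illustrative stand-ins for the general
flows discussed here.

    import numpy as np

    def lyapunov_logistic(r, n=100_000, x0=0.3, transient=1_000):
        # Lyapunov exponent as the orbit average of ln|f'(x)|,
        # with f'(x) = r*(1 - 2x) for the logistic map.
        x = x0
        for _ in range(transient):      # discard the transient
            x = r * x * (1.0 - x)
        total = 0.0
        for _ in range(n):
            x = r * x * (1.0 - x)
            total += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # guard log(0)
        return total / n

    print(lyapunov_logistic(4.0))   # ~ ln 2 > 0: chaotic
    print(lyapunov_logistic(3.2))   # < 0: stable period-2 orbit

A positive estimate, as at r = 4, signals chaos; the negative value at
r = 3.2 reflects a stable periodic orbit.
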
Consider a three-dimensional phase space in which the attractor can be
characterized by the triple of Lyapunov exponents (λ1 , λ2 , λ3 ). The quali-
tative behavior of the attractor can be specified by determining the signs of
the Lyapunov exponents only, that is, (signλ1 , signλ2 , signλ3 ). As shown
in Figure 1.3a the triple (−, −, −) corresponds to an attracting fixed point.


In each of the three directions there is an exponential contraction of trajec-
tories, so that no matter what the initial state of the system it eventually
finds itself at the fixed point. This fixed point need not be the origin, as
it would be for a dissipative linear system, but can be anywhere in phase
space. The arrows shown in the figure do not necessarily represent trajec-
tories since the fixed point can be approached at any angle by an evolving
nonlinear system.
An attracting limit cycle is denoted by (0, −, −) in which there are two
contracting directions and one that is neutrally stable. In Figure 1.3b we
see that this attractor resembles the orbit of a van der Pol oscillator. The
limit cycle itself defines the neutrally stable direction and initial points
within and outside the limit cycle are drawn onto it asymptotically.

[Figure 1.3: four panels labeled fixed point (−, −, −), limit cycle (0, −, −), torus (0, 0, −), and strange attractor (+, 0, −)]

FIGURE 1.3. The signs of Lyapunov exponents of different attractor types in a three-
dimensional phase space. From the upper left, going clockwise, we have a fixed point,
a van der Pol limit cycle, a two-dimensional torus, and a two-dimensional projection
of a Rössler oscillator.


The triple (0, 0, −) has two neutral directions and one that is contracting
so that the attractor is the 2-torus depicted in Figure 1.3c. The surface of
the torus is neutrally stable and trajectories off the surface are drawn onto
it asymptotically.
Finally, (+, 0, −) corresponds to a chaotic attractor in which the trajecto-
ries expand in one direction, are neutrally stable in another and contracting
in a third. In order for the trajectories to continuously expand in one di-
rection and yet remain on a finite attractor, the attractor must undergo
stretching and folding operations in this direction. Much more is said about
this stretching and folding operation on such attractors in Chapter 3.
It should be emphasized that the type of attractor describing a system’s
dynamics is dependent on certain parameter values. I review the relation
between parameter values and some forms of the dynamic attractor in
Chapter 3 and show therein how a system can undergo transitions from
simple periodic motion to apparently unorganized chaotic dynamics. It is
therefore apparent that the Lyapunov exponents are dependent on these
control parameters.
The notion of making a transition from periodic to chaotic dynamics
led Mackey and Glass [212] to introduce the term dynamical disease to
denote pathological states of physiological systems over which control has
been lost. Rapp et al. [286] as well as Goldberger and West [127] make the
general observation that chaotic behavior is not inevitably pathological.
That is to say that, for some physiological processes, chaos may be the
normal state of affairs and transitions to and from the steady state and
periodic behavior may be pathological. Experimental support for this latter
point of view is presented subsequently.

1.4 Complex Networks


All complex dynamical networks manifest fluctuations, either due to intrin-
sic nonlinear dynamics producing chaos [206, 261] or due to coupling of the
network to an infinite dimensional, albeit unknown environment [198], or
both; completely aside from any question of measurement error. One of the
manifestations of complex networks is a relation between the functionality
of the phenomenon and its size. This is referred to as an allometry relation
(AR).
The modeling strategies adopted to explain ARs have traditionally taken
one of two roads: the statistical approach in which residual analysis is used
to understand statistical patterns and identify the causes of variation in
the AR [52, 309]; or the reductionist approach to identify mechanisms that
explain specific values of the allometry parameters [26, 388]. I show that
neither approach separately can provide a complete explanation of all the
phenomena described by ARs. The influence of the environment, whether
inducing fluctuations in a reductionist model, or producing a systematic
change in a statistical model, has been taken into account in multiple stud-
ies [117, 119, 233]. However, my son Damien and I [386, 387] developed
the probability calculus to systematically incorporate both reductionistic
and statistical mechanisms into the phenomenological explanation of ARs.
This calculus enables modelers to associate characteristics of the measured
probability density function (pdf ) with specific deterministic mechanisms
and with structural properties of the coupling between variables and fluc-
tuations [198, 289].
There is a non-trivial number of empirical relations that began as the
identification of a pattern in data; were shown to have a terse power-law
description; were interpreted using existing theory; reached the level of ‘law’
and were given a name, not always after the discoverer; only to subsequently fade
away when it proved impossible to connect the ‘law’ with a larger body of
theory and/or data. A counter-example that has withstood the test of time
is drawn from the Notebooks of Leonardo da Vinci [292] that relates the
diameter of a parent limb d0 to two daughter limbs d1 and d2 :
d0^α = d1^α + d2^α . (1.11)
The da Vinci scaling relation supplies the phenomenological mechanism
necessary for AR to emerge in a number of disciplines. Nearly five hundred
years after da Vinci recorded his observations Murray [243] used energy
minimization to derive the same equation with the theoretical value α = 3,
which is known in the literature as Murray’s law. In the simplest case the
diameters of the daughter limbs are equal, d1 = d2 , and the da Vinci scaling
relation reduces to scaling between sequential generations of a bifurcating
branching network having daughter branches of equal radii dk+1 = 2−1/α dk
resulting in an exponential reduction in branch diameter from generation
to generation. This was also the relation imposed in the scaling of the
bronchial airways and found to be accurate for the first ten generations
[358].
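
The generation-by-generation content of the da Vinci relation is easily
tabulated. The sketch below assumes a symmetric bifurcating tree with
Murray’s value α = 3; the initial diameter d0 = 1.8 cm is only a nominal,
illustrative number.

    # Diameters of a symmetric bifurcating tree, d_{k+1} = 2**(-1/alpha)*d_k,
    # from the da Vinci relation (1.11) with equal daughters.
    alpha, d0 = 3.0, 1.8          # alpha = 3 is Murray's law; d0 illustrative
    d = [d0 * 2.0 ** (-k / alpha) for k in range(11)]
    for k, dk in enumerate(d):
        print(f"generation {k:2d}: d = {dk:.3f} cm")
    # parent/daughter check: d0**alpha = d1**alpha + d2**alpha
    print(abs(d[0] ** alpha - 2.0 * d[1] ** alpha) < 1e-12)
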
Scale invariance or scaling requires that a function Φ(X1 , ..., XN ) be
such that scaling of the N variables by an appropriate choice of exponents
(α1 , ..., αN ) always recovers the same function up to an overall constant:
Φ(X1 , .., XN ) = γ^β Φ(γ^{α1} X1 , .., γ^{αN} XN ). (1.12)
We observe that the allometry relation Eq. (2.58) is possibly the simplest
of such scaling relations between variable X and variable Y such that they
satisfy the renormalization group relation
Y (γX) = γ^b Y (X). (1.13)


The lowest-order solution to this equation is, of course, given by Eq.(2.58)
and we provide the general solution subsequently. Changes in the host
network X (size) control (regulate) changes in the subnetwork Y (property)
in living networks and in some physical networks through the homogeneous
scaling relation.
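
A quick numerical check, with arbitrary illustrative coefficients, confirms
that a pure power law Y = AX^b satisfies the renormalization group
relation Eq. (1.13) for any choice of the scale factor γ:

    import numpy as np

    A, b = 2.5, 0.75              # illustrative allometry coefficients
    Y = lambda X: A * X**b        # lowest-order solution of Eq. (1.13)

    X = np.linspace(1.0, 10.0, 5)
    for gamma in (2.0, 5.0):
        # Y(gamma*X) and gamma**b * Y(X) coincide: scale invariance
        print(gamma, np.allclose(Y(gamma * X), gamma**b * Y(X)))
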
Inhomogeneity in space and intermittency in time are the hallmarks of
fractal statistics and it is the statistical rather than the geometrical same-
ness that is evident at increasing levels of magnification. In geometrical
fractals the observable scales from one level to the next. In statistical frac-
tals, where the phase space variables (z, t) replace the dynamic variable
Z(t), it is the pdf P (z, t) that satisfies a scaling relation:

P (az, bt) = b^{−μ} P (z, t) (1.14)


where the homogeneity relation is interpreted in the sense of the pdf in
Eq. (1.14). Time series with such statistical properties are found in multiple
disciplines including finance [220], economics [222], neuroscience [5, 363],
geophysics [343], physiology [384] and general complex networks [385]. A
complete discussion of pdf ’s with such scaling behavior is given by Beran
[38] in terms of the long-term memory captured by the scaling exponent.
One example of a scaling pdf is given by

P (z, t) = (1/t^μ ) Fz (z/t^μ ), (1.15)
as discussed in the sequel. Note that in a standard diffusion process Z(t) is
the displacement of the diffusing particle from its initial position at time t,
μ = 1/2 and the functional form of Fz (·) is a Normal distribution. However,
for general complex phenomena there is a broad class of distributions for
which the functional form of Fz (·) is not Normal and the scaling index
μ ≠ 1/2. All this is made clear subsequently.
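
The content of Eq. (1.15) is most easily seen in its simplest realization, the
symmetric random walk, where μ = 1/2 and Fz (·) is Normal: rescaling the
displacement by t^μ gives statistics that are the same at every time. The
sketch below, with illustrative sample sizes, checks this collapse.

    import numpy as np

    rng = np.random.default_rng(1)
    for t in (100, 400, 900):
        # displacement of a symmetric random walk after t unit steps
        z = 2.0 * rng.binomial(t, 0.5, size=200_000) - t
        u = z / t**0.5            # similarity variable z / t**mu, mu = 1/2
        # rescaled statistics agree at every t:
        # std(u) -> 1 and mean |u| -> sqrt(2/pi) ~ 0.798
        print(t, round(float(u.std()), 3), round(float(np.abs(u).mean()), 3))
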

1.5 Summary and a Look Forward


Nonlinear dynamical systems theory (NDST) emerged from the fusion of
two classical areas of mathematics: topology and the theory of differential
equations. The importance of NDST to the experimental sciences lies in
its capacity to quantitatively characterize complex dynamical behavior. In
this monograph I review how dynamical systems theory is applied in vari-
ous biomedical contexts such as in the construction of simple dynamical
models that give rise to solutions that resemble the time series data ob-
served experimentally. Another way is through the development of data
processing algorithms that capture the essential features of the dynamics
of a network, such as its degree of irregularity and the structure of the at-
tractor on which the network’s dynamics takes place. It is obvious that the
theory of differential equations is useful because it enables construction of
the dynamic equations that describe the evolution of the biomedical system
of interest. Topology is of value here because it allows the determination
of the unique geometrical properties of the resulting dynamic attractors. The
degree of irregularity or randomness of measured time series is closely re-
lated to the geometrical structure of the underlying attractor and so we
devote Chapter Two to an understanding of the appropriate geometry.
Euclidean geometry is concerned with the understanding of straight lines
and regular forms and it is assumed that the world consists of continuous
smooth curves in spaces of integer dimension. When we look at billow-
ing cumulus clouds, trees of all kinds, coral formations and coastlines we
observe that the notions of classical geometry are inadequate to describe
them. Detail does not become less and less important as regions of these
various structures are magnified, but perversely more and more detail is
revealed at each level of magnification. The rich texture of these structures
is characteristic of fractals. In Chapter Two we show that a fractal struc-
ture is not smooth and homogeneous, and that the smaller-scale structure
is similar to the large-scale form. What makes such a structure different
from what we usually experience is that there is no characteristic length
scale. The more traditional concepts of scaling [211, 309] are quite famil-
iar in biology, and the application of fractal concepts is no longer new.
The lungs, heart and many other anatomical structures are shown to be
fractal-like.
It is not only static structures that have fractal properties but dynamic
processes as well. The concept of a fractal or fractional dimension is applied
to time series resulting from physiological processes. A dynamic fractal pro-
cess is one that cannot be characterized by a single scale of time, analogous
to a fractal structure like the lung, which is shown in Chapter Two not to
have a characteristic scale of length. Instead, fractal processes have many
component frequencies, that is, they are characterized by a broad-band
spectrum. Fractal dynamics can be detected by analyzing the time series
using special techniques often resulting in inverse power-law spectra. This
kind of spectrum suggests that the processes that regulate different complex
physiological systems over time are also governed by fractal scaling mech-
anisms [367]. The fractal structure of the underlying network is shown to
give rise to allometry relations in Chapter Two; it is the fractal statistics
that entail the allometry relations.
The nonlinear dynamics of biological networks are considered in Chap-
ter Three. There are a large number of rather sophisticated mathematical
concepts that must be developed for later use and this is done through
various worked out examples. The whole idea of modeling physiological
networks by continuous differential equations is discussed in the context of
bio-oscillators, which are nonlinear oscillators capable of spontaneous exci-
tation, and strange attractors, which are sets of dissipative nonlinear equa-
tions capable of generating aperiodic time series. The distinction between
limit cycle attractors and strange attractors is basic to the understanding
of biomedical time series data taken subsequently.
Not only continuous differential equations are of interest in Chapter
Three, but so too are discrete equations. Discrete dynamical models ap-
pear in a natural way to describe the time evolution of biosystems in which
successive time intervals are distinct, for example, to model changes in pop-
ulation levels between successive generations where change occurs between
generations and not within a given generation. These discrete dynamical
models are referred to as mappings and may be used directly to model
the evolution of a network or they may be used in conjunction with time
series data to deduce the underlying dynamical structure of a biological
process. As in the continuum case the discrete dynamic equations can have
both periodic and aperiodic solutions, that is to say the maps also generate
chaos in certain parameter regimes. Since such physiological processes as
the interbeat interval of the mammalian heart can be characterized as a
mapping, that is, one beat is mapped into the next beat by the ‘cardiac
map’ it is of interest to know how the intervals between beats are related to
the map. We discuss how a map can undergo a sequence of period doubling
bifurcations to make a transition from a periodic to a chaotic solution. The
latter solution has been used by some to describe the normal dynamic state
of the human heart.
As we mentioned earlier, one indicator of the qualitative dynamics of a
system, whether it is continuous or discrete, is the Lyapunov exponent. In
either case its sign determines whether nearby orbits exponentially sepa-
rate from one another in time. Chapter Three presents the formal rules
for calculating this exponent in both simple systems and for general N-
dimensional maps. Of particular concern is how to relate the Lyapunov
exponents to the information generated by the dynamics. This question is
particularly important in biological networks because it provides one of the
measures of a strange attractor. Other measures that are discussed include
the power spectrum of a time series, that is, the Fourier transform of the
two-point correlation function; the correlation dimension (a bound on the
fractal dimension) obtained from the two-point correlation function on a
dynamical attractor; and the phase space portrait of the attractor recon-
structed from the data. These latter two measures are shown to be essential
in the processing of biomedical time series and interpreting the underlying
dynamics generating the observed time trace.
Chapter Four concentrates on the statistics of complex physiologic net-
works and what we have learned in the quarter century since the first
edition of the book. In particular the connection between complexity and
fractal statistics is highlighted in tracking both the bizarre fluctuations
arising from stochastic nonlinear equations and the instabilities resulting
from nonlinear dynamics. Simple procedures for determining if a given time
series contains such behavior using scaling ideas are presented and these
in turn motivate some of the mathematical techniques that have proven
to be valuable in gaining new understanding of the complexity of physio-
logic networks. The simple random walk model of diffusion is replaced by
the fractional random walk for anomalous diffusion, which in turn morphs
into fractional stochastic differential equations. In themselves these mod-
els would not draw the attention of physicians but how they explain the
experimental observations of stride interval variability, blood flow to the
brain and migraines is worthy of note.
The processing of time series that scale is one of the challenges that was
recognized early on in the application of nonlinear techniques to the under-
standing of physiology. A simple method of data analysis that determines
the scaling index of a time series is presented and applied to heartbeat,
interstride, and breathing data. The interpretation in terms of underlying
mechanisms is elaborated.
The method of reconstructing the phase space portrait of the dynamic
system using time series data was first demonstrated by Packard et al. [262],
and was an application of the embedding theorems of Whitney [393] and
Takens [333]. Chapter Five is devoted to the application of this technique
to a number of biomedical and chemical phenomena. It has helped in un-
derstanding the dynamics of epidemics, including how chaotic attractors
may explain the observed variability in certain cases without external fluc-
tuations driving the system. In a similar way the excitability of neurons
does not require membrane noise in the traditional sense to account for their
fluctuations, but rather can result from chaotic response to stimulation.
The first example of the application of this technique to data was to chem-
ical reactions, such as the Belousov–Zhabotinskii reaction [97] and certain
enzyme reactions [254]. Finally we discuss how chaos arises in the heart,
from the excitation of aggregates of embryonic cells of chick hearts [115] to
the normal beating of the human heart [21].
In Chapter Five we also review the use of the attractor reconstruction
technique (ART) on EEG time series data to help us understand the vari-
ous configurations of variability that are so apparent in the human brain.
In addition various dimensions are used to determine the geometrical struc-
ture of the attractor underlying the brain wave activity. First we examine
normal brain wave activity and find that one can both construct the phase
space portraits of the attractors and determine the fractional dimension of
the attractors. A number of difficulties associated with the data processing
techniques are uncovered in these analyses and ways to improve the effi-
ciency of these methods are proposed. One result that clearly emerges from
the calculations is that the dimension of the ‘cognitive attractor’ decreases
monotonically as a subject changes from quiet, awake and eyes open to
deeper stages of sleep.
On the theoretical side the model of Freeman [105], which he developed to
describe the dynamics of the olfactory system in a rat, is briefly discussed.
It is found that the basal olfactory EEG signal is not sinusoidal, but is
irregular and aperiodic. This intrinsic unpredictability is captured by the
model in that the solutions are chaotic attractors for certain classes of
parameter values. These theoretical results are quite in keeping with the
experimental observations of normal EEG records.
One of the more dramatic results that has been obtained is the precip-
itous drop in the correlation dimension of the EEG time series when an
individual undergoes an epileptic seizure. The brain’s attractor seems to
have a dimensionality on the order of 4 or 5 in deep sleep and to have
the much lower dimensionality of approximately 2 in the epileptic state.
This sudden drop in dimensionality was successfully captured in Freeman’s
model in which he calculated the EEG time series for a rat undergoing a
seizure.
The closing chapter attempts to loosely weave together the strands of
chaos theory, fractal geometry and statistics, complexity theory and a num-
ber of the other techniques developed in this revision in the context of the
nascent discipline of Network Science. A brief introduction to complex
networks, a field that has blossomed in the past decade, is presented in Chapter Six,
particularly as the ideas apply to physiologic networks. In order to make the
discussion concrete the decision making model [340, 341] (DMM) is used to
develop a number of the theoretical concepts such as synchronization and
criticality. The inverse power laws of connectivity and the time intervals
between events are shown to be emergent properties of network dynamics
and do not require separate assumptions regardless of whether physical,
social or physiological networks are under investigation. Some additional
detail is given on the network theory explanation of neuronal avalanches
[35] in normal cognitive behavior and the new disease of multiple organ
dysfunction syndrome (MODS) [49].



Chapter 2
Physiology in Fractal Dimensions

Although it is my intent to present the understanding of certain of the dy-
namical features contained in biomedical time series data using the meth-
ods of nonlinear data analysis, I find it useful to introduce a number of
the fundamental concepts through an investigation of more familiar static
physiological structures. This approach highlights the insights that can be
gained by the application of such concepts as self-similarity, fractals, renor-
malization group relations and power-law distributions to physiology.
The complex interrelationship between biological development, form and
function is evident in many physiological structures including the finely
branched bronchial tree and the ramified His-Purkinje conduction network
of the heart. In the early part of this century such relations were explored
in exquisite detail in the seminal work of D’Arcy Thompson [336]. It was
his conviction that although biological systems evolve by rules that may
be distinct from those which govern the development of physical systems,
they cannot violate basic physical laws. This idea of underlying physical
constraints led to the formulation of several important scaling relations
in biology − describing, for example, how proportions tend to vary as an
animal grows.
Relationships that depend on scale can have profound implications for
physiology. A simple example of Thompson’s approach is provided by the
application of engineering considerations to the determination of the maxi-
mum size of terrestrial bodies (vertebrates). The strength of a bone, in the
simplest model, increases in direct proportion to its cross-sectional area
(the square of its linear dimension) whereas its weight increases in propor-
tion to its volume (the cube of its linear dimension). Thus, there comes a
point where a bone does not have sufficient strength to support its own
weight, as first observed by Galileo Galilei (1564–1642) in 1638. The point
of collapse is given by the intersection of a quadratic and a cubic curve
denoting, respectively, the strength and weight of a bone, cf. Figure 2.1.
A second example, which is actually a variant of the first, recognizes that
mass increases as the cube of its linear dimension, but the surface area
increases only as the square. According to this principle, if one species is
twice as tall as another, it is likely to be eight times heavier but to have
only four times as much surface area. Consequently, the larger plants and
animals must compensate for their bulk; respiration depends on surface
area for the exchange of gases as does cooling by evaporation from the skin
and nutrition by absorption through membranes. One way to add surface
to a given volume is to make the exterior more irregular, as with branches
and leaves on trees; another is to hollow out the interior as with some
cheeses. The human lung, with 300 million air sacs, approaches the more
favorable ratio of surface to volume enjoyed by our evolutionary ancestors,
the single-celled microbes.

FIGURE 2.1. The strength of a bone increases with the cross-sectional area A ∝ l^2
whereas its weight increases as the volume W ∝ l^3 . The intersection of the two curves
yields A = W . Beyond this point the structure becomes unstable and collapses under
its own weight.


It is at this last point that the classical concepts of scaling developed by
Thompson and others fail. Classical scaling cannot account for the irreg-
ular surfaces and structures seen in the heart, lung, intestine, and brain.
The classical approach relies on the assumption that biological processes,
like their physical counterparts, are continuous, homogeneous, and regular.
Observations and experiments, however, suggest the opposite. Most biolog-
ical systems, and many physical ones, are discontinuous, inhomogeneous,
and irregular and are necessarily this way in order to perform a particu-
lar function. It has long been recognized that the characterization of these
kinds of systems requires new models. In this chapter we discuss how the
related concepts of fractals, nonanalytic mathematical functions, and renor-
malization group transformations provide novel approaches to the study of
physiological form and function.

2.1 Complexity and the Lung


Perhaps the most compelling feature of all physiological systems is their
complexity. Capturing the richness of physiological structure and function
in a single model presents one of the major challenges of modern biology.
On a static (structural) level, the bronchial system of the lung serves as
a useful paradigm for such anatomic complexity. One sees in this tree-like
network a complicated hierarchy of airways, beginning with the trachea
and branching down on an increasingly smaller scale to the level of tiny
tubes called bronchioles, see Figure 2.2. We return to the pulmonary tree in
considerable detail subsequently, but an essential prelude to a quantitative
analysis of this kind of complex structure is an appreciation of both its
qualitative and quantitative features.
Any successful model of pulmonary structure must account not only
for the details of microscopic (small scale) measurements, but also for the
global organization of these smaller units. It is the macroscopic (large scale)
structure we observe with the unaided eye, and initially one is struck with
at least two features of bronchial architecture. The first is the extreme vari-
ability of tube lengths and diameters and the second is the high level of
organization. The first of these paradoxical observations results from the
fact that the branching of a given airway is not uniform: the two tubes
emerging from a given branching vertex are not of equal length. One num-
bering convention is to label successive bifurcations of the bronchial tree
by generation number. The first generation of tubes is comprised of just
two members, the left and right mainstem bronchi. The second generation
consists of four tubes, and so forth. Clearly from one generation to the
next the tubes vary, tending to get smaller and smaller in both length and


FIGURE 2.2. The photograph shows a rubber cast of the human bronchial tree, from
the trachea to the terminal bronchioles. The mammalian lung has long been a paradigm
of natural complexity, challenging scientists to reduce its structure and growth to simple
rules.

diameter. But the variability of the lung is not restricted to comparisons
between generations. The tubes also vary markedly in size within any given
generation.
The second predominant impression of the lung, which seems to contra-
dict this initial sense of variability, is that of organization. The bronchial
tree, for all its asymmetries, is clearly constructed along some ordering
principle(s). There appears to be some pattern or patterns underlying the
irregularity of the multiple tube sizes. It is this paradoxical combination
of variability and order which must emerge from any successful model of
bronchial architecture. Indeed, we are forced to reject as ‘unphysiologic’
any model which fails to encompass these two features. Further, we find
that the fractal concept is quite useful in modeling the observed variability
of the lung [366].
The question of anatomic regularity and variability is only one aspect
of the general problem of physiologic complexity. Investigators also seek to
understand certain features of dynamical complexity, so that in addition
to their static structure, the real time functioning of physiologic networks
can be explained. I postpone this aspect of the discussion to the next chap-
ter where dynamic processes are considered in general. Measurement of
physiological networks under ‘free-running’ circumstances gives rise to data
sets that are notable for their erratic variations. The statistical techniques
required for the analysis of such data sets are formidable. In dealing with
healthy physiological networks, therefore, the tradition is to restrict the
experiment sufficiently so that this ‘noise’ is filtered from the data. Such
carefully controlled observations, while useful in dissecting selected aspects
of physiological behavior, do have a major shortcoming: they do not allow
a general, quantitative description of healthy function with its potentially
unbounded number of degrees of freedom.

[Figure 2.3: interbeat interval plotted against heartbeat number; upper panel REST, lower panel ACTIVE]

FIGURE 2.3. The contrast in the heart rate variability for a healthy individual between
the resting state and that of normal activity is quite dramatic. Any model that is to
successfully describe cardiovascular dynamics must be able to explain both the order of
the resting state and the variability of the active state.

If, for example, I feel my pulse while resting, my heart rate appears rela-
tively regular. However, if I were to record the activity of my heart during
a vigorous day’s activity, a far different impression of the normal heart-
beat would be obtained. Instead of exclusively observing some apparently
regular steady state, the record would show periods of sharp fluctuations
interspersed between these apparently regular intervals, see Figure 2.3. Any
useful model of lung anatomy would explain both its variability and order.
The same criteria can be adopted for judging the success of any model of
cardiovascular dynamics. Any understanding of heart rate variability must
account for the fluctuations seen in the free-running, ‘non-equilibrium’,
healthy state of the heart.
Over the past quarter century a relatively small group of scientists have
developed quantitative models which suggest mechanisms for the ‘organized
variability’ inherent in physiological structure and function. The essential
concept underlying this kind of constrained randomness is that of scaling
[221, 364]. The general notion of scaling, as we have already mentioned,
is well established in biology via the work of D’Arcy Thompson [336] and
others [123, 211, 309, 370]. However, the scaling mechanism adds to these
traditional theories a few wrinkles which are still unfamiliar to most physi-
ologists. At the same time, in the non-biological sciences, these new models
of scaling have emerged as an important strategy in understanding a vari-
ety of complex networks. The new scaling appears in related guise in the
description of a ubiquitous class of irregular structures called fractals, in
the theory of critical phenomena (renormalization group theory), and in
the ‘chaotic’ dynamics of nonlinear systems.
The fractal concept developed by Mandelbrot [217, 219] arises in three
distinct, but related guises. The first context in which we find fractals
deals with complex geometric forms. A fractal structure is not smooth
and homogeneous but rather, when examined with stronger and stronger
magnification, reveals greater and greater levels of detail. Many objects in
nature, including trees, coral formations, cumulus clouds and coastlines are
fractal. As we have mentioned and subsequently show, lungs, hearts, and
many other anatomic structures possess such geometric fractal properties
[123, 366]. A second guise in which one finds fractals has to do with the
statistical properties of a process. Here it is the statistics that are inhomo-
geneous and irregular rather than smooth. A fractal statistical process is
one in which there is a statistical rather than a geometrical sameness to the pro-
cess at all levels of magnification. Thus, just as the geometrical structure
satisfies a scaling relation so too does the stochastic process. The extreme
variability in the sizes of airways ‘within’ a given generation of the lung is
an example of such statistics [367, 370]. The final context in which frac-
tals are observed involves time and is related to dynamical processes. An
example is the voltage measured at the myocardium arising from the car-
diac pulses emerging from the His-Purkinje conduction system of the heart
[123]. Again, more and more structure is revealed in the voltage time se-
ries as the scale of observation is reduced. Furthermore, the smaller scale
structure is similar to the larger scale form. In this latter situation there
is no characteristic time scale in the time series because the structure of
the conduction system is a fractal tree. Here we see one of the connections
between geometric structure and dynamics.


In applying the new scaling ideas to physiology scientists have seen that
irregularity, when admitted as fundamental rather than treated as a patho-
logical deviation from some classical ideal, can paradoxically suggest a more
powerful unifying theory. To describe the advantage of the new concepts I
must first review some classical theories of scaling.

2.2 The Principle of Similitude


The concept of similitude or sameness emerges in a general way as the
central theme in D’Arcy Thompson’s studies of biological structure and
function. A compelling illustration of this principle is provided by the geom-
etry of spiral sea shells, such as the Nautilus shown in Figure 2.4. Based on
carefully compiled measurements, Thompson contended that the Nautilus
followed a pattern originally described by Rene Descartes (1596–1650) in
1683 as the equiangular spiral and subsequently by Jakob Bernoulli (1654–
1638 as the equiangular spiral and subsequently by Jakob Bernoulli (1654–
that he called it spira mirabilis and requested that it be inscribed on his
tombstone. The special feature of this type of spiral which has intrigued
mathematicians and which became the central theme of Thompson’s bi-
ological modeling is the similitude principle. As D’Arcy Thompson [336]
wrote:

In the growth of a shell, we can conceive no simpler law than
this, namely that it shall widen and lengthen in the same un-
varying proportions: and this simplest of laws is that which
nature tends to follow. The shell, like the creature within it,
grows in size but does not change its shape and the existence of
this constant relativity of growth, or constant similarity of form,
is of the essence, and may be made the basis of a definition, of
the equiangular spiral.

This spiral-shape is not restricted to the Nautilus but was described by
Thompson in many other shells. However it seems likely that the shell-
like structure in the inner ear, the cochlea (from the Greek word for snail),
also follows the design of the logarithmic spiral. The ability to preserve
basic proportions is remarkable; the lung, by contrast, seems riddled with
structural variations. In 1915, two years prior to D’Arcy Thompson’s work
On Growth and Form the German physiologist, Fritz Rohrer [295], reported
his investigations on scaling in the bronchial tree. Rohrer reviewed the
properties of the flow of a Newtonian fluid in systems of pipes of varying
lengths and cross-sectional areas arranged in cascades of different kinds. His
purpose was to determine the properties of the flow in branched networks
and from this theoretical reasoning to derive formulae for the average length
and diameter of a conduit as a function of the stage (generation z) of a
sequence of branches. He explored a number of assumptions regarding the
scaling properties of one branch to the next in a sequence; for example, if
there is a scaling only in length but not diameter, or if there is equal scaling
in length and diameter, and so on. Each of his assumed properties led to
different scaling relations between the flow at successive generations of the
branching system of pipes. A theoretical version of this empirical reasoning
is presented in the sequel.

FIGURE 2.4. The principle of similitude is exemplified in the logarithmic spiral. An
organism growing in such a spiral retains its original proportions while its size increases,
as can be seen in the shell of the pearly nautilus.

The next major attempt to apply scaling concepts to the understand-
ing of the respiratory system was made in the early sixties by Weibel and
Gomez [358]. The intent of their investigation was to demonstrate the exis-
tence of fundamental relations between the size and number of lung struc-
tures. They considered the conductive airways as a dichotomous branching
process, so if z denotes the generation index and n(z) denotes the number
of branches in the z th generation, then

n(z) = qn(z − 1), (2.1)


where q is a scaling parameter; the fractional change in the number of
airways from one generation to the next. This functional equation relating
the number of branches at successive generations has the solution
n(z) = q^z = exp(z ln q), (2.2)
which indicates that the number of airways increases exponentially with
generation number at a rate given by lnq. This solution corresponds to
having only a single conduit at the z = 0 generation with all conduits in
each stage being of equal length. The volume of the total airway is, on
average, the same between successive generations, but with signif-
icant variability due to the irregular pattern of dichotomous branching in
the real lung.
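
A few lines of arithmetic make Eq. (2.2) concrete for a dichotomous tree
(q = 2) combined with the cube-law reduction of diameters derived later
in this section; the initial diameter d0 below is only an illustrative number.

    import math

    # n(z) = 2**z tubes at generation z, diameters d(z) = d0 * 2**(-z/3);
    # the total cross-sectional area then grows as 2**(z/3).
    d0 = 1.8  # cm, illustrative
    for z in range(0, 11, 2):
        n = 2 ** z
        dz = d0 * 2.0 ** (-z / 3.0)
        area = n * math.pi * (dz / 2.0) ** 2
        print(f"z={z:2d}  n={n:5d}  d={dz:5.3f} cm  area={area:6.2f} cm^2")
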
Weibel and Gomez comment that the linear dimensions of the tubes in
each generation do not have fixed values, but rather show a distribution
about some average. This variability was neglected in their theoretical anal-
ysis since it was the first attempt to capture the systematic variation in
the linear dimension of the airways from generation to generation, although
it was accounted for in their data analysis. Their formal results are con-
tained in the earlier work of Rohrer [295] if one interprets the fixed values
of lengths and diameters at each generation used by him as the average
values used by Weibel and Gomez [358].
Rashevsky [287] introduced the Principle of Optimal Design in which
the material used and the energy expended to achieve a prescribed func-
tion is minimal. Rashevsky applied the principle to the basic problem of
how the arterial network could branch in space in order to supply blood
to every element of tissue. To address this problem he used the model of a
bifurcating branching network supplying blood to a restricted volume and
reducing the total resistance to the flow of blood. His purpose was to deter-
mine the condition imposed by the requirement that the total resistance is
minimum. This particular design principle has apparently been superseded
by a number of others, but it is still useful in many contexts as I show
below. Others that have a more modern flavor are maximum efficiency,
entropy minimization and fractal design all of which are discussed in due
course. But more importantly, it has been determined that they are not
independent of one another.
Here we assume the branching network is composed of N generations
from the point of entry (k = 1) to the terminal branches (k = N ). A
typical tube at some intermediate generation k has length lk , radius rk and
pressure drop across the length of the branch Δpk . The volume flow rate Qk
is expressed in terms of the flow velocity averaged over the cross sectional
area uk : Qk = πrk^2 uk . Each tube branches into n smaller tubes with the
branching of the vessel occurring over some distance that is substantially


[Figure 2.5: a parent tube of radius rk and length lk at level k branching into two daughter tubes of radius rk+1 and length lk+1 at level k + 1]

FIGURE 2.5. Sketch of a branching structure such as a blood vessel or bronchial airway
with the parameters used in a bifurcating network model.

smaller than the lengths of the tubes of either generation. Consequently,
the total number of branches generated up to generation k is Nk = n^k . The
pressure difference at generation k between the ends of a tube is given by a
suitably indexed version of Poiseuille’s law and the total resistance to the
flow is given by the ratio of the pressure to flow rate

Ωk = Δpk /Qk = 8νlk /(πrk^4 ). (2.3)

The total resistance for a network branch with m identical tubes in paral-
lel is 1/m the resistance of each individual tube. Thus, in this oversimplified
case we can write the total network resistance as

ΩT = 8νl1 /(πr0^4 ) + (8ν/π) Σ_{j=1}^{N} lj /(Nj rj^4 ). (2.4)

In order to minimize the resistance for a given mass Rashevsky first ex-
pressed the initial radius r0 in terms of the total mass of the network. The
optimum radii for the different branches of the bifurcation network having
the total mass M are then determined such that the total resistance is a
minimum, ∂ΩT /∂rj = 0, yielding the equality rk = Nk^{−1/3} r0 . The ratio of the
radii between successive generations is

rk+1 /rk = (Nk /Nk+1 )^{1/3} (2.5)

so that inserting the number of branches at the k th generation Nk = n^k


yields

rk+1 /rk = n^{−1/3} ,

an exponential reduction in the branch radii across generations.


The size of the reduction is determined by the number of daughter branches
being generated. Rashevsky considered the bifurcating case n = 2 where
the ratio of radii reduces to

rk+1 /rk = 2^{−1/3} = 0.794. (2.6)

This is the classic ‘cube law’ branching of Thompson [336] in which he used
the ‘principle of similitude’. The value 2^{−1/3} was also obtained by Weibel
and Gomez [358] for the reduction in the diameter of bronchial airways for
the first ten generations of the bronchial tree. However they noted a sharp
deviation away from this constant fractional reduction beyond the tenth
generation as shown in Figure 2.6.
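
Rashevsky’s optimum is easily checked numerically. The sketch below (all
physical parameter values are illustrative) scans the radius reduction ratio
β = rk+1 /rk of a symmetric bifurcating tree, rescaling r0 at each β so that
the total vessel volume stays fixed, and locates the β that minimizes the
total Poiseuille resistance of Eq. (2.4); the minimum falls at β = 2^{−1/3}
≈ 0.794.

    import numpy as np

    nu, l0, n_gen, V_fixed = 0.04, 10.0, 10, 100.0  # illustrative values
    k = np.arange(n_gen)
    N_k = 2.0 ** k                                  # tubes per generation
    l_k = l0 * 2.0 ** (-k / 3.0)                    # cube-law lengths

    def total_resistance(beta):
        shape = beta ** k                           # r_k = r0 * beta**k
        # fix r0 so that sum N_k * pi * r_k**2 * l_k equals V_fixed
        r0 = np.sqrt(V_fixed / np.sum(N_k * np.pi * shape**2 * l_k))
        r_k = r0 * shape
        return np.sum(8.0 * nu * l_k / (np.pi * N_k * r_k**4))

    betas = np.linspace(0.6, 0.95, 351)
    best = betas[np.argmin([total_resistance(b) for b in betas])]
    print(best, 2.0 ** (-1.0 / 3.0))                # both ~ 0.794
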
Theodore Wilson [399] subsequently offered an alternate explanation for
the proposed exponential decrease in the average diameter of a bronchial
tube with generation number by demonstrating that this is the functional
form for which a gas of a given composition can be provided to the alveoli
with minimum metabolism or entropy production in the respiratory mus-
culature. His hypothesis was that the characteristics of the design of physi-
ologic networks take values for which a given function can be accomplished
with minimum total entropy production. This principle was articulated
in great detail somewhat later by Glansdorff and Prigogine [113] in a much
broader physical context that includes biological systems as a special appli-
cation. Rather than minimum entropy production Rashevsky believed that
the optimal design is accomplished with a minimum of material used and
energy expended. Each of these principles takes the form of minimizing the
variation of the appropriate quantity between successive generations. The
relative merits of which quantity is to be minimized and why this is a rea-
sonable modeling strategy is not taken up here, but rather we stress that
the anatomic data apparently suggest an underlying principle that guides
the morphogenesis of the bronchial tree. We return to the question of the
possible relation between scaling and morphogenesis in due course.


[Figure 2.6: log diameter, log d(z) (cm), plotted against generation z from 0 to 28]

FIGURE 2.6. The human lung cast data of Weibel and Gomez [358] for 23 generations
are indicated by the circles and the prediction using the exponential form for the average
diameter is given by the straight line. The fit is quite good until z = 10, after which
there is a systematic deviation of the anatomic data from the theoretical curve.

Note that the analyses up to this point are consistent with the data for
ten generations of the bronchial tree. However, when we examine Weibel
and Gomez’s data for the entire span of the bronchial tree (more than
twenty generations) a remarkable systematic deviation from the exponen-
tial behavior appears as depicted in Figure 2.6. Weibel and Gomez [358]
attributed this deviation to a change in the flow mechanism in the bronchial
tree from that of minimum resistance to that of molecular diffusion. I con-
tend that the observed change in the average diameter can equally well be
explained without recourse to such a change in flow properties. Recall that
the arguments reviewed neglect the variability in the linear scales at each
generation and use only average values for lengths and diameters. The
distribution of linear scales at each generation accounts for the deviation
in the average diameter from a simple exponential form. The problem is
that the seemingly obvious classical scaling sets a characteristic scale size,
which clearly fails for complex systems where no characteristic scale should
exist. My colleagues and I found that the fluctuations in the linear sizes
are inconsistent with simple scaling but are compatible with a more general
scaling theory that satisfies a renormalization group property [368]. Histor-
ically renormalization group theory has been applied almost exclusively to
the understanding of complex physical processes that are dependent on
many scales [398]. We introduced the relevance of this new scaling theory
to physiologic variability [366] and review it subsequently.
The bridge between the classical scaling principles just outlined and the
renormalization theory of scaling is the theme of similitude, a notion en-
countered in the discussion of the logarithmic spiral. Intuition suggests that
the type of simple scaling function implicit in the classical notion of simil-
itude is not adequate to describe the full range of structural variability
apparent in the lung and elsewhere in physiology. Classical scaling princi-
ples, as noted before, are based on the notion that the underlying process is
uniform, filling an interval in a smooth continuous fashion. In the example
of the bone given by Galileo the ‘strength’ was assumed to be uniformly
distributed over the cross-sectional area with its weight having a similar
uniformity. Such assumptions are not necessarily accurate. It is well known
that the marrow of the bone is more porous than the periphery, so that
neither the distribution of strength nor mass in a bone is uniform. This
non-uniformity manifests itself through a new scaling property. The de-
viation of bronchial diameter measurements from the simple exponential
derived by Rohrer [295] and later by Weibel and Gomez [358], confirm this
suspicion.
The two major limitations of this classical similitude principle are: (1) it
neglects the variability in linear scales at each generation and (2) it assumes
the system to be homogeneous on scales smaller than some characteristic
size. One sees however that the bronchial tube dimensions clearly show
prominent fluctuations around their mean values and as the bronchial tree
is magnified with greater and greater resolution, finer and finer details of
structure are revealed. Thus, the small-scale architecture of the lung, far
from being homogeneous, is richly and somewhat asymmetrically struc-
tured. At the same time, there is clearly a similarity between the bronchial
branchings on these smaller levels and the overall tree-like appearance.
It is necessary to make a transition, therefore, from the notion of simili-
tude with its implicit idea of homogeneity, to a more general concept which
has come to be referred to as self-similarity. Is there any known scaling
mechanism which yields self-similar behavior but which is not dependent
on a single scale factor? Clearly any theory of self-similar scaling based
on a multiplicity of scales would be an attractive candidate to test against
physiological structures and processes which are characterized by variability
and order.

2.2.1 Fractals, Self-similarity and Renormalization


In the late 19th century mathematicians addressed the problem of charac-
terizing structures that have features of self-similarity and lack a charac-
teristic smallest scale. Although they were not motivated by physiological
concerns their work has relevance to complex physiological structures such
as the lung in that as one proceeds from the trachea to the alveoli there
is an average decrease in the cross-sectional area of the airway of the self-
similar branches. Thus as one traverses the bronchial tree more and more
tubes of smaller and smaller size appear. Although there is a smallest size
to the bronchial tubes this can be disregarded for most of the mathematical
arguments in this section since the existence of this scale does not strongly
influence the conclusions. However the existence of these smallest scales
becomes important in subsequent modeling. At any generation we can con-
sider the distribution in tube sizes as constituting a mathematical set. To
understand the bronchial tree, therefore, it is apparent that we need to
have a model of a set that can be progressively ‘thinned out’. The study of
such self-similar sets was initiated by the mathematician Georg Cantor and
they now bear his name [177]. Some of his ideas are surprisingly relevant
to biology.
A simple example of what has come to be called a Cantor Set can be
constructed starting from a line of unit length by systematically removing
segments from specified regions of the line such as depicted in Figure 2.7.
We indicate the set in stages, generated by removing the middle third
of each line segment at the z th generation to generate a more depleted
structure at the (z + 1)st generation. When this procedure is taken to
the limit of infinitely large z the resulting set of points is referred to as a
Cantor set. It is apparent that the set of line segments becomes thinner and
thinner as z is increased. It is important to visualize how the remaining line
segments fill the one-dimensional line more and more sparsely with each
iteration, since it is the limit distribution of points that we wish to relate
to certain features of physiological structure. The line segments, like the
bronchial tube sizes, become smaller and smaller, and the set of points at
the limit of this trisecting operation is not continuous. How then does one
characterize the limit set?
The Cantor set can be characterized by a fractional dimension D that is
less than the topological dimension of the line, that is, D < 1. The fractal
dimension can be specified by invoking the fiction of mass points. Imagine


FIGURE 2.7. A Cantor set can be generated by removing the middle third of a line
segment at each generation z. The set of points remaining in the limit z → ∞ is called a
Cantor set. The line segments are distributed more and more sparsely with each iteration,
and the resulting set of points is both discontinuous and inhomogeneous.

that the mass points are initially distributed along a line of unit length. In
cutting out the middle third of the line, we redistribute the mass along the
remaining two segments so that the total mass of the set remains constant.
At the next stage, where the middle third is cut out of each of the two
line segments, we again redistribute the mass so that none is lost. We now
define the parameter a as the ratio of the total mass to the mass of each
segment after one trisecting operation. Thus, after z trisections the number
of line segments is

N (z) = a^z . (2.7)
We also define a second parameter b as the ratio of the length of the orig-
inal line to the length of each remaining segment, which for the case of
trisections gives

η(z) = b^z . (2.8)
The fractal dimension of the resulting Cantor set is

D = ln N (z)/ ln η(z) = ln a/ ln b. (2.9)
Note that the dimension is independent of z and therefore is equal to the
asymptotic fractal dimension. In this example, since each segment receives
half the mass of its parent, a = 2 and since we are cutting out the middle
third b = 3 so that the fractal dimension is ln 2/ ln 3 = 0.6309.
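
The counting behind Eq. (2.9) is easily reproduced. The sketch below
generates the left endpoints of the segments surviving a dozen trisections,
covers the unit interval with boxes of size 3^−z and counts the occupied
boxes: N = 2^z, so the estimate returns D = ln 2/ ln 3 at every scale. The
depths used are illustrative.

    import numpy as np

    def cantor_left_endpoints(depth):
        # left endpoints of the 2**depth segments after `depth` trisections
        x, scale = np.array([0.0]), 1.0
        for _ in range(depth):
            scale /= 3.0
            x = np.concatenate([x, x + 2.0 * scale])
        return x

    pts = cantor_left_endpoints(12)
    for z in (2, 4, 6):
        eta = 3.0 ** (-z)                     # box size
        # +1e-9 guards against roundoff at box boundaries
        n_boxes = len(np.unique(np.floor(pts / eta + 1e-9)))
        print(z, n_boxes, np.log(n_boxes) / np.log(3.0 ** z))  # -> 0.6309
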


The parameter a determines how quickly the mass is being redistributed


while the parameter b gives us a comparable idea of how quickly the space
being occupied is being thinned out. Thus, we see that the thinness of the
distribution of the elements of the set is dependent on three factors. First
is the dimension E of the Euclidean space in which the set is embedded;
here E = 1. Second is the dimension D of the set itself given by Eq.(2.9).
Third is the intuitive notion of a topological dimension DT . For example, a
string has a topological dimension of unity DT = 1 because it is essentially
a line regardless of how one distorts its shape and it is embedded in a
Euclidean space of one higher dimension. If D < E , but D is greater than
the topological dimension, then the set is said to be fractal and the smaller
the fractal dimension the more tenuous is the set of points.
Compared with a smooth, classical geometrical form, a fractal curve
(surface) appears wrinkled. Furthermore, if the wrinkles of a fractal are
examined through a microscope more wrinkles become apparent. If these
wrinkles are now examined at higher magnification, still smaller wrinkles
(wrinkles on wrinkles on wrinkles) appear, with seemingly endless levels of
irregular structure emerging. The fractal dimension provides a measure of
the degree of irregularity. A fractal as a mathematical entity has no char-
acteristic scale size and so the emergence of irregularity proceeds to ever
smaller scales. A real world fractal, on the other hand, always ends at some
smallest scale as well as some largest scale and whether or not this is a
useful concept depends on the size of the interval over which the process
appears to be scale-free.
What then is the length of a fractal line? Clearly, there can be no simply
defined length for such an irregular curve independent of the measurement
scale, since the smaller the ruler used to measure it, the longer the line ap-
pears to be. For example, Richardson (1881-1953) noted that the estimated
length of an irregular coastline or boundary L(η) is given by the number of
times N(η) the measuring unit η is laid end to end to determine the length
[291]

L(η) = L0 N(η) η = L0 η^{1−d}. (2.10)


Here L0 is a constant with units of length and d is a constant given by the
slope of the linear log-log line

log L(η) = log L0 + (1 − d) log η. (2.11)


For a classical smooth line d = DT = 1 and L (η) = constant, independent
of η where DT is the topological dimension. For a fractal curve, such as
an irregular coastline, d is the fractal dimension d = D > DT = 1. In
Figure 2.8 we see that the data for the apparent length of coastlines and

boundaries fall on straight lines with slopes given by (1 − d). From these
data we find that d ≈ 1.3 for the coast of Britain and d = 1 for a circle, as
expected. Thus, it is evident that

L(η) = L0 η^{−0.3} → ∞ as η → 0 (2.12)


for a fractal curve since (1 − d) < 0 for the coastlines depicted. The self-
similitude of the irregular curve results in the measured length increasing
without limit as the ruler size diminishes.
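A hedged sketch of Richardson's procedure may be useful here: the ruler sizes and the value d = 1.3 below are synthetic stand-ins for coastline data, used only to show how a least-squares fit of Eq. (2.11) recovers the dimension.

    import numpy as np

    # Synthetic Richardson data obeying L(eta) = L0 * eta**(1 - d) with d = 1.3,
    # roughly mimicking the west coast of Britain in Figure 2.8.
    L0, d = 1.0e3, 1.3
    eta = np.logspace(0.5, 3.0, 12)        # ruler lengths in km (illustrative)
    L = L0 * eta ** (1.0 - d)

    # Linear regression on Eq. (2.11): log L = log L0 + (1 - d) log eta
    slope, intercept = np.polyfit(np.log10(eta), np.log10(L), 1)
    print("estimated d =", 1.0 - slope)    # recovers d = 1.3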

[Figure 2.8: log10(total length in km) plotted against log10(length of side in km) for the Australian coast, a circle, the South African coast, the German land-frontier of 1900, the west coast of Britain, and the land-frontier of Portugal.]

FIGURE 2.8. Fractal plots of various coastlines in which the apparent length L(η) is
graphed versus the measuring unit η: plotted as log10 [total length (km)] versus log10
[length of scale (km)]. [216]

Mandelbrot [219] investigated a number of curves having the above prop-
erty, that is, curves whose lengths depend on the ruler size. One example
is the triadic Koch curve depicted in Figure 2.9. The construction of this
curve is initiated with a line segment of unit length, L(1) = 1. A triangu-
lar kink is then introduced into the line resulting in four segments each of
length one-third so that the total length of the pre-fractal, a term coined
by Feder [93], is (4/3)^1. If this process is repeated on each of the four line
segments the total length of the resulting curve is (4/3)^2. Thus after n
applications of this operation we have

L(η) = (4/3)^n (2.13)

FIGURE 2.9. On a line segment of unit length a kink is formed, giving rise to four line
segments, each of length 1/3. The total length of this line is 4/3. On each of these line
segments a kink is formed, giving rise to 16 line segments each of length 1/9. The total
length of this curve is (4/3)^2. This process is continued as shown through n = 5 for the
triadic Koch curve.

where the length of each line segment is

η = 1/3^n. (2.14)

Now the generation number n may be expressed in terms of the scale η as

n = − log η/ log 3 (2.15)


so that the length of the prefractal is

L(η) = (4/3)^{−log η/log 3} = e^{−(ln η/ln 3)(ln 4−ln 3)} = η^{1−d}. (2.16)

FIGURE 2.10. Here we schematically represent how a given mass can be non-uniformly
distributed in a given volume in such a way that the volume occupied by the mass has
a fractal dimension D = ln a/ln b. The parameter b gives the scaling from the original
sphere of radius r and the parameter a gives the scaling from the original total mass M
assumed to be uniformly distributed in a volume r^3 to that non-uniformly distributed
in the volume r^D.

Comparing Eq. (2.16) with Richardson’s Eq. (2.10) we obtain

d = ln 4/ln 3 ≈ 1.2619 (2.17)

as the fractal dimension of the triadic Koch curve. Furthermore, we note
that the number of line segments at the nth generation is given by

N(η) = 4^n = 4^{−ln η/ln 3} = 1/η^d (2.18)

as the number of line segments necessary to cover an irregular curve of
fractal dimension D = d [93].
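The Koch bookkeeping of Eqs. (2.13)-(2.18) can be verified in a few lines; this sketch is purely illustrative:

    import math

    # Triadic Koch prefractal: after n kinks there are 4**n segments,
    # each of length 3**(-n), so L = (4/3)**n, Eqs. (2.13)-(2.14).
    for n in range(1, 8):
        eta = 3.0 ** (-n)
        N = 4 ** n
        L = N * eta                            # total length (4/3)**n
        d = math.log(N) / math.log(1.0 / eta)  # Eq. (2.18): N = eta**(-d)
        print(n, round(L, 4), d)               # d = ln 4 / ln 3 = 1.2619... at every n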

In the second decade of the last century Felix Hausdorff determined that
one could generally classify such a set as the one described above by means
of a fractional dimension [217, 219]. An application of Hausdorff’s reason-
ing can be made to the distribution of mass points in a volume of space of
radius R, where a mass point is again used to denote an indivisible unit of
physical mass (or probability mass) at a mathematical point in space. Any
observable quantity is then built up out of large numbers of these idealized
mass points. One way of picturing a distribution having a fractional dimen-
sion is to imagine approaching a mass distribution from a great distance.
At first, the mass seems to be in a single cluster. As one gets closer, it is
observed that the cluster is really composed of smaller clusters such that
upon approaching each smaller cluster, they are seen to be composed of a
set of still smaller clusters, etc. It turns out that this apparently contrived
example in fact describes the distribution of stars in the heavens, and the
Hausdorff dimension has been determined by astronomical observations to
be approximately 1.23 [266]. Figure 2.10 depicts how the total mass of such
a cluster is related to its Hausdorff (fractal) dimension.
The total mass M(R) of a distribution of mass points in Figure 2.10a
is proportional to R^d, where d is the dimension of space occupied by the
masses. In the absence of other knowledge it is assumed that the point
masses are uniformly distributed throughout the volume and that d is equal
to the Euclidean dimension E of the space, for example in three spatial di-
mensions d = E = 3. Let us suppose, however, that on closer inspection
we observe that the mass points are not uniformly distributed, but instead
are clumped in distinct spheres of size R/b each having a mass that is 1/a
smaller than the total mass as depicted in Figure 2.10b. Thus, what was
initially visualized as a beach ball filled uniformly with sand turns out to
resemble one filled with basketballs, each of the basketballs being filled uni-
formly with sand. Now examine one of these smaller spheres (basketballs)
only to find that instead of the mass points being uniformly distributed in
this reduced region it consists of still smaller spheres, each of radius R/b^2
and each having a mass 1/a^2 smaller than the total mass as shown in Fig-
ure 2.10c. Now again the image changes so that the basketballs appear to
be filled with ping-pong balls, and each ping-pong ball is uniformly filled
with sand. If we assume that this procedure of constructing spheres within
spheres can be telescoped indefinitely we obtain
M(R) = lim_{N→∞} [M(R/b^N) a^N]. (2.19)

This relation yields a finite value for the total mass in the limit of N becom-
ing infinitely large only if D = ln a/ ln b, where D is the fractal dimension
of the distribution of mass points dispersed throughout the topological vol-
ume of radius R. The index of the power-law distribution of mass points

FIGURE 2.11. Fractals are a family of shapes containing infinite levels of detail, as
observed in the Cantor set and in the infinitely clustering spheres. In the fractals repro-
duced here, the tip of each branch continues branching over many generations, on smaller
and smaller scales, and each magnified, smaller scale structure is similar to the larger
form, a property called self-similarity. As the fractal (Hausdorff) dimension increases
between one and two (left to right in the figure), the tree sprouts new branches more and
more vigorously. The organic, treelike fractals shown here bear a striking resemblance
to many physiological structures. (From [367] with permission.)

can therefore be distinct from the topological dimension of the space in
which the mass is embedded, that is, D < E = 3.
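The telescoping limit in Eq. (2.19) can be made concrete with a small numerical check, assuming a seed mass profile M(r) = r^D at the finest scale:

    import math

    # Eq. (2.19): M(R) = lim M(R / b**N) * a**N is finite and independent
    # of N precisely when the seed profile scales as r**D with D = ln a / ln b.
    a, b, R = 2.0, 3.0, 1.0
    D = math.log(a) / math.log(b)
    for N in (1, 5, 10, 20):
        M_seed = (R / b ** N) ** D     # mass in one innermost sphere
        print(N, M_seed * a ** N)      # constant in N: the limit exists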
There are several ways to intuitively make sense of such a fractional
dimension. Note first that in the example of the Cantor set that E = 1 >
D > DT = 0. This makes sense when one thinks of the Cantor set as a
physical structure with mass: it is something less than a continuous line,
yet more than a vanishing set of points. Just how much less and more is
given by the ratio ln a/ ln b. If a were equal to b, the structure would not
seem to change no matter what the magnification of the original line; the
mass would lump together as quickly as the length scaled down, and a one-
dimensional Euclidean line is seen on every scale. If a were greater than
b, the case 2 > D > 1, there might be a branching or a flowering object,
one that seemed to develop finer and finer structure under magnification.
What could emerge is something like the fractal trees of Figure 2.11, which
burst out of a one-dimensional space but do not fill a two-dimensional
Euclidean plane. Again, the precise character depends on the value of D:
the tree at the left has a fractal dimension barely above one, and thus it
is wispy and broom-like; as the dimension increases from one into two, the
canopy of branches becomes more and more lush.

The physiological implications of these concepts depend on how such
a set of points can be generated, or in the more general case how a curve
with a fractal dimension can be constructed. Cantor’s original interest was
in the representation of functions by means of trigonometric series when
the function is discontinuous or divergent at a set of points. Although he
became more interested in how to choose such a set of points than in their
series representation, another German mathematician, Karl Weierstrass,
who was a teacher and later a colleague of Cantor, was keenly interested
in the theory of functions and suggested to him a particular series repre-
sentation of a function that is continuous everywhere but is differentiable
nowhere.
For a function to be differentiable, one must be able to draw a well-
defined, straight-line tangent to every point on the curve defined by the
function. Functions describing curves for which this tangent does not exist
are called non-analytic or singular and lack certain of the properties we
have come to expect from mathematical representations of physical and bi-
ological processes. For example in the empirical science of thermodynamics
the derivatives of certain functions often determine the physical properties
of materials such as how readily they absorb heat or how easily electricity is
conducted. In some circumstances, however, the property being measured
does become discontinuous as in the case of the magnetization of a piece of
iron as the temperature of the sample approaches a critical value (the Curie
temperature Tc ). At this value of the temperature, the magnetization which
is zero for T > Tc , jumps to a finite value and then smoothly increases with
decreasing temperature T . The magnetic susceptibility, the change in the
magnetization induced by a small applied field, therefore becomes singular
at T = Tc . Thus, the magnetic susceptibility is a nonanalytic function of
the temperature. Renormalization group theory found its first successful
application in this area of phase transitions [398]. Such non-analytic
functions, although present, had been thought to be exceptional in
the physical sciences [241]. However, such singular behavior appears to be
more the rule than the exception in social, biological and medical sciences
[128, 315, 369]. Before discussing these applications we need to develop
some of the basic ideas regarding fractals and renormalization groups more
completely.
Weierstrass cast the argument presented earlier on the fractal distribu-
tion of mass points into a particular mathematical form. His intent was to
construct a series representation of a continuous non-differentiable function.
His function was a superposition of harmonic terms: a fundamental with
a frequency ω0 and unit amplitude, a second periodic term of frequency
bω0 with amplitude 1/a, a third periodic term of frequency b^2 ω0 with am-
plitude 1/a^2, and so on as depicted in Figure 2.12. The resulting function
is an infinite series of periodic terms each term of which has a frequency
that is a factor b larger than the preceding term and an amplitude that
is a factor of 1/a smaller. These parameters can be related to the Cantor
set discussed earlier if we take a = b^μ with 1 < μ < 2. Thus, in giving
a functional form to Cantor’s ideas, Weierstrass was the first scientist to
construct a fractal function. Note that for this concept of a fractal function,
or fractal set, there is no smallest scale. For b > 1 in the limit of infinite N
the frequency ω0 b^N goes to infinity so there is no highest frequency con-
tribution to the Weierstrass function. Of course if one thinks in terms of
periods rather than frequencies, then the shortest period contributing to
the series is zero.

FIGURE 2.12. Here we show the harmonic terms contributing to the Weierstrass func-
tion: (a) a fundamental with frequency ω0 and unit amplitude; (b) a second periodic
term of frequency bω0 with amplitude 1/a and so on until one obtains (c) a superposi-
tion of the first 36 terms in the Fourier series expansion of the Weierstrass function. We
choose the values a = 4 and b = 8, so that the fractal dimension is D = 2 − 2/3 = 4/3,
close to the value used in Figure 2.7.

Consider for a moment what is implied by the lack of a smallest period,
or equivalently the lack of a largest frequency in the Weierstrass function.
Imagine a continuous line on a two-dimensional Euclidean plane and sup-
pose the line has a fractal dimension greater than unity but less than two.
How would such a curve appear? At first glance the curve would seem to be
a ragged line with many abrupt changes in direction as depicted in Figure
2.12. If we now magnify a small region of the line, indicated by the box (a)
in Figure 2.13, we see that the enlarged region appears qualitatively the
same as the original curve as depicted in Figure 2.13b. If we now magnify
a small region of this new line, indicated by the box (b), we again obtain
a curve qualitatively indistinguishable from the first two, see Figure 2.13c.
This procedure can be repeated indefinitely just as we did for the mass dis-
tribution in space. This equivalence property is called self-similarity and
expresses the fact that the qualitative properties of the curve persist on
all scales and the measure of the degree of self-similarity is precisely the
fractal dimension.
The Weierstrass function can be written as the Fourier series

F(z) = Σ_{n=0}^{∞} (1/a^n) cos[b^n ω0 z], a, b > 1 (2.20)

FIGURE 2.13. We reproduce here the Weierstrass curve constructed in Figure 2.12 in
which we superpose smaller and smaller wiggles, so that the curve looks like the irregular
line on a map representing a very rugged seacoast. Inset (b) is a magnified picture of the
boxed region of inset (a). We see that the curve in (b) appears qualitatively the same as
the original curve but note the change in scale. We now magnify the boxed region in (b)
to obtain the curve in (c) and again obtain a curve that is qualitatively indistinguishable
from the first two, again note the change in scale. This procedure can in principle be
continued indefinitely because of the fractal dimension of the curve.

which is the mathematical expression of the above discussion. We now
separate the n = 0 term from the series Eq.(2.20) and write

F(z) = Σ_{n=1}^{∞} (1/a^n) cos[b^n ω0 z] + cos[ω0 z] (2.21)

so that shifting the summation by unity we have

F(z) = (1/a) F(bz) + cos[ω0 z]. (2.22)
The solution to Eq.(2.22) is worked out in detail elsewhere [364] in terms
of an analytic part Fa and a singular part Fs , such that F = Fa + Fs .
Thus, if we drop the harmonic term on the right hand side of Eq. (2.22)
we obtain a functional scaling relation for Fs (z). The dominant behavior
of the Weierstrass function is then expressed by the functional relation

Fs(z) = (1/a) Fs(bz). (2.23)
The interpretation of this relation is that if one examines the properties of
the function on the magnified scale bz what is seen is the same function
observed at the smaller scale z but with an amplitude that is scaled by
a. This is self-similarity and an expression of the form Eq. (2.23) is called
a renormalization group relation. The mathematical expression for this
self-similarity property predicts how the function F (z) varies with z. The
renormalization group transformation can be solved to yield,

Fs(z) = A(z) z^α (2.24)


where the power-law index α must be related to the two parameters in the
series expansion by
α = ln a/ ln b (2.25)
just as found earlier. A new wrinkle is the coefficient function

A(z) = A(bz) (2.26)

which is periodic in the logarithm of the variable z with period ln b

A(z) = Σ_{n=−∞}^{∞} An e^{i2πn ln z/ln b}. (2.27)

The complex coefficients {An } in the Fourier expansion Eq. (2.27) are de-
termined from data.
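The shift relation Eq. (2.22) holds term by term in the truncated series and is easy to confirm numerically; the parameter values a = 4 and b = 8 below simply echo those used in Figure 2.12.

    import numpy as np

    # Truncated Weierstrass series, Eq. (2.20), and a numerical check of the
    # shift relation Eq. (2.22): F(z) = (1/a) F(bz) + cos(w0 z).
    a, b, w0, n_max = 4.0, 8.0, 1.0, 30

    def F(z, n_terms=n_max):
        n = np.arange(n_terms)
        return np.sum(np.cos(b ** n * w0 * z) / a ** n)

    for z in (0.1, 0.3, 0.7):
        lhs = F(z)
        rhs = F(b * z, n_max - 1) / a + np.cos(w0 * z)
        print(z, lhs, rhs)   # the two sides agree to floating-point accuracy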

Mandelbrot's concept of a fractal liberates the ideas of geometric forms
from the tyranny of straight lines, flat planes and regular solids and extends
them into the realm of the irregular, disjoint and singular. As rich as this no-
tion is, I require one additional extension into the arena of fluctuations and
probability, since it is usually in terms of averages that physiological data
sets are understood. If F (z) is now interpreted as a random function, then
in analogy with the Weierstrass function, the probability density satisfies a
scaling relation. Thus, the scaling property that is present in the variable
Fs (z) for the usual Weierstrass function is transferred to the probability
distribution for a stochastic (random) function. This transfer implies that
if the process Fs (z) is a random variable with a properly scaled probability
density then the two stochastic functions Fs(bz) and b^{1/α} Fs(z) have the
same distribution [240]. This scaling relation establishes that the irregular-
ities of the stochastic process are generated at each scale in a statistically
identical manner. Note that for α = 2 this is the well known scaling prop-
erty of Brownian motion with the square root of the ‘time’ z. Thus, the
self-affinity that arises in the statistical context implies that the curve (the
graph of the stochastic function Fs (z) versus z) is statistically equivalent
at all scales rather than being geometrically equivalent [217, 364].
The algebraic increase of Fs (z) with z is a consequence of the scaling
property of the function with z. The scaling in itself does not guarantee
that F (z) is a fractal function, but Berry and Lewis [40] studied very similar
functions and concluded they are fractal. Consider the function

X(z) = Σ_{n=−∞}^{∞} (1/a^n) (1 − e^{i b^n ω0 z}) e^{iφn} (2.28)

where the phase φn is arbitrary. This function was first examined by Lévy
and later used by Mandelbrot [217]. The fractal dimension D of the curve
generated by the real part of Eq. (2.28) with φn = 0 is given by 2 − D = α
so that

D = 2 − ln a/ ln b, (2.29)
which for the parameters a = 4 and b = 8, is D = 4/3. Mauldin and
Williams [227] examined the formal properties of such functions and con-
cluded that for b > 1 and 0 < α ≤ 1 the dimension is in the interval
[2 − α − C/ln b, 2 − α] where C is a positive constant and b is sufficiently
large.
The set of phases {φn } may be chosen deterministically as done above,
or randomly, as is done now. If φn is a random variable uniformly distributed on
the interval (0, 2π), then each choice of the set of values {φn } constitutes
a member of an ensemble for the stochastic function X(z). If the phases
are also independent and b → 1+, then X(z) is a Normal random function.
The condition 1 < D < 2 is required to ensure the convergence of the sum
in Eq. (2.28).
Consider the increments of X(z):

ΔX(Z, z) = X(Z + z) − X(z) = Σ_{n=−∞}^{∞} b^{−n(2−D)} (e^{i b^n ω0 z} − e^{i b^n ω0 (Z+z)}) e^{iφn} (2.30)

and assume that the φn are independent random variables uniformly dis-
tributed on the interval (0, 2π). The mean-square increment is

Q(Z) = ⟨|ΔX(Z, z)|^2⟩_φ = Σ_{n=−∞}^{∞} 2 b^{−2n(2−D)} [1 − cos(b^n ω0 Z)] (2.31)

where the φ subscript on the brackets denotes an average over an ensemble
of realizations of the phase fluctuations. The right hand side of Eq. (2.31)
is independent of z, that is, it depends only on the difference Z, so that
ΔX(Z, z) is a homogeneous (also called stationary when z is the time)
random process.
Note that Eq. (2.31) has the same form as the real part of the extended
Weierstrass function when φn = 0. If we shift the summation index in Eq.
(2.31) by unity we obtain the scaling relation

Q(bZ) = b^{2(2−D)} Q(Z) (2.32)


which is of the same form as Eq. (2.23). Thus, the correlations in the
intervals of the extended Weierstrass function, like the function itself, are
self-similar. Here again the solution to the renormalization group relation
Eq. (2.32) is a modulated power law.
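The scaling relation Eq. (2.32) can likewise be checked directly from the series Eq. (2.31); the parameter values below are illustrative, and a truncated sum matches the renormalization prediction up to a small truncation error.

    import numpy as np

    # Mean-square increment Q(Z) of Eq. (2.31) and the renormalization
    # group scaling of Eq. (2.32): Q(bZ) = b**(2 * (2 - D)) * Q(Z).
    b, D, w0 = 1.5, 4.0 / 3.0, 1.0

    def Q(Z, n_lo=-60, n_hi=60):
        n = np.arange(n_lo, n_hi)
        return np.sum(2.0 * b ** (-2 * n * (2 - D)) * (1 - np.cos(b ** n * w0 * Z)))

    for Z in (0.2, 1.0, 3.0):
        print(Q(b * Z) / Q(Z), b ** (2 * (2 - D)))   # the ratios agree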
The usual Weierstrass function was shown to increase algebraically in z
with the power-law index α given by the ratio of logarithms. Therefore if
either a or b is less than unity (but not both) then the sign of α changes,
that is to say that the dominant behavior of F(z) becomes inverse power-
law (1/z^α) with α = ln a/ln(1/b). The preceding interpretation of the
self-similarity of a process represented by such a function remains intact
if I replace the notion of going to successively smaller scales to one of
going to successively larger scales. Thus, an inverse power-law reflects self-
similarity under contraction whereas a power-law denotes self-similarity
under magnification.

2.2.2 Fractal Lungs


How do the apparently abstract notions of self-similar scaling, renormal-
ization group theory and fractal dimensionality relate to the architecture
of the lung? The classical model of bronchial diameter scaling, as we saw,
predicts an exponential reduction in diameter measurements [358]. How-
ever the data indicate marked divergence of the observed anatomy from the
predicted exponential scaling of the average diameter of the bronchial tubes
beyond the tenth generation. These early arguments assume the existence
of a simple characteristic scale governing the decrease in bronchial dimen-
sions across generations. If, however, the lung is a fractal structure, no
characteristic smallest scale is present. Instead there should be a distribu-
tion of scales contributing to the variability in diameter at each generation.
Based on the preceding arguments, the subsequent dominant variation of
the average bronchial diameter with generation number would then be an
inverse power law, not an exponential [366, 367].
Recall that the arguments leading to the exponential form of the depen-
dence of the average diameter of the bronchial tube with generation number
z neglect the variability in the linear scales at each generation and use only
average values for the tube lengths and diameters. The fractal assumption,
on the other hand, focuses on this neglected variability and consequently
the observed deviation of the average diameter from a simple exponential
dependence on z results from the distribution in fluctuations in the linear
dimensions with generation. If the Weierstrass function F (z) is representa-
tive of the diameter of the bronchial tree, then the series has two distinct
contributions. One is the singular behavior of the inverse power law, which
is the dependence of the average bronchial diameter on generation number
and the other is an analytic, short-scale variation of the measured diameter
that is suppressed in the averaging process. The parameter b is a measure
of the interval between scales that contribute to the variation in the diam-
eter and the parameter a denotes the importance of that scale relative to
its adjacent scales. In the case of the lung, in addition to the single scale
assumed in the traditional models, the fractal model assumes that no single
scale is dominant, but instead there is an infinite sequence of scales each a
factor of b smaller than its neighbor that contribute to the structure. Each
such factor bn is weighted by a coefficient 1/an . This is exactly analogous
to the weighting of different frequencies in the Weierstrass function given
above.
A more formal strategy for incorporating the variability of the diameters
at each generation is now introduced into the discussion. In the classical
argument the average diameter is written as

d(z, γ) = d0 e^{−γz} (2.33)

where the notation is changed to explicitly account for the scale parame-
ter γ(= ln(1/q) > 0) in the contracting process of the bronchial tree. In
Eq.(2.33) there is a single value for γ, but in the bronchial tree there are
a number of such scales present at each generation. The fluctuations in
d(z, γ) could then be characterized by a distribution of the γ’s, that is,
P (γ)dγ is the probability that a particular scale in the interval (γ, γ + dγ)
is present in the measured diameter. The average diameter of an airway at
the z th generation is then formally given by


⟨d(z)⟩ = ∫_0^∞ d(z, γ) P(γ) dγ. (2.34)

If the branching process is sharply peaked at only a single scale, γ0 say, then

P (γ) = δ(γ − γ0 ) (2.35)

and Eq. (2.34) reduces to Eq. (2.33) with γ restricted to the single value γ0 .
However, from the data in Figure 2.2 it is clear that the measured average
diameter ⟨d(z)⟩ is not of the exponential form for the entire range of z
values.
Rather than prescribing a particular functional form to the probabil-
ity density, West, Bhargava and Goldberger [366] constructed a model, the
WBG or fractal model of the lung, based on the scaling of the parame-
ter γ. Consider a distribution P (γ) having a finite central moment, say a
mean value γ0 . Now, following Montroll and Shlesinger [241], WBG apply
a scaling mechanism such that P (γ) has a new mean value γ0 /b:

P (γ/γ0 ) → P (bγ/γ0 )/a (2.36)

and assume this occurs with relative frequency 1/a. WBG apply the scaling
again so that the scaled mean is again scaled and the new mean is γ0 /b2
and occurs with a relative frequency 1/a2 . This amplification process is
applied repeatedly and eventually generates the unnormalized distribution

G(ξ) = P(ξ) + (1/a) P(bξ) + (1/a^2) P(b^2 ξ) + · · · (2.37)

in terms of the dimensionless variable ξ = γ/γ0. Since the original distribu-
tion P(ξ) is normalized to unity the normalization integral from the series

Eq. (2.37) is

∫_0^∞ G(ξ) dξ = 1 + 1/(ab) + 1/(a^2 b^2) + · · · = 1/N(ab) (2.38)

where N(ab) is the normalization constant; the series is finite for ab > 1 and
in fact

N(ab) = 1 − 1/(ab) (2.39)

for an infinite series. WBG use the distribution Eq.(2.37) to evaluate the
observed average diameter, denoted by an overbar, and obtain

d̄(z) = N(ab) [⟨d(z)⟩ + (1/a)⟨d(z/b)⟩ + (1/a^2)⟨d(z/b^2)⟩ + · · ·]

normalized to the value in Eq.(2.38). This series can be written in the more
compact form

d̄(z) = (1/a) d̄(z/b) + N(ab)⟨d(z)⟩ (2.40)
as the number of terms in the series becomes infinite.
Note the renormalization group relation that results from this argument
when the second term on the rhs of Eq. (2.40) is dropped. Here again
we restrict our attention to the dominant behavior of the solution to this
renormalization group relation. If we separate the contributions to d̄(z)
into that due to singularities, denoted by ds(z), and that which is analytic,
denoted by da(z), then the singular part satisfies the functional equation

ds(z) = (1/a) ds(z/b) (2.41)
The solution to this equation is

ds(z) = A(z)/z^α (2.42)

where by direct substitution the power-law index is found to be
α = ln a/ln b (2.43)

and the periodic coefficient is

A(z) = A(z/b) = Σ_{n=−∞}^{∞} An e^{2πni ln z/ln b}. (2.44)

Thus, the average diameter is an inverse power law in the generation
index modulated by the slowly oscillating function A(z) just as is observed
in the data shown in Figure 2.14. In point of fact the present model pro-
vides an excellent fit to the lung data in four distinct species: dogs, rats,
hamsters and humans. The quality of this fit shown in Figure 2.14 strongly
suggests that the renormalization group relation captures a fundamental
property of the structure of the lung that is distinct from traditional scal-
ing. Furthermore, the data show the same type of scaling for bronchial
tube lengths and consequently volume.
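The solution Eqs. (2.42)-(2.44), truncated to its first harmonic, is the fitting function quoted in the caption of Figure 2.15; the parameter values in this sketch are illustrative rather than fitted lung data.

    import numpy as np

    # First-harmonic truncation of Eqs. (2.42)-(2.44):
    # d_s(z) = [A0 + A1 cos(2 pi ln z / ln b)] / z**alpha.
    A0, A1, b, alpha = 1.0, 0.2, 3.0, 0.9   # illustrative values only

    def diameter(z):
        return (A0 + A1 * np.cos(2 * np.pi * np.log(z) / np.log(b))) / z ** alpha

    z = np.arange(1, 24)
    # On log-log axes this is a straight line of slope -alpha carrying a slow
    # ripple of period ln b, the pattern seen in Figures 2.14 and 2.15.
    print(np.log10(diameter(z)))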

FIGURE 2.14. We plot the data of Weibel and Gomez [358] on log-log graph paper and
see that the dominant character of the functional dependence of the average bronchial
diameter on generation number is indeed an inverse power law. Thus on log-log graph
paper the relationship yields a straight line. In addition to this inverse power-law depen-
dence of the average diameter on z there appears to be a periodic variation of the data
about this power-law behavior. This harmonic variation is not restricted to the data
sets of humans but also appears in data obtained for dogs, rats and hamsters derived
from Raabe [281] and his colleagues. The harmonic variation is at least as pronounced
in these latter species as it is in humans. (From West et al. [366] with permission.)

On a structural level the notion of self-similarity can also be applied
to other complex physiological networks. The vascular system, like the
bronchial tree, is a ramifying network of tubes with multiple scale sizes. To
describe this network Cohn [59] introduced the notion of an ‘equivalent bi-
furcation system’. The equivalent bifurcation systems were examined to de-
termine the set of rules under which an idealized bifurcating system would
most completely fill space. The analogy was based on the assumption that
the branchings of the arterial system should be guided by some general
morphogenetic laws enabling blood to be supplied to the various parts of
the body in some optimally efficient manner. The branching rule in the
mathematical system is then to be interpreted in the physiological context.
This was among the first physiological applications of the self-similarity
idea, predating the formal definition of fractals, and was subsequently and
independently found and applied by West et al. [366].

FIGURE 2.15. The variation in diameter of the bronchial airways is depicted as a func-
tion of generation number for humans, rats, hamsters, and dogs. The modulated inverse
power law observed in the data of Raabe et al. [281] is readily captured by the function
F(z) = [A0 + A1 cos(2π ln z/ln b)]/z^α. (From Nelson et al. [246] with permission.)

Many other fractal-like structures in physiology are also readily identi-
fied by their multiple levels of self-similar branching or folding, for example,
the bile duct system, the urinary collecting tubes in the kidney, the con-
voluted surface of the brain, the lining of the bowel, neural networks, and
the placenta [127]. The fractal nature of the heart is particularly striking.
The cardiac surface is traversed and penetrated by a bifurcating system
of coronary arteries and veins. Within its chambers, branching strands of
connective tissue, called chordae tendineae, anchor the mitral and tricuspid
valves, and the electrical impulse is conducted by a fractal neural network,
the His-Purkinje system, embedded within the muscle.
I examined the fluctuation-tolerance of the growth process of the lung
and found that its fractal nature does in fact have a great deal of survival
potential [370]. In particular fractal structures were shown to be much
more error-tolerant than those produced by classical scaling; an observation
subsequently made by others as well [360, 388]. Such error tolerance is
important in all aspects of biology, including the origins of life itself [79].
The success of the fractal model of the lung suggests that nature may prefer
fractal structures to those generated by more traditional scaling. I suggested
that the reason as to why this is the case may be related to the tolerance
that fractal structures (processes) seem to possess over and above those of
classical structures (processes). Said differently, fractal processes are more
adaptive to internal changes and to changes in the environment than are
classical ones. Let us review the construction of a simple quantitative model
of error response to illustrate the difference between the classical and fractal
models.

2.2.3 Why fractal transport?


Why are fractals important in the design of complex networks? Barenblatt
and Monin [28] suggested that metabolic scaling might be a consequence
of the fractal nature of biology and subsequently investigators [388] deter-
mined that fractal topology can maximize the efficiency of nutrient trans-
port in physiologic networks. Weibel [360] maintains that the fractal design
principle can be observed in all manner of physiologic networks quantifying
the observations and speculations of Mandelbrot [217] as reviewed by West
[381] in 2006.
Another answer to the question of the ubiquity of fractals in complex
networks that was posited over two decades ago still seems reasonable and
consistent with the myriad of other reasons offered. The success of fractal
models suggests that nature may prefer fractal structures to those gen-
erated by classical scaling. I [372] conjectured that fractal processes and
structures are more adaptive to internal changes and to changes in the
environment than are classical processes and structures. Moreover I con-
structed a simple proof of this conjecture using a quantitative model of a
complex network response to random fluctuation emphasizing the differ-
ence between a classical and a fractal structure.
Consider some property of a network characterized by classical scaling
at the level z

Fz = F0 e^{−λz} (2.45)
compared with a fractal scaling characterization of the same property at
level z > 1

Gz = G0/z^λ. (2.46)

The network property of interest at generation z could be the diameter of
a tube, the number of branches, the length of a tube and so on. The two
functional forms are presented here somewhat abstractly but what is of
significance is the different functional dependence on the parameter λ. The
exponential Eq. (2.45) has emerged from a large number of optimization
arguments whereas the inverse power-law form Eq. (2.46) results from the
fractal arguments I first expressed with some colleagues [366].
I [371, 381] assumed the parameter λ is made up of two pieces: a constant
part λ0 that dominates in the absence of fluctuations and a random part ξ.
The random part can arise from unpredictable changes in the environment
during morphogenesis, non-systematic errors in the code generating the
physiologic structure or any of a number of other causes of irregularity.
The average diameter of a classically scaling airway is given by

⟨Fz⟩_ξ = F0 e^{−λ0 z} ⟨e^{−ξz}⟩_ξ (2.47)

where an average over an ensemble of realizations of the ξ-fluctuations is
denoted by ⟨·⟩_ξ. To evaluate this average we must specify the statistics of
the ξ−ensemble. For convenience I assume the statistics to be described by
a zero-centered, Normal distribution
 
P(ξ) = (1/√(2πσ^2)) exp(−ξ^2/(2σ^2)) (2.48)
where P (ξ)dξ is the probability that the random variable lies in the interval
(ξ, ξ + dξ). The first two moments of ξ are

⟨ξ⟩ = ∫_{−∞}^{∞} ξ P(ξ) dξ = 0 (2.49)

and

⟨ξ^2⟩ = ∫_{−∞}^{∞} ξ^2 P(ξ) dξ = σ^2 (2.50)

Thus, the average in Eq.(2.47) can be evaluated to be

⟨e^{−ξz}⟩_ξ = ∫_{−∞}^{∞} e^{−ξz} P(ξ) dξ = e^{σ^2 z^2/2} (2.51)

so that

⟨Fz⟩_ξ = F0 e^{−λ0 z} e^{σ^2 z^2/2} = Fz^0 exp(σ^2 z^2/2). (2.52)

Note that the choice of Normal statistics has no special significance here
except to provide closed form expressions for the error that can be used to
compare the classical and fractal models. The error in the classical scaling
model of the lung grows as exp(σ^2 z^2/2).
In the fractal model of the lung the same assumptions are made and the
average over the ξ-fluctuations is

⟨Gz⟩_ξ = (G0/z^{λ0}) ⟨1/z^ξ⟩_ξ (2.53)

using the Normal distribution yields

⟨1/z^ξ⟩_ξ = ∫_{−∞}^{∞} e^{−ξ ln z} P(ξ) dξ = exp(σ^2 (ln z)^2/2) (2.54)

resulting in

⟨Gz⟩_ξ = (G0/z^{λ0}) exp(σ^2 (ln z)^2/2) = Gz^0 exp(σ^2 (ln z)^2/2). (2.55)

Consequently the error in the fractal model grows as exp(σ^2 (ln z)^2/2).
The relative error generated by the fluctuations is given by the ratio of
the average value to the function in the absence of fluctuations and for the
two models we have the relative errors
εz = exp(σ^2 z^2/2) for the classical model and exp(σ^2 (ln z)^2/2) for the fractal model. (2.56)
The two error functions are graphed in Fig. 2.16 for fluctuations with
a variance σ 2 /2 = 0.01. At z = 15 the error in classical scaling is 9.5.
This enormous relative error means that the perturbed average property
at generation 15 differs by nearly an order of magnitude from what it would
be in an unperturbed network. A biological network with this sensitivity
to error would not survive for very long in the wild. For example, the di-
ameter of a bronchial airway in the human lung could not survive this level
of sensitivity [313]. However, the average diameter of the fractal network
changes by less than 10% at the distal point z = 20. The implication is
that the fractal network is relatively unresponsive to fluctuations.

FIGURE 2.16. The error between the model prediction and the prediction averaged over
a noisy parameter is shown for the classical model (upper curve) and the fractal model
(lower curve).

A fractal network is consequently very tolerant of variability. This error
tolerance can be traced back to the broadband nature of the distribution
in scale sizes of a fractal object. This distribution ascribes many scales to
each generation in the network. The scales introduced by the errors are
therefore already present in a fractal object. Thus, the fractal network is
preadapted to variation and is therefore not sensitive to change [367, 368].
These conclusions do not vary with modification in the assumed statistics
of the errors.
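The comparison is reproduced below; the only inputs are Eq. (2.56) and the variance σ^2/2 = 0.01 quoted above.

    import numpy as np

    # Relative errors of Eq. (2.56) with sigma**2 / 2 = 0.01 (Figure 2.16).
    half_var = 0.01

    def eps_classical(z):
        return np.exp(half_var * z ** 2)

    def eps_fractal(z):
        return np.exp(half_var * np.log(z) ** 2)

    print(eps_classical(15))   # about 9.5: classical scaling amplifies the noise
    print(eps_fractal(20))     # about 1.09: the fractal network barely responds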
Until now we have restricted our discussion to a static context, one de-
scribing the relevance of power-law scaling and fractal dimensionality to
anatomy. Such physiologic structures are only static in that they are the
‘fossil remnant’ of a morphogenetic process. It would seem reasonable there-
fore to suspect that morphogenesis itself could also be described as a fractal
process, but one which is time dependent. From the viewpoint of morpho-
genesis, the new scaling mechanisms have interesting implications regarding
the development of complex but stable structures using a minimal code.
One of the many challenges for future research is unraveling the molecular
and cellular mechanism whereby such scaling information is encoded and
processed.

2.3 Allometry Relations


West and West [387] point out that large animals live longer than small
ones, their hearts beat more slowly, and their breathing is more measured.
This dependence of physiological function on size motivated the formula-
tion of allometry relations (ARs). A resurgence of interest in this area of
research was initiated by the seminal paper of Geoff West (no relation to
me) and colleagues [388] who devised a fractal nutrient transport model
for how the capillary bed receives nourishment. In this section I review the
empirical links within complex physiologic phenomena between network
size and certain network properties. A concrete example of such an em-
pirical link identified in biology nearly two hundred years ago relates the
mass of an organ within an organism to the organism’s total body mass
(TBM). Grenfell et al. [139] among others point out that biologists have de-
scribed many such relationships linking body size to rates of physiological
processes interconnecting more than 21 orders of magnitude of TBM [232].
Over the course of time such interdependency became known as allometry,
literally meaning by a different measure and such links have been identified
in nearly every scientific discipline. Allometry has acquired a mathematical
description through its relations along with a number of theoretical inter-
pretations to account for its mathematical form. However no one theory
has been universally accepted as successfully explaining ARs in their many
guises so the corresponding origins remain controversial.
Cuvier [68] was the first to recognize that brain mass increases more
slowly than TBM as we proceed from small to large species within a taxon.
This empirical observation was subsequently made between many other bio-
logical observables and was first expressed mathematically as an allometric
relation by Snell [323]:

brain weight = a (body weight)^b (2.57)

where on log-log graph paper a is the intercept with the vertical axis and
b is the slope of the line segment. Mammalian neocortical quantities X
have subsequently been empirically determined to change as a function
of neocortical gray matter volume Y as an AR. The neocortical allome-
try exponent was first measured by Tower [339] for neuron density to be
approximately −1/3. The total surface area of the mammalian brain was
found to have an allometry exponent of approximately 8/9 [160, 173, 280].
Changizi [56] points out that the neocortex undergoes a complex transfor-
mation covering the five orders of magnitude from mouse to whale but the
ARs persist; those mentioned here along with many others.

The generic equation of interest interrelates two observables in a complex
network X and Y, at least one of which is a measure of size and in a living
network this measure of size is taken to be the mass. The theoretical AR is

Y = aX^b (2.58)
and by convention the variable on the right is the measure of size, such as
the TBM. Allometry laws in the life sciences, as stressed by Gould [134],
fall into two distinct groups. The intraspecies AR relates a property of
an organism within a species to its TBM. The interspecies AR relates a
property across species such as the basal metabolic rate (BMR) to TBM
[52, 309].
Equation (2.58) looks very much like the scaling relations that have be-
come so popular in the study of complex networks over the last two decades
[4, 47, 247, 356, 383]. Historically the nonlinear nature of Eq.(2.58) has
precluded the direct fitting of the equation to data. A logarithmic trans-
formation is traditionally made and a linear regression to the data on the
equation

ln Y = ln a + b ln X (2.59)
yields estimates of the parameters a and b.
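A minimal sketch of this conventional procedure, using synthetic data generated with the parameter values a = 0.02 and b = 0.71 quoted later in this section:

    import numpy as np

    # Estimate the allometry parameters of Eq. (2.59) by linear regression
    # of ln Y on ln X. The data pairs here are synthetic placeholders.
    rng = np.random.default_rng(0)
    X = np.exp(rng.uniform(0.0, 10.0, 200))                   # sizes (illustrative)
    Y = 0.02 * X ** 0.71 * np.exp(rng.normal(0.0, 0.1, 200))  # AR plus scatter

    b_hat, ln_a_hat = np.polyfit(np.log(X), np.log(Y), 1)
    print("a =", np.exp(ln_a_hat), "b =", b_hat)              # near 0.02 and 0.71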
To clarify the discussion distinguishing between the intraspecies and in-
terspecies metabolic allometry relations West and West [387] introduce the
index i to denote a species and a second index j to denote an individual
within that species. In this way the TBM is denoted X = Mij and the
BMR by Y = Bij and in this way the intraspecies metabolic AR is

Bij = a Mij^b. (2.60)
Using the above notation the average size of the species i, such as average
TBM, is
⟨Mi⟩ ≡ (1/N) Σ_{j=1}^{N} Mij (2.61)

and the average function for the species i, such as BMR, is

⟨Bi⟩ ≡ (1/N) Σ_{j=1}^{N} Bij (2.62)

so that the interspecies metabolic AR is written in general as


⟨Bi⟩ = a ⟨Mi⟩^b. (2.63)

These two kinds of AR are distinctly different from one another and the
models developed to determine the theoretical forms of the allometry coef-
ficient a and allometry exponent b in the two cases are quite varied. Note
that both ARs are traditionally expressed with the indices suppressed, so
that both Mij and ⟨Mi⟩ are usually written as M or m, occasionally re-
sulting in confusion between the two forms of the ARs.
Another quantity of interest is the time; not the chronological time mea-
sured by a clock but the intrinsic time of a biological process first called
biological time by Hill [157]. Hill reasoned that since so many properties of
an organism change with size that time itself may scale with TBM. Lind-
stedt and Calder [199, 200] develop this concept further and determine
experimentally that biological time, such as species longevity, satisfies an
AR with Y being the biological time. Lindstedt et al. [201] clarify that
biological time τ is an internal mass-dependent time scale

τ = αM^β (2.64)

to which the durations of biological events are entrained. They present a
partial list of such events that includes breath time, time between heart
beats, blood circulation time, and time to reach sexual maturity. In all
these examples and many others the allometry exponent clusters around
the theoretical value β = 1/4. Note that the total energy of an organism
seen as a bioreactor is proportional to volume (M) and the biological time
is proportional to M^{1/4}, so the metabolic rate (energy/time) would scale
as M^{3/4}. The value of the allometry exponent b = 3/4 is still the subject
of controversy.

2.3.1 Empirical Allometry


Sir Julian Huxley [165], grandson of the Huxley of Darwin evolution fame,
brother of the novelist Aldous (Brave New World) and half-brother of the
biophysicist Andrew (of the Hodgkin-Huxley equations), proposed that two
parts of the same organism have proportional rates of growth. In this way if
Y is a living subnetwork observable with growth rate γ and X is a measure
of the size of a living host network with growth rate ϑ then the fractional
increase in the two is denoted according to Huxley by
dX/(ϑX) = dY/(γY). (2.65)
This equation can be directly integrated to obtain the time-independent
intraspecies AR given by Eq.(2.60) where a and b (= γ/ϑ) are empirically
determined.
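For completeness, the integration is elementary:

    % Integrating Huxley's proportional-growth relation, Eq. (2.65):
    \int \frac{dX}{\vartheta X} = \int \frac{dY}{\gamma Y}
    \quad\Longrightarrow\quad
    \frac{1}{\vartheta} \ln X + c = \frac{1}{\gamma} \ln Y
    \quad\Longrightarrow\quad
    Y = a X^{\gamma/\vartheta}, \qquad a = e^{\gamma c}, \quad b = \gamma/\vartheta .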

In biology/physiology the ARs associate functional variables with mea-
sures of body size such as the average TBM and the average BMR as given
by Eq.(2.63) as distinct from the case considered by Huxley. The most
prevalent theories of metabolic allometry argue for either b = 2/3, based
on body cooling, or b = 3/4, based on energy efficiency. Selected data
sets have been used by various investigators to support either of these two
values. However, there is also strong evidence that there is no universal
value of b that is satisfied by all metabolic data [44, 117, 118, 302]. On
the other hand, West [389] argues that living networks do have universal
scaling laws. West, Brown and Enquist [388] present a theory, the WBE
model of intraspecies AR, which we review subsequently, that has as one of
its tenets the existence of hierarchical fractal-like branching networks for
the delivery of nutrients resulting in b = 3/4. They attribute this origin of
AR to evolution’s solution to the grand challenge of how highly complex,
self-sustaining, reproducing, living networks service enormous numbers of
localized microscopic units in an efficient and ‘democratic’ way. Their con-
clusion, like that of the analysis presented in the previous section, was that
fractal networks have an evolutionary advantage over those that scale clas-
sically, independently of what the networks distribute from macroscopic
reservoirs to microscopic sites [360, 372].
The distinction between interspecies and intraspecies ARs depends on
the origin of the statistical fluctuations; whether they are from multiple
measurements within an individual or from multiple measurements across
species. West and West [386] address the statistical nature of the fluctua-
tions in the AR models using data from the literature. The data relating
average BMR that measures the average energy expended by a given species
in watts to the average TBM of that species in kilograms for 391 species of
mammal is plotted in Fig. 2.17 and also in Heusner [154] as well as in Dodds
et al. [77]. A fit of Eq.(2.59) to these data that minimizes the mean-square
error is a straight line on double logarithmic graph paper and was found to
have slope b = 0.71±0.008 so that empirically 2/3 < b < 3/4 and the allom-
etry coefficient a = 0.02. Heusner [153] had somewhat earlier questioned
Kleiber’s value of 3/4 and concluded from data analysis that this value
of 3/4 was a statistical artifact. Feldman and McMahon [96] agreed with
Heusner’s conclusions, but suggested that there was no compelling reason
for the intraspecies and interspecies allometric exponents to be the same,
with the intraspecies exponent being 2/3 based on geometric similarity and
the interspecies exponent being 3/4 based on elastic similarity.
Recently Savage et al. [301] obtained the same phenomenological value of
b using 626 species where the 95% confidence interval excludes both 3/4 and
2/3 as the value of the allometry exponent. These authors maintain that
because of the overwhelming number of small species (477 species with M <
1kg) that this estimate of the allometry exponent is biased. Consequently,

[Figure 2.17: ln BMR plotted against ln TBM.]

FIGURE 2.17. The linear regression to Eq.(2.59) for Heusner's (1991) data is indicated
by the line segment. The slope of the dashed line segment is 0.71 ± 0.008. (From West
and West [386] with permission.)

they partition the data into 52 bins of size 0.1 on the logarithmic scale and
average the data in each bin. The resulting 52 average data points define
a uniform distribution and are fit to a straight line segment with slope
b = 0.737 over which the 95% confidence interval includes 3/4 but excludes
2/3. They accept this latter result as support of the allometry exponent
of 3/4 over 2/3. However they also point out that there is considerable
variation in the data around 3/4, which they attribute to sample size,
range of variation in mass, experimental methods and other such procedural
sources. Using the data of Peters [270] for biological rates they construct
a histogram that is seen to peak at 3/4. However they do not explore the
consequences of treating the allometry parameters themselves as stochastic
quantities.
White and Seymour [392] argue that contamination of BMR data with
non-basal measurements is likely to increase the allometry exponent even
if the contamination is randomly distributed with respect to the TBM.
They conclude that the allometry exponent for true BMR is statistically
indistinguishable from 2/3 and that the higher measured exponents may
well be the result of such contamination. Another interesting observation
they make is that the calculation of the AR regression line conceals the
adaptive variation in the BMR.

Now focus attention on modeling rather than fitting the allometry param-
eters. We begin with the deterministic fractal model of nutrient transport
constructed by West et al. [388] and follow with the statistical fractal model
of West and West [385]. It is probably worth pointing out that the Geoff
West in the first reference is not related to the Bruce (me) and Damien
(my son) West in the second reference.

2.3.2 WBE model


The WBE [388] quantitative model of metabolic allometry has had signif-
icant impact on how a large portion of the physiology/biology com-
munity understands metabolic allometric relations. This model develops a
fractal representation of nutrient distribution within a complex network in
which the sizes of tubes decrease in a well prescribed manner with increas-
ing generation number. The fractal scaling in the transport network is a
consequence of the constraints imposed by three assumptions: 1) The entire
volume of the organism is filled with a space-filling fractal-like branching
network. 2) The tubes at the terminus of the network are size-invariant. 3)
The energy required to distribute resources using this network is minimal,
that is, the hydrodynamic resistance of the network is minimized. We fol-
low the reasoning of WBE in this presentation and note but do not accept
their claim that their fractal model is the origin of universal scaling laws of
biology [389, 390] with b = 3/4. However we emphasize at the start that the
existence of a single theoretical allometry exponent for metabolic allometry
has been questioned by a number of investigators [77, 154, 190, 191] based
on data analysis.
They [388] maintain that the total number of terminal branches scales
with TBM as
NT = (M/M0)^b (2.66)

where M0 is introduced as a normalization scale. For a network that gen-
erates n new branches in each generation there is a geometric progression
so that the total number of branches at generation k is Nk = n^k. Conse-
quently at the capillaries where the network terminates the self-similarity
of the fractal network yields NT = n^N, which together with Eq.(2.66) yields

N = b ln(M/M0)/ln n. (2.67)
WBE introduce two parameters to characterize the network branching
process, one determines the reduction in the radii of tubes with generation
as was done in the energy minimization arguments [287] and the other
determines the reduction in tube length:

βk ≡ r_{k+1}/r_k and γk ≡ l_{k+1}/l_k. (2.68)
Moreover WBE use Rashevsky’s energy minimization argument that the
transport of nutrients in complex networks is maximally efficient when the
ratio parameter βk is independent of generation number and refer to this
as the fractal scaling assumption. They assert that the total fluid volume
is proportional to TBM as a consequence of energy minimization so that

b = −ln n/ln(γβ^2) (2.69)
The estimates of the ratio parameters are done making two separate
assumptions. To estimate the ratio of lengths WBE assume that the volume
of a tube at generation k can be replaced by a spherical volume of diameter
lk and in this way implement the space-filling assumption. The conservation
of volume between generations therefore leads to

γ = γk = n^{−1/3}. (2.70)
WBE maintain that Eq. (2.70) is a generic property of all the space-filling
fractal networks they consider. A separate and distinct assumption is made
to estimate β using the classic rigid-pipe model to equate the cross-sectional
areas between successive generations to obtain

πr_j^2 = nπr_{j+1}^2 (2.71)
so that using Eq. (2.68)

β = βk = n^{−1/2}. (2.72)
Note that this differs from the ratio parameter obtained using energy min-
imization, that is Murray’s law or the Hess-Murray law, which WBE main-
tain plays only a minor role in allometric scaling. Inserting Eqs. (2.70) and
(2.72) into Eq. (2.69) yields the sought after exponent

b = −ln n/ln(n^{−1/3} n^{−1}) = 3/4 (2.73)

and the metabolic AR becomes

B = aM^{3/4}. (2.74)

Note that this is the intraspecies AR expressed in terms of the BMR and
TBM and is not related to the interspecies AR expressed in terms of
the average BMR and average TBM given by Eq. (2.63).
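Eqs. (2.69)-(2.73) reduce to one line of arithmetic; the sketch below also evaluates the Hess-Murray alternative β = n^{−1/3} discussed next, which yields b = 1.

    import math

    # Allometry exponent from Eq. (2.69), b = -ln n / ln(gamma * beta**2),
    # with the space-filling value gamma = n**(-1/3) of Eq. (2.70).
    n = 2                                    # bifurcating network
    gamma = n ** (-1.0 / 3.0)
    for label, beta in [("area-preserving, Eq. (2.72):", n ** (-1.0 / 2.0)),
                        ("Hess-Murray law:", n ** (-1.0 / 3.0))]:
        b = -math.log(n) / math.log(gamma * beta ** 2)
        print(label, b)                      # 0.75 and 1.0 respectively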
WBE point out that Eq. (2.74) is a consequence of a strictly geometri-
cal argument applying only to those networks that exhibit area-preserving
branching. Moreover the fluid velocity is constant throughout the network
and is independent of size. They go on to say that these features are a
natural consequence of the idealized vessel-bundle structure of plant vas-
cular networks in which area-preserving branching arises automatically because each branch is assumed to be a bundle of n^{N-k} elementary vessels of the same radius. They recognize that this is not the situation with vascular blood
flow where the beating of the heart produces a pulsating flow that gen-
erates a very different kind of scaling. Area-preserving is also not true in
the mammalian lung where there is a distribution of radii at each level of
branching as we discussed.
A physical property that the area preserving condition violates is that
blood slows down in going from the aorta to the capillary bed. Here WBE
return to the principle of energy minimization and as stated by West [389]
assert that to sustain a given metabolic rate in an organism of fixed mass,
with a given volume of blood, the cardiac output is minimized subject to a
space-filling geometry. This variation is essentially equivalent to minimizing
the total impedance since the flow rate is constant and again yields the
Hess-Murray law β = n−1/3 corresponding to area-increasing branching
[302, 320]. This change in scaling from the area-preserving n−1/2 to the
area-increasing n−1/3 solves the problem of slowing down blood flow to
accommodate diffusion at the capillary level. Moreover, the variation also
leads to an allometry exponent b = 1. Such an isometric scaling suggests
that plants and animals follow different allometry scaling relations as was
found [288, 320].
A detailed treatment of pulsatile flow is not straightforward and will not be presented here, but see Savage et al. [302], Silva et al. [320] and Apol et al. [11] for details and commentary in the context of the WBE model. We do note that for blood flow the walls of the tubes are elastic and consequently the impedance is complex, as is the dispersion relation that determines the velocity of the wave and its frequency. Consequently pulsatile flow is attenuated [53, 108] and WBE argue that the impedance changes its r-dependence from r^{-4} for large tubes to r^{-2} for small tubes. The variation therefore changes from area-preserving flow β = n^{-1/2} for large vessels to dissipative flow β = n^{-1/3} for small vessels where blood flow is forced to slow. Thus β is k-dependent in the WBE model for pulsatile flow; at an intermediate value of k the scaling changes, and this changeover value is species dependent. These results are contradicted in the more extensive
analysis of pulsatile flow by Apol et al. [11], who conclude that Kleiber's law b = 3/4 remains theoretically unexplained.
Although the WBE model reinvigorated the discussion of metabolic al-
lometry over the past decade that model has not been universally accepted.
Kozlowski and Konarzewski [190] (KK1) critique the apparent limitations
of the WBE model assumptions. The size-invariance assumption regard-
ing the terminal branch of the network made in the WBE model has been
interpreted in KK1 to mean that N_T ∝ M, that is, the terminal number scales isometrically with size. This scaling causes the number of levels in
the network to be a function of body size since more levels are required
to fill a larger volume with the same density of final vessels. KK1 main-
tain that the size-invariance assumption leads to a contradiction within the
WBE model.
In rebuttal Brown, West and Enquist [46] (BWE) assert that KK1 make
a fundamental error of interpretation of the size-invariant assumption. The
gist of the error is that N_T V_T ∝ M so that N_T ∝ M^{3/4} and V_T ∝ M^{1/4} and
not the isometric scaling discussed in KK1. BWE go on to say that: “Having
got this critical part wrong, they went on to make incorrect calculations
and to draw erroneous conclusions about scaling...”
Of course, in their response to the rebuttal Kozlowski and Konarzewski
[191] (KK2) contend that BWE had not addressed the logical inconsisten-
cies they had pointed out. KK2, rather than capitulating, refine their arguments and emphasize that choosing N_T ∝ M^{3/4} is an arbitrary assumption
on the part of WBE and is not proven. Cyr and Walker [69] refer to this as
the illusion of mechanistic understanding and maintain that after a century
of work the jury is still out on the magnitude of the allometric exponents.
A quite different critique comes from Savage et al. [302] who emphasize
that the WBE model is only valid in the limit N → ∞, that is, for infinite
network size (body mass) and that the actual allometric exponent predicted
depends on the sizes of the organisms considered. The allometric relation
between BMR and TBM with corrections for finite N in the WBE model
is given by

$$ M = a_1 B + a_2 B^{4/3} \qquad (2.75) $$

from which it is clear that b = 3/4 only occurs when a_2 B^{1/3} ≫ a_1, which
is not the case for finite size bodies. In their original publication WBE ac-
knowledged the potential importance of such finite size effects, especially
for small animals, but the magnitude of the effect remained unclear. Using
explicit expressions for the coefficients in Eq. (2.75) from the WBE model
Savage et al. [302] show that when accounting for these corrections over a
size range spanning the eight orders of magnitude observed in mammals a
scaling exponent b = 0.81 is obtained. Moreover in addition to this strong

60709_8577 -Txts#150Q.indd 71 19/10/12 4:28 PM


72 Physiology in Fractal Dimensions

deviation from the desired value of 3/4 there is a curvilinear relation be-
tween the TBM and the BMR in the WBE model given by
 
4 a1
ln M = ln a2 + ln B + ln 1 + B −1/3 (2.76)
3 a2
that behaves in the opposite direction to that observed in the data. Conse-
quently they conclude that the WBE model needs to be amended and/or
the data analysis needs reassessment to resolve this discrepancy. A start
in this direction has been made by Kolokotrones et al. [187]. Agutter and
Tuszynski [2] also review the evidence that the fractal network theory for
the two-variable AR is invalid.
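The finite-size effect identified by Savage et al. [302] is simple to reproduce. In the sketch below the coefficients a_1 and a_2 of Eq. (2.75) are set to arbitrary illustrative values (the explicit WBE-derived expressions are not reproduced here); a straight-line fit of ln B against ln M over eight decades then yields an effective exponent well above 3/4.

```python
import numpy as np

a1, a2 = 1.0, 1.0                        # illustrative coefficients only

B = np.logspace(0, 8, 400)               # metabolic rates spanning 8 decades
M = a1 * B + a2 * B**(4.0/3.0)           # finite-size WBE relation, Eq. (2.75)

# Effective allometry exponent: slope of ln B against ln M
b_eff = np.polyfit(np.log(M), np.log(B), 1)[0]
print(b_eff)   # > 3/4: the pure power law needs a2 * B**(1/3) >> a1
```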
Another variation on this theme was made by Price et al. [278] who re-
lax the fractal scaling assumptions of WBE and show that allometry expo-
nents are highly constrained and covary according to specific quantitative
functions. Their results emphasize the importance of network geometry in determining the allometry exponents and support the hypothesis that natural selection minimizes hydrodynamic resistance.
Prior to WBE there was no unified theoretical explanation of quarter-
power scaling. Banavar et al. [26] show that the 3/4 exponent emerges
naturally as an upper bound for the scaling of metabolic rate in the ra-
dial explosion network and in the hierarchical branching networks models
and they point out that quarter-power scaling can arise even when the
underlying network is not fractal.
Finally, Weibel [361] presents a simple and compelling argument on the
limitations of the WBE model in terms of transitioning from BMR to the
maximal metabolic rate (MMR) induced by exercise. The AR for MMR
has an exponent b = 0.86 rather than 3/4, so that a different approach
to determining the exponent is needed. Painter [265] demonstrates that
the empirical allometry exponent for MMR can be obtained in the manner
pioneered by WBE by using the Hess-Murray law for the scaling of branch
sizes between levels.
Weibel [361] argues that a single cause for the power function arising
from a fractal network is not as reasonable as a model involving multiple
causes, see also Agutter and Tuszynski [2]. Darveau et al. [71] propose
such a model recognizing that the metabolic rate is a complex property
resulting from a combination of functions. West et al. [391] and Banavar et
al. [24] demonstrate that the mathematics in the distributed control model
of Darveau et al. is fundamentally flawed. In their reply Darveau et al. [72]
do not contest the mathematical criticism and instead point out consistency
of the multiple-cause model of metabolic scaling with what is known from
biochemical [324] and physiological [174] analysis of metabolic control. The
notion of distributed control remains an attractive alternative to the single-cause models of metabolic AR. A mathematically rigorous development of AR with fractal responses from multiple causes was recently given by Vlad et al. [350] in a general context. This latter approach may answer the formal questions posed by many of these critics. But I could not see a way to present that material to the non-mathematician.

2.3.3 WW model
A logarithmic transformation is traditionally made on Eqs. (2.60) and (2.63) resulting in a linear regression on intraspecies data of the equation

$$ \ln B_{ij} = \ln a + b \ln M_{ij} \qquad (2.77) $$

or on interspecies data of the equation

$$ \ln B_i = \ln a + b \ln M_i \qquad (2.78) $$

to yield estimates of the parameters a and b. The fitting of these transformed ARs to data finds a great deal of variability such as depicted in
Figure 2.17. This variability is the underlying reason for my being so pedan-
tic in the presentation of the two forms of AR. Linear regression analysis
focuses on the conditional probability distribution of Y given X and is
often used to quantify the strength of the relation between the two vari-
ables or for forecasting. This is the interpretation that is often implicitly
assumed in the data analysis to determine the AR. However the fact that
the TBM M and BMR B are measured independently indicates that this
interpretation of linear regression is not appropriate for the data analysis
using Eq. (2.77) or (2.78). The independent measurements suggest that it
is more appropriate to address the joint probability distribution for bivari-
ate analysis of the data [386].
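The practical difference between the regression philosophies can be seen with synthetic data. This sketch (all parameter values invented for illustration) compares ordinary least squares, which implicitly assigns all the error to B, with the reduced-major-axis estimate, one common symmetric alternative for bivariate data; when M also carries error the OLS slope is attenuated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic interspecies data with scatter in both variables (illustrative)
true_b, n = 0.71, 500
lnM_true = rng.uniform(0.0, 12.0, n)
lnB = np.log(0.07) + true_b * lnM_true + rng.normal(0.0, 0.4, n)
lnM = lnM_true + rng.normal(0.0, 0.4, n)        # M is measured with error too

b_ols = np.polyfit(lnM, lnB, 1)[0]              # treats M as error-free
r = np.corrcoef(lnM, lnB)[0, 1]
b_rma = np.sign(r) * np.std(lnB) / np.std(lnM)  # symmetric in B and M

print(b_ols, b_rma)   # OLS is pulled below 0.71; the symmetric fit is closer
```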
Modern explanations of AR begin with the application of fractal geome-
try as done in the last subsection and fractal statistics to scaling phenomena
as we do now. My son Damien and I [386] emphasize that the intraspecies
and interspecies ARs are not the same in a model we developed (WW) to
show that the interspecies AR can only be derived from the intraspecies
one for a narrow distribution of fluctuations. This condition is not satisfied
by metabolic data and has been shown separately for the fluctuations in
avian and mammalian data sets, which have been found to have distributions
that are Pareto in form. A number of reductionist arguments conclude that the allometry exponent is universal; however, West and West [386] derive
a deterministic relation between the allometry exponent and the allometry
coefficient using the fractional calculus [386, 387]. The derived co-variation
of the allometry parameters is shown to violate the universality assumption.

Over the past 15 years there has been an avalanche of theory [69, 85, 278,
302, 350, 388] and statistical analyses [44, 77, 110, 135, 184, 301] in biology
and ecology attempting to pin down a value of the allometry exponent. The
most prevalent deterministic theories of metabolic allometry argue either
for b = 2/3, based on the geometry of body cooling, or b = 3/4, based
on some variation of fractal nutrient transport networks. Selected data
sets have been used by various investigators to support either of these two
values. White and Seymour [392] and Glazier [117] review the empirical
evidence for and against universal scaling in mammalian metabolism, that
is, having a specific value for b, and conclude that the empirical evidence
does not support the existence of a universal metabolic allometry exponent.
On the other hand, a number of theoretical studies [25, 84, 389] maintain
that living networks ought to have universal scaling laws.
Recently the debate has shifted away from the allometry exponent hav-
ing a single value to whether it has a continuum of values in the interval
2/3 ≤ b ≤ 1 and why [117, 119]. The most recent arguments point out that
allometry relations themselves may be only a first-order approximation to
relations involving nonlinearities beyond simple scaling [187]. We do not
address this last point here except to note that the additional nonlinearity
explains only an additional 0.3% of the total variation [187] and does not
directly influence the theory presented here.
Statistics involves the frequency of the occurrence of events and statisti-
cal methods were used by Glazier [119] to analyze the metabolic allometry
relations and he determined that the metabolic exponent (b) and metabolic
level (log a) co-vary. He posited idealized boundary constraints: the surface-
area limits fluxes of metabolic resources, waste, or heat (scaling allometri-
cally with b = 2/3); volume limits energy use or power production (scaling
isometrically with b = 1). He presents a logical argument for the relative
influence of these boundary constraints on the metabolic level (log a). The
resulting form of the co-variation function is V or U shaped, with maxima
at b = 1 and a minimum at b = 2/3.
Probability theory involves predicting the likelihood of future events and
is used in this section to determine the form of the function relating b and
log a entailed by the phenomenological metabolism probability densities.
Using the probability calculus we show that although the statistical anal-
ysis of the metabolic data of Heusner [153] and of McNab [233] yield an
allometry exponent in the interval 2/3 ≤ b ≤ 3/4 the corresponding proba-
bility densities entail a linear relation between the metabolic exponent and
metabolic level. Consequently, we derive a V-shaped functional form of the
co-variation of the allometry parameters that had been phenomenologically
obtained by Glazier [119].

The data for the average BMR and average TBM across species are typ-
ically related through the logarithm of the interspecies allometry relation
Eq. (2.63) and is most often fit by minimizing the mean square error of
the linear regression of Eq. (2.78) to data. Warton et al. [353] emphasize that there are two sources of error in the allometry relation: measurement error and equation error. Equation error, also called natural variability, has to do with the fact that the AR is a mathematical model, so this error is not physical and cannot be directly measured. Moreover there is no correct way
to partition equation error in the B and M directions. In particular the
causality implicit in choosing M as the independent variable and B as the
dependent variable as is often done in applying linear regression analysis is
unwarranted. The lines fitted to Eq. (2.78) by linear regression analysis are
not predictive, they merely provide a symmetric summary of the relation
between B and M [353].
The natural variability and measurement error in the metabolic allom-
etry data are manifest in fluctuations in the (B, M)-plane and linear re-
gression analysis determines the fitted values of the allometry parameters
a = α and b = β. West and West [386] argue that since one cannot
uniquely attribute fluctuations to one variable or the other a proper the-
ory must achieve consistency between the extremes, that is, yield the same
results if all the fluctuations were in one variable or the other.
One may also consider the fluctuations to reside in the allometry parameters in the (a, b)-plane instead of the average physiologic variables in the (B, M)-plane. In this parametric representation we interpret the variations in measurements to be given by fluctuations in the allometry coefficient

$$ a' = \frac{a}{\alpha} = \frac{B}{\alpha M^{\beta}} \qquad (2.79) $$

or in the allometry exponent

$$ b - \beta = \frac{\ln(B/\alpha)}{\ln M}. \qquad (2.80) $$
Using Heusner's data [153] it is possible to construct histograms of the probability density functions (pdf) for both allometry parameters. The pdf for the allometry exponent b with the allometry coefficient held fixed at a = α is determined to be that of Laplace [386]:

$$ G(b;\alpha) = \frac{\gamma}{2}\, e^{-\gamma |b-\beta|} \qquad (2.81) $$

as depicted in Figure 2.18. The parameters empirically fit to the histogram are γ = 12.85 and β = 0.71, with quality of fit r² = 0.97.


FIGURE 2.18. The deviations from the prediction of the AR using Heusner's data [153] partitioned into 20 equal-sized bins on a logarithmic scale. The solid line segment is the best fit of Eq. (2.81) with Δb ≡ b − β to the twenty histogram numbers, and the quality of the fit is measured by the correlation coefficient r² = 0.97 with γ = 12.85. (From [386] with permission.)

Using the same data it is also possible to determine the pdf for the allometry coefficient a with the allometry exponent held fixed at b = β to obtain a Pareto-like distribution. The probability density, in terms of the normalized variable a' = a/α [386], is:

$$ P(a';\beta) = \frac{\mu}{2} \begin{cases} (a')^{\mu-1} & \text{for } a' \le 1 \\ (a')^{-\mu-1} & \text{for } a' \ge 1 \end{cases} \qquad (2.82) $$

as depicted in Figure 2.19. The empirically fit parameter is μ = 2.79 with quality of fit r² = 0.98. It should be mentioned that the same distributions, with slightly different parameter values, are obtained using the avian data of McNab [233]. Dodds et al. [77] considered a similar shift in per-
spective and examined the statistics for the allometry coefficient obtaining
a log-normal distribution.
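As a sketch of how the two widths could be estimated directly from paired (B, M) measurements, note that under Eq. (2.82) the quantity |ln a′| is exponentially distributed with rate μ, and under Eq. (2.81) the exponent fluctuations are Laplace with rate γ; both therefore have one-line maximum-likelihood estimates. The data below are placeholders standing in for a table such as Heusner's, and the estimator definitions are my own reading of Eqs. (2.79)-(2.82).

```python
import numpy as np

def fit_fluctuation_pdfs(B, M, alpha, beta):
    """Sketch: ML estimates of the Laplace width gamma, Eq. (2.81), and the
    Pareto index mu, Eq. (2.82), from metabolic data."""
    db = np.log(B / alpha) / np.log(M) - beta    # exponent fluctuation about beta
    a_prime = B / (alpha * M**beta)              # a' = a/alpha, Eq. (2.79)
    gamma = 1.0 / np.mean(np.abs(db))            # Laplace: 1 / mean |b - beta|
    mu = 1.0 / np.mean(np.abs(np.log(a_prime)))  # exponential rate of |ln a'|
    return gamma, mu

# Placeholder data standing in for (TBM, BMR) measurements
rng = np.random.default_rng(1)
M = np.exp(rng.uniform(2.0, 10.0, 400))
B = 0.07 * M**0.71 * np.exp(rng.laplace(0.0, 0.08, 400))
print(fit_fluctuation_pdfs(B, M, alpha=0.07, beta=0.71))
```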
A given fluctuation in the (a', b)-plane is equally likely to be the result of a random variation in the allometry coefficient or in the allometry exponent and therefore the probability of either occurring should be the same:

$$ G(b;\alpha)\, db = P(a';\beta)\, da'. \qquad (2.83) $$

If Eq. (2.83) is to be valid the allometry parameters must be functionally related, so we assume:

$$ b = \beta + f(a'). \qquad (2.84) $$


FIGURE 2.19. The deviations from the prediction of the AR a' = a/α using Heusner's data [153] partitioned into 20 equal-sized bins on a logarithmic scale. The solid line segment is the linear regression on Eq. (2.82) to the twenty histogram numbers, which yields the power-law index μ = 2.79 and the quality of the fit measured by the correlation coefficient r² = 0.98.

The unknown function f(a') is determined by substituting this equation into Eq. (2.83) to obtain the differential equation

$$ \frac{db}{da'} = \frac{df(a')}{da'} = \frac{P(a';\beta)}{G(b;\alpha)}. \qquad (2.85) $$
Equation (2.85) defines a relation between the allometry parameters through the function f(a') in terms of the empirical pdf's. Inserting the empirical distributions into Eq. (2.85) and using Eq. (2.84) to obtain G(β + f(a'); α), the resulting differential equation

$$ \frac{df(a')}{da'} = \frac{\mu}{\gamma} \exp\left[\gamma\, |f(a')|\right] \begin{cases} (a')^{\mu-1} & \text{for } 0 < a' \le 1 \\ (a')^{-\mu-1} & \text{for } a' \ge 1 \end{cases} \qquad (2.86) $$

must be solved.
By inspection the values of f(a') in Eq. (2.86), including a constant of integration C, in the indicated domains are

$$ f(a') = C \begin{cases} -\ln a' & \text{for } a' \le 1 \\ \ln a' & \text{for } a' \ge 1 \end{cases}. \qquad (2.87) $$
Tailoring the solution to the metabolic level boundaries discussed by Glazier [119] we introduce constraints on the solution such that the maximum value of the allometry exponent is b = 1 and the unknown function has the value

$$ f(a') = 0.3 \quad \text{at} \quad \log a' = \pm 2 \qquad (2.88) $$

resulting in C = 0.346. Consequently Eq. (2.87) can be written in compact form:

$$ f(a') = 0.15\, |\log a'|. \qquad (2.89) $$


Thus, substituting Eq. (2.89) into Eq. (2.84) and noting that β = 0.71, the allometry exponent is given by the relation

$$ b = 0.71 + 0.15\, |\log a'| \qquad (2.90) $$

which has the V-shaped form of the phenomenological expression constructed from data by Glazier [119] for both intraspecies and interspecies allometry relations.
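The solution by inspection can be verified numerically. This sketch (mine, using the empirical γ and μ quoted above) integrates Eq. (2.86) on a′ ≥ 1 with a forward Euler step from the initial condition f(1) = 0; the result is linear in ln a′, confirming the logarithmic form of Eq. (2.87). Note that this particular initial condition fixes the slope at μ/γ ≈ 0.22, whereas the text instead determines C from Glazier's boundary constraints.

```python
import numpy as np

gamma_, mu = 12.85, 2.79                # empirical fit parameters from the text

# Forward Euler integration of Eq. (2.86) on a' >= 1, starting from f(1) = 0
a = np.linspace(1.0, 100.0, 50001)
f = np.zeros_like(a)
h = a[1] - a[0]
for i in range(a.size - 1):
    dfda = (mu / gamma_) * np.exp(gamma_ * abs(f[i])) * a[i]**(-mu - 1.0)
    f[i + 1] = f[i] + h * dfda

slope = np.polyfit(np.log(a[1:]), f[1:], 1)[0]
print(slope, mu / gamma_)               # f(a') grows as a constant times ln a'

b = 0.71 + 0.15 * np.abs(np.log10(a))   # the V-shaped co-variation, Eq. (2.90)
```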
Note that the WW model of the co-variation of the allometry parame-
ters does not assume causality in the application of linear regression to the
metabolic data and consequently the observed variability can be associated
with either the physiologic variables or the allometry parameters. The vari-
ations in the allometry parameters are used to construct histograms and
suggest the separate probability densities for the allometry coefficient and
allometry exponent. These probability densities are then used to determine
the form of the entailed relation between the metabolic exponent (b) and
the metabolic level (log a) given by Eq. (2.90). It is determined that the
co-variation in the allometry parameters derived using the probability cal-
culus is the same as that determined from the statistical analyses of a great
deal of metabolic data by Glazier [119]. This V-shaped dependence is not
a statistical artifact but is implied by combining the AR and the statistics
drawn from the distribution functions given by Eqs. (2.81) and (2.82).

2.4 Fractal Signals


A biological/physiological signal carries information about the phenomenon
being measured and is typically a time series having both a regular and ran-
dom component. The output of dynamical physiologic networks, such as the cardiac network, the respiratory network and the motor control network,
have all been shown to be fractal and/or multifractal statistical time se-
ries as we subsequently explain [381]. Consequently, the fractal dimension
turns out to be a significantly better indicator of health than the more
traditional measures such as heart rate, breathing rate and stride rate, all of which are average quantities. Fractal Physiology, as this field has come to be called
since the first edition of this book, focuses on the complexity of the human
body and the characterization of that complexity through fractal measures
and the dynamics of such measures, see for example the new journal Fron-
tiers in Fractal Physiology. These new measures reveal that the traditional
interpretation of disease as the loss of regularity is not adequate and a bet-
ter interpretation of disease is the loss of variability, or more accurately,
the loss of complexity [130].
A physiologic signal is a time series whose irregularities contain patterns
characteristic of the complex phenomenon being interrogated. The inter-
pretation of complexity in this context incorporates the recent advances
in the application of concepts from fractal geometry, fractal statistics and
nonlinear dynamics, to the formation of a new kind of understanding in
the life sciences. However, as was pointed out in the encyclopedia arti-
cle on which this section is based [382], physiological time series are quite
varied in their structure. In auditory or visual neurons, for example, the
measured quantity is a time series consisting of a sequence of brief electri-
cal action potentials with information regarding the underlying dynamical
system residing in the spacing between spikes, not in the pulse amplitudes.
A very different kind of physiologic signal is contained in an electrocardio-
gram (ECG), where the analogue trace of the ECG pulse, measured with electrodes attached to the chest, reproduces the stages of the heart pumping blood. The amplitude and shape of the ECG analogue recording carries
information in addition to the spacing between heartbeats.
A third kind of physiological time series is an electroencephalogram
(EEG), where the output of the channels attached at various contact points
along the scalp, recording the brain’s electrical potential, appears at first
sight to be random. Information about the operation of the brain is as-
sumed to be buried deep within the erratic fluctuations measured at each
of these points along the scalp. Thus, a physiological signal (time series)
can have both a regular part and a fluctuating part; the challenge is how
to best analyze the time series data from a given physiologic network to
extract the maximum amount of information.
Physiologic time series have historically been fit to the engineering
paradigm of signal plus noise. The signal is assumed to be the smooth,
continuous, predictable, large-scale undulation in a time series. The idea
of signal and predictability go together, in that signals imply information or patterns, and very often the mechanistic interpretation of information
has to do with our ability to associate that information with mechanical,
predictable processes within the network generating the time series. Noise,
on the other hand, typically consists of discontinuous, small-scale, erratic
fluctuations thought to disrupt and mask the signal. The noise is assumed,
by its nature, to contain no information about the network of interest,
but rather to be a manifestation of the influence of the unknown and un-
controllable environment on the network’s dynamics. It is considered to
be undesirable and is filtered from the time series whenever possible. The
mathematician Norbert Wiener (1894-1964), as his contribution to the war
effort during World War Two, gave the first systematic discussion of this
partitioning of erratic time series into signal and noise in his book Time
Series [395]. Wiener's partitioning of effects did not, however, take into ac-
count the possibility that the underlying process itself can be complex and
such complexity would not allow for his neat separation into signal and
noise. Complexity is, in fact, the usual situation in physiology so we ought
not to expect that physiologic time series separate. The signal plus noise
paradigm does not apply to such time series in general because of the com-
plexity of the underlying phenomena; complexity that is often manifest in
the fractal properties of the time series.
At the time of Poincaré's analysis of the three-body problem, the then
newly emerging perspective in medicine was that of homeostasis, which
asserts that physiological systems operate in such a way as to maintain
a constant output, given a variable input. This vision of medicine dates
from the middle nineteenth century and views the human body as con-
sisting of feedback loops and control mechanisms that guide the perturbed
physiology back to an equilibrium-like state of dynamic harmony. Recent
research indicates that this picture is no longer viable and a more com-
plete description of physiologic systems requires the use of non-equilibrium
statistical concepts. In particular, fractal physiology requires the use of
nonlinear dynamical concepts and non-stationary statistics, both of which
may be manifest through the scaling behavior of physiological time series.
Mandelbrot called into question the accuracy of the traditional perspec-
tive of the natural sciences by pointing to the failure of the equations of
physics to explain such familiar phenomena as turbulence and phase transi-
tions. He catalogued and described dozens of physical, social, and biological
phenomena that cannot be properly described using the familiar tenets of
dynamics from physics [217, 219]. The mathematical functions required to
explain these complex phenomena have properties that for a hundred years
had been categorized as mathematically pathological. Mandelbrot argued
that rather than being pathological these functions capture essential prop-
erties of reality and are therefore better descriptors of the physical world than are the traditional analytical functions of nineteenth century physics and engineering.
Living organisms are immeasurably more complicated than inanimate
objects, which partly explains why we do not have available fundamental
laws and principles governing physiologic phenomena equivalent to those in
physics. For example, there are no equivalents of Newton’s Laws, Maxwell’s
equations and Boltzmann’s Principle for physiologic phenomena. Therefore
in this section we briefly review a strategy for analyzing a diverse set of
physiologic time series and content ourselves with suggesting that this strat-
egy reveals an underlying symmetry in these separate networks that can
be exploited. The analysis suggests a new kind of control theory and a new
interpretation of disease, both of which we take up in due course.
Schrödinger, in his book What is Life? [310], laid out his understanding
of the connection between the world of the microscopic and macroscopic,
based on the principles of equilibrium statistical physics. In that discussion
he asked why atoms are so small relative to the dimensions of the human
body. The high level of organization necessary to sustain life is only possible
in macroscopic systems; otherwise the order would be destroyed by micro-
scopic (thermal) fluctuations. A living organism must be sufficiently large
to maintain its integrity in the presence of thermal fluctuations that disrupt
its constitutive elements. Thus, macroscopic phenomena are characterized
by averages over ensemble distribution functions; averages that smooth
out microscopic fluctuations. Consequently, any strategy for understand-
ing physiologic time series must be based on a probabilistic description of
complex phenomena. Such an understanding of phenomena would be based
on patterns resulting from the lack of a characteristic time scale, that is,
on self-similar or fractal scaling.
All three types of fractals appear in the life sciences; geometrical frac-
tals, that determine the spatial properties of the tree-like structures of the
mammalian lung, arterial and venous systems, and other ramified struc-
tures; statistical fractals, that determine the properties of the distribution
of intervals in the beating of the mammalian heart, breathing, and walking
[381]; finally, there are dynamical fractals, that determine the dynamical
properties of systems having a large number of characteristic time scales,
with no one scale dominating [373]. In the complex systems found in physi-
ology the distinction between these three kinds of fractals blur, and for time
series we focus our attention on the dynamical rather than the geometrical
fractals.

2.4.1 Spectral decomposition


The usual method for analyzing biomedical time series data is to deter-
mine the harmonic content of the time trace [30]. For an ordered set of
frequencies one finds an ordered set of constants (mode amplitudes), the
mean square value of a given mode amplitude being the energy contained in the time series at a particular frequency. This procedure is often referred
to as a spectral decomposition of the time series because it extracts from
the data set (time trace) the spectrum of frequencies contributing to the
process of interest. The set of moduli of the mode amplitudes determines
the spectral strength of the time trace at the contributing frequencies, but
the set of phases determine the detailed shape of the time trace. Thus, for
a prescribed spectrum the time series can represent a coherent time pulse
or a random function of time and most things in-between. It is apparent
that since both of these time series can have the same harmonic content
it is the distribution of phases that is a central issue in determining the
shape of the time trace. In the output of physiological systems both types
of time series are obtained; coherent pulses as well as apparently random
time traces, see, for example, Figure 2.20.

FIGURE 2.20. We select frequencies that are integer multiples of a fundamental frequency ω0 and the amplitudes decrease according to a scaling rule such that the nth amplitude is a factor 1/a smaller than the (n−1)st. The spectrum consists of the harmonics of ω0 with the nth harmonic having a spectral strength 1/a^{2n}. The shape of a time trace having this spectrum is quite variable. If we choose all the phases to have a constant value, zero say, the result is curve (A): the time trace is given by a single pulse of height a/(a − 1). If we choose the phases to be random variables, uniformly distributed on the interval (0, 2π), the result is curve (B): the time trace appears to be a random function of time.
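The construction described in the caption is easy to reproduce. A brief sketch (the parameter choices are mine): the same geometric spectrum is superposed once with zero phases and once with random phases, producing the coherent pulse of height a/(a − 1) and the erratic trace, respectively.

```python
import numpy as np

a, omega0, N = 2.0, 2.0*np.pi, 40        # illustrative parameter choices
t = np.linspace(-0.5, 0.5, 2001)         # grid includes t = 0
rng = np.random.default_rng(2)

def trace(phases):
    """Superpose harmonics n*omega0 with spectral amplitudes a**-n."""
    return sum(a**-n * np.cos(n*omega0*t + phases[n]) for n in range(N))

coherent = trace(np.zeros(N))                     # curve (A): zero phases
erratic = trace(rng.uniform(0.0, 2.0*np.pi, N))   # curve (B): random phases

print(coherent.max(), a/(a - 1.0))       # pulse height ~ 2, as in the caption
```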

The dramatic difference between the extremes of a coherent signal and
random noise is a manifestation of the different dynamics present in the
processes generating the phase relations between the different spectral com-
ponents. The time series for the pulse is reminiscent of the QRS-complex
observed in an electrocardiogram. The QRS is the representation of the
depolarization of the myocardial cells. The erratic time series with its ap-
parently random phase, on the other hand, is reminiscent of heart rate
variability in an active healthy subject or the EEG of an alert mammalian
brain, as discussed in Chapter Four.
If we interpret the usual series for the fractal function F(z), given by Eq. (2.20), to be the spectral decomposition of a time series where the previously discrete z is interpreted as the continuous time t, then it represents a dynamic process that does not have a time derivative. For a continuous time series the energy content is determined by means of the autocorrelation function, which measures how long the influence of a given variation in a time series persists. The autocorrelation function is obtained by multiplying F(t) by a displaced copy of itself F(t + τ), integrating t over a long time interval T, and dividing by T in the limit as T becomes infinite:

$$ C(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} F(t)\, F(t+\tau)\, dt. \qquad (2.91) $$

An interesting aspect of the extended Weierstrass function is that its au-
tocorrelation function also has the form of an extended Weierstrass func-
tion, but with different parameters [40]. I can use the properties of the
extended Weierstrass function that were developed earlier to see this be-
havior. The first is that since F (t) has a modulated power law as its dom-
inant time behavior [see Eq. (2.24)], then so too does the autocorrelation
function of the time series, but with twice the power-law index:

$$ C(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} dt \sum_{n=0}^{\infty} \sum_{n'=0}^{\infty} \frac{1}{a^{n+n'}} \cos\left[b^{n} \omega_0 t\right] \cos\left[b^{n'} \omega_0 (t+\tau)\right] \qquad (2.92) $$

Using the trigonometric identity for the product of cosines and the integral relation

$$ \frac{1}{\pi} \int_{-\pi}^{\pi} d\theta\, \cos m\theta\, \cos n\theta = \delta_{m,n} $$

we obtain

$$ C(\tau) = \sum_{n=0}^{\infty} \frac{1}{a^{2n}} \cos\left[b^{n} \omega_0 \tau\right] \qquad (2.93) $$
so that following the scaling analysis given earlier results in the dominant behavior for the autocorrelation function

$$ C(\tau) = A(\tau)\, \tau^{2\alpha} \qquad (2.94) $$

where the scaling index is given by the ratio of logarithms in Eq. (2.25) and the coefficient function has the Fourier expansion in ln τ with period ln b given by Eq. (2.27). The energy spectral density of the time series is given by the Fourier transform of the autocorrelation function,

$$ S(\omega) = \int_{-\infty}^{\infty} e^{i\omega\tau} C(\tau)\, d\tau. \qquad (2.95) $$

Due to the slow time variation of A(τ) the asymptotic spectrum is estimated using a Tauberian Theorem [396] to be

$$ S(\omega) \approx \frac{A}{\omega^{2\alpha+1}} \qquad (2.96) $$

for small frequencies, which is an inverse power law in frequency.
The above argument indicates that a fractal time series is associated
with a power spectrum in which the higher the frequency component, the
lower its power. Furthermore, if the spectrum is represented by an inverse
power law, then a plot of log (frequency) versus log (power) should yield
a straight line graph of slope −(2α + 1). Since the frequency output of
physiological networks can be determined using Fourier analysis, this
scaling hypothesis can be directly tested.
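A direct way to carry out the test is to synthesize a series with a known inverse power-law spectrum and recover the exponent by log-log regression on its periodogram. A minimal numpy sketch (all choices illustrative), averaging the periodogram in octave bands before fitting to tame the scatter:

```python
import numpy as np

rng = np.random.default_rng(3)

def power_law_noise(n, delta):
    """Synthesize a series with spectrum S(f) ~ 1/f**delta via random phases."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:]**(-delta / 2.0)            # amplitude = sqrt(power)
    return np.fft.irfft(amp * np.exp(2j*np.pi*rng.random(f.size)), n)

def spectral_slope(x):
    """Fit log(power) against log(frequency), octave-band averaged."""
    p = np.abs(np.fft.rfft(x - x.mean()))**2
    f = np.fft.rfftfreq(x.size)
    logf, logp = [], []
    for lo in 2.0**np.arange(-14, -1):         # octave bands below Nyquist
        band = (f >= lo) & (f < 2.0*lo)
        if band.any():
            logf.append(np.log(f[band].mean()))
            logp.append(np.log(p[band].mean()))
    return np.polyfit(logf, logp, 1)[0]

x = power_law_noise(2**16, delta=1.4)          # e.g. 2*alpha + 1 = 1.4
print(spectral_slope(x))                       # recovers a slope near -1.4
```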
Let me now return to the example of the cardiac depolarization pulse.
Normally, each heartbeat is initiated by a stimulus from pacemaker cells
in the sinus node in the right atrium. The activation wave then spreads
through the atria to the AV junction. Following activation of the AV junc-
tion, the cardiac impulse spreads to the ventricular myocardium through
a ramifying conduction network, the His-Purkinje system. This branching
structure of the His-Purkinje conduction network is strongly reminiscent of
the bronchial fractal discussed previously. In both structures a self-similar
tree with finely-scaled details on a ‘microscopic’ level is seen. In the present
case the spread of the depolarization wave is represented on the body sur-
face by the QRS-complex of the electrocardiogram. Spectral analysis of the
QRS waveform (time trace) reveals a broadband frequency spectrum with
a long tail corresponding to an inverse power law in frequency. To explain


FIGURE 2.21. The ventricular conduction system (His-Purkinje) appears to be a fractal-like network demonstrating repetitive branching on progressively smaller scales.

this inverse power-law spectrum we [124] conjectured that the repetitive
branchings of the His-Purkinje system represent a fractal set in which each
generation of the self-similar segmenting tree imposes greater detail onto
the network. At each fork in this network, see Figure 2.21, the cardiac im-
pulse activates a new pulse along each conduction branch, thus yielding two
pulses for one. In this manner, a single pulse entering the proximal point of
the His-Purkinje network with N distal branches, generates N pulses at the
interface of the conduction network and myocardium. In a fractal network,
the arrival times of these pulses at the myocardium are not uniform. The
effect of the finely branching fractal network is to subtly decorrelate the
individual pulses that superpose to form the QRS-complex [124].
As we have discussed, a fractal network is one that cannot be expressed
in terms of a single scale, so that one cannot express the overall decorrela-
tion rate of impulses by a single time. Instead one must find a distribution
of decorrelation rates or times in the time trace in direct correspondence to
the distribution of branch lengths in the conduction network. These rates
are based on an infinite series in which each term corresponds to higher and
higher average decorrelation rates in direct analogy with the series expan-
sion for the Weierstrass function. Each term therefore represents the effect
of superposing finer and finer scales onto the fractal structure of the con-
duction system. Each new ‘layer’ of structure renormalizes the distribution
of average decorrelation rates. This renormalization procedure eventually leads to a transition in the distribution of decorrelation rates to a power-
law form in the region of high decorrelation rates. The spectrum of the time
trace of the voltage pulses resulting from this fractal decorrelation cascade
of N pulses shows inverse power-law behavior.
I have argued that a voltage pulse emanating from the pacemaker
region of the heart becomes shattered into a large number of equal am-
plitude pulses. Each pulse travels a different path length to reach the my-
ocardium and there superimposes to form the classical QRS pulse. The dis-
tribution in path lengths resulting from the fractal nature of the branches
gives rise to a distribution of decorrelation times τc among the individual
spikes impinging on the myocardium. The unknown distribution p(τc ) can
be obtained using an argument parallel to that presented for the mam-
malian lung.
Denote the correlation function constructed from the time series for the QRS complex by c(t) and assume it has a maximum correlation time τ_c; for example, the correlation function might have the exponential form e^{−t/τ_c}. We use the argument leading to Eq. (2.40) by considering a sequence of shorter correlation times τ_c/b, each with a relative frequency 1/a. At the second stage of amplification, which we assume occurs with relative frequency 1/a², the correlation time becomes τ_c/b². The new correlation function C(t) containing an infinite number of levels of amplification is

$$ C(t) = \frac{a-1}{a} \left[ c(t) + \frac{b}{a}\, c(bt) + \frac{b^2}{a^2}\, c(b^2 t) + \cdots \right] \qquad (2.97) $$

so that in the renormalization form

$$ C(t) = \frac{b}{a}\, C(bt) + \frac{a-1}{a}\, c(t). \qquad (2.98) $$
The asymptotic solution to Eq. (2.98) where C(t) ≫ c(t) is given by

$$ C(t) = A(t)\, t^{\alpha-1} \qquad (2.99) $$

where the scaling index and the coefficient function have the same definitions as in Eqs. (2.25) and (2.27).
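The scaling prediction of Eq. (2.99) can be checked by summing the series form of Eq. (2.97) directly. A sketch with purely illustrative parameters, choosing a < b so that α = ln a/ln b < 1 and C(t) itself decays as a power law over intermediate times:

```python
import numpy as np

a, b, tau_c = 2.0, 4.0, 1.0              # illustrative: alpha = ln a/ln b = 1/2

def C(t, n_max=40):
    """Truncated series solution of Eq. (2.98) with seed c(t) = exp(-t/tau_c):
    C(t) = (a-1)/a * sum_n (b/a)**n c(b**n t)."""
    n = np.arange(n_max)
    return (a - 1.0)/a * np.sum((b/a)**n * np.exp(-np.outer(t, b**n)/tau_c),
                                axis=1)

t = np.logspace(-8.0, -2.0, 100)         # intermediate scaling regime
slope = np.polyfit(np.log(t), np.log(C(t)), 1)[0]
print(slope, np.log(a)/np.log(b) - 1.0)  # both close to alpha - 1 = -1/2
```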
If we assume that the above scaling law is a good representation of the asymptotic correlation function for the QRS complex then the power spectrum S(ω) for the QRS pulse is

$$ S(\omega) = 2 \int_{0}^{\infty} dt\, A(t)\, t^{\alpha-1} \cos \omega t \propto \frac{1}{\omega^{\alpha}} \qquad (2.100) $$


FIGURE 2.22. The normal ventricular depolarization (QRS) waveform (mean data of 21 healthy men) shows a broadband distribution with a long, high-frequency tail. The straight line segment is the linear regression to an inverse power-law spectrum [S(ω) ∝ ω^{−α}] with a fundamental frequency of 7.81 Hz. (From Goldberger et al. [123] with
permission.)

when A(t) is slowly varying in time or is constant, so that the integral can
be evaluated using a Tauberian Theorem. In general the exponent α can de-
pend on other parameters such as temperature and pressure. Thus accord-
ing to this argument the QRS waveform should have an inverse power-law
spectrum.
The actual data fits this model quite well as shown in Figure 2.22. This
example, therefore, supports a connection between nonlinear structures,
represented by a fractal His-Purkinje system, and nonlinear function, re-
flected in the inverse power-law pulse [128]. Thus, just as nature selects
static anatomical structures with no fundamental length scale, s/he selects
structure for the His-Purkinje conduction system so as to have no funda-
mental time scale. Presumably the error-tolerance of the fractal structure
is as strong an influence on the latter as it is on the former.
In the case of the QRS-complex such power-law scaling could be related
to the fractal geometry of the His-Purkinje system. What is the ‘mech-
anism’ for self-similar scaling in the regulation of heart rate variability?
Fluctuations in heart rate are regulated by multiple control processes in-
cluding neurohumoral regulation (sympathetic and parasympathetic stimulation), and local electrochemical factors. One strategy for ascertaining
the contribution of such factors would be to selectively block their effects,
for example by giving the drug propranolol to eliminate sympathetic ef-
fects or atropine to block parasympathetic effects. Such experiments have
been very helpful in assessing the directional effect of various modulators
of heart rate and estimating their quantitative contributions. However, this
type of experimental methodology does not address the basis of the inverse
power-law spectrum observed when the entire system is functioning nor-
mally. When we pose the question: “What is the mechanism of such inverse
power-law spectra?” we are not searching for a mechanism in the conven-
tional sense. Traditionally in physiology, the term mechanism applies to
the linear interaction of two or more (linear or nonlinear) elements which
causes something to happen. Receptor-ligand binding, enzyme substrate
interactions, and reflex-arcs are all examples of traditional physiological
mechanisms. The ‘mechanism’ responsible for inverse power-law behavior
in physiological systems, however, is probably not a result of a linear in-
teractive cause-effect chain, but more likely relates to the kinds of complex
scaling interactions we have been discussing. The inverse power-law spec-
trum can be viewed as the resultant of possibly many processes interacting
over a myriad of interdependent scales.

2.5 Summary
This has been a wonderful chapter to write and populate with some of the
exciting ideas that have been realized since the first edition of this book
was written twenty years ago. The notion of fractals in physiology that
seemed revolutionary then is not yet mainstream by any means, but it is
no longer dismissed out of hand as it once was. The utility of the fractal
concept has been established not just as a descriptive tool but as a measure
of diagnostic significance. In his tribute to the late Benoit Mandelbrot the
world class anatomist Ewald Weibel explained how Mandelbrot and fractals
changed the way he and the rest of the scientific community thinks about
biological forms [362]. This change in thinking is the basis for his 2000
book Symmorphosis [360] on how the size of parts of an organism must be
matched to the overall functional demands and the design principle that
accomplishes this goal is that of fractals.
It is interesting to consider how dynamics might be phenomenologically incorporated into the description of physiologic processes using the arguments from this chapter. One image that suggests itself is that of a feedback system that induces a response on a time scale a factor of b faster than that of the input. When this scaled response is fed back as part of the input, it generates a second scaled response on a time scale that is again a factor b faster than the response of the preceding time scale. This is an application of Wiener's Cybernetics [396] concept that is of particular
physiological importance. It is a control feedback system whose self-similar
scaling property enhances the stability of the system response. In the con-
ventional control system the spectrum of the control mechanism is usually
a smooth function centered on a frequency ω0 and tapering rapidly to zero
over some restricted interval of frequency in the neighborhood of ω = ω0 .
For the network envisioned here, the feedback control yields a total spec-
trum that is an inverse power law in frequency due to the lack of a highest
characteristic frequency. The stability of the power-law network is greater
than that of the normal feedback network since if any one element of feed-
back in a self-similar cascade is lost it would not significantly affect the
overall network response characteristics. This is true because the series of
response times is lacunary, that is, the time scales have gaps rather than be-
ing continuous. Therefore one or a few additional gaps in the series would
not change the control properties of the feedback. This is similar to the
fluctuation-tolerance observed in the fractal model of the lung [367, 370].
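The robustness claim can be made concrete with a toy cascade (entirely my own illustration). Model each feedback element as a relaxation process with rate b^n and weight a^{-n}; the superposed spectrum is then an inverse power law over many decades, and deleting a single element of the cascade perturbs it only locally and only slightly.

```python
import numpy as np

a, b, lam0 = 2.0, 4.0, 1.0               # illustrative cascade parameters

def spectrum(omega, skip=None, n_max=30):
    """Superpose Lorentzian relaxation spectra with rates lam0*b**n and
    weights a**-n; skip=n deletes one element of the self-similar cascade."""
    S = np.zeros_like(omega)
    for n in range(n_max):
        if n != skip:
            lam = lam0 * b**n
            S += a**(-n) * lam / (lam**2 + omega**2)
    return S

omega = np.logspace(0.0, 8.0, 400)
full = spectrum(omega)
lesioned = spectrum(omega, skip=10)      # remove one feedback element
print(np.abs(np.log10(lesioned/full)).max())   # small, local perturbation
```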
This self-similar feedback hypothesis, of course, does not specifically an-
swer the more basic question of how the multiple scales are actually gen-
erated. What the hypothesis suggests is that this type of general scaling
mechanism is at play. Elucidating the basis of this generic scaling from
the molecular level on up is one of the major challenges still facing Fractal
Physiology. One particularly fruitful approach to explaining the source of
the inverse power-law behavior has its origin in network science [385].
In this chapter I have discussed the fact that fractals are found in three
distinct contexts: geometrical, statistical, and dynamical. A geometrical
fractal has to do with the static structure of an object, and stretches our
notions of geometry beyond that of a point, line and plane and the ac-
companying concepts of smoothness and continuity, into the realm of the
irregular and discontinuous. The classical geometry of Euclid is concerned
with regular forms in integer dimensions. However, as we saw, anatomical
shapes are perversely non-Euclidean as is apparent by looking at, say, the
mammalian lung and His-Purkinje conduction system of the heart. Fractal
geometry is concerned with irregular forms in these non-integer dimensions.
Statistical fractals share a number of characteristics with geometrical
fractals. We saw, for example, that the latter possessed a structural self-
similarity, so that as one magnifies a given region of such a structure then
more and more structural detail is revealed. Correspondingly in a statistical
fractal, one finds a statistical self-similarity. In a fractal stochastic process,
not only does the process itself display a kind of self-similarity, but so too does the distribution function characterizing the statistics of the process.

For example, if X(t) is a random function of time and it is fractal, then for real constants β and α it satisfies the scaling relation X(t) = β^{−α} X(βt); that is, a given realization X(t) is statistically identical to one that has been stretched in time (βt) and scaled in amplitude (β^{−α}), where α is related to the fractal
dimension. In the case of the lung the statistics were not time dependent
since the scales appearing there are the consequence of its asymptotic (in
time) state.
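A quick numerical illustration of this scaling relation (my own sketch): ordinary Brownian motion is the simplest statistical fractal, with α = 1/2, so the amplitude of X(βt) should exceed that of X(t) by the factor β^α.

```python
import numpy as np

rng = np.random.default_rng(4)

n, beta_, alpha = 4096, 4, 0.5           # Brownian motion has alpha = 1/2
X = np.cumsum(rng.normal(size=(2000, n)), axis=1)   # 2000 sample paths

k = n // beta_ - 1                       # compare times k and beta*k
ratio = X[:, beta_*k].std() / X[:, k].std()
print(ratio, beta_**alpha)               # both near 2: X(t) ~ beta**-alpha X(beta t)
```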
The fractal character of the statistics reveals itself in that case in the
inverse power-law distribution function specifying the statistics of the scales
contributing to the linear scales of the bronchial tubes at each generation
in the lung. Recall that the inverse power law is a consequence of the
statistics having no fundamental scale, just as the geometrical fractal has
no fundamental spatial scale. We take up the time-dependent case of the
scaling probability density in the sequel.
Fractals are often manifest through the existence of scaling relations
as indicated by the scaling of the autocorrelation function. Mandelbrot
identified the scaling in allometry relations as being indicative of fractal
processes but many thought they were just another among many kinds
of power-law scaling. In their general criticism of power laws Stumpf and
Porter [331] point to allometry relations as being one of the few genuinely
good power laws, having as they do a great deal of empirical evidence
over many orders of magnitude (from bacteria to whales) but with strong
underlying theory as well. The WBE model of the basal metabolic allometry
relation relies on the fractal concept to describe how nutrients are supplied
to all parts of the body, whereas the WW model emphasizes the fractal
statistics of Pareto explaining the experimentally observed co-variation of
the allometry coefficient and allometry exponent.
The statistical analysis of metabolic data presented herein shows that
empirical ARs exist across multiple species and consequently the form of
ARs are not solely dependent on reductionistic mechanisms. In the case of
metabolic ARs one could reasonably make a case for the allometry exponent
to be 2/3 for small animals, 3/4 for large animals and 1 for plants, but no
one value of b spans the total range of animal and plant sizes. In addition
the only theories that predict a universal value of the allometry exponent
do so to explain the theoretical AR and not the empirical AR. There is
only our phenomenological theory that derives the empirical AR between
the averaged variables [387]. Lastly, the allometry exponent and allometry
coefficient are determined to co-vary using both theory [386] and statistical
data analysis [118] and consequently the notion of a universal value for the
allometry exponent is ruled out.
Finally, we recall that a dynamical fractal was used in the interpreta-
tion of the His-Purkinje conduction system as a fractal network. In this example it was observed that there was no fundamental time scale (period) in the fractal process, resulting in a correlation function that increased
algebraically in time. This power-law correlation function resulted in a
predicted inverse power-law spectrum of the QRS-complex which was also
observed.
The physiological examples considered in the present chapter share the
common feature of being static. Even the time dependence of the correlation
function obtained from the QRS time series resulted from the static struc-
ture of the His-Purkinje conduction network rather than as a consequence
of any time varying aspect of the network. Subsequent chapters address
some of the dynamic aspect of physiology, including dynamical diseases, as
well as the other aspects of physiology and medicine that are intrinsically
time dependent.

Chapter 3
Dynamics in Fractal Dimensions

Until now I have focused attention primarily on the relevance of scaling
ideas to structure and function in physiology. I now redirect that atten-
tion to the dynamics intrinsic to a large number of biomedical networks.
The present chapter presents some of the formal ideas of nonlinear dynamical systems theory (NDST), and relates them to the foundational notions
presented in the previous chapters. Of obvious additional interest are the
potential medical implications of these concepts. If, for example, the normal function of a variety of, if not all, physiological networks is characterized by inverse power-law distributions, then a reasonable hypothesis is that at
least some disease states are associated with a loss of this scaling.
What is the evidence for these hypothesized scaling pathologies? At
present, only very preliminary answers can be given to this important ques-
tion. Mackey and Milton [213] and earlier Mackey and Glass [212] defined
dynamic disease as one that occurs in an intact physiological control net-
work operating within a range of control parameters that leads to abnormal
dynamics. This is consistent with the definition used by Goldberger and
West [123, 127]. The signature of the abnormality is a change in the quali-
tative dynamics of some observable as one or more parameters change. The
power spectrum of the process is one such measure of the normal operat-
ing state. A number of disease processes appear to be characterized by a
narrowing of the frequency spectrum with a relative decrease in higher fre-
quency components. We have observed a similar loss of ‘spectral reserve’ in
93

60709_8577 -Txts#150Q.indd 93 19/10/12 4:28 PM


94 Dynamics in Fractal Dimensions

cardiac interbeat interval spectra following atropine administration to nor-


mal subjects [128]. Thus, it appears that interference with the autonomic
nervous system leads to a loss of spectral reserve.
A related feature of the frequency spectra of perturbed physiological net-
works is that not only is overall power reduced, but spectral energy may
eventually become confined to a few discrete frequency bands. The discrete
(narrowband) type of frequency spectrum contrasts with the broadband
inverse power-law spectra seen under normal conditions. The shift from a
broadband to a narrowband spectrum dramatically alters the behavior of
the system. Instead of observing physiological variability, one begins to see
highly periodic behavior. The medical literature abounds with examples
of such ‘pathological (usually low frequency) periodicities’. For example,
low-frequency, periodic fluctuations in heart rate and respiration may be a
prominent feature in patients with severe congestive heart failure [122, 125]
as well as in the fetal distress syndrome [238]. This cyclic behavior of res-
piration in very ill cardiac patients has actually been known for several
centuries, and is referred to as the Cheyne-Stokes breathing, as shown in
Figure 3.1. It is also observed in obese persons, and after neural brainstem
lesions. The detection of a loss of spectral reserve and the onset of patho-
logical periodicities in both adults and infants at risk for sudden death
promises to provide a new approach to cardiovascular monitoring.
Furthermore, similar techniques may provide novel ways of monitoring
other networks. For example an inverse power-law spectrum characterizes
the apparently erratic daily fluctuations in counts of neutrophils (a type of
blood cell) in healthy subjects. In contrast, periodic (predictable) fluctu-
ations in neutrophil counts have already been detected in certain cases of
chronic leukemia [125]. These oscillations have periods of between 30 and
70 days depending on the patient. This periodic behavior along with the
fluctuations have been modeled using single deterministic time delay equa-
tions, see for example [213]. Such models are discussed more fully in the
sequel. Spectral analysis of fluctuations in blood counts may provide a use-
ful means of identifying pre-leukemic states as well as patients’ responses
to chemotherapy. Finally, a loss of physiological variability in a variety of
systems appears to be characteristic of the aging process in different organ
systems [123, 221, 352].
Neurological disorders, including epilepsy and movement disorders, have
also been modeled as dynamic diseases in which the role of bifurcation has
been examined, see Rapp et al. [284] for an early review. These authors
point out that in 1932 Gjessing published the first in a series of papers
establishing the correlation between intermittent catatonia (periodic cata-
tonia schizophrenia) and rhythmic changes in the basal metabolic rate.
These variations and the schizophrenic symptoms persisted unless treated

FIGURE 3.1. The low frequency periodic fluctuations in the heart rate (beats/min) are
compared with two measures of respiration, arterial oxygen saturation and ventilatory
effort, in very ill cardiac patients over an interval of several minutes. The periodic
phenomenon is referred to as Cheyne-Stokes breathing.

by thyroxin [70]. More biomedical examples are discussed subsequently af-
ter some of the fundamental concepts of nonlinear dynamics are developed.
In all these areas of medical research, there is a common physiological
theme. Complexity is the salient feature shared by all the networks dis-
cussed — a feature that continues to attract more and more attention in
physical networks as well [127, 383]. Most scientists have assumed that
understanding such networks in different contexts, or even understanding
various physiological networks in the same organism, would require com-
pletely different models. The most exciting prospect for the new dynamics
is that it provides a unifying theme to many investigations which up to
now have been considered unrelated.

3.1 Nonlinear Bio-oscillators


In the physical sciences the dynamics of a system are determined by the
equations describing how the observables change in time. These equations
are obtained by means of some general principle, such as the conservation
of energy and/or the conservation of momentum, applied to the system

of interest. The appropriate conservation law follows from a symmetry of
the system which in turn determines a rule by which the system evolves.
If a set of circumstances is specified by an N −component vector X =
(X1 , X2 , · · ·, XN ) then in order to predict the future state of the system
from its present configuration, the investigator must specify a rule for the
system’s evolution. In the physical sciences the traditional strategy is to
construct a set of differential equations. These equations are obtained by
considering each component of the system to be a function of time, then
as time changes so too do the circumstances. If in a short time interval Δt
we can associate an attendant set of changes ΔX = (ΔX1 , · · ·, ΔXN ) as
determined by
ΔX = F(X, t)Δt (3.1)
then in the limit Δt → 0 one writes the differential ‘equations of motion’:
\frac{dX}{dt} = F(X, t) \qquad (3.2)
which is a statement about the evolution of the system in time. If at time
t = 0 we specify the components X(0), that is, the set of circumstances
characterizing the system, and if F(X, t) is an analytic function of its ar-
guments, the evolution of the system is determined by direct integration
of the equations of motion away from an initial state. This is one of the
styles of thought adopted from the physical sciences into the biological and
behavioral sciences [364].
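The rule expressed by Eq. (3.1) is also the basis of the simplest numerical scheme for solving Eq. (3.2): the state is marched forward through a sequence of small time steps. The following minimal sketch in Python leaves the rate function F and the initial state to be supplied by the user; the damped linear oscillator used to exercise it is purely illustrative.

    import numpy as np

    def integrate(F, x0, t0, t1, dt):
        """March Eq. (3.1) forward in time: X(t + dt) = X(t) + F(X, t) dt.

        F maps the state vector X and the time t to the rate vector;
        x0 is the initial set of circumstances X(0). Returns the sample
        times and the trajectory traced out in phase space."""
        times = np.arange(t0, t1, dt)
        trajectory = np.empty((len(times), len(x0)))
        x = np.asarray(x0, dtype=float)
        for i, t in enumerate(times):
            trajectory[i] = x
            x = x + F(x, t) * dt       # the update rule of Eq. (3.1)
        return times, trajectory

    # illustrative rate function: a weakly damped linear oscillator,
    # with state X = (displacement, velocity)
    F = lambda x, t: np.array([x[1], -x[0] - 0.1 * x[1]])
    times, trajectory = integrate(F, x0=[1.0, 0.0], t0=0.0, t1=50.0, dt=0.01)

Plotted one against the other, the two columns of trajectory trace out precisely the kind of phase-space curve described next.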
The mathematicians have categorized the solutions to such equations
for the simplest kinds of systems. One way to describe such systems is
by means of geometric constructions in which the solution to an equation
of the above form is depicted by a curve in an appropriate space. The
coordinate axes necessary for such a construction are the continuum of
values that the vector X(t) can assume, each axis being associated with one
component of the vector X. As we saw in the Introduction, this is called a
phase space. Consider a two-dimensional phase space having axes labeled
by the components of the dynamical system X = (X, Y ). A point in the
phase space x = (x, y) gives a complete characterization of the dynamical
system at a point in time. As time proceeds this point traces out a curve
starting from the initial state [X(0), Y (0)] and proceeding to the final state
[X(t), Y (t)] at time t. A trajectory or orbit in phase space traces out the
evolution of the dynamical system. Time is a continuous parameter which
indexes each point along such a solution curve. The field of trajectories
initiated from a set of initial conditions is often referred to as the flow field.
If the asymptotic (t → ∞) flow field converges to a single point in phase
space, this is called a fixed point (or focus) as depicted in Figure 3.2A. If
the flow field from multiple initial conditions still converges to a fixed point

FIGURE 3.2. The curve in A depicts a single trajectory spiraling into the origin, which
is a fixed point. The three curves in B depict how orbits starting from various initial
conditions are drawn into the origin if the initial states are in the basin of attraction for
the fixed point.

then those initial states were in the basin of attraction of the focus, as suggested
in Figure 3.2B.
If the flow field converges on a single closed curve this is called a limit
cycle and is depicted in Figure 3.3B. Such limit cycles appear as periodic
time series for the variables of interest. The basin of attraction for the limit
cycle can be geometrically internal to the cycle in which case it evolves
outward to intercept it. The basin can also be outside the cycle in which
case the orbit is drawn inward to the cycle.
Nature abounds with rhythmic behavior that closely intertwines the
physical, biological and social sciences. The spinning earth gives rise to
periods of dark and light that are apparently manifest through the circa-
dian rhythms in biology. A large but incomplete list of such daily rhythms
is given by Luce [208]: the apparent frequency in fetal activity variations
in body and skin temperature, the relative number of red and white cells
in the blood along with the rate at which blood coagulates, the production
and breakdown of ATP (adenosine triphosphate), cell division in various
organs, insulin secretion in the pancreas, susceptibility to bacteria and in-
fection, allergies and pain tolerance. No attempt has been made here to
distinguish between cause and effect; the stress is on the observed peri-
odicity in each of these phenomena. The shorter periods associated with
the beating heart and breathing, for example, are also modulated by a
circadian rhythm.

There is a tendency to think of the rhythmic nature of many biological
phenomena, such as the beating heart, breathing, circadian rhythm, etc.
as arising from the dominance of one element of a biosystem over all the
other elements. A logical consequence of this mode of thought is the point
of view that much of the bio-network is passive, taking information from
the dominant element and merely passing it along through the network
to the point of utilization. This perspective was called into question by
mathematical biologists, a substantial number of whom regard the rhythmic
nature of biological processes to be the consequence of a dynamic interactive
nonlinear network, that is to say that biological networks are systemic
[189]. The mathematical models used to support this contention were first
developed in nonlinear dynamics where application to biological oscillations
was championed by Winfree [400, 401], Glass et al. [115], as well as West and
Goldberger [367, 371] among others. The application of nonlinear equations
to describe biorhythms, however, actually dates back to the 1928 work of
van der Pol and van der Mark [344] on the relaxation oscillator.

FIGURE 3.3. A: The phase space is shown for a harmonic oscillator with a few typical
orbits. Each ellipse has a constant energy. The energy of the oscillator is increased as
the system jumps from an ellipse of smaller diameter to one of larger diameter. B: A
single limit cycle is depicted (solid curve). The dashed curves correspond to transient
trajectories that asymptotically approach the limit cycle for the nonlinear oscillator.

Oscillations in biological processes do not in general follow a simple har-
monic variation in either space or time. The usual situation is one in
which the period of oscillation depends on a number of unrelated factors,
some intrinsic to the system but others external to it. Examples of these
factors are the amplitude of the oscillation, the period at which the bio-
logical unit is being driven, internal dissipative processes and fluctuations,
to name a few. In particular, since all biological units are thermodynami-
cally open to the environment they give up energy to their surroundings in
the form of heat, that is, they are dissipative. This regulatory mechanism
helps to maintain the organism at an even temperature. Thus, if a simple
harmonic oscillator is used to realistically simulate an organism undergo-
ing oscillations, it must contain dissipation. It is well known, however, that
the asymptotic trajectory of a dissipative linear oscillator is a stable fixed
point in phase space. The phase space in this case consists of the oscillator
displacement X(t) and velocity Y (t) = Ẋ(t) as depicted in Figure 3.2. Here
the amplitude of the oscillator excursions become smaller and smaller, due
to dissipation, until eventually it comes to rest.
If a bio-oscillator is to remain periodic, energy must be supplied to the
organism in such a way as to balance the continuous loss of energy due to
dissipation. If such a balance is maintained then the phase space orbit be-
comes a stable limit cycle, that is, all orbits in the neighborhood of this orbit
merge with it asymptotically. However, simple harmonic oscillators do not
have the appropriate qualitative features for describing biological systems.
One of the important properties that linear oscillators lack and which is ap-
parently ubiquitous among biological systems is that of being self-starting.
Left to itself a bio-oscillator spontaneously oscillates without external ex-
citation. One observes that the self-starting or self-regulating character of
bio-oscillators depends on the intrinsic nonlinearity of the organism. Exam-
ples of systems that experimentally manifest this self-regulating behavior
are aggregates of embryonic cells of chick hearts [115], simian cortical neu-
rons [284] and the giant internode cell of the fresh water algae Nitella flexilis
[148] to name a few. The experimental data from these and other examples
are discussed subsequently.
A nonlinear oscillator which is ‘weakly’ nonlinear is capable of oscillating
at essentially a single frequency and can produce a signal that is very low in
harmonic content. Although the output from such an oscillator is sinusoidal
at a single frequency, there are fundamental and crucial differences between
such an oscillator and the classical harmonic oscillator, the latter being a
conservative linear system which is loss-free. The basic difference is that
the nonlinear oscillator can oscillate at one and only one frequency and at
one and only one amplitude; the amplitude and frequency are dependent

on one another for a given configuration of parameters. In contrast, the
amplitude and frequency are independent of one another in the classical
linear oscillator, which can oscillate at any arbitrary level for a given set
of parameter values. These differences are illustrated in the description of
the limit cycle.
The phase plane of a Hamiltonian (loss-free) oscillator is depicted in
Figure 3.3A together with the limit cycle for an oscillator with nonlinear
dissipation depicted in Figure 3.3B. Although there are superficial resem-
blances between these diagrams, there are, in fact, fundamental differences
between these two physical systems. While the linear conservative oscilla-
tor can be described by an infinite family of closed ellipses, as suggested by
the nested form of Figure 3.3A, the nonlinear oscillator approaches a single
limit cycle as seen in Figure 3.3B. This limit cycle is reached asymptotically
whether the initial conditions correspond to an infinitesimal perturbation
near the origin or to a finite perturbation far beyond the limit cycle. In
either case the phase point spirals to the limit cycle, which is a stable final
state. On the other hand, the conservative linear oscillator does not display
this ‘structural stability’. Any perturbation causes it to leave one ellipse
and move to another where it stays; the orbits are neutrally stable.
In linear systems the term equilibrium is usually applied in connection
with conservative forces, with the point of equilibrium corresponding to
the balancing of all forces such that the system stays at rest. The stability
of such an equilibrium state is then defined by the behavior of the system
when it is subject to a small perturbation, that being a small displacement
from the equilibrium state in phase space. Roughly speaking, the terms
stability and instability indicate that after the perturbation is applied the
system returns to the equilibrium state (stable) or that it continues to move
away from it (unstable) or that it does not move at all (neutral stability).
One of the first places where these ideas can be found in a biological context
is in Lotka’s 1925 book on mathematical biology [207].
To set these ideas in a familiar context we adopt the nomenclature that
a bio-oscillator is one that is self-excitatory; regardless of the initial state of
the system it approaches a stable limit cycle providing that no pathologies
arise. This idea of an active system was originally proposed in 1928 by
the electrical engineers van der Pol and van der Mark, using a nonlinear
dynamic equation of the form [344]

\frac{d^2 V(t)}{dt^2} + \left[ V^2(t) - \varepsilon^2 \right] \frac{dV(t)}{dt} + \omega_0^2 V(t) = 0, \qquad (3.3)
where V (t) is the voltage, ω0 is the natural frequency of the nonlinear os-
cillator, and ε is an adjustable parameter. In a linear oscillator of frequency
ω0 the constant coefficient of the first-order time derivative determines the

stability property of the system. If this coefficient, call it λ, is positive then
the system is asymptotically stable, that is, there is a damping e−λt so
that the oscillator approaches the fixed point V = 0 in phase space. If the
coefficient λ is negative the solution diverges to infinity as time increases
without limit (e|λ|t ). Of course this latter behavior must terminate even-
tually since infinities do not exist in physical or biological systems. In the
worst case stability is lost and eventually other mechanisms come into play
to saturate the secular growth. In the nonlinear system Eq. (3.3) the ‘coef-
ficient’ of the ‘dissipative’ term changes sign depending on whether V 2 (t)
is greater than or less than ε2 . This property leads to a limit cycle behav-
ior of the trajectory in the (V, V̇ )-phase space for the system as suggested
by Figure 3.3B. Van der Pol and van der Mark envisioned the application
of this limit cycle paradigm to ‘explain’ a great many phenomena such as
[344]:

...the aeolian harp, a pneumatic hammer, the scratching noise
of a knife on a plate, the waving of a flag in the wind, the hum-
ming noise sometimes made by a water tap, the squeaking of
a door, a neon tube, the periodic recurrence of epidemics and
of economic crisis, the periodic density of an even number of
species of animals living together and the one species serving
as food for the other, the sleeping of flowers, the periodic recur-
rence of showers behind a depression, the shivering from cold,
menstruation and, finally, the beating of the heart.

Although the van der Pol oscillator given by Eq. (3.3) does not have
the broad range of application envisioned by its creators [344, 345], their
comments reveal they understood that these many and varied phenomena
are dominated by nonlinear mechanisms. In this sense their remarks are
prophetic. An example not mentioned by these authors, but one that with
a little thought they would surely have included in their list, is walking.
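Eq. (3.3) is simple to integrate numerically, and the approach to the limit cycle depicted in Figure 3.3B can be reproduced directly. The following is a minimal sketch, with illustrative parameter values, started once from just inside and once from far outside the cycle.

    import numpy as np

    def van_der_pol_orbit(v0, u0, eps=1.0, w0=1.0, dt=0.001, steps=100000):
        """Integrate Eq. (3.3) with state (V, U), where U = dV/dt.

        Whatever the initial condition, the orbit is drawn asymptotically
        onto the same limit cycle in the (V, U) phase plane."""
        v, u = v0, u0
        orbit = np.empty((steps, 2))
        for i in range(steps):
            orbit[i] = v, u
            dv = u
            du = -(v**2 - eps**2) * u - w0**2 * v   # Eq. (3.3)
            v, u = v + dv * dt, u + du * dt
        return orbit

    inner = van_der_pol_orbit(0.01, 0.0)  # infinitesimal perturbation near the origin
    outer = van_der_pol_orbit(4.0, 0.0)   # finite perturbation beyond the limit cycle
    # both orbits converge onto the same closed curve, the limit cycle

The same integration with the nonlinear coefficient replaced by a positive constant would instead spiral into the fixed point at the origin, which is the contrast drawn above between a dissipative linear oscillator and a self-exciting nonlinear one.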

3.1.1 Super Central Pattern Generator (SCPG) model of gait


Traditionally, the legged locomotion of animals is understood through the
use of a central pattern generator (CPG), an intraspinal network of neu-
rons capable of producing a syncopated output [62, 376, 402]. The implicit
assumption in such an interpretation is that a given limb moves in direct
proportion to the voltage generated in a specific part of the CPG. Because
the movement of each limb cyclically repeats itself almost identically its
dynamics can be described by a nonlinear oscillator for each limb partic-
ipating in the locomotion process [61, 380]. Experiments establishing the

existence of a CPG have been done on animals with spinal cord transec-
tions. It has been shown that such animals are capable of walking under
certain circumstances. Walking, for example, by a cat with its brain stem
sectioned rostral to the superior colliculus, is very close to normal, on a flat,
horizontal surface, when a section of the midbrain is electrically stimulated.
Stepping continues as long as a train of electrical pulses is used to drive
the gait cycle. However this is not a simple linear response process since
the frequency of the stepping increases in proportion to the amplitude of
the stimulation and is insensitive to changes in the frequency of the driver.

FIGURE 3.4. Stride intervals (sec) for slow, normal and fast gaits, for free walking
(upper panel) and metronome-paced walking (lower panel). The duration of the data
collection for each free walking series was approximately one hour, and half that for the
metronome data. (From [304] with permission.)

It has been established that the nonlinear analysis of gait data supports
the conjecture made in biomechanics that the CPG in human locomotion
can be modeled as a correlated system of coupled nonlinear oscillators. If
the observed random variations in the stride intervals or normal walking
were related to the chaotic behavior of such nonlinear oscillators, this would
explain the type of multifractal behavior observed in the gait data. The gait
data studied by my colleagues and me [304, 376] depicted in Figure 3.4
were taken from public domain archives Physionet [272] set up by my

friend and collaborator Ary Goldberger at Harvard Medical School and
consisted of data sets of stride interval time series for ten healthy young
men walking at slow, normal and fast paces for a period of one hour. The
approximately four thousand data points for each of the ten walkers, in each
of the six modes of walking, were more than sufficient to provide statistically
significant results [304].
Figure 3.4 shows samples of stride interval sequences under different con-
ditions. Each time series is approximately one hour long for natural slow,
normal and fast walking and about 30 minutes long for metronomically
constrained walking for slow, fast and normal walking. Participants in the
study had no history of neuromuscular, respiratory or cardiovascular disor-
ders. They were not taking medications and had a mean age of 21.7 years
(range: 18-29 years); mean height 1.77 ± 0.08 meters and mean weight
71.8 ± 10.7 kg. Subjects walked continuously on level ground around an
obstacle-free, long (either 225 or 400 meters), approximately oval path and
the stride interval was measured using ultra-thin, force sensitive switches
taped inside one shoe. For the metronomic constrained walking, the indi-
viduals were told only once, at the beginning of their walk, to synchronize
their steps with the metronome.
Figure 3.5 shows that stride interval time series for human gait are char-
acterized by strong persistent fractal properties very close to that of 1/f-
noise, h ≈ 0. However, normal gait is usually slightly less persistent than
both slow and fast gait. The slow gait has the most persistent fluctuations
and may present non-stationary properties, h > 0. The slow gait fluctu-
ations may also deviate most strongly from person to person. The higher
values of the Hölder exponents for both slow and fast gait, relative to nor-
mal gait, may be explained as due to a stress condition that increases the
persistency and, therefore, the long-time correlation of the fluctuations. A
careful comparison of the widths of the distributions of Hölder exponents
for the different gaits with the widths for a corresponding monofractal noise
data set of the same length has established that the stride interval of hu-
man gait is only weakly multifractal. However, the multifractal structure
is slightly more prominent for fast and slow gait than for normal gait.
If the pace is constrained by a metronome, beating at the average rate of
the cohort of walkers, the stochastic properties of the stride interval time
series change significantly in a wide range from persistent to antipersistent.
In general, in each case there is a reduction in the long-term memory and an
increase in randomness as the shift of the Hölder exponent histogram and
change in the width of the distribution in Figure 3.5 shows. By averaging
the results for 10 subjects we get: h0,n = −0.37, σ0,n = 0.063; h0,s = −0.48,
σ0,s = 0.066; h0,f = −0.36, σ0,f = 0.059. The figure clearly indicates
that under the constraint of a metronome, the stride interval of human

gait increases its randomness because the distribution of Hölder exponents
is centered more closely to h = −0.5, which is the characteristic value of
Normal or uncorrelated random noise. The data present large variability
in the values of the Hölder exponents from persistent to antipersistent
fluctuations, that is, the exponent spans the entire range of −1 < h < 0.
However, the metronome constraint usually has a relatively minor effect
upon individuals walking normally. Probably, by walking at a normal speed
an individual is more relaxed and he/she walks more naturally. The fast
gait appears to be almost uncorrelated noise while the slow gait presents
a large variability from persistent to antipersistent fluctuations with an
average that is close to random noise.
Parkinson’s and Huntington’s diseases are typical disorders of the basal
ganglia and are associated with characteristic changes in gait rhythm. Be-
cause the neurons of the basal ganglia likely play an important role in
regulating muscular motor-control such as balance and sequencing of move-
ments, it is reasonable to expect that the stride-to-stride dynamics, as well
as the gait cycle duration is affected by these neurodegenerative diseases.
This is seen in the distribution of Hölder exponents for the elderly and
those with Parkinson’s disease depicted in Figure 3.5.
FIGURE 3.5. Typical Hölder exponent histograms p(h) for the stride interval series in
the freely walking and metronome conditions for normal, slow and fast pace, and for
elderly subjects and for a subject with Parkinson’s disease. The average properties are
discussed in the text. (From [304] with permission.)

Hausdorff et al. [145] established that long-time correlations of up to
10³ strides are detected in the three modes of free walking; results confirmed
by Scafetta et al. [304] using a much different method of analysis. However,
the study of the distribution of the variable fractal dimension allows for
an even richer interpretation of the scaling behavior of the stride rate vari-
ability (SRV) time series [380]. The time series is determined to be weakly
multifractal, as was determined by earlier analyses. The multifractality
does not strictly invalidate the interpretation of the scaling behavior; that
being that the statistical correlation in the SRV fluctuations over thou-
sands of strides decay in a scale invariant manner. But it does suggest
the scale-invariant decrease in the correlations is more complicated than
was previously believed. The average fractal dimension is determined to
be dependent on the average rate at which an individual walks, but not
monotonically dependent. The narrowness of the interval around the fractal
dimension in the singularity spectrum suggests that this quantity may be
a good quantitative measure of an individual’s dynamical variability. We
suggest the use of the fractal dimension as a quantitative measure of how
well the motor control system is doing in regulating locomotion. Further-
more, excursions outside the narrow interval of fractal dimension values for
apparently healthy individuals may be indicative of hidden pathologies.
As was explained with my friend Nicola Scafetta [305] the discovery that
locomotion is a complex cyclical phenomenon involving both order and
randomness in different amounts suggested the development of a corre-
lated stochastic version of a CPG for the purpose of capturing the fractal
properties of the inter-stride interval sequences. This kind of model was
introduced by Hausdorff et al. [145] and was later extended [16, 147] to de-
scribe the changing of gait dynamics as humans mature from childhood to
adulthood. This stochastic model essentially consists of a random walk on
a correlated chain, where each node of the chain is assumed to be a neural
center firing at a different frequency. This random walk is found to generate
a fractal process. More recently, we [376] developed a super central pattern
generator (SCPG) that reproduces both fractal and multifractal properties
of the gait dynamics.
In this subsection I review the major observed patterns in human gait
dynamics and describe the SCPG as a model of those dynamics, and
show that two parameters, the average frequency f0 and the intensity A of
the forcing component of the nonlinear oscillator, are sufficient to determine
both the fractal and multifractal variability of human gait under several
conditions.
The multifractal spectrum is analyzed by a methodology introduced by
Struzik [330], which we extended [304] to estimate the local Hölder exponents
of stride interval time series. The multifractal properties of a sequence can

be determined from studying a distribution of Hölder exponents that is
centered at a given mean value h with a standard deviation width denoted
by σ. The exponent h is related to the Hurst exponent by H = h + 1.
The standard deviation width σ is an indicator of the possible multifrac-
tal nature of the time series. A distribution of Hölder exponents can be
approximately fitted by a Normal distribution of the type
 
g(h) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{(h - h_0)^2}{2\sigma^2} \right], \qquad (3.4)

where the value h0 is often a good approximation to h. Usually, h0 is
slightly larger than h because the distribution of Hölder exponents presents
a slightly positive skewness. The slow, normal and fast gait time series
are determined to have the multifractal distributions shown in Figure 3.6;
the distributions shown in that figure are averages over the cohort.
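Fitting Eq. (3.4) to data reduces to estimating h0 and σ from a collection of local Hölder exponents. A minimal sketch follows, in which the surrogate sample of exponents merely stands in for the output of a genuine multifractal analysis; the numerical values mimic those quoted for normal gait.

    import numpy as np

    def fit_holder_distribution(h, bins=30):
        """Fit the Normal form of Eq. (3.4) to a sample of Holder exponents.

        The sample mean and standard deviation are the maximum-likelihood
        estimates of h0 and sigma; the histogram is returned alongside the
        fitted density for visual comparison."""
        h = np.asarray(h, dtype=float)
        h0, sigma = h.mean(), h.std()
        density, edges = np.histogram(h, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        g = np.exp(-(centers - h0)**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
        return h0, sigma, centers, density, g

    # surrogate exponents with values typical of normal gait (see Figure 3.6)
    rng = np.random.default_rng(1)
    sample = rng.normal(-0.09, 0.07, 4000)
    h0, sigma, centers, density, g = fit_holder_distribution(sample)
    print(f"h0 = {h0:.3f}, sigma = {sigma:.3f}")

A width σ substantially larger than that obtained for a monofractal surrogate of the same length is the signature of multifractality referred to above.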
We [376] introduced the SCPG model of locomotion that governs the
stride interval time series for human gait. The SCPG model incorporates
two separate mechanisms simulating ‘stress’ into the generated stride in-
terval time series. The term stress is intended to denote any mechanism
or cause that induces an alteration of the stride dynamics relative to that
observed for adult normal locomotion. One stress mechanism, which has an
internal origin, increases the correlation of the time series due to the change
in the velocity of the gait from normal to the slower or faster regimes. The
second stress mechanism has an external origin and decreases the long-
range time correlation of the sequences as under the frequency constraint
of a metronome. We modeled the time series for walking assuming that
the intensity of the impulses of the firing neural centers regulates only the
inner virtual frequency of a forced van der Pol oscillator. A van der Pol
oscillator is herein adopted because, as explained in the previous subsec-
tion, it is a prototype of relaxation oscillators capable of producing stable
oscillations over limit cycles, which describe the quasi-periodic gait dynam-
ics quite well. The observed stride interval is assumed to coincide with the
actual period of each cycle of the van der Pol oscillator; a period that de-
pends on the unperturbed inner frequency of the oscillator, as well as on
the amplitude and frequency of the forcing function.
We mimic human gait with a single nonlinear oscillator. In the SCPG
model we use a forced van der Pol oscillator with displacement x(t):
\ddot{x} + \mu \left( x^2 - p^2 \right) \dot{x} + \left( 2\pi f_j \right)^2 x = A \sin\left( 2\pi f_0 t \right). \qquad (3.5)

The parameter p controls the amplitude of the oscillations, μ controls the
degree of nonlinearity of the oscillator, fj is the inner virtual frequency of

the oscillator during the j th cycle that is related to the intensity of the
j th neural fired impulse, and A and f0 are respectively the strength and
the frequency of the external driver. The frequency of the oscillator would
be f = fj if A = 0. We notice that the nonlinear term, as well as the
driver, induce the oscillator to move on a limit cycle. The actual frequency
of each cycle may differ from the inner virtual frequency fj . We assume
that at the conclusion of each cycle, a new cycle is initiated with a new
inner virtual frequency fj produced by the SCPG model while all other
parameters are kept constant. However, the simulated stride interval is not
1/fj but is given by the actual period of each cycle of the van der Pol
oscillator. We found this mechanism more interesting than that proposed
by Hausdorff et al. [147] and Ashkenazy et al. [16] who added noise to the
output of each node to mimic biological variability. In fact, we noticed that
the so-called biological noise is naturally produced by the chaotic solutions
of the nonlinear oscillators in the SCPG, here that is the forced van der
Pol oscillator fluctuating over its limit cycle.
We assume that the neural centers of the SCPG may fire impulses with
different voltage amplitudes that would induce virtual frequencies {fi } with
finite-size correlations. Therefore we model the time series of virtual fre-
quencies directly. If the reader is interested in the mathematical details of
SCPG they may be found in the literature [305, 376].

FIGURE 3.6. Histogram and probability density estimation of the Hölder exponents:
slow (star; h0 = 0.046, σ = 0.102), normal (triangle; h0 = −0.092, σ = 0.069), fast
(circle; h0 = −0.035, σ = 0.081) gaits. Each curve is an average over the ten members of
the ten cohorts in the experiment. The fitting curves are Normal functions with average
h0 and standard deviation σ. By changing the gait mode from slow to normal the Hölder
exponent h decreases but from normal to fast it increases. There is also an increase
in the width of the distribution σ by moving from the normal to the slow or fast gait
modes. (From Scafetta et al. [304] with permission.)

The SCPG is used to simulate the stride interval of human gait under a
variety of conditions [305, 376]. We use the average experimentally deter-
mined value of the basic frequency, f0,n = 1/1.1 Hz, so that the average
period of the normal gait is 1.1 second; the frequency of the slow and fast
gait are chosen to be respectively f0,s = 1/1.45 Hz and f0,f = 1/0.95 Hz,
with an average period of 1.45 and 0.95 seconds, respectively, that is similar
to experimentally realized slow and fast human gaits shown in Figure 3.4.
By using the random walk process to activate a particular frequency of the
short-time correlated frequency neural chain, we obtain the time series of
the frequencies {fj } to use in the time evolution of the van der Pol oscilla-
tor. For simplicity, we keep constant the two parameters of the nonlinear
component of the oscillator (3.5), μ = 1 and p = 1. The only parameters
allowed to change in the model are the mean frequency f0 that changes
also the correlation length, and the intensity A of the driver of the van der
Pol oscillator (3.5).
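A minimal sketch of the oscillator stage of this simulation follows. The stochastic neural chain is replaced here by a simple correlated random sequence for the inner frequencies fj, an illustrative stand-in for the random walk over the frequency chain described in [305, 376]; each simulated stride interval is read off as the actual period of one cycle of the forced van der Pol oscillator of Eq. (3.5).

    import numpy as np

    rng = np.random.default_rng(2)
    mu, p = 1.0, 1.0              # nonlinear parameters held fixed, as above
    f0, A = 1.0 / 1.1, 0.5        # normal gait; the driver strength A is illustrative
    dt, n_strides = 0.001, 256

    x, v, t = 1.0, 0.0, 0.0
    fj = f0                       # inner virtual frequency for the current cycle
    last_crossing, intervals = None, []

    while len(intervals) < n_strides:
        # one Euler step of the forced van der Pol oscillator, Eq. (3.5)
        a = -mu * (x**2 - p**2) * v - (2.0*np.pi*fj)**2 * x + A * np.sin(2.0*np.pi*f0*t)
        x_new = x + v * dt
        v += a * dt
        # an upward zero crossing of x marks the completion of one gait cycle
        if x < 0.0 <= x_new:
            if last_crossing is not None:
                intervals.append(t - last_crossing)
                # correlated fluctuation of the inner frequency for the next cycle
                fj = f0 + 0.9 * (fj - f0) + 0.01 * rng.standard_normal()
            last_crossing = t
        x, t = x_new, t + dt

    print(np.mean(intervals), np.std(intervals))

The statistics of the recorded intervals then play the role of the stride interval time series of Figure 3.4.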
An important question this study raises is which aspects of bipedal lo-
comotion are passively controlled by the biomechanical properties of the
body and what aspects are actively controlled by the nervous system. It
is evident that the rhythmic movements are controlled by both feedfor-
ward and feedback [192]. Thus, there is not a simple answer to the above
question because both the biomechanical properties of the body and the
nervous system are closely entangled and both can contribute to the pe-
culiar variability patterns observed in the data. Whether some degree of
stride variability can occur also in an automated passive model for gait,
for example, a walking robot, is a realistic expectation, in particular if the
robot can adjust its movements according to the environment. However,
human locomotion may be characterized by additional peculiar properties
which emerge from its psychobiological origin that naturally generates 1/f
scaling and long-range power-law correlated outputs [210].
The stride interval of human gait presents a complex behavior that de-
pends on many factors. The interpretation of gait dynamics that emerges
from the SCPG model is as follows: the frequency of walking may be as-
sociated with a long-time correlated neural firing activity that induces a
virtual pace frequency, nevertheless the walking is also constrained by the
biomechanical motor control cycle that directly controls movement and
produces the pace itself. Therefore, we incorporate both the neural firing
activity given by a stochastic CPG and the motor control constraint that
is given by a nonlinear filter characterized by a limit cycle. Thus, we
model our SCPG as the coupling of a stochastic CPG with a hard-wired
CPG model, and its output depends on many factors. The most important
parameters of the model are the short-correlation size that measures the
correlation between the neuron centers of the stochastic CPG; the intensity

FIGURE 3.7. Typical Hölder exponent histograms for computer-generated stride inter-
val series using the SCPG model in the freely walking and metronome paced conditions
for normal, slow and fast pace. The parameters of the SCPG model were chosen in such
a way as to approximately reproduce the average behavior of the fractal and multifractal
properties of the phenomenological data. The histograms are fitted with Normal func-
tions. The results appear qualitatively similar to those depicted in Figure 3.6. (From
[305] with permission.)

A of the forcing driving component of the nonlinear oscillator of Eq. (3.5)
and, of course, the mean frequency f0 of the actual pace that distinguishes
the slow, normal and fast gait regimes. The other parameters, γ, ρ, μ and p
may be, to a first-order approximation, fixed by the background conditions.
Walking is also significantly influenced by two different stress mecha-
nisms: (a) a natural stress that increases the correlation of the nervous system
and regulates the motion when the gait regime changes from a normal
relaxed condition to a consciously forced slower or faster gait regime; (b)
a psychophysical stress due to the constraint of following a fixed external
cadence such as a metronome. A psychophysical control, like that induced
by a metronome, breaks the long-time correlation of the natural pace and
generates a large fractal variability of the gait regime.
The SCPG model is able to mimic much of the complexity of the stride
interval sequences of human gait under the several conditions of slow, nor-
mal and fast regimes for both walking freely and keeping the beat of a

metronome. The model is based on the assumptions that human locomo-
tion is regulated by both the CNS and the INS. A network of neurons
produces a fractal output that is correlated according to the level of phys-
iologic stress and this network is coupled to the INS that generates the
rhythmic process of the pace. The combination of the two networks con-
trols walking and the variability of the gait cycle. It is the period of the
gait cycle that is measured in the data sets and the SCPG model faithfully
reproduces the stochastic and fractal characteristics of that phenomeno-
logical data. The correlation length in the SCPG determines the natural
stress discussed in (a), whereas the amplitude of the driver models the
psychological stress of the metronome in (b). Finally, the SCPG correctly
prognosticates that the decrease in the average long-time correlation of the
stride interval time series for children and for the elderly or for those with
neurodegenerative diseases can be understood as a decrease of the correla-
tion length among the neurons of the MCS due to neural maturation and
neurodegeneration, respectively.
It should not go without comment that people use the same control
system when they are standing still, maintaining balance, as they do when
they are walking. This observation would lead one to suspect that the
body’s slight movements around the center of mass would have the same
statistics as that observed during walking. These tiny movements are called
postural sway in the literature and have given rise to papers with such
interesting titles as “Random walking during quiet standing” [62]. It has
been determined that postural sway is really chaotic [42], so one might
expect that there exists a relatively simple dynamical model for balance
regulation that can be used in medical diagnosis. Here again the fractal
dynamics can be determined from the scaling properties of postural sway
time series and it is determined that a decrease of postural stability is
accompanied by an increase of fractal dimension.

3.1.2 The cardiac oscillator


This subsection borrows heavily from West et al. [364] just as it did in the
first edition of the book, but it now contains a number of updates involving
research done in the intervening years. Under physiologic conditions, the
normal pacemaker of the heart is the sinoatrial (SA) node — a collection
of cells with spontaneous automaticity located in the right atrium. The
impulse from the SA node spreads through the atrial muscle (triggering
atrial contraction). According to the traditional viewpoint, the depolariza-
tion wave then spreads through the atrioventricular (AV) node (junction)
and down the His-Purkinje system into the ventricles. The fundamental
premise in this model is that the AV node functions during normal sinus

rhythm as a passive conduit for impulses originating in the SA node, and
that the intrinsic automaticity of the AV node is suppressed during sinus
rhythm. This view assumes that the AV node does not actively generate
impulses or otherwise influence the SA node [347].
The alternate viewpoint, that of van der Pol and van der Mark, and the
one adopted here, is that the AV node functions as an active oscillator and
not simply as a passive resistive element in the cardiac electrical network
[128, 141, 181, 167]. The AV node having an active role is supported by
the clinical observation that, under certain conditions, the sinus and AV
nodes may become functionally disassociated so that independent atrial
(P) and ventricular (QRS) waves are seen on the electrocardiogram (AV
disassociation). Further, if the SA node is pharmacologically suppressed, or
ablated, then the AV node assumes an active pacemaker role. The intrinsic
rate of this AV nodal pacemaker is about two-thirds that of the SA node
in dogs [181] and in man as well.
In contrast to the traditional passive conduit theory of the AV node,
a nonlinear cardiac oscillator suggests that the SA and AV nodes may
function in an active and interactive way, with the faster firing SA node
appearing to entrain the AV node [364]. This entrainment should be bidi-
rectional, not unidirectional, with the SA node both influencing and being
influenced by the AV node. Previous nonlinear models [141, 167, 181] of the
supraventricular cardiac conduction system did not explicitly incorporate
this bidirectional type of interaction.
To simulate bidirectional SA-AV node interactions, we here adapt a com-
puter model of two coupled nonlinear oscillators first developed by Gollub et
al. [132], to describe trajectory divergence of coupled relaxation oscillators.
The circuit includes two tunnel diodes, electronic components depicted
in Figure 3.8, with the same type of nonlinear voltage-current relationship
found in physiological pacemakers and the hysteresis properties shown in
Figure 3.9. The dynamics of the coupled system can be better visualized if
we consider the two branches of the circuit separately. Consider a single
oscillator in isolation, for which, given an appropriate choice of V0 and
resistance R1 , an instability drives the circuit into oscillations in which the
loop indicated in Figure 3.9 is continually traversed in a period of order
L1 /R1 . The diode
current ID1 (in this case ID1 = I1 ) then has the form of a rising exponential
for low voltage (VL ) and a descending exponential for high voltage (VH ).
The voltage switches between these high and low values when ID1 attains
the threshold values IL or IH , respectively. The parameter values (Lj /Rj )
of each of the isolated oscillators are set to take into account the intrinsic
difference in rate between the two pacemakers (AV /SA = 2/3).
The two oscillators are coupled together by the conductances Gc ≡ 1/Rc
and G = 1/R. The state of the circuit is defined by a point in the four-

FIGURE 3.8. Analog circuit described by Eqs. (3.6) and (3.7) with tunnel diodes, re-
sistors and inductors. The overall voltage is provided by the battery V0 with the total
current I. (From West et al. [364] with permission.)

dimensional phase space with coordinate axes (ID1 , ID2 , VD1 , VD2 ). The
coupling results in a voltage drop VD1 − VD2 across Rc , producing a current
through each diode dependent on this voltage drop, and can result in in-
duced switching of one oscillator by the other. The time rates of change in
the current through the two diode branches of the circuit are determined
by Kirchhoff’s laws:

L_1 \frac{dI_1(t)}{dt} + (R + R_1) I_1(t) + R_2 I_2(t) = V_0 - V_{D1} \qquad (3.6)

L_2 \frac{dI_2(t)}{dt} + (R + R_2) I_2(t) + R_1 I_1(t) = V_0 - V_{D2} \qquad (3.7)
along with the various currents through the branches of the circuit

I_{D1} = I_1 + I_c \qquad (3.8)

I_{D2} = I_2 - I_c \qquad (3.9)

I_c = \left[ V_{D2} - V_{D1} \right] G_c . \qquad (3.10)

Gollub et al. [133] approximated the current-voltage characteristics of the
diode as shown in Figure 3.9 to be rectangular, so that VD1 = VL (a
constant) as the current increases from IL to IH and VD1 = VH (a constant)
as the current decreases from IH back to IL . However, they include in the
VL and VH the voltage drop across the diode caused by the coupling current
Ic :

VL = |Ic RD | , VH = 0.45V − |Ic RD | (3.11)


where the diode resistance RD is taken to be 5 Ω.

FIGURE 3.9. A typical voltage response curve across a diode is shown. The highest
current is IH , the lowest current is IL , the highest voltage is VH and the lowest voltage
is VL . The arrows indicate how the diode operation jumps discontinuously to VH at
constant IH , and to VL at constant IL .

Eqs.(3.6) and (3.7) constitute a coupled feedback system through the
I2 −dependence of the İ1 equation and the I1 −dependence of the İ2 equa-
tion. The two oscillators are linearly coupled by means of the resistors R
and Rc , and each one is driven by the voltage difference between the source
and the voltage dropped across the diode introducing the anharmonic effect
of the current-voltage response curve shown in Figure 3.9. Because the tun-
nel diodes are hysteretic (nonlinear) devices, as the current in one of them
increases, the voltage across it remains nearly the same (VL ) until the cur-
rent reaches IH , at which time the voltage suddenly switches to VH (> VL ).

At this point the current begins to decrease again with little or no change
in the voltage until the current reaches the value IL , at which point the
voltage switches back to VL . The cycle then repeats itself. The cycling of
the coupled system is depicted in Figure 3.10 which shows that the sharply
angled regions of the uncoupled hysteresis loops have been smoothed out by
means of the coupling. Here we use the model of Gollub et al. [133] in which
the transition between VL and VH on the upper branch and between VH
and VL on the lower branch of the hysteresis loop is instantaneous, because
of its simplicity. West et al. [364] have generalized this model to mimic the
smooth change from one branch of the hysteresis curve to the other that
is observed in physiological oscillators by replacing the above discontinu-
ity with a hyperbolic tangent function along with a voltage which linearly
increases in magnitude with time at the transition point IH and IL .
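The switching dynamics just described are straightforward to simulate by treating each diode as a two-state element that jumps between its voltage branches at the threshold currents. The sketch below follows the discontinuous version for simplicity, with the cross coupling switched off (Gc = 0, so by Eq. (3.11) VL = 0 and VH = 0.45 V); the component values are those quoted in the caption of Figure 3.11, but the threshold currents IL and IH are not stated in the text and the values used here are illustrative guesses.

    import numpy as np

    # component values from the caption of Figure 3.11; Gc = 0 so Ic = 0
    R, V0 = 3.2, 0.32
    R1, L1 = 1.3, 2.772e-6
    R2, L2 = 1.4, 3.732e-6
    VL, VH = 0.0, 0.45          # branch voltages, Eq. (3.11) with Ic = 0
    IL, IH = 0.002, 0.018       # switching thresholds: illustrative values only

    dt, steps = 1.0e-9, 400000
    I1, I2 = 0.0, 0.0
    VD1, VD2 = VL, VL           # both diodes start on the low-voltage branch
    trace = np.empty((steps, 2))

    for k in range(steps):
        trace[k] = I1, I2
        # Eqs. (3.6) and (3.7); the resistor R couples the two branches
        dI1 = (V0 - VD1 - (R + R1) * I1 - R2 * I2) / L1
        dI2 = (V0 - VD2 - (R + R2) * I2 - R1 * I1) / L2
        I1 += dI1 * dt
        I2 += dI2 * dt
        # hysteretic switching of each diode at the threshold currents
        if VD1 == VL and I1 >= IH:
            VD1 = VH
        elif VD1 == VH and I1 <= IL:
            VD1 = VL
        if VD2 == VL and I2 >= IH:
            VD2 = VH
        elif VD2 == VH and I2 <= IL:
            VD2 = VL

    # plotting trace[:, 0] against trace[:, 1] yields a closed curve of the
    # kind shown in Figure 3.11 whenever the two branches phase lock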

FIGURE 3.10. The hysteresis cycle of operation across the diode is depicted. The sharp
changes in voltage shown in Figure 3.9 are here smoothed out by the coupling between
diodes. (From West et al. [364] with permission.)

We have included two distinct types of coupling in our dynamic equa-
tions. The first is through the resistor R since the voltage applied to one
oscillator now depends on the current being drawn by the other one. The
second coupling is through the cross resistor Rc which directly joins the
two diodes. In this latter case the current through the diode is not the
same as that drawn by the inductor in the oscillator, but is modified by
the current through the cross coupling resistor, that is, it depends on the
relative values of V1 (t) and V2 (t).
Let us consider first the dynamics of the two coupled oscillators with only
the R−coupling present. This is accomplished by setting Gc = 0 (Rc = ∞)
in Eq.(3.10) resulting in Ic = 0. The dynamics of the coupled system can
be depicted by the orbits in the reduced phase space (I1 , I2 ) for a certain

set of system parameter values. Basically we observe that all four of the
dynamic variables, the two voltages V1 (t) and V2 (t), and the two currents
I1 (t) and I2 (t), are strictly periodic with period T for all applied voltages
V0 at which oscillations in fact occur. A periodic solution to the dynamic
equations Eqs.(3.6) and (3.7) is a closed curve in the reduced phase space
as shown in Figure 3.11. Here, for two periods in one oscillator we have
three in the other so that the coupled frequencies are in the ratio of three
to two. A closed orbit with 2m turns along one direction and of 2n turns in
the orthogonal direction indicate a phase locking between the two diodes
such that one diode undergoes n cycles and the other m cycles in a constant
time interval T for the coupled system. Figure 3.12 shows the time trace
of the voltage across diodes 1 and 2 for this case. We observe the 3:2 ratio
of oscillator frequencies over a broad range of values of V0 .

FIGURE 3.11. The current in diode 1 is graphed as a function of the current through
diode 2. We see that the trajectory forms a closed figure indicating the existence of a limit
cycle (R = 3.2Ω, V0 = 0.32V , R1 = 1.3Ω, L1 = 2.772μH, R2 = 1.4Ω, L2 = 3.732μH).
(From West et al. [364] with permission.)

For an externally applied voltage less than 0.225 V the two oscillators
become phase locked (one-to-one coupling) at a frequency
that is lower than the intrinsic frequency of the SA node oscillator,
but faster than that of the AV junction oscillator. In Figure 3.13A the
output of both oscillators in the coupled system is depicted, with parameter
values such that the uncoupled frequencies are in the ratio of three to two.
In the coupled system, the SA and AV oscillators are clearly one-to-one
phase locked due to their dynamic interaction.

FIGURE 3.12. Voltage pulses are shown as a function of time (dimensionless units) for
SA (solid line) and AV (dashed line) oscillators with parameter values given in Figure
3.11. Note that there are two AV pulses for three SA pulses, that is, a 3:2 phase locking.
(From West et al. [364] with permission.)

To simulate the effects of driving the right atrium at increasing rates
with an external pacemaker (an experiment done on dogs in the laboratory)
[181], an external voltage of variable frequency was applied to the SA node
oscillator branch of the circuit. Externally ‘pacing’ the SA oscillator results
in the appearance of a 3:2 Wenckebach-type periodicity over an initial range
of driving frequencies. Furthermore, when the system is driven beyond a
critical point, a 2:1 ‘block’ occurs with only every other SA pulse being
followed by an AV pulse as shown in Figure 3.13.
While the type of equivalent circuit model given here is not unique, it
does lend support to a nonlinear concept of cardiac conduction. In partic-
ular, the model is consistent with the viewpoint that normal sinus rhythm
involves a bidirectional interaction (one-to-one phase locking) between cou-
pled nonlinear oscillators that have intrinsic frequencies in the ratio of
about 3:2. Furthermore, the dynamics suggest that AV Wenckebach and
2:1 block, which have traditionally been considered purely as conduction
disorders, may at least, under some conditions, relate to alterations in the
nonlinear coupling of these two active oscillators. Apparent changes in con-
duction, therefore, may under certain circumstances be epiphenomena. The
present model demonstrates that abrupt changes (bifurcations) in the phase
relation between the two oscillators occur when the intrinsically faster pace-
maker is driven at progressively higher rates. In the present model, over
a critical range of frequencies, a distinctive type of periodicity is observed

FIGURE 3.13. Voltage pulses with the same parameter values as in Figure 3.12 except
V0 = 0.18: (A) 1:1 phase locking persists when the SA node is driven by an external
voltage pulse train with pulse width 0.5 dimensionless time units and period 4.0. (B)
Driver period is reduced to 2.0 with emergence of 3:2 Wenckebach periodicity. (C) Driver
period reduced to 1.5, resulting in a 2:1 AV block. Closed brackets denote SA pulse
associated with AV response. Open brackets denote SA pulse without AV response
(‘non-conducted beat’). (From West et al. [364] with permission.)

such that the interval between the SA and AV oscillators becomes progres-
sively longer until one SA pulse is not followed by an AV pulse. This cycle
then repeats itself, analogous to AV Wenckebach periodicity which is char-
acterized by progressive prolongation of the PR interval until a P-wave is
not followed by a QRS-complex. These AV Wenckebach cycles, which may
be seen under a variety of pathological conditions, are also a feature of
normal electrophysiological dynamics and can be induced by driving the
atria with an electronic pacemaker [176].
The findings of both phase-locking and bifurcation-like behavior are par-
ticularly noteworthy in this two oscillator model because they emerge with-
out any special assumptions regarding conduction time between oscillators,
refractoriness of either oscillator to repetitive stimulation or the differential
effect of one oscillator on the other.

The observed dynamics support the contention that the AV junction may
be more than a passive conduit for impulses generated by the sinus node, as
also suggested by Guevara and Glass [141]. The present model is consistent
with the alternative interpretation that normal sinus rhythm corresponds
to one-to-one phase locking (entrainment) of two or more active oscillators,
and does not require complete suppression of the slower pacemaker by the
faster one, as do the passive conduit models. It should be emphasized,
however, that when two active pacemakers become one-to-one phase locked,
the intrinsically slower one may be mistaken for a passive element because
of its temporal relation to the intrinsically faster one. Furthermore, the
model is of interest because it demonstrates marked qualitative changes
in system dynamics, characteristic of AV Wenckebach and 2:1 AV block,
occurring when a single parameter (driving frequency) is varied over some
critical range of values.
Up to this point we have been using the traditional concepts of a limit
cycle to discuss one kind of dynamic process, that is, the beating of the
heart and the occurrence of certain cardiac pathologies. Zebrowski et al.
[409] consider a modification of the van der Pol oscillator by introducing
a potential function with multiple minima. Adjusting parameters in this
potential enables them to more closely model the heart’s dynamic response
to the autonomous nervous system. They explain the oscillator response to
a single pulse as well as to a periodic square-wave, the former producing a
change in phase and the latter an irregular response. In this way they are
able to gain insight into the underlying dynamics of irregular heart rate,
systole, sinus pause and other cardiac phenomena.
Extending this discussion, consider models of various other biorhythms
mentioned earlier. I explore certain of the modern concepts arising in non-
linear dynamics and investigate how they may be applied in a biomedical
context, where they eventually prove of value in understanding both erratic ECG
and EEG time series. It is apparent that the classical limit cycle is too
well ordered to be of much assistance in that regard, and so I turn to an
attractor that is a bit strange.

3.1.3 Strange attractors (deterministic randomness)


The name ‘strange attractor’ was given to those attractors on which, unlike
the system discussed in the preceding subsections, the dynamics give rise
to trajectories that are aperiodic. This means that a deterministic equa-
tion of motion gives rise to a trajectory whose corresponding time series
nowhere repeats itself over time; it is chaotic. The term chaotic refers to
the dynamics of the attractor, whereas strangeness refers to the topology
of the attractor. Juxtaposing the words deterministic and chaotic, the for-

mer indicating the property of determinableness (predictability) and the
latter that of randomness (unpredictability), usually draws an audience.
The expectation of people is that they will be entertained by learning how
the paradox is resolved. The resolution of the apparent conflict between
the historical and modern view of dynamic systems theory as presented
in classical mechanics, so eloquently stated by Laplace and Poincaré, re-
spectively, and quoted in the Introduction, is that chaos is not inconsistent
with the traditional notion of solving deterministic equations of evolution.
As Ford [99] states:
... Determinism means that Newtonian orbits exist and are
unique, but since existence-uniqueness theorems are generally
nonconstructive, they assert nothing about the character of the
Newtonian orbits they define. Specifically, they do not preclude
a Newtonian orbit from passing every computable test for ran-
domness or being humanly indistinguishable from a realization
of a truly random process. Thus, popular opinion to the con-
trary notwithstanding, there is absolutely no contradiction in
the term “deterministically random.” Indeed, it is quite reason-
able to suggest that the most general definition of chaos should
read: chaos means deterministically random.
From the point of view of classical statistical mechanics the idea of ran-
domness has traditionally been associated with the weak interaction of an
observable with the rest of the universe. Take for example the steady beat
of the heart: it would have been argued that a heartbeat is periodic and
regular. The beat-to-beat variability that is in fact observed (cf. Chapter
Two) would be associated with changing external conditions such as the
state of exercise, the electrochemical environment of the heart, and so on.
The traditional view requires there to be many (an infinite number) degrees
of freedom that are not directly observed, but whose presence is manifest
through fluctuations. However we now know that in a nonlinear system
with even a few degrees of freedom chaotic motion can be observed [364].
In this subsection I present some examples of nonlinear dynamic net-
works that lead to chaos. First a brief review of the classical work of Lorenz
[205] on a deterministic continuous dissipative system with three variables
is presented. The phase space orbit for the solution to the Lorenz system
is on an attractor, but of a kind on which the solution is aperiodic and
therefore the attractor is strange. We discuss this family of aperiodic so-
lutions and discover that chaos lurks in a phase space of dimension three.
Rössler [298] points out that if oscillation is the typical behavior of two-
dimensional dynamical systems, then chaos, in the same way, characterizes
three-dimensional continuous systems.
Thus, if nonlinearities are ubiquitous then so too must be chaos. This led
Ford to speculate on the existence of a generalized uncertainty principle
based on the notion that the fundamental measures of physics are actually
chaotic. The perfect clocks and meter sticks of Newton are replaced with
‘weakly interacting chaotic substitutes’ so that the act of measurement it-
self introduces a small and uncontrollable error into the quantity being
measured. Unlike the law of error conceived by Gauss, which is based on
linearity and the principle of superposition of independent events, the pos-
tulated errors arising from nonlinearities cannot be reduced by increasing
measurement accuracy. The error (noise) is generated by the intrinsic chaos
associated with physical being.
In his unique style Ford [99] summarizes those speculations in the fol-
lowing way:
Although much, perhaps most, of man’s impressive knowledge
of the physical world is based on the analytic solutions of dy-
namical systems which are integrable, such systems are, metaphor-
ically speaking, as rare as integers on the real line. Of course,
each integrable system is “surrounded” ... by various other sys-
tems amenable to treatment by perturbation theory. But even
in their totality, these systems form only an extremely small
subset of the dynamical whole. If we depart from this small
but precious oasis of analytically solvable, integrable or nearly
integrable systems, we enter upon a vast desert wasteland of un-
differentiated non-integrability. Therein the trackless waste, we
find the nomads: systems abandoned because they failed a qual-
ifying test for integrability; systems exiled for exhibiting such
complex behavior that, resistant to deterministic solution,
they were labeled intractable. Of course, we also find chaos in
full residence everywhere...
The modern view of randomness discussed in the Introduction can be
traced back to Poincaré, but the avalanche of contemporary interest dates
from the success of Lorenz in understanding the short-term variability of
weather patterns and thereby enhancing their predictability; subsequently
scientists considered a number of biomedical problems. Lorenz’s ap-
proach was to represent a forced dissipative geophysical hydrodynamic flow
by a set of deterministic nonlinear differential equations with a finite num-
ber of degrees of freedom. By forcing we mean that the environment pro-
vides a source of energy for the flow field, which in this case is a source of
heat at the bottom of the atmosphere. The dissipation in this flow extracts
energy from the temperature gradient but the forcing term puts energy
back in. For the particular physical problem Lorenz was investigating, the
number of degrees of freedom he was eventually able to use was three, by
convention let’s call them X, Y , and Z. In the now standard form these
equations are

dX/dt = −σX + σY (3.12)

dY/dt = −XZ + rX − Y (3.13)

dZ/dt = XY − bZ (3.14)

where σ, r and b are physical parameters. The solutions to this system
of equations can be identified with trajectories in phase space. What is of
interest here are the properties of non-periodic bounded solutions in this
three-dimensional phase space. A bounded solution is one that remains
within a restricted domain of phase space as time goes to infinity.
The phase space for the set of equations Eqs. (3.12)–(3.14), is three-
dimensional and the solution to them traces out a curve Γt (x, y, z) given
by the locus of values of X(t) = [X(t), Y (t), Z(t)] shown in Figure 3.14. We
can associate a small volume V0 (t) = X0 (t)Y0 (t)Z0 (t) with a perturbation
of the trajectory and investigate how this volume of phase space changes
with time. If the original flow is confined to a finite region R then the rate
of change of the small volume with time ∂V0 /∂t must be balanced by the
flux of volume J(t) = V0 (t)Ẋ(t) across the boundaries of R. The quantity
Ẋ(t) in the expression for the flux J represents the time rate of change of
the dynamical variables in the absence of the perturbations, that is, the
unperturbed flow field that can sweep the perturbation out of the region
R. The balancing condition is expressed by an equation of continuity and
in the physics literature is written

∂V0 (t)/∂t + ∇ · J(t) = 0 (3.15)
or substituting the explicit expression for the flux into Eq.(3.15) and re-
ordering terms yields

(1/V0 (t)) dV0 (t)/dt = ∂x Ẋ(t) + ∂y Ẏ (t) + ∂z Ż(t) (3.16)

where the total time derivative operator is

d/dt ≡ ∂/∂t + Ẋ(t) · ∇ (3.17)
and is called the convective or total derivative of the volume. Using the
Lorenz equations of motion for the time derivatives in Eq.(3.16) we obtain

(1/V0 (t)) dV0 (t)/dt = − (σ + b + 1) . (3.18)
Equation (3.18) is interpreted to mean that as an observer moves along
with an element of phase space volume V0 (t) associated with the flow field
the volume contracts at a rate (σ + b + 1), that is, the solution to Eq.(3.18)
is

V0 (t) = V0 (t = 0)e−(σ+b+1)t . (3.19)


Hence the volume goes to zero as t → ∞ at a rate which is independent
of the solutions X(t), Y (t) and Z(t) and depends only on the parameters
σ and b. As pointed out by Lorenz in his seminal work, this does not mean
that each small volume shrinks to a point in phase space; the volume may
simply become flattened into a surface, one with a fractional dimension,
that is, a non-integer dimension between two and three. Consequently the
total volume of the region initially enclosed by the surface R shrinks to
zero at the same rate, resulting in all trajectories becoming asymptotically
confined to a specific subspace having zero volume and a fractal dimension
[260].
FIGURE 3.14. The attractor solution to the Lorenz system of equations is depicted in a
three-dimensional phase space (X, Y, Z). The attractor is strange in that it has a fractal
(noninteger) dimension.

To understand the relation of this system to the kind of dynamical sit-
uation we were discussing in the preceding section we must study the be-
havior of the system on the limiting manifold to which all trajectories are
ultimately confined. This cannot be done analytically because of the non-
integrable nature of the equations of motion. Therefore, these equations are
integrated numerically on a computer and the resulting solution is depicted
as a curve in phase space for particular values of the parameters σ, b and
r. The technical details associated with the mathematical understanding
of these solutions are available in the literature; see for example Ott [260]
or Eckmann and Ruelle [81] and of course the original discussion of Lorenz
[205].
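To make the numerical procedure concrete, the following is a minimal sketch in Python (my own illustration, not from the original text; the fourth-order Runge-Kutta scheme, step size and initial condition are assumptions made for the example) of how Eqs. (3.12)-(3.14) may be integrated to trace out the trajectory of Figure 3.14:

def lorenz(state, sigma=10.0, b=8.0 / 3.0, r=28.0):
    # Right-hand side of Eqs. (3.12)-(3.14) with the parameter
    # values sigma = 10, b = 8/3, r = 28 quoted in the text.
    x, y, z = state
    return (-sigma * x + sigma * y, -x * z + r * x - y, x * y - b * z)

def rk4_step(f, state, dt):
    # One fourth-order Runge-Kutta step of size dt.
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt * (a + 2 * p + 2 * q + w) / 6.0
                 for s, a, p, q, w in zip(state, k1, k2, k3, k4))

state, dt, trajectory = (1.0, 1.0, 1.0), 0.01, []
for n in range(10000):
    state = rk4_step(lorenz, state, dt)
    trajectory.append(state)  # points settling onto the strange attractor

After an initial transient the recorded points settle onto the butterfly-shaped set of Figure 3.14, regardless of which initial condition in the basin of attraction is chosen.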
The strange attractor depicted in Figure 3.14 is not the only solution to
the Lorenz system of equations. This solution was obtained for the param-
eter values σ = 10, b = 8/3, r = 28. If the values σ = 10 and b = 8/3 are
held fixed and r is increased from zero, a wide range of attractors and sub-
sequent dynamic behaviors are obtained. The possible flow patterns make
the transition from stable equilibria independent of initial conditions, to
chaotic attractors that are sensitively dependent on initial conditions, to
‘chaotic transients’ [406] in which, for certain initial conditions, an appar-
ently chaotic trajectory emerges and asymptotically decays into a stable
equilibrium. The decay time is a sensitive function of the initial state.
Lorenz, in examining the solution to his equations, deduced that the
trajectory is apparently confined to a surface. Ott [260] commented that
the apparent ‘surface’ must be thin, and it is inside this thin ‘surface’ that
the complicated structure of the strange attractor is embedded. If one were
to pass a transverse line through this surface, the intersection of the line
with the surface would be a set of dimension D with 0 ≤ D ≤ 1. This
fractional dimension indicates that the intersection of the line and surface
is a Cantor set. The structure of the attractor is therefore fractal, and
the stretching and folding of the trajectory discussed earlier is a geometric
property of the attractor.
The behavior in the time series resulting from the dynamics on the mani-
fold in Figure 3.14 is apparent in the associated power spectrum. The spec-
trum is the mean square value of the Fourier transform of a time series,
that is, the Fourier transform of the autocorrelation function. Consider the
solution of one component of the Lorenz system, say X(t); it has a Fourier
transform over a time interval T defined by

XT (ω) ≡ ∫_{−T /2}^{T /2} dt X(t) e^{−iωt} (3.20)

and a power spectral density (PSD) given by

SXX (ω) ≡ lim_{T →∞} |XT (ω)|² / T. (3.21)

In Figure 3.15 are displayed the power spectral densities (PSD) SXX (ω) and
SZZ (ω) as calculated by Farmer et al. [91] using the trajectory shown in
Figure 3.14. It is apparent from the power spectral density of the X(t)
time series that there is no dominant periodic X−component in the dy-
namics of the attractor, although lower frequencies are favored over higher
ones. The power spectral density for the Z(t) time series has a much flat-
ter spectrum overall, but there are a few isolated frequencies at which
more energy is concentrated. This energy concentration would appear as
a strong periodic component in the time trace of Z(t). From these spec-
tra one would conclude that X(t) is non-periodic, but that Z(t) possesses
both periodic and non-periodic components. In fact from the linearity of
the Fourier transform Eq.(3.20) one could say that Z(t) is a superposition
of these two parts:

Z(t) = Zp (t) + Znp (t) (3.22)

The implication of Eq.(3.22) is that the autocorrelation function of Z(t)

CZZ (τ ) = lim_{t→∞} ⟨Z(t)Z(t + τ )⟩ (3.23)

may be written as the sum of a non-periodic component ⟨Znp (t)Znp (t + τ )⟩
that provides a background looking much like the spectrum for the X-
component, a periodic component ⟨Zp (t)Zp (t + τ )⟩ that consists of a number
of bumps, and a cross-correlation ⟨Zp (t)Znp (t + τ )⟩ that decays to zero as t → ∞.
To summarize: we have here a new kind of attractor that is referred to
as ‘strange’, whose dynamics are ‘chaotic’, and whose power spectral density
resulting from the time series of the trajectory has broadband components.
Dynamical systems that are periodic or quasi-periodic have a PSD com-
posed of delta functions, that is, very narrow spectral peaks; non-periodic
systems have broad spectra with no dramatic emphasis of any particular
frequency. It is this broad band character of the PSD that is currently used
to identify non-periodic behavior in experimental data.
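As an illustration, here is a minimal Python sketch (an assumption of mine, not part of the original text; it presumes the numpy library and a uniformly sampled record) of the finite-time estimate of Eqs. (3.20) and (3.21):

import numpy as np

def power_spectral_density(x, dt):
    # Discrete approximation to Eq. (3.20): the finite-time Fourier
    # transform of the record, followed by Eq. (3.21) for finite T.
    x = np.asarray(x, dtype=float)
    T = len(x) * dt                    # record length
    X_T = np.fft.rfft(x) * dt          # approximates the integral in Eq. (3.20)
    S = np.abs(X_T) ** 2 / T           # PSD estimate, Eq. (3.21)
    freqs = np.fft.rfftfreq(len(x), dt)
    return freqs, S

# Usage: sharp spectral lines indicate periodic or quasi-periodic
# dynamics; a broadband spectrum suggests chaos. For instance,
# freqs, S = power_spectral_density([p[0] for p in trajectory], 0.01)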

FIGURE 3.15. The power spectral densities SXX (ω) and SZZ (ω) are calculated using
the solution for the X, Z−components of the Lorenz equations. (From Farmer et al. [91]
with permission.)

So what does this all mean? In part it means that the dynamics of a
complex network such as the brain or the heart might be random even if its
description can be ‘isolated’ to a few (three or more) degrees of freedom that
interact in a deterministic but nonlinear way. If the system is dissipative,
that is, information is extracted from the network on average, but the
network is open to the environment, so that information is supplied to the
network by means of boundary conditions, then a strange attractor is not
only a possible manifold for the solutions to the dynamic equations; it, or
something like it, may even be probable.
The aperiodic or chaotic behavior of an attractor is subsequently shown
to be a consequence of a sensitivity to initial conditions: trajectories that
are initially nearby exponentially separate as they evolve forward in time
on a chaotic attractor. Thus, as Lorenz observed: microscopic perturbations

(unobservable changes in the initial state of a system) are amplified to affect
macroscopic behavior. This property is quite different from the qualitative
features of non-chaotic attractors. In the latter, orbits that start out near
one another remain close together forever. Thus small errors or perturba-
tions remain bounded and the behavior of individual trajectories remain
predictable.
As Crutchfield et al. [65] point out in their review Chaos, the key to
understanding chaotic behavior lies in understanding a simple stretching
and folding operation, which takes place in phase space [cf. Section (1.3)].
Recall that an attractor occupies a bounded region of phase space and that
two initially nearby trajectories on a chaotic attractor separate exponen-
tially in time. But such a process of separation cannot continue indefinitely.
In order to maintain both these properties the attractor must fold over onto
itself like a taco. Thus although orbits diverge and follow increasingly differ-
ent paths, they eventually come close together again but on different layers
of the fold. As they explain, the orbits on a chaotic attractor are shuffled
by this process of folding, much like a deck of cards is shuffled by a dealer.
The unpredictability or randomness of the orbits on such an attractor is a
consequence of this mixing process. The process of stretching and folding
continues incessantly in the morphogenesis of the attractor, creating folds
within folds ad infinitum. This means that such an attractor has structure
on all scales, that is to say, a chaotic attractor is a geometrically fractal
object. Thus, as we have discussed in the first chapter we would expect a
strange attractor to have a fractal dimension.
A second early example of a dynamic system whose solutions lie on a
chaotic attractor was given by Rössler [297] for a chemical process. He has
in fact provided over half a dozen examples of such attractors [298] that are
well worth studying. It is useful to consider his motivation for constructing
such a variety of chaotic attractors. In large part it was to understand
the detailed effects of the stretching and folding operations in nonlinear
dynamical systems. As discussed in Chapter One these operations mix the
orbits in phase space in the same way a baker mixes bread by kneading it,
that is, rolling it out and folding it over. Visualize a drop of red food coloring
placed on top of a ball of dough. This red spot represents the initially nearby
trajectories of a dynamic system. Now as the dough is rolled out for the
first time the red spot is stretched into an ellipse, which eventually is folded
over. After a sufficiently long time the red blob is stretched and folded many
times, resulting in a ball of dough with alternating layers of red and white.
Crutchfield et al. [65] point out that after 20 such operations the initial blob
has been stretched to more than a million times its original length, and its
thickness has shrunk to the molecular level. The red dye is then thoroughly

mixed with the dough, just as chaos thoroughly mixes the trajectories in
phase space on the attractor.
The dynamic equations for Rössler’s [297] three-degree-of-freedom system
are

dX/dt = −(Y + Z) (3.24)

dY /dt = X + aY (3.25)

dZ/dt = b + XZ − cZ (3.26)

where a, b and c are constants. For one set of parameter values, Farmer
et al. [91] referred to the attractor as ‘the funnel’; the obvious reason for
this name is seen in Figure 3.16. Another set of parameter values yields
the ‘simple Rössler attractor’, (cf. Figure 3.17d). Both of these chaotic at-
tractors have one positive Lyapunov exponent. As we mentioned earlier,
a Lyapunov exponent is a measure of the rate at which trajectories sepa-
rate one from the other (cf. Section 3.2). A negative exponent implies the
orbits approach a common fixed point. A zero exponent means the orbits
maintain their relative positions; they are on a stable attractor. Finally, a
positive exponent implies the orbits exponentially separate; they are on a
chaotic attractor. In Figure 3.17 are depicted phase space projections of the
attractor for various values of the parameters.
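Since the sign of the largest Lyapunov exponent distinguishes these cases, the following minimal sketch (my own illustration, not from the text; the Euler integration and the two-trajectory renormalization procedure are assumptions made for the example) estimates it for the Rössler flow, Eqs. (3.24)-(3.26):

import math

def rossler(s, a=0.2, b=0.2, c=5.0):
    # Right-hand side of Eqs. (3.24)-(3.26).
    x, y, z = s
    return (-(y + z), x + a * y, b + x * z - c * z)

def euler_step(f, s, dt):
    # A simple Euler step; crude but adequate for a sketch.
    d = f(s)
    return tuple(si + dt * di for si, di in zip(s, d))

dt, d0, steps, lyap_sum = 0.005, 1e-8, 400000, 0.0
s1, s2 = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
for n in range(steps):
    s1, s2 = euler_step(rossler, s1, dt), euler_step(rossler, s2, dt)
    d = math.sqrt(sum((u - v) ** 2 for u, v in zip(s1, s2)))
    lyap_sum += math.log(d / d0)
    # pull the second trajectory back to distance d0 along the
    # separation vector, so the growth rate can keep being sampled
    s2 = tuple(u + (v - u) * d0 / d for u, v in zip(s1, s2))

print("largest Lyapunov exponent ~", lyap_sum / (steps * dt))

A positive estimate signals exponential separation of nearby orbits; a value near zero indicates a stable periodic orbit.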

FIGURE 3.16. The ‘funnel’ attractor solution to the Rössler equations with parameter
values a = 0.343, b = 1.82 and c = 9.75. (From [297] with permission.)

Equations (3.24)–(3.26) constitute one of the simplest sets of differential equation
models possessing a chaotic attractor. Figure 3.17 depicts a projection of
the attractor onto the (X, Y )-plane for four different values of the param-
eter c. Notice that as c is increased the trajectory changes from a simple
limit cycle with a single maximum (Figure 3.17a), to one with two maxima
(Figure 3.17b) and so on until finally the orbit becomes aperiodic (Figure
3.17d). This is the process of bifurcation where the qualitative behavior of
the dynamics changes continuously as a control parameter is continuously
changed. In the present case this is a period-doubling bifurcation, where
the period of the cycle changes at certain values of the control parameter.
Now I turn my attention to discrete equations.

3.2 Nonlinear Bio-mapping


The modeling strategy adopted in the preceding section was essentially
that found throughout the physical sciences: construct continuous equa-
tions of evolution to describe the dynamics of the physical variable of in-
terest. In physical systems general principles such as the conservation of
energy, the conservation of action, or the conservation of momentum, are
used to construct such equations of motion. When this is not possible then
reasonable physical arguments to construct the equations are employed.
In any event, once the equations of evolution have been specified, proper-
ties of the solutions are examined in great detail and compared with the
known experimental properties of the physical system. It is the last stage,
the comparison with data, that ultimately determines the veracity of the
model dynamics. I followed this procedure in broad outline in the discus-
sion of the two coupled nonlinear oscillators modeling cardiac dynamics. In
that discussion a number of fundamental concepts in nonlinear dynamics
were reviewed that subsequently proved to be useful.
The brand of chaos associated with a continuous strange attractor is
now clear, so consider a one-dimensional non-invertible nonlinear map. One of
the fascinating aspects of these maps is that they appear to be the natural way
to describe the time development of networks in which successive genera-
tions are distinct. Thus, they are appropriate for describing the change in
population levels between successive generations: in biology, where popula-
tions can refer to the number of individuals in a given species or the gene
frequency of a mutation in an evolutionary model; in sociology, where
population may refer to the number of people adopting the latest fad or
fashion; in medicine, where the population is the number of individuals
infected by a contagious disease; and so on. The result of the mathematical
analysis is that for certain parameter regimes there are a large number of

FIGURE 3.17. An X − Y phase plane plot of the solution to the Rössler equations with
parameter values a = 0.20 and b = 0.20 at four different values of c indicated in the
graphs.

classes of discrete dynamical models (maps) with chaotic solutions. The
chaos associated with these solutions is such that the orbits are periodic or
erratic in time, and can be related to the chaos observed in the time series
for strange attractors. Whether one describes the system’s dynamics with
a nonlinear map or whether the map arises from a projection of the dy-
namics from a higher dimensional space, they both indicate that one must
abandon the notion that the deterministic nonlinear evolution of a process
implies a predictable result. One may be able to solve the discrete equa-
tions of motion only to find a chaotic solution that requires a distribution
function for making predictions.
The continuous nonlinear differential equations can be used to define
discrete nonlinear mappings. This approach is more intuitive than a formal

FIGURE 3.18. Next amplitude maximum plot of the solution to the Rössler equations
for c = 5, a = 0.2 and b = 0.2. Each amplitude of the oscillation of X was plotted against
the preceding amplitude.

mathematical introduction of the same concepts. The nth maximum of say
X(t) in the Rössler attractor in Figure 3.17d can be related to the (n + 1)st
maximum. This relation can be obtained by noting the intersection of the
trajectory in Figure 3.17d with a line inserted transverse to the attractor. In
this way the plot of the maximum shown in Figure 3.18 is obtained. The
curve in this latter figure yields the functional equation

Xn+l = f (Xn ) (3.27)

which is a mapping equation. Figure 3.18 suggests how to replace a con-
tinuous model by one that is discrete.
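A minimal sketch of this construction (my own, under the assumption that xs is a list of uniformly sampled X(t) values, such as the Rössler X-component): successive local maxima of the series supply the pairs whose graph is the curve of Figure 3.18, the empirical version of Eq. (3.27).

def next_maximum_pairs(xs):
    # Collect the local maxima of the sampled time series X(t).
    maxima = [xs[i] for i in range(1, len(xs) - 1)
              if xs[i - 1] < xs[i] >= xs[i + 1]]
    # Pair each maximum with its successor: (X_max(n), X_max(n+1)),
    # the next-amplitude map of Eq. (3.27) and Figure 3.18.
    return list(zip(maxima[:-1], maxima[1:]))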
An alternative description of the evolution of biological networks from
that adopted in Chapter Two, one which emphasizes the difference be-
tween physical and biological networks, is presented in this section for a
number of cases of interest. Just as in Section 3.1 the dynamics of a system
are characterized by an N −component vector X = (X1 , X2 , ..., XN ) and again,
in order to determine the future evolution of the system from its present
state, a dynamic rule is required for each of the components. For a great many

biological and ecological networks the variables are not considered to be
continuous functions of time, but rather, as in the case of animal popula-
tions, they are considered to be functions of a discrete time index specifying
successive generations. The minimum unit of time change for the dynamic
equations would in this case be given by unity, that is, the change over a
single generation. Thus, the equations of motion instead of being given by
Eq. (3.2) would be of the form

X(n + 1) = F[X(n)] (3.28)


where the changes in the vector X(n) between generation n and n + 1 are
determined by the function F[X(n)]. If at generation n = 0 we specify the
components of X(0), that is, the set of circumstances characterizing the
system, then the evolution of the system is determined by iteration (map-
ping) of the recursion relation Eq.(3.28) away from the initial state. Even
in systems that are perhaps more properly described by continuous time
equations of motion it is thought by many [60], that a discrete time repre-
sentation may be used to isolate simplifying features of certain dynamical
systems.

3.2.1 One-dimensional maps


The evolution equation in a discrete representation is called a map and
the evolution is given by iterating the map, that is, by the repeated appli-
cation of the mapping operation to the newly generated points. Thus, an
iterated equation of the form Xn −→ Xn+1 = f (Xn ) , where f (·) maps
the one-dimensional interval [0, 1] onto itself, is interpreted as a discrete
time version of a continuous dynamical system. The choice of interval [0, 1]
is arbitrary since the change of variables Y = (X − a)/(b − a) replaces a
mapping of the interval [a, b] into itself by one that maps [0, 1] into itself.
For example, consider the continuous trajectory in the two-dimensional
phase space depicted in Figure 3.19. The intersection points of the orbit
with the X−axis are denoted by X1 , X2 .... The point Xn+1 can certainly
be related to Xn by means of the function f determined by the trajectory.
Thus, instead of solving the continuous differential equations that describe
the trajectory, in this approach one produces models of the mapping func-
tion f and studies the properties of Xn+1 = f (Xn ). Here, as we have said,
n plays the role of the time variables. This strategy has been applied to
models for biological, social, economic, chemical and physical systems. May
[228] has pointed out a number of possible applications of the fundamental
equation for a single variable

Xn+1 = f (Xn ) (3.29)

FIGURE 3.19. The spiral is an arbitrary orbit depicting a function y = f (x). The
intersection of the spiral curve with the x-axis defines a set of points x1 , x2 , ...that can
be obtained from a mapping determined by the mapping function f (x).

In genetics, for example, Xn could describe the change in the gene fre-
quency between successive generations; in epidemiology, the variable could
denote the fraction of the population infected at time n; in psychology,
certain learning theories can be cast in the form where Xn is interpreted
as the number of bits of information that can be remembered up to gen-
eration n; in sociology, the iterate might be interpreted as the number of
people having heard a rumor at time n and Eq.(3.29) would then describe
the propagation of rumors in societies of various structures; see, for exam-
ple, Kemeny and Snell [183]. The potential applications of such modeling
equations are therefore restricted only by our imaginations.
Consider the simplest mapping, also called a recursion relation, in which
a population Xn of organisms per unit area on a petri dish in the nth gener-
ation is strictly proportional to the population in the preceding generation
with a proportionality constant μ:

Xn = μXn−1 , n = 1, 2, ... (3.30)

The proportionality constant is given by the difference between the birth
rate and death rate and is therefore the net rate of change of the population.

Equation (3.30) is quite easy to solve. Suppose that the population has a
level X0 = N0 at the initial generation; then the recursion relation yields
the sequence of relations

X1 = μN0 , X2 = μX1 = μ2 N0 , .. (3.31)


so that in general

Xn = μn N0 , n = 0, 1, 2, ... (3.32)
This rather simple solution already exhibits a number of interesting prop-
erties. First, if the net birth rate μ is less than unity, then we can write
μn = e−nβ where β > 0 so that the population decreases exponentially
between successive generations (note β = − ln μ). This is a reflection of the
fact that with μ < 1, the population of organisms fails to reproduce itself
from generation to generation and therefore it exponentially approaches
extinction:
lim_{n→∞} Xn = 0 if μ < 1. (3.33)

On the other hand if μ > 1, then we can write μn = enβ where β (= ln μ) >
0, so the population increases exponentially from generation to generation.
This is a reflection of the fact that with μ > 1 the population has an excess
at each generation resulting in a population explosion. This is the Malthus
exponential population growth:

lim_{n→∞} Xn = ∞ if μ > 1. (3.34)

The only value of μ for which the population does not have these extreme
tendencies is μ = 1, when, since the population reproduces itself exactly in
each generation, we obtain the unstable situation:

lim_{n→∞} Xn = N0 if μ = 1. (3.35)

Of course this simple model is no more valid than the continuous growth
law of Malthus [225], which he used to describe the exponential growth
of human populations. It is curious that the modeling of such growth, al-
though attributed to Malthus, did not originate with him. In fact Malthus
was an economist and clergyman interested in the moral implications of
such population growth. His contribution to population dynamics was the
exploration of the consequences of the fact that a geometrically growing
population is always outstripped by a linearly growing food supply, result-
ing in overcrowding and misery. Why the food supply should grow linearly
was never questioned by him. A more scientifically oriented investigator,
Verhulst [348], put forth a theory that mediated the pessimistic view of
Malthus. Verhulst noted that the growth of real populations is not un-
bounded. He argued that such factors as the availability of food, shelter,
sanitary conditions, and so on, all restrict (or at least influence) the growth
of populations. He included these effects by making the growth rate μ a
function of the population level. His argument allows generalization of the
discrete model to include the effects of limited resources. In particular, he
assumed the birthrate to decrease with increasing population in a linear
way:

μ −→ μ(Xn ) = μ [1 − Xn /Θ] (3.36)


where Θ is the saturation level of the population. Thus the linear recursion
relation Eq. (3.30) is replaced with the nonlinear discrete logistic equation,

Xn+1 = μXn [1 − Xn /Θ] . (3.37)

It is clear that when the population is very far from its saturated level
Xn << Θ the number of people grows exponentially since the nonlinear
term is negligible. However at some point the ratio Xn /Θ is of the order
unity and the rate of population growth is retarded. When Xn = Θ there
are no more births. Biologically the regime Xn > Θ corresponds to a neg-
ative birthrate, or the number of deaths exceeds the number of births, and
so we restrict the region of interpretation of this model to [1 − Xn /Θ] > 0.
Finally, we reduce the number of parameters from two, μ and Θ, to one
by introducing Yn = Xn /Θ the fraction of the saturation level achieved by
the population at generation n. In terms of this ratio variable the recursion
relation Eq. (3.37) becomes the normalized logistic equation

Yn+1 = μYn [1 − Yn ] (3.38)


Segal [314] challenges the readers of his book (at this point in the analy-
sis of this mapping) to attempt to predict the type of behavior manifested
by the solution to Eq. (3.38), for example, are there periodic components
to the solution? Does extinction ever occur? His intent was to alert the
reader to the inherent complexity contained in the deceptively simple look-
ing equation. I examine some of these general properties shortly, but first
let us explore the example a bit more fully. My intent is to introduce the
reader to a number of fundamental dynamical concepts that are useful in
the subsequent study of biomedical phenomena and their data.
Recall that extinction was the solution to the simple system Eq. (3.30)
when μ < 1. Is extinction a possible solution to the logistic equation? If it
is, then once that state is attained, it must remain unchanged throughout
the remaining generations. Put differently, extinction must be a steady-
state solution of the recursion relation. A steady-state solution is one for
which Yn = Yn+1 for all n. Let us assume the existence of a steady-state
level Yss of the population such that Eq. (3.38) becomes

Yss = μYss [1 − Yss ] (3.39)


for all n, since in the steady-state Yn+1 = Yn = Yss . Equation (3.39) defines
the quadratic equation
Yss² + (1/μ − 1) Yss = 0, (3.40)

which has the two roots Yss = 0, and Yss = (1 − 1/μ). The Yss = 0
root corresponds to extinction, but we now have a second steady solution
to the mapping Yss = 1 − 1/μ, which is positive for μ > 1. One of the
questions that is of interest in the more general treatment of this problem
is to determine to which of these steady states the population evolves as
the years go by: to extinction or some finite constant level.
Before we examine the more general properties of Eq.(3.38) and equations
like it, let us use a more traditional tool of analysis and examine the stability
of the two steady states found above. Traditionally the stability of a system
in the vicinity of a given value is determined by perturbation theory. I use
that technique now and write

Yn = Yss + ξn (3.41)

where ξn << Yss so that Eq.(3.41) denotes a small change in the rela-
tive population from its steady-state value. Substituting Eq. (3.41) into
Eq. (3.38) yields

Yss + ξn+1 = μ (Yss + ξn ) [1 − Yss − ξn ] . (3.42)

Then using Eq.(3.39) to eliminate certain terms and neglecting terms


quadratic in ξn results in

ξn+1 = μ (1 − 2Yss ) ξn (3.43)

as the recursion relation for the perturbation. In the neighborhood of ex-


tinction, the Yss = 0 steady state, Eq. (3.43) reduces to Eq. (3.30) in the
variable ξn rather than Xn . Therefore if 0 < μ < 1 the fixed point Yss = 0
is stable and if μ > 1 the fixed point is unstable. By stable we mean that
ξn → 0 as n → ∞ if 0 < μ < 1 so that the perturbed system returns to
the fixed point, that is, ξn decreases exponentially in n. By unstable we
mean that ξn → ∞ as n → ∞ if μ > 1 so that the perturbation grows
without bound and never returns to the fixed point, that is, the perturba-
tion increases exponentially with n. Of course μ = 1 implies the fixed point
is neutrally stable and neither returns to nor diverges from Yss = 0. It is
also clear in the unstable case that the condition for perturbation the-
ory eventually breaks down as ξn increases. When this divergence occurs a
more sophisticated analysis than linear perturbation theory is required.
In the neighborhood of the steady state Yss = 1 − 1/μ the recursion
relation specifying the stability condition becomes
ξn+1 = (2 − μ) ξn . (3.44)
The preceding analysis can again be repeated with the result that if 1 >
2 − μ > −1 the fixed point Yss = 1 − 1/μ is stable, which implies that
the birthrate is in the interval 1 < μ < 3. The stability is monotonic for
1 < μ < 2, but because of the changes in sign it is oscillatory for 2 < μ < 3.
Similarly the fixed point is unstable for 0 < μ < 1 (monotonic) and μ > 3
(oscillatory).
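These stability conditions are easily checked numerically; the following minimal Python sketch (my illustration, not from the text; the perturbation size and iteration count are arbitrary choices) iterates Eq. (3.38) from a small perturbation of the nontrivial fixed point:

def logistic(y, mu):
    # The normalized logistic map, Eq. (3.38).
    return mu * y * (1.0 - y)

for mu in (0.5, 2.0, 2.8, 3.2):
    y_ss = max(0.0, 1.0 - 1.0 / mu)   # nontrivial fixed point when mu > 1
    y = y_ss + 1e-3                   # small perturbation, as in Eq. (3.41)
    for n in range(200):
        y = logistic(y, mu)
    # the residual is ~0 for stable fixed points and remains finite
    # for mu > 3, where the fixed point has become unstable
    print(mu, abs(y - y_ss))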
FIGURE 3.20. The solution to the logistic map is depicted for various choices of the
control parameter μ. (a) The solution Yn approaches a constant value asymptotically
for μ = 2.8. (b) The solution Yn is a periodic orbit, a 2-cycle, after the initial transient
dies out for μ = 3.2. (c) The solution Yn from (b) bifurcates to a 4-cycle for μ = 3.53.
(d) The solution Yn is chaotic for μ = 3.9.

Following Olsen and Degn [255] I examine the nature of the solutions to
the logistic equation as a function of the parameter μ a bit more closely.
This can be done using a simple computer code to evaluate the iterates Yn .
For 0 < μ ≤ 4 insert an initial value 0 ≤ Y0 ≤ 1 into Eq. (3.38) and generate
a Y1 , which is also in the interval [0, 1]. This second value of the iterate is
then inserted back into Eq. (3.38) and a third value Y2 is generated; here
again 0 ≤ Y2 ≤ 1. This process of generation and reinsertion constitutes
the dynamic process, which is a mapping of the unit interval into itself in a
two- to-one manner, that is, two values of the iterate at step n can be used
to generate a particular value of the iterate at step n + 1. In Figure 3.20a
we show Yn as a function of n for μ = 2.8 and observe that as n becomes
large (n > 10) the value of Yn becomes constant. This value is a fixed
point of the mapping equal to 1 − 1/μ = 0.643, and is approached from all
initial conditions 0 ≤ Y0 ≤ 1; it is an attractor. Quite different behaviors
are observed for the same initial points but different values of the control
parameter, say when μ = 3.2. In Figure 3.20b we see that after an initial
transient the process becomes periodic, that is to say the iterate alternates
between two values. This periodic orbit is called a 2-cycle. Thus, the fixed
point becomes unstable at the parameter value μ = 3 and bifurcates into
a 2-cycle. Here the 2-cycle becomes the attractor for the mapping. At a
slightly larger value of μ, say μ = 3.53, the mapping settles down into a
pattern in which the value of the iterate alternates between two large values
and two small values (cf. Figure 3.20c). Here again the existing orbit, a
2-cycle, has become unstable at μ = 3.449 and bifurcated into a 4-cycle.
Thus, we see that as μ is increased a fixed point changes into a 2-cycle, a
2-cycle changes into a 4-cycle, which in turn changes into an 8-cycle and so
on. This process of period doubling is called subharmonic bifurcation since
a cycle of a given frequency ω0 bifurcates into periodic orbits which are
subharmonics of the original orbit, that is, for k bifurcations the frequency
of the orbit is ω0 /2k . The attractor for the dynamic process can therefore
be characterized by the appropriate values of the control parameter μ.
As one might have anticipated, the end point of this period doubling
process is an orbit with an infinite period (zero frequency). An infinite
period implies that the system is aperiodic, that is to say, the pattern of the
values of the iterate does not repeat itself in any finite number of iterations,
or said differently it does not repeat itself in any finite time interval (cf.
Figure 3.20d). We have already seen that any process that does not repeat
itself as time goes to infinity is completely unique and hence is random.
It was this similarity of the mapping to discrete random sequences that
motivated the coining of the term chaotic to describe such attractors. The
deterministic mapping Eq. (3.38) can therefore generate chaos for certain
values of the parameter μ.
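A minimal version of such a computer code (a sketch of mine; the initial value and iteration counts are arbitrary) that reproduces the four behaviors of Figure 3.20 is:

def logistic(y, mu):
    # The normalized logistic map, Eq. (3.38).
    return mu * y * (1.0 - y)

for mu in (2.8, 3.2, 3.53, 3.9):
    y = 0.1                      # any initial value 0 <= Y0 <= 1
    orbit = []
    for n in range(600):
        y = logistic(y, mu)
        orbit.append(y)
    # after the transient: a fixed point, a 2-cycle, a 4-cycle and
    # a chaotic sequence, respectively
    print(mu, [round(v, 3) for v in orbit[-8:]])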
Returning now to the more general context it may appear that limit-
ing the present analysis to one-dimensional systems is unduly restrictive;
however, recall that the system is pictured to be a projection of a more
complicated dynamical system onto a one-dimensional subspace (cf. for ex-
ample Figure 3.19). A substantial literature based on the logistic equation
developed in the last quarter of the twentieth century, much of which is
focused on the purely mathematical properties of such mappings. The vast
literature is not of concern here, except insofar as it makes available to us
solutions and insights that can be applied in biology and medicine.
One of the papers on the application of the logistic equation to biological
systems is the remarkable review article of May [228] in which he makes clear the
state of the art in discrete systems up until 1976. In addition he comments:

The review ends with an evangelical plea for the introduction of


these difference equations into elementary mathematics courses,
so that students’ intuitions may be enriched by seeing the wild
things that simple nonlinear equations can do.

His plea was motivated by the recognition that the traditional math-
ematical tools such as Fourier analysis, orthogonal functions, etc. are all
fundamentally linear and

...the mathematical intuition so developed ill equips the stu-


dents to confront the bizarre behavior exhibited by the simplest
discrete nonlinear systems, ... Yet such nonlinear systems are
surely the rule, not the exceptions, outside the physical sciences.

May ends his article with the following prophetic indictment:

Not only in research, but also in the everyday world of politics


and economics, we would all be better off if more people realized
that simple systems do not necessarily possess simple dynamic
properties.

The temporary focus is on maps (discrete dynamical systems) of interest
that contain a single maximum, for which f (X) is monotonically increas-
ing for values of X below this maximum and monotonically decreasing for
values above this maximum. Maps such as these, maps with a single max-
imum, are called non-invertible, since, given Xn+1 there are two possible
values of Xn and therefore the functional relation cannot be inverted. If
the index n is interpreted as the discrete time variable, as done above,
the recursion relation generates new values of Xn forward in time but not
backward in time; see for example Ott [260]. This assumption corresponds
to the reasonable requirement that the dynamic law stimulates X to grow
when it is near zero, but inhibits its growth when it approaches a satura-
tion value. An example of this is provided by the discrete version of the
Verhulst equation for population growth just examined. Equation (3.37)
has been intensively studied in the physical sciences, usually in the scaled
form Eq. (3.38), and when Yn+1 is graphed versus Yn yields the parabolic curve
depicted in Figure 3.21.
FIGURE 3.21. A mapping function with a single maximum is shown. In (a), the iteration
away from the initial point Y0 is depicted. In (b), the convergence to the stationary
(fixed) point Y ∗ is shown.

The mapping operation is accomplished by applying the function f to
a given initial value Y0 to generate the next point, and applying it
sequentially to generate the successive images of this point. The point Yn
is generated by applying the mapping f , n times to the initial point Y0 :
Yn = f (Yn−1 ) = f 2 (Yn−2 ) = · · · = f n (Y0 ) . (3.45)
This is done graphically in Figure 3.21a for n = 3 using the rule: starting
from the initial point Y0 a line is drawn to the function yielding the value
Y1 = f (Y0 ) along the ordinate, then from symmetry the same value is
obtained along the abscissa by drawing a line to the diagonal (45◦ ) line.
An application of f to Y1 is then equivalent to dropping a line from the
diagonal to the f −curve to yield Y2 = f (Y1 ) = f [f (Y0 )] = f 2 (Y0 ). The
value Y3 is obtained in exactly the same way from Y3 = f 3 (Y0 ). Thus, the
nth order iterate can be determined by graphical construction.
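This graphical construction, sometimes called a cobweb diagram, is easily automated; a minimal Python sketch (mine, not from the text) that records the vertices of the path in Figure 3.21a is:

def cobweb_points(f, y0, n):
    # Vertices of the cobweb path: a vertical move to the curve
    # y = f(x), then a horizontal move to the 45-degree diagonal,
    # generating Y1 = f(Y0), Y2 = f(Y1), ... as in Eq. (3.45).
    points, y = [(y0, 0.0)], y0
    for _ in range(n):
        fy = f(y)
        points.append((y, fy))    # up (or down) to the curve
        points.append((fy, fy))   # across to the diagonal
        y = fy
    return points

# Usage with the logistic map of Eq. (3.38):
# pts = cobweb_points(lambda y: 2.8 * y * (1.0 - y), 0.1, 20)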
The intersection of the diagonal with the function f defines a point Y ∗
having the property
FIGURE 3.22. The map f with a single maximum in Figure 3.21 yields an f 2 map with
a double maximum. The slope at the point Y ∗ is indicated by the dashed line and is
seen to increase as the control parameter μ is raised in the map from (a) to (b).

Y ∗ = f (Y ∗ ) (3.46)
and is called a fixed point of the dynamic equation; that is, Y ∗ is the Yss from
Eq.(3.39). The fixed points correspond to the steady-state solutions of the
discrete equation and for Eq.(3.40) there are Y ∗ = 1 − 1/μ (nontrivial) and
Y ∗ = 0 (trivial). We can see in Figure 3.21b that the iterated points are
approaching the fixed point Y ∗ and reach it as n −→ ∞. To determine if a
mapping approaches a fixed point asymptotically, that is, whether or not
the fixed point is stable, we examine the slope of the function at the fixed
point [60, 197, 228]. The function acts like a curved mirror either focusing
the ray towards the fixed point under multiple reflections or defocusing
the ray away. The asymptotic direction (either towards or away from the
fixed point) is determined by the slope of the function at Y ∗ , which is
depicted in Figure 3.21 by the dashed line and denoted by f ′ (Y ∗ ), that
is, the (tangent to the curve) derivative of f (Y ) at Y = Y ∗ . As long as
|f ′ (Y ∗ )| < 1 the iterations of the map are attracted to the fixed point, just as
the perturbation ξ approaches zero in Eq.(3.44) near the stable fixed point.
FIGURE 3.23. The bifurcation of the solution to the logistic mapping as a function of
μ∞ − μ is indicated. The logarithmic scale was chosen to clearly depict the bifurcation
regions. [60]

Again using the logistic map as an example, we have f ′ (Y ∗ ) = 2 − μ, so


that the equilibrium point is stable and attracts all trajectories originating
in the interval 0 < Y < 1 if and only if 1 < μ < 3. This is of course the
same result obtained using linear stability theory [cf. Eq. (3.44)] for the
logistic map, but the present argument applies to all one-humped maps.
When the slope of the map f is such that the fixed point becomes un-
stable, that is, when |f ′ (Y ∗ )| > 1, then the solution ‘spirals’ out. If the
parameter μ is continuously increased until this instability is reached then
the orbit spirals out until it encounters a situation where Y2∗ = f (Y1∗ )
and Y1∗ = f (Y2∗ ), that is, the orbit becomes periodic. Said differently, the
mapping f has a periodic orbit of period 2 since Y2∗ = f (Y1∗ ) = f 2 (Y2∗ )
and Y1∗ = f (Y2∗ ) = f 2 (Y1∗ ), so that Y1∗ and Y2∗ are both fixed points of the
mapping f 2 and not of the mapping f . In Figure 3.22a we illustrate the
mapping f 2 and observe it to have two maxima rather than the single one
of the map f . As the parameter μ is increased further the dimple between
the two maxima increases as do the heights of the peaks along with the
slopes of the intersection of f 2 with the diagonal (cf . Figure 3.22b).
For 1 < μ < 3 the fixed point is stable and Y ∗ is a degenerate fixed point of
f 2 , that is, Y ∗ = f 2 (Y ∗ ). At μ = 3 the fixed point becomes unstable
and two new solutions to the quadratic mapping emerge. These are the two
intersections of the quadratic map with the diagonal having slopes with
magnitude less than unity, Y1∗ and Y2∗ . By the chain rule of differentiation,
the derivative of f 2 at Y1∗ and Y2∗ is the product of the derivatives along
the periodic orbit

(f 2 )′ (Y1∗ ) = f ′ [f (Y1∗ )] f ′ (Y1∗ ) = f ′ (Y2∗ ) f ′ (Y1∗ ) = (f 2 )′ (Y2∗ ) (3.47)

so that the slope is the same at both points of the period-2 orbit [197] and in
fact the slope is the same at all k of the values of a period-k orbit. This is in
fact a continuous process starting from the stable fixed point Y ∗ when |f ′ |
< 1; as μ is increased this point becomes unstable at |f ′ | = 1 and generates
two new stable points with |(f 2 )′ | < 1 for a period-2 orbit; as μ is increased
further these points become unstable at |(f 2 )′ | = 1 and generate four new
stable points with |(f 4 )′ | < 1 for a period-4 orbit.
is tied to the value of the parameter μ. As this parameter is increased the
discrete equation undergoes a sequence of bifurcations from the fixed point
to stable cycles with periods 2, 4, 8, 16, 32...2k . In each case the bifurcation
process is the same as that for the transition from the stable fixed point
to the stable period-2 orbit. A graph indicating the location of the stable
values of Y for a given μ is given in Figure 3.23.

FIGURE 3.24. The same as Figure 3.23 but with a linear scale in μ∞ − μ so that the
hazy region denoting chaos is clearly observed. [60]

In Figure 3.23 it is clear that the μ interval between successive bifurca-
tions is diminishing with increasing values so that the ‘window’ of values
of μ wherein any one cycle is stable progressively diminishes. If we denote
by μk the value of μ where the orbit bifurcates from length 2k−1 to 2k then

lim_{k→∞} (μk − μk−1 ) / (μk+1 − μk ) = universal constant, (3.48)
a result first obtained numerically by Feigenbaum [94]. This result indicates
that a constant μ∞ is being approached by this sequence. This critical pa-
rameter value is a point of accumulation of the period-2k cycles. For the
logistic equation the critical value of this parameter is μ∞ = 3.5700. The
numerical value of μ∞ is dependent on the particular map considered, al-
though the existence of an accumulation point does not depend on the
particular map, and more importantly the universal constant in Eq.(3.48)
has a value 4.66920 ... and is also independent of the specific choice of the
map.
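The bifurcation sequence is easily exhibited numerically; the following minimal Python sketch (my illustration; the transient length and rounding tolerance are arbitrary choices) counts the distinct attractor values of Eq. (3.38) for several values of μ, reproducing the structure of Figures 3.23 and 3.24:

def attractor_values(mu, n_transient=1000, n_keep=200):
    # Iterate Eq. (3.38), discard the transient, and collect the
    # distinct values visited on the attractor.
    y = 0.5
    for _ in range(n_transient):
        y = mu * y * (1.0 - y)
    values = set()
    for _ in range(n_keep):
        y = mu * y * (1.0 - y)
        values.add(round(y, 6))
    return sorted(values)

for mu in (3.2, 3.5, 3.55, 3.9):
    # expect 2, 4 and 8 points for the stable cycles, and a large
    # number of distinct points in the chaotic regime
    print(mu, len(attractor_values(mu)))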
In Figure 3.23 the logarithm of μ∞ − μ is given as the abscissa in order to
emphasize the bifurcation points. In Figure 3.24 this sequence is replotted
linearly in μ∞ − μ. In the latter figure we distinguish from left to right, a stable
fixed point, orbit of period-1; a stable orbit of period-2, then stable orbits
of periods 4, 8 followed by a haze of orbits starting along the line μ∞ , then
another orbit of period 6 then 5, and 3. Collet and Eckmann [60] comment:
“The astonishing fact about this arrangement of stable periodic orbits is its
independence of the particular one-parameter family of maps.” The haze
of points beyond μ∞ consists of an infinite number of fixed points with
different periodicities, along with an infinite number of different periodic
orbits. In addition there are an uncountable number of aperiodic trajecto-
ries (bounded) each of which is associated with a different initial point Y0 .
Two such adjacent initial points generate orbits that become arbitrarily
distant with iteration number; no matter how long the time series gener-
ated by f (Y ) is iterated, the two patterns never repeat. As mentioned, Li
and Yorke [197] have applied the term chaotic to this hazy region where
an infinite number of different trajectories can occur.
Thus, we have arrived at the remarkable fact that a simple discrete deter-
ministic equation can generate trajectories that are aperiodic. In particular,
in order for a one-dimensional map to exhibit chaotic behavior, it must
be non-invertible. May [228] points out a number of practical implications
of this result. The first being:

...that the apparently random fluctuations in census data for


an animal population need not necessarily betoken either the
vagaries of an unpredictable environment or sampling errors:
they may simply derive from a rigidly deterministic population
growth relationship such as Eq. (3.37).

60709_8577 -Txts#150Q.indd 143 19/10/12 4:28 PM


144 Dynamics in Fractal Dimensions

3.2.2 Two-dimensional maps


In the above discussion we defined a mapping in terms of a projection of a
higher-order dynamic system onto a one-dimensional line. This same defini-
tion can be applied for the intersection of the trajectories of a higher order
dynamic process with a two-dimensional plane. In Figure 3.25 a sketch
of a trajectory in three dimensions is shown, the intersection of the orbit
with a plane defines a set of points that can be obtained by means of the
two-dimensional map:

Xn+1 = f1 (Xn , Yn ) , Yn+1 = f2 (Xn , Yn ) . (3.49)

Here we follow Ott [260] and consider only invertible maps where Eq.(3.49)
can be solved uniquely for Xn and Yn as functions of Xn+1 and Yn+1 :

Xn = g1 (Xn+1 ,Yn+1 ) and Yn = g2 (Xn+1 ,Yn+1 ). (3.50)

If n is the time index then invertibility is equivalent to time reversibility,
so that these maps are reversible in time whereas those in the preceding
discussion were not. The maps in this section are analogous to the Hamil-
tonian dynamic equations discussed in physics and chemistry and not the
dissipative equations leading to the strange attractors such as the Lorenz
model.
The reason for examining higher order maps, such as the two-dimensional
example given by Eq. (3.49), is that under certain conditions these maps
have many of the properties of the so-called strange attractors even though
they are conservative. Thus, discrete equations may be the more natural
way to model dynamical complex networks in the biological and behav-
ioral sciences than are the traditional continuous equations of the physical
science. The connections between these invertible maps and the strange
attractor of Lorenz as well as to the fractal dimension are discussed in this
chapter.
The one-dimensional non-invertible maps were obtained by projecting a
higher order trajectory onto a one-dimensional line. Let us now reverse the
process and expand the space of the non-invertible map from one to two
dimensions by introducing the coordinate Yn in the following way:

Xn+1 = f (Xn ) + Yn (3.51)


Yn+1 = βXn . (3.52)

FIGURE 3.25. An arbitrary trajectory is shown and its intersection with a plane parallel
to the x1 , x3 −plane at x2 = constant are recorded. The points A, B, C,... define a map
as in Figure 3.19. This is the Poincaré surface of section.

Of course, if the map f is non-invertible and β = 0 this pair of equations
collapses back onto the one-dimensional map Eq.(3.38). For any non-zero
β, however, the map Eq.(3.51) is invertible:

Xn = Yn+1 /β and Yn = Xn+1 − f (Yn+1 /β) . (3.53)


Thus, a non-invertible map is transformed to an invertible one by extending
the phase space. As Ott [260] points out, however, if β is sufficiently small
the distinction between the invertible two-dimensional map and the non-
invertible one-dimensional map may not be measurable.
Examine the behavior of a small phase space volume as the two-dimen-
sional map is iterated from Vn = Xn Yn to Vn+1 = Xn+1 Yn+1 in analogy
to what was done with the Lorenz model. Recall that for the Lorenz at-
tractor a small phase space volume obtained by perturbing the solutions
of the equations of motion contracted due to dissipation. Here the relation
between the two volumes Vn and Vn+1 is

Vn+1 = JVn (3.54)


where J is the Jacobian of the map:

J ≡ | ∂Xn+1 /∂Xn    ∂Xn+1 /∂Yn |
    | ∂Yn+1 /∂Xn    ∂Yn+1 /∂Yn |    (3.55)


Inserting Eqs. (3.51) and (3.52) into Eq. (3.55) we find J = −β so that
using the magnitude of the Jacobian the volume at consecutive times is
given by

Vn+1 = βVn (3.56)


which for an initial volume V1 has the solution

Vn+1 = β n V1 (3.57)

FIGURE 3.26. Iterated points of the Henon map for 104 iterations with the parameter
values c = 1.4 and β = 0.2. [260]

and if β < 1 the volume contracts by a factor β at each application of the map. As in the continuous case this contraction does not imply that the solution goes over to a point in phase space, but only that it is attracted to some bounded region of dimension lower than that of the initial phase space. If the dimension of the attractor is non-integer, then the attractor is fractal; see for example Mandelbrot [218], where it is observed that the fractal dimension of a set may or may not be consistent with the term strange. Following Eckmann [80], if all the points in the initial volume V1 converge to a single attractor, but points that are arbitrarily close initially separate exponentially in time, then we call that attractor strange. This property of nearby trajectories to exponentially separate in time is called sensitive dependence on initial conditions

and gives rise to the aperiodic behavior of strange attractors. There ex-
ists however a large variety of attractors which are neither periodic orbits
nor fixed points and which are not strange attractors. All of these seem
to present more or less pronounced chaotic features [80]. Thus, there are
attractors that are erratic but not strange. I will not pursue this general
class here.
As an example of the two-dimensional invertible mapping we first trans-
form the logistic equation into the family of maps Xn+1 = 1 − cXn² with the
parametric identification c = (μ/2 − 1)μ/2 and 0 < c ≤ 2, since 2 < μ ≤ 4
and Xn maps the interval [−1, 1] onto itself. Then using Eqs. (3.51) and
(3.52) we obtain the mapping first studied by Henon [152]

Xn+1 = 1 − cXn² + Yn (3.58)


Yn+1 = βXn (3.59)
FIGURE 3.27. a) Enlargement of the boxed region in Figure 3.26, 10^4 iterations; b) enlargement of the square in a), 10^6 iterations; c) enlargement of the square in b), 5×10^6 iterations. [260]

In Figure 3.26 we have copied the loci of points for the Henon system in which 10^4 successive points from the mapping with the parameter values

c = 1.4 and β = 0.2 were initiated from a variety of choices of (X0 , Y0 ). Ott [260] points out that, as the map is iterated, points come closer and closer to the attractor, eventually becoming indistinguishable from it. This, however, is an illusion of scale.
If the boxed region of the figure is magnified one obtains Figure 3.27a
from which a great deal of structure of the attractor can be discerned. If
the boxed region in this latter figure is magnified, then what had appeared
as three unequally spaced lines appear in Figure 3.27b as three distinct
parallel intervals containing structure. Notice that the region in the box
of Figure 3.27a appears the same as that in Figure 3.27b. Magnifying the boxed region of this latter figure we obtain Figure 3.27c, which aside from
resolution is a self-similar representation of the structure seen on the two
preceding scales. Thus, scale invariant, Cantor-set-like structure transverse
to the linear structure of the attractor is observed. Ott [260] concludes that
because of this self-similar structure the attractor is probably strange. In
fact it has been verified by direct calculation that initially nearby points
separate exponentially in time [95, 67], thereby coinciding with at least one
definition of a strange attractor.
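The numerical experiment behind Figures 3.26 and 3.27 is easy to repeat. The following minimal sketch (in Python) iterates the quadratic map of Eqs. (3.58) and (3.59) with the parameter values quoted above; the seed, iteration count and transient cutoff are arbitrary choices.

def henon_orbit(c=1.4, beta=0.2, x0=0.1, y0=0.1, n_iter=10_000, transient=100):
    """Iterate Eqs. (3.58)-(3.59) and return the post-transient orbit."""
    x, y = x0, y0
    points = []
    for n in range(n_iter + transient):
        x, y = 1.0 - c * x * x + y, beta * x   # Eqs. (3.58) and (3.59)
        if n >= transient:                     # discard the approach to the attractor
            points.append((x, y))
    return points

orbit = henon_orbit()
# Each application of the map contracts areas by |J| = beta, Eq. (3.56),
# so the iterates settle onto a set much thinner than the plane.
print(len(orbit), min(p[0] for p in orbit), max(p[0] for p in orbit))

Plotting the returned points reproduces the banded structure of Figure 3.26, and re-plotting successively magnified windows displays the Cantor-set-like layering described above.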

3.2.3 The Lyapunov exponent


We have adopted the definition that chaotic systems are those that have a sensitive dependence on initial conditions. This sensitivity requires that orbits initially near one another separate exponentially as they evolve
forward in time. A computable quantitative measure of the rate at which
orbits separate is the Lyapunov exponent. For a one-dimensional map the
Lyapunov exponent is defined by the slope of the map:
 
$$
\sigma = \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} \ln \left| \frac{df(Y_n)}{dY} \right| \qquad (3.60)
$$

where Yn+1 = f (Yn ). Shaw [316] has shown that σ is also the average
information change over the entire interval of iteration. He argues that a
map may be interpreted as a machine that takes a single input Y0 and
generates a string of numbers during the iteration process. If the string
has a pattern such as would arise for an attractor that is a fixed point or periodic orbit, then after a very short time the machine gives no new
information. On the other hand, if the orbit is chaotic so that the string of
numbers is random, then each iterate is new to the observer, and gives a
new piece of information. Shaw convincingly demonstrated that a chaotic
process is a generator of information. He argues that a negative σ implies
a periodic orbit and the magnitude of σ measures the degree of stability

of that orbit against perturbations. If an orbit is initiated at a point away from the periodic orbit, but within its basin of attraction, the initial data
is lost as the orbit damps to its stable values. The parameter σ determines
the rate at which this information is lost to the macroscopic world. If σ
is positive, then it determines the rate of divergence of nearby trajectories
which is the same as the rate of information production; see Oseledec [259].
As an example I take μ = 4 in the logistic equation and define the new variable
$$
Z_n \equiv \frac{2}{\pi} \sin^{-1}\left( \sqrt{Y_n} \right) \qquad (3.61)
$$
so that the logistic map transforms to the ‘tent map’
$$
Z_{n+1} = \begin{cases} 2Z_n & 0 \le Y_n \le 0.5 \\ 2(1 - Z_n) & 0.5 \le Y_n \le 1 \end{cases} . \qquad (3.62)
$$
From this equation we obtain the slope of the map to be used in the defining equation for the Lyapunov coefficient Eq. (3.60):
$$
\left| \frac{df(Z_n)}{dZ} \right| = 2 \qquad (3.63)
$$

for all Z, so that the Lyapunov coefficient is positive

σ = ln 2 = 0.693 > 0. (3.64)


Since this quantity is invariant under coordinate transformations this proves
that the logistic map with μ = 4 meets our definition of a chaotic dynamical system; that is, ln 2 ≈ 0.693 nats, or one bit, of information is generated in each iteration. In fact this mapping is chaotic for all μ > μ∞ = 3.57... since σ > 0 throughout this range of values.
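As a numerical check, Eq. (3.60) can be evaluated directly along an orbit; the sketch below uses f′(y) = μ(1 − 2y) for the logistic map, with an arbitrary seed and iteration count, and should return a value near ln 2 at μ = 4.

import math

def lyapunov_logistic(mu, y0=0.3, n=100_000, transient=1_000):
    """Average ln|f'(Y_n)| along an orbit, Eq. (3.60); f'(y) = mu*(1 - 2y)."""
    y = y0
    for _ in range(transient):          # let the orbit settle onto the attractor
        y = mu * y * (1.0 - y)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(mu * (1.0 - 2.0 * y)))
        y = mu * y * (1.0 - y)
    return total / n

print(lyapunov_logistic(4.0))   # approximately 0.693 > 0: chaotic
print(lyapunov_logistic(3.2))   # negative: a stable period-2 orbit

A negative value, as at μ = 3.2, signals a stable periodic orbit, in line with Shaw's interpretation of σ as the rate of information loss.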
Let us consider an N −dimensional map, with variables X = (X1 , X2 ,
· · ·, XN ),
Xn+1 = f (Xn ) (3.65)
for which we have a trajectory Xn in this phase space with initial condition X0 and a nearby trajectory X′n with initial condition X0 + ΔX0 , where ||ΔX0 || << ||X0 || . Here the double bars denote the norm of the vector. The difference between the two trajectories ΔXn ≡ X′n − Xn defines the tangent vector un ≡ ΔXn such that Eq. (3.65) can be used to write

$$
u_{n+1} = f(X'_n) - f(X_n) \equiv \frac{\partial f}{\partial X} \cdot u_n + \cdots , \qquad (3.66)
$$
which defines the linearized mapping when all the higher-order terms are
neglected

un+1 = A (Xn ) · un (3.67)


where A is an N × N matrix whose elements are defined by

$$
A(X_n) \equiv \frac{\partial f}{\partial X} \qquad (3.68)
$$
so that the map given by Eq.(3.67) is linearized along the trajectory Xn .
Following Nicolis [250] the solution to Eq. (3.67) for a given initial condition
e0 at the nth iteration can be written as

$$
u_n = U_{n,n-1} \cdots U_{1,0}\, e_0 \qquad (3.69)
$$


where U is the fundamental solution matrix. The indexing on U indicates the iteration for which it is the solution to the mapping. Let us interpret Eq. (3.69) starting with the right-most factor: U1,0 e0 = u1 is the solution to Eq. (3.67) for the initial condition X0 . The solution u1 is a vector of length d1 and direction ê1 :
$$
U_{1,0}\, e_0 = d_1 \hat{e}_1 \qquad (3.70)
$$
and ê1 has unit norm. Now we apply U2,1 to ê1 and obtain a vector of length d2 and direction ê2 . Finally we can rewrite Eq. (3.69) as the product of n numbers
$$
u_n = d_1 d_2 \cdots d_n\, \hat{e}_n , \qquad |\hat{e}_n| = 1 \qquad (3.71)
$$
instead of a product of n matrices. The maximal Lyapunov exponent is then defined as
$$
\sigma = \lim_{n \to \infty} \frac{1}{n} \ln ||u_n|| = \lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^{n} \ln d_j . \qquad (3.72)
$$

From the definition of dk = ||Uk,k−1 êk−1 || it is evident that ln dk is the exponential change of the length of e0 during the time interval when the system moves between the iterates Xk−1 and Xk .
Rather than finding just the maximal Lyapunov exponent we can define
a Lyapunov exponent for each of the N variables that describe the dynamic
system. To do this we note [37] that one can introduce eigenvalues λj (n) of the matrix
$$
A_n = \left[ A(X_n) \cdots A(X_1) \right]^{1/n} \qquad (3.73)
$$
where A(Xn ) is defined by Eq. (3.68) and is the Jacobian matrix of f. The Lyapunov exponents are then given by

FIGURE 3.28. Lyapunov exponents define the average stretching or contraction of trajectories in characteristic directions. Here we show the effects of applying a two-dimensional mapping to circles of initial conditions. A sufficiently small circle of radius ε(0) is transformed after n iterations into an ellipse with major radius λ1^n ε(0) and minor radius λ2^n ε(0), where λ1 and λ2 are the Lyapunov numbers for n → ∞.

$$
\sigma_j \equiv \lim_{n \to \infty} \ln |\lambda_j(n)| . \qquad (3.74)
$$

These eigenvalues λj are often called the Lyapunov numbers.


Let us consider the example given by Ott [260] and shown in Figure 3.28. For a two-dimensional map, the Lyapunov numbers are given by λ1 and λ2 and are interpreted as the average principal stretching factors for a very small initial circular area of radius ε(0). More formally we can write
$$
\lambda_j = \lim_{n \to \infty} \left( \text{magnitude of the } j\text{th eigenvalue of } \left[ A(X_n, Y_n) \cdots A(X_1, Y_1) \right]^{1/n} \right) \qquad (3.75)
$$
where A(X, Y) is the Jacobian matrix of the map:
$$
A(X, Y) = \begin{pmatrix} \dfrac{\partial f_1(X,Y)}{\partial X} & \dfrac{\partial f_1(X,Y)}{\partial Y} \\[6pt] \dfrac{\partial f_2(X,Y)}{\partial X} & \dfrac{\partial f_2(X,Y)}{\partial Y} \end{pmatrix} . \qquad (3.76)
$$

The functions f1 and f2 are the components of the mapping vector f in Eq. (3.65), and, of course, (X1 , Y1 ), · · ·, (Xn , Yn ) is a sequence generated by the map. Then the Lyapunov numbers specify the average stretching rate of nearby points. If the map is to be chaotic, for λ1 > λ2 say, then λ1 > 1, so that the distance between almost nearby points increases in successive iterations. If λ1 < 1 the map is contracting, and the distance between almost nearby points decreases in successive iterations; if λ1 = 1 the distance remains unchanged.
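A sketch of this computation for the Henon map of Eqs. (3.58) and (3.59) follows. Rather than forming the raw matrix product of Eq. (3.75), which overflows numerically, the product of Jacobians is accumulated through repeated QR factorizations, a standard reorthogonalization device; the orbit length and seed are arbitrary choices.

import numpy as np

def henon_lyapunov(c=1.4, beta=0.2, n=50_000):
    """Estimate both Lyapunov exponents sigma_j = ln(lambda_j) of the Henon map."""
    x, y = 0.1, 0.1
    for _ in range(500):                  # discard a short transient first
        x, y = 1.0 - c * x * x + y, beta * x
    Q = np.eye(2)                         # orthonormal frame carried along the orbit
    sums = np.zeros(2)
    for _ in range(n):
        A = np.array([[-2.0 * c * x, 1.0],    # the Jacobian matrix of Eq. (3.76)
                      [beta,         0.0]])
        Q, R = np.linalg.qr(A @ Q)
        sums += np.log(np.abs(np.diag(R)))    # stretching factors for this step
        x, y = 1.0 - c * x * x + y, beta * x
    return sums / n

s1, s2 = henon_lyapunov()
print(s1, s2, s1 + s2)   # the sum should be close to ln(beta): area contraction

A positive σ1 together with σ1 + σ2 = ln β < 0 (area contraction) is the numerical signature of a chaotic attractor in a dissipative map.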

3.3 Measures of Strange Attractors


In broad outline I have attempted to give some indications of how simple
nonlinear dynamic equations can give rise to a rich variety of dynamic
behaviors. The focus has been on the phenomenon of chaos described from
the point of view of mathematics and modeling. Some effort has been made
to put these discussions in a biomedical context, but little or no effort
was made to relate these results to actual data sets. Thus, the techniques
may not appear to be as useful as they could be to the experimentalist
who observes large variations in his/her data and wonders if the observed
fluctuations are due to chaos or are the result of noise. In most biomedical
phenomena there is no reliable dynamical model describing the behavior
of the system, so the investigator must use the data directly to distinguish
noise from chaos; there is no guide telling what the appropriate parameters
are that might be varied. As we mentioned earlier, a traditional method
for determining the dynamical content of a time series is to construct the
power spectrum for the process by taking the Fourier transform of the
autocorrelation function, or equivalently by taking the Fourier transform
of the time series itself and forming its absolute square.
The autocorrelation function provides a way to use the data at one time
to determine the influence of the process on itself at a later time. It is a
measure of the relation of the value of a random process at one instant of
time, X(t) say, to the value at another instant τ seconds later, X(t + τ ).
If we have a data record extending continuously over the time interval
(−T /2, T /2), then the autocorrelation function can be defined as

$$
C_{xx}(\tau, T) = \frac{1}{T} \int_{-T/2}^{T/2} X(t) X(t + \tau)\, dt . \qquad (3.77)
$$

Note that for a finite sample length T the integral defines an estimate for
the autocorrelation function

$$
C_{xx}(\tau) = \lim_{T \to \infty} C_{xx}(\tau, T) . \qquad (3.78)
$$

In Figure 3.29 a sample history of X(t) is given along with its displaced
time trace X(t + τ ). The point by point product of these two series is given
in Eq. (3.77) and then the average over the time interval (−T /2, T /2) is
taken. A sine wave, or any other harmonic deterministic data set, would
have an autocorrelation function which persists over all time displacements.

FIGURE 3.29. The time trace of a random function X(t) versus time t is shown as the
upper curve. The lower curve is the same time trace displaced by a time interval τ . The
product of these two functions when averaged yields an estimate of the autocorrelation function Cxx (τ ).

Thus, the autocorrelation function can provide a measure of deterministic data embedded in a random background.
Similar comments apply when the data set is discrete rather than con-
tinuous, as it would be for the mappings in Section 3.2. In the discrete
case I denote the interval between samples as Δ (= T/N) for N equally
spaced intervals and r as the lag or delay number so that the estimated
autocorrelation function is
$$
C_{xx}(r\Delta, N) = \frac{1}{N-r} \sum_{j=1}^{N-r} X_j X_{j+r} ; \qquad r = 0, 1, ..., m \qquad (3.79)
$$

and m is the maximum lag number. Note that Cxx (rΔ, N ) is analogous to
the estimate of the continuum autocorrelation function and becomes the
true autocorrelation function in the limit N −→ ∞. These considerations
have been discussed at great length by Wiener [395] in his classic book on time series analysis, which is still recommended today as a text from which to capture a master’s style of investigation.
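The estimator of Eq. (3.79) is immediate to program; a minimal sketch, with an arbitrary test signal and maximum lag, is:

import math, random

def autocorrelation(x, max_lag):
    """C_xx(r*Delta, N) of Eq. (3.79): lagged products averaged over N - r terms."""
    n = len(x)
    return [sum(x[j] * x[j + r] for j in range(n - r)) / (n - r)
            for r in range(max_lag + 1)]

# a noisy sinusoid: the harmonic part keeps correlating at large lags,
# while the noise contributes only near r = 0
signal = [math.sin(0.2 * j) + 0.3 * random.gauss(0.0, 1.0) for j in range(2_000)]
c = autocorrelation(signal, 50)
print(c[0], c[25], c[50])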
The frequency content is extracted from the time series using the autocorrelation function by applying a filter in the form of a Fourier transform. This yields the power spectral density
$$
S_{xx}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega t} C_{xx}(t)\, dt \qquad (3.80)
$$

of the time series X(t). Equation (3.80) relates the autocorrelation function to the power spectral density and is known as the Wiener-Khinchine relation, in agreement with Eq. (3.21). One example of its use is provided in Figure 3.30, where the exponential form of the autocorrelation function Cxx(t) = e−t/τc shown in Figure 3.30a yields a frequency spectrum of the Cauchy form, shown in Figure 3.30b:
$$
S_{xx}(\omega) = \frac{1}{\pi} \frac{\tau_c}{1 + \omega^2 \tau_c^2} \qquad (3.81)
$$

At high frequencies the spectrum given by Eq. (3.81) is seen to fall off as 1/ω². Basar [30], among others, has applied these techniques to the analysis of many medical phenomena including the interpretation of electrical signals from the brain.
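The Wiener-Khinchine pair can also be verified numerically. The sketch below integrates Eq. (3.80) by simple quadrature for the exponential correlation, written as exp(−|t|/τc) so the integral converges, and compares the result with the Cauchy form of Eq. (3.81); the truncation and step size are arbitrary choices.

import math

def psd_from_correlation(omega, tau_c=1.0, t_max=50.0, dt=0.01):
    """S_xx(omega) = (1/2 pi) * integral of exp(-i omega t) C_xx(t) dt, Eq. (3.80)."""
    s, t = 0.0, -t_max
    while t < t_max:
        s += math.exp(-abs(t) / tau_c) * math.cos(omega * t) * dt  # even integrand
        t += dt
    return s / (2.0 * math.pi)

for w in (0.0, 1.0, 3.0):
    cauchy = (1.0 / math.pi) / (1.0 + w * w)    # Eq. (3.81) with tau_c = 1
    print(w, psd_from_correlation(w), cauchy)   # the two columns agree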
The electrical activity of the brain measured at various points on the
scalp is well known to be quite erratic. It was the dream of the mathemati-
cian Norbert Wiener [397] that the methods of harmonic decomposition
would force the brain to yield up its secrets as a generalized control sys-
tem. In this early approach the aperiodic signal captured in the EEG time
series is assumed to consist of a superposition of independent frequency
modes. This assumption enabled the investigator to interpret the harmonic
content of the EEG signal using the above Fourier methods. This view was
partially reinforced by the work on evoked potentials, discussed in Chap-
ter Four, where a clear pattern in the EEG signal could be reproduced
with specific external stimulations such as auditory tones. In Figure 3.31
a typical set of averaged evoked potentials for a sleeping cat is depicted.
The large initial bump is produced by auditory stimulation in the form of
a step function. The corresponding PSD is depicted in Figure 3.32. Here
again we have an inverse power law at high frequencies. In fact, it is very close to the ω−2 asymptotic shape of the Cauchy PSD.
As mentioned, a periodic signal in the data shows sharp peaks in the spec-
trum corresponding to the fundamental frequency and its higher harmonics.
On the other hand, the spectrum corresponding to aperiodic variations in the time series is broadband in frequency with no discernible structure.
In themselves spectral techniques have no way of discriminating between
chaos and noise and are therefore of little value in determining the source
of the fluctuations in a data set. However they were in fact very useful, as
shown in Section 3.1, in establishing the similarities between stochastic pro-
cesses and chaos defined as the sensitive dependence on initial conditions
in a dynamic process.

FIGURE 3.30. (a) The autocorrelation function Cxx (τ ) for the typical time traces de-
picted in Figure 3.29 assuming the fluctuations are exponentially correlated in time
[exp(−t/τc )]. The constant τc is the time required for Cxx (t) to decrease by a factor
1/e; this is the decorrelation time. (b) The power spectral density Sxx (ω) is graphed as
a function of frequency for the exponential correlation function with a central frequency
ω0 .

One way in which some investigators have proceeded in discriminating between chaos and noise is to visually examine time series for period doublings. This is a somewhat risky business, however, and may lead to misinterpretations of data sets. Also, period doubling is only one of the possible routes to chaos in dynamic systems. For example, considerable attention
routes to chaos in dynamic systems. For example, considerable attention
is again being focused on the possible dynamical mechanisms underlying
cardiac electrical disturbances. The abrupt onset of an arrhythmia appears
to represent a bifurcation from the stable, physiological steady state of nor-
mal sinus rhythm to one involving different frequency modes. Perhaps the
most compelling evidence for the relevance of nonlinear analysis to these
perturbations comes from recent reports of period-doubling phenomena
during a variety of induced and spontaneous arrhythmias (see Section 4.3).
The major question guiding future investigations in this case is whether
nonlinear models provide new understanding of the mechanisms of sud-
den cardiac death. The most important fatal arrhythmia is ventricular

FIGURE 3.31. A typical set of simultaneously recorded and selectively averaged evoked potentials in different brain nuclei of chronically implanted cats, elicited during the slow wave sleep stage by an auditory stimulation in the form of a step function. Direct computer-plottings; negativity upwards. [32]

fibrillation, characterized by rapid, apparently erratic oscillations of the electrocardiogram. The notion that ventricular fibrillation represents a form
of cardiac chaos has been at large for many years. The term ‘chaotic’ to
describe this arrhythmia was first used in a colloquial sense by investi-
gators and clinicians observing the seemingly random oscillations of the
electrocardiogram, which were associated with ineffective, uncoordinated
twitching of the dying heart muscle. This generic use of the term chaos to
describe fibrillation underwent an important evolution in 1964 when Moe
et al. [239] proposed a model of atrial fibrillation as a turbulent cascade
of large waves into small eddies and smaller wavelets, etc. The concept
that ventricular fibrillation represents a similar type of ‘completely chaotic,
turbulent’ process was advanced by Smith and Cohen [322]. Furthermore,
based on previous evidence for 2:1 alternation in the ECG waveform preced-
ing the onset of fibrillation, Smith and Cohen raised the provocative notion
that fibrillation of the heart might follow the subharmonic bifurcation route
to chaos. This speculation, linking nonlinear models of chaotic behavior

to the understanding of sudden cardiac death, has occasioned considerable and sustained interest.
FIGURE 3.32. Mean value curves of the power spectral density functions obtained from 16 experiments during the slow wave sleep stage. Along the abscissa is the logarithm of frequency ω; along the ordinate is the power spectral density, Sx (ω), normalized in such a way that the power at 0 Hz is equal to one (or 10 log 1 = 0). [32]

One approach to testing this hypothesis connecting cardiac pathology to nonlinear dynamics is by means of spectral analysis of fibrillatory wave-
forms. If fibrillation is a homogeneous turbulent process then it should be
associated with a broadband spectrum with appropriate scaling character-
istics. However, the findings presented by Goldberger et al. [125] in concert
with multiple previous spectral and autocorrelation analyses [10, 252] as
well as electrophysiology mapping data [166, 404] suggest the need to re-
assess this concept of fibrillation as cardiac chaos. Furthermore, spectral
analysis of electrocardiographic data may have more general implications
for modeling transitions from physiological stability to pathological oscilla-
tory behavior in a wide variety of other life-threatening conditions [129].
Stein et al. [329] used a nonlinear predictive algorithm in 1999 on RR
interval data and determined that the dynamics of atrial fibrillation do not
reside on a strange attractor. However, Hou et al. [163] in 2007 used wavelet
analysis of heart beat interval data to establish that atrial fibrillation can
be successfully detected by means of the change in the time-varying fractal

dimension of the HRV time series. They also determined that the complex-
ity of HRV decreases with the onset of atrial fibrillation.
The relatively narrow-band spectrum of fibrillatory signals contrasts with the spectrum of the normal ventricular depolarization (QRS) waveform which in man and animals shows a wide band of frequencies (0 to > 300 Hz) with 1/f−like scaling, that is, the power spectral density at frequency f varies as 1/f^α , where α is a positive number. As discussed in Section
2.4 the power-law scaling that characterizes the spectrum of the normal
QRS waveform can be related to the underlying fractal geometry of the
branching His-Purkinje system. Furthermore, a broadband inverse power-
law spectrum has also been identified by analysis of interbeat interval variations in a group of healthy subjects, indicating that normal sinus rhythm
is not a strictly periodic state. Important phasic changes in heart rate as-
sociated with respiration and other physiologic control systems account for
only some of the variability in heartbeat interval dynamics; overall, the
spectrum in healthy subjects includes a much wider band of frequencies
with 1/f −like scaling. This behavior is also observed in the EEG time
series data.
It has been suggested that fractal processes associated with scaled, broad-
band spectra are ‘information-rich’. Periodic states, in contrast, reflect
narrow-band spectra and are defined by monotonous, repetitive sequences,
depleted of information content. In Figure 3.33 is depicted the spectrum
of the time series X(t) obtained from the funnel attractor solution of the
equation set Eqs. (3.24)–(3.26). The attractor itself is shown in Figure 3.16.
The spectrum is clearly broadband as was that of the Lorenz attractor, with
a number of relatively sharp spikes. These spikes are manifestations of strong periodic components in the dynamics of the funnel attractor. Thus,
the dynamics could easily be interpreted in terms of a number of harmonic
components in a noisy background, but this would be an error. One way
to distinguish between these two interpretations is by means of the infor-
mation dimension of the time series. The dimension decreases as a system
undergoes a transition from chaotic to periodic dynamics. The transition
from healthy function to disease implies an analogous loss of physiologi-
cal information and is consistent with a transition from a wide-band to a
narrow-band spectrum. The dominance of relatively low-frequency periodic
oscillations might be anticipated as a hallmark of the dynamics of many
types of severe pathophysiology disturbances. As pointed out earlier, such
periodicities have already been documented in many advanced clinical set-
tings, including Cheyne-Stokes breathing patterns in heart failure, leukemic
cell production, sinusoidal heart rate oscillations in fetal distress syndrome,
and the ‘swinging heart’ phenomenon in cardiac tamponade. The highly
periodic electrical fibrillatory activity of the heart, which is associated with

ineffective mechanical contraction and sudden death, is perhaps the most dramatic example of this kind of abnormal spectral periodicity. More subtle
alterations in the spectral features of cardiovascular function have also been
described, including decreased high frequency QRS potentials in some cases
of chronic myocardial infarction in contrast to increased high frequency po-
tentials in healthy subjects in the ‘supraphysiologic’ state of exercise. Ven-
tricular fibrillation may serve, therefore, as a general model for transitions
from broadband stability to certain types of pathological periodicities in
other physiological disturbances.

FIGURE 3.33. The power spectral density for the X(t) time series for the ‘funnel’ at-
tractor depicted in Figure 3.16.

Thus I conclude that more systematic methods for distinguishing between chaos and noise are desirable and necessary. We turn to some of those methods now.

3.3.1 Correlational dimension


In the preceding discussion we presented the standard example of an auto-
correlation function having an exponential form. Such an autocorrelation
function could describe a random time series having a memory or correla-
tion time τc . It could not describe a dynamical system having an asymptotic

stationary or periodic state. Similarly it could not describe a nonlinear dissipative dynamical system that has a chaotic attractor. Grassberger and Procaccia [136] developed a correlational technique by which one can ex-
Procaccia [136] developed a correlational technique by which one can ex-
clude various choices for the kind of attractor on which the dynamics for a
given data set exists. They wanted to be able to say that the attractor for
the data set is not multiply periodic, or that the irregularities are not due
to external noise, etc. They proposed a measure obtained by considering
correlations between points of a time series taken from a trajectory on the
attractor after the initial transients have died away.
Consider the set {Xj , j = 1, 2, · · ·, M } of points on the attractor taken
from a vector time series X(t), that is, take Xj = X(t + jτ ) where τ is
a fixed time interval between successive measurements. The vector time
series X(t) could be the three components of the Lorenz model, X(t) =
{X(t), Y (t), Z(t)} or those of the Rössler model or even the two components
of the Hénon model. In the latter case the ‘time’ series would already be
discrete and the set of M points could be all the iterates of the map or it
could be a selected subset of the generated points. If the attractor is chaotic
then since nearby trajectories exponentially separate in time, we expect
that most pairs of vectors Xj , Xk with j ≠ k are dynamically uncorrelated.
Even though these vectors may appear to be essentially random, they do
all lie on the same attractor and therefore are correlated in phase space.
Grassberger and Procaccia [136] introduced the correlation integral
$$
C(r) = \lim_{M \to \infty} \frac{1}{M(M-1)} \sum_{i,j=1}^{M} \Theta\left( r - |X_i - X_j| \right) \qquad (3.82)
$$
$$
= \int_0^r d^E r' \, c(r') \qquad (3.83)
$$
where Θ(x) is the Heaviside function, = 0 if x ≤ 0 and = 1 if x > 0, and c(r′) is the traditional autocorrelation function in E Euclidean dimensions:
$$
c(r) = \lim_{M \to \infty} \frac{1}{M(M-1)} \sum_{i \neq j = 1}^{M} \delta^E\left( r - (X_i - X_j) \right) . \qquad (3.84)
$$

The virtue of the integral function is that for a chaotic or strange attractor
the autocorrelation integral has the power-law form

C(r) ∝ rν (3.85)
and moreover, the ‘correlation exponent’ ν is closely related to the fractal
dimension D and the information dimension σ of the attractor. They argue

that the correlation exponent is a useful measure of the local properties of the attractor whereas the fractal dimension is a purely geometric measure
and is rather insensitive to the local dynamic behavior of the trajectories
on the attractor. The information dimension is somewhat sensitive to the
local behavior of the trajectories and is a lower bound on the Hausdorff
dimension. In fact they observe that in general one has

ν ≤ σ ≤ D. (3.86)
Thus, if the autocorrelation integral obtained from an experimental data set
has the power-law form Eq.(3.85) with ν < E, they argue that one knows
that the data set arises from deterministic chaos rather than random noise,
because noise results in C(r) ∝ r^E for a constant autocorrelation function
over the interval r. Note that for periodic sequences ν = 1; for random
sequences it should equal the embedding dimension, while for chaotic se-
quences it is finite and non-integer.
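A direct, if slow, evaluation of Eq. (3.82) is sketched below for points on the Henon attractor; the pair of radii used for a two-point slope estimate of ν in Eq. (3.85) is an arbitrary choice, and a careful calculation would instead fit the whole linear region of ln C(r) versus ln r.

import math

def correlation_integral(points, r):
    """C(r) of Eq. (3.82): the fraction of pairs i != j closer than r."""
    m, count = len(points), 0
    for i in range(m):
        for j in range(m):
            if i != j and math.dist(points[i], points[j]) < r:
                count += 1
    return count / (m * (m - 1))

# points on the Henon attractor, Eqs. (3.58)-(3.59)
pts, (x, y) = [], (0.1, 0.1)
for n in range(2_000):
    x, y = 1.0 - 1.4 * x * x + y, 0.2 * x
    if n >= 100:
        pts.append((x, y))

r1, r2 = 0.05, 0.2
nu = (math.log(correlation_integral(pts, r2)) -
      math.log(correlation_integral(pts, r1))) / math.log(r2 / r1)
print(nu)   # a finite, non-integer slope is consistent with a fractal set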
Grassberger and Procaccia establish Eq.(3.85) by the following argu-
ment: If the attractor is a fractal, then the number of hypercubes of edge
length r needed to cover it, N(r), is
$$
N(r) \propto \frac{1}{r^D} \qquad (3.87)
$$
as determined in Chapter Two. The number of points from the data set which are in the j th non-empty cube is denoted nj so that
$$
C(r) \sim \lim_{M \to \infty} \frac{1}{M^2} \sum_{j=1}^{N(r)} n_j^2 = \frac{N(r)}{M^2} \left\langle n^2 \right\rangle \qquad (3.88)
$$
up to O(1), where the angular brackets denote an average over all occupied cells. By the Schwarz inequality (⟨n²⟩ ≥ ⟨n⟩²):
$$
C(r) \geq \frac{N(r)}{M^2} \left\langle n \right\rangle^2 = \frac{1}{M^2 N(r)} \left[ \sum_{j=1}^{N(r)} n_j \right]^2
$$
but
$$
\sum_{j=1}^{N(r)} n_j = M
$$
so that the lower bound of the autocorrelation function is
$$
C(r) \geq \frac{1}{N(r)} \propto r^D . \qquad (3.89)
$$

Thus, comparing Eq. (3.89) with (3.85) they obtain the inequality

ν≤D (3.90)
so that the correlation dimension is less than or equal to the fractal dimen-
sion.
Grassberger and Procaccia also point out that one of the main advantages
of the correlation dimension ν is the ease with which it can be measured. In
particular it can be measured more easily than either σ or D for cases when
the fractal dimension is large (≥ 3). Just as they anticipated, the measure
ν has proven to be most useful in experimental situations, where typically
high-dimensional systems exist. However, calculating the fractal dimension
in this way does not establish that the erratic time series is generated by a
chaotic attractor. It only proves that the time series is fractal.
To test their ideas they studied the behavior of a number of simple
models for which the fractal dimension is known. In Figure 3.34 are displayed three of the many calculations they did. In each case the logarithm of the
correlation integral [lnC(r)] is plotted as a function of the logarithm of
a dimensionless length (ln r) which according to the power-law relation
Eq.(3.85) should yield a straight line of positive slope. The slope of the line
is the correlational dimension ν. From these examples it is clear that the
technique successfully predicts the correlational behavior for both chaotic
mappings and differential equations having chaotic solutions.

3.3.2 Attractor reconstruction from data


More often than not the biomedical experimentalist does not have the lux-
ury of a mathematical model to guide the measurement process. What
is usually available are a few partial theories, securely based on assump-
tions often made more for convenience than for reality, and a great deal of
phenomenology. Therefore in a system known to depend on a number of
independent variables it is not clear how many kinds of measurements one
should make. In fact it is often unrealistically difficult to take more than
the measurement of a single degree of freedom. What then can be said
about a complex system given this single time series? Such questions are
relevant, for example, in determining what can be learned about the func-
tioning of the brain using EEG time series; in what can be learned about
the dynamics of epidemics using only the number of people infected with
a disease; in what can be learned about the excitability of single neurons
from the time series of post synaptic pulses; in what can be learned about
biochemical reactions by monitoring a single chemical species and so on. It
turns out that quite a lot can be learned using methods developed in non-
linear dynamics. In particular a method has been devised that enables one

to reconstruct a multi-dimensional attractor from the time series of a single observable. The application of this technique to a number of data sets is
reviewed in Chapter Five, but for the moment I concentrate on explaining
the underlying theory.

FIGURE 3.34. (a) The correlation integral for the logistic map at the infinite bifurcation point μ = μ∞ = 3.5699... The starting point was Y0 = 1/2, the number of points was N = 3 × 10^4; the measured slope is ν = 0.5000 ± 0.005. (b) Correlation integral for the Hénon map with c = 1.4, β = 0.01 and N = 1.5 × 10^4; the measured slope is νeff = 1.21 ± 0.01. (c) Correlation integrals for the Lorenz equations (dots, ν = 2.05 ± 0.01) and for the Rabinovich-Fabrikant equations (open circles, ν = 2.19 ± 0.01). In both cases N = 1.5 × 10^4 and τ = 0.25. [136, 137]

Packard, Crutchfield, Farmer and Shaw [262] constituted the nucleus of the Dynamic Systems Collective at the University of California, Santa Cruz
in the late 70’s and early 80’s, and as graduate students were the first inves-
tigators to demonstrate how one reconstructs a chaotic attractor from an
actual data set. They used the time series generated by one coordinate of
the three-dimensional chaotic dynamical system studied by Rössler [298],
that is, Eqs.(3.24)-(3.26) with the parameter values a = 0.2, b = 0.4 and
c = 5.7. The reconstruction method is based on the heuristic idea that
for such a three-dimensional system, any three ‘independent’ time varying
quantities are sufficient to specify the state of the system. The choice of

the three dynamic coordinates X(t), Y (t) and Z(t) is only one of the many
possibilities. They conjectured that “any such sets of three independent
quantities which uniquely and smoothly label the states of the attractor
are diffeomorphically equivalent.” In English this means that an actual dy-
namic system does not know of the particular representation chosen by
us, and that any other representation containing the same dynamic infor-
mation is just as good. Thus, an experimentalist sampling the values of a
single coordinate need not find the ‘one’ representation favored by nature,
since this ‘one’ may not in all probability exist.
Playing the role of experimentalists the Santa Cruz group sampled the
X(t) coordinate of the Rössler attractor. They then noted a number of
possible alternatives to the phase space coordinates (x, y, z) that could
give a faithful representation of the dynamics using the time series they had
obtained. One possible set was the X(t) time series itself plus two replicas
of it displaced in time by τ and 2τ , that is, X(t), X(t + τ ) and X(t + 2τ ).
Note that implicit in this choice is the idea that X(t) is so strongly coupled
to the other degrees of freedom that it contains dynamic information about
these coordinates as well as itself. A second representation set is obtained
by making the time interval τ an infinitesimal, so that by taking differences
between the variables we obtain X(t), Ẋ(t) and Ẍ(t).

FIGURE 3.35. A two-dimensional projection of the Rössler chaotic attractor (A) is compared with the reconstruction in the (t, x) plane of the attractor (B) from the time series X(t). The dashed line indicates the Poincaré surface of section for this attractor. (From Packard et al. [262].)

Figure 3.17d shows a projection of the Rössler chaotic attractor on the {X(t), X(t + τ)} plane. Figure 3.35 depicts the reconstruction of that attrac-
tor from the sampled X(t) time series in the (x, t) plane. It is clear that the
two attractors are not identical, but it is also clear that the reconstructed
attractor retains the topological characteristics and geometrical form of
the ‘experimental attractor’. One quantitative measure of the equivalence
of the experimental and reconstructed attractors is the Lyapunov exponent
associated with each one. This exponent can be determined by construct-
ing a return map for each of the attractors and then applying the relation
Eq.(3.60).

FIGURE 3.36. Attractor from a chemical oscillator. (a) The time series X(t) is the bromide ion concentration in a Belousov-Zhabotinskii reaction. A time interval τ is indicated. (b) Plot of X(t) versus X(t + τ). Dotted line indicates a cut through the attractor. (c) Cross section of attractor along cut. (d) Poincaré return map of cut; P(N + 1) is the position at which the trajectory crosses the dotted line as a function of the crossing position on the previous turn around the attractor. (From Roux et al. [299] with permission.)

A return map is obtained by constructing a Poincaré surface of section.


In this example of an attractor projected onto a two-dimensional plane, the
Poincaré surface of section is the intersection of the attractor with a line
transverse to the attractor. This intersection is indicated by the dashed line

in Figure 3.35B and the measured data are the sequence of values {Xn }
denoting the crossing of the line by the attractor in the positive direction.
These data are used to construct a next amplitude plot in which each
amplitude Xn+1 is plotted as a function of the preceding amplitude Xn . It
is possible for such a plot to yield anything from a random spray of points to
a well defined curve. If in fact we find a curve with a definite structure then
it may be possible to construct a return map for the attractor. For example,
the oscillating chemical reaction of Belousov and Zhabotinskii was shown
by Simoyi et al. [321] to be describable by such a one-dimensional map. In
Figure 3.36 we indicate the return map constructed from the experimental
data [321].
Simoyi et al. [321] point out that there are 25 or so distinct chemicals in
the Belousov-Zhabotinskii reaction, many more than can be reliably mon-
itored. Therefore there is no way to construct the twenty-five dimensional
phase space X(t) = {X1 (t), X2 (t), · · ·X25 (t)} from the experimental data.
Instead they use the embedding theorems of Whitney [393] and Takens
[333] to justify the monitoring of a single chemical species, in this case the
concentration of the bromide ion, for use in constructing an m−dimensional
phase portrait of the attractor {X(t), X(t + τ ), · · ·X[t + (m − 1)τ ]} for suf-
ficiently large m and for almost any time delay τ . They find that for their
experimental data m = 3 is adequate and the resulting one-dimensional
map as depicted in Figure 3.36, provided the first example of a physical
system with many degrees of freedom that can be so modeled in detail.
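A next amplitude plot of the kind shown in Figure 3.36d can be sketched from any scalar record. Below, a crude Euler integration of the Rössler equations with the parameter values quoted above supplies the record (step size, run length and transient are arbitrary choices), and successive local maxima form the return-map pairs (Xn , Xn+1).

def next_amplitude_pairs(x):
    """Pairs (X_n, X_{n+1}) of successive local maxima of the record x."""
    peaks = [x[i] for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    return list(zip(peaks[:-1], peaks[1:]))

# crude Euler integration of the Rossler flow, Eqs. (3.24)-(3.26)
a, b, c = 0.2, 0.4, 5.7
x, y, z = 1.0, 1.0, 1.0
dt, xs = 0.01, []
for i in range(200_000):
    dx, dy, dz = -(y + z), x + a * y, b + z * (x - c)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    if i > 50_000:                # discard the transient approach to the attractor
        xs.append(x)

pairs = next_amplitude_pairs(xs)
print(len(pairs), pairs[:3])      # plotting these pairs gives the return map

If the plotted pairs fall on a well defined single-humped curve, the flow admits a one-dimensional map description of the kind Simoyi et al. found for the Belousov-Zhabotinskii data.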
Let us now recap the technique. We assume that the system of interest can be described by m variables, where m is large but unknown, so that
at any instant of time there is a point X(t) = [X1 (t), X2 (t), · · ·, Xm (t)] in
an m−dimensional phase space that completely characterizes the system.
This point moves around as the system evolves, in some cases approaching
a fixed point or limit cycle asymptotically in time. In other cases the motion
appears to be purely random and one must distinguish between a system
confined to a chaotic attractor and one driven by noise. In experiments, one
often only records the output of a single detector, which selects one of the m components of the system for monitoring. In general the experimentalist
does not know the size of the phase space since the important dynamic
variables are usually not known and therefore s/he must extract as much
information as possible from the single time series available, X1 (t) say. For
sufficiently long times t one uses the embedding theorem to construct the
sequence of displaced time series {X1 (t), X1 (t+τ ), ..., X1 [t+(m−1)τ ]}. This
set of variables has been shown to have the same amount of information as
the d−dimensional phase point provided that m ≥ 2d + 1. Thus, as time
goes to infinity, we can build from the experimental data a one-dimensional
phase space X1 (t), a two-dimensional phase space with axes {X1 (t), X1 (t + τ)}, a three-dimensional phase space with axes {X1 (t), X1 (t + τ), X1 (t + 2τ)}, and so on. The condition on the embedding dimension m, that is, m ≥ 2d + 1, is often overly restrictive and the reconstructed attractor does not require m to be so large.
Grassberger and Procaccia [136, 137] extended their original method,
being inspired by the work of Packard et al. [262] and Takens [333], to
the embedding procedure just described. Instead of using the Xj data set
discussed previously, they employ the m−dimensional vector

ξ (tj ) = {X1 (tj ), X1 (tj + τ ), ..., X1 [tj + (m − 1)τ ]} (3.91)


from m−copies of the original time series X1 (t). The m−dimensional cor-
relation integral is

$$
C_m(r) = \lim_{M \to \infty} \frac{1}{M(M-1)} \sum_{i,j=1}^{M} \Theta\left( r - |\xi(t_i) - \xi(t_j)| \right) , \qquad (3.92)
$$

which for a chaotic (fractal) time series again has the power-law form

$$
C_m(r) \sim r^{\nu_m} \qquad (3.93)
$$
where
$$
\lim_{m \to \infty} \nu_m = \nu \qquad (3.94)
$$
and ν is again the correlation dimension. In Figure 3.34 the results for the
Lorenz model with m = 3 is depicted where X1 (t) ≡ X(t); the power-law
is still satisfactory being in essential agreement with the earlier result.
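The whole embedding procedure fits in a few lines; the sketch below builds the delay vectors of Eq. (3.91) from a scalar Henon record and estimates the slopes νm of Eq. (3.93) for increasing embedding dimension. The unit delay and the pair of radii are arbitrary choices.

import math

def delay_embed(x, m, tau):
    """The vectors xi(t_j) of Eq. (3.91) built from a scalar record x."""
    return [tuple(x[j + k * tau] for k in range(m))
            for j in range(len(x) - (m - 1) * tau)]

def c_m(vectors, r):
    """The embedded correlation integral C_m(r) of Eq. (3.92)."""
    n, count = len(vectors), 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(vectors[i], vectors[j]) < r:
                count += 2          # each unordered pair counts twice
    return count / (n * (n - 1))

# scalar record: the X coordinate of the Henon map
xs, (x, y) = [], (0.1, 0.1)
for n in range(1_500):
    x, y = 1.0 - 1.4 * x * x + y, 0.2 * x
    if n >= 100:
        xs.append(x)

for m in (1, 2, 3, 4):
    v = delay_embed(xs, m, tau=1)
    nu_m = (math.log(c_m(v, 0.2)) - math.log(c_m(v, 0.05))) / math.log(4.0)
    print(m, nu_m)   # saturation of nu_m with m suggests a low-dimensional attractor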

3.3.3 Chaotic attractors and false alarms


The correlational integral of Grassberger and Procaccia was devised to de-
termine the dimensionality of an attractor from time series data, assuming
that such an attractor does in fact exist. It has been pointed out by Osborne
and Provenzale [258] that this has not been how the correlation dimension
has been used in the analysis of experimental data sets. The procedure has
been to apply the embedding procedure to a measured time series from a
dissipative system, and if the evaluation of Eq. (3.92) yields a finite value
for the correlation dimension, then the system is thought to be describable
by deterministic dynamics. Further, if the value of ν is low and non-integer
then the dynamics are argued to be governed by a strange attractor and
are therefore chaotic. This logical fallacy has been particularly apparent in
the analysis of geophysical data sets [see for example Nicolis and Nicolis
[249] and Fraedrich [100]].

One reason for the misapplication of the correlation dimension is the recognition that for a stochastic process the correlation integral diverges as a power law with the power-law index νm being given by the embedding
a power law with the power-law index νm being given by the embedding
dimension. This situation arises because a stochastic or random time series
is believed to completely fill the available volume whereas a chaotic time
series is restricted to an attractor of finite dimension d ≥ ν lower than
the embedding dimension m as m becomes large. This widely held belief
is based on the example of white noise for the random time series, but
has in fact been shown by a number of investigators not to be a general
result [258, 335]. The crux of the matter is that the Grassberger-Procaccia
measure determines if the time series is fractal or not, but not the cause of
its being fractal. While it is true that a low-dimensional chaotic attractor
generates a fractal time series, so too do other multiple-scale processes.
For example a scalar wave scattered from a fractal surface itself becomes
a fractal times series [39], or the cardiac pulse traversing the fractal His-
Purkinje condition system results in a fractal times series [123] as discussed
in Chapter Two.
Osborne and Provenzale [258] calculate a finite and well defined value for
the correlation dimension for a class of random noises with inverse power-
law spectra. The time series they consider is given by the discrete Fourier
representation


$$
X(t_j) = \sum_{k=1}^{M/2} \left[ S(\omega_k)\, \Delta\omega_k \right]^{1/2} \cos\left( \omega_k t_j + \phi_k \right) ; \qquad j = 1, 2, \cdots, M, \qquad (3.95)
$$

where ωk = 2πk/(MΔt), Δt is the sampling interval and M is the number of data points in the time series. The time series X(tj ) is random if the set
of phases {φk } is uniformly distributed on the interval (0, 2π). In this case
S (ωk ) is the power spectrum of the time series denoting the way in which
energy is distributed over the frequencies contributing to the series.
As we discussed in Chapter Two a fractal process is free of a characteristic
time scale and is expected to have an inverse power-law spectrum [258].
Thus, Osborne and Provenzale investigated the properties of the times
series Eq. (3.95) with

$$
S(\omega_k) = \frac{C}{\omega_k^{\alpha}} \qquad (3.96)
$$

where C > 0 is chosen to yield a unit variance for the times series and
α > 0. Such time series are said to be ‘colored’ noise and have generated a
great deal of interest [242, 383].
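Eq. (3.95) translates directly into a random-phase Fourier synthesis. In the sketch below the constant C is simply set to one rather than tuned for unit variance, and the record length and sampling interval are arbitrary choices.

import math, random

def colored_noise(alpha, m_pts=1024, dt=1.0):
    """X(t_j) of Eq. (3.95) with the inverse power-law spectrum of Eq. (3.96), C = 1."""
    dw = 2.0 * math.pi / (m_pts * dt)           # frequency spacing Delta omega
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(m_pts // 2)]
    series = []
    for j in range(m_pts):
        t, xj = j * dt, 0.0
        for k in range(1, m_pts // 2 + 1):
            w = k * dw
            xj += math.sqrt(dw / w**alpha) * math.cos(w * t + phases[k - 1])
        series.append(xj)
    return series

x = colored_noise(alpha=1.75)   # the spectral exponent studied in the text
print(len(x), min(x), max(x))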

Osborne and Provenzale calculated X(t) = {X1 (t), X2 (t), ..., Xm (t)} for fifteen different values of the embedding dimension m = 1, 2, ..., 15 for spe-
cific values of α. The correlation function Eq.(3.92) is then calculated for
each value of m and lnCm (r) is graphed versus lnr in Figure 3.37. The
slope of these curves yields νm from Eq. (3.93)

lnCm (r) = νm ln r + constant. (3.97)


This value of νm is then plotted versus the embedding dimension in the
associated figure. If the values of νm saturate for increasing m then from
Eq. (3.94) we obtain the value of the correlation dimension. One would
have expected that since X(t) is a stochastic process that no saturation
value exists, but this is seen not to be the case as α increases.
FIGURE 3.37. (a) The fifteen correlation functions Cm (r) for a spectral exponent α = 1.0, and (b) the correlation dimension νm versus the embedding dimension m for this case. No saturation is evident in this case.

In Figure 3.37a are depicted the fifteen correlation functions Cm (r) for m = 1, 2, ..., 15 with α = 1.0 in Eq. (3.95). This corresponds to a white noise spectrum. The straight line region of the figure yields νm from Eq. (3.97).

Figure 3.37b shows νm versus m, where no saturation is evident as one would have expected. In Figure 3.38a the fifteen values of the correlation
function are depicted for α = 1.75. Again scaling is seen to be present from
the straight line region of the graph, and as shown in Figure 3.38b the
values of νm do saturate for large m to the value ν ≈ 2.66. As they point
out: “This traditionally unexpected result thus implies that a finite value
for the correlation dimension ν may be found even for non-deterministic,
random signals.” Thus, by repeating this analysis for a number of values
α they find a quantitative relation between ν and α. This relation ν(α) is
shown in Figure 3.38, where we see that for α ≥ 3 the correlation function
saturates at a value of unity. From these results it is clear that for white
noise there is no correlation dimension, but that is not true in general for
random processes with inverse power-law spectra.

FIGURE 3.38. (a) The fifteen correlation functions Cm (r) for a spectral exponent α = 1.75, and (b) the dimension νm versus m. The correlation dimension saturates at a value ν ≈ 2.66.

As we discussed in Chapter Two a fractal stochastic process is self-affine so that if we consider an increment of X(t):
$$
\Delta X(\tau) = X(t + \tau) - X(t) \qquad (3.98)
$$
and scale the time interval τ by a constant Γ
$$
\Delta X(\Gamma \tau) = \Gamma^H \Delta X(\tau) \qquad (3.99)
$$
where H is the scaling exponent, 0 < H ≤ 1. Now if we generate a self-
similar trajectory from the time series Eq. (3.95) in an m−dimensional
phase space, each component has the same scaling exponent H. The fractal
dimension of the trajectory generated by the colored noise is then given by
Mandelbrot [217],
d = min(1/H, m). (3.100)

FIGURE 3.39. The correlation dimension ν versus the spectral exponent α. The correlation dimension turns out to be a well defined, monotonically decreasing function ν(α) of the spectral exponent α for this class of random noises.

Thus, for 0 < H < 1 the trajectory is a fractal curve since its fractal
dimension strictly exceeds its topological dimension DT = 1.
Osborne and Provenzale numerically verify the relation Eq. (3.100) for
the colored noise time series. Using the scaling relation Eq. (3.99) they
evaluate the average of the absolute value of the increment in the process
X(t):

|ΔX (Γτ )| = ΓH |ΔX (τ )| (3.101)


The average is taken over the fifteen realizations of the stochastic process
used earlier as well as over time. If in fact the process is self-affine then

a plot of ln⟨|ΔX(Γτ)|⟩ versus ln Γ should be a straight line with slope H. In Figure 3.40 are shown three straight line curves corresponding to α = 1.0, 1.75 and 2.75 with the slope values of H = 0.1, 0.39 and 0.84, respectively. The fractal dimensions D = 1/H in these three cases are D = 10, 2.56 and 1.19, respectively. In Figure 3.41 the values of D are
depicted for those of the spectral exponent α used in Figure 3.39 and are
compared with the theoretical value D = 1/H. The agreement is seen to
be excellent from which we conclude that the random paths with inverse
power-law spectra are self-affine fractal curves.
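The scaling test of Eq. (3.101) is equally brief to sketch: average the absolute increments over a ladder of lags, fit the log-log slope to obtain H, and take D = 1/H as in Eq. (3.100). Here an ordinary random walk, whose known exponent H = 1/2 gives D = 2, serves as a self-contained check; the lag ladder and record length are arbitrary choices.

import math, random

def scaling_exponent(x, lags=(1, 2, 4, 8, 16, 32)):
    """Least-squares slope of ln<|Delta X|> versus ln(lag): the H of Eq. (3.101)."""
    pts = []
    for lag in lags:
        mean_inc = (sum(abs(x[j + lag] - x[j]) for j in range(len(x) - lag))
                    / (len(x) - lag))
        pts.append((math.log(lag), math.log(mean_inc)))
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return (sum((p[0] - mx) * (p[1] - my) for p in pts) /
            sum((p[0] - mx) ** 2 for p in pts))

# test record: an ordinary random walk, for which H = 1/2 and hence D = 2
walk, s = [], 0.0
for _ in range(20_000):
    s += random.gauss(0.0, 1.0)
    walk.append(s)

h = scaling_exponent(walk)
print(h, 1.0 / h)   # H near 0.5, so D = 1/H near 2, consistent with Eq. (3.100)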
FIGURE 3.40. The three straight lines correspond to α = 1.0, 1.75 and 2.75 with the slope values from Eq. (3.101) given by H = 0.1, 0.39 and 0.84, respectively.

Panchev [264] established a relation between the index of the structure function of a time series with an inverse power-law spectrum and the spectral exponent, which in the present case yields α = 2H + 1. Thus, the fractal dimension of a stochastic trajectory generated by a colored noise

process with an inverse power-law spectrum is given by


$$
D = \frac{2}{\alpha - 1} . \qquad (3.102)
$$
Since 0 < H < 1 the inverse power law index is in the interval 1 < α <
3. For α > 3 the Hausdorff dimension of the trajectory is equal to its
topological dimension and the curve is no longer fractal. For 0 ≤ α ≤ 1 the
scaling exponent is zero and D has an infinite value, that is, the traditional
expectation for stochastic processes is realized.
A number of other interesting conclusions can be reached regarding the
statistical properties of the time series Eq. (3.95) with the spectrum Eq.
(3.96) [258].
FIGURE 3.41. The fractal dimension D, determined as the inverse of the scaling exponent, versus the spectral exponent α. The solid and the dashed lines are theoretical relationships for ‘perfect’ and truncated power-law spectra, respectively.

3.4 Summary and perspective


A number of scientists [57] have demonstrated that the stability of hierarchical biological networks is a consequence of the interactions among the elements of the network. Furthermore, there is an increase in stability resulting from the nesting of networks within networks: organelles into cells, cells into tissue, tissues into organs and so on up from the microscopic to the macroscopic. Each network level confers additional stability

on the overall fractal structure. The fractal nature of the network suggests
a basic variability in the way networks are coupled together. For example,
the interaction between cardiac and respiratory cycles is not constant, but
adapts to the physiologic challenges being experienced by the body.
Modeling the adaptation of gait to various conditions was considered by
extending the traditional central pattern generator (CPG) to include corre-
lated stochastic processes to produce the super or stochastic central pattern
generator (SCPG). Walking is thought to be a consequence of the two-way
interaction between the neural networks in the central nervous system plus
the intraspinal nervous system on one side and the mechanical periphery
consisting of bones and muscles on the other. That is, while the muscles
receive commands from the nervous system, they also send back sensory
information that modifies the activity of the central neurons. The coupling
of these two networks produces a complex stride interval time series that
is characterized by particular symmetries including fractal and multifrac-
tal properties that depend upon several biological and stress constraints.
It has been shown that: (a) the gait phenomenon is essentially a rhythmic cy-
cle that obeys particular phase symmetries in the synchronized movement
of the limbs; (b) the fractal and multifractal nature of the stride interval
fluctuations become slightly more pronounced under faster or slower paced
frequencies relative to the normal paced frequency of a subject; (c) the
randomness of the fluctuations is higher for children than for adults and
increases if subjects are asked to synchronize their gait with the frequency
of a metronome or if the subjects are elderly or suffering from neurodegen-
erative disease. In this chapter the SCPG model, which is able to reproduce
these known properties of walking, was briefly reviewed.
The description of a complex network consists of a set of dynamical ele-
ments, whatever their origin, together with a defining set of relations among
those elements; the dynamics and the relations are typically nonlinear. A
physiologic network may be identified as such because it performs a spe-
cific function, such as breathing or walking, but each of these functional
networks is part of a hierarchy that together constitutes the living human
body. Consequently the human body may be seen as a network of networks,
each separate network such as the beating heart is complex in its own right,
but also contributes to a complex network of interacting networks. This is
what the physician has chosen to understand: even though the neurosurgeon specializes in the brain and the cardiologist focuses on the heart, both must interpret the signals from the network to determine the influence of the other networks on their specialization.
The allometry relations discussed in Chapter Two have been tied to
chaos [111]. The connection was made by Ginzburg et al. [111] noting that
May and Oster [229] suggested the likelihood of population extinction is increased when the rate of population growth is sufficiently high so as to


produce an instability, specifically chaos. This observation indicates that
there is a practical upper limit to the rate at which a population can grow
and Ginzburg et al. [111] refer to this value of the growth rate as the
May threshold. They argue that the existence of the May threshold implies
that the lifetime reproductive rate should be independent of body size.
This same size independence of the growth rate had been used earlier by
Andresen et al. [7] to argue that organisms produce entropy at the same
intrinsic rate and thus fulfill a necessary condition for maximum efficiency.
This possible interrelation among entropy production, efficiency and chaos
is taken up again subsequently.
It has been demonstrated that the irregular time series observed in such
disciplines as economics, chemical kinetics, physics, language, physiology,
biology and so on, are at least in part due to chaos [194, 374]; dynamics on
a strange attractor. Technically, chaos, the dynamic concept discovered and
developed by Poincaré, is a sensitive dependence on initial conditions of the
solutions to a set of nonlinear, deterministic, dynamical equations. Prac-
tically, chaos implies that the solutions to such equations look erratic and
may pass all the traditional tests for randomness, even though the equa-
tions themselves are deterministic. Therefore, if random time series are
thought of as complex, then the output of a chaotic generator is complex.
However, we know that systems as simple as a one-dimensional, quadratic
map can generate a chaotic sequence. Thus, using the traditional definition
of complexity, it would appear that chaos implies the generation of com-
plexity from simplicity. Chaos is ubiquitous; all biological networks change
over time, and because they are nonlinear, they can, in principle, manifest
chaotic behavior at some level of description.
A deterministic nonlinear network, with only a few dynamical variables,
has chaotic solutions and therefore can generate random patterns. Thus, the
same restrictions on what can be known and understood about a network
arise when there are only a few nonlinear dynamical elements as when there
are a great many dynamical elements; but the reasons for the uncertainty
are very different. One process generates noise, the unpredictable influence
of the environment on the system of interest. Here the environment is
assumed to have an infinite number of elements, all of which are unknown,
but they are coupled to the system of interest and perturb it in a random,
that is, unknown, way [198]. By way of contrast, chaos is a consequence of
the nonlinear, deterministic interactions in an isolated dynamical system,
resulting in erratic behavior of, at most, limited predictability. Chaos is
an implicit property of a nonlinear dynamical network, whereas noise is a
property of the environment in contact with the network of interest. Chaos
can therefore be controlled and predicted over short time intervals, whereas
noise can neither be predicted nor controlled, except perhaps through the
way the system is coupled to the environment.
The above distinction between chaos and noise highlights one of the
difficulties in formulating unambiguous measures of the dynamic properties
of complex phenomena. Since noise cannot be predicted or controlled it
might be viewed as being complex, thus, systems with many degrees of
freedom manifest randomness and may be considered complex. On the
other hand, systems with only a few dynamical elements, when they are
chaotic, might be considered simple. In this way the idea of complexity is ill
posed, because very often chaos and noise are indistinguishable, so whether
the system has a few variables (simple?) or many variables (complex?) is
not known. Consequently, because noise and chaos are often confused with
one another new measures for their discrimination have been developed
[34, 180], such as the correlation and fractal dimension, as well as the
attractor reconstruction technique (ART).

Chapter 4
Statistics in Fractal Dimensions

The description of a complex network consists of a set of dynamical el-
ements, whatever their origin; the elements could be people, computers,
biological cells, bronchial airways, and so on. These elements are connected
by means of a defining set of relations; the dynamics and the relations
are typically nonlinear. A physiologic network may be identified through
the specific function it performs, such as breathing or walking, but each
of these functional networks is part of the hierarchy that together consti-
tutes the living human body. Consequently the human body may be seen
as a network of networks, each separate network being complex in its own
right, but contributing to a complex network of interacting networks. This
is what the physician has chosen to understand, even though the neuro-
surgeon specializes in the brain and the cardiologist focuses on the heart,
they both interpret the signals from their respective network to determine
the influence of the other networks on their area of specialization.

4.1 Complexity and Unpredictability


It is not very useful to list the properties associated with the complexity of
a network, since any list of traits of complexity would be arbitrary and id-
iosyncratic [98, 357]. Therefore instead of such a list we propose the working
definition of a complex phenomenon as being a network with complicated
and intricate features, having both the characteristics of randomness and
order. Implicit in this definition is the notion that order alone is simple and
randomness alone can also be simple; it is the two together that constitute
complexity in a physiologic network, as I argue below.
The most subtle concept entering into the discussion of complexity is the
existence and role of randomness. From one perspective the unpredictabil-
ity associated with randomness has to do with the large number of elements
in a network [214]. A large number of variables may be a sufficient, but it is
not a necessary, condition for randomness and loss of predictability. As dis-
cussed in Chapter One, having only a few dynamical elements in a network
does not ensure predictability or knowability. It has been demonstrated that
the irregular time series observed in such disciplines as economics, chemical
kinetics, physics, language, physiology, biology and so on, are at least in
part due to chaos [194, 374]. Practically, chaos implies that the solutions
to nonlinear deterministic dynamical equations look erratic and may pass
all the traditional tests for randomness, even though the equations them-
selves are deterministic. Therefore, if random time series are considered to
be complex, then the output of a chaotic generator is complex. However,
we know that systems as simple as a one-dimensional, quadratic map can
generate a chaotic sequence. Thus, using the traditional definition of com-
plexity, it would appear that chaos implies the generation of complexity
from simplicity. This is part of Poincaré’s legacy of paradox [275]. Another
part of his legacy is the fact that chaos is a generic property of nonlinear
dynamical systems, which is to say chaos is ubiquitous; all physiologic sys-
tems change over time, and because they are nonlinear, in principle they
manifest chaotic behavior at some level of description.
A deterministic nonlinear system, with only a few dynamical variables,
has chaotic solutions and therefore can generate random patterns. The
same restrictions on knowing and understanding a network arise when there
are only a few nonlinear dynamical elements as when there are a great many
dynamical elements, but for very different reasons. I refer to random pro-
cesses generated by many variables as noise, the unpredictable influence of
the environment on the network of interest. Here the environment is as-
sumed to have an infinite number of elements, all of which are unknown,
but they are coupled to the network of interest and perturb it in a random,
that is, unknown, way [198]. By way of contrast, chaos is a consequence of
the nonlinear, deterministic interactions in an isolated dynamical system,
resulting in erratic behavior of limited predictability. Chaos is an implicit
property of a nonlinear dynamical system, whereas noise is a property of
the environment in contact with the system of interest. Chaos can there-
fore be controlled and predicted over short time intervals, whereas noise
can neither be predicted nor controlled, except perhaps through the way
the system is coupled to the environment.
The above distinction between chaos and noise highlights one of the
difficulties of formulating unambiguous measures of the dynamic properties
of complex phenomena. Since noise cannot be predicted or controlled it
might be viewed as being complex, thus, systems with many degrees of
freedom manifest randomness and may be considered complex. On the
other hand, systems with only a few dynamical elements, when they are
chaotic, might be considered simple. In this way the idea of complexity
is ill posed, because very often we cannot distinguish between chaos and
noise, so it cannot be known if the network has a few variables (simple?)
or many variables (complex?). Consequently, because noise and chaos are
often confused with one another in data a new approach to the definition
of complexity is required.
In early papers on systems theory it was argued that the increasing com-
plexity of an evolving system can reach a threshold where the system is so
complicated that it is impossible to follow the dynamics of the individual
elements, see for example, Weaver [357]. At this point new properties often
emerge and the new organization undergoes a completely different type of
dynamics. The details of the interactions among the individual elements are
substantially less important, at this point, than is the ‘structure’, the ge-
ometrical pattern, of the new aggregate. This is self-aggregating behavior.
Increasing the number of elements beyond this point, or alternatively in-
creasing the number of relations among the existing elements, often leads to
a complete ‘disorganization’ and the stochastic approach becomes a viable
description of the system behavior. If randomness (noise) is now consid-
ered as something simple, as it is intuitively, one has to seek a measure of
complexity that increases as the number of variables increases, reaches a
maximum where new properties may emerge, and eventually decreases in
magnitude in the limit of the system having an infinite number of elements,
where thermodynamics properly describes the system. Thus, a system is
simple when its dynamics are regular and described by a few variables; sim-
ple again when its dynamics are random and described by a great many
variables; but somewhere between these two extremes its dynamics are
complex, being a mixture of regularity (order) and randomness (disorder).

4.1.1 Scaling Measures


Consider an unknown dynamic variable Z(t) that satisfies a relation of the
form

Z(bt) = aZ(t) (4.1)


As seen earlier such relations are referred to as scaling and can be solved
in the same way differential equations are solved. Practically one usually
guesses the form of a solution, substitutes that guess into the equation and
sees if it works. I do that here and assume a trial solution of the form used
earlier

Z(t) = A(t)t^μ (4.2)


where A(t) is an unknown function and μ is an unknown exponent and
they are assumed to be independent of one another. Substituting Eq.(4.2)
into the scaling relation yields

A(bt)(bt)^μ = aA(t)t^μ (4.3)


resulting in the separate equations

b^μ = a and A(bt) = A(t) (4.4)


Thus, the power-law index is related to the scaling parameters by

μ = ln a/ln b (4.5)
and the real coefficient function is periodic in the logarithm of time with
period ln b and consequently can be expressed in terms of a Fourier expan-
sion


A(t) = Σ_{n=−∞}^{∞} A_n e^{i2πn ln t/ln b}. (4.6)

Recall that this was the same function obtained in fitting the bronchial tube
data to the average diameter. In the literature Z(t) is called a homogeneous
function [374].
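
A quick numerical check of this solution is straightforward. The sketch below (in Python, using only the standard library; the coefficients A_n are arbitrary illustrative values, not drawn from any data set) confirms that the trial form Eq.(4.2), with A(t) given by a truncated version of the expansion Eq.(4.6), satisfies the scaling relation Eq.(4.1) exactly:

    import math, cmath

    b, a = 2.0, 3.0
    mu = math.log(a) / math.log(b)          # power-law index, Eq. (4.5)

    def A(t, coeffs=(1.0, 0.3, 0.1)):
        # Truncated Fourier expansion, Eq. (4.6): periodic in ln t, period ln b
        return sum(c * cmath.exp(2j * math.pi * n * math.log(t) / math.log(b))
                   for n, c in enumerate(coeffs)).real

    def Z(t):
        return A(t) * t**mu                 # trial solution, Eq. (4.2)

    for t in (0.5, 1.0, 7.3):
        print(Z(b * t), a * Z(t))           # the two columns agree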
The homogeneous function Z(t) is now used to define the scaling observed
in the moments of an experimental time series with long-time memory. The
second moment of a homogeneous stochastic process X(t) having long-time
memory is given by

⟨X(bt)²⟩ = b^{2H} ⟨X(t)²⟩ (4.7)
where the brackets denote an average over an ensemble of realizations of the
fluctuations in the time series. Like the Weierstrass function which repeats
itself at smaller and smaller scales we see that the characteristic measure
of a time series, the second moment, has the same scaling property. This
implies that the scaling index can be related to the fractal dimension as
done in Section 2.2 so that

H = 2 − D (4.8)
relating the exponent H to the fractal dimension D.
This same process has a different scaling for the stationary covariance

C(τ) = ⟨X(t + τ)X(t)⟩, (4.9)


which if normalized by the variance becomes the autocorrelation function.
The covariance is stationary because it is independent of the time t and
depends only on the time difference τ. The scaling for the covariance is given
by

C(bτ) = b^{2H−2} C(τ) (4.10)


and note that this differs from the scaling determined for the mean-square
increments considered in Section 2.2.
Finally, the power spectral density for this time series is given by the
Fourier transform of the covariance

S(ω) = (1/2π) ∫_{−∞}^{∞} e^{−iωt} C(t) dt (4.11)

and has the scaling property determined by substituting Eq.(4.10) under
the integral

S(ω) = (1/2π) ∫_{−∞}^{∞} e^{−iωt} [C(bt)/b^{2H−2}] dt = S(ω/b)/b^{2H−1}. (4.12)

The more familiar equivalent form for the scaling of the power spectral
density is

S(bω) = b^{1−2H} S(ω). (4.13)


The solutions to each of these three scaling equations are of precisely the
algebraic forms implied by Eq. (4.1) with the modulation amplitude fixed
at a constant.
The above renormalization scaling yields a mean-square signal level that
increases nonlinearly with time as

⟨X(t)²⟩ ∝ t^{2H} (4.14)

and the exponent H is a real constant, often called the Hurst exponent, after
Mandelbrot's identification of the civil engineer who first used this scaling.
In a simple random walk model of such a process the steps of the walker
are statistically independent of one another and H = 1/2, corresponding
to classical diffusion. This scaling behavior is also manifest in the power
spectrum, which is an inverse power law in frequency
S(ω) ∝ 1/ω^{2H−1}. (4.15)
For H = 1 this spectrum corresponds to 1/f-noise, a process that is found
in multiple physiologic phenomena.
Together these three properties, the algebraic increase in time of the
mean-square signal strength, the inverse power law in time of the covari-
ance and the inverse power law in frequency of the spectrum, are typically
observed in anomalous diffusion. These properties are usually the result
of long-time memory in the underlying statistical process. Beran discusses
these power-law properties of the spectrum and covariance, as well as a
number of other properties involving long-time memory, for discrete time
series [38]. However, there is another interpretation of anomalous diffusion
in terms of the statistical distribution, which we take up in the sequel.
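
These signatures are easy to illustrate numerically in the simplest case. The following sketch (assuming the numpy package is available) builds an ensemble of uncorrelated random walks and recovers H = 1/2 from the growth of the mean-square signal level, Eq.(4.14); a process with long-time memory would shift the estimated exponent away from 1/2:

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 2000, 1024
    steps = rng.choice([-1.0, 1.0], size=(M, N))   # independent coin-flip steps
    X = np.cumsum(steps, axis=1)                   # ensemble of random walks

    # Mean-square signal level, Eq. (4.14): <X(t)^2> grows as t^(2H)
    t = np.arange(1, N + 1)
    msd = (X**2).mean(axis=0)
    H = 0.5 * np.polyfit(np.log(t), np.log(msd), 1)[0]
    print(f"estimated H = {H:.2f} (expected 0.5 for uncorrelated steps)")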

4.2 Fractal Stochastic Dynamics


The best physical model is the simplest one that can ‘explain’ the avail-
able experimental time series, with the fewest number of assumptions. Al-
ternative models are those that make predictions and which can assist
in formulating new experiments that can discriminate between different
hypotheses. The simplest model that incorporates both deterministic dy-
namics and statistics is a simple random walk, which in its simplest form
provides a physical picture of diffusion, that is, a dynamic variable with
Normal statistics in time. Diffusive phenomena scale linearly in time and
generalized random walks including long-term memory also scale, but they
do so nonlinearly in time, as in the case of anomalous diffusion. Fractional
diffusion operators are used to incorporate memory into the dynamics of
a diffusive process and lead to fractional Brownian motion, among other
things. The continuum form of these fractional operators is discussed sub-
sequently.
The continuous limit of a simple random walk model leads to a stochastic
dynamic equation, first discussed in physics in the context of diffusion by
Paul Langevin. The random force in the Langevin equation, for a simple
dichotomous process with memory, leads to a diffusion variable that scales
in time and has a Normal probability density. A long-time memory in
such a random force is shown to produce a non-Normal probability density
for the system response. Finally I show that physiologic time series are not
monofractal, but have a fractal dimension that changes over time. The time
series are multifractal and as such they have a spectrum of dimensions.

4.2.1 Simple Random Walks


The variable of interest is defined as Xj where j = 0, 1, 2, . . . indexes the
time step and in the simplest model a step is taken in each unit increment of
time. The operator B lowers the index by one unit such that BXj = Xj−1
so that a simple random walk can be written

(1 − B)X_j = ξ_j, (4.16)
where ξ_j is +1 or −1 and is selected according to the random process
of flipping a coin. The solution to this discrete equation is given by the
position of the walker after N steps, the sum over the sequence of steps


X(N) = Σ_{j=1}^{N} ξ_j (4.17)

and the total number of steps can be interpreted as the total time t over
which the walk unfolds, since we have set the time increment to one. For N
sufficiently large the central limit theorem determines the statistics of the
dynamic variable X(t) to be Normal
p_N(x) = (1/√(2π⟨X(N)²⟩)) exp[−x²/(2⟨X(N)²⟩)]. (4.18)

Assuming the random steps are statistically independent, ⟨ξ_j ξ_k⟩ = ⟨ξ_j²⟩δ_{jk},
we have for the second moment of the diffusion variable in the continuum
limit

⟨X(t)²⟩ = Σ_{j,k=1}^{N} ⟨ξ_j ξ_k⟩ = N⟨ξ²⟩ → 2Dt. (4.19)

Thus, in the continuum limit the second moment increases linearly
with time and in direct proportion to the strength of the fluctuations as
measured by the diffusion coefficient. In this case the probability density
becomes the familiar Normal distribution for Einstein diffusion
p(x, t) = (1/√(4πDt)) exp[−x²/(4Dt)]. (4.20)

Of particular interest here is the scaling property of the Normal distri-
bution Eq.(4.20). One can sample this statistical process at any level of
resolution and still observe a zero-centered Normal process. The time se-
ries obtained by sampling the process at every time τ , or at every time
bτ , where b > 0, would be statistically indistinguishable. This is the scale
invariance of Brownian motion. This scaling property is manifest by writing

p(x, t) = (1/t^δ) F(x/t^δ), (4.21)

with δ = 1/2 so that the distribution for the random variable λ^{−1/2}X(λt)
is the same as that for X(t). This scaling relation establishes that the
random irregularities are generated at each scale in a statistically identical
manner, that is, if the fluctuations are known in a given time interval they
can be determined in a second larger time interval by scaling. This is the
property used in the next section to construct a data processing method
for nonlinear dynamic processes.
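
The scale invariance can also be demonstrated directly. A minimal sketch, assuming numpy and scipy are available, compares the distribution of X(t) with that of the rescaled variable λ^{−1/2}X(λt) by means of a two-sample Kolmogorov-Smirnov test; since both samples are drawn from the same zero-centered Normal distribution, the test should not distinguish them:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    lam, N, M = 4, 256, 4000
    X = np.cumsum(rng.normal(size=(M, lam * N)), axis=1)

    fine = X[:, N - 1]                      # X(t) sampled at t = N
    coarse = X[:, lam * N - 1] / lam**0.5   # rescaled variable at t = lam*N
    print(ks_2samp(fine, coarse))           # typically a large p-value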
The simplest generalization of this model is to make each step dependent
on the preceding step in such a way that the second moment given by
Eq.(4.14) has H ≠ 1/2 and corresponds to anomalous diffusion. A value
of H < 1/2 is interpreted as an anti-persistent process in which a walker’s
step in one direction is preferentially followed by a reversal of direction. A
value of H > 1/2 is interpreted as a persistent process in which a walker’s
step in one direction is preferentially followed by another step in the same
direction. This interpretation of anomalous diffusion in terms of random
walks would be compatible with the concept of environmental noise where
the environment forces the step in each time interval.
In a complex system the response X(t) is expected to depart from the
totally random condition of the simple random walk model, since such
fluctuations are expected to have memory and correlation. In the physics
literature anomalous diffusion has been associated with phenomena with
long-time memory such that the covariance is

C(t, t′) = ⟨X(t)X(t′)⟩ ∝ |t − t′|^β. (4.22)

Here the power-law index is given by β = 2H − 2 as indicated by Eq.(4.10).


Note that the two-point covariance depends only on the time difference,
and consequently the underlying process is stationary. The covariance is
an inverse power law in time implying that the correlation between pairs of
points decreases in time with increasing time separation because 0 ≤ H ≤
1. This interpretation of anomalous diffusion would be compatible with the
concept of environmental noise.

These power-law properties of the spectrum and the autocorrelation func-
tion, as well as a number of other properties involving long-time memory,
are discussed for discrete time series by Beran [38].

4.2.2 Fractional random walks and scaling


The concept of fractional differences is most readily introduced through
the shift operator introduced in the previous subsection. Following Hosking
[162] we define a fractional difference process as
(1 − B)^α X_j = ξ_j, (4.23)
and the exponent α is not an integer. As it stands Eq.(4.23) is just a formal
definition without content. To make this equation usable I must tell you
how to represent the operator acting on Xj and this is done using the
binomial expansion [162, 374]. The inverse operator in the formal solution
of Eq.(4.23)

X_j = (1 − B)^{−α} ξ_j (4.24)
has the binomial series expansion
(1 − B)^{−α} = Σ_{k=0}^{∞} \binom{−α}{k} (−1)^k B^k. (4.25)

Expressing the binomial coefficient as the ratio of gamma functions, in
the solution Eq.(4.24), we obtain after some algebra [374, 376]

X_j = Σ_{k=0}^{∞} [Γ(k + α)/(Γ(k + 1)Γ(α))] B^k ξ_j = Σ_{k=0}^{∞} Θ_k ξ_{j−k}. (4.26)

The solution to the fractional random walk is clearly dependent on fluc-
tuations that have occurred in the remote past; note the time lag k in
the index on the fluctuations and the fact that it can be arbitrarily large.
The extent of the influence of these distant fluctuations on the system
response is determined by the relative size of the coefficients in the se-
ries.
Using Stirling’s approximation on the gamma functions determines
the size of the coefficients in Eq.(4.26) as the fluctuations recede into the
past, that is, as k −→ ∞
Θ_k ≈ (k + α − 1)^{k+α−1}/(k^k (α − 1)^α) ∝ k^{α−1} (4.27)

60709_8577 -Txts#150Q.indd 185 19/10/12 4:28 PM


186 Statistics in Fractal Dimensions

since k ≫ α. Thus, the strength of the contributions to Eq.(4.26) decreases
with increasing time lag as an inverse power law asymptotically in the time
lag as long as α < 1/2.
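
The coefficients Θ_k, and with them a realization of the fractional random walk, are simple to generate numerically. The sketch below (assuming numpy) uses the recursion Θ_k = Θ_{k−1}(k − 1 + α)/k, which follows from the gamma-function ratio in Eq.(4.26), and checks the asymptotic decay predicted by Eq.(4.27):

    import numpy as np

    alpha, K = 0.3, 5000
    theta = np.empty(K)     # Theta_k = Gamma(k+alpha)/(Gamma(k+1)Gamma(alpha))
    theta[0] = 1.0
    for k in range(1, K):
        theta[k] = theta[k - 1] * (k - 1 + alpha) / k

    # Asymptotic decay, Eq. (4.27): Theta_k ~ k^(alpha - 1)
    k = np.arange(100, K)
    slope = np.polyfit(np.log(k), np.log(theta[100:]), 1)[0]
    print(f"decay exponent {slope:.3f} vs alpha - 1 = {alpha - 1:.3f}")

    # A realization of the fractional random walk, Eq. (4.26)
    rng = np.random.default_rng(2)
    X = np.convolve(rng.normal(size=20000), theta, mode="valid")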
The spectrum of the time series Eq. (4.26) is obtained using its
discrete Fourier transform
X_j = (1/2π) ∫_{−π}^{π} X̂(ω) e^{−ijω} dω (4.28)

in the discrete convolution form of the solution Eq.(4.26) to obtain

X̂(ω) = Θ̂(ω) ξ̂(ω) (4.29)

yielding the power spectrum

S(ω) = ⟨|X̂(ω)|²⟩ = ⟨|ξ̂(ω)|²⟩ |Θ̂(ω)|². (4.30)

The strength of the fluctuations is assumed to be constant, that is, to be
independent of frequency. On the other hand, the Fourier transform of the
kernel is given by


Θ̂(ω) = Σ_{k=0}^{∞} Θ_k e^{ikω} = 1/(1 − e^{iω})^α (4.31)

so that rearranging terms in Eq.(4.31) and substituting that expression into
Eq.(4.30) yields

S(ω) ∝ 1/(2 sin[ω/2])^{2α} (4.32)
for the spectrum of the fractional-differenced white noise process. In the
low-frequency limit we therefore obtain the inverse power-law spectrum
S(ω) ∝ 1/ω^{2α}. (4.33)
Thus, since the fractional-difference dynamics are linear the system re-
sponse is Normal, the same as for the white noise process. However, whereas
the spectrum of fluctuations is flat, since it is white noise, the spectrum of
the system response is inverse power law. From these analytic results we
conclude that Xj is analogous to fractional Brownian motion. The analogy
is complete if we set α = H − 1/2 so that the spectrum Eq.(4.33) can be
expressed as
S(ω) ∝ 1/ω^{2H−1}. (4.34)
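
This low-frequency prediction can be checked on a simulated realization. The following sketch (again assuming numpy) estimates the periodogram of a fractionally differenced series and fits its low-frequency slope; a single-realization periodogram is noisy, so the fitted value is only expected to fall near −2α:

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, K, N = 0.3, 4000, 2**15
    ratios = (np.arange(1, K) - 1 + alpha) / np.arange(1, K)
    theta = np.cumprod(np.r_[1.0, ratios])     # the coefficients of Eq. (4.26)
    X = np.convolve(rng.normal(size=N + K), theta, mode="valid")

    S = np.abs(np.fft.rfft(X))**2              # periodogram
    omega = np.fft.rfftfreq(X.size)[1:200]     # low-frequency band
    slope = np.polyfit(np.log(omega), np.log(S[1:200]), 1)[0]
    print(f"spectral slope {slope:.2f} vs -2*alpha = {-2 * alpha:.2f}")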


Taking the inverse Fourier transform of the exact expression Eq.(4.32)
yields the autocorrelation coefficient

r_k = ⟨X_{j+k} X_j⟩/⟨|X_j|²⟩ ≈ [Γ(1 − α)/Γ(α)] k^{2α−1} = [Γ(1.5 − H)/Γ(H − 0.5)] k^{2H−2} (4.35)

as the lag time increases without limit, k → ∞.


The probability density function (pdf) for the fractional-difference dif-
fusion process in the continuum limit p(x, t) satisfies the scaling condition
given by Eq.(4.21) where δ = H = α + 1/2. The deviation from ordinary
statistical mechanics, and consequently the manifestation of complexity, is
indicated by two distinct quantities. The first indicator is the scaling pa-
rameter δ departing from the ordinary value δ = 0.5, which it would have
for a simple diffusion process. But for fractional Brownian motion the value
of the scaling index can be quite different. A second indicator of the devi-
ation from ordinary statistical mechanics is the function F (y) in Eq.(4.21)
departing from the conventional Normal form. The scaling index is usu-
ally determined by calculating the second moment of a time series. This
method of analysis is reasonable only when F (y) has the Normal form, or
some other distribution with a finite second moment. If the scaling condi-
tion Eq.(4.21) is realized, it is convenient to measure the scaling parameter
δ by the method of Diffusion Entropy Analysis (DEA) that, in principle,
works independently of whether the second moment is finite or not. The
DEA method affords many advantages, including that of being totally in-
dependent of a constant bias. However, before reviewing the DEA method
I examine another way in which the diffusion variable may scale, which is
to say, another mechanism to generate long-time memory.

4.2.3 Physical/physiological models


A theoretical Langevin equation is generally constructed from a Hamil-
tonian model for a simple dynamical system coupled to the environment.
The equations of motion for the coupled system are manipulated so as to
eliminate the degrees of freedom of the environment from the dynamical
description of the system. Only the initial state of the environment remains
in the Langevin description, where the random nature of the driving force
is inserted through the choice of distribution of the environment’s initial
state. The simplest Langevin equation for a dynamical system open to the

60709_8577 -Txts#150Q.indd 187 19/10/12 4:28 PM


188 Statistics in Fractal Dimensions

environment has the form


dX(t)/dt + λX(t) = ξ(t) (4.36)
where ξ (t) is a random force, λ is a dissipation parameter and there ex-
ists a relation connecting the two called the fluctuation-dissipation relation
[198, 385]. Of course Eq.(4.36) cannot be completely interpreted until the
statistical properties of the fluctuations are specified and for this the en-
vironment of the system must be known. The random driver is typically
assumed to be a Wiener process, that is, to have Normal statistics and no
memory.
When the system dynamics depends on what occurred earlier, that is, the
environment has memory, Eq.(4.36) is no longer adequate and the Langevin
equation must be modified. The generalized Langevin equation takes this
memory into account through an integral term of the form

dX(t)
+ dt K(t − t )X(t ) = ξ(t)
dt
0

where the memory kernel replaces the dissipation parameter and the
fluctuation-dissipation relation becomes generalized

K(t − t′) = ⟨ξ(t)ξ(t′)⟩. (4.37)

Both these Langevin equations are monofractal if the fluctuations are
monofractal, which is to say, the time series given by the trajectory X(t) is
a fractal random process if the random force is a fractal random process.
However, neither of these models is adequate for describing multifractal
statistical processes. A number of investigators have developed multifractal
random walk models to account for the multiple fractal character of various
physiological phenomena and here I introduce a variant of those discussions
based on the fractional calculus [378]. One generalization of the Langevin
equation incorporates memory into the system’s dynamics and has the
formally simple form

₀D_t^α[X(t)] + λ^α X(t) = X(0) t^{−α}/Γ(1 − α) + ξ(t) (4.38)
where one form of the fractional operator can be interpreted in terms of
the integral

₀D_t^α[g(t)] ≡ (1/Γ(α)) ∫_0^t g(t′) dt′/(t − t′)^{1−α}. (4.39)


This definition of a fractional operator is not unique, various forms are cho-
sen to emphasize different properties of the system being modeled [378].
Equation (4.38) is mathematically well defined, and strategies for solving
such equations have been developed by a number of investigators, particu-
larly in the book by Miller and Ross [236] that is devoted almost exclusively
to solving such equations when the index is rational and ξ(t) = 0. Here we
make no such restriction and consider the Laplace transform of Eq.(4.38)
to obtain
X̃(s) = X(0)s^{α−1}/(λ^α + s^α) + ξ̃(s)/(λ^α + s^α) (4.40)
whose inverse Laplace transform is the solution to the fractional differential
equation. Inverting Laplace transforms such as Eq. (4.40) is non-trivial and
an excellent technique that overcomes many of these technical difficulties,
implemented by Nonnenmacher and Metzler [251], involves the use of Fox
functions. For our purposes fractional derivatives can be thought of as a
way of incorporating the influence of the past history of a process into
its present dynamics. There has been a rapidly growing literature on the
fractional calculus in the past decade or so, particularly in the description
of the fractal dynamical behavior of physiological time series. We do not
have the space to develop the mathematical background for this formalism
and its subsequent use in physiology so I merely give a few examples of its
use and refer the reader to the relevant literature [376].
The formal solution to the fractional Langevin equation is expressed in
terms of the Laplace transform which can be used to indicate how the mem-
ory influences the dynamics. Recall that the fluctuations were assumed to
be zero centered, so that taking the average over an ensemble of realizations
of the fluctuations yields

⟨X̃(s)⟩ = X(0)s^{α−1}/(λ^α + s^α). (4.41)
The solution to the average fractional relaxation equation is given by the
series expansion for the standard Mittag-Leffler function [376]

⟨X(t)⟩ = X(0)E_α(−(λt)^α) = X(0) Σ_{k=0}^{∞} (−1)^k (λt)^{kα}/Γ(1 + kα) (4.42)

which in the limit α −→ 1 yields the exponential function

lim_{α→1} ⟨X(t)⟩ = X(0)e^{−λt} (4.43)

as it should, since under this condition Eq. (4.38) reduces to the ordinary
Langevin relaxation rate equation.


FIGURE 4.1. The solid curve is the Mittag-Leffler function, the solution to the frac-
tional relaxation equation. The dashed curve is the stretched exponential (Kohlrausch-
Williams-Watts Law) and the dotted curve is the inverse power law (Nutting Law).

The Mittag-Leffler function has interesting properties in both the
short-time and the long-time limits. In the short-time limit it yields the
Kohlrausch-Williams-Watts Law from stress relaxation in rheology given
by
lim_{t→0} E_α(−(λt)^α) = e^{−(λt)^α} (4.44)

also known as the stretched exponential. In the long-time limit it yields the
inverse power law, known as the Nutting Law,

lim_{t→∞} E_α(−(λt)^α) = 1/(λt)^α (4.45)

Figure 4.1 displays the general Mittag-Leffler function as well as the two
asymptotes, the dashed curve being the stretched exponential and the dot-
ted curve the inverse power law. What is apparent from this discussion
is the long-time memory associated with the fractional relaxation process,
being inverse power law rather than the exponential of ordinary relaxation.
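
The series definition Eq.(4.42) is enough to reproduce Figure 4.1 qualitatively. The sketch below, using only the Python standard library, evaluates a truncated Mittag-Leffler series (numerically reliable only for moderate arguments) alongside the two asymptotes, Eqs.(4.44) and (4.45):

    import math

    def mittag_leffler(z, alpha, terms=150):
        # Truncated series E_alpha(z) = sum_k z^k / Gamma(1 + alpha*k)
        return sum(z**k / math.gamma(1 + alpha * k) for k in range(terms))

    alpha, lam = 0.75, 1.0
    for t in (0.01, 0.1, 1.0, 3.0):
        ml = mittag_leffler(-(lam * t)**alpha, alpha)
        kww = math.exp(-(lam * t)**alpha)   # stretched exponential, Eq. (4.44)
        nutting = (lam * t)**(-alpha)       # inverse power law, Eq. (4.45)
        print(f"t={t:5.2f}  E={ml:8.4f}  KWW={kww:8.4f}  Nutting={nutting:8.4f}")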
Returning now to the Laplace transform of the solution to the generalized
Langevin equation we can express the inverse Laplace transform of the first
term on the rhs of Eq. (4.40) in terms of the Mittag-Leffler function as
just found for the homogeneous case. The inverse Laplace transform of the
second term is the convolution of the random force and a stationary kernel.
The kernel is given by the series

E_{α,β}(z) = Σ_{k=0}^{∞} z^k/Γ(β + kα), α, β > 0 (4.46)

which is the generalized Mittag-Leffler function. The function defined by
Eq. (4.46) reduces to the usual Mittag-Leffler function when β = 1, so that
both the homogeneous and inhomogeneous terms in the solution to the
fractional Langevin equation can be expressed in terms of these series.
The explicit inverse of Eq. (4.40) yields the solution [376]

X(t) = X(0)E_α(−(λt)^α) + ∫_0^t (t − t′)^{α−1} E_{α,α}(−(λ(t − t′))^α) ξ(t′) dt′. (4.47)

In the case α = 1, the Mittag-Leffler function becomes the exponential, so
that the solution to the fractional Langevin equation reduces to that for
an Ornstein-Uhlenbeck process

X(t) = X(0)e^{−λt} + ∫_0^t e^{−λ(t−t′)} ξ(t′) dt′ (4.48)

as it should. The analysis of the autocorrelation function of Eq. (4.47) can
be quite daunting and so we do not pursue it further here, but refer the
reader to the literature [376]. A somewhat simpler problem is the fractional
Langevin equation without dissipation.
The solution to the generalized Langevin equation with the dissipation
set to zero can be used to evaluate the second moment of the process


⟨[X(t) − X(0)]²⟩ = (1/Γ(α)²) ∫_0^t ∫_0^t dt₁ dt₂ ⟨ξ(t₁)ξ(t₂)⟩/[(t − t₁)^{1−α}(t − t₂)^{1−α}]. (4.49)
Recalling that the fluctuations are delta correlated in time with strength
2D therefore yields

⟨[X(t) − X(0)]²⟩ = 2D t^{2α−1}/[(2α − 1)Γ(α)²] (4.50)

The time dependence of the second moment Eq. (4.50) agrees with that
obtained for anomalous diffusion if we make the identification 2H = 2α − 1,
so that the fractional index α = H + 1/2 is less than one for 1/2 ≥ H > 0. Conse-
quently, the process described by the dissipation-free fractional Langevin
equation is anti-persistent.
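
A direct simulation makes this concrete. In the sketch below (assuming numpy) the fractional integral is approximated by a Riemann sum driven by delta-correlated Gaussian fluctuations, and the fitted growth exponent of the ensemble-averaged second moment is compared with the 2α − 1 of Eq.(4.50):

    import numpy as np
    from math import gamma

    rng = np.random.default_rng(4)
    alpha, D, dt = 0.75, 0.5, 1.0
    N, M = 500, 500                      # time steps, ensemble members

    # Delta-correlated force: discrete variance 2D/dt per step
    xi = rng.normal(0.0, np.sqrt(2 * D / dt), size=(M, N))

    # Riemann sum for X(t_n) = (1/Gamma(alpha)) sum_m xi_m dt (t_n - t_m)^(alpha-1)
    kernel = dt * (np.arange(1, N + 1) * dt)**(alpha - 1) / gamma(alpha)
    X = np.array([np.convolve(row, kernel)[:N] for row in xi])

    t = np.arange(1, N + 1) * dt
    msd = (X**2).mean(axis=0)
    slope = np.polyfit(np.log(t[50:]), np.log(msd[50:]), 1)[0]
    print(f"fitted exponent {slope:.2f} vs 2*alpha - 1 = {2 * alpha - 1:.2f}")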
This anti-persistent behavior of the time series was observed by Peng
et al. [267] for the differences in time intervals between heart beats. They
interpreted this result, as did a number of subsequent investigators, in terms
of random walks with H < 1/2. However, we can see from Eq. (4.50) that
the fractional Langevin equation without dissipation is an equally good,
or one might say an equivalent, description of the underlying dynamics.
The scaling behavior alone cannot distinguish between these two models,
what is needed is the complete statistical distribution and not just the
time-dependence of the central moments.
The formal solution to this fractional Langevin equation is

X(t) − X(0) = (1/Γ(α)) ∫_0^t ξ(t′) dt′/(t − t′)^{1−α}

that can be expressed in terms of the integral kernel

X(t) − X(0) = ∫_0^t K_α(t − t′) ξ(t′) dt′. (4.51)

As mentioned earlier the form of this relation for multiplicative stochastic
processes and its association with multifractals has been noted in the phe-
nomenon of turbulent fluid flow [308], through a space, rather than time,
integration kernel.
The random force term on the right-hand side of Eq.(4.51) is selected to
be a zero-centered, Normal random variable and therefore to scale as [34]

ξ(λt) = λ^H ξ(t) (4.52)

where the Hurst exponent is in the range 0 < H ≤ 1. In a similar way the
kernel in Eq.(4.51) is easily shown to scale as

K_α(λt) = λ^α K_α(t), (4.53)

so that the solution to the fractional Langevin equation scales as

X(λt) − X(0) = λ^{H+α}[X(t) − X(0)]. (4.54)

In order to make the solution to the fractional Langevin equation a mul-
tifractal assume that the parameter α is a random variable. To construct
the traditional measures of multifractal stochastic processes calculate the
qth moment of the solution by averaging over both the random force and
the random parameter α to obtain

⟨|X(λt) − X(0)|^q⟩ = λ^{(q+1)H} ⟨λ^{qα}⟩ ⟨|X(t) − X(0)|^q⟩ = ⟨|X(t) − X(0)|^q⟩ λ^{ρ(q)}. (4.55)

The scaling relation Eq. (4.55) determines the qth-order structure function
exponent ρ(q). Note that when ρ(q) is linear in q the underlying process is
monofractal, whereas, when it is nonlinear in q the process is multifractal,
because the structure function can be related to the mass exponent [282]:

ρ(q) = 2 − τ(q). (4.56)


Consequently ρ(0) = H and τ(0) = 2 − H, as it should because of the
well known relation between the fractal dimension and the global Hurst
exponent D(0) = 2 − H.
To determine the structure function exponent we make an assumption
about the statistics of the parameter α. Latka and I [377] made the as-
sumption that the statistics were Lévy stable and consequently obtained
for the mass exponent
ρ(q) = (q + 1)H − b|q|^μ (4.57)

Therefore the solution to the fractional Langevin equation corresponds to
a monofractal process only in the case μ = 1 and q > 0, otherwise the
process is multifractal. The remaining discussion is restricted to positive
moments.
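
The empirical counterpart of this diagnostic is the structure function: estimate ⟨|X(t + τ) − X(t)|^q⟩ over a range of lags τ and test whether the fitted exponent is linear in q. A minimal sketch, assuming numpy and using the common convention in which a monofractal with Hurst exponent H yields an exponent qH (the ρ(q) of Eq.(4.55) carries a different normalization), is:

    import numpy as np

    def structure_exponents(x, qs, taus):
        # Fit <|X(t+tau) - X(t)|^q> ~ tau^zeta(q) over the supplied lags
        zeta = []
        for q in qs:
            Sq = [np.mean(np.abs(x[tau:] - x[:-tau])**q) for tau in taus]
            zeta.append(np.polyfit(np.log(taus), np.log(Sq), 1)[0])
        return np.array(zeta)

    rng = np.random.default_rng(5)
    x = np.cumsum(rng.normal(size=2**16))   # monofractal test signal, H = 1/2
    qs = np.arange(1, 6)
    taus = np.unique(np.logspace(0, 3, 20).astype(int))
    print(np.round(structure_exponents(x, qs, taus), 2))
    # approximately q/2: linear in q, hence monofractal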
Thus, when the memory kernel in the fractional Langevin equation is
random, the solution consists of the product of two random quantities giv-
ing rise to a multifractal process. This is Feller’s subordination process. I
apply this approach to the SRV time series data and observe, for the statis-
tics of the multiplicative exponent given by Lévy statistics, the singularity
spectrum as a function of the positive moments shown by the points in
Figure 4.2. The solid curve in this figure is obtained from the analytic form
of the singularity spectrum

f(q) = 2 − H − (μ − 1)bq^μ (4.58)

which is determined by substituting Eq.(4.57) into the equation for the
singularity spectrum, through the relationship between exponents. It is
clear from Figure 4.2 that the data are well fit by the solution to the
fractional Langevin equation with the parameter values μ = 1.45 and b =
0.1, obtained through a mean-square fit of Eq.(4.58) to the SRV time series
data.
The nonlinear form of the mass exponent in Figure 4.3a, the convex form
of the singularity spectrum f (h) in Figure 4.3b and the fit to f (q) in Figure
4.2, are all evidence that the interstride interval time series is multifractal.
This analysis is further supported by the fact that the maxima of the
singularity spectra coincide with the fractal dimensions determined using
the scaling properties of the time series using the allometric aggregation
technique.

FIGURE 4.2. The singularity spectrum for q > 0 obtained through the numerical fit to
the human gait data. The curve is the average over the ten data sets obtained in the
experiment. (From [377] with permission.)

Of course, different physiologic processes generate different fractal time
series because the long-time memory of the underlying dynamical processes
can be quite different. Physiological signals, such as cerebral blood flow
(CBF), are typically generated by complex self-regulatory systems that
handle inputs with a broad range of characteristics. Ivanov et al. [209] es-
tablished that healthy human heartbeat intervals, rather than being fractal,
exhibit multifractal properties and uncovered the loss of multifractality for
a life-threatening condition of congestive heart failure. West et al. [377] sim-
ilarly determined that CBF in healthy humans is also multifractal and this
multifractality is severely narrowed for people who suffer from migraines.
Migraine headaches have been the bane of humanity for centuries, af-
flicting such notables as Caesar, Pascal, Kant, Beethoven, Chopin and
Napoleon. However, its etiology and pathomechanism have to date not been
satisfactorily explained. It was demonstrated [377] that the characteristics
of CBF time series significantly differs between that of normal healthy in-
dividuals and migraineurs. Transcranial Doppler ultrasonography enables
high-resolution measurement of middle cerebral artery blood flow velocity.
Even though this technique does not allow us to directly determine CBF
values, it helps to clarify the nature and role of vascular abnormalities
associated with migraine. In particular we present the multifractal proper-
ties of human middle cerebral artery flow velocity, an example of which is
presented below in Figure 4.4.

FIGURE 4.3. (a) The mass exponent as a function of the q−moment obtained from
a numerical fit to the partition function using (3.4.10) for a typical walker. (b) The
singularity spectrum f (h) obtained from a numerical fit to the mass exponent and its
derivative using (3.4.9) for a typical walker. (From [377] with permission.)

The dynamical aspects of cerebral blood flow regulation were recognized
by Zhang et al. [408]. Rossitti and Stephenson [296] used the relative dis-
persion, the ratio of the standard deviation to mean, of the middle cerebral
artery flow velocity time series to reveal its fractal nature; a technique
closely related to the allometric aggregation method introduced earlier.
West et al. [377] extended this line of research by taking into account the
more general properties of fractal time series. They showed that the beat-
to-beat variability in the flow velocity has a long-time memory and is per-
sistent with an average scaling exponent of 0.85 ± 0.04, a value consistent
with that found earlier for HRV time series. They also observed that cere-
bral blood flow was multifractal in nature.
In Figure 4.5 we compare the multifractal spectrum for middle cerebral
artery blood flow velocity time series for a healthy group of five subjects and
a group of eight migraineurs [377]. A significant change in the multifractal
properties of the blood flow time series is apparent. Namely, the interval for
the multifractal distribution on the local scaling exponent is greatly con-
stricted. This is reflected in the small value of the width of the multifractal
spectrum for the migraineurs, 0.013, which is almost three times smaller
than the width for the control group, 0.038, for both migraineurs with and
without aura. The distributions are centered at 0.81, the same as that of
the control group, so the average scaling behavior would appear to be the
same.

FIGURE 4.4. Middle cerebral artery flow velocity time series for a typical healthy sub-
ject.

However, the contraction of the spectrum suggests that the underlying
process has lost its flexibility. The biological advantage of multifractal pro-
cesses is that they are highly adaptive, so that in this case the brain of
a healthy individual adapts to the multifractality of the interbeat inter-
val time series. Here again disease, in this case migraine, may be associ-
ated with the loss of complexity and consequently the loss of adaptability,
thereby suppressing the normal multifractality of cerebral blood flow time
series. Thus, the reduction in the width of the multifractal spectrum is
the result of excessive dampening of the cerebral blood flow fluctuations
and is the manifestation of the significant loss of adaptability and overall
hyperexcitability of the underlying regulation system. West et al. [377] em-
phasize that hyperexcitability of the CBF control system seems to be phys-
iologically consistent with the reduced activation level of cortical neurons
observed in some transcranial magnetic simulation and evoked potential
studies.
Regulation of CBF is a complex dynamical process; the flow remains relatively
constant over a wide range of perfusion pressures via a variety of feedback
control mechanisms, in which metabolic, myogenic, and neurally mediated
changes in cerebrovascular impedance respond to changes in perfusion pres-
sure. The contribution to the overall CBF regulation by different areas of
the brain is modeled by the statistics of the fractional derivative parameter,
which determines the multifractal nature of the time series. The source of
the multifractality is over and above that produced by the cardiovascular
system.


FIGURE 4.5. The average multifractal spectrum for middle cerebral blood flow time
series is depicted by f (h). (a) The spectrum is the average of ten time series measure-
ments from five healthy subjects (filled circles). The solid curve is the best least-squares
fit of the parameters to the predicted spectrum. (b) The spectrum is the average of 14
time series measurements of eight migraineurs (filled circles). The solid curve is the best
least-squares fit to the predicted spectrum. (From [377] with permission.)


The multifractal nature of CBF time series is here modeled using a frac-
tional Langevin model. The scaling properties of the random force are again
implemented in the memory kernel to obtain Eq. (4.54) as the scaling of the
solution to the fractional Langevin equation. Here the q−moment of the
solution is calculated and the statistics are assumed to be Normal rather
than the more general Lévy. Consequently the quadratic function for the
singularity spectrum becomes
f(h) = f(H) − (h − H)²/(2σ) (4.59)

and is obtained from Eq. (4.58) by setting μ = 2 and b = 2σ. The mode of
the spectrum is located at f (H) = 2 − H with h = H.
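
Given a set of measured (h, f(h)) points, the parameters of this quadratic spectrum follow from an ordinary least-squares parabola fit. The sketch below (assuming numpy; the sample points are hypothetical stand-ins, not the data of Figure 4.5) illustrates the reduction:

    import numpy as np

    # Hypothetical (h, f(h)) points standing in for a measured spectrum
    h = np.array([0.65, 0.72, 0.78, 0.81, 0.84, 0.90, 0.97])
    f = np.array([0.70, 0.95, 1.12, 1.19, 1.13, 0.96, 0.71])

    # Least-squares fit of f(h) = f(H) - (h - H)^2/(2 sigma), Eq. (4.59)
    c2, c1, c0 = np.polyfit(h, f, 2)
    H = -c1 / (2 * c2)                  # mode location, h = H
    sigma = -1.0 / (2 * c2)             # width parameter
    fH = c0 - c1**2 / (4 * c2)          # spectrum maximum, f(H) = 2 - H
    print(f"H = {H:.2f}, sigma = {sigma:.3f}, f(H) = {fH:.2f}")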
It seems that the changes in the cerebral auto-regulation associated with
migraine can strongly modify the multifractality of middle cerebral artery
blood flow. The constriction of the multifractal to monofractal behavior of
the blood flow depends on the statistics of the fractional derivative index.
As the distribution of this parameter narrows down to a delta function, the
nonlocal influence of the mechanoreceptor constriction disappears. On the
other hand, the cerebral auto-regulation does not modify the monofractal
properties characterized by the single global Hurst exponent, presumably
that produced by the cardiovascular system.

4.3 Physiologic Time Series


Herein the allometry relation considered in Section 2.3 is extended to in-
clude measures of time series. In this extended view Y is the variance and
X is the average value of the quantity being measured. The fact that these
two central measures of the time series satisfy an AR implies that the
underlying time series is a fractal random process. This is a consequence
of the fact that the relative dispersion scales as well. The correlation of
time series data is here determined by systematically grouping the data
set {X_j}, j = 1, ..., N, into higher and higher aggregates of the original
data and calculating the mean and variance at each level of aggregation.
Consider the jth data element of an aggregation of n adjacent data points

X_j^{(n)} = Σ_{k=0}^{n−1} X_{nj−k}. (4.60)

In terms of these aggregated data the average is defined by

X̄^{(n)} ≡ (1/[N/n]) Σ_{j=1}^{[N/n]} X_j^{(n)} = n X̄^{(1)} (4.61)


so that the average of the n-stage aggregated data can be expressed in
terms of the original average. For example, when n = 3 each value of the
new data element, defined by Eq. (4.60), consists of the sum of three non-
overlapping original data elements, and the number of new data elements
is given by [N/3], where the brackets denote the closest integer value. The
variance, for a monofractal random time series, is similarly given by [34]
Var X^{(n)} = n^{2H} X̄^{(1)}, (4.62)
where the superscript (1) on the average variable indicates that it is deter-
mined using all the original data without aggregation and the superscript
(n) on the variable indicates that it was determined using the aggregation
of n−adjacent data elements. Here again H is the Hurst exponent. Thus,
solving Eq. (4.61) for n and substituting this value into Eq. (4.62) yields
the AR
Var X^{(n)} = a (X̄^{(n)})^b (4.63)
with the allometry coefficient given by the theoretical value
a = (X̄^{(1)})^{1−b} (4.64)
and the allometry exponent by

b = 2H. (4.65)
It is well established [34] that the exponent in a scaling equation such as
Eq.(4.63) is related to the fractal dimension D of the underlying time series
by D = 2 − H, so that
D = 2 − b/2. (4.66)
A simple monofractal time series therefore satisfies the power-law relation
of the AR form with theoretically determined parameters.
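
The procedure itself reduces to a few lines of code. The following sketch (assuming numpy; the Poisson surrogate merely stands in for a measured interval series) aggregates n adjacent points, regresses the logarithm of the variance on the logarithm of the mean, and reads off b, H and D from Eqs.(4.63)-(4.66):

    import numpy as np

    def allometric_aggregation(x, n_max=20):
        # Sum n adjacent data points at each aggregation level, Eq. (4.60),
        # and record the mean and variance of each aggregated series.
        means, variances = [], []
        for n in range(1, n_max + 1):
            m = len(x) // n
            agg = x[: m * n].reshape(m, n).sum(axis=1)
            means.append(agg.mean())
            variances.append(agg.var())
        return np.log(means), np.log(variances)

    rng = np.random.default_rng(6)
    x = rng.poisson(lam=10, size=10000).astype(float)  # surrogate interval data

    log_mean, log_var = allometric_aggregation(x)
    b = np.polyfit(log_mean, log_var, 1)[0]      # allometry exponent, Eq. (4.63)
    print(f"b = {b:.2f}, H = {b / 2:.2f}, D = {2 - b / 2:.2f}")
    # Uncorrelated data give b near 1 (H = 1/2, D = 1.5); HRV data give larger b.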

4.3.1 Heart Rate Variability (HRV)


Does self-similar scaling contribute to the regulation of a complex process
such as heart rate variability? I can obtain a crude measure of heart-rate
variations by feeling my pulse. With a casual observation, the pulse rate
may feel quite even, but on closer inspection it clearly is not a strictly reg-
ular event. For example, an increase in pulse rate is noted with inspiration
(and a decrease with expiration). These oscillations are called phasic or res-
piratory arrhythmia. Other more subtle variations in heart rate have been
detailed by means of spectral decomposition through which oscillations at
other frequencies have been correlated with physiologic temperature and
blood pressure control mechanisms.
However, such periodic changes account for only a small part of the over-
all fluctuations in heart rate. To measure this variability more comprehen-
sively the heart beat should be measured over a long period of observation
when the subject is going about his or her daily activities, unencumbered by
the restrictions of a controlled environment. This kind of analysis was per-
formed by [186], who performed power spectral analyses of heart rate time
series data obtained from ambulatory healthy subjects wearing a portable
ECG device. Remarkably, the spectra for the heart rates in the healthy
subjects were very similar and showed an inverse power-law pattern with a
superimposed peak corresponding to the respiratory frequency. Thus, heart
rate variability shows an inverse power law suggesting the type of scaling
behavior noted in a variety of other physiological contexts. Here we analyze
heartbeat data using the technique just discussed.
The allometry aggregation approach has been applied to a number of
data sets implementing the method of linear regression analysis on the
logarithm of the variance and the logarithm of the average value. Conse-
quently all the processed data from self-similar dynamical systems would
appear as straight lines on log-log graph paper. For example, the variability
in the heart’s interbeat intervals is called heart rate variability (HRV) and
according to the task force formed by the Board of the European Society
of Cardiology and the North American Society of Pacing and Electrophys-
iology appears to satisfy this criterion [161]. In Figure 4.6 the logarithm
of the standard deviation is plotted versus the logarithm of the average
value for the HRV time series typical of a healthy young adult male. At
the left-most position the graphed point indicates the standard deviation
and average using all the data. Moving from left to right the next graphed
point is constructed from the time series with two nearest-neighbor data
elements added together and the procedure is repeated moving right until
the right-most graphed point has twenty nearest-neighbor data elements
added together. The solid line is the best linear representation of the scal-
ing and intercepts most of the points with a positive slope of 0.76. We can
see that the slope of the HRV data is midway between the dashed curves
depicting an uncorrelated random process (slope = 1/2) and one that is
deterministically regular (slope = 1).
Phenomena obeying scaling relations, such as shown for the HRV time se-
ries data in Figure 4.6 are said to be self-similar. The fact that the standard
deviation and average values change as a function of aggregation number
implies that the magnitudes of these quantities depend on the size of the
ruler used to measure the time interval. Recall that this is one of the defin-
ing characteristics of a fractal curve; the length of a fractal curve becomes


FIGURE 4.6. The logarithm of the standard deviation is plotted versus the logarithm
of the average value for the heartbeat interval time series for a young adult male, using
sequential values of the aggregation number. The solid line is the best fit to the aggre-
gated data and yields a fractal dimension D = 1.24 midway between the curve for a
regular process and that for an uncorrelated random process as indicated by the dashed
curves. (From [381] with permission.)

infinite as the size of the ruler used to measure it goes to zero. The depen-
dence of the average and standard deviation on the ruler size, for a given
time series, implies that the statistical process is fractal and consequently
defines a fractal dimension for the HRV time series as given by Eq.(4.66).
These results are consistent with those first obtained by Peng et al. [267]
for a group of ten healthy subjects having a mean age of 44 years, using ten
thousand data elements for each subject. They concluded that the scaling
behavior observed in HRV time series is adaptive for two reasons: firstly
that the long-time correlations constitute an organizing principle for highly
complex, nonlinear processes that generate fluctuations over a wide range
of time scales; secondly, the lack of a characteristic scale helps prevent ex-
cessive mode-locking that would restrict the functional responsiveness of
the organism.
The sinus node (the heart’s natural pacemaker) receives signals from the
autonomic (involuntary) portion of the nervous system which has two major
branches: the parasympathetic, whose stimulation decreases the firing rate
of the sinus node, and the sympathetic, whose stimulation increases the
firing rate of the sinus node pacemaker cells. These two branches are in
a continual tug-of-war on the sinus node, one decreasing and the other
increasing the heart rate and it is this tug-of-war that produces the HRV
time series in healthy subjects. We emphasize that the conclusions drawn
here are not from this single figure or set of data presented; these are
only representative of a much larger body of work. The conclusions are
based on a large number of similar observations [381] made using a variety
of data processing techniques, all of which yield results consistent with
the scaling of the HRV time series indicated in Figure 4.6. The heartbeat
intervals do not form an uncorrelated random sequence; instead the analysis
suggests that the HRV time series is a statistical fractal, indicating that
heartbeats have a long-time memory. The implications of this long-time
memory concerning the underlying physiological control system is taken
up later.
The global Hurst exponent determines the properties of monofractals,
but as previously stated there exists a more general class of heterogeneous
signals known as multifractals, which are made up of many interwoven sub-
sets with different local Hurst exponents h. The local and global exponents
are only equal for infinitely long time series, in general the Hurst expo-
nent h and the fractal dimension D = 2 − h are independent quantities.
The statistical properties of the interwoven subsets may be characterized
by the distribution of fractal dimensions f (h). In general, time series have
a local fractal exponent h that varies over its course. The multifractal or
singularity spectrum describes how the local fractal exponents contribute
to such time series. A number of investigators have used the singularity
spectrum to demonstrate that HRV time series are multifractal [171, 381].
The multifractal character of HRV time series further emphasizes the
non-homeostatic physiologic variability of heartbeats. Longer time series
than the one presented here clearly show a patchiness associated with the
fluctuations; a patchiness that is usually ignored in favor of average values
in traditional data analysis. This clustering of the fluctuations in time can
be symptomatic of the scaling with aggregation observed in Figure 4.6 or if
particularly severe it can be indicative of multifractality. However, due to
limitations of space, we do not further pursue the multifractal properties of
time series here, but refer the interested reader to the literature [374, 381].

4.3.2 Breath rate variability (BRV)


The second physiologic exemplar of variability is the dynamics of breath-
ing; the apparently regular rising and falling of your chest as you sit quietly
reading this book. To understand the dynamics of breathing consider the
evolutionary design of the lung and how closely that design is tied to the
way in which the lungs function. It is not by accident that the cascading
branches of the bronchial tree become smaller and smaller in the statisti-


cally fractal manner discussed. Nor is it good fortune alone that ties the
dynamics of our every breath to this biological structure. I argued that,
like the heart, the lung is made up of fractal processes, some dynamic and
others now static. However, both the static and dynamic processes lack a
characteristic scale and the simple argument given in Section 2.2 establishes
that such a lack of scale has evolutionary advantage.
Respiration is, in part, a function of the lungs, whereby the body takes in
oxygen and expels carbon dioxide. The smooth muscles in the bronchial tree
are innervated by sympathetic and parasympathetic fibers, much like the
heart, and produce contractions in response to stimuli such as increased
carbon dioxide, decreased oxygen and deflation of the lungs. Fresh air is
transported through some twenty generations of bifurcating airways of the
lung, during inspiration, down to the alveoli in the last four generations of
the bronchial tree. At this tiny scale there is a rich capillary network that
interfaces with the bronchial tree for the purpose of exchanging gases with
the blood.
Szeto et al. [332] made an early application of fractal analysis to fe-
tal lamb breathing. The changing patterns of breathing in seventeen fetal
lambs and the clusters of faster breathing rates, interspersed with periods
of relative quiescence, suggested to them that the breathing process was
self-similar. The physiological property of self-similarity implies that the
structure of the mathematical function describing the time series of inter-
breath intervals is repeated on progressively shorter time scales. Clusters
of faster rates were seen within the fetal breathing data, what Dawes et
al. [74] called breathing episodes. When the time series were examined on
even finer time scales, clusters could be found within these clusters, and the
signature of this scaling behavior emerged as an inverse power-law distribu-
tion of time intervals. Consequently, the fractal scaling was found to reside
in the statistical properties of the fluctuations and not in the geometrical
properties of the dynamic variable.
In parallel with heart rate, the variability of breathing rate using breath-
to-breath time intervals is called breathing rate variability (BRV). An ex-
ample of BRV time series data on which a scaling calculation is based is
shown in Figure 4.7. Because the heart rate is higher than the respiration
rate, in the same measurement epoch there is a factor of five more data for
HRV than there is for BRV time series. The BRV data were collected under
the supervision of Dr. Richard Moon, the Director of the Hyperbaric Lab-
oratory at Duke Medical Center. West et al. [379] applied the aggregation
method to the BRV time series and obtained the typical results depicted in
Figure 4.7. The logarithms of the aggregated standard deviation and aggre-
gated average were determined in the manner described earlier. Note that
we stop the aggregation at ten data elements because of the small number


of data in the breathing sequence. The solid curve is the best least-square
fit to the aggregated BRV data and has a slope of 0.86, the scaling in-
dex. The scaling index and fractal dimension obtained from this figure are
consistent with the results obtained by other researchers.

FIGURE 4.7. The logarithm of the standard deviation is plotted versus the logarithm
of the average value for the breathing interval time series for a healthy senior citizen,
using sequential values of the aggregation number. The solid line is the best fit to the
aggregated data and yields a fractal dimension D = 1.14, between the curve for a regular
process and that for an uncorrelated random process as indicated by the dashed curves.

Such observations regarding the self-similar nature of breathing time series have been used in medical settings to produce a revolutionary way
of utilizing mechanical ventilators. Historically ventilators have been used
to facilitate breathing after an operation and have a built-in constant fre-
quency of ventilation. Mutch et al. [244] challenged the single-frequency
ventilator design by using an inverse power-law spectrum of respiratory
rate to drive a fractally variable ventilator. They demonstrated that this
way of supporting breathing produces an increase in arterial oxygenation
over that produced by conventional control-mode ventilators. This com-
parison indicates that the fractal variability in breathing is not the result
of happenstance, but is an important property of respiration. A reduction


in variability of breathing reduces the overall efficiency of the respiratory system.
Altemeier et al. [6] measured the fractal characteristics of ventilation
and determined that not only are local ventilation and perfusion highly
correlated, but they scale as well. Finally, Peng et al. [268] analyzed the
BRV time series for 40 healthy adults and found that under supine, resting,
and spontaneous breathing conditions, the BRV time series scale. This
result implies that human BRV time series, like HRV time series, have
long-time correlations across multiple time scales and therefore breathing
is a fractal statistical process.

4.3.3 Stride rate variability (SRV)


Walking is one of those things done without much thought. However the
regular gait cycle is no more regular than the normal sinus rhythm or
breathing rate just discussed. The subtle variability in the stride char-
acteristics of normal locomotion was first discovered by the nineteenth
century experimenter Vierordt [349], but his findings were not further de-
veloped for over 120 years. The variability he observed was so small that
the biomechanical community historically considered these variations to
be uncorrelated random fluctuations. In practice this means that the fluc-
tuations in gait were thought to contain no information about the un-
derlying motor control process. The follow-up experiments to quantify the
degree of irregularity in walking were finally done by Hausdorff et al. [146]
and involved observations of healthy individuals, as well as observations of
elderly subjects and of subjects having certain neurophysiologic diseases that
affect gait. Additional experiments and analyses were subsequently done by
West and Griffin [375, 380], which both verified and extended the earlier
results.
Human gait is a complex process, since the locomotor system synthesizes
inputs from the motor cortex, the basal ganglia and the cerebellum, as
well as feedback from vestibular, visual and proprioceptive sources. The
remarkable feature of this complex phenomenon is that although the stride
pattern is stable in healthy individuals, the duration of the gait cycle is
not fixed. Like normal sinus rhythm in the beating of the heart, where the
interval between successive beats changes, the time interval for a gait cycle
fluctuates in an erratic way from step to step. The gait studies carried out
to date concur that the fluctuations in the stride-interval time series exhibit
long-time inverse power-law correlations indicating that the phenomenon
of walking is a self-similar fractal activity.
One definition of the gait cycle or stride interval is the time between
successive heel strikes of the same foot [146]. An equivalent definition of


the stride interval uses successive maximum extensions of the knee of either
leg [375]. The stride interval time series for a typical subject has variation
on the order of 3-4%, indicating that the stride pattern is very stable. The
variation in the stride interval is called stride rate variability (SRV). It is the
statistical stability of SRV that historically led investigators to decide that
not much could go wrong by assuming the stride interval is constant and the
fluctuations are merely biological noise. The experimentally observed fluctuations
around the mean gait interval, although small, are non-negligible because
they indicate an underlying complex structure, and it was shown that these
fluctuations cannot be treated as uncorrelated random noise.

FIGURE 4.8. The logarithm of the standard deviation is plotted versus the logarithm
of the average value for the SRV time series for a young adult male, using sequential
values of the aggregation number. All the data elements are used for the graphed point
at the lower left and 20 data elements are aggregated in the last graphed point on the
upper right. The solid line is the best fit to the aggregated SRV data and yields a fractal
dimension D = 1.30 midway between the extremes for a regular process and that for an
uncorrelated random process as indicated by the dashed curves.

Using SRV time series of 15 minutes duration I apply the allometric aggregation approach to determine the scaling index from the time series
as shown in Figure 4.8. The slope of the data curve is 0.70, midway between
the two extremes of regularity and uncorrelated randomness. So, as in the
cases of HRV and BRV time series, we again find the erratic physiological


time series to represent a random fractal process. In the SRV context, the
implied clustering, indicated by a slope greater than that of the random dashed
line, means that the intervals between strides change in clusters and not in
a uniform manner over time. This result suggests that the walker does not
smoothly adjust his/her stride from step to step. Rather, there are a number
of steps over which adjustments are made followed by a number of steps over
which the changes in stride are completely random. The number of steps
in the adjustment process and the number of steps between adjustment
periods are not independent. The results of a substantial number of stride
interval experiments support the universality of this interpretation.

4.4 Summary and Viewpoint


This chapter has been concerned with the now firmly established obser-
vation that uncorrelated Normal statistics do not accurately capture the
complex erratic fluctuations invariably seen in time series from physiologic
networks. Limitations of space have made it necessary to restrict the discus-
sion relating complexity to unpredictability, but it is apparent that fractal
scaling is observed in the correlations of physiologic fluctuations and in
their statistical distributions as well. The existence of such scaling entails
a new kind of modeling; one that does not rely on continuous differential
equations. Consequently the fractional calculus was introduced into both
deterministic and stochastic modeling of physiologic phenomena.
Understanding the underlying stochastic dynamics of fractal phenomena
is key to understanding how to intervene in a complex dynamic network
in order to achieve a desired result. Without such insight intervention can
produce unintended consequences due to the loss of control. Such control is
one of the goals of medicine, in particular, understanding and controlling
physiological networks in order to ensure their proper operation. We dis-
tinguish between homeostatic control and allometric control mechanisms.
Homeostatic control is familiar and has as its basis a negative feedback
character, which is both local and instantaneous. Allometric control, on
the other hand, is a relatively new concept that can take into account
long-time memory, correlations that are inverse power law in time, as well
as long-range interactions in complex phenomena as manifest by inverse
power-law distributions in the system variable. Allometric control intro-
duces the fractal character into otherwise featureless random time series to
enhance the robustness of physiological networks by introducing nonlinear
dynamics and the fractional calculus into the control of the networks.
A complex phenomenon characterized by a fractal time series can be de-
scribed by a fractal function. Such a function is known to have divergent


integer-valued derivatives. Consequently the traditional control theory, involving as it does integer-valued differential equations, cannot be used to
determine how feedback is accomplished. However the fractional derivative
of order α of a fractal function of fractal dimension D yields a new fractal function
with fractal dimension D + α. Therefore it seems reasonable that one strat-
egy for modeling the dynamics and control of such complex phenomena is
through the application of the fractional calculus. The fractional calculus
has been used to model the interdependence, organization and concinnity
of complex phenomena ranging from the vestibulo-oculomotor system, to
the electrical impedance of biological tissue to the biomechanical behavior
of physiologic organs; see, for example, Magin [215] for an excellent review
of such applications and West et al. [378] for an interpretation of the for-
malism.
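Since the fractional operators invoked here may be unfamiliar, a small numerical illustration can help. The sketch below implements the standard Grünwald-Letnikov approximation to the fractional derivative; the grid, step size and test function are arbitrary choices for the example, and the check uses the classical result that the half-derivative of f(t) = t is 2(t/π)^{1/2}.

    import numpy as np

    def gl_fractional_derivative(f, alpha, h):
        """Grunwald-Letnikov fractional derivative of order alpha for
        samples f[0..N-1] on a uniform grid of spacing h."""
        n = len(f)
        # GL weights w_k = (-1)^k binom(alpha, k) via the standard recurrence
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - alpha) / k
        out = np.empty(n)
        for i in range(n):
            out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
        return out

    h = 0.001
    t = np.arange(0.0, 2.0, h)
    approx = gl_fractional_derivative(t, 0.5, h)
    exact = 2.0 * np.sqrt(t / np.pi)
    print(np.max(np.abs(approx[100:] - exact[100:])))   # small error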
We can relate the allometric aggregation approach to this recently devel-
oped branch of control theory involving the fractional calculus. The gener-
alization of control theory to include fractional operators enables the de-
signer to take into account memory and hereditary properties; properties
that are traditionally neglected in integer-order control theory, such as in
the traditional picture of homeostasis. Podlubny [277] has recently shown
that if ‘reality’ has the dynamics of a fractional-differential equation, then
attempting to control reality with an integer-order feedback, leads to ex-
tremely slow convergence, if not divergence, of the system output. On the
other hand, a fractional-order feedback, with the indices appropriately cho-
sen, leads to rapid convergence of the output to the desired signal. Thus, one
might anticipate that dynamic physiologic systems with scaling proper-
ties, since they can be described by fractional dynamics [378], would have
fractional-differential control systems. We have referred to such control in
the past as allometric control [381].
It is not merely a new kind of control that is suggested by the scaling of
physiologic time series. Scaling also suggests that the historical notion of
disease, which has the loss of regularity at its core, is inadequate for the
treatment of dynamical diseases. Instead of loss of regularity, we identify
the loss of variability with disease [130], so that a disease not only changes
an average measure, such as heart rate, which it does in late stages, but is
manifest in changes in HRV at very early stages. Loss of variability implies
a loss of physiologic control and this loss of control is reflected in the change
of fractal dimension, that is, in the scaling index of the corresponding time
series [130, 245].
The well-being of the body’s network of networks is measured by the
fractal scaling properties of the various dynamic networks and such scaling
determines how well the overall harmony is maintained. Once the perspec-
tive that disease is the loss of complexity has been adopted, the strategies


presently used in combating disease must be critically examined. Life support equipment is one such strategy, but the tradition of such life support
is to supply blood at the average rate of the beating heart, to ventilate the
lungs at their average rate and so on. So how does the new perspective
regarding disease influence the traditional approach to healing the body?
Alan Mutch applied the lessons of fractal physiology to point out that
blood flow and ventilation are delivered in a fractal manner in both space
and time in a healthy body. However, he argues, during critical illness,
conventional life support devices deliver respiratory gases by mechanical
ventilation or blood by cardiopulmonary bypass pump in a monotonously
periodic fashion. This periodic driving overrides the natural aperiodic oper-
ation of the body. Mutch speculates that these devices result in the loss of
normal fractal transmission and that consequently life support winds up doing
more damage the longer it is required and becomes more problematic
the sicker the patient [245]. In this perspective the loss of complexity is the
loss of the body as a cohesive whole; the body is reduced to a disconnected
set of organ systems.
One of the traditional views of disease is what Tim Buchman calls the
“fix-the-number” imperative [50]. He argues that if the bicarbonate level
is low then give bicarbonate; if the urine output is low then administer a
diuretic; if the bleeding patient has a sinking blood pressure then make
the blood pressure normal. He goes on to say, that such interventions are
commonly ineffective and even harmful. For example, sepsis, which is a
common predecessor of multiple organ dysfunction syndrome (MODS),
is often accompanied by hypocalcaemia; yet under controlled experimental
conditions, administering calcium to normalize the laboratory value in-
creases mortality. As a consequence, as I observed elsewhere [381], one’s first
choice of options, based on an assumed simple linear causal relationship
between input and output as in homeostasis, is probably wrong.
The empirical evidence overwhelmingly supports the interpretation of
the time series analysis that fractal stochastic processes describe complex
physiologic phenomena. Furthermore, the fractal nature of these time series
is not constant in time but changes with the vagaries of the interaction
of the system with its environment and therefore these phenomena are
often weakly multifractal. The scaling index or fractal dimension marks
a physiologic network’s response and can be used as an indicator of the
network’s state of health. Since the fractal dimension is also a measure
of the level of complexity, the change in fractal dimension with disease
suggests a new definition of disease as a loss of complexity, rather than the
loss of regularity [130, 381].



Chapter 5
Applications of Chaotic Attractors

In Chapters Two through Four I attempted to develop some rather
difficult mathematical concepts and techniques in a way that would make
their importance self-evident in a biomedical context. In the present chapter
I emphasize a single method, the attractor reconstruction technique (ART),
and briefly review how it has been applied to problems of biomedical inter-
est and argue for its continued refinement and application in these areas.
The list of examples contained in this chapter is representative, not exhaus-
tive. ART is important because it provides a way to painlessly extract great
amounts of modeling information from data. The attractor that is recon-
structed from the data is shown to clearly distinguish between uncorrelated
noise and chaos, and since the ways to control networks contaminated by
such noise are quite different from those manifesting fluctuations due to
low-order nonlinear interactions, being able to distinguish between the two
can be crucial. When such a chaotic attractor can be reconstructed from
a time series it explicitly shows the number of variables required to faith-
fully model the phenomenon of interest. Epidemiology is used to begin the
review of these activities.


5.1 The Dynamics of Epidemics


As pointed out by Schaffer and Kott [306] discussions over the relative
importance of deterministic and stochastic processes in regulating the in-
cidence of disease have divided students of population dynamics. These
authors show that much of the contention is more apparent than real, and
is a consequence of how certain data are processed. Spectral analysis has
been a traditional tool for discriminating between these two contributors to
a given time series, for example, those in Figure 5.1. However, even though
some systems are completely deterministic, their spectra may be very broad
and consequently indistinguishable from random noise [64]. In other cases
the spectrum can have a few sharp peaks superimposed on a broadband
background. These peaks can be interpreted as phase coherence in the
system dynamics [91]. Thus it is not possible by spectral means alone to
distinguish deterministic (chaotic) dynamics from periodic motion contam-
inated by uncorrelated noise. Moreover, calculating a correlation dimension
is also not sufficient to distinguish a chaotic time series from one that is
correlated noise. The correlation dimension indicates that the time series is
fractal, but in itself it cannot determine the cause of the lack of a character-
istic time scale. It could be correlations in a random process or it could be a
chaotic attractor underlying the system dynamics. To distinguish between
these options Schaffer and Kott applied ART to epidemiological data sets.
There are a number of models that partition a population into a set of
categories and describe the development of an epidemic by means of differ-
ential equations involving the interactions of the members of one category
with those in another. The state variables are the number of individuals
in each of the assigned categories: susceptibles (S); exposed (E) (infected);
infectious (I) and recovered (R) (immune) [9, 76, 204]. These four state
variables give rise to the SEIR model for epidemics [307]:
dS(t)/dt = m[1 − S(t)] − bS(t)I(t),                    (5.1)

dE(t)/dt = bS(t)I(t) − (m + a)E(t),                    (5.2)

dI(t)/dt = aE(t) − (m + g)I(t).                        (5.3)
The fourth variable has been eliminated from this description by assum-
ing that the total population is kept constant. Here, m^{-1} is the average
life expectancy, a^{-1} is the average latency period, and g^{-1} is the average
infectious period. The contact rate b is the average number of susceptibles
contacted yearly per infective. In 2004 Korobeinikov and Maini [188] proved
the global stability of the SEIR model under quite general conditions.
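A direct numerical integration of Eqs. (5.1)-(5.3) is straightforward and makes the behavior of the model easy to explore. The sketch below uses scipy; the rate constants and initial state are placeholders chosen only to show the mechanics, not fitted epidemiological values.

    import numpy as np
    from scipy.integrate import solve_ivp

    def seir(t, y, m, a, g, b):
        """Right-hand side of Eqs. (5.1)-(5.3); S, E and I are population
        fractions and the recovered class R = 1 - S - E - I is eliminated."""
        S, E, I = y
        dS = m * (1.0 - S) - b * S * I
        dE = b * S * I - (m + a) * E
        dI = a * E - (m + g) * I
        return [dS, dE, dI]

    # illustrative rates only (per year): 1/m is the life expectancy,
    # 1/a the latency period, 1/g the infectious period, b the contact rate
    m, a, g, b = 0.02, 35.8, 100.0, 1800.0
    sol = solve_ivp(seir, (0.0, 100.0), [0.06, 1e-4, 1e-4],
                    args=(m, a, g, b), method="LSODA", rtol=1e-8, atol=1e-10)
    print(sol.y[:, -1])    # long-time state of (S, E, I)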



FIGURE 5.1. The monthly reported cases of measles, chicken pox and mumps in New
York and measles in Baltimore in the periods 1928 to 1972 inclusive. [204]

Most epidemiologists work with this model, or a variant of it, whether
they believe in chaos or not. Traditionally, they have examined simple reg-
ular solutions to these models. It is not difficult to choose parameter values
to produce a two year low/high cycle that resembles the New York City
history of measles from 1945 to 1963 as discussed by Pool. The regular-
ity in these solutions does not faithfully represent the variability in the data,
however, so that epidemiologists often introduce noise to randomize things.
Schaffer and colleagues have demonstrated that the introduction of noise
is not necessary to produce irregular infection patterns. For particular val-
ues of the parameters in the SEIR model they have produced computer
simulations of measles epidemics closely resembling those seen in the New
York City data. Before assigning values to the parameters m, a, g and b and
solving the set Eq. (5.1)–(5.3) I turn to the analysis of the data.
The number of cases of measles shown in Figure 5.1 are taken from Lon-
don and Yorke [204] and are those reported monthly by physicians for the
cities of New York and Baltimore for the years 1928 to 1963. Not all cases
were reported because reporting was voluntary, so that Yorke and London
estimate that the reported cases are a factor of five to seven below



FIGURE 5.2. Epidemics of measles in New York and Baltimore. Left: The numbers of
cases reported monthly by physicians from 1928 to 1963. Right: Power spectra (from
[306] with permission).

the actual number. In the spectra given in Figure 5.2 a number of peaks are
evident superimposed on a noisy background. The most prominent peak co-
incides with a yearly cycle with most cases occurring during the winter. The
secondary peaks at 2 and 3 years are obtained by an appropriate smooth-
ing of the data. These data were also plotted using ART as phase plots of
N(t), N(t + τ), N(t + 2τ), where N is the number of cases per month and
τ is a two to three month shift in the time axis. Figures 5.3 and 5.4 show
the phase portraits obtained using the smoothed data. Schaffer and Kott
point out that for both New York and Baltimore most of the trajectory
traced out by the data lies on the surface of a cone with its vertex near
the origin. They conclude by inspection of these figures that the attrac-
tor is an essentially two-dimensional object embedded in three dimensions.
This estimate is made more quantitative using the method of Grassberger
and Procaccia [136, 137] to calculate the correlation dimension. Figure 5.5
depicts how the dimension asymptotes to a value of approximately 2.5 as the
embedding dimension is increased to five.
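The Grassberger-Procaccia estimate can be sketched compactly: delay-embed the series, count the fraction of pairs of embedded points closer than r, and read the dimension from the slope of log C(r) versus log r. The following minimal illustration substitutes a chaotic logistic-map series for the smoothed measles data of Figure 5.5, and for simplicity fits over all radii rather than isolating the scaling region by eye, as is done in practice.

    import numpy as np

    def delay_embed(x, m, tau):
        """ART delay vectors (x_t, x_{t+tau}, ..., x_{t+(m-1)tau})."""
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    def correlation_integral(X, r):
        """C(r): fraction of distinct pairs of embedded points within r."""
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        n = len(X)
        return (np.count_nonzero(d < r) - n) / (n * (n - 1))

    x = np.empty(500)
    x[0] = 0.4
    for i in range(499):                      # stand-in chaotic series
        x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

    X = delay_embed(x, m=3, tau=3)            # a few-step lag, as in the text
    radii = np.logspace(-2.0, -0.3, 15)
    C = np.array([correlation_integral(X, r) for r in radii])
    mask = C > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    print("correlation dimension estimate:", round(slope, 2))

Repeating the fit for increasing embedding dimension m and watching the slope saturate, as in the right panel of Figure 5.5, is what distinguishes a low-dimensional attractor from noise, whose apparent dimension keeps growing with m.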
Return now to the SEIR model of epidemics. For measles in the large
cities of rich countries, m^{-1} = 10^2, a^{-1} = 10^{-1}, and g^{-1} = 10^{-2}. As given
by Eqs. (5.1)-(5.3), the solution to the SEIR model is determined by the
value of the rate of infection Q:


Q = ba/[(m + a)(m + g)].                        (5.4)
If Q < 1 the disease dies out; if Q > 1, it persists at a constant level and
is said to be endemic. At long times neither of these solutions captures
the properties of the attractors shown in Figures 5.3 and 5.4, that is, the
observation of recurrent epidemics is at variance with the predictions of the
SEIR model as formulated above.

FIGURE 5.3. Reconstructed trajectory for the New York data (smoothed and interpo-
lated). The motion suggests a unimodal 1-D map in the presence of noise. a-d: The data
embedded in three dimensions and viewed from different perspectives. (From [306] with
permission.)

To study the effect of seasonality Schaffer [306] replaces the contact rate
b in Eqs. (5.1) and (5.2) with the periodic function

b(t) = b[1 + b1 cos(2πt)],                      (5.5)

where b1 sets the depth of the seasonal modulation (cf. the caption of Figure 5.6).

For this form of the contact rate the solution to the SEIR model has
period-doubling bifurcations leading to chaos [15, 311, 312]. Schaffer uses
ART along with the number of exposed and infectives to obtain the results
shown in the top row of Figure 5.6. In this figure the attractors generated


by the data are compared with that produced by the SEIR model. The
resemblance among them is obvious. The second row depicts the attractors
as seen from above. From this perspective the essential two-dimensional na-
ture of the flow field is most evident. Poincaré sections are taken by plotting
the intersection of the attractor with a transverse line drawn through each
of the three attractors. It is seen that these sections are V-shaped half lines,
demonstrating that the flow is confined to a nearly two-dimensional conical
surface. A one-dimensional map was constructed by plotting the sequential
intersecting points against one another yielding the nearly single humped
maps shown in the final row of Figure 5.6. These maps for the New York
and Baltimore measles data depict a strong dependence between consecu-
tive intersections. When a similar analysis was made of the chicken pox and
mumps data no such dependence was observed, that is, the plot yielded a
random spray of points.

FIGURE 5.4. Reconstructed trajectory for the Baltimore data. The 1-D map is very
steep and compressed. Order of photographs as in Figure 5.3.

The failure of the chicken pox and mumps data to yield a low-dimensional
attractor in phase space led Schaffer and Kott to investigate the effects of
uncorrelated noise on a known deterministic map. The measure they used
to determine the nature of the attractor was the one-dimensional map from
the Poincaré surface of section. They argued that the random distribution



FIGURE 5.5. Estimating the fractal dimension for measles epidemics in New York. Left:
The correlation integral C(r) plotted against the length scale r for different embeddings
m of the data. Right: Slope of the log-log plot against embedding dimension (from [306]
with permission).

of points observed in the data could be the result of a map of the form
Xn+1 = (1 + Zn)F(Xn)                            (5.6)
where F (Xn ) is the mapping function and Zn is a discrete random variable
with Normal statistics of prescribed mean and variance. They showed that
the multiplicative noise Zn could totally obscure the underlying map F (Xn )
when the dynamics are periodic. However as the system bifurcates and
moves towards chaos the effect of the noise is reduced, becoming negligible
when chaos is reached. Thus they conclude: “...that whereas noise can easily
obscure the underlying determinism for systems with simple dynamics, this
turns out not to be the case if the dynamics are complex.” This result is at
variance with the earlier interpretation of Bartlett [29] that the observed
spectrum for measles resulted from the interaction between a stochastic
environment and weakly damped deterministic oscillations. Olsen and Degn
[255] support the conclusions of Schaffer and Kott, stating:
The conclusion that measles epidemics in large cities may
be chaotic due to a well defined, albeit unknown mechanism is
also supported by the analysis of measles data from Copenhagen
yielding a one-dimensional humped map almost identical to the
ones found from the New York and Baltimore data.
Hence we have seen that ART is not only useful when the data yield
low-dimensional attractors, but also has utility when they do not. That is


to say that some of the ideas in nonlinear dynamics, conjoined with the
older concepts of stochastic equations, can explain why certain data sets
do not yield one-dimensional maps. These insights become sharper through
subsequent examples.
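The effect described above is easy to reproduce from Eq. (5.6). In the sketch below the logistic map stands in for F, a choice made purely for illustration and not necessarily the map Schaffer and Kott studied.

    import numpy as np

    def noisy_map(r, sigma, n=2000, x0=0.3, seed=1):
        """Iterate Eq. (5.6), X_{n+1} = (1 + Z_n) F(X_n), with the logistic
        map F(x) = r x (1 - x) and Z_n zero-mean Normal noise."""
        rng = np.random.default_rng(seed)
        x = np.empty(n)
        x[0] = x0
        for i in range(n - 1):
            x[i + 1] = (1.0 + sigma * rng.standard_normal()) * r * x[i] * (1.0 - x[i])
            x[i + 1] = min(max(x[i + 1], 0.0), 1.0)   # keep iterates in [0, 1]
        return x

    # Plotting x[1:] against x[:-1] in the two cases shows the effect: at
    # r = 3.2 (period two) the noise smears the orbit into formless patches,
    # while at r = 3.9 (chaos) the one-humped graph of F remains visible
    # despite the same noise level.
    periodic, chaotic = noisy_map(3.2, 0.05), noisy_map(3.9, 0.05)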
In order not to leave the impression that this interpretation of the data
is uniformly accepted by the epidemiological community we mention the
criticism of Aron [15] and Schwartz [312]. Much of the debate centers on
the contact rate parameter, which because it varies through the year, must
be estimated indirectly. Aron contends that the models are extremely sen-
sitive to parameters such as the contact rate, and the variation in these pa-
rameters over 30 to 40 years could produce the fluctuations [276]. Schwartz
cautions against the over-use of such simplified models as the SEIR, since it
does not yield quantitative agreement with the real world situation. Pool’s
Science article gives a clear exposition of the state of the debate as it stood
two decades ago.

FIGURE 5.6. Measles epidemics real and imagined. Top row: Orbits reconstructed from
the numbers of infective individuals reported monthly with three-point smoothing and
interpolation with cubic splines [307]. Time lag for reconstructions indicated in photos.
Middle row: Orbits viewed from above (main part of the figures) and sliced with a plane
(vertical line) normal to the paper. Poincaré sections shown in the small boxes at upper
left. Bottom row: One of the Poincaré sections magnified (left) and resulting 1-D map
(right). In each case, 36 years of data are shown. Left column: data from New York City.
Middle column: data from Baltimore. Right column: SEIR equations with parameters
as in Figure 5.3 save b1 = 0.28 (from [306] with permission).


The difficulty in testing the above ideas has to do in part with the lack
of experimentally controlled data with which to pin down the underlying parameters. Some
recent attempts to clarify one of the underlying issues focused on the fre-
quency of contacts between individuals in a social gathering, which is im-
portant since the spread of infectious diseases is strongly dependent on
these patterns of individual contacts. Stehle et al. [328] point out there
are few empirical studies available that provide estimates of the number
and duration of contacts between individuals. In their study the number
and duration of individual contacts at a two-day medical conference were
recorded using radiofrequency identification devices. The distribution of
the number of contacts versus contact duration is depicted in Figure 5.7. It
is clear from the figure that the duration of contact times has a long tail,
which is to say, that the average time does not characterize the distribution
very well.

FIGURE 5.7. Distribution of the contact duration between any two people at the con-
ference on a log-log scale. The mean duration was 49 seconds, with a standard deviation
of 112 seconds. From [328].

Stehle et al. [328] assessed the role of data-driven dynamic contact pat-
terns between the 405 individuals participating in the study in shaping the
spread of a simulated epidemic in a population using various extensions of
the SEIR model. They used both the dynamic network of contacts defined
by the collected data, and two aggregated versions of such networks, to


assess the role of the time varying aspects of the data. This is an exciting
application of the understanding of dynamic complex networks to epidemi-
ology. The broad distributions of the various network characteristics re-
ported in this study were consistent with those observed in the interaction
networks of two previous conferences [55, 169]. This study emphasizes the
effect of contact heterogeneity on the dynamics of communicable diseases
and showed that the rate of new contacts is a very important parameter
in modeling the spread of disease. However they found that increasing the
complexity of the model did not always increase its accuracy.
Their analysis of a detailed contact network and a simplified version of the
same network generated very similar results. These results invite further
exploration to determine their generality.

5.2 Chaotic Neurons


The accepted theory of the generation and propagation of the excitation of
nerve and muscle cells involves electrochemical processes localized in the
membranes of those cells. The movement of the nerve pulse corresponds
to the movement of small ions. Nerve excitation is transmitted throughout
the nerve fiber which itself is part of the nerve cell known as a neuron. The
neuron is in most respects quite similar to other cells in that it contains
a nucleus and cytoplasm. It is distinctive in that long, threadlike tendrils
emerge from the cell body, and those numerous projections branch out into
still finer extensions. These tendrils are the dendrites that form a branching
tree of ever more slender threads not unlike the fractal trees discussed
earlier. One such thread does not branch and often extends for several
meters even though it is still part of a single cell. This is the axon which
is the nerve fiber in the typical nerve [17, 351]. Excitations (depolarization
waves) in the dendrites essentially travel toward the cell body in a living
system whereas in the axon they always travel away from the cell body.
In 1852 the German physician-physicist Helmholtz first measured the
speed of the nerve impulse by stimulating a nerve at different points and
recording the time it took for the muscle to which it was connected to
respond. It was not until half a century later that Bernstein worked out
the membrane theory of excitation.
It has been known for some time that the activity of a nerve is always
accompanied by electrical phenomena. Whether it is external excitation of
a nerve or the transmission of a message from the brain, electrical impulses
are observed in the corresponding axon. As pointed out by Kandel [179],
because of the difficulty in examining patterns of interconnections in the
human brain, there has been a major effort on the part of neurologists


to develop animal models for studying how interacting systems of neurons
give rise to behavior. There appears, for example, to be no fundamental
differences in structure, chemistry or function between the neurons and
their interconnections in man and those of a squid, a snail or a leech. How-
ever neurons do vary in size, position, shape, pigmentation, firing patterns
and the chemical substances by which they transmit information to other
cells. Here we are most interested in the differences in the firing patterns
taken, for example, from the abdominal ganglion of Aplysia. As Kandel
[179] points out certain cells are normally ‘silent’ while others are sponta-
neously active. As shown in Figure 5.8 some of the active ones fire regular
action potentials, or nerve impulses, and others fire in recurrent brief bursts
or pulse trains. These different patterns result from differences in the types
of ionic currents generated by the membrane of the cell body of the neurons.

FIGURE 5.8. Firing patterns of identified neurons in Aplysia’s abdominal ganglion are
portrayed. R2 is normally silent, R3 has a regular beating rhythm, R15 a regular bursting
rhythm and L10 an irregular bursting rhythm. L10 is a command cell that controls other
cells in the system. (From [179] with permission.)

The rich dynamic structure of the neuron firing patterns has led to their
being modeled by nonlinear dynamical systems. In Figure 5.8 the normally


silent neuron can be viewed as a fixed point of a dynamic system. The
periodic pulse train is suggestive of a limit cycle and, finally, the erratic
bursting of random wave trains is not unlike the time series generated
by certain chaotic attractors. This spontaneous behavior of the individual
neurons may be modified by driving the neurons with external excitations.
This is done subsequently. It is also possible that the normal activity can
be modified through changes in internal control parameters of the isolated
system.
Rapp et al. [284] speculate that transitions among fixed point, periodic
and chaotic attractors by varying system control parameters may be ob-
served clinically in failures of physiological regulation. The direction of the
transition is still the source of some controversy, as mentioned before with
regard to the heart, it remains unresolved whether certain pathologies are
a transition from normally ordered periodic behavior to abnormal chaos, or
from normally chaotic behavior to abnormal periodicity. In the earlier car-
diac context I support the latter position [127, 367, 368]. There also seems
to be evidence accumulating in a number of other contexts, the present one
included, to support the view that the observed rich dynamic structure in
normal behavior is a consequence of chaotic attractors, and the apparent
rhythmic dynamics are the phase coherence in the attractors. Rapp et al.
[284] present experimental evidence that spontaneous chaotic behavior does
in fact occur in neurons.
In their study Rapp et al. recorded the time between action potentials
(interspike intervals) of spontaneously active neurons in the precentral
and postcentral gyri (the areas immediately anterior and posterior to the
central fissure) of the squirrel monkey brain. The set of measured interspike
intervals {t_j}, j = 1, 2, ..., M, was used to define a set of vectors X_j =
(t_j, t_{j+1}, ..., t_{j+m−1}) in an m-dimensional embedding space. These vectors
are used to calculate the correlation integral of Grassberger and Procaccia
[136, 137] discussed earlier.
To determine the correlation dimension from the interspike interval
data a scaling region in the correlation integral must be determined. Of ten
neurons measured, three were clearly described by low-dimensional fractal
time series, two were ambiguous, and five could be modeled by uncorrelated
random noise. Rapp et al. [284] drew the following two conclusions from
this study:

1. ... the spontaneous activity of some simian cortical neurons, at least on occasion, may be chaotic;
2. ... irrespective of any question of chaos, the dimension
of the attractor governing the behavior can, at least for some
neurons for some of the time, be very low.


For these last neurons we have the remarkable result that as few as three
or four variables may be sufficient to model the neuronal dynamics if in fact
the source of their fractal nature is a low-dimensional attractor. It would
have been reckless to anticipate this result, but we now see that in spite
of the profound complexity of the mammalian central nervous system the
dynamics of some of its components may be describable by low-dimensional
dynamic systems. Thus, even though we do not know what the dynamic
relations for these neurons systems might be, the fact that they do mani-
fest such relatively simple dynamical behavior, bodes well for the eventual
discovery of the underlying dynamic laws.
The next level of dynamic complexity still involving only a single neuron
is its response when subjected to stimulation. This is a technique that was
mature long before nonlinear dynamics was a defined concept in biology.
I review some of the studies here because it is clear that many neurons
capable of self-sustained oscillations are sinusoidally driven as part of the
hierarchal structure in the central nervous system. The dynamics of the
isolated neuron, whether periodic or chaotic, may well be modified through
periodic stimulation. This has been found to be the case.
Hayashi et al. [148] were the first investigators to experimentally show
evidence of chaotic behavior in the self-sustained oscillations of an excitable
biological membrane under sinusoidal stimulation. The experiments were
carried out on the giant internodal cell of the freshwater alga Nitella flex-
ilis. A sinusoidal stimulation, A cos(ω0 t) + B, was applied to the internodal
cell which was firing repetitively. The DC outward current B was applied
in order to stably maintain the repetitive firing which was sustained for
40 minutes. In Figure 5.9 the repetitive firing under the sinusoidal cur-
rent stimulation is shown. In Figure 5.9a the firing current is seen to be
one-to-one phase locked to the stimulating current.
The phase plot of sequential peaks is shown in Figure 5.10a, where the
stroboscopic mapping function is observed to converge on a point lying
along a line of unit slope. In Figure 5.9b we see that the firing of the neuron
has become aperiodic, losing its entrainment to the stimulation. This in
itself is not sufficient to establish the existence of a low-dimensional chaotic
attractor. Additional evidence is required. The authors obtain this evidence
by constructing the mapping function between successive maxima of the
pulse train. For an uncorrelated random time series this mapping function
is just a random spray of points, whereas for a chaotic time series this
function is well defined. The mapping of sequential peaks depicted in Figure
5.10b reveals a single-valued mapping function. The slope of this function
is less than −1 at its intersection with the line of unit slope. The lines in
Figure 5.10b clearly indicate that the mapping function admits of a period
three solution. Hayashi et al. [148] then invoked a theorem due to Li and



FIGURE 5.9. Entrainment and chaos in the sinusoidally stimulated internodal cell of
Nitella. (a) Repetitive firing (upper curve) synchronized with the periodic current stim-
ulation (lower curve). (b) Non-periodic response to periodic stimulation. (From Hayashi
et al. [148] with permission.)

Yorke [197] that states: “period three implies chaos.” They subsequently
show that entrained, harmonic, quasiperiodic and chaotic responses of the
self-sustained firing of the Nitella internodal cell occur for different values
of the amplitude and frequency of the periodic external force [149]. These
same four categories of responses were obtained by Matsumoto et al. [226]
using a squid giant axon.
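The next-amplitude construction used in Figures 5.10 and 5.12 amounts to locating the successive maxima V_n of the recorded potential and plotting each against its predecessor. A minimal sketch, with a synthetic signal standing in for the membrane recording, follows.

    import numpy as np

    def next_amplitude_pairs(v):
        """Find the local maxima V_n of a sampled signal and return the
        pairs (V_n, V_{n+1}) that define the one-dimensional map."""
        peak = (v[1:-1] > v[:-2]) & (v[1:-1] >= v[2:])
        peaks = v[1:-1][peak]
        return peaks[:-1], peaks[1:]

    t = np.linspace(0.0, 200.0, 20001)        # synthetic stand-in record
    v = np.sin(t) * (1.0 + 0.5 * np.sin(0.31 * t))
    vn, vn1 = next_amplitude_pairs(v)
    # a well-defined single-valued curve in the (V_n, V_{n+1}) plane points
    # to deterministic dynamics; a structureless spray points to noise

A period-three orbit of the resulting map, located graphically as in Figures 5.10b and 5.12, is then enough to invoke the Li-Yorke theorem.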
The above group [148] also investigated the periodic firing of the Onchid-
ium giant neuron under sinusoidal stimulation (the pacemaker neuron from
the marine pulmonate mollusk Onchidium verruculatum). The oscillatory
response does not synchronize with the sinusoidal stimulation, but is in-
stead aperiodic. The trajectory of the oscillation is shown in Figure 5.11
and it is clearly not a single closed curve but a filled region of phase space.
This region is bounded by the trajectory of the larger action potentials.



FIGURE 5.10. (a) and (b) are the stroboscopic transfer function obtained from Figure 5.9
(a) and (b) respectively. The membrane potential at each peak of the periodic stimulation
was plotted against the preceding one. Period three is indicated graphically by arrows
in (b). (From Hayashi et al. [149] with permission.)

Here again the stroboscopic mapping function is useful for characterizing
the type of chaos that is evident in Figure 5.11. The single-humped mapping
function is shown in Figure 5.12 and is quite similar to the one observed in
Figure 5.10b for a different neuron. Again the map allows for period three
orbits and therefore chaos [149]. Further studies by this group indicate that
due to the form of the one-dimensional map the transition to chaos occurs
through intermittency.
Now that we have such compelling experimental evidence that the basic
unit of the central nervous system has such a repertoire of dynamic re-
sponses it is reasonable to ask if the solutions to any models have these
features. In the case of epidemics we observed that the SEIR model did
capture the essential features found in the data. It has similarly been de-
termined by Aihara et al. [1] that the numerical solutions to the periodically



FIGURE 5.11. The trajectory of the non-periodic oscillation. The trajectory is filling
up a finite region of the phase space. The oscillation of the membrane potential was
differentiated by a differentiating circuit whose phase did not shift in the frequency
region below 40 Hz. (From Hayashi et al. [148] with permission.)

forced Hodgkin-Huxley equations also give rise to this array of dynamic
responses. The Hodgkin-Huxley equation for the membrane potential dif-
ference V is

dV/dt = [I − g_Na m^3 h (V − V_Na) − g_K n^4 (V − V_K) − g_L (V − V_L)]/C        (5.7)
where the g_j's are the maximal ionic conductances and the V_j's are the
reversal potentials for j = sodium (Na), potassium (K) and leakage current
component (L); I is the membrane current density (positive outward); C is
the membrane capacitance; m is the dimensionless sodium activation; h is
the dimensionless sodium inactivation and n is the dimensionless potassium
activation. The functions m, h and n satisfy their own rate equations that
depend on V and the temperature, but there is no reason to write these
down here; see, for example, Aihara et al. [1].
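To connect Eq. (5.7) to the numerical experiments just described, the sketch below integrates the sinusoidally forced equations. The gating kinetics for m, h and n, which the text deliberately omits, are filled in here from the standard squid-axon parameterization (V in mV measured from rest, time in ms); the drive amplitude and frequency are arbitrary illustrative values, not those used by Aihara et al.

    import numpy as np
    from scipy.integrate import solve_ivp

    G_NA, G_K, G_L = 120.0, 36.0, 0.3         # maximal conductances, mS/cm^2
    V_NA, V_K, V_L = 115.0, -12.0, 10.6       # reversal potentials, mV
    C = 1.0                                   # membrane capacitance, uF/cm^2

    def ratio(x, y):
        # x / (exp(x/y) - 1) with the removable singularity at x = 0 filled
        return y if abs(x) < 1e-7 else x / np.expm1(x / y)

    def hh_forced(t, s, i0, amp, freq):
        """Eq. (5.7) with standard gating kinetics, driven by the
        sinusoidal current I(t) = i0 + amp * sin(2*pi*freq*t)."""
        v, m, h, n = s
        i_ext = i0 + amp * np.sin(2.0 * np.pi * freq * t)
        dv = (i_ext - G_NA * m**3 * h * (v - V_NA)
                    - G_K * n**4 * (v - V_K)
                    - G_L * (v - V_L)) / C
        dm = 0.1 * ratio(25.0 - v, 10.0) * (1.0 - m) - 4.0 * np.exp(-v / 18.0) * m
        dh = 0.07 * np.exp(-v / 20.0) * (1.0 - h) - h / (np.exp((30.0 - v) / 10.0) + 1.0)
        dn = 0.01 * ratio(10.0 - v, 10.0) * (1.0 - n) - 0.125 * np.exp(-v / 80.0) * n
        return [dv, dm, dh, dn]

    sol = solve_ivp(hh_forced, (0.0, 500.0), [0.0, 0.05, 0.6, 0.32],
                    args=(10.0, 15.0, 0.1), max_step=0.05)

Sweeping the amplitude and frequency of the drive and applying the stroboscopic and next-amplitude constructions described above reproduces, qualitatively, the entrained, quasiperiodic and chaotic regimes reported for the forced axon.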


FIGURE 5.12. Stroboscopic transfer function of the chaotic response to periodic current
stimulation in the Onchidium giant neuron. The arrows indicate period three. (From
Hayashi et al. [149] with permission.)

There was good agreement found between the time series of the exper-
imental oscillations in the membrane potential of the periodically forced
squid axon by Matsumoto et al. [226] and those obtained in the numeri-
cal study by Aihara et al. The latter authors determined that there were
two routes to chaos followed by the Hodgkin-Huxley equations: succes-
sive period doubling bifurcations and the formation of the intermittently
chaotic oscillation from subharmonic synchronization. The former route
had previously been analyzed by Rinzel and Miller [293] for the autonomous
Hodgkin-Huxley equations, whereas the present discussion focuses on the
non-autonomous system. Aihara et al. [1] reach the conclusion:
Therefore, it is expected that periodic currents of various
forms can produce the chaotic responses in the forced Hodgkin-
Huxley oscillator and giant axon. This implies that neural sys-
tems of nonlinear neural oscillators connected by chemical and
electrical synapses to each other can show chaotic oscillations
and supply macroscopic fluctuations to the biological brain.


5.3 Chemical Chaos


Chemistry forms the basis of all biomedical phenomena. Hess and Markus
[156] point out that in biochemistry, oscillating dynamics play a prominent
role in biological clock functions, in inter-cellular and intracellular signal
transmission, and in cellular differentiation. Certain solutions to the pe-
riodically forced Hodgkin-Huxley equation, which describes the chemically
driven membrane potential in neurons, are chaotic. In chemical reactions
there are certain species called reactants that are continuously converted
to other species called products. In complex reactions there are often other
species around, called intermediaries, whose concentrations both increase
and decrease during the course of the primary reaction. In simple react-
ing systems subject to diffusion the reactants, products and intermediaries
normally approach a spatially uniform state, that is, a state in which each
species concentration approaches a different constant value in the reacting
mixture. In the type of reaction considered by Rössler [298] it was assumed
that the chemical mixture is well stirred at all times so the reaction is in-
dependent of where in the spatial mixture it occurs. That is to say that the
effects of spatial diffusion are removed from the total rate of change of the
reactant concentration and oscillations become possible. Such oscillating
reactions were widely studied in past decades [155, 254, 283].
In the Belousov-Zhabotinskii (BZ) reaction, mentioned earlier, the bi-
furcation behavior is clearly observed. In Figure 5.13 a steady state, in
which the ‘constant’ concentrations of bromide and cerium ions persist
for over 600 seconds, is seen to proceed to a periodic state. A read-
able discussion of this reaction for the nonspecialist is given by Field [97],
wherein he points out that the control parameter (bifurcation parameter
μ) is the amount of BrCH(COOH)2 in the reacting vessel. It is surprising
that the amplitude of the oscillating concentration did not start out small
and gradually increase. Bifurcation theory offers an explanation as to why
the oscillations appear at their full amplitude rather than increasing grad-
ually. The steady state remains locally stable until the control parameter
exceeds a critical value at which point the steady state becomes unstable
and makes a transition to a periodic state. This sudden change in behavior
is characteristic of bifurcations in systems governed by nonlinear kinetics
laws and evolving biological systems [97, 334].
Simoyi et al. [321] conducted experiments on the BZ reaction in a well-
stirred reactor as a function of the flow rate of the chemicals through the
reactor. In Figure 5.14 is depicted the observed bromide-ion potential time
series for different values of the flow rate (the flow rate is the control pa-
rameter in this experiment). They, as well as Roux et al. [299], used the


embedding theorems [333, 393] to justify reconstruction of the dynamic
attractor from the single bromide-ion concentration [97].

FIGURE 5.13. The Belousov-Zhabotinskii (BZ) reaction is the most fully understood
chemical reaction that exhibits chemical organization. The general behavior of this
reaction is shown as the concentrations of bromide and cerium ions oscillate. (From [97]
with permission.)

Thus, for sufficiently high values of the control parameter (flow rate) the
attractor becomes chaotic. In Figure 5.15a is depicted a two-dimensional
projection of the three-dimensional phase portrait of the attractor with the
third axis normal to the plane of the page. A Poincaré surface of section is
constructed by recording the intersection of the attractor with the dashed
line to obtain the set of data points {Xn }. The mapping function shown
in Figure 5.15b is obtained using these data points. The one-humped form
of the one-dimensional map clearly indicates the chaotic character of the
attractor. These observations were thought to provide the first example of a
physical system with many degrees of freedom that can be modeled in detail
by a one-dimensional map. However, Olsen and Degn [254] had observed
chaos in an oscillating enzyme reaction, the peroxidase-oxidase reaction, in an
open system some five years earlier. The next amplitude plot for this latter
reaction does not yield the simple one-humped mapping function shown
in Figure 5.15b, but rather has a “Cantor set-like” structure as shown in
Figure 5.16. Olsen and Degn [254] constructed a mathematical model con-
taining the minimal chemical expressions for quadratic branchings. The
results yielded periodic and chaotic oscillations closely resembling the ex-
perimental results. In Figure 5.16 the next amplitude plot of the chaotic
solutions for the data is overlaid on the numerical solutions. As pointed out
by Olsen [253], “The dynamic behavior of the peroxidase-oxidase reaction
may thus be more complex than the behavior previously reported for the
BZ reaction”.


FIGURE 5.14. Observed bromide-ion potential series with periods τ (115 s), 2τ, 2×2τ,
6τ, 5τ, 3τ, and 2×3τ; the dots above the time series are separated by one period. (From
Simoyi et al. [321] with permission.)

Yet a third chemical reaction manifesting a rich dynamic behavior is gly-
colysis under periodic excitation. Markus et al. [223] examined the proper-
ties of periodic and aperiodic glycolytic oscillations in yeast extracts under
sinusoidal glucose input. They used a variety of methods to analyze these
reactions including spectral analysis, phase space reconstruction, Poincaré
surface of section and the determination of the Lyapunov exponent for the
attractor dynamics. They used a two-enzyme glycolytic model to predict
a rich variety of periodic responses and strange attractors from numerical
solutions of the equations and then experimentally confirmed the existence
of these predicted states.
The experiments were conducted with cell-free extracts of commercial
baker’s yeast (Saccharomyces cerevisiae) (pH 6.4, 22-23°C, 20-27 mg pro-
tein/ml) by continuous and periodic injection of 0.3 M glucose and record-
ing the NADH fluorescence (F). In Figure 5.17 their experimental results
are depicted. The lower curve indicates the periodic input flux, and the up-

per curve shows a typical train of response variations with no discernible
period. Using the upper curve as the data set, the power spectral density
indicates a broadband spectrum indicative of noise (randomness) on which
a number of sharp peaks are superimposed, indicating order. Investigators
now know that the presence of these two features together is indicative of
chaos in the time series. If Te is the period of the input flux then Fn ,
defined as the value of F (t) at time t = nTe , can be used to obtain the
‘stroboscopic transfer’ function: the nonlinear map such as shown in
Figures 5.15 and 5.16.
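In code the stroboscopic transfer function amounts to reading off the
response once per forcing period. A minimal sketch, with hypothetical
sampling parameters dt and Te:

```python
import numpy as np

def stroboscopic_pairs(F, dt, Te):
    """Sample the response F once per forcing period Te and return the
    pairs (F_n, F_{n+1}) that define the stroboscopic transfer function."""
    step = int(round(Te / dt))       # samples per forcing period
    Fn = np.asarray(F)[::step]
    return Fn[:-1], Fn[1:]

# Usage: with the fluorescence sampled every dt = 0.5 s and an input flux
# period Te = 120 s (illustrative values), distinct clusters of points
# indicate a periodic response; a smooth one-humped curve indicates chaos.
```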

[Figure 5.15 near here: (a) B(ti + 53 s) plotted against B(ti );
(b) Xn+1 plotted against Xn .]

FIGURE 5.15. (a) A two-dimensional projection of a three-dimensional phase portrait


for the chaotic state reconstructed from the Belousov-Zhabotinskii chemical reaction.
(b) A one-dimensional map constructed from the data in (a). (From Simoyi et al. [321]
with permission.)

In Figure 5.18 two determinations of the one-dimensional map are de-


picted. In plot (a) an oscillation with a response period equal to 3Te is
shown, wherein the map consists of three distinct patches of data points.
In plot (b) we see from the single humped map that the time series is
áperiodic. Markus et al. point out that this transfer function allows the
determination of a sequence of points having the same periodicity as plot
(a), namely those indicated by 1, 2 and 3. According to the Li-York theo-
rem [197], this transfer function thus admits of chaos. Further verification
of the chaotic nature of the time series was made through the evaluation of
the Lyapunov characteristic exponent λ. As mentioned in Chapter One the
Lyapunov exponent is interpreted as the average rate of growth of infor-
mation as the system evolves. A chaotic system is one possessing a positive
Lyapunov exponent and thereby has a positive rate of increase of macro-

scopic information. They obtain λ = 0.95 bits as the rate of growth of


information during chaotic response.
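The meaning of a Lyapunov exponent quoted in bits is easily illustrated on
a map whose chaos is well established: for a one-dimensional map it is the
orbit average of log2 |f ′ (x)|. A minimal sketch using the logistic map
(an illustration, not the glycolytic model itself):

```python
import numpy as np

# Lyapunov exponent of the logistic map x -> r x (1 - x) in bits per
# iteration: the orbit average of log2|f'(x)|.
r, x, lam, n = 4.0, 0.3, 0.0, 100_000
for _ in range(n):
    lam += np.log2(abs(r * (1.0 - 2.0 * x)))   # log2|f'(x)| at current point
    x = r * x * (1.0 - x)
print(lam / n)   # ~1 bit per iteration at r = 4, i.e. chaotic
```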


FIGURE 5.16. Next amplitude plot of the oscillations observed in the peroxidase-oxidase
reaction. (a) 3000 maxima have been computed. The first of these maxima is preceded
by 100 maxima that were discarded. (b) Magnification of the square region shown in
(a). (From Olsen [253] with permission.)

5.4 Cardiac Chaos


As discussed previously there are several areas of the mammalian heart ca-
pable of spontaneous, rhythmic self-excitation, but under physiologic con-
ditions the normal pacemaker is the sinoatrial (SA) node. The SA node is
a small mass of pacemaker cells embedded in the right atrial wall near the

entrance of the superior vena cava. An impulse generated by the SA node


spreads through the atrial muscle (triggering atrial contraction). The de-
polarization wave then spreads through the atrioventricular (AV) node and
down the His-Purkinje conduction system into the right and left ventricles.
There are a large number of both linear and nonlinear mathematical mod-
els describing this process of conduction between the SA and AV nodes.
Here we show how a number of experimental studies have used nonlinear
tools to distinguish between chaos and noise [141, 167, 182, 294, 364, 381],
and to assist in understanding the physiological dynamics.
[Figure 5.17 near here: fluorescence F (upper trace) and input flux Vin
(lower trace) versus time, 0–150 min.]

FIGURE 5.17. Measured NADH fluorescence (upper curve) of yeast extract under sinu-
soidal glucose input flux (lower curve). (From Markus et al. [223] with permission.)

The experimental technique of externally stimulating a neuron to induce


behavior that enables the experimenter to deduce its intrinsic dynamics
has also been applied by Glass et al. [115] to aggregates of spontaneously
beating cultured cardiac cells. These aggregates of embryonic cells of chick
heart were exposed to brief single and periodic current pulses and the re-
sponse recorded. A fundamental assumption of this work was that changes
in the cardiac rhythm can be associated with bifurcations in the qualitative
dynamics of the type of mathematical models we have been considering.
The analysis of Glass et al. [115] makes three explicit assumptions:

(i) A cardiac oscillator under normal conditions can be de-


scribed by a system of ordinary differential equations with a
single unstable steady state and displaying an asymptotically

stable limit cycle oscillation which is globally attracting except


for a set of singular points of measure zero.

[Figure 5.18 near here: panels (a) and (b), Fn+1 plotted against Fn .]

FIGURE 5.18. Stroboscopic transfer function for a periodic response at


ωe = 3.02ω0 (a) and for chaos at ωe = 2.76 ω0 (b). The plus signs (+)
indicate the signal Fn+1 (arbitrary units) measured at time (n+1)Te versus
the signal Fn at time nTe , where Te is the input flux period. The solid curve
in panel (b) is an interpolated transfer function. The period in panel (a) is
3Te and the transfer function in panel (b) admits the same period. These
periodicities are indicated in both panels by vertical and horizontal lines
and by the numbers. (From Markus et al. [223] with permission.)

(ii) Following a short perturbation, the time course of the


return to the limit cycle is much shorter than the spontaneous
period of oscillation or the time between periodic pulses.
(iii) The topological characteristics of the phase transition
curve (PTC) change in stereotyped ways as the stimulus strength
increases.

Denote the phase of the oscillator immediately before the ith stimulus of
a periodic stimulation with a period τ by φi . The recursion relation is
φi+1 = g (φi ) + τ /T0                  (5.8)

FIGURE 5.19. The new phase of the cardiac oscillator following a stimulation is plotted
against the old phase, the resulting curve is called the phase transition curve. This is
denoted by g (φ) in the text. (From Glass et al. [115] with permission.)

where g (φ) is the experimentally determined phase response function for


that stimulus strength and g (φ + j) = g (φ) + j for an integer j and T0
is the period of the limit cycle. Equation (5.8) measures the contraction
of the aggregate as a function of the phase of the contraction at the time
of the perturbation. Using the phase resetting data a Poincaré map was
constructed to determine the phase transition function depicted in Figure
5.19. This is done by plotting the new phase following a stimulation against
the old phase, the resulting curve is called the phase transition curve. The
theoretical equation Eq. (5.8) is now iterated, using the experimentally

determined g (φ), to compute the response of the aggregate to periodic


stimulation. The observed responses to such perturbation are phase locking,
period doubling and chaotic dynamics as the frequency of the periodic
driver is increased.
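Iterating Eq. (5.8) is straightforward once g (φ) has been measured. The
sketch below uses a hypothetical smooth phase transition curve in place of
the experimental one; the parameter b plays the role of stimulus strength:

```python
import numpy as np

def iterate_phase_map(g, tau_over_T0, phi0=0.2, n=500):
    """Iterate Eq. (5.8): phi_{i+1} = g(phi_i) + tau/T0, taken mod 1."""
    phis = [phi0 % 1.0]
    for _ in range(n):
        phis.append((g(phis[-1]) + tau_over_T0) % 1.0)
    return np.array(phis)

# A hypothetical phase transition curve standing in for the measured
# g(phi); it satisfies g(phi + j) = g(phi) + j as required in the text.
b = 0.25
g = lambda phi: phi + b * np.sin(2.0 * np.pi * phi)
orbit = iterate_phase_map(g, tau_over_T0=0.35)[100:]  # discard transient
# A short repeating cycle in `orbit` signals phase locking; sweeping b
# and tau/T0 produces period doubling and chaotic orbits.
```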
The above authors do not attribute the observed irregularity to deter-
ministic chaotic dynamics alone, but argue that the observed effects can
be strongly influenced by biological and environmental noise, and that
prolonged periodic stimulation of the aggregate changes its response prop-
erties.
In summary, the dynamics in response to periodic stimulation are pre-
dicted by iterating the experimentally derived map and bear a close re-
semblance to those observed experimentally. Glass et al. [115] point out that
the experimentally observed dynamics show patterns similar to many com-
monly observed cardiac arrhythmias. Ikeda et al. [168] use the properties of
the phase response model to explain ventricular parasystoles. Guevara and
Glass [141] associate intermittent or variable AV block with the complex
irregular behavior characteristic of chaotic dynamics observed in the phase
response model.
Glass et al. [115] unanimously associate the chaotic dynamics with patho-
logical rather than normal cardiac behavior. The same conclusions were
reached by Ritzenberg et al. [294] using the electrocardiogram and arte-
rial blood pressure traces of noradrenaline-treated dogs. Noradrenaline was
found to produce variations in these traces that repeat themselves with
regular periods of integral numbers of heart beats, an effect reminiscent of
subharmonic bifurcation. A next amplitude plot of the T-waves is depicted
in Figure 5.20. If this plot is viewed as a one-dimensional map then it is
monotonic and hence invertible and therefore in itself does not provide ev-
idence for the occurrence of chaos. Oono et al. [257] analyze the pulses of
a patient suffering from arrhythmia and also construct a next amplitude
plot of T-waves. The map in Figure 5.21 clearly shows that the arrhythmia
of this patient is characterized by an orbit of period three. This suggests
that Figure 5.20 may be more consistently interpreted as two distinct blobs
rather than as a continuous map.
Let us again consider the electrical activity of the normal heart, where
the potential difference between various points on the surface of the body is
called the electrocardiogram (ECG). The ECG time series consists of the
P-wave, the QRS complex and the T-wave. The first component reflects
the excitation of the atria, the second that of the ventricles (His-Purkinje
network) and the third is associated with recovery of the initial electrical
state of the ventricles. Traditional wisdom and everyday experience tell us
that the ECG time series is periodic; however, quantitative analysis of the
time series reveals that it is not periodic as was shown in Section 4.3.1.

Section 3.1 presented a set of coupled nonlinear differential equations


to model certain features of the cardiac dynamics. This model, based on
a generalization of the cardiac oscillator [365] of van der Pol and van der
Mark [344, 345], gives a qualitative fit to the ECG time series, but does
not account for the observed fluctuations in the data. The question arises
as to whether these fluctuations are the result of the oscillations being
unpredictably perturbed by the cardiac environment, or are a consequence
of cardiac dynamics unfolding on a chaotic attractor, or both. As mentioned
there are several techniques available from dynamical systems theory that
enable discrimination between these two possibilities. Spectral analysis,
temporal autocorrelation functions and ART are qualitative whereas the
correlation dimension, Lyapunov exponents and Kolmogorov entropy are
quantitative.

[Figure 5.20 near here: T-wave height (N + 1) plotted against T-wave
height (N ), both axes 0–15.]

FIGURE 5.20. The height of the (N + 1)st T-wave is plotted against the height of the
N th T-wave during 48 beats of an episode of period doubling (from Oono et al. [257]
with permission).

Section 2.4 discussed the power spectrum of the QRS complex of a normal
heart and the hypothesis that the fractal structure of the His-Purkinje net-
work serves as a structural substrate for the observed broadband spectrum
[123], as depicted in Figure 2.22. Babloyantz and Destexhe [21] construct
the power spectrum of a four minute record of ECG which also shows

a broadband structure as depicted in Figure 5.22, which can arise from


stochastic or deterministic processes. Unlike the power-law spectra found
for the single QRS complex, they [21] find an exponential power spectrum.
The exponential form has been observed in a number of chaotic systems
and has been used to characterize deterministic chaos by a number of au-
thors [138, 319]. This in itself is rather difficult to make consistent with the
Grassberger-Procaccia correlation dimension:
C(r) ∼ r^ν                  (5.9)

[Figure 5.21 near here: Tn+1 plotted against Tn , 0–1.0 sec.]

FIGURE 5.21. A next amplitude plot of T-wave maximum yields a period three orbit
from a patient with an arrhythmia. (From Oono et al. [257] with permission.)

since the Tauberian theorem applied to Eq. (5.9) implies that for small r
the spectrum corresponding to this correlation function is

S (ω) ∼ 1/ω^(2ν+1)                  (5.10)

for large ω. Whereas if the power spectrum is exponential,

S (ω) ∼ e^(−γω) ,

the corresponding correlation function is

C(r) ∼ γ/(γ^2 + r^2 ).

Thus, it would seem that the cardiac time series is not fractal, but fur-
ther measures suggest that it may in fact be chaotic or at least there is a
persistent controversy as we subsequently discuss.
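The two candidate spectral laws are easy to tell apart numerically: an
exponential spectrum is a straight line on semi-logarithmic axes, while an
inverse power law is a straight line on doubly logarithmic axes. A minimal
sketch, where `ecg` is a hypothetical record sampled every dt seconds:

```python
import numpy as np

def periodogram(x, dt):
    """Crude power spectral density: squared magnitude of the FFT at
    the positive frequencies."""
    X = np.fft.rfft(x - np.mean(x))
    f = np.fft.rfftfreq(len(x), dt)
    return f[1:], (np.abs(X) ** 2)[1:]

# f, S = periodogram(ecg, dt)
# np.polyfit(f, np.log(S), 1)[0]          # ~ -gamma if S ~ exp(-gamma w)
# np.polyfit(np.log(f), np.log(S), 1)[0]  # ~ -(2nu+1) if S ~ 1/w^(2nu+1)
```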
A phase portrait of the ECG attractor may be constructed from the time
series using ART. Figure 5.23 depicts such a portrait in three-dimensional
phase space using two different delay times. The two phase portraits look
different; however, their topological properties are identical. It is clear that
these portraits depict an attractor unlike the closed curves of a limit cycle
describing periodic dynamics. Further evidence for this is obtained by cal-
culating the correlational dimension using the Grassberger-Procaccia cor-
relation function; this dimension is found to range from 3.5 to 5.2 using
four minute time segments of data or 6 × 10^4 data points.
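The Grassberger-Procaccia estimate itself reduces to counting close pairs
of embedded points. A minimal sketch, O(n²) in time and memory so long
records should be subsampled; `delay_embed` from the earlier sketch
supplies the embedded data:

```python
import numpy as np

def correlation_dimension(X, n_r=10):
    """Grassberger-Procaccia estimate: C(r) is the fraction of point pairs
    closer than r, and nu is the slope of log C(r) against log r."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    pairs = d[np.triu_indices(len(X), k=1)]
    rs = np.percentile(pairs, np.linspace(2, 30, n_r))  # small-r region
    Cs = np.array([(pairs < r).mean() for r in rs])
    return np.polyfit(np.log(rs), np.log(Cs), 1)[0]

# Sanity check: white noise embedded in three dimensions fills the space,
# so the estimate should come out near 3.
print(correlation_dimension(np.random.randn(800, 3)))
```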
[Figure 5.22 near here: power spectrum (no window, resolution 29 × 4096
pts), log amplitude versus frequency, 0–40 Hz.]

FIGURE 5.22. Semi-logarithmic plot of a power spectrum from ECG showing exponen-
tial decay at high frequencies followed by a flat region at still higher frequencies (not
shown). The flat region accounts for instrumental noise. (Adapted from Babloyantz and
Destexhe [21] with permission.)

The successive intersection of the trajectory with a plane located at Q


in Figure 5.23 constitutes a Poincaré surface of section. In Figure 5.24 we
see a return map between successive points of intersection, that is, the
set of points P0 , ..., PN are related by Pn = f (Pn−1 ), where f (·) is the
return map. This obvious non-invertible functional relationship between

these points indicates the presence of a deterministic chaotic dynamics, cf.


Figure 5.24b. Babloyantz and Destexhe [21] qualify this result by pointing
out that because of the high dimensionality of the cardiac attractor, no
coherent functional relationships between successive points were observed
in other Poincaré surfaces of section, however, correlational dimensions
were calculated for a total of 36 phase portraits and yielded the results
quoted previously, that is, the correlation dimension spans the interval
3.6 ± 0.01 ≤ D ≤ 5.2 ± 0.01.

FIGURE 5.23. Phase portraits of human ECG time series constructed in three-
dimensional space. A two-dimensional projection is displayed for two values of the delay
time: (a) 12 ms and (b) 1200 ms. (c) represents the phase portrait constructed from the
three simultaneous time series taken from the ECG leads. These portraits are far from
the single closed curve that would describe a periodic activity. (From Babloyantz and
Destexhe [21] with permission.)

Another indicator that ‘normal sinus rhythm’ is not strictly periodic is
the broadband 1/f -like spectrum observed in the analysis of interbeat in-
terval variations in healthy subjects [123, 186]. The heart rate is modulated
by a complicated combination of respiratory, sympathetic and parasympa-
thetic regulators. Akselrod et al. [3] showed that suppression of these effects
considerably alters the RR-interval power spectrum in healthy individuals,
but a broad band spectrum persists. Using the interbeat sequence as a
discrete time series Babloyantz and Destexhe evaluated the correlation di-
mension of RR-intervals to be 4.9 ± 0.40 with typically 1000 intervals in
the series. This dimension is significantly higher than that of the overall
ECG, but we do not as yet understand the relation in the dynamics of the
two quantities.
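The RR-interval series itself is simple to construct once the R peaks are
located; a real ECG requires filtering and a far more robust detector, but
a crude sketch conveys the idea (`ecg`, `dt` and `threshold` are assumed
inputs):

```python
import numpy as np

def rr_intervals(ecg, dt, threshold):
    """Very crude R-peak detector: local maxima above a threshold,
    returned as the series of interbeat (RR) intervals."""
    x = np.asarray(ecg)
    mid = x[1:-1]
    is_peak = (mid > x[:-2]) & (mid > x[2:]) & (mid > threshold)
    peak_times = (np.where(is_peak)[0] + 1) * dt
    return np.diff(peak_times)

# Plotting RR_n against the beat number n gives the interbeat time series
# analyzed above; plotting RR_{n+1} against RR_n gives a return map.
```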

Babloyantz and Destexhe [21] arrive at the conclusion reached earlier by


Goldberger et al. [123] among others that the normal human heart follows
deterministic dynamics of a chaotic nature. The unexpected aspect of the
present results are the high dimensions of the chaotic attractors. In any

[Figure 5.24 near here: (a) Poincaré section in the plane X = 2250,
−Z(k) versus −Y (k); (b) first return map, −Y (k + 1) versus −Y (k),
both in units of 10^3.]

FIGURE 5.24. The Poincaré map of normal heart activity. Intersection of the phase
portrait with the Y − Z plane (X = const) in the region Q of Figure 5.23. The first
return map is constructed from the Y −coordinate of the previous section. We see that
there may be a simple non-invertible relationship between successive intersections. (From
Babloyantz and Destexhe [21] with permission.)

event there is no way that the ‘conventional wisdom’ of the ECG consisting
of periodic oscillations can be maintained in light of these results.
The above argument suggests that ‘normal sinus rhythm’ may be chaotic.
However in 2009 the journal Chaos initiated a new series on Controversial
Topics in Nonlinear Science. The first of the solicited topics was: Is the
Normal Heart Rate Chaotic? [116]; a question closely related to the above
discussion. One of the pioneers in the application of nonlinear dynamics to
biomedical phenomena is Leon Glass, who was a postdoctoral researcher at the
University of Rochester with Elliott Montroll at the same time I was. Glass
provided a history of this controversial topic as well as an overview of the
contributions [116]. In part he states:

Several different operational definitions of chaos are offered.


Of the articles that comment on the question, “Is the normal
heart rate chaotic?”, most conclude that the evidence was in-
conclusive or negative, and several do not think the question
itself is the right question to pursue. Several articles describe
the application of new methods of time series analysis to help
elucidate the complex dynamical features of heart rate variabil-
ity.

Glass points out that his own research on the effects of periodic stimu-
lation on spontaneously beating chick heart cell aggregates yields chaotic
dynamics [114], as we discussed at the beginning of this section. In spite
of his own research results he concluded that normal heart rate variability
does not display chaotic dynamics. Moreover, the application of the in-
sights resulting from understanding the nonlinear dynamics of arrhythmias
to clinical situations has been more difficult than he had originally imagined.
I could not agree more.

5.5 EEG Data and Brain Dynamics


It has been well over a century since it was discovered that the mammalian
brain generates a small but measurable electrical signal. The electroen-
cephalograms (EEG) of small animals were measured by Caton in 1875,
and in man by Berger in 1925. It had been thought by the mathemati-
cian N. Wiener, among others, that generalized harmonic analysis would
provide the mathematical tools necessary to penetrate the mysterious re-
lations between the EEG time series and the functioning of the brain. The
progress along this path has been slow however, and the understanding
and interpretation of EEGs remains quite elusive. After 137 years one can

determine correlations between the intermittent activity of the brain and


that found in EEG records. There is no taxonomy of EEG patterns which
delineates the correspondence between those patterns and brain activity,
but one is presently being developed as subsequently discussed. The clini-
cal interpretation of EEG records is made by a complex process of visual
pattern recognition and associations made on the part of the clinician, and
significantly less often through the use of Fourier transforms. To some de-
gree the latter technique is less useful than it might be because it is now
evident that EEG time series do not have the properties necessary for the
application of Fourier analysis. These time series are not stationary and
consequently they are not ergodic; two properties of time series necessary
to relate the autocorrelation function of the time series and its spectrum.
The EEG signal is obtained from a number of standard contact configu-
rations of electrodes attached by conductive paste to the scalp. The actual
signal is in the microvolt range and must be amplified several orders of
magnitude before it is recorded. Layne et al. [196] emphasize that the EEG
is a weak signal in a sea of noise so the importance of skilled electrode
placement and inspection of the recording for artifacts cannot be overes-
timated [143]. Note that pronounced artifacts often originate from slight
movements of the electrodes and from contraction of muscles below the
electrodes.
The relationship between the neural physiology of the brain and the over-
all electrical signal measured at the brain’s surface is not understood. In
Figure 5.25 is depicted the complex ramified structure of typical nerve cells
in the cerebral cortex (note its similarity to the fractal structures discussed
in earlier chapters). The electrical signals originate from the interconnec-
tions of the neurons through collections of dendritic tendrils interleaving
the brain mass. These collections of dendrites generate signals that are
correlated in space and time near the surface of the brain, and their prop-
agation from one region of the brain’s surface to another can actually be
followed in real time. This signal is attenuated by the skull and scalp before
it is measured by the EEG contacts.
The long standing use of Fourier decomposition in the analysis of EEG
time series has provided ample opportunity to attribute significance to a
number of frequency intervals in the EEG power spectrum. The power as-
sociated with the EEG signal is essentially the mean square voltage at a
particular frequency. The power is distributed over the frequency interval
0.5 to 100 Hz, with most of it concentrated in the interval 1 to 30Hz. This
range is further subdivided into four sub-intervals, for historical rather than
clinical reasons: the delta, 1-3 Hz; the theta, 4-7 Hz; the alpha, 8-14 Hz; and
the beta for frequencies above 14 Hz. Certain of these frequencies dom-
inate in different states of awareness. A typical EEG signal looks like a

random time series with contributions from throughout the spectrum ap-
pearing with random phases as depicted in Figure 5.26. This aperiodic
signal changes throughout the day and changes clinically with sleep, that
is, its high frequency random content appears to attenuate with sleep, leav-
ing an alpha rhythm dominating the EEG signal. The erratic behavior of
the signal is so robust that it persists, as pointed out by Freeman [105],
through all but the most drastic situations including near-lethal levels of
anesthesia, several minutes of asphyxia, or the complete surgical isolation
of a slab of cortex. The random aspect of the signal is more than appar-
ent, in particular, the olfactory EEG has a Normal amplitude histogram,
a rapidly attenuating autocorrelation function, and a broad spectrum that
resembles ‘1/f noise’ [103].

FIGURE 5.25. The complex ramified structure of typical nerve cells in the cerebral
cortex is depicted.

Here I review the applications of ART to EEG time series obtained un-
der a variety of clinical situations. This application enables us to construct
measures of the degree of irregularity of the time series, such as the cor-

relation and information dimensions. It is also interesting to compare a


number of these measures applied to EEG times series from a brain under-
going epileptic seizure with those of normal brain activity, see for example,
Figure 5.26. A clear reduction in the dimensionality of the time series is
measured for a brain in seizure as compared with normal activity. In ad-
dition to the processing of human EEG seizure data by Babloyantz and
Destexhe [21] the application of the neural net model by Freeman to model
induced seizures in rats is reviewed. A clear correlation between the mea-
sure of the degree of irregularity of the EEG signal and the activity state
of the brain is observed.

5.5.1 Normal activity


Because of its pervasiveness it probably bears repeating that the traditional
methods of analyzing EEG time series rely on the paradigm that all tem-
poral variations are decomposable into harmonic and periodic vibrations.
The attractor reconstruction technique, however, reinterprets the time se-
ries as a multi-dimensional geometrical object generated by a deterministic
dynamical process that is not necessarily a superposition of periodic oscil-
lations. If the dynamics are reducible to deterministic laws, then the phase
portraits of the system converge toward a finite subset of phase space; an
attractor. Thus, the phase space trajectories reconstructed from the data
should be confined to lie along such an attractor. In Figure 5.26 is depicted
the projection of the EEG attractor onto a two-dimensional subspace for
two different pairs of leads using different segments of the corresponding
time series.
Using the standard probe positions, Mayer-Kress and Layne [231] use
the reconstruction technique on the EEG time series to obtain the phase
portraits in Figure 5.26. These phase portraits suggest chaotic attractors
with diverging trajectories; however, the EEG time series seem to be non-
stationary. That is to say, the average position of the time series, defined
over a time interval long compared with most of the EEG activity, is ob-
served to change over an even longer time scale. Also the EEG trajectory is
seen to undergo large excursions in the phase space at odd times. From this
Layne et al. [196] concluded that the EEG time series are non-stationary
and of high dimension, in which case the concepts of ‘attractor’ and ‘fractal
dimension’ may not apply, since these are asymptotic or stationary proper-
ties of a dynamic system. Babloyantz and Destexhe [20] point out that this
non-stationarity is strictly true for awake states, however it appears that
stationarity can be found in the time series from patients that are sleeping
and from those having certain pathologies as discussed in the sequel.

FIGURE 5.26. Typical episodes of the electrical activity of the human brain as recorded
in EEG time series together with the corresponding phase portraits. These portraits
are the two-dimensional projections of three-dimensional constructions. The EEG was
recorded on a FM analog tape and processed off-line (signal digitized in 12 bits, 250Hz
frequency, 4th order 120 Hz low pass filter). (From Babloyantz and Destexhe [20] with
permission.)

The brain wave activity of an individual during various stages of sleep


was analyzed by Babloyantz [18]. She uses the standard partitioning of
sleeping into four stages. In stage one, the individual drifts in and out
of sleep. In stage two, the slightest noise arouses the sleeper, whereas in
stage three a loud noise is required. The final stage is one of deep sleep.
This is the normal first sequence of stages one goes through during a sleep
cycle. Afterwards the cycle is reversed back through stages three and two at
which time dreams set in and the individual manifests rapid eye movement
(REM). The dream state is followed by stage two after which the initial
sequence begins again. The EEG phase portraits for each of these stages
of sleep are depicted in Figure 5.27. It is clear that whatever the form of

the attractor it is not static, that is to say, it varies with the level of sleep.
Correspondingly, the correlation dimension has decreasing values as sleep
deepens.
Mayer-Kress and Layne [230] used the results obtained by a number of
investigators to reach the following conclusions:
(1) The ‘fractal dimension’ of the EEG cannot be deter-
mined regionally, due to non-stationarity of the signal and sub-
sequent limitations in the amount of acceptable data.
(2) EEG data must be analyzed in a comparative sense with
the subject acting as their control.
(3) In a few cases (awake but quiet, eyes closed) with limited
time samples, it appears that the dimension algorithms converge
to finite values.
(4) Dimension analysis and attractor reconstruction could
prove to be useful tools for examining the EEG and complement
the more classical methods based on spectral properties.
(5) Besides being a useful tool in determining the optimal
delay-time for dimension calculations, the mutual information
content is a quantity which is sensitive to different brain states.
The data processing results suggest the existence of chaotic attractors
determining the dynamics of brain activity underlying the observed EEG
signals. This interpretation of the data would be strongly supported by
the existence of mathematical models that could reproduce the observed
behavior; such as in the examples shown earlier in this chapter. One such
model has been developed by Freeman [105] to describe the dynamics of
the olfactory system, consisting of the olfactory bulb (OB), anterior nucleus
(AON) and prepyriform cortex (PC). Each segment consists of a collection
of excitatory or inhibitory neurons which in isolation is modeled by a non-
linear second-order ordinary differential equation. The basal olfactory EEG
is not sinusoidal as one might have expected, but is irregular and aperiodic.
This intrinsic unpredictability is manifest in the approach to zero of the
autocorrelation function of the time series data. This behavior is captured
in Freeman’s dynamic model.
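Freeman's actual equations are tied to the anatomy of the olfactory
system, but the flavor of such a model can be caricatured by a few damped
second-order units coupled through a sigmoid. The couplings W below are
arbitrary illustrative values, not Freeman's parameters:

```python
import numpy as np

def simulate(W, a=0.5, b=1.0, dt=0.01, n_steps=20_000, seed=0):
    """Each unit obeys v'' = -a v' - b v + sum_j W_ij tanh(v_j): a damped
    second-order unit driven through sigmoidal interconnections (Euler)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=len(W))      # unit activities
    u = np.zeros(len(W))             # their time derivatives
    trace = []
    for _ in range(n_steps):
        u = u + dt * (-a * u - b * v + W @ np.tanh(v))
        v = v + dt * u
        trace.append(v[0])
    return np.array(trace)

W = np.array([[0.0, 2.0, -2.0],
              [1.5, 0.0, 2.0],
              [-2.0, 1.5, 0.0]])   # arbitrary excitatory/inhibitory weights
x = simulate(W)   # varying W moves the output between steady, periodic
                  # and aperiodic regimes
```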
The model of Freeman generates a voltage time series from sets of cou-
pled nonlinear differential equations with interconnections that are spec-
ified by the anatomy of the olfactory bulb, the anterior nucleus and the
prepyriform cortex. When an arbitrarily small input pulse is received at
the receptor, the model system generates continuing activity that has the
statistical properties of the background EEG of resting animals. A compar-
ison of the model output with that of a rat is made in Figure 5.28. Freeman
[105] comments:

[Figure 5.27 near here: four two-dimensional phase portraits, panels
(a)–(d).]

FIGURE 5.27. Two-dimensional phase portraits derived from the EEG of: (a) an awake
subject, (b) sleep stage two, (c) sleep stage four, (d) REM sleep. The time series x0 (t)
is made of N = 4000 equidistant points. The central EEG derivation C4-A1 according
to the Jasper system. Recorded with PDP11-44, 100Hz for 40 s. The value of the shift
from 1s to 1d is r = 10Δt. (From Babloyantz [18] with permission.)

Close visual inspection of resting EEG’s and their simulations


show that they are not indistinguishable, but statistical proce-
dures by which to describe their differences have not yet been
devised. Both appear to be chaotic on the basis of the proper-
ties listed, but the forms of their chaotic attractors and their
dimensions have not yet been derived.

The utility of chaotic activity in natural systems has by now been


pointed out by a number of scientists, there being four or five categories de-
pending on one’s discipline. In the olfactory neural context Freeman points
out that chaos provides a rapid and unbiased access to any member of a
collection of latent attractors, any of which may be selected at random on
any inhalation depending on the olfactory environment. He goes on to say:

The low-dimensional ‘noise’ is ‘turned off’ at the moment of


bifurcation to a patterned attractor, and it is ‘turned on’ again
on reverse bifurcation as the patterned attractor vanishes. Sec-
ond, the chaotic attractor provides for global and continuous
spatiotemporally unstructured neural activity, which is essen-
tial to maintain neurons in optimal condition by preventing at-
rophy of disuse in periods of low olfactory demand. Third, one
of the patterned attractors provides for responding to the back-
ground or contextual odor complex. It appears that a novel odor
interferes with the background and leads to failure of conver-
gence to any patterned attractor, and that chaotic OB output
may then serve by default to signal to the PC the presence of
a significant but unidentified departure from the environmental
status quo detected by the receptors. The correct classification
of a novel odor by this scheme can occur as rapidly and reliably
as the classification of any known odor, without requiring an
exhaustive search through an ensemble of classifiable patterns
that is stored in the brain. Fourth, the chaotic activity evoked
by a novel odor provides unstructured activity that can drive
the formation of a new nerve cell assembly by strengthening
of synapses between pairs of neurons having highly correlated
activity (the Hebb rule in its differential form). Thereby chaos
allows the system to escape from its established repertoire of
responses in order to add a new response to a novel stimulus
under reinforcement.

These speculations have been narrowly focused on the dynamics of the


olfactory system, but they are easily generalizable to a much broader neu-
ronal context. For example I have indicated elsewhere in this monograph
how chaos may be an integral part of the learning process. It has also ap-
peared that the dynamics in other complex systems manifest chaos in order
to ensure adaptability of the underlying process. Conrad [63] denotes five
possible functional roles for chaos. The first is the generation of diversity
as in the prey-predator species where the exploratory behavior of the an-
imal is enhanced. The second is the preservation of diversity in which the
diversity of behavior is used by the prey to act unpredictably and thereby
elude being the supper for the predator. The third possible role of chaos is
maintenance of adaptability, that is, to disentrain processes. In populations
this would correspond to keeping a broad age spectrum. The fourth is the
interaction between population dynamics and gene structure (cross-level
effects). Chaos on the genetic level would contribute to the diversity and
adaptability on the population level. Finally, the dissipation of disturbances

is achieved by the sensitivity of orbits on the chaotic attractor to initial


conditions. In this way the attractor acts as a heat bath for the system and
ensures its stability.
[Figure 5.28 near here: EEG traces of the OB and PC and the model PC
trace, over 2 sec.]

FIGURE 5.28. Examples of chaotic background activity generated by the model, simu-
lating bulbar unit activity and the EEGs of the OB, AON and PC. The top two traces
are representative records of the OB and PC EEGs from a rat at rest breathing through
the nose. (From Freeman [104] with permission.)

5.5.2 Epilepsy: reducing the dimension


One of the more dramatic results that has been obtained has to do with
the relative degree of order in the electrical activity of the human cortex
in an epileptic human patient and in normal persons engaged in various
activities, see for example, Figure 5.26. Babloyantz and Destexhe [18] used
an EEG time series from a human patient undergoing a ‘petit mal’ seizure
to demonstrate the dramatic change in the neural chaotic attractor using
the phase space reconstruction technique. Freeman [104] has induced an

epileptiform seizure in the prepyriform cortex of cat, rat and rabbit. The
seizures closely resemble variants of psychomotor or petit mal epilepsy in
humans. His dynamic model, discussed in the preceding section, enables
him to propose neural mechanisms for the seizures, and investigate the
model structure of the chaotic attractor in transition from the normal to
the seizure state. As I have discussed, the attractor is a direct consequence
of the deterministic nature of brain activity, and what distinguishes normal
activity from that observed during epileptic seizures is a sudden drop in the
dimensionality of the attractor. Babloyantz and Destexhe [18] determine
the dimensionality of the brain’s attractor to be 4.05 ± 0.5 in deep sleep
and to have the much lower dimensionality of 2.05 ± 0.09 in the epileptic
state.
Epileptic seizures are manifestations of a characteristic state of brain
activity that can and often does occur without apparent warning. The
spontaneous transition of the brain from a normal state to a epileptic state
may be induced by various means, but is usually the result of functional
disorders or lesions. Such a seizure manifests an abrupt, violent, usually
self-terminating disorder of the cortex; an instability induced by the break-
down of neural mechanisms that ordinarily maintain the normal state of
the cortex and thereby assure its stability. In the previous section evidence
indicated that the normal state is described by a chaotic attractor. Now
the seizure state is also modeled as a chaotic attractor, but with a lower
dimension. Babloyantz and Destexhe [18] were concerned with seizures of
short duration (approximately five seconds in length) known as ‘petit mal.’
This type of generalized epilepsy may invade the entire cerebral cortex and
shows a bilateral symmetry between the left and right hemispheres. As is
apparent in the EEG time series in Figure 5.29 there is a sharp transition
from the apparently noisy normal state to the organized, apparently pe-
riodic epileptic state. The transition from the epileptic state back to the
normal state is equally sharp.
A sequence of stimulations applied to the lateral olfactory tract (LOT)
induce seizures when the ratio of background activity to induced activ-
ity exceeds a critical value [104]. In Figure 5.30 the regular spike train
of the seizure induced by the applied stimulation shown at the left is de-
picted. These data are used to define the phase space variables {x0 (t), x0 (t+
τ ), ..., x0 [t + (m − 1)τ ]} necessary to construct the phase portrait of the sys-
tem in both normal and epileptic states.
In Figure 5.31 is depicted the projection of the epileptic attractor onto a
two-dimensional subspace for four different angles of observation. Babloy-
antz and Destexhe [18] point out that the structure of this attractor is
reminiscent of the spiral or screw chaotic attractor of Rössler [298].

FIGURE 5.29. (a) EEG recording of a human epileptic seizure of petit mal activity.
Channel 1 (left) and channel 3 (right) measure the potential differences between frontal
and parietal regions of the scalp, whereas channel 2 (left) and channel 4 (right) cor-
respond to the measures between vertex and temporal regions. This seizure episode,
lasting 5 seconds is the longest and the least noise-contaminated EEG selected from a
24-hr recording on a magnetic tape of a single patient. Digital PDP 11 equipment was
used. The signal was filtered below 0.2 Hz and above 45 Hz and is sampled in 12 bits at
1200Hz. (b) One pseudocycle is formed from a relaxation wave. (From Babloyantz and
Destexhe [18] with permission.)

Freeman [104] did not associate the attractor he observed with any of the
familiar mathematical forms, but he was able to capture a number of the
qualitative features of the dynamics with calculations using his model. It
is clear in Figure 5.32 that the attractor for a rat during seizures is well
captured by his model dynamics. He acknowledged that the unpredictabil-
ity in the detail of the simulated and recorded seizure spike trains indicates
that they are chaotic, and in this regard he agreed with the conclusion
of Babloyantz and Destexhe. Note the similarity in the attractor depicted
in Figure 5.32 with that for the heart in Figure 5.23c. The latter authors
calculated the dimension of the reconstructed attractor using the limited
data sample available in the single realization of human epileptic seizure.

[Figure 5.30 near here: PC, OB and EMG traces.]

FIGURE 5.30. The last 1.7 sec of a 3 sec pulse train to the LOT (10 V, 0.08 ms, 10/sec)
is shown, with decrement in response amplitudes beginning at 0.7 sec before the end
of the train. Seizure spike trains begin uncoordinated in both structures and settle into
a repetitive train at 3.4/sec with the PC spike leading by 25 ms both the OB spike and
EMG spike from the ipsilateral temporal muscle. (From Freeman [104] with permission.)

The correlation dimension is estimated by embedding the time series in a


space of a dimension much higher than the correlation dimension and cal-
culating the correlation function. The correlation function scales
with the dimension index that is a function of the embedding dimension.
If the dynamics occurs on a chaotic attractor the dimension index should
converge to the correlation dimension as the size of the embedding space
increases. In Figure 5.33 these results are used to determine the dependence
of the correlation dimension on the embedding dimension and are compared
with the correlation dimension for white noise. There is a clear indication
that the epileptic state possesses a chaotic attractor and therefore should
have a deterministic dynamic representation in either three or four vari-
ables. The low dimensionality of the attractor is indicative of the extreme
coherence of the brain during an epileptic seizure relative to normal brain
activity.
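The saturation test of Figure 5.33 can be sketched by sweeping the
embedding dimension p and re-estimating ν at each step, reusing
`delay_embed` and `correlation_dimension` from the earlier sketches; for
white noise ν tracks p, while for a deterministic signal it levels off:

```python
import numpy as np

def dimension_vs_embedding(x, tau=8, p_max=7, n_pts=800):
    """Estimate the correlation dimension nu for embedding dimensions
    p = 1, ..., p_max, as in Figure 5.33."""
    nus = []
    for p in range(1, p_max + 1):
        X = delay_embed(x, m=p, tau=tau)[:n_pts]   # subsample for speed
        nus.append(correlation_dimension(X))
    return nus
```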

[Figure 5.31 near here: projections on the V (t), V (t + T ), V (t + 2T )
axes at rotation angles α = 45°, 135°, 225°, 315°.]

FIGURE 5.31. Phase portraits of human epileptic seizure. First, the attractor is repre-
sented in a three-dimensional phase space. The figure shows two-dimensional projections
after a rotation of an angle α around the V (t) axis. The time series is constructed from
the first channel of the EEG (n = 5000 equi-distant points and τ = 19Δt ). Nearly
identical phase portraits are found for all τ in the range from 17Δt to 25Δt and also in
other instances of seizure. (From Babloyantz and Destexhe [18] with permission.)

Section 5.2 discussed how certain neural activities could be modeled by


chaotic attractors. It is possible to speculate that such chaotic neural ac-
tivity could be benign or even beneficial. Rapp et al. [285] point out the
possible utility of such neural activity in searching memory and in the early
stages of decision making. The arguments rest on the recent quantitative
results in control theory which illustrate how certain complex systems can
be functionally optimized through the introduction of noise [?]. On the
other hand, most efforts have focused on the deleterious effects of chaos,
see for example the notion of dynamic diseases [212], in particular the pos-
sible relationship between neural chaos and failures of neural regulation.
There have been a number of suggestions on the possible role for chaos
in epileptogenesis (see Rapp et al. [285] for a list of early references). The
latter authors make the point that because of the decrease in dimensional-
ity a seizure may not itself be chaotic, that is, there is a decrease in
the disorder of the brain activity: “The seizure might serve as a corrective

resynchronizing response to the loss of coherence of the brain activity that,


in turn, is the result of chaotic neural behavior.”

[Figure 5.32 near here: model trace and rat EEG, each plotted against
itself lagged 30 ms (T versus T + 30).]

FIGURE 5.32. Comparison of the output of the trace from granule cells (G) in the
olfactory model with the OB seizure from a rat (below), each plotted against itself
lagged 30 ms in time. Duration is 1.0 sec; rotation is counterclockwise. (From Freeman
[104] with permission.)

5.5.3 Task-related scaling


The results of calculations of the degree of complexity of the EEG time
series discussed in the preceding section suggest that the erratic signals
from the brain may be correlated with the cognitive activity of the patient.
The complex electrical signal, its change in shape and amplitude are related

to such states as sleep, wakefulness, alertness, problem-solving, and hearing,


as well as to several clinical conditions [51] such as epilepsy [279, 318]
and schizophrenia [170]. This in itself is not a new result; it has been
known for some time that brain activity responds in a demonstrable way
to external stimulation. The direct or peripheral deterministic stimulation
could be electrical, optical, acoustical, etc., depending on the conditions of
the experiment.

[Figure 5.33 near here: ν plotted against embedding dimension p = 1–7.]

FIGURE 5.33. Dependence of the correlation dimension ν on the embedding dimension
p for a white noise signal (squares) and for the epileptic attractor (circles) with τ = 19Δt.
The saturation toward a value ν is the manifestation of a deterministic dynamics. (From
Babloyantz and Destexhe [21] with permission.)

One strategy for understanding the dynamic behavior of the brain has
been general systems analysis, or general systems theory. In this approach a
system is defined as a collection of components arranged and interconnected
in a definite way. As stated by Basar [30] the components may be physical,
chemical, biological or a combination of all three. From this perspective if
the stimulus applied to the system is known (measured) and the response of
the system to this stimulus is known (measured) then it should be possible
to estimate the properties of the system. This, of course, is not sufficient
to determine all the characteristics of the ‘black box’ but is the first step in
formulating what Basar calls a ‘biological system analysis theory’ in which
special modes of thought, unique to the special nature of living systems,
are required. In particular Basar points out the non-stationary nature of

the spontaneous electrical activity of the brain, a property also observed


by Mayer-Kress and Layne [230] and Layne et al. [196] and mentioned in
the preceding section.
In a general systems theory context Basar [30] considers the analog rela-
tion of a driven relaxation oscillator and the spontaneous electrical activity
of the brain. The view is not unlike the coupled bio-oscillator model of
the SA and AV nodes of the heart discussed in Section 3.1, or the exter-
nally excited neurons in Section 5.2. The inappropriateness of the linear
concepts of the superposition of electrical signals and the independence of
different frequency modes is appreciated, and the importance of nonlinear
dynamical concepts is stressed.
Sleep is one of the primary tasks of the brain and is governed by complex
interactions between neurons in multiple brain regions. Everyone is familiar
with the fact that most people wake up for short periods during the night
only to fall back asleep. Scientists have assumed that these brief intervals
of wakefulness are due to external stimulation that alert the sleeping brain
and a person awakens to check them out. If the time and duration of these
periods was determined by the environment then they should be random
and the distribution of durations for both waking and sleeping would be
exponential with different average time intervals. However that was found
not to be the case [202]. These investigators determined that what peo-
ple thought were long uninterrupted intervals of nocturnal sleep actually
contained many short wake intervals independent of what was occurring
in the surroundings. The wake and sleep periods exhibit completely differ-
ent behavior; the sleep periods are exponentially distributed but the wake
periods have a scale-free inverse power-law distribution.
Lo et al. [202] determine that the parameters for the two distributions
have some interesting properties. The average time for sleeping for a cohort
of 39 individuals decreases during the night, starting with a value of 27
minutes in the first two hours, then 22 minutes in the second two hours,
and finally 18 minutes in the last two hours, each with a standard deviation
of about 1 minute. On the other hand, the exponent for the inverse power
law remains the same throughout all three two hour partitions of the data.
They developed a two-state random walk model of the two brain states;
sleep as simple random walk and awake as a random walk in a logarithmic
potential. This simple model is used to calculate the return times in the
presence of a reflecting barrier and reproduces the observed properties of
sleep-wake transitions. The fractal nature of the wake interval distributions
is a consequence of an individual being pulled back to the sleep state by a
‘restoring force’.
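The two-state picture can be caricatured in a few lines; the step size,
barrier position and drift strength below are arbitrary illustrative
choices, not the published parameters of Lo et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

def episode(restoring, dt=0.01, x0=0.5, barrier=2.0, t_max=1000.0):
    """Time for a walker to first cross x = 0, with a reflecting barrier
    capping the excursion. In the 'wake' state an extra drift -c/x (a
    logarithmic potential) pulls the walker back toward the transition."""
    x, t, c = x0, 0.0, 0.4
    while x > 0 and t < t_max:
        drift = -c / x * dt if restoring else 0.0
        x = min(x + drift + np.sqrt(dt) * rng.normal(), barrier)
        t += dt
    return t

wake = np.array([episode(True) for _ in range(1000)])
sleep = np.array([episode(False) for _ in range(1000)])
# Histogram `sleep` on semi-logarithmic axes (roughly exponential tail)
# and `wake` on doubly logarithmic axes (roughly an inverse power law).
```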
This scale invariant pattern of sleep-wake transitions observed in humans
is interesting. But if the inverse power-law distribution is a consequence of

the networking properties of the brain then it might be observed more


broadly. This was in fact found to be the case by Lo et al. [203]. The
sleep recordings of mice, rats, cats and humans were analyzed and the
distributions of sleep and wake episodes durations compared. The inverse
power law of wake periods was observed for all four species and with the
same power-law index. On the other hand, the durations of sleep episodes
follow exponential distributions with differing time scales across species in
a way that is consistent with the allometry relations discussed earlier, that
is, the time scale for the exponential distribution of sleep is proportional
to a power of the TBM of the species. Moreover the probability density
is not a simple inverse power law but seems to reach a ‘hump’ at long
times. They introduced a secondary mechanism based on the notion of
self-organized criticality and phase transition to account for this change in
the probability density. In the next chapter I show that a phase transition is
a generic property of a large class of complex networks and one result is an
asymptotic transition of the probability density of the wake time intervals
from inverse power law to a hump or shoulder, which is interpreted as a
transition from non-equilibrium to equilibrium behavior of the underlying
neural network.

5.6 Retrospective
The material included in this chapter spans the realm of chaotic activity
from the social influence on epidemiology and the internal dynamics of a
single neuron up to complex biochemical reactions. In all
these areas the rich dynamic structure of chaotic attractors is seen and
scientists have been able to exploit the concepts of nonlinear dynamics to
answer some of the fundamental questions that were left unanswered or
ambiguous using more traditional techniques.
Infectious diseases may be divided into those caused by microparasites
such as viruses, bacteria and protozoa and those caused by macroparasites
such as helminths and arthropods. Childhood epidemics of microparasitic
infections such as mumps and chicken pox show almost periodic yearly
outbreaks and those cyclic patterns of an infection have been emphasized
in a number of studies [8]. In Figure 5.1 the number of reported cases of
infection each month for measles, chicken pox and mumps in New York City
and measles in Baltimore is depicted. The obvious irregularities in these
data were explained historically in terms of stochastic models [8, 29], but the
subsequent applications of chaotic dynamics to these data have resulted in
a number of interesting results. In Section 5.1 we review the Schaffer and
Kott [306] analysis of the data in Figure 5.1.

A research news article in Science by Pool [276] points out that the
outbreaks of measles in New York City followed a curious pattern before
the introduction of mass vaccinations. When children returned to school
each winter there was a sudden surge of infections corresponding to the
periods the students remained indoors exchanging germs. Over and above
this yearly cycle there occurred a biyearly explosion in the number of cases
of measles with a factor of five to ten increase in the number of cases
reported – sometimes as many as 10,000 cases a month. He points out,
however, that this biennial cycle did not appear until after 1945. Prior to
this, although the yearly peak occurred each winter, there did not seem to
be any alternating pattern of mild and severe years. In the period 1928 to
1944 there was no organized pattern of mild and severe years; a relatively
severe winter might be followed by two mild ones, or vice versa. This is
the intermittency that is arguably described by means of chaos. It should
be pointed out that these dramatic yearly fluctuations were ended with the
implementation of a vaccination program in the early 1960’s.
If we attempt to model physiological structures as complex networks
arising out of the interaction of fundamental units, then it stands to reason
that certain clinically observed failures in physiological regulation occur
because of the failure of one or more of these fundamental units. One ex-
ample of such a system is the mammalian central nervous system, and
the question that immediately comes to mind is whether this system can
display chaotic behavior. Rapp et al. [284] present experimental evidence
that strongly suggests that spontaneous chaotic behavior does occur. In the
same vein Hayashi et al. [148] show that sinusoidal electrical stimulation
of the giant internode cell of the freshwater alga Nitella flexilis causes en-
trainment, quasiperiodic behavior and chaos just as did the two oscillator
model of the heart discussed previously. We review both of these examples
in Section 5.2.
The first dynamical system that was experimentally shown to manifest a
rich variety of dynamics involved nonequilibrium chemical reactions. Ar-
neodo et al. [14] comment that one of the most common features of these
chemical reactions is the alternating sequence of periodic and chaotic states,
the Belousov-Zhabotinskii reaction being the most thoroughly studied of
the oscillating chemical reactions. I briefly indicated some of the experi-
mental evidence for the existence of chaos in well-controlled nonequilibrium
reactions in Section 5.3.
There are a number of mathematical models of the heart with an imag-
inative array of assumed physical and biological characteristics. In Section
5.4 we display some of the laboratory data that suggests that the electrical
properties of the mammalian heart are manifestations of a chaotic attrac-
tor [123, 186]. One such indication comes from the time series interbeat


intervals, that is, the number of and interval between R waves in the elec-
trocardiographic signal. The ordered set of RR intervals forms a suitable
time series when the RR interval magnitude is plotted versus the interval
number in the sequence of heart beats. I also indicate how to determine
the fractal dimension of this time series.
The chapter closes with a brief discussion of the history and analysis
of EEG time series in Section 5.5. One point of interest was associating a
scaling index with the various stages of brain activity and in turn relating
the scaling index to a fractal dimension. The fractal dimension quantified
the 1/f variability depicted in the EEG power spectral density. The second
point of interest was the changing appearance of the electrical activity of
the brain as manifest in the differences in the phase portraits of the time
series. The differences between the brain activity in the awake state and
the four stages of sleep were found to be evident, with the fractal dimension
changing dramatically between the various states. But the change was most
significant during epileptic seizures when the dimension would be reduced
by at least a factor of two from its value in deep sleep.



Chapter 6
Physiological Networks: The Final
Chapter?

I hope this revision makes it clear that both chaos and fractals have revo-
lutionized the way scientists think about complexity. In particular the way
scientists think about complex physiologic phenomena in medicine. Heart
beats, stride intervals and breathing intervals do not have normal statistics;
they are inverse power-law processes. These long-tailed distributions imply
that the underlying dynamics cannot be specified by a single scale such
as a rate or frequency, but span multiple scales that are interconnected
through their nonlinear dynamics. However, the question of why has not been
answered. Why should chaos and fractals be so important in physiology
and medicine? These two generic properties of complexity have in the past
decade dovetailed into what is optimistically called Network Science. If such
a new science is to exist it would overarch the traditional disciplines of bi-
ology, chemistry, physics, etc. because the properties of a complex network
would not be dependent on the mechanisms of a particular context. In the
present context we refer to this as a Physiological Network. Let me explain.
In this final chapter I tie together a number of the formal concepts dis-
cussed in this revision. This synthesis is assembled from the perspective
of complex networks. I could have provided an overview of the area, but
there are a number of excellent reviews from a variety of perspectives start-
ing with that of nonequilibrium statistical physics [4], the mathematics of
inverse power-law distributions [247] and the dynamics of social networks
[355, 356] to name a few. My own efforts to accomplish such a synthe-

sis required a book [386], which I recommend to those who enjoy finding
out why the mathematics is necessary. Another approach would have been
to describe things for a less mathematical audience as done in general
[27, 47, 356] and in medicine [381]. But I have elected to follow a third
path here; one that is based on a particular model that manifests most if
not all the properties I wish to relate to chaos and fractals in physiology
and medicine.
This network point of view has been developed into what has been termed
Network Physiology [33] but personally I prefer the term Fractal Physiology;
half the title of the present book. I believe the intent of coining this new
name was to capture relations between the topological structure of networks
and physiological function and I use it in this final chapter. However the
science is far from establishing that all fractal properties are a consequence
of dynamic networks and until that time I retain my preference.

6.1 Introduction to Complex Networks


A network is a set of entities (elements, nodes, units) that interact through
a series of connections or links. Such networks can be fairly simple as when
the links are static and have been historically described using graph the-
ory. Complex networks have dynamic nonlinear interactions between the
elements that change character over time including switching on and off.
A social network is different from a biological network not only through
those properties that depend on the difference between social and biologi-
cal interactions (those properties that are mechanism specific), but through
the architecture of the network as well (the network’s connection topol-
ogy). The interaction between people is both short-range via conversation,
but also long-range via the computer and other mechanical devices. These
interactions appear very different from the chemical interactions between
molecules. However, the short-range interactions between molecules are sig-
nificantly modified when the substance undergoes a phase transition, chang-
ing the short-range to long-range interactions. Is there something more than
an analogy that can be exploited for the understanding of physiologic net-
works?
Network science assumes that there are common properties that make
the complex phenomena in different disciplines identifiable as networks;
properties that can be extracted, that make such networks complex, and
that do not depend on the quantitative nature of the interaction
between the elements of a network. General concepts with which we are all
familiar such as consensus, cooperation, synchronization, harmony, and
coordination are all nuanced versions of a general qualitative notion of


agreement that appears in most if not all disciplines. It would be strange
if the measurable notion of harmony in biology was totally independent
of the idea of consensus in sociology, or was completely disconnected from
that of the quantification of synchronization in physical processes. At a
high level of abstraction these concepts would be scientifically interrelated
if not identical and it is at this level that network science would prove its
value.
Arguably the most important and perhaps least well understood aspect of
collective dynamics is how groups reach consensus. Recently there has been
significant research activity into the connection between opinion formation
and the critical dynamics of phase transition in complex networks [237,
346, 407] also with time-dependent coupling between the elements [41, 340,
341]. Phase transitions in the dynamics of complex networks with local
interactions generates long-range correlations that have been conjectured
to be the dominant factor in the synchronized motion of swarms [346, 405]
and in the neuronal avalanches within the human brain [35, 36].
Other properties of importance in the description of complex networks
are the formation of clusters and how such clusters interact with one an-
other. The clusters consisting of aggregates of neurons within the brain and
those that constitute separate organs in the human body are very differ-
ent as are their modes of interaction. There is a wide variety of structure
and function even within physiologic networks as is subsequently reviewed.
Another difference between social and physiologic networks is the way in
which they grow over time. But before we get to these details let's set the
stage.

6.1.1 A little history


The study of complex networks ranges from the Internet involving the con-
nectivity of routers and servers, to neuron clusters requiring the coupling
of individual neurons, to the dynamics of social gatherings; all described
by heterogeneous, scale-free degree distributions [4, 101]. The first theoret-
ical study of randomly formed networks was by Erdös and Rényi [87] who
produced predictions regarding their quantitative properties, most of which
turned out to be wrong. By wrong I do not mean there were any errors
in the analysis; there were not. However, their predictions did not corre-
spond to the complex networks found in the real world. On the other hand,
their results were key in determining which mathematical properties are
important for real-world complex networks and which are not. The seminal
paper of Watts and Strogatz [355] established that real-world networks are
distinct from these ideal random networks. Networks with completely ran-
dom connections have a Poisson distribution in the number of connections


between elements, whereas real-world networks are not characterized that
way and instead show surprisingly long tails in the number of links to a
single element.
A consequence of these long tailed distributions is often the lack of av-
erage network properties. This is a reflection of the fractal statistics ob-
served in complex networks, which was one of the general themes of Where
Medicine Went Wrong [381]. The important fact regarding the lack of av-
erages was also emphasized by Barabási [27] who observed in Linked :

Erdös and Rényi’s random universe is dominated by aver-
ages. It predicts that most people have roughly the same num-
ber of acquaintances; most neurons connect roughly to the same
number of other neurons; most companies trade with roughly
the same number of other companies; most Web sites are vis-
ited by roughly the same number of visitors. As nature blindly
throws the links around, in the long run no node is favored or
singled out.

The next step beyond the random network in which elements are either
connected or not, was the social network in which the links can be either
weak or strong. The strong links exist within a family and among the closest
of friends, for example those that are called in case of emergency. On the
other hand, there are the weak links, such as link me to friends of my
friends, those I regularly meet at the grocery store, and so on. In a random
network clusters form in which everyone knows everyone else. These clusters
are formed from strong ties and can now be coupled together through weak
social interactions. Watts and Strogatz [355] were able to show that by
randomly coupling arbitrarily distant clusters together with weak links a
new kind of network was formed, the small world network. The connectivity
of small world networks is described by scale-free inverse power laws and
not Poisson distributions. In small world networks individuals are much
closer together than they are in random networks thereby explaining the
six degrees of separation phenomenon.
Small world theory demonstrates that one of the more important prop-
erties of networks is distance. This concept of distance is related to the
abstract notion of a metric and changes from that in a social network, to
that in a transportation network, to that in a neural network; each network
has its own intrinsic metric. I was informally exposed to this idea when I
was a graduate student by my friend and mentor Elliott Montroll and as I
explained elsewhere [381]:

Another example of social distance that Montroll discussed
with his students and post docs had to do with the random net-


work mathematician Erdös and what is termed the Erdös num-
ber. We were told by Montroll that in the late 1960s, there was
a famous Hungarian mathematician who was homeless, with-
out permanent academic position, who traveled the world vis-
iting other mathematicians to generate, collaborate and solve
interesting problems in mathematics. This romantic image of a
vagabond mathematical genius, roaming the world in search of
problems, has stayed with me, although, I must say, my emo-
tional reaction to the image has changed from envy to sadness
over the years. Erdös was uniquely prolific, so much so that a
number had been invented to measure the intellectual distance
between other mathematicians and him. A person who collab-
orated with Erdös has an Erdös number of one; a person who
collaborated with an Erdös coauthor has an Erdös number of
two and so on. It was fascinating to me when nearly 40 years
later, after Montroll and Erdös had both died, that I read in
Linked [27], a list connecting the book’s author Barabási with
Erdös, with an Erdös number of four. Included in that list was
George H. Weiss, a colleague and friend of mine, with whom I
have published a paper. George has an Erdös number of two,
thereby bestowing on me an Erdös number of three. It is curi-
ous how even such an abstract connection gives one a sense of
continuity with the past.

A substantial number of mechanisms have been proposed to explain the
observed topology, that is, the connectivity of the elements within a real-
world network. The most popular mechanism for nearly a decade was that
of preferential attachment. This mechanism is based on the assumption
that scale-free networks, those with inverse power-law distributions, grow
in time and that the newly arriving elements tend to establish connections
preferentially with the elements having the larger number of links [4, 78].
Preferential attachment has a long lineage in sociology and goes by the
early name the Matthew Effect [234]. However, there exists a wide class of
networks that do not originate by this mechanism and the model presented
here addresses one class of these latter networks whose dynamics lead to
criticality [41, 340, 341], the underlying mechanism that produces phase
transitions.

6.1.2 Inverse power laws


Most if not all the time series from physiologic networks were shown to
have inverse power-law distributions [381] and the power-law index could be


expressed in terms of the fractal dimension. Where Medicine Went Wrong
[381] was written for a lay audience and did not contain any equations,
much less the development of the underlying mathematical models that
entail inverse power laws. Consequently it is of value to connect the inverse
power laws manifest by complex networks with the distributions generated
by the empirical data from physiologic networks. I have previously tried to
make those connections seamless and continue to do so in the sequel.
The focus of a large part of the network science research com-
munity has been on generating the scale-free degree distribution
P(k) ∝ k^(−ν)   (6.1)
where k is the number of links to an element and determining the properties
of the underlying network in terms of the power-law index ν. This is not
unlike the surge of interest in fractals that occurred a quarter century
ago, where a great deal of effort and analysis was devoted to identifying
fractal phenomena and only later did the understanding and insight into
the meaning of fractals for the underlying phenomena become apparent.
Scientists are somewhat further along on the complex network learning
curve and consequently they understand how to generate topology by using
the underlying network dynamics. We [41, 340, 341] confirm that the scale-
free topology emerges from the critical behavior of the network dynamics
[101, 269]. It is evident that for a critical value of the control parameter
K = Kc the cooperative interaction between the dynamical elements of a
regular two-dimensional lattice generates a phase transition in which the
majority of the elements transition to a critical state. This critical state
has a scale-free network of interdependent elements with ν ≈ 1.
In this chapter I do not adopt the widely held view that a network’s
complexity can be assessed strictly in terms of its topology. Instead the
emergence of temporal complexity through the intermittency of events in
time is emphasized, in addition to the topological complexity entailed by
the dynamics. An event is interpreted as a transition of a global variable
between the critical states produced by the phase transition. In this way
we identify two distinct forms of complexity. One associated with the con-
nectivity of the elements of the network and the other associated with the
variability of the time interval between events, both of which are shown to
have inverse power-law distributions.
In Section 6.2 a relatively simple decision making model (DMM) is in-
troduced [41, 340, 341] whose critical dynamics are shown to manifest the
dual complexity for time and connectivity. Direct calculation shows that
a DMM network undergoes a phase transition similar to that observed in
the Ising model [101], resulting in an inverse power-law degree distribution,
that is, a scale-free distribution in the connectivity of the network elements.


I discuss two kinds of complex networks in this section. A static network
where the constitutive elements form an unchanging structure and a dy-
namic network generated by the self-organization of the elements located
on the backbone structure of the static network. Furthermore I examine
the propensity for cooperation of both the static and dynamic networks by
implementing DMM on both. The time behavior of the DMM network is
investigated and temporal complexity is discussed in this section. Calcula-
tions reveal a scale-free distribution density of the consensus times τ ,

ψ(τ) ∝ τ^(−μ),   (6.2)

that is separate and distinct from the scale-free degree distribution. The
consensus time is the length of time the majority of the elements stay within
one of the two available states in the critical condition.
Criticality is an emergent property of DMM and provides the dynamic
justification for its observed phase transition. This fundamental property
of criticality is phenomenologically explored in Section 6.3. Criticality is
shown to be a central mechanism in a number of complex physiologic phe-
nomena including neuronal avalanches [35] and multiple organ dysfunction
syndrome [49] as discussed subsequently.

6.2 The Decision Making Model (DMM)


Consider a model network in which all the elements have the same dynamic
range, but whose couplings to other elements within the network changes
with time. The network dynamics of each element of the decision making
model (DMM) network is determined by the two-state master equation
[340, 341]:

dp_i^(l)(t)/dt = −g_ij^(l)(t) p_i^(l)(t) + g_ji^(l)(t) p_j^(l)(t)   (6.3)

where p_j^(l) is the probability of element l being in the state j = 1, 2 and
l = 1, ..., N . The DMM network uses a social paradigm of decision makers
who choose between the state 1 (yes or +) and the state 2 (no or -) at
each point in time t, and although the choice is between two states the
dynamics are fundamentally different from the popular voter models. The
interaction among the elements in the network is realized by setting the
coupling coefficients to the time-dependent forms:

g_ij^(l)(t) ≡ (g/2) exp[K(M_i^(l)(t) − M_j^(l)(t))/M^(l)],   i ≠ j = 1, 2   (6.4)


The parameter M^(l) denotes the total number of nearest neighbors to ele-
ment l, and M_1^(l)(t) and M_2^(l)(t) give the numbers of nearest neighbors in
the decision states ‘yes’ and ‘no’, respectively.
We define the global variable in order to characterize the state of the
network:

ξ(t) ≡ [N_1(t) − N_2(t)]/N,   (6.5)

where N is the total number of elements, and N_1(t) and N_2(t) are the
numbers of elements in the state “yes” and “no” at time t, respectively.
Individuals are not static but according to the master equation they ran-
domly change their opinions over time, thereby making M_1^(l)(t) and M_2^(l)(t)
vacillate in time as well. However, the total number of nearest neighbors is
time independent: M_1^(l) + M_2^(l) = M^(l).
An isolated individual can be represented by a vanishing control param-
eter K = 0 and consequently that individual’s decision would randomly
oscillate between ‘yes’ and ‘no’, with Poisson statistics at the rate g. This
value of the control parameter would result in a collection of non-interacting
random opinions, such as that shown in the top panel of Figure 6.1. As the
control parameter is increased the coupling among the elements in the net-
work increases and consequently the behavior of the global variable reflects
this change. As the critical value Kc is approached the two states become
more clearly defined even in the case where all elements are coupled to
all other elements within the network. All-to-all coupling is often assumed
in the social sciences for convenience and we make that assumption tem-
porarily. Subsequently a more realistic assumption is made that restricts
the coupling of a given element to only its nearest neighbors.
The elements of the network are coupled when K > 0; an individual in
the state ‘yes’ (‘no’) makes a transition to the state ‘no’ (‘yes’) faster or
slower according to whether M_2 > M_1 (M_1 > M_2) or M_2 < M_1 (M_1 < M_2),
respectively, where we have suppressed the superscript l. The quantity Kc
is the critical value of the control parameter K, at which point a phase-
transition to a self-organized, global majority state occurs. The efficiency of
a network in facilitating global cooperation can be expressed as a quantity
proportional to 1/Kc . Herein that self-organized state is identified as con-
sensus. On the other hand, expressing network efficiency through consensus
has the effect of establishing a close connection between network topology
and the ubiquitous natural phenomenon of synchronization. In this way a
number of investigators have concluded that topology plays an important
role in biology, ecology, climatology and sociology [12, 54, 273, 383].


FIGURE 6.1. The variation of the mean-field global variable ξ(t) as a function of time t. For the network configuration: (top) N = 500, K = 1.05 and g = 0.01; (middle) N = 1500, K = 1.05 and g = 0.01; (bottom) N = 2500, K = 1.05 and g = 0.01.

Typical DMM calculations of the global variable for the control param-
eter greater than the critical value Kc = 1 in the all-to-all coupling con-
figuration for three sizes of the network are similar to those depicted in
Fig. 6.1. However this figure depicts a different dependence of the vari-
ability of the dynamics, namely on the size of the network. The three
panels display the global variable with the control parameter just above
the critical value. The dynamics appear random in the top panel for 500
elements. The dynamics in the central panel reveal two well-defined criti-
cal states with fluctuation for 1500 elements. Finally, the dynamics in the
lower panel indicate a clear decrease in the size of the fluctuations for 2500
elements. The variability in the time series resembles the thermal noise ob-
served in physical processes but there is no such mechanism in the DMM.
The erratic variations are the result of the finite number of elements in the
network. Moreover the magnitude of the fluctuations is found to conform


to the Law of Large Numbers in probability theory and to decrease in size
as 1/√N.
The critical state is referred to as consensus, since all the individuals are
in agreement. The duration time τ of the consensus state is the length
of time where either ξ(t) > 0 or ξ(t) < 0. We use the consensus time to
calculate the time average of the modulus |ξ (t)|. We denote this average
with the symbol ξeq in Fig. 6.2 where the dependence of the global state of
the network on the value of the control parameter K is evident.
Unlike the voter models of social phenomena that consider an explicit
rewiring of the network at each time step to mimic dynamics [237] the DMM
provides a smooth change in attitude that depends on the local opinions
of the dynamic individual [340, 341]. The DMM is consequently minimally
spatial with the inclusion of only nearest neighbor interactions.

6.2.1 Topological Complexity


The dynamics of the DMM have been confused with those of the Ising model
in physics. In the special case when the number of nearest neighbors M is
the same for all the elements, the quantity M/K in the exponential of the
time-dependent coupling coefficients is analogous to physical temperature.
However, in the g → 0 limit the DMM becomes identical to the Ising
model as established by Turalska et al. [341]. The equivalence between the
two models even in the limit is strictly formal since there is no temperature
in the DMM, but it is still a useful concept, especially the physical concept
of phase transition. As examples of conditions highlighting this utility, we
consider two cases. The first case is for all-to-all coupling, where M = N
and there is no spatial structure for the network. This situation is one
that is often considered in the social sciences, but finds little use in the
physical domain. The second case is the simplest two-dimensional lattice
where each element is coupled to its four nearest neighbors, M = 4. The
thermodynamic condition where the number of elements becomes infinitely
large N = ∞ yields the critical value of the control parameter Kc = 1 in
the all-to-all coupling case [340], whereas the critical parameter has the
value Kc = 2 ln(1 + √2) ≈ 1.7627 for the Ising model [256]. In Fig. 6.2 the
DMM is seen to undergo a phase transition at these two critical values.
We see that for a very small value of the coupling strength g = 0.01 the
numerical evaluation of ξeq (K) on a 100 × 100 lattice is very close to the
theoretical prediction of Onsager [256].
The patterns generated by the Ising model at criticality correspond
to the emergence of correlation links yielding a scale-free network with
the inverse power-law degree distribution. The DMM also generates such
connectivity distributions as depicted in Fig. 6.3. One difference between


the DMM and Ising degree distributions is the value of the power-law index.
In the DMM the index is near 1.0, while in the Ising model it is near 1.5.
FIGURE 6.2. The phase diagram for the global variable ξeq as a function of the control parameter K. The solid and dashed lines are the theoretical predictions for the fully connected and two-dimensional regular lattice network, respectively. In both cases N = ∞ and the latter case is the Onsager prediction [256]. The circles are the DMM calculation for K = 1.70.

The versatility of the DMM enables us to go beyond the topological com-
plexity depicted in Fig. 6.3 and examine temporal complexity as well. To
realize temporal complexity we rely on numerical results and focus our
attention on the condition K = 1.70, which, although slightly smaller
than the Onsager theoretical prediction, is compatible with the emergence
of cooperative behavior due to the phase transition. The dynamically-
induced network topology can be derived using the ‘correlation network’
approach, where a topology is generated by linking only those elements
with cross-correlation levels above a given threshold [101]. Thus, after run-
ning the DMM on a two-dimensional network for a time sufficient to allow
all transients to fade away Turalska et al. [341] evaluate the two-point
cross-correlation coefficient between all pairs of individuals. If the cross-
correlation coefficient between two individuals is smaller than the arbitrar-
ily chosen threshold value Θ = 0.61, the link between them is removed
in the newly formed network; if the cross-correlation is greater than this
value the individuals remain coupled. This prescription is found to generate
a scale-free network with the inverse power index α ≈ 1, as shown in Fig.


6.3. Turalska et al. [341] also evaluate the distribution density p(l) of the
Euclidian distance l between two linked elements and find that the average
distance is of the order of 50, namely, of the order of the size of the two-
dimensional grid 100 × 100. This average distance implies the emergence
of long-range links that go far beyond the nearest neighbor coupling and
is essential to realizing the rapid transfer of information over a complex
network [43, 185].

FIGURE 6.3. The degree distribution for the Dynamically Generated Complex Topology created by examining the dynamics of elements placed on a two-dimensional regular lattice with the parameter values, N = 100 × 100, g = 0.01 and K = 1.69 in the DMM.

They construct from the DMM dynamically-induced network a network
backbone, called a Dynamically Generated Complex Topology (DGCT) net-
work and then study its efficiency by implementing the DMM dynamics on
it. It is convenient to compare the cooperative behavior of the DGCT net-
work with another seemingly equivalent scale-free degree network with the
same α ≈ 1. This latter scale-free network uses a probabilistic algorithm
[178] and we refer to it as an ad hoc network, and implement the DMM on
it as well as on the DGCT network. The phase transition occurs on both
networks at K = 1, namely, at the same critical value of the control pa-
rameter corresponding to the all-to-all coupling condition and consequently
both are critical.


FIGURE 6.4. Consensus survival probability. Thick solid and dashed lines refer to the DMM implemented on a two-dimensional regular lattice with control parameter K = 1.70 and to dynamics of the ad hoc network evaluated for K = 1.10, respectively. In both cases g = 0.01. The thin dashed lines are visual guides corresponding to the scaling exponents μ = 1.55 and μ = 1.33, respectively. The thin solid line fitting the shoulder is an exponential.

6.2.2 Temporal Complexity


Topological complexity is now fairly well understood from a number of per-
spectives [27, 47, 356] and we are beginning to understand its relation
to dynamics [340, 341]. However temporal complexity seems to be more
complicated. The apparently intuitive notion that topological complexity
with a scale-free distribution of links, P(k) ∝ k^(−ν), and time complexity
with a scale-free distribution of consensus times, ψ(τ) ∝ τ^(−μ), are closely
related, is surprisingly shown to be wrong. Fig. 6.4 illustrates the consensus
survival probability

Ψ(t) = ∫_t^∞ ψ(τ) dτ ∝ t^(1−μ)   (6.6)

corresponding to the critical value of the control parameter Kc = 1.70,
generating the scale-free topology of Fig. 6.3. Although emerging from a
simple spatial network, that is, one with no structural complexity, the sur-
vival probability is scale-free with α = μ − 1 ≈ 0.55 over more than four
decades of time intervals.


The survival probability of the consensus state emerging from the ad hoc
network, with Kc = 1, is limited to the time region 1/g, and for N → ∞ is
expected [340] to be dominated by the shoulder depicted in Fig. 6.4. The
shoulder is actually a transition from the inverse power-law to an exponen-
tial distribution in time. The exponential is a signature of the equilibrium
regime of the network dynamics and is explained in detail elsewhere us-
ing a formal analogy to Kramers theory of chemical reactions [340]. It is
worth noting that this shoulder looks remarkably like the hump observed
in the sleep-wake studies of Lo et al. [203] in the last chapter. Their intu-
ition that the hump in the wake distribution was a consequence of a phase
transition in the underlying neural network that induced long-range order
into the network interactions is consistent with the explicit network calcu-
lation carried out here. The major difference is that the present calculation
did not require a separate assumption about self-organized criticality; the
phase transition emerged as a consequence of the network dynamics.

6.3 Criticality
Topology and criticality are the two central concepts that arise from the
application of dynamics to the understanding of the measurable properties
of the brain through the lens of complex networks. Topology is related
to the inverse power-law distributions of such newly observed phenomena
as neuronal avalanches [35, 36] and criticality [101] has to do with the
underlying dynamics that gives rise to the observed topology. Criticality
was first systematically studied in physics for systems undergoing phase
transitions as a control parameter is varied. For example, water transitions
from a liquid to a solid as temperature is lowered and to a gas as the
temperature is raised. The temperatures at which these transitions occur are
called critical points or critical temperatures. Physical systems consist of a
large number of structurally similar interacting units and have properties
determined by local interactions. As a critical point is reached, the critical
value of the control parameter, the interactions suddenly change character.
In the case of the phase transition from water vapor to fluid, what had been
the superposition of independent dynamical elements becomes dominated
by short-range interactions; on further temperature decrease the second
critical point is reached and one has long-range coordinated activity: ice.
The dynamical source of these properties was made explicit through the
development of DMM, which is related to but distinct from the Ising model
used by others in explaining criticality in the context of the human brain
[387].


Zemanova et al. [410] point out that investigators [325] have determined
that the anatomical connectivity of the animal brain has a number of prop-
erties similar to those of small-world and scale-free networks and organizes
into clusters (communities) [158, 159]. However the topology of these
networks remains largely unclear.

6.3.1 Neuronal Avalanches


Beggs and Plenz [35] point out that living neurons are capable of generating
multiple patterns but go on to identify a new mode of activity based on an
analogy with nonlinear physical phenomena. They hypothesize that cortical
neural networks are organized in a critical state in which events are fractal
and consequently described by power laws. They demonstrated that the
propagation of spontaneous activity in cortical networks obeys an inverse
power law with index −3/2 for event size and refer to this as “neuronal
avalanches”. The event size is the number of neurons firing together and
the avalanche of firings is a generic property of cortical neural networks.
The network theory approach is consistent with the procedure widely
adopted in neuroscience to define functional connections between different
brain regions [101, 326]. Numerous studies have shown the scale-free char-
acter of networks created by correlated brain activity as measured through
electroencephalography [235], magnetoencephalography [326] or magnetic
resonance imaging [82]. Fraiman et al. [101] used the Ising model to ex-
plain the origin of the scale-free neuronal network, and found the remark-
able result that the brain dynamics operate at the corresponding critical
state. The DMM research was, in part, inspired by these results [101], and
yielded the additional discovery that the emergence of consensus produces
long-range connections as well as scale-free topology.
Consider the DMM results in the light of the recent experimental findings
on brain dynamics [45]. The analysis of Bonifazi et al. [45] established that,
in a manner similar to other biological networks, neural networks evolve by
gradual change, incrementally increasing their complexity, and rather than
growing along the lines of preferential attachment, neurons tend to evolve
in a parallel and collective fashion. The function of the neuronal network is
eventually determined by the coordinated activity of many elements, with
each element contributing only to local, short-range interactions. However,
despite this restriction, correlation is observed between sites that are not
adjacent to each other, a surprising property suggesting the existence of a
previously incomprehensible long-distance communication [66]. The DMM
dynamical approach, as well as other network models manifesting critical
behavior [101], afford the explanation that the local but cooperative interac-


tions embed the elements in a phase-transition condition that is compatible
with long-range interdependence.
The neuronal network differs in detail from the DMM network in a number
of ways, one being that there is a threshold for a neuron to fire; the event.
Consequently, as explained by Beggs and Plenz [35], in complex networks
events such as earthquakes [142], forest fires [224], and nuclear chain re-
actions [144] emerge as one element exceeds threshold and triggers other
elements to do so as well. This sequencing of firings initiates a cascade
that propagates through the larger network. They also point out that the
spatial and temporal distributions of such cascades or ‘avalanches’ had
been well described previously by inverse power laws [263]. At the time
they concluded that such a network is in a critical state referencing self-
organized criticality [23] as the possible underlying mechanism. However
we know that criticality is a robust feature of complex networks where it
can be induced by the dynamics in a number of ways.
One of the important features of neuronal avalanches is the balance be-
tween information transmission and network stability. The study of Beggs
and Plenz [35] was designed to address this balance and in so doing to
answer two questions:

(1) Do cortical networks in vitro produce avalanches that comply
with physical theories of critical systems?
(2) If cortical networks are in the critical state, what conse-
quences does this have for information processing?

They defined the size of an avalanche as the number of electrodes n
activated during the avalanche. The resulting empirical inverse power-law
probability density P (n) they obtained is depicted in Figure 6.5

P(n) ∝ n^α   (6.7)

and the cutoff at the maximum number of 60 electrodes is evident. When
the data from the various cultures are binned at their own average in-
terevent interval the power-law exponent was observed to have the con-
stant value −1.5 ± 0.008. A number of tests reinforced the findings, ensuring
that the power-law index indeed has this constant value. They further sug-
gest that a neural network with a power law index of −3/2 has optimal
excitability.
This discovery of avalanches in neural networks [35, 36] has aroused sub-
stantial interest in the neurophysiology community and, more generally,
among complex networks researchers [338]. The main purpose of this sec-
tion was to introduce the idea that the phenomenon of neural avalanches
is generated by the same cooperative properties as those responsible for


the surprising effect of cooperation-induced synchronization, such as illus-
trated using the DMM. The phenomenon of neural entrainment [172] is
another manifestation of the same cooperative property. Grigolini et al.
[140] address the important issue of further connecting neural avalanches
and criticality. Avalanches are thought to be a manifestation of critical-
ity, and especially self-organized criticality [58, 274]. Fraiman et al. [101]
hypothesize that the brain stays near the critical point of a second order
phase transition and explain the inverse power-law behavior using the Ising
model. At the same time, criticality is accompanied by long-range correla-
tion [58] and a plausible model for neural dynamics is expected to account
for the surprising interaction between agents separated by relatively large
distances. A general agreement exists in the literature that the brain func-
tion rests on these crucial properties, and phase transition theory [327] is
thought to afford the most important theoretical direction for the research
work on this subject, with criticality emerging at a specific single value of
a control parameter as previously discussed.

FIGURE 6.5. Probability distribution of neuronal avalanche size P(n) versus size n (number of electrodes) on log-log axes: (black) size measured using the total number of activated electrodes; (teal) size measured using total local field potential (LFP) amplitude measured at all electrodes participating in the avalanche. (adapted from [35])


6.3.2 Multiple Organ Dysfunction Syndrome (MODS)


The failure of multiple organ systems as a consequence of shock was first
identified over a quarter century ago [337]. If left untreated shock results in
death. Buchman et al. [48] observed that once multiple organs have failed,
nearly all patients die, despite aggressive support. They go on to point out
that the notion of ‘cause’ in multiple organ failure is elusive, but ultimately
traceable from a physiologic defect back to the failure of homeostasis. Al-
ternatively there is the integrated biological systems perspective that shows
how being linked to a network results in stability. Buchman [49, 50] seeks
the explanation of this new disease, multiple organ dysfunction syndrome
(MODS), at the network level.
From one point of view the human body may be regarded as a system of
networks. This view exploits an analogy with small world networks where
the individual networks, for example the respiratory network has strong
internal coupling but is relatively weakly coupled to the cardiovascular net-
work, another network with strong internal links. Multiple organ coupling
is consequently a possible manifestation of a network that is small-world-
like, where the internal dynamics of organs are critical and therefore act as
a collective unit, whereas the coupling between organs is of a different kind.
What distinguishes the multiple organ physiologic network from other real-
world networks that have been described by small world theory is that the
networks being linked together (the organs) are not identical and neither
are the links. Consequently there is, as yet, no sufficiently rich network
theory from which to calculate the properties of multiple organ networks
or their failures, but there are suggestions. The idea that Goldberger and
I had, that disease is the loss of complexity [130, 381], may be applied in
this context; it and the related notion of Pincus [271], that regularity increases
the isolation of organs, both suggest that the ‘cause’ of MODS may be the
loss of complexity due to breaking the weak ties that couple the physiologic
networks to one another. Godin and Buchman [120] suggested that unbri-
dled inflammation could cause the uncoupling of organs from one another
thereby precipitating MODS.
It is reasonable to speculate that widespread network failure produces
MODS [49, 50], as was done above. A number of investigators have observed
the uncoupling of autonomic regulation in patients going into clinical septic
shock [83, 121], as well as being produced by severe brain injury [131]. A
rigorous understanding of MODS however requires a computational net-
work model that can be tested against the various experiments that have
been conducted and which can be used to design a few new ones. The lack
of fundamental models was emphasized by Buchman [49]:
It is vital to create models that embed homeostatic mech-
anisms into larger networks that themselves confer robustness


to perturbation and thereby protect the community of self. But
more important, and much harder, will be determining whether
a particular model or class of models properly captures the
protective behaviors reflected across multiple resolutions, from
genes to humans.

6.4 Finale
General systems theory, cybernetics, catastrophe theory, nonlinear dynam-
ics, chaos theory, synergetics, complexity theory, complex adaptive systems,
and fractal dynamics, have all contributed to our understanding of physi-
ology and medicine. Some have passed out of fashion whereas others have
proven to be foundational. As the title of this chapter suggests network sci-
ence is the ‘modern’ strategy devised to understand the intermittent, scale-
invariant, nonlinear, fractal behavior of physiologic structure and function.
Part of what makes network science an attractor is that although it follows
a long tradition of theoretical methods and constructs it retains its intellec-
tual flexibility. Network Science engenders understanding of the complexity
of living networks through emergent behavior.
I was tempted to end this book with a review of what has been covered,
but on reflection that seemed to be ending on a sour note. After some
tossing and turning I decided that lists of outstanding problems that might
stimulate the interest of some bright cross-disciplinary researcher would be
of more value. One such list of questions for future research was compiled
by Sporns et al. [325]:

• What are the best experimental approaches to generate large and
comprehensive connectional data sets for neural systems, especially
for the human brain?
• What is the time scale for changes in functional and effective connec-
tivity that underlie perceptual and cognitive processes?
• Are all cognitive processes carried out in distributed networks? Are
some cognitive processes carried out in more restricted networks,
whereas others recruit larger subsets?
• Does small-world connectivity reflect developmental and evolutionary
processes designed to conserve or minimize physical wiring, or does
it confer other unique advantages for information processing?
• What is the relationship between criticality, complexity and informa-
tion transfer?


• Is the brain optimized for robustness towards lesions, or is such ro-
bustness the by-product of an efficient processing architecture?

• What is the role of hubs within scale-free functional brain networks?

• How can scale-free functional networks arise from the structural or-
ganization of cortical networks?

It is interesting that since 2004 when this list was compiled a number of
partial answers to some questions have been obtained.
In a complex network many elements are interconnected but only a few
play a crucial role and are considered central for the network to carry out
its function. In the Network Science literature these elements are called
hubs, and one of the questions concerned their role in scale-free functional
brain networks. Various quantitative measures for identifying these central
elements had been developed but they did not readily lend themselves to
the study of the structure and function of the human brain. To rectify
this situation in 2010 Joyce et al. [175] developed an innovative centrality
measure (leverage centrality) that explicitly accounts for the local hetero-
geneity of an element’s connectivity within the network. This previously
neglected heterogeneous property determines how efficiently information is
locally transmitted and identifies elements that are highly influential within
a network. It is noteworthy that these elements that are the most influen-
tial are not necessarily the elements with the greatest connectivity; they
need not be hubs. The hierarchical structure of brain networks was a prime
candidate for use of the new centrality metric and fMRI data was used to
verify its utility.
In another investigation Hayasaka and Laurienti resolved inconsis-
tencies across studies by others that had used networks deduced
from fMRI data. They did not merely apply the techniques of network
theory to the construction of cortical networks but showed that network
characteristics, such as the domain over which the connectivity distribu-
tion was inverse power law, depend sensitively on how the fMRI data are
averaged (by region or by voxel). They demonstrated that voxel-based net-
works, being more fine-grained, exhibit many desirable properties, such as
the co-locality of high connectivity and high efficiency within modules that
region-based networks do not share.
There is no natural end point for the discussion of the dovetailing
of chaos and fractals into physiologic networks and medicine, nor is there
a natural end to the present edition of this book. So in the tradition of
Sherlock Holmes let me bow out by saying “The game’s afoot,” and you
are invited to join in the chase.



References

[1] K. Aihara, G. Matsumoto and Y. Ikegaya, “Periodic and non-periodic responses of a periodically forced Hodgkin-Huxley oscillator,” J. Theor. Biol. 109, 249-269 (1984).

[2] P.A. Agutter and J.A. Tuszynski, “Analytic theories of allometric scaling”, The J. Exp. Biol. 214, 1055-1062 (2011).

[3] S. Akselrod, D. Gordon, F. A. Ubel, P. C. Shannon, A. L. Barger and R. I. Cohen, “Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control,” Science 213, 220-222 (1981).

[4] R. Albert and A.-L. Barabasi, “Statistical Mechanics of Complex Networks”, Rev. Mod. Phys. 74, 47 (2002).

[5] P. Allegrini, P. Paradisi, D. Menicucci and A. Gemignani, “Fractal complexity in spontaneous EEG metastable-state transitions: new vistas on integrated neural dynamics”, Front. Physiol. 1:128, doi: 10.3389/fphys.2010.00128 (2010).

[6] W.A. Altemeier, S. McKinney and R.W. Glenny, “Fractal nature of regional ventilation distribution”, J. Appl. Physiol. 88, 1551-1557 (2000).


[7] B. Andresen, J.S. Shiner and D.E. Uehlinger, “Allometric scaling and
maximum efficiency in physiological eigen time”, PNAS 99, 5822-
5824 (2002).

[8] R. M. Anderson, “Directly transmitted viral and bacterial infections of man,” in The Population Dynamics of Infectious Diseases: Theory and Applications (ed. R. M. Anderson), pp. 1-37, London, Chapman and Hall (1982).

[9] R. M. Anderson and R. M. May, Science 215, 1053 (1982).

[10] F. T. Angelakos and G. M. Shephard, Circ. Res. 5, 657 (1957).

[11] M.E.F. Apol, R.S. Etienne and H. Olff, “Revisiting the evolutionary
origin of allometric metabolic scaling in biology”, Funct. Ecol. 22,
1070-1080 (2008).

[12] A. Arenas, A. Diaz-Guilera and J. Kurths, Phys. Rep. 469, 93 (2008).

[13] V.I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics, Benjamin, New York (1968).

[14] A. Arneodo, F. Argoul, P. Richetti and J. C. Roux, “The Belousov-Zhabotinskii reaction: a paradigm for theoretical studies of dynamical systems,” in Dynamical Systems and Environmental Models, eds. H.G. Bothe, W. Ebeling, A.M. Zurzhanski & M. Peschel (Akademie Verlag, Berlin, 1987) p. 122.

[15] J. L. Aron and I.B. Schwartz, “Seasonality and period doubling bi-
furcations in an epidemic model,” J. Theor. Biol. 110, 665 (1984).

[16] Y. Ashkenazy, J.M. Hausdorff, P. Ivanov, A.L. Goldberger and H.E. Stanley, “A Stochastic Model of Human Gait Dynamics”, Physica A 316, 662-670 (2002).

[17] I. Asimov, The Human Brain, Signet Science Lib., New York. (1963).

[18] A. Babloyantz, “Evidence of chaotic dynamics of brain activity during the sleep cycle,” in Dimensions and Entropies in Chaotic Systems, ed. G. Mayer-Kress, Springer-Verlag, Berlin (1986).

[19] A. Babloyantz and A. Destexhe, “Low dimensional chaos in an instance of epilepsy,” Proc. Nat. Acad. Sci. USA 83, 3515-3517 (1986).

[20] A. Babloyantz and A. Destexhe, “Chaos in neural networks,” in Proceed. Int. Conf. on Neural Networks, San Diego, June (1987).


[21] A. Babloyantz and A. Destexhe, “Is the normal heart a periodic os-
cillator?” Biol. Cybern. 58, 203-211 (1988).
[22] A. Babloyantz, I. M. Salazar and C. Nicolis, “Evidence of chaotic dynamics during the sleep cycle,” Phys. Lett. 111A, 152-156 (1985).
[23] P. Bak, C. Tang and K. Wiesenfeld, “Self-organized criticality: an
explanation of the 1/f noise”, Phys Rev Lett 59,381–384 (1987).
[24] J.R. Banavar, J. Damuth, A. Maritan and A. Rinaldo, “Allometric
cascades”, Nature 421, 713 (2003).
[25] J.R. Banavar, J. Damuth, A. Maritan and A. Rinaldo, “Scaling in
Ecosystems and the Linkage of Macroecological Laws”, Phys. Rev.
Lett. 98, 068104 (2007).
[26] J.R. Banavar, M.E. Moses, J.H. Brown, J. Damuth, A. Rinaldo, R.M.
Sibly and A. Maritan, “A general basis for quarter-power scaling in
animals”, Proc. Natl. Acad. Sci. USA 107, 1516-1520 (2010).
[27] A.-L. Barabasi, Linked: How Everything is Connected to Everything Else and What it Means for Business, Science, and Everyday Life, Plume, New York (2003).
[28] G.I. Barenblatt and A.S. Monin, “Similarity principles for the biol-
ogy of pelagic animals”, Proc. Nat. Acad. Sci. USA 99, 10506-10509
(1983).
[29] M. S. Bartlett, Stochastic Population Models in Ecology and Epidemi-
ology, London, Methuen (1960).
[30] E. Basar, Biophysical and Physiological Systems Analysis, Addison-
Wesley, London (1976).
[31] E. Basar, H. Flohr, H. Haken and A. I. Mandell, eds. Synergetics of
the brain, Springer-Verlag, Berlin (1983).
[32] E. Basar, A. Gönder, C. Ozesmi and P. Ungan, “Dynamics of brain rhythmic and evoked potentials. III. Studies in the auditory pathway, reticular formation, and hippocampus during sleep,” Biol. Cybernetics 20, 161-169 (1975).
[33] A. Bashan, R.P. Bartsch, J.W. Kantelhardt, S. Havlin and P.Ch. Ivanov, “Network physiology reveals relations between network topology and physiological function”, Nature Comm. 3, 702, DOI: 10.1038/ncomms1705 (2012).


[34] J.B. Bassingthwaighte, L.S. Liebovitch and B.J. West, Fractal Phys-
iology, Oxford University Press, New York (1994).
[35] J.M. Beggs and D. Plenz, “Neuronal avalanches in neocortical cir-
cuits”, J. Neurosci. 23, 11167-77 (2003).
[36] J.M. Beggs and D. Plenz, “Neuronal avalanches are diverse and pre-
cise activity patterns that are stable for many hours in cortical slice
cultures”, J. Neurosci. 24, 5216-29 (2004).
[37] G. Benettin, L. Galgani and J. M. Strelcyn, “Kolmogorov entropy
and numerical experiments,” Phys. Rev. A 14, 2338 (1976).
[38] J. Beran, Statistics of Long-Memory Processes, Monographs on
Statistics and Applied Probability 61, Chapman & Hall, New York
(1994).
[39] M. Berry, “Diffractals,” J. Phys. A 12, 781-797 (1979).
[40] M. V. Berry and Z. V. Lewis, “On the Weierstrass-Mandelbrot fractal
function,” Proc. Roy. Soc. Lond. 370A, 459 (1980).
[41] S. Bianco, E. Geneston, P. Grigolini and M. Ignaccolo, Physica A 387, 1387 (2008).
[42] J.W. Blaszcyk and W. Klonowski, “Postural stability and fractal dy-
namics”, Acta Neurobiol. Exp. 61, 105-112 (2001).
[43] M. Boguna et al., Nature Physics 5, 74 (2009); M. Boguna and D.
Krioukov, Phys. Rev. Lett. 102, 058701 (2009).
[44] F. Bokma, “Evidence against universal metabolic allometry”, Func.
Eco. 18, 184-187 (2004).
[45] P. Bonifazi, M. Goldin, M.A. Picardo, I. Jorquera, A. Cattani, G.
Bianconi, A. Represa, Y. Ben-Ari, and R. Cossart, “GABAergic hub
neurons orchestrate synchrony in developing hippocampal networks”.
Science 326, 1419-1424 (2009).
[46] J.H. Brown, G.B. West and B.J. Enquist, “Yes, West, Brown and
Enquist’s model of allometric scaling is both mathematically correct
and biologically relevant”, Funct. Ecol. 19, 735-738 (2005).
[47] M. Buchanan, Nexus, W.W. Norton, New York (2002).
[48] T.G. Buchman, J.P. Cobb, A.S. Lapedes and T.B. Kepler, “Complex
systems analysis: a tool for shock research”, SHOCK 16, 248-251
(2001).


[49] T.G. Buchman, “The Community of the Self”, Nature 420, 246-251
(2002).

[50] T.G. Buchman, “Physiologic failure: multiple organ dysfunction syndrome”, in Complex Systems Science in BioMedicine, T.S. Deisboeck and S. A. Kauffman, Eds., Kluwer Academic Plenum Publishers, New York (2006).

[51] T. H. Bullock, R. Orkand and A. Grinnell, Introduction to the Nervous Systems, W.H. Freeman, San Francisco (1981).

[52] W.A. Calder III, Size, Function and Life History, Harvard University Press, Cambridge, MA (1984).

[53] C.G. Caro, T.J. Pedley, R.C. Schroter and W.A. Seed, The Mechanics of Circulation, Oxford University Press, Oxford (1978).

[54] C. Castellano, S. Fortunato and V. Loreto, Rev. Mod. Phys. 81, 591
(2009).

[55] C. Cattuto, W. Van den Broeck, A. Barrat, V. Colizza, J.F. Pinton and A. Vespignani, "Dynamics of person-to-person interactions from distributed RFID sensor networks", PLoS ONE 5, e11596 (2010).

[56] M.A. Changizi, "Principles underlying mammalian neocortical scaling", Biol. Cybern. 84, 207-215 (2001).

[57] G.A. Chauvet, "Hierarchical functional organization of formal biological systems: a dynamical approach. I. The increase of complexity by self-association increases the domain of stability of a biological system", Philos. Trans. R. Soc. Lond. B Biol. Sci. 339, 425-444 (1993).

[58] D. Chialvo, Nature Physics 6, 744-750 (2010).

[59] D. L. Cohn, "Optimal systems. Parts I and II," Bull. Math. Biophys. 16, 59-74 (1954); 17, 219-227 (1955).

[60] P. Collet and J. P. Eckmann, Iterated Maps on the Interval as Dynamical Systems, Birkhäuser, Basel (1980).

[61] J.J. Collins and I. N. Stewart, “Coupled Nonlinear Oscillators and the
Symmetries of Animal Gaits”, J. Nonlinear Sci. 3, 349-392 (1993).

[62] J.J. Collins and C.J. De Luca, "Random walking during quiet standing", Phys. Rev. Lett. 73, 764-767 (1994).


[63] M. Conrad, “What is the use of chaos?” in Chaos, ed. A.V. Holden,
Manchester University Press, Manchester UK (1986).

[64] J.P. Crutchfield, D. Donnelly, D. Farmer, G. Jones, N. Packard and R. Shaw, Phys. Lett. 76, 1 (1980).

[65] J. P. Crutchfield, J. D. Farmer, N. H. Packard and R. S. Shaw, "Chaos," Scientific American, 46-57 (1987).

[66] I.D. Couzin, "Collective minds", Nature 445, 715 (2007); I.D. Couzin, "Collective cognition in animal groups", Trends in Cognitive Sciences 13, 36-43 (2009).

[67] J. H. Curry, "On the Hénon transformation," Commun. Math. Phys. 68, 129 (1979).

[68] G. Cuvier, Recherches sur les ossemens fossiles, Paris (1812).

[69] H. Cyr and S.C. Walker, "An Illusion of Mechanistic Understanding", Ecology 85, 1802-1804 (2004).

[70] L. Danziger and G. L. Elmergreen, "Mathematical theory of periodic relapsing catatonia," Bull. Math. Biophys. 16, 15-21 (1954).

[71] C.A. Darveau, R.K. Suarez, R.D. Andrews and P.W. Hochachka,
“Allometric cascade as a unifying principle of body mass effects on
metabolism”, Nature 417, 166-170 (2002).

[72] C.A. Darveau, R.K. Suarez, R.D. Andrews and P.W. Hochachka, "Darveau et al. reply", Nature 421, 714 (2003).

[73] C. Darwin, On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (1859).

[74] G.S. Dawes, H.E. Cox, M.B. Leduc, E.C. Liggins and R.T. Richards,
J. Physiol. Lond. 220, 119-143 (1972).

[75] R. Dawkins, The Selfish Gene, Oxford University Press, New York
(1976).

[76] K. Dietz. Lect. Notes in Biomath. 11, 1 (1976).

[77] Dodds P.S., Rothman D.H. and Weitz J.S., “Re-examination of the
“3/4-law” of Metabolism”, J. Theor. Biol. 209, 9-27 (2001).

[78] S.N. Dorogovtsev and J.F.F. Mendes, Adv. Phys. 51, 1079 (2002).


[79] F. Dyson, Origins of Life, Cambridge University Press, Cambridge (1985).
[80] J.P. Eckmann, “Roads to turbulence in dissipative dynamic systems,”
Rev. Mod. Phys. 53, 643 (1981).
[81] J.P. Eckmann and D. Ruelle, “Ergodic theory of chaos and strange
attractors,” Rev. Mod. Phys. 57, 617-656 (1985).
[82] V.M. Eguiluz, D.R. Chialvo, G.A. Cecchi, M. Baliki and A.V. Apkarian, "Scale-free brain functional networks", Phys. Rev. Lett. 94, 018102 (2005).
[83] M.S. Ellenby et al., “Uncoupling and recoupling of autonomic regu-
lation of the heart beat in pediatric septic shock”, Shock 16, 274-277
(2001).
[84] B.J. Enquist, "Universal scaling in trees and vascular plant allometry: toward a general quantitative theory linking plant form and function from cells to ecosystems", Tree Physiology 22, 1045-1064 (2002).
[85] R.S. Etienne, M.E. Apol and H. Olff, “Demystifying the West, Brown
& Enquist model of the allometry of metabolism”, Funct. Ecol. 20,
394-399 (2006).
[86] I. Ekeland, Mathematics and the Unexpected, The Univ. Chicago
Press. Chicago (1988).
[87] P. Erdös and A. Rényi, Magyar Tud. Akad. Mat. Kutato Int. Közl.
5, 17 (1960).
[88] D. K. Faddeev, in Mathematics Vol. 3, eds. A.D. Aleksandrov, A.N.
Kolmogorov and M. A.Lavrenrev, MIT Press, Cambridge (1964).
[89] D. Farmer, J. Crutchfield, H. Froehling, N. Packard and R. Shaw,
“Power spectra and mixing properties of strange attractors,” Ann.
N. Y. Acad. Sci. 357, 453-472 (1980).
[90] J. D. Farmer, E. Ott and J.A. Yorke, "The dimension of Chaotic
Attractors,” Physica D 7, 153-180 (1983).
[91] D. Farmer, J. Crutchfield, H. Froehling, N. Packard and R. Shaw,
“Power spectra and mixing properties of strange attractors,” Ann.
N. Y. Acad. Sci. 357, 453-472 (1980).
[92] J. D. Farmer, E. Ott and J.A. Yorke, “The dimension of Chaotic
Attractors,” Physica D 7, 153-180 (1983).


[93] J. Feder, Fractals, Plenum Press, New York (1988).


[94] M. J. Feigenbaum, "Quantitative universality for a class of nonlinear transformations," J. Stat. Phys. 19, 25 (1978); "The universal metric properties of nonlinear transformations," J. Stat. Phys. 21, 669 (1979).
[95] S. D. Feit, “Characteristic exponents and strange attractors,” Com-
mun. Math. Phys. 61, 249 (1978).
[96] H.A. Feldman and T.A. McMahon, “The 3/4 mass exponent for en-
ergy metabolism is not a statistical artifact”, Resp. Physiol. 52, 149-
163 (1983).
[97] R. J. Field, "Chemical organization in time and space," Am. Sci. 73, 142-150 (1987).
[98] R.L. Flood and E.R. Carson, Dealing with Complexity, 2nd Edition,
Plenum Press, New York (1993); 1st Edition (1988).
[99] J. Ford, “Directions in Classical Chaos,” in Directions in Chaos, ed.
H. Bai-Liu, World Sci., Singapore (1987).
[100] K. Fraedrich, “Estimating the Dimension of Weather and Climate
Attractors,” J. Atmos. Sci. 43, 419-432 (1986).
[101] D. Fraiman, P. Balenzuela, J. Goss and D.R. Chialvo, Physica A 387,
1387 (2009).
[102] A.M. Fraser and H.L. Swinney, “Independent coordinates for strange
attractors from mutual information”, Phys. Rev. A 33, 1134–1140
(1986)
[103] W. J. Freeman, Mass action in the nervous system, Chapter 7, Aca-
demic Press, New York, pp 489 (1975).
[104] W. J. Freeman, “Petit mal seizures in olfactory bulb and cortex
caused by runaway inhibition after exhaustion of excitation,” Brain
Res. Rev. 11, 259-284 (1986).
[105] W. J. Freeman, “Simulation of chaotic EEG patterns with a dynamic
model of the olfactory system,” Biol. Cybern. 56, 139-150 (1987).
[106] S. Freud and J. Breuer, Studies on Hysteria (1895).
[107] A. Fuchs, R. Friedrich, H. Haken and D. Lehmann, "Spatio-temporal analysis of multi-channel alpha EEG map series," (preprint) (1986).


[108] Y.C. Fung, Biodynamics, Springer-Verlag, New York (1984).

[109] J. Gayon, "History of the Concept of Allometry", Amer. Zool. 40, 748-758 (2000).

[110] P.D. Gingerich, "Arithmetic or geometric normality of biological variation: an empirical test of theory", J. Theor. Biol. 204, 201-221 (2000).

[111] L.R. Ginzburg, O. Burger and J. Damuth, "The May threshold and life-history allometry", Biol. Lett., doi:10.1098/rsbl.2010.0452 (2010).

[112] R. Gjessing, "Beiträge zur Kenntnis der Pathophysiologie des katatonen Stupors: I. Mitteilung über periodisch rezidivierenden katatonen Stupor, mit kritischem Beginn und Abschluss," Arch. Psychiat. Nervenkrankh. 96, 391-392 (1932).

[113] P. Glansdorff and I. Prigogine, Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, New York (1971).

[114] M.R. Guevara, L. Glass and A. Shrier, "Phase locking, period-doubling bifurcations, and irregular dynamics in periodically stimulated cardiac cells," Science 214, 1350-1353 (1981).

[115] L. Glass, M. R. Guevara and R. Perez, "Bifurcation and chaos in a periodically stimulated cardiac oscillator," Physica D 7, 89-101 (1983).

[116] L. Glass, "Introduction to Controversial Topics in Nonlinear Science: Is the Normal Heart Rate Chaotic?", Chaos 19, 028501 (2009).

[117] D.S. Glazier, "Beyond the '3/4-power law': variation in the intra- and interspecific scaling of metabolic rate in animals", Biol. Rev. 80, 611-662 (2005).

[118] D.S. Glazier, "The 3/4-power law is not universal: Evolution of isometric, ontogenetic metabolic scaling in pelagic animals", BioScience 56, 325-332 (2006).

[119] D.S. Glazier, "A unifying explanation for diverse metabolic scaling in animals and plants", Biol. Rev. 85, 111-138 (2010).

[120] P.J. Godin and T.G. Buchman, "Uncoupling of biological oscillators: a complementary hypothesis concerning the pathogenesis of multiple organ dysfunction syndrome", Crit. Care Med. 24, 1107-16 (1996).


[121] P.J. Godin et al., "Experimental human endotoxemia increases cardiac regularity: results from a prospective, randomized, crossover trial", Crit. Care Med. 24, 1117-24 (1996).
[122] A. L. Goldberger, L. J. Findley, M. R. Blackburn and A. J. Mandell, "Nonlinear dynamics in heart failure: Implications of long-wavelength cardiopulmonary oscillations," Am. Heart J. 107, 612-615 (1984).
[123] A. L. Goldberger, B. J. West and V. Bhargava, "Nonlinear mechanisms in physiology and pathophysiology. Toward a dynamical theory of health and disease," Proceedings of the 11th IMACS World Congress, Oslo, Norway, Vol. 2, eds. B. Wahlstrom, R. Henrikson and N. P. Sunby, North-Holland, Amsterdam (1985).
[124] A. L. Goldberger, V. Bhargava, B. J. West and A. J. Mandell, “On
a mechanism of cardiac electrical stability: the fractal hypothesis,”
Biophys. J. 48, 525-528 (1985).
[125] A. L. Goldberger, K. Kobalter and V. Bhargava, IEEE Trans. Bio Med. Eng. 33, 874 (1986).
[126] A. L. Goldberger, V. Bhargava, B. J. West and A. J. Mandell, "Some observations on the question: is ventricular fibrillation chaos?", Physica D 19, 282-289 (1986).
[127] A. L. Goldberger and B. J. West, "Chaos in physiology: health or disease?", in Chaos in Biological Systems, pp. 1-5, eds. A.V. Holden and L.F. Olsen, Plenum (1987).
[128] A. L. Goldberger and B. J. West, “Applications of nonlinear dynam-
ics to clinical cardiology,” in Perspectives in biological dynamics and
Theoretical Medicine, Ann. N.Y. Acad. Sci. 504, 195-215 (1987).
[129] A. L. Goldberger and B. J. West, "Fractals: a contemporary mathematical concept with applications to physiology and medicine," Yale J. Biol. Med. 60, 104-119 (1987).
[130] A. L. Goldberger, D. R. Rigney and B. J. West, "Chaos and Fractals in Human Physiology", Scientific American 262 (1990).
[131] B. Goldstein, D. Toweill, S. Lai, K. Sonnenthal and B. Kimberly, "Uncoupling of the autonomic and cardiovascular systems in acute brain injury", Am. J. Physiol. 275, R1287-92 (1998).
[132] J. P. Gollub, T. O. Brunner and D. G. Danby, "Periodicity and Chaos in Coupled Nonlinear Oscillators," Science 200, 48-50 (1978).


[133] J. P. Gollub, E. J. Romer and J. G. Socolar, "Trajectory Divergence for Coupled Relaxation Oscillators: Measurements and Model," J. Stat. Phys. 23, 321-333 (1980).
[134] S.J. Gould, "Allometry and size in ontogeny and phylogeny", Biol. Rev. Cam. Philos. Soc. 41, 587-640 (1966).
[135] J.H. Graham, K. Shimizu, J.E. Emlen, D.C. Freeman and J. Merkel, "Growth models and the expected distribution of fluctuating asymmetry", Biol. J. Linn. Soc. 80, 57-65 (2003).
[136] P. Grassberger and I. Procaccia, “Measuring the strangeness of
strange attractors,” Physica D 9, 189-208 (1983).
[137] P. Grassberger and I. Procaccia, "Characterization of strange attractors", Phys. Rev. Lett. 50, 346 (1983).
[138] H. S. Greenside, G. Ahlers, P. C. Hohenberg and R. W. Walden, “A
simple stochastic model for the generation of turbulence in Rayleigh
Benard convection,” Physica D 5, 322-334 (1982).
[139] B.T. Grenfell, C.S. Williams, O.N. Bjornstad and J.R. Banavar,
“Simplifying biological complexity”, Nature 21, 212-213 (2006).
[140] P. Grigolini, M. Zare, A. Svenkeson and B.J. West, “Neural Dynam-
ics: Criticality, Cooperation, Avalanches and Entrainment between
Complex Networks”, in Criticality in Neural Systems, Ed. D. Plenz
and E. Neibur, John Wiley & Sons, New York (2012).
[141] M. R. Guevara and L. Glass, "Phase locking, period doubling bifurcations and chaos in a mathematical model of a periodically driven oscillator," J. Math. Biol. 14, 1-23 (1982).
[142] B. Gutenberg and C.F. Richter, Seismicity of the Earth, Princeton University Press, Princeton, NJ (1956).
[143] J. Hanley, "Electroencephalography in Psychiatric Disorders: Parts I and II," in Directions in Psychiatry, Vol. 4, Lesson 7, pp. 1-8; Lesson 8, pp. 1-8 (1984).
[144] T.E. Harris, The Theory of Branching Processes, Dover, New York (1989).
[145] J.M. Hausdorff, P.L. Purdon, C.-K. Peng, Z. Ladin, J.Y. Wei and A.L. Goldberger, "Fractal dynamics of human gait: stability of long-range correlations in stride interval fluctuations", J. Appl. Physiol. 80, 1448-1457 (1996).


[146] J.M. Hausdorff, S.L. Mitchell, R. Firtion, C.K. Peng, M.E. Cudkowicz, J.Y. Wei and A.L. Goldberger, "Altered fractal dynamics of gait: reduced stride-interval correlations with aging and Huntington's disease", J. Appl. Physiol. 82, (1997).

[147] J.M. Hausdorff, Y. Ashkenazy, C.-K. Peng, et al., "When human walking becomes random walking: fractal analysis and modeling of gait rhythm fluctuations", Physica A 302, 138-147 (2001).

[148] H. Hayashi, M. Nakao and K. Hirakawa, "Chaos in the self-sustained oscillation of an excitable biological membrane under sinusoidal stimulation," Phys. Lett. A 88, 265-268 (1982).

[149] H. Hayashi, M. Nakao and K. Hirakawa, "Entrained, harmonic, quasiperiodic and chaotic responses of the self-sustained oscillation of Nitella to sinusoidal stimulation," J. Phys. Soc. Japan 52, 344-351 (1983).

[150] H. Hayashi, S. Ishizuka and K. Hirakawa, "Transition to chaos via intermittency in the Onchidium Pacemaker Neuron," Phys. Lett. A 98, 474-476 (1983).

[151] H. Hayashi, S. Ishizuka, M. Ohta and K. Hirakawa, "Chaotic behavior in the Onchidium giant neuron under sinusoidal stimulation," Phys. Lett. A 88, 435-438 (1982).

[152] M. Hénon, "A two-dimensional mapping with a strange attractor," Comm. Math. Phys. 50, 69 (1976).

[153] A.A. Heusner, “Energy metabolism and body size: I. Is the 0.75 mass
exponent of Kleiber’s equation a statistical artifact?”, Resp. Physiol.
48, 1-12 (1982).

[154] A.A. Heusner, “Size and power in mammals”, J. Exp. Biol. 160, 25-
54 (1991).

[155] B. Hess, Trends in Biochem, Sci. 2, 193-195 (1977).

[156] B. Hess and M. Markus, "Order and chaos in biochemistry," Trends in Biochem. Sci. 12, 45-48 (1987).

[157] A.V. Hill, “The dimensions of animals and their muscular dynamics”,
Sci. Prog. 38, 209-230 (1950).


[158] C.C. Hilgetag, G.A.P.C. Burns, M.A. O’Neill, J.W. Scannell, and
M.P. Young, “Anatomical Connectivity Defines the Organisation of
Clusters of Cortical Areas in Macaque Monkey and Cat”, Phil Trans
R Soc Lond B 355, 91-110 (2000).
[159] C.C. Hilgetag and M. Kaiser, “Clustered organization of cortical con-
nectivity”, Neuroinformatics 2, 353-360 (2004).
[160] M.A. Hofman, “Size and shape of the cerebral cortex in mammals. I.
The cortical surface”, Brain Behav. Evol. 27, 28-40 (1985).
[161] “Heart rate variability”, European Heart Journal 17, 354-381 (1996).
[162] J.R.M. Hosking, "Fractional Differencing", Biometrika 68, 165-176 (1981).
[163] J. Hou, H. Zhao and D. Huang, “The Computation of Atrial Fibrilla-
tion Chaos Characteristics Based on Wavelet Analysis”, Lect. Notes
in Comp. Sci. 4681, 803-809 (2007).
[164] J. L. Hudson and J. C. Mankin, “Chaos in the Belousov-Zhabotinsky
reaction,” J. Chem. Phys. 74, 6171-6177 (1981).
[165] J.S. Huxley, Problems of Relative Growth, Dial Press, New York
(1931).
[166] R. E. Ideker, G. J. Klein and L. Harrison, Circ. Res. 63, 1371 (1981).
[167] N. Ikeda, "Model of bidirectional interaction between myocardial pacemakers based on the phase response curve," Biol. Cybern. 43, 157-167 (1982).
[168] N. Ikeda, H. Tsuruta and T. Sato, "Difference equation model of the entrainment of myocardial pacemaker cells based on the phase response curve," Biol. Cybern. 42, 117-128 (1981).
[169] L. Isella, J. Stehlé, A. Barrat, C. Cattuto, J.F. Pinton and W. Van den Broeck, "What's in a crowd? Analysis of face-to-face behavioral networks", J. Theor. Biol. 271, 166-180 (2011).
[170] T. M. Itil, “Qualitative and quantitative EEG findings in schizophre-
nia,” Schizophrenia Bulletin 3, 61-79 (1977).
[171] P. Ch. Ivanov, M.G. Rosenblum, C.-K. Peng, J. Mietus, S. Havlin,
H.E. Stanley and A.L. Goldberger, “Scaling behavior of heartbeat in-
tervals obtained by wavelet-based time-series analysis”, Nature 383,
323-327 (1996).


[172] G. W. Gross and J. M. Kowalski, "Origins of Activity Patterns in Self-Organizing Neuronal Networks in Vitro", J. Intelligent Material Systems and Structures 10, 558-564 (1999).

[173] H.J. Jerison, “Allometry, brain size, cortical surface, and convoluted-
ness” in Armstrong E. and Falk O. (Eds.) Primate Brain Evolution,
Plenum, New York, pp. 77-84 (1982).

[174] J.H. Jones, "Optimization of the mammalian respiratory system: symmorphosis versus single species adaptation", Comp. Biochem. Physiol. B 120, 125-138 (1998).

[175] K.E. Joyce, P.J. Laurienti, J.H. Burdette and S. Hayasaka, "A New Measure of Centrality for Brain Networks", PLoS ONE 5(8), e12200, doi:10.1371/journal.pone.0012200 (2010).

[176] M. E. Josephson and S. F. Seides, Clinical Cardiac Electrophysiology: Techniques and Interpretations, Lea and Febiger, Phil. (1979).

[177] P. E. B. Jourdain, Introduction to Contributions to Transfinite Numbers (1915), by G. Cantor, Dover (1955).

[178] T. Kalisky, R. Cohen, D. ben-Avraham and S. Havlin, in Complex Networks, Lecture Notes in Physics 650, 3, Springer, Berlin (2004).

[179] E.R. Kandel, “Small Systems of Neurons,” Mind and Behavior, R.L.
Atkinson and R.C. Atkinson eds. W.H. Freeman and Co., San Fran-
cisco (1979).

[180] H. Kantz and T. Schreiber, Nonlinear Time Series Analysis, Cambridge University Press, Cambridge, UK (1997).

[181] C. R. Katholi, F. Urthaler, J. Macy Jr. and T. N. James, Comp. Biomed. Res. 10, 529 (1977).

[182] J. P. Keener, "Chaotic cardiac dynamics," in Lectures in Applied Mathematics 19, 299-325 (1981).

[183] J. Kemeny and J. L. Snell, Mathematical Models in the Social Sciences, MIT Press, Cambridge, Mass. (1972).

[184] A.J. Kerkhoff and B.J. Enquist, "Multiplicative by nature: why logarithmic transformation is necessary in allometry", J. Theor. Biol. 257, 519-521 (2009).

[185] J.M. Kleinberg, Nature (London) 406, 845 (2000).


[186] M. Kobayashi and T. Musha, IEEE Trans. on Biomedical Eng. 29, 456-457 (1982).
[187] T. Kolokotrones, V. Savage, E.J. Deeds and W. Fontana, "Curvature in metabolic scaling", Nature 464, 753-756 (2010).
[188] A. Korobeinikov and P.K. Maini, "A Lyapunov function and global properties of SIR and SEIR epidemiological models with nonlinear incidence", Math. Biosci. and Eng. 1, 57-60 (2004).
[189] S. H. Koslow, A. J. Mandell and M. F. Shlesinger, eds. Perspectives
in Biological Dynamics and Theoretical Medicine, Ann. N.Y. Acad.
Sci. 504 (1987).
[190] J. Kozlowski and M. Konarzewski, "Is West, Brown and Enquist's model of allometric scaling mathematically correct and biologically relevant?", Func. Ecol. 18, 283-289 (2004).
[191] J. Kozlowski and M. Konarzewski, "West, Brown and Enquist's model of allometric scaling again: the same questions remain", Func. Ecol. 19, 739-743 (2005).
[192] A.D. Kuo, “The relative roles of feedforward and feedback in the
control of rhythmic movements”, Motor Control 6, 129-145 (2002).
[193] P.S. Laplace, Analytic Theory of Probabilities, Paris (1810).
[194] A. Lasota and M.C. Mackey, Chaos, Fractals and Noise, Springer-
Verlag, New York (1994).
[195] M. A. Lavrent'ev and S. M. Nikol'skii, in Mathematics Vol. 1, eds. A. D. Aleksandrov, A. N. Kolmogorov and M. A. Lavrent'ev, MIT Press, Cambridge (1964).
[196] S. P. Layne, G. Mayer-Kress and J. Holzfuss, "Problems associated with dimensional analysis of electroencephalogram data," in Dimensions and Entropies in Chaotic Systems, ed. G. Mayer-Kress, Springer-Verlag, Berlin, pp. 246-256 (1986).
[197] T. Y. Li and J. A. Yorke, “Period three implies chaos,” Am. Math.
Mon. 82, 985 (1975).
[198] K. Lindenberg and B.J. West, The Nonequilibrium Statistical Me-
chanics of Open and Closed Systems, VCH, New York (1990).
[199] S.L. Lindstedt and W.A. Calder III, "Body size and longevity in birds", The Condor 78, 91-94 (1976).


[200] S.L. Lindstedt and W.A. Calder III, "Body size, physiological time, and longevity of homeothermic animals", Quart. Rev. Biol. 56, 1-16 (1981).

[201] S.L. Lindstedt, B.J. Miller and S.W. Buskirk, "Home range, time and body size in mammals", Ecology 67, 413-418 (1986).

[202] C.-C. Lo, L.A. Nunes Amaral, S. Havlin, P.Ch. Ivanov, T. Penzel,
J.-H. Peter and H.E. Stanley, “Dynamics of sleep-wake transitions
during sleep”, Europhys. Lett. 57, 625-631 (2002).

[203] C.-C. Lo, T. Chou, T. Penzel, T.E. Scammell, R.E. Strecker, H.E.
Stanley and P.Ch. Ivanov, “Common scale-invariant patters of sleep-
wake transitions across mammalian species”, PNAS 101, 17545-
17548 (2004).

[204] W. P. London and J. A. Yorke, "Recurrent outbreaks of measles, chickenpox and mumps: I. Seasonal variations in contact rates", Am. J. Epidem. 98, 453 (1973).

[205] E. N. Lorenz, "Deterministic Nonperiodic Flow," J. Atmos. Sci. 20, 130 (1963).

[206] E.N. Lorenz, The Essence of Chaos, University of Washington Press, Seattle (1993).

[207] A. J. Lotka, Elements of Mathematical Biology, Williams and Wilkins (1925); Dover (1956).

[208] G. G. Luce, Biological Rhythms in Human and Animal Physiology, Dover, New York (1971).

[209] P.C. Ivanov, L.A.N. Amaral, A.L. Goldberger, S. Havlin, M.G. Rosen-
blum, Z.R. Struzik, H.E. Stanley, “Multifractality in human heart-
beat dynamics”, Nature 399, 461 (1999).

[210] P.C. Ivanov et al., "Levels of complexity in scale-invariant neural signals", Phys. Rev. E 79, 041920 (2009).

[211] N. MacDonald, Trees and Networks in Biological Models, Wiley-Interscience, Chichester (1983).

[212] M. C. Mackey and L. Glass, "Oscillations and chaos in physiological control systems," Science 197, 287-289 (1977).


[213] M. C. Mackey and J. C. Milton, "Dynamical Diseases," in Perspectives in Biological Dynamics and Theoretical Medicine, Ann. N.Y. Acad. Sci. 504, 16-32 (1987).

[214] M.C. Mackey, Time's Arrow, Springer-Verlag, New York (1992).

[215] R.L. Magin, Fractional Calculus in Bioengineering, Begell House, Connecticut (2006).

[216] B.B. Mandelbrot, “How Long is the Coast of Britain? Statistical Self-
Similarity and Fractal Dimension”, Science 156, 636-640 (1967).

[217] B. B. Mandelbrot, Fractals: Form, Chance and Dimension, W. H. Freeman (1977).

[218] B. B. Mandelbrot, "Fractal aspects of the iteration of z → λz(1−z) for complex λ and z," Ann. N.Y. Acad. Sci. 357, 249-259 (1980).

[219] B. B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman (1982).

[220] B.B. Mandelbrot, Fractals and Scaling in Finance, Springer, New York (1997).

[221] A. J. Mandell, P. V. Russo and S. Knapp, "Strange stability in hierarchically coupled neurophysiological systems," in Evolution of Order and Chaos in Physics, Chemistry and Biology, ed. H. Haken, Springer-Verlag (1982).

[222] R.N. Mantegna and H.E. Stanley, Econophysics, Cambridge University Press, New York (2000).

[223] M. Markus, D. Kuschmitz and B. Hess, "Properties of Strange Attractors in Yeast Glycolysis," Biophys. Chem. 22, 95-105 (1985).

[224] B.D. Malamud, G. Morein and D.L. Turcotte, "Forest fires: an example of self-organized critical behavior", Science 281, 1840-1842 (1998).

[225] T. R. Malthus, Population: The First Essay (1798), Univ. Mich. Press, Ann Arbor (1959).

[226] G. Matsumoto, K. Aihara, M. Ichikawa and A. Tasaki, "Periodic and nonperiodic response of membrane potentials in squid giant axons during sinusoidal current stimulation," J. Theor. Neurobiol. 3, 1-14 (1984).


[227] R.D. Mauldin and S.C. Williams, “On the Hausdorff dimension of
some graphs,” Trans. Am. Math. Soc. 298, 793-803 (1986).

[228] R. M. May, "Simple mathematical models with very complicated dynamics," Nature 261, 459-467 (1976).

[229] R.M. May and G.F. Oster, “Bifurcations and dynamic complexity in
simple ecological models”, Am. Nat. 110, 573-599 (1976).

[230] G. Mayer-Kress, F. E. Yates, L. Benton, M. Keidel, W. Tirsch, S. J. Pöppl and K. Geist, "Dimensional analysis of nonlinear oscillations in brain, heart and muscle," preprint (1987).

[231] G. Mayer-Kress and S.P. Layne, "Dimensionality of the human electroencephalogram", in Perspectives in Biological Dynamics and Theoretical Medicine, eds. S.H. Koslow, A.J. Mandell and M.F. Shlesinger, Ann. N.Y. Acad. Sci. 504 (1987).

[232] T.A. McMahon and J.T. Bonner, On Size and Life, Sci. Am. Library,
New York (1983).

[233] B.K. McNab, The Physiological Ecology of Vertebrates: A View from Energetics, Comstock Publ. Assoc. (2002).

[234] R.K. Merton, Science 159, 56-63 (1968).

[235] S. Micheloyannis, E. Pachou, C.J. Stam, M. Vourkas, S. Erimaki and V. Tsirka, "Using graph theoretical analysis of multi channel EEG to evaluate the neural efficiency hypothesis", Neurosci. Lett. 402, 273-277 (2006); C.J. Stam and E.A. de Bruin, "Scale-free dynamics of global functional connectivity in the human brain", Hum. Brain Mapp. 22, 97-109 (2004).

[236] K.S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, New York (1993).

[237] M. Mobilia, A. Peterson and S. Redner, J. Stat. Mech.: Th. and Exp.,
P08029 (2007).

[238] H. D. Modanlou and R. K. Freeman, "Sinusoidal fetal heart rate pattern: Its definition and clinical significance," Am. J. Obstet. Gynecol. 142, 1033-1038 (1982).

[239] G. K. Moe, W. C. Rheinboldt and J. A. Abildskov, Am. Heart J. 67, 200 (1964).


[240] E. W. Montroll and B. J. West, "On an enriched collection of stochastic processes," in Fluctuation Phenomena, 2nd ed., eds. E. W. Montroll and J. L. Lebowitz, North-Holland Personal Library, Amsterdam (1987).
[241] E. W. Montroll and M. F. Shlesinger, “On l/f noise and distributions
with long tails,” Proc. Natl. Acad. Sci. USA 79, 3380-3383 (1982).
[242] F. Moss and P.V.E. McClintock, editors, Noise in Nonlinear Dy-
namical Systems, 3 volumes Cambridge University Press, Cambridge
(1989).
[243] C.D. Murray, "The physiological principle of minimum work. I. The vascular system and the cost of blood", Proc. Nat. Acad. Sci. USA 12, 207-214 (1926).
[244] W.A.C. Mutch, S.H. Harm, G.R. Lefevre, M.R. Graham, L.G. Girling and S.E. Kowalski, "Biologically variable ventilation increases arterial oxygenation over that seen with positive end-expiratory pressure alone in a porcine model of acute respiratory distress syndrome", Crit. Care Med. 28, 2457-2464 (2000).
[245] W.A.C. Mutch and G.R. Lefevre, “Health, ‘small-worlds’, fractals
and complex networks: an emerging field”, Med. Sci. Monit. 9, MT55-
MT59 (2003).
[246] T.R. Nelson, B.J. West and A.L. Goldberger, "The fractal lung: Universal and species-related scaling patterns", Cell. Mol. Life Sci. 46, 251-254 (1990).
[247] M.E.J. Newman, “The structure and function of complex networks”,
SIAM Rev. 45, 167 (2003).
[248] J. S. Nicolis and I. Tsuda, "Chaotic dynamics of information processing: The magic number seven plus minus two revisited," Bull. Math. Biol. 47, 343-365 (1985).
[249] C. Nicolis and G. Nicolis, Proc. Natl. Acad. Sci. USA 83, 536 (1986).
[250] J. S. Nicolis, “Chaotic dynamics applied to information processing,”
Rep. Prog. Phys. 49, 1109-1196 (1986).
[251] T.F. Nonnenmacher and R. Metzler, “On the Riemann-Liouville
Fractional Calculus and some Recent Applications”, Fractals 3, 557
(1995).
[252] N. E. Nygards and J. Hutting, Computer in Cardiology, (IEEE Com-
puter Society) 393 (1977).


[253] L. F. Olsen, "An enzyme reaction with a strange attractor," Phys. Lett. A 94, 454-457 (1983).
[254] L. F. Olsen, and H. Degn, “Chaos in an enzyme reaction,” Nature
267, 177-178 (1977).
[255] L. F. Olsen and H. Degn, “Chaos in biological systems,” Q. Rev.
Biophys. 18, 165-225 (1985).
[256] L. Onsager, Phys. Rev. 65, 117-149 (1944).
[257] Y. Oono, T. Kohda, and H. Yamazaki, “Disorder parameter for
chaos,” J. Phys. Soc. of Japan 48,738-745 (1980).
[258] A. R. Osborne and A. Provenzale, “Finite correlation dimension for
stochastic systems with power law spectra,” Physica D 35, 357-381
(1989).
[259] V. I. Oseledec, "A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems," Trudy Mosk. Mat. Obsc. 19, 179 [Trans. Moscow Math. Soc. 19, 197 (1968)].
[260] E. Ott, “Strange attractors and chaotic motions of dynamical sys-
tems,” Rev. Mod. Phys. 57, 655-671 (1985).
[261] E. Ott, Chaos in Dynamical Systems, Cambridge University Press,
New York (1993).
[262] N.H. Packard, J.P. Crutchfield, J.D. Farmer and R. S. Shaw, “Geom-
etry from a Times Series,” Phys. Rev. Lett. 45, 712-716 (1980).
[263] M. Paczuski, S. Maslov and P. Bak, "Avalanche dynamics in evolution, growth, and depinning models", Phys. Rev. E 53, 414-443 (1996).
[264] S. Panchev, Random Functions and Turbulence, Pergamon Press, Ox-
ford (1971).
[265] P.R. Painter, "Allometric scaling of the maximum metabolic rate of mammals: oxygen transport from the lungs to the heart is a limiting step", Theor. Biol. Med. Model. 2, 31-39 (2005).
[266] P. J. E. Peebles, The Large-scale Structure of the Universe, Princeton
Univ. Press (1980).
[267] C.K. Peng, J. Mietus, J.M. Hausdorff, S. Havlin, H.E. Stanley and A.L. Goldberger, "Long-range anticorrelations and non-Gaussian behavior of the heartbeat", Phys. Rev. Lett. 70, 1343-1346 (1993).


[268] C.-K. Peng, J. Mietus, Y. Li, C. Lee, J.M. Hausdorff, H.E. Stanley, A.L. Goldberger and L.A. Lipsitz, "Quantifying fractal dynamics of human respiration: age and gender effects", Ann. Biom. Eng. 30, 683-692 (2002).
[269] J.I. Perotti, O.V. Billoni, F.A. Tamarit, D.R. Chialvo and S.A. Can-
nas, Phys. Rev. Lett. 103, 108701 (2009).
[270] R.H. Peters, The Ecological Implications of Body Size, Cambridge
University Press, Cambridge (1983).
[271] S.M. Pincus, “Greater signal regularity may indicate increased system
isolation”, Math. Biosci. 122, 161-181 (1994).
[272] http://www.physionet.org/
[273] A. Pikovsky, M. Rosenblum and J. Kurths, Synchronization: A Uni-
versal Concept in Nonlinear Science, Cambridge University Press,
Cambridge, UK (2001).
[274] D. Plenz, “Neuronal avalanches and coherence potentials”, The Eu-
ropean Physical Journal-Special Topics 205, 259-301 (2012).
[275] H. Poincaré, Mémoire sur les courves définies par les equations
différentielles, I-IV, Oevre 1, Gauthier-Villars, Paris, (1888).
[276] R. Pool, “Is it chaos, or is it just noise?” in Science 243, 25 (1989).
[277] I. Podlubny, Fractional Differential Equations, Academic Press, San Diego, CA (1999).
[278] C.A. Price, B.J. Enquist and V.M. Savage, “A general model for
allometric covariation in botanical form and function”, PNAS 104,
13204-09 (2007).
[279] J. C. Principe and J. R. Smith, “Microcomputer-based system for
the detection and quantification of petit mal epilepsy,” Comput. Biol.
Med. 12, 87-95 (1982).
[280] J.W. Prothero, “Scaling of cortical neuron density and white matter
volume in mammals”, J. Brain Res. 38, 513-524 (1997).
[281] O.G. Raabe, H.D. Yeh, G.M. Schum and R.F. Phalen, Tracheo-
bronchial Geometry: Human, Dog, Rat, Hamster. Albuquerque:
Lovelace Foundation for Medical Education and Research (1976).
[282] B. Rajagopalan and D.G. Tarboton, Fractals 1, 6060 (1993).


[283] P.E. Rapp, J. Exp. Biol. 81, 281-306 (1979).

[284] P. E. Rapp, I. D. Zimmerman, A. M. Albano, G. C. de Guzman and N. N. Greenbaum, "Dynamics of spontaneous neural activity in the simian motor cortex: the dimension of chaotic neurons," Phys. Lett. A, 335-338 (1985).

[285] P. E. Rapp, I. D. Zimmerman, A. M. Albano, G. C. de Guzman, N. N. Greenbaum and T. R. Bashore, "Experimental studies of chaotic neural behavior: cellular activity and electroencephalogram signals," in Nonlinear Oscillations in Biology and Chemistry, ed. H.G. Othmer, 175-205, Springer-Verlag (1987).

[286] P. E. Rapp, R. A. Latta and A. I. Mees, "Parameter-dependent transitions and the optimal control of dynamic diseases," Bull. Math. Biol. 50, 227-253 (1988).

[287] N. Rashevsky, Mathematical Biophysics: Physico-Mathematical Foundations of Biology, Vol. 2, 3rd rev. ed., Dover, New York (1960).

[288] P.B. Reich, M.G. Tjoelker, J. Machado and J. Oleksyn, "Universal scaling of respiratory metabolism, size and nitrogen in plants", Nature 439, 457-461 (2006).

[289] L.E. Reichl, A Modern Course in Statistical Physics, John Wiley &
Sons, New York (1998).

[290] L.F. Richardson, "Atmospheric diffusion shown on a distance-neighbor graph," Proc. Roy. Soc. Lond. A 110, 709 (1926).

[291] L.F. Richardson, "Statistics of Deadly Quarrels", reprinted in Vol. II, World of Mathematics, J. Newman, p. 1254 (1956).

[292] J.P. Richter, Ed., The Notebooks of Leonardo da Vinci, Vol. 1, Dover,
New York (1970); unabridged edition of the work first published in
London in 1883.

[293] J. Rinzel and R.N. Miller, Math Biosci. 49, 27 (1980).

[294] A. I. Ritzenberg, D. R. Adam and R. J. Cohen, "Period multiplying: evidence for nonlinear behaviour of the canine heart," Nature 307, 157 (1984).

[295] F. Rohrer, "Flow resistance in human air passages and the effect of irregular branching of the bronchial system on the respiratory process in various regions of the lungs," Pflugers Arch. 162, 225-299 (1915). Repr. 1975: Translations in Respiratory Physiology, ed. J. B. West, Stroudsburg, PA: Dowden, Hutchinson and Ross.

[296] S. Rossitti and H. Stephensen, Acta Physiol. Scand. 151, 191 (1994).

[297] O. E. Rössler, "An Equation for Continuous Chaos," Phys. Lett. A 57, 397-398 (1976).

[298] O. E. Rössler, "Continuous chaos: four prototype equations," in Bifurcation Theory and Applications to Scientific Disciplines, Ann. N.Y. Acad. Sci. 316, 376-392 (1978).

[299] J. C. Roux, J. S. Turner, W. D. McCormick and H. L. Swinney, "Experimental observations of complex dynamics in a chemical reaction," in Nonlinear Problems: Present and Future, eds. A. R. Bishop, D. K. Campbell and B. Nicolaenko, 409-422, North-Holland, Amsterdam (1982).

[300] J. C. Roux, R. M. Simoyi and H. L. Swinney, "Observation of a strange attractor," Physica D 8, 257 (1983).

[301] V.M. Savage, J.P. Gillooly, W.H. Woodruff, G.B. West, A.P. Allen, B.J. Enquist and J.H. Brown, "The predominance of quarter-power scaling in biology", Func. Ecol. 18, 257-282 (2004).

[302] V.M. Savage, E.J. Deeds and W. Fontana, "Sizing up Allometric Scaling", PLoS Comput. Biol. 4(9), e1000171 (2008).

[303] N. Scafetta and P. Grigolini, "Scaling detection in time series: diffusion entropy analysis", Phys. Rev. E 66, 036130 (2002).

[304] N. Scafetta, L. Griffin and B.J. West, “Hölder exponent for human
gait”, Physica A 328, 561-583 (2003).

[305] N. Scafetta, D. Marchi and B.J. West, "Understanding the complexity of human gait dynamics", Complexity (2011).

[306] W. M. Schaffer, "Can nonlinear dynamics elucidate mechanisms in ecology and epidemiology?", IMA J. Math. Appl. Med. Biol. 2, 221-252 (1985).

[307] W. M. Schaffer and M. Kott, "Nearly one dimensional dynamics in an epidemic," J. Theor. Biol. 112, 403-427 (1985).

[308] D. Schertzer, S. Lovejoy, F. Schmitt, Y. Chigirinskaya and D. Marsan, Fractals 5, 427 (1997).


[309] K. Schmidt-Nielsen, Scaling: Why is Animal Size so Important?, Cambridge University Press, Cambridge (1984).

[310] E. Schrödinger, What is Life?, Cambridge University Press, New York (1995), first published in 1944.

[311] I. B. Schwartz and H. L. Smith, "Infinite subharmonic bifurcations in an SEIR model," J. Math. Biol. 18, 233-253 (1983).

[312] I. B. Schwartz, "Multiple stable recurrent outbreaks and predictability in seasonally forced nonlinear epidemic models," J. Math. Biol. 21, 347 (1985).

[313] M.F. Shlesinger and B.J. West, "Complex Fractal Dimension of the Bronchial Tree", Phys. Rev. Lett. 67, 2106-2108 (1991).

[314] L. A. Segel, Modeling Dynamic Phenomena in Molecular and Cellular Biology, Cambridge Univ. Press, London (1984).

[315] M. Sernetz, B. Gelleri and J. Hoffman, "The organism as bioreactor. Interpretation of the reduction law of metabolism in terms of heterogeneous catalysis and fractal structure," J. Theor. Biol. 117, 209-230 (1985).

[316] R. Shaw, "Strange attractors, chaotic behavior, and information flow," Z. Naturforsch 36A, 80-112 (1981).

[317] R. Shaw, The Dripping Faucet as a Model Chaotic System, Aerial Press, Santa Cruz, CA (1984).

[318] A. Siegel, C. L. Grady and A. F. Mirsky, "Prediction of spike-wave bursts in absence epilepsy by EEG power-spectrum signals," Epilepsia 23, 47-60 (1982).

[319] D. Sigeti and W. Horsthemke, "High frequency spectra for systems subject to noise," Phys. Rev. A 35, 2276-2282 (1987).

[320] J.K.L. da Silva, G.J.M. Garcia and L.A. Barbosa, “Allometric scaling
laws of metabolism”, Phys. Life Reviews 3, 229-261 (2006).

[321] R. H. Simoyi, A. Wolf and H. L. Swinney, "One-dimensional dynamics in a multicomponent chemical reaction," Phys. Rev. Lett. 49, 245-248 (1982).

[322] J. M. Smith and R. J. Cohen, Proc. Natl. Acad. Sci. USA 81, 233 (1984).


[323] O. Snell, "Die Abhängigkeit des Hirngewichts von dem Körpergewicht und den geistigen Fähigkeiten", Arch. Psychiatr. 23, 436-446 (1892).

[324] K. Snell, Ed., Understanding the Control of Metabolism, Portland, London (1997).

[325] O. Sporns, D.R. Chialvo, M. Kaiser and C.C. Hilgetag, "Organization, development and function of complex brain networks", TRENDS in Cog. Sci. 8, 418-425 (2004).

[326] C.J. Stam, "Functional connectivity patterns of human magnetoencephalographic recordings: a 'small-world' network?", Neurosci. Lett. 355, 25-28 (2004).

[327] H.E. Stanley, Introduction to Phase Transitions and Critical Phenomena, Oxford University Press, New York (1971).

[328] J. Stehlé, N. Voirin, A. Barrat, C. Cattuto, V. Colizza, L. Isella, C. Régis, J. Pinton, N. Khanafer, W. Van den Broeck and P. Vanhems, "Simulation of an SEIR infectious disease model on the dynamic contact network of conference attendees", BMC Medicine 9, 87 (2011).

[329] K.M. Stein, J. Walden, N. Lippman and B.B. Lerman, "Ventricular response in atrial fibrillation: random or deterministic?", Am. J. Physiol. Heart 277, H452-H458 (1999).

[330] Z.R. Struzik, "Determining Local Singularity Strengths and their Spectra with the Wavelet Transform", Fractals 8, 163-179 (2000).

[331] M.P.H. Stumpf and M.A. Porter, "Critical Truths About Power Laws", Science 335, 665-666 (2012).

[332] H.H. Szeto, P.Y. Cheng, J.A. Decena, Y. Chen, Y. Wu and G. Dwyer,
“Fractal properties of fetal breathing dynamics”, Am. J. Physiol. 262
(Regulatory Integrative Comp. Physiol. 32) R141-R147 (1992).

[333] F. Takens, in Lecture Notes in Mathematics 898, eds. D. A. Rand and L. S. Young, Springer-Verlag, Berlin (1981).

[334] G. R. Taylor, The Great Evolution Mystery, Harper and Row (1983).

[335] J. Theiler, "Spurious dimension from correlation algorithms applied to limited time-series data", Phys. Rev. A 34, 2427-2432 (1986).

[336] D. W. Thompson, On Growth and Form, 2nd Ed., Cambridge Univ. Press (1963), original (1917).


[337] N.L. Tilney, G.L. Bailey and A.P. Morgan, "Sequential system failure after rupture of abdominal aortic aneurysms: an unsolved problem in postoperative care", Ann. Surg. 178, 117-122 (1973).

[338] J. Touboul and A. Destexhe, "Can Power-Law Scaling and Neuronal Avalanches Arise from Stochastic Dynamics?", PLoS ONE 5, e8982 (2010); L. de Arcangelis and H. J. Herrmann, "Activity-dependent neuronal model on complex networks", Frontiers in Physiology 3, 62 (2012); X. Li and M. Small, "Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure", Chaos 22, 023104 (2012).

[339] D.B. Tower, "Structural and functional organization of mammalian cerebral cortex: The correlation of neurone density with brain size", J. Comp. Neurol. 101, 9-52 (1954).

[340] M. Turalska, M. Lukovic, B.J. West and P. Grigolini, Phys. Rev. E 80, 021110 (2009).

[341] M. Turalska, B.J. West and P. Grigolini, Phys. Rev. E 83, 061142
(2011).

[342] M. Turalska, B.J. West and P. Grigolini, submitted to Phys. Rev. Lett.

[343] D.L. Turcotte, Fractals and chaos in geology and geophysics, Cam-
bridge University Press, Cambridge (1992).

[344] B. van der Pol and J. van der Mark, "The heartbeat considered as a relaxation oscillator and an electrical model of the heart," Phil. Mag. 6, 763 (1928).

[345] B. van der Pol and J. van der Mark., Extr. arch. neerl. physiol. de
l’homme et des animaux 14, 418 (1929).

[346] F. Vanni, M. Lukovic and P. Grigolini, Phys. Rev. Lett. 107, 078103
(2011).

[347] D. M. Vassalle, Circ. Res. 41, 269 (1977).

[348] P. F. Verhulst, Mem. Acad. Roy. Bruxelles 28, 1 (1844).

[349] K. Vierordt, Ueber das Gehen des Menschen in gesunden und kranken Zustaenden nach selbstregistrirenden Methoden, Tübingen, Germany (1881).


[350] M.O. Vlad, F. Moran, V.T. Popa, S.E. Szedlacsek and J. Ross, "Functional, fractal nonlinear response with application to rate processes with memory, allometry, and population genetics", Proc. Natl. Acad. Sci. USA 104, 4798-4803 (2007).

[351] M.V. Volkenshstein, Biophysics, MIR pub., Moscow (1983).

[352] J. L. Waddington, M. J. MacColloch and J. E. Sambrooks, Experientia 35, 1197 (1979).

[353] D.I. Warton, I.J. Wright, D.S. Falster and M. Westoby, "Bivariate line fitting methods for allometry", Biol. Rev. 81, 259-291 (2006).

[354] R. C. Watt and S. R. Hameroff, "Phase space reconstruction and dimensional analysis of EEG," in Perspectives in Biological Dynamics and Theoretical Medicine, eds. S. H. Koslow, A. J. Mandell and M. F. Shlesinger, Ann. N.Y. Acad. Sci. 504 (1987).

[355] D.J. Watts and S.H. Strogatz, Nature (London) 393, 440 (1998).

[356] D.J. Watts, Small Worlds, Princeton University Press, Princeton, N.J. (1999).

[357] W. Weaver, "Science and Complexity", American Scientist 36, 536-44 (1948).

[358] E. R. Weibel and D. M. Gomez, "Architecture of the human lung," Science 137, 577-585 (1962).

[359] E. R. Weibel, Morphometry of the Human Lung, Academic Press, New York (1963).

[360] E.R. Weibel, Symmorphosis: On Form and Function in Shaping Life, Harvard University Press, Cambridge, MA (2000).

[361] E.R. Weibel, “The pitfalls of power laws”, Nature 417, 131-132
(2002).

[362] E.R. Weibel, "How Benoit Mandelbrot Changed my Thinking about Biological Form", Ed. M. Frame, Benoit Mandelbrot Memorial 2011 (2012).

[363] G. Werner, "Fractals in the nervous system: conceptual implications for theoretical neuroscience", Front. Physiol. 1:15, doi: 10.3389/fphys.2010.00015 (2010).


[364] B. J. West, An Essay on the Importance of Being Nonlinear, Lect. Notes in Biomathematics 62, Springer-Verlag, Berlin (1985).
[365] B. J. West, A. L. Goldberger, G. Rovner and V. Bhargava, "Nonlinear dynamics of the heartbeat I. The AV junction: Passive conduit or active oscillator?", Physica D 17, 198-206 (1985).
[366] B. J. West, V. Bhargava and A. L. Goldberger, “Beyond the principle
of similitude: renormalization in the bronchial tree,” J. Appl. Physiol.
60, 189-197 (1986).
[367] B. J. West, "Fractals, Intermittency and Morphogenesis," in Chaos in Biological Systems, pp. 305-317, eds. A.V. Holden and L. F. Olsen, Plenum (1987).
[368] B. J. West and A. L. Goldberger, “Physiology in fractal dimensions,”
Am. Sci. 75, 354-365 (1987).
[369] B. J. West and J. Salk, "Complexity, Organization and Uncertainty," E. J. Oper. Res. 30, 117-128 (1987).
[370] B. J. West, "Fractals in Physiology," in Dynamic Patterns in Complex Systems, eds. J. A. S. Kelso, A. J. Mandell and M. F. Shlesinger, World Scientific, Singapore (1988).
[371] B.J. West, Fractal Physiology and Chaos in Medicine, Studies of Non-
linear Phenomena in Life Science: Vol. 1, World Scientific, Singapore
(1990).
[372] B.J. West, “Physiology in fractal dimension: error tolerance”, Ann.
Biomed. Eng. 18, 135-149 (1990).
[373] B.J. West and W. Deering, The Lure of Modern Science: Fractal
Thinking, Studies of Nonlinear Phenomena in Life Science, Vol. 3,
World Scientific, New Jersey (1995).
[374] B.J. West, Physiology, Promiscuity and Prophecy at the Millennium:
A Tale of Tails, Studies of Nonlinear Phenomena in Life Science,
Vol. 7, World Scientific, Singapore (1999).
[375] B.J. West and L. Griffin, “Allometric control of human gait”, Fractals
6, 101-108 (1998); B.J. West and L. Griffin, “Allometric Control,
Inverse Power Laws and Human Gait”, Chaos, Solitons & Fractals
10, 1519-27 (1999).
[376] B.J. West and N. Scafetta, “A nonlinear model for human gait”,
Phys. Rev. E 67: 051917 (2003).


[377] B.J. West, M. Latka, M. Glaubic-Latka and D. Latka, “Multifractal-


ity of cerebral blood flow”, Physica A 318, 453-460 (2003).
[378] B.J. West, M. Bolognia and P. Grigolini, Physics of Fractal Operators,
Springer, New York (2003).
[379] B.J. West, L.A. Griffin, H.J. Frederick and R.E. Moon, "The Independently Fractal Nature of Respiration and Heart Rate During Exercise Under Normobaric and Hyperbaric Conditions", Respiratory Physiology & Neurobiology 145, 219-233 (2005).
[380] B.J. West and L. Griffin, Biodynamics: Why the Wirewalker Doesn’t
Fall, Wiley & Sons, New York (2004).
[381] B.J. West, Where Medicine Went Wrong: Rediscovering the Path to Complexity, Studies of Nonlinear Phenomena in Life Science, Vol. 11, World Scientific (2006).
[382] B. J. West, "Complexity, Scaling and Fractals in Biological Signals", Wiley Encyclopedia of Biomedical Engineering, Wiley & Sons, New York (2006).
[383] B.J. West, E.L. Geneston and P. Grigolini, “Maximizing information
exchange between complex networks”, Phys. Rept. 468, 1-99 (2008).
[384] B.J. West, “Fractal physiology and the fractional calculus: a perspec-
tive”, Front. Physiol. 1:12. doi: 10.3389/fphys.2010.00012 (2010).
[385] B.J. West and P. Grigolini, Complex Webs: Anticipating the Improbable, Cambridge University Press, Cambridge, UK (2011).
[386] D. West and B.J. West, “Statistical origin of allometry”, EPL 94,
38005p1-p6 (2011).
[387] B.J. West and D. West, “Fractional Dynamics of Allometry”, Frac-
tional Calculus and Applied Analysis 15, 1-25 (2012).
[388] G.B. West, J.H. Brown and B.J. Enquist, "A General Model for the Origin of Allometric Scaling Laws in Biology", Science 276, 122-126 (1997).
[389] G.B. West, "The Origin of Universal Scaling Laws in Biology", Physica A 263, 104-113 (1999).
[390] G.B. West et al., "The origin of universal scaling laws in biology", in Scaling in Biology, Eds. J.H. Brown and G.B. West, pp. 87-112, Oxford University Press, Oxford (2000).


[391] G.B. West, V.M. Savage, J. Gillooly, B.J. Enquist, W.H. Woodruff
and J.H. Brown, “Why does metabolic rate scale with body size?”,
Nature 421, 712 (2003)
[392] C.R. White and R.S. Seymour, “Allometric scaling of mammalian
metabolism”, J. Exp. Biol. 208, 1611-1619 (2005).
[393] H. Whitney, Ann. Math. 37, 645 (1936).
[394] C. Wickens, A. Kramer, L. Vanasse and E. Donchin, “Performance
of concurrent tasks: a psychophysiological analysis of the reciprocity
of information-processing resources,” Science 221, 1080-1082 (1983).
[395] N. Wiener, Time Series, MIT press, Cambridge, Mass. (1949).
[396] N. Wiener, Cybernetics, MIT Press, Cambridge, Mass. (1963).
[397] N. Wiener, Harmonic Analysis, MIT Press, Cambridge Mass (1964).
[398] K. G. Wilson, “Problems in physics with many scales of length,” Sci.
Am. 241, 158-179 (1979).
[399] T. A. Wilson, "Design of the bronchial tree," Nature 213, 668-669 (1967).
[400] A. T. Winfree, J. Theor. Biol. 16, 15 (1967).
[401] A. T. Winfree, J. Theor. Biol. 249, 144 (1984).
[402] J.M. Winters and P. E. Crago, Biomechanics and Neural Control of Posture and Movement, Springer-Verlag, New York (2000).
[403] A. Wolf, J. B. Swift, H. L. Swinney and J. A. Vastano, “Determin-
ing Lyapunov exponents from a time series,” Physica D 16, 285-317
(1985).
[404] S. J. Worley, J. L. Swain and P. G. Colavita, Am. J. Cardiol. 5, 813 (1985).
[405] C.A. Yates, R. Erban, C. Escudero, I.D. Couzin, J. Buhl, I.G. Kevrekidis, P.K. Maini and D.J.T. Sumpter, PNAS 106, 5464 (2009).
[406] J. A. Yorke and E. D. Yorke, “Metastable Chaos: The transition to
sustained chaotic behavior in the Lorenz model,” J. Stat. Phys. 21,
263 (1979).
[407] J. Xie, S. Sreenivasan, G. Korniss, W. Zhang, C. Lim and B.K. Szy-
manski, Phys. Rev. E 84, 011130 (2011).


[408] R. Zhang, J.H. Zuckerman, C. Giller and B.D. Levine, Am. J. Physiol. 274, H233 (1998).

[409] J.J. Zebrowski, K. Grudzinski, T. Buchner, P. Kuklik, J. Gac, G. Gielerak, P. Sanders and R. Baranowski, "Nonlinear oscillator model reproducing various phenomena in the dynamics of the conduction system of the heart", Chaos 17, 015121 (2007).

[410] L. Zemanova, C. Zhou and J. Kurths, "Structural and functional clustering of complex brain networks", Physica D 224, 202-212 (2006).



Index

1/f noise, 103 all-to-all


1/f scaling, 108 coupling, 269
1/f-noise, 182 allergies, 97
2-cycle, 137 allometric aggregation, 194
4-cycle, 137 allometry
8-cycle, 137 aggregation approach, 200
coefficient, 199
abdominal ganglion exponent, 63, 199
aphysia, 221 exponents, 71
action potential, 221 parameters, 19, 67, 74, 77
action potentials, 79 relation, 15
activation, 226 relations, 63
adaptive, 201 allometry relation, 198
adaptive variation, 67 allometry relations, 258
Adrian, R., 10 Altemeier, W.A., 205
adulthood, 105 alveoli, 37, 40
aging, 2 analytic function, 96
agreement, 263 analytic functions, 81
Agutter, P.A., 72 anesthesia, 244
Aihara, K., 225 anti-persistent, 184, 192
airways, 203 aorta, 70
algae, 259 aperiodic, 119, 125, 143
313

60709_8577 -Txts#150Q.indd 313 19/10/12 4:28 PM


314 Index

aperiodic processes, 4 avalanches


Apol, M.E.F., 70 neuronal, 275
AR, 199 axon, 220
Arneodo, A., 259
Aron, I.L., 218 Babloyantz, A., 237
arrhythmia, 155 bacteria, 97, 258
ART, 176, 211, 212, 239 baker’s transformation, 14
arteries, 58 baker’s yeast, 230
arthropods, 258 Baltimore, 258
Ashkenazy,Y,, 107 Barenblatt, G.I., 59
asphyxia, 244 Bartlett, M.S., 217
asymptotically stable, 101 basal ganglia, 104, 205
ATP, 97 basin
atrial of attraction, 149
fibrillation, 156 basin of attraction, 97
atrial contraction, 110 basin of attrraction, 14
atrioventricular node, 110 beating heart, 97
atropine, 94 Beggs J.M., 276
attractor, 13, 211 behaviorial sciences, 144
cardiac, 237 Belousov-Zhabotinskii
chaotic, 127, 211, 212 reaction, 166
cognitive, 25 Belsousov-Zhabotinskii reaction, 24
ECG, 239 Beran, J., 21, 182
funnel, 158 Bernard, C., 2
strange, 123 Bernoulli, J., 33
attractor reconstruction, 23, 162 Berry, M.V., 52
attractor reconstruction technique, Beslousov-Zhabotinskii
211 reaction, 259
attractors Bhargava, V., 55
chaotic, 24 bidirectional, 111
strange, 3, 23 bifurcates, 217
autocorrelation, 124, 181 bifurcation, 155
coefficient, 187 parameter, 228
function, 83 period-doubling, 128
autonomic, 201 subharmonic, 137
nervous system, 94 bifurcation points, 143
regulation, 278 bifurcations, 116
AV block, 236 bile duct , 58
AV junction, 84, 118 binomial
AV node, 110, 233 coefficient, 185
AV oscillator, 115 expansion, 185
avalanche, 276 bio-oscillators, 23

60709_8577 -Txts#150Q.indd 314 19/10/12 4:28 PM


Index 315

biochemical branchings, 39
reactions, 258 tube, 37
biochemistry, 228 tube sizes, 30
biological clock, 228 bronchial airways, 20
biological evolution, 7 bronchial tree, 27
biological time, 65 bronchial tube, 180
biology, 128, 178 bronchioles, 29
biomechanics, 108, 205 Brown, J.H., 66, 71
biomedical Brownian motion, 52, 184
processes, 3 BRV, 203
bioreactor, 65 Buchman T.G., 278
birth rate, 132 Buchman T.K., 2
blood circulation time, 65 bursting, 222
blood flow, 70 BZ reaction, 228
to brain, 24
blood flow velocity, 195 Calder III, W.A., 65
body cooling, 74 Cannon, W., 2
bone, 27 canonical surface, 216
Bonifazi P., 275 Cantor set, 40, 123
boundaries Cantor, G., 40
metabolic level, 78 capillary bed, 70
boundary carbon dioxide, 203
constraints, 74 cardiac
bowel, 58 conduction, 116
brain, 255 depolarization pulse, 84
dynamics, 275 output, 70
injury, 278 pulses, 3
waves, 3 cardiac chaos, 156, 232
brain wave, 4 cardiac oscillator, 110
branching process, 55 cardiac pulse, 32
breath rate variability cardiac system, 79
BRV, 202 cascade, 276
breath time, 65 catatonia, 94
breathing, 97 CBF, 195
episodes, 203 Central Limit Theorem, 9
broadband, 124 central nervous system, 225
bromide ion, 166 central pattern generator
bromide ions, 228 CPG, 101
bronchi, 29 cerebellum, 205
bronchial cerebral
airway, 29, 61 auto-regulation, 198
architecture, 29 cerebral blood flow

60709_8577 -Txts#150Q.indd 315 19/10/12 4:28 PM


316 Index

CBF, 194 colored noise, 168


cerebral cortex, 243 communication, 275
cerebrovascular complex network, 125, 177
impedance, 197 complexity, 261
cerium ions, 228 loss, 79
chain reactions, 276 temporal, 266
chain rule, 142 conductances, 111
Changizi, M.A., 63 conduction system, 32
chaos, 1, 13, 176, 178, 211, 261 congestive heart failure, 194
chaotic, 107, 118 consensus, 262, 275
attractor, 15 survival probability, 273
orbit, 15 conservation
systems, 148 action, 128
chaotic attractor, 222 energy, 128
chaotic neurons, 220 momenta, 128
chaotic transients, 123 of energy, 95
chemical chaos, 228 of momentum, 95
chemical kinetics, 178 conservative, 144
chemical species, 162 forces, 100
chemotherapy, 94 constrained randomness, 32
Cheyne-Stokes, 158 constrained walking
chick heart, 233 metronome, 103
chick hearts, 24, 99 contact rate, 212
chicken pox, 258 continuity, 121
child’s swing, 6 continuous, 29
childhood, 105 contracting process, 55
epidemics, 258 contractions, 203
choice, 267 control
chordae tendineae, 59 feedback, 197
circadian rhythms, 97 system, 93, 154
circuit, 111 control group, 196
clustering, 202 control mechanism, 89
CNS, 110 control parameter, 128, 266
co-variation, 73 control parameters, 228
function, 74 control system, 110
coagulates, 97 control theory, 81
coastline conversation, 262
length, 42 conversion hysteria, 7
cochlea, 33 convolution
cognitive psychology, 17 discrete, 186
Cohen, R.J., 156 cooperation, 262
coin flip, 183 coordination, 262

60709_8577 -Txts#150Q.indd 316 19/10/12 4:28 PM


Index 317

correlation dimension, 159 data processing


correlation exponent, 160 nonlinear, 24
correlation function, 23 Dawes, G.S., 203
correlation time, 86 death rate, 132
correlations decision making model, 274
long-time, 205 decomposable, 245
coupled equations decomposition
nonlinear, 237 Fourier, 243
covariance decorrelation rate, 85
stationary, 181 deflation, 203
two-point, 184 Degn, H., 136, 229
CPG, 102, 174 delay times, 239
critical dendrites, 220
behavior, 275 dendritic tendrils, 243
dynamics, 266 Descartes, R., 33
states, 269 Destexhe, A., 237
critical parameter, 143 deterministic randomness, 118
critical phenomena, 32 diagnostic, 1
critical state, 266, 275 diameter
critical value, 48, 228, 266 average, 55
criticality, 265 differential equations
self-organized, 276 fractional, 24
Crutchfield, J.P., 13, 163 differentiable
Curie temperature, 48 tangent, 48
Cuvier, G., 63 diffusion, 3, 70
cyclic anomalous, 24, 182
phenomenon, 105
cyclic patterns, 258 classical, 182
Cyr, H., 71 coefficient, 183
cytoplasm, 220 Einstein, 183
entropy analysis
da Vinci, L., 20 DEA, 187
Darwin, C., 7 molecular, 38
data, 152 dimension, 15
analysis, 68
epidemiological, 212 asymptotes, 214
geophysical, 167 correlation, 23, 176, 212
interspecies, 73 embedding, 168
intraspecies, 73 fractal, 23, 48
metabolic, 73 Hausdorff, 161
record, 152 non-integer, 15
dimensionless variable, 55

diode dynamical systems
resistance, 113 low-dimensional, 119
diodes dynamics
tunnel, 111 chaotic, 32
discontinuous, 29 dynamics of epidemics, 212
discrete
dynamics, 23 earthquakes, 276
disease, 2, 79, 128, 278 ECG, 79, 118, 156, 200
diseases time series, 237
infectious, 219 Eckmann, J.P., 123, 146
disorder, 179 ecological, 131
disorders, 116 economics, 21, 178
disorganization, 179 EEG, 83, 118, 158, 242
displacement time series, 255
oscillator, 99 trajectory, 245
disruptions, 9 Ekeland, I., 15
dissipative elderly, 205
system, 167 electrical pulses, 102
dissipation, 120, 145 electrocardiogram, 83, 156
parameter, 188 electrodes, 276
dissipative, 99 electroencephalograms
flow, 70 EEG, 242
distance electroencephalography, 275
Euclidean, 272 embedded, 123
distribution embedding theorem, 24
contacts, 219 embedding theorems, 229
ensemble, 81 embryonic cells, 99
fractal, 48 energy
log-normal, 76 minimization, 69
multifractal, 196 spectral density, 84
Pareto-like, 76 Enquist, B.J., 66, 71
wake interval, 257 ensemble, 52
divergence, 149 entrainment, 111, 118
DMM, 267, 274, 275 entropy, 15
Dodds, P.S., 66, 76 production, 37
dynamic, 6 environment, 178
dynamic harmony, 80 enzyme reaction, 229
dynamic interaction, 115 epidemic
dynamic laws, 223 simulated, 219
dynamic processes, 31 epidemics
dynamical diseases, 91 dynamics, 24
dynamical system, 187 epidemiology, 132, 258

epiphenomena, 116 Field, R.J., 228
equation error, 75 finance, 21
equations of motion, 96 firing rate, 201
equiangular spiral, 33 fixed point, 2, 96, 127, 222
equilibrium degenerate, 141
statistical physics, 81 flexibility, 196
Erdos number, 265 flow field, 120
ergodic, 243 fluctuation-dissipation relation, 188
error, 120 fluctuations, 52
functions, 61 external, 24
mean-square, 66 thermal, 81
measurement, 19 focus, 96
relative, 61 folding, 126
tolerance, 59, 62 force, 4
two sources, 75 vector, 5
error-tolerance, 87 Ford, J., 119
estimate, 152 forest fires, 276
Euclidean space, 42 fossil remnant, 62
events Fourier
unpredicted, 9 expansion, 180
evoked potential, 197 transform, 152
evolutionary advantage, 203 Fourier transform, 23, 84, 123
excitation, 220 fractal
expiration, 199 cascade, 86
exposed, 212 curve, 42, 200
extinction, 133 design principle, 59
dimension, 9, 40, 122, 144, 160, 176, 181, 199
fad, 128
Faddeev, 4 dimensions, 27
false alarms, 167 dynamic, 32
Farmer, D., 124, 125, 127 dynamics, 22
function, 49
Farmer, J.D., 163 geometry, 73
faucet line, 42
dripping, 3 lungs, 54
feedback, 2, 113 neural network, 59
feedback loops, 80 no definition, 8
Feigenbaum, M., 143 object, 126
Feldman, H.A., 66 persistent properties, 103
fetal distress syndrome, 158 scaling, 68
fetal lamb signals, 78
breathing, 203 statistical, 32

statistics, 3, 73 Gauss, K.F., 10
structure, 54 gene frequency, 128
surface, 168 general systems
time series, 79, 84, 168, 194 analysis, 256
transport, 59 theory, 257
tree, 32 generalized harmonic analysis, 242
trees, 220 generation index, 57
variability, 109 generation number, 29
fractal generations, 128
dimension, 105 geophysical, 120
toplology, 59 geophysics, 21
Fractal Physiology, 79, 262 Ginzburg, B., 174
fractal stochastic dynamics, 182 Glansdorff, P., 37
fractal-like Glass, L., 19, 93, 98, 118, 233
anatomy, 22 Glazier, D.S., 74
fractals, 1, 27, 261 global
geometric, 32 cooperation, 268
three kinds, 32 global variable, 266
fractional glucose, 230
Brownian motion, 182 glycolysis, 230
fractional calculus, 73, 188 glycolytic model, 230
fractional diffusion Godin, P.J., 278
operator, 182 Goldberger, A., 2, 8, 19
Fraedrich, K., 167 Goldberger, A.L., 55, 93, 103, 278
Fraiman, D., 275
fractional index, 192 Gollub, J.P., 111
Freeman, W.J., 25 Gomez, D., 34
frequency Grassberger, P., 160
ratio, 115 gray matter, 63
frequency spectra, 94 Grenfell, B.T., 63
Freud, S., 7 Griffin, L., 205
Grigolini, P., 277
gait Guevara, M.R., 118
cycle, 104
dynamics, 108 Hamiltonian, 144, 187
models, 101 harmonic content, 81
gait cycle, 205 harmonic decomposition, 154
fluctuations, 205 harmonic variation, 99
gait dynamics harmonics, 154
quasi-periodic, 106 harmony, 262
Galileo Galilei, 28 Harvard
gamma function, 185 Medical School, 103

Hasagawa, S., 280 human
Hausdorff dimension, 46 body, 177
Hausdorff, F., 46 brain, 242
Hausdorff, J.M., 105, 107, 205 eye, 2
gait, 103
Hayashi, H., 223, 259 human brain, 24
health, 6 human heart, 24
healthy function, 31 human lung, 28
hearing, 256 Huntington’s disease, 104
heart, 22, 31, 203 Hurst exponent, 181, 202
heart rate Huxley, A., 65
variability, 87 Huxley, J., 65
heart rate variability, 32 hydrodynamic, 120
HRV, 199
heartbeat, 31 resistance, 68
Heaviside function, 160 hyperbolic tangent, 114
height, 3 hypercubes, 161
Helmholtz, H., 220 hyperexcitability, 197
helminths, 258 hysteresis, 111
Henderson, L.J., 2 impedance
Henon
system, 147 Ikeda, N., 236
Henon, M., 147 immune, 212
Hess, B., 228 impedance
Hess-Murray law, 69, 70 minimazation, 70
Heusner, A.A., 74 inactivation, 226
hierarchal, 223 interbeat interval, 23
Hill, B., 65 infection, 258
His-Purkinje infectious, 212
conduction, 233 diseases, 258
conduction network, 84 infectious period, 212
histogram, 67, 76, 103 infinite period, 137
histograms, 109 influence, 152
Hodgkin-Huxley equations, 226 information, 3, 15, 23, 78, 132
Holder exponent, 103 change, 148
homeodynamics, 3 generation, 17
homeostasis, 2, 80 generator, 148
homogeneous, 29, 39 information-rich, 158
function, 180 inhomogeneity, 21
HRV, 158, 196, 200 inhomogeneous, 29
Heusner, A.A., 66 initial conditions, 123
inner ear, 33

inspiration, 199 Kolmogorov entropy, 237
insulin secretion, 97 Kolokotrones, T., 72
integrable, 120 Konarzewski, M., 71
interactions Kot, M., 258
short-range, 275 Kozlowski, J., 71
interbeat intervals, 240
interbreath intervals, 203 Lévy stable, 193
interconnections, 243 Lévy statistics, 193
intermediaries, 228 lacunary, 89
intermittency, 3, 21, 225, 259, 266 Langevin, P., 182
Internet, 263 Langevin equation, 182
interspecies fractional, 189
allometry relation, 64 generalized, 188
metabolic AR, 64 language, 178
interspike intervals, 222 Laplace, P.S. de, 9
interstride interval, 194 latency period, 212
intraspecies Latka, M., 193
allometry relation, 64 lattice
metabolic AR, 64 two-dimensional, 266, 270
inverse power law, 190 Laurienti, P.J., 280
ionic conductance, 226 Lavrentev, 5
ionic currents, 221 law of error, 11
ions, 220 Layne, S.C., 245
irregular, 29 Layne, S.P., 243
Ising model, 270, 274, 275 learning curve, 266
iterated, 145 legacy, 178
iterates, 136 length, 8
iteration number, 143 leukemic
Ivanov, P., 194 cell production, 158
Lewis, Z.V., 52
Joyce, K.E., 280 life expectancy, 212
life sciences, 81
Kandel, E.R., 220 life-threatening, 194
Kemeny, J., 132 limit cycle, 18, 97, 118, 222, 239
kernel, 192 limiting manifold, 123
kidney, 58 Lindstedt, S.L., 65
kinetic laws, 228 linear
Kirchhoff’s law, 112 regression, 64, 73
Kleiber’s law, 71 regression analysis, 200
Koch curve, 43 relation, 74
Kohlrausch-Williams-Watts Law, 190 response, 102
linear system, 5

linearity, 4 Mandelbrot, B., 8, 32, 52, 59, 88
linearly coupled, 113 Mandelbrot, B.B., 146, 181
links map
correlation, 270 cardiac, 23
weak, 264 linearized, 149
living neurons, 275 nonlinear, 129
living organisms, 81 slope, 149
living systems, 256 mapping
locomotion, 101, 105, 205 function, 131
bipedal, 108 mappings, 23
logarithmic spiral, 33 maps
logical inconsistencies, 71 dynamical, 3
logistic equation, 134 non-invertible, 138
London, W.P., 213 one-dimensional, 131
long tail, 12, 84 two-dimensional, 144
Lorenz Markus, M., 228, 230
attractor, 158 nonlinear, 129
Lorenz system, 119 mass exponent, 193
Lorenz, E., 13, 123 mass points, 40
Lotka, A.J., 100 master equation
Luce, G.G., 97 two-state, 267
lung, 203 mathematical biology, 100
complexity, 29 Matthew Effect, 265
mammalian, 70 Matsumoto, G., 224
structures, 34 Mauldin, R.D., 52
lungs, 22 May R.M., 174
Lyapunov May threshold, 175
exponent, 230 May, R.M., 131, 138, 143
numbers, 151 Mayer-Kress, G., 245
Lyapunov exponent, 23, 127, 148 McNab, B.K., 74
measles, 259
Lyapunov exponents, 17 measles, 259
measurement error, 75
Mackey, M.C., 19, 93 measures, 6
macroparasites, 258 mechanoreceptor, 198
magnetic resonance medical diagnosis, 110
imaging, 275 medicine, 1, 12, 128
magnetoencephalography, 275 membrane
Malthus, R.T., 133 capacitance, 226
mammalian current, 226
metabolism, 74 membranes, 220
mammalian lung, 89 memory

long-term, 21 Montroll, E.W., 55
long-time, 180, 190 Moon, R., 203
memory morphogenesis, 37, 62
kernel, 193 morphogenetic laws, 58
metabolic, 197 motor control system, 79
allometry, 74 motor cortex, 205
exponent, 74 motor-control, 104
resources, 74 mouse
metabolic allometry, 66 to whale, 63
metabolic rate, 65 mollusk, 224
metabolism, 37 multifractal, 183, 194
Metzler, R., 189 gait, 103
microbes, 28 time series, 79
microbiology, 7 multiple organ
microparasites, 258 dysfunction syndrome, 278
middle cerebral artery, 195 mumps, 258
migraine, 198 Murray’s law, 20, 69
migraines, 24, 194 Murray, C.D., 20
Miller, K.S., 189 muscle cells, 220
Miller, R.N., 227 mutation, 128
Milton, J.C., 93 Mutch, W.A.C., 204
mirror, 140 myocardial infarction, 159
mitral valves, 59 myocardium, 32
Mittag-Leffler function, 189 myogenic, 197
generalized, 191
mode amplitudes, 81 Natural Philosophy, 7
mode-locking, 201 natural variability, 75
models Nautilus, 33
discrete dynamic, 23 neocortex, 63
mathematical, 7 nerve pulse, 220
nonlinear, 3 network
quantitative, 32 arterial, 35
MODS, 278 complex, 174
Moe, G.K., 156 complex, 59
moment DGCT, 272
second, 183 DMM, 266
Monin, A.S., 59 dynamically induced, 271
monofractal, 183, 188 efficiency, 268
noise, 103 fractal, 62
monotonic, 136 fractal-like, 66
monotonous, 158 His-Purknije, 27
Montroll, E.W., 264 intraspinal, 101

neuronal, 275 network
plant vascular, 70 backbone, 272
random, 264 Nicolis, C., 16, 167
resistance, 36 Nicolis, G., 16, 167
size, 63 Nicolis, J.S., 16
small world, 264, 278 Nikolskii, 5
social, 262 Nitella flexilis, 99
stability, 276 noise, 3, 31, 154, 176, 211
tree-like, 29 biological, 107, 206
network biological and environmental, 236
failure, 278
network of networks, 177 membrane, 24
Network Physiology, 262 multiplicative, 217
Network Science, 261 random, 83
networks non-differentiable function, 48
biological, 23, 175 non-equilibrium
biomedical, 93 statistics, 80
complex, 32 non-homeostatic, 202
neural, 58 non-invertible, 143
nutrient transport, 74 non-invertible map, 128
physiologic, 3, 84 non-Normal
scale-free, 265 probability density, 182
neural chain, 108 non-periodic, 121
neural physiology, 243 non-stationary, 245
neurodegenerative, 104 statistics, 80
neurodegenerative nonlinear
disorders, 94 analysis, 102
neurohumoral regulation, 87 bio-mapping, 128
neuron, 220, 258 bio-oscillators, 95
neuron clusters, 263 cardiac oscillator, 111
neuron density, 63 filter, 108
neuronal network, 98
dynamics, 223 oscillator, 99
neuronal avalanches, 263 nonlinear dynamics, 1, 95, 218
neurophysiologic deterministic, 167
diseases, 205 systems theory, 93
neurophysiology, 17 nonlinearity, 4
neuroscience, 21, 275 Nonnenmacher, T.F., 189
neurosurgeon, 177 Normal
neutrally stable, 100 distribution, 10, 21, 61
neutrophils, 94 random variable, 53
New York City, 258 normal sinus rhythm, 111, 240

nutrient pacemaker cells, 84
distribution, 68 pacemakers, 111
nutrients, 66 Packard, N.H., 24, 163
Nutting Law, 190 pain tolerance, 97
Panchev, S., 172
observable, 6 paradox, 178
olfactory parasympathetic, 201
EEG, 244 Pareto
olfactory system, 25 fluctuations, 73
Olsen, L.F., 136, 229 Pareto, V.F.D., 12
Onchidium Parkinson’s disease, 104
giant neuron, 224 passive conduit, 111
one-humped map, 229 passive resistive, 111
Onsager, L., 270 patchiness, 202
operational definition, 7 pathology, 105
orbit, 6 Peng, C.-K., 192
closed, 115 perfect clocks, 120
order, 179
perfusion pressure, 197
organisms, 132
period-doubling, 137, 155, 215
organized variability, 32
Ornstein-Uhlenbeck process, 191
periodic, 3
Osborne, A.R., 167
fluctuations, 94
oscillating
periodic motion, 19
chemical reaction, 259
periodic rhythms, 2
oscillation
cardiac, 237 peroxidase-oxidase
oscillations reaction, 229
periodic, 245 persistent, 184
oscillator perturbation, 135, 235
loss-free, 100 theory, 120
van der Pol, 18, 108 perturbations, 100
oscillators Peters, R.H., 67
nonlinear, 23 petri dish, 132
oscillatory, 136 phase locked, 115
Oseledec, V.I., 149 phase plot, 223
Oster G.F., 174 phase portrait, 166
Ott, E., 123, 144, 148 phase response function, 235
oxygen, 203 phase space, 6, 13, 96, 160
four-dimensional, 112
pace frequency, 108 orbit, 99
pacemaker, 201 reduced, 114
normal, 110 three-dimensional, 121, 239

phase transition, 48, 258, 262, 266, 276 Principle of Similitude, 33
probability, 52
phase transitions, 80 calculus, 20, 74
phase-portrait mass, 46
three-dimensional, 229 power-law, 20
physics, 178 theory, 74
physiologic network, 178 probability density
physiologic structures, 27 Normal, 182
physiologic time series, 198 Procaccia, I., 160
physiology, 1, 21, 189 products, 228
Pincus, S.M., 278 proprioceptive, 205
placenta, 58 protozoa, 258
Plenz, D., 276 Provenzale, A., 167
Poincare psychoanalysis, 7
surface of section, 165 psychobiology, 108
Poincare, H., 13, 80, 120 psychology, 132
Poiseuille’s law, 36 pulmonary tree, 29
Poisson statistics, 268 pulsatile flow, 70
Pool, R., 259 pulse train, 221
population, 128
population growth, 133 QRS, 158, 237
population levels, 23 QRS-complex, 83
postcentral gyri, 222 quadratic branchings, 229
postural sway, 110 quasiperiodic, 259
potassium, 226
power laws, 261 ramified structure, 243
power spectral density, 124 ramified structures, 81
power spectrum, 23, 152, 237 random, 3, 160
EEG, 243 force, 188, 192
power-law variable, 52
distributions, 27 random variables
index, 53, 180 independent, 53
pulse, 87 random walk
spectrum, 88, 158 fractional, 185
preadapted, 62 simple, 183
predictability, 119 two-state, 257
predictions, 129, 263 random walks
prefractal, 44 fractional, 24
preleukemic states, 94 simple, 24
Prigogine, I., 37 randomness, 126
Principle Rapp, P., 19, 94, 222, 259
Optimal Design, 35 Rashevsky, N., 35

rate of infection, 214 Richardson, L.F., 42
reactants, 228 Rinzel, J., 227
reconstruction, 211 robot, 108
recovered, 212 Rohrer, F., 33
recurrent epidemics, 215 Ross, B., 189
recursion relation, 134 Rossitti, S., 195
reductionistic, 20 Rossler, O.E., 14
reflecting barrier, 257 Roux, J.C., 228
refractoriness, 117 RR interval, 260
regression Ruelle, D., 123
line, 67 ruler, 201
regulation, 197 rumor, 132
physiologic, 222
physiological, 259 SA node, 115, 232
regulatory mechanism, 99 Saccharomyces cerevisiae, 230
relative dispersion, 195 saturation, 134
relative frequency, 86 Savage, V.M., 66, 71
relaxation Scafetta, N., 105
oscillator, 106 scaling, 8, 32, 105
relaxation oscillator, 98 classical, 29
renormalization exponent, 21, 171
group theory, 86 exponential, 54
renormalization group, 39 index, 204, 206
relations, 27 isometric, 74
renormalizes, 85 parameter, 35, 187
repetitive firing, 223 pathologies, 93
reproduce, 133 power-law, 62
respiration, 203 principle, 39
respiratory relation, 52, 184
arrhythmia, 199 renormalization, 181
respiratory musculature, 37 self-similar, 81
respiratory system, 79 simple, 74
response, 2, 4 traditional, 57
characteristics, 89 scaling
scalar, 5 classical, 39
return map, 239 ideas, 93
reversal potentials, 226 scaling measures, 179
reversible, 144 scaling relation
regular, 29 da Vinci, 20
rhythmic Schaffer, W.M., 215, 258
behavior, 97 schizophrenic
rhythmic movements, 108 symptoms, 94

Schrodinger, E., 81 aperiodic, 154, 244
Schwartz inequality, 161 coherent, 83
Schwartz, I.B., 218 EGG, 154
scientific maturity, 7 electrical, 154
SCPG, 106, 109, 174 physiologic, 79
sea shells, 33 plus noise, 79
second moment transmission, 228
finite, 187 signals
secular growth, 101 fibrillatory, 158
self-similar simian
scaling, 199 simian
Segal, L.A., 134 cortical neurons, 99, 222
SEIR, 225 Simoyi, R.H., 166
SEIR model, 212 single-humped map, 225
self-affine, 171 singularity spectrum, 193
self-affinity, 52 sinoatrial node, 110
self-aggregating behavior, 179 sinus node, 84, 201
self-excitation, 232 six degrees
self-regulating, 99 of separation, 264
self-regulatory, 194 skin temperature, 97
self-similar sleep, 244, 257
branches, 40 sleep-wake
cascade, 89 transitions, 257
scaling, 87 Smith, J.M., 156
structure, 148 Snell, J.L., 132
trajectory, 171 Snell, O., 63
self-similarity, 8, 27, 39 social gathering, 219
self-starting, 99 social gatherings, 263
self-sustained social science, 268
oscillations, 223 sociology, 128, 132
sensitive dependence sodium, 226
on initial conditions, 16, 146 space-filling, 68
sensitivity coefficient, 5 spatial structure, 270
septic species longevity, 65
shock, 278 spectral analysis, 212
Seymour, R.S., 67, 74 spectral decomposition, 81, 199
Shaw, R., 16, 148 spectral exponent, 172
Shaw, R.S., 163 spectral reserve, 94
Shlesinger, M., 55 spectral techniques, 154
shuffled, 126 spectrum, 82
shuffling, 15 1/f-like, 240
signal, 78 multifractal, 197

narrowband, 94 stroboscope
spinal cord transfer function, 231
transection, 102 stroboscopic map, 223
spontaneous excitation, 23 Strogatz, S.H., 263
squid structure function, 172
giant axon, 224 Struzik,Z.R., 105
squirrel monkey subharmonic bifurcation, 156
brain, 222 subharmonic synchronization, 227
stress, 106 super central pattern generator
SRV, 194, 206 SCPG, 105
fluctuations, 105 superior colliculus, 102
stability, 135 superior vena cava, 233
local, 17 surface of section, 216, 229, 240
stable equilibria, 123 susceptibles, 212
stationary, 5, 243 swarms, 263
statistical, 20 Swift, J., 8
artifact, 66 swinging heart, 158
statistical fluctuations sympathetic, 201
origins in AR, 66 synchronization, 262
statistical mechanics synchronize, 103
classical, 119 syncopated, 101
steady state, 31, 134, 155, 228 system
Stein, K.M., 157 dissapative, 17
Stephenson, P.H., 195 system response, 186
stimulation, 24, 223 systems
Stirling’s approximation, 185 cognitive, 17
stochastic process, 52 systems theory, 4
strange, 124 nonlinear dynamics, 21
attractor, 148 Szeto, H.H., 203
strange attractor, 118
strange attractors, 152 taco, 126
stress, 2 tail, 219
natural, 109 Takens, F., 24, 166
psychophysical, 109 tangent
stress relaxation, 190 vector, 149
stretched exponential, 190 Tauberian Theorem, 84
stretching, 126 Tauberian theorem, 238
stretching rate, 151 taxonomy, 243
stride interval temperature, 99, 270
variability, 24 temperature gradient, 120
stride rate variability temporal complexity, 273
SRV, 105, 205 tendrils, 220

tent map, 149 Tsuda, I., 17
terminal tug-of-war, 202
branches, 68 turbulence, 80
thermodynamic, 270 turbulent
thermodynamics, 48 fluid flow, 192
Thompson, D., 27, 33 process, 157
three-body problem, 80 Tuszynski, J.A., 72
threshold, 276
thyroxin, 95 uncertain, 9
time derivative, 83 uncertainty principle
time lag, 185 generalized, 120
time reversibility, 144 uncorrelated, 160
time scale unidirectional, 111
characteristic, 32 universal
time series, 23, 163, 211 scaling, 74
biomedical, 27, 81 universal constant, 143
EEG, 24 universality, 207
fractal, 212 universality assumption, 73
interbeat interval, 196 unpredictability, 119, 177
monofractal, 199 unstable, 136
random, 168, 244
SRV, 105 vaccination, 259
stride interval, 103, 205 van der Mark, J., 98
time trace, 82, 124 van der Pol
topological oscillator, 101
dimension, 42 van der Pol, B., 98
topological complexity, 273 variability
topology, 274 loss, 79
scale-free, 266, 275 variance, 181
total derivative, 121 veins, 58
Tower, D.B., 63 velocity
trachea, 29, 40 oscillator, 99
trajectory, 6, 13, 96, 239 ventilator
transcranial, 197 fractally variable, 204
transcranial Doppler ventilators
ultrasonography, 195 mechanical, 204
transient, 137 ventricles, 233
transmission ventricular
message, 220 fibrillation, 156
trial solution, 180 ventricular myocardium, 84
tricuspid valves, 59 Verhulst equation, 138
trisecting, 40 Verhulst, P.F., 133

vessel-bundle, 70 Weiss, G.W., 265
vestibular, 205 Wenckebach cycles, 117
Vierordt, 205 West, B.J., 8, 19, 55, 59, 66, 114, 194, 205
virtual frequency, 106
viruses, 258 West, D., 20, 64, 66, 73
visual, 205 West, G.B., 66, 71
visual neurons, 79
voltage, 111 white noise, 170
voltage drop, 113 fractional-differenced, 186
voltage pulses, 86 White, C.R., 67, 74
voltage-current, 111 Whitney, H., 24, 166
Wiener process, 188
wake periods, 258 Wiener, N., 80, 89, 153
Walker, S.C., 71 Wiener-Khinchine
walking, 101, 174, 205 relation, 154
three speeds, 103 Williams, S.C., 52
Warton, D.I., 75 Wilson, T., 37
waste, 74 Winfree, A.T., 98
Watts, D.J., 263 Wolf, A., 17
wavelet analysis, 157 WW model, 78
WBE model, 68
weather, 3, 12 yeast extracts, 230
Weaver, W., 179 Yorke, J.A., 213
Weibel, E., 34, 59, 72, 88
Weierstrass function, 49, 180 Zebrowski, J.J., 118
extended, 53, 83 zero frequency, 137
Weierstrass, K., 48 Zhang, R., 195
