
Classical and Quantum Probability

arXiv:math-ph/0002049v1 27 Feb 2000

R. F. Streater,
Dept. of Mathematics, King’s College London,
Strand, WC2R 2LS

25 August 1999.

Abstract
We follow the development of probability theory from the beginning of the
last century, emphasising that quantum theory is really a generalisation of this
theory. The great achievements of probability theory, such as the theory of processes,
generalised random fields, estimation theory and information geometry,
are reviewed. Their quantum versions are then described.
Keywords: Probability, sampling, processes, Markov chains, random fields,
Fisher information, quantum probability, quantum information manifolds.

1 Introduction
There are few mathematical topics that are as badly taught to physicists as probability
theory. Maxwell, Boltzmann and Gibbs were using probabilistic methods long before
the subject was properly established as mathematics. Their language of ensembles,
complexions, fluctuations and most probable state is still used. When quantum theory
came along, the same notions were fitted into the new theory, sometimes leading
to confusion. We review the mathematical development of probability, emphasising
that quantum theory is a generalisation. The approach to history is in the same spirit
as that used by Milligan in [1].
There are three ‘philosophies’ concerning probability. In the easy case, when there
are finitely many possible outcomes to the experiment being considered, Laplace’s
principle of equal ignorance tells us that the probability of each of the outcomes is
the same. In the case of a die with six sides, experiments suggest that the probabilities
are not all exactly equal. Nevertheless, there is not much error if we assume that the
probability of each number is 1/6. An objection to Laplace’s principle in general is
that it is not always clear that the outcome of a particular experiment is a matter of
chance, even when we do not know which outcome will turn up; it could even be that
a particular outcome is inevitable. Thus a more robust version of Laplace’s principle
might be that in events governed by chance, the probability of each possible outcome
is the same. This still leaves open the meaning of the phrase, ‘governed by chance’.

The difficulty of defining the ‘uniform’ distribution when variates take continuous
values is illustrated by Bertrand’s paradox ([2], p. 246). This demolished Laplace’s
principle for continuous variables.
The philosophy of Laplace applied to probability theory might be described as
Platonic. A real die is the shadow of the ideal die, which has perfect sides and exact
probabilities of 1/6 for each outcome. This has a modern form of expression: we
model the real die by the sample space Ω = {1, 2, . . . , 6} whose elements ω are called
outcomes, and assign the probability 1/6 to each. The value of a random variable f
is known if we know the outcome ω; f is therefore a real-valued function on Ω. More
generally, if the sample space is a finite set Ω, an event E is a subset of Ω; we say
that the event has occurred if the ω that occurs lies in E. The probability that E
occurs is the sum of the probabilities of the points in E:
    p(E) = \sum_{ω∈E} p(ω).    (1)

We say that two events, E, F , are independent if p(E ∩ F ) = p(E)p(F ). In this


way the binomial distribution can be derived for the total shown by n dice thrown
independently, and all of Laplace’s probability theory can be derived. It can tell us
what bets to lay on an event E, even when only one trial is going to occur.
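
In modern terms this is easy to make concrete. Here is a minimal sketch in Python of Laplace's scheme for two dice; the particular events E and F are our own illustrative choices, not taken from the text:

```python
from itertools import product
from fractions import Fraction

# Laplace model for two fair dice: each outcome omega gets probability 1/36.
Omega = list(product(range(1, 7), repeat=2))
p = {w: Fraction(1, 36) for w in Omega}

def prob(event):
    """p(E) = sum of p(omega) over omega in E, as in eq. (1)."""
    return sum(p[w] for w in event)

E = {w for w in Omega if w[0] % 2 == 0}   # first die shows an even number
F = {w for w in Omega if sum(w) == 7}     # the total shown is 7

print(prob(E), prob(F), prob(E & F))      # 1/2, 1/6, 1/12
print(prob(E & F) == prob(E) * prob(F))   # True: E and F are independent
```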
Laplace’s method has been successfully applied to statistical mechanics; the space
of states is discretised, thus avoiding Bertrand’s paradox (the choice of bins being
suggested by quantum mechanics). Each bin is said to be equally probable, and some
hypothesis about independence is postulated. Then it is shown that the complexion
(macroscopic state) given by the Gibbs distribution is not just the most probable,
but is overwhelmingly the most probable. The chance of any complexion minutely
different is put at 10^{−170}. The Gibbs distribution is, of course, the equilibrium state;
if it is so probable, how come systems manage to be out of equilibrium, and remain
so for years at a time? This remark is not aimed at Tolman [3], who made it clear
that the assumption of equal probabilities applies only to equilibrium, and is to be
tested against experiment; it passes the test well, but he then spoils it by adding as
a further justification, ‘without this postulate there would be nothing to correspond
to the circumstance that nature does not have any tendency to present us with sys-
tems in conditions which we regard as mechanically entirely possible but statistically
improbable’. The word ‘improbable’ is itself based on Laplace’s assumption!
The second ‘philosophy’ of probability can be described as Aristotelian; it had
taken hold by 1920, and is known as the ‘frequentist’ approach. It is essential that
we can reproduce a long run of independent trials each conducted under exactly the
same experimental conditions. In this respect, the theory makes sense only within a
scientific culture. Suppose that we have one ‘variate’, which may take continuous or
discrete values. The result of a measurement of the variate is assigned to one of a
preassigned set of ‘bins’, which are intervals on the real axis. We repeat a number
of times, to find the histogram, that is, the number n_i of events (out of N trials) in
the i-th bin. If the histogram settles down to a stable shape as we increase N, we
declare that the value of the variate is random (or, random enough). We then define
the probability of the event i to be

    p_i = \lim_{N→∞} n_i/N.    (2)
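
The limit in (2) can of course only be probed with finitely many trials. A small simulation sketch (the fair die and the sample sizes are illustrative choices):

```python
import numpy as np

# Frequentist recipe: bin N independent trials and watch the relative
# frequencies n_i / N settle down as N grows; eq. (2) is a limit that an
# experiment can only approximate.
rng = np.random.default_rng(0)
for N in (100, 10_000, 1_000_000):
    rolls = rng.integers(1, 7, size=N)            # N throws of a fair die
    counts = np.bincount(rolls, minlength=7)[1:]  # n_i for i = 1, ..., 6
    print(N, counts / N)                          # each entry tends to 1/6
```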

This approach avoids the above problems that beset the Laplace philosophy. How-
ever, it is completely useless as mathematics; a ‘definition’ should not depend on an
infinite number of future experimental results. There is not one theorem that can be
proved from this definition. Feller points out that we must avoid confusion between
a definition, and a method of measurement. There is great heuristic value to the
frequentist approach. It is easy to teach [2]; we do not prejudge the possible values
that the variate can have, or the probability of a given value; we can introduce an-
other variate Y , and observe its distribution, and its joint probability distribution
with X; we can by extending this idea get access to the joint probability distribution
of any finite number of variates; we can get some idea as to whether the variates are
random by examining a sequence of independent trials. We can even cover situations
in which two variates are not simultaneously observable, as in quantum mechanics,
by listing only the joint distributions of compatible observables, and omitting those
we cannot measure. If we measure a variate X with n different values xi with relative
frequency pi , we can construct a sample space x1 , . . . , xn , and assign the probability
pi to the occurrence of the outcome xi . Similarly, we can construct a sample space
and probability for any finite set of compatible variates if each measurement records
their values. The observed probabilities are more reliable than assuming all points
are equally probable.
However, there is one grave disadvantage of the approach, apart from not being
mathematics: it is simply a description of data, and has much less predictive power
than Laplace’s method. In particular, the method takes no position on the question as
to what are the possible variates. If Ω has |Ω| = n points, then the random variables
form a vector space, denoted A(Ω), of dimension n, so that at most n random variables
can be linearly independent. No similar constraint holds in the frequentist point of
view. Thus a variate is not the same as a random variable. In fact, it has no definition,
other than the statement that its values are random.
The frequentist approach is the safest one to use in studies involving humans;
social or financial matters are so complicated that it is not likely that a sample
space, Ω1 say, chosen to accommodate the data observed so far, can describe all the
possible new variates and the values available to them. In the frequentist approach,
faced with a new variate, Y , one simply takes the set of possible values of Y , say
Ω2 = {y1 , . . . , ym }, and uses Ω1 × Ω2 as the sample space of the enhanced problem.
In a classical system in physics or chemistry, treated by classical statistical me-
chanics, we want to follow the scientific method: we model the system, do exper-
iments, and reject the model if forced to. In that case, we make another model,
estimate its parameters, and suggest more testing experiments. We want and expect
to be able to make predictions about variates not measured yet. So we must reject
the frequentist approach.
The third philosophy of probability [4] was made clear by Kolmogorov, and com-
bines something of the first two; it is to regard a probability theory as a model, to be
tested against experiment. It is like Plato’s ideal, in that it is based on a specified
sample space Ω; but now the probability p is not determined by pure thought; any p
satisfying the axioms below provides us with a model.
Definition 1.1 Let Ω be a countable space. A map p : Ω → [0, 1] is a probability if
p(ω) ≥ 0 and \sum_ω p(ω) = 1.

The probability of an event E ⊆ Ω, and the concept of independence of two events,


are then as in Laplace’s theory and clearly depend on the choice of p.
A random variable f : Ω → R is chosen to represent the variate being observed,
the particular choice being part of the interpretation of the model. A theoretical idea,
or else the first few experiments on the variate f , allow us to get some guide-lines for Ω
and the values of p. This is the subject of estimation theory. We can judge the validity
of the model (the choices we have made for (Ω, p, f )) by comparing the predictions of
the model with the observed frequencies ni using the theory of significance tests. Both
estimation theory and significance were developed before Kolmogorov’s book. The
founders of these techniques were often frequentists; they realised that one could not
use an extreme frequentist point of view: in estimation, they often postulated that
the data had Gaussian distributions, but with unknown parameters. In significance
testing, to make a start, they assumed a probability distribution for the variate being
measured; this is called the ‘hypothesis H’, which is part of the model; it can be
rejected if the data are significantly unlikely. This has a version within Kolmogorov’s
formulation, in which we are given a probability space, the pair (p, Ω), and model the
variate with a random variable, f . To make contact with the well-established theory
of estimation and significance, we must relate the probability distribution of f to the
probability p. We now remind the reader how this is done.
Given a finite probability space (p, Ω) and a random variable f : Ω → R, the
probability distribution of f is denoted p_f(i), and is determined as follows: let x_i, i =
1, 2, . . . , n be the values that f takes, and let p_f(i) be the probability of the event
{ω : f(ω) = x_i}. That is,

    p_f(i) := \sum_{ω : f(ω)=x_i} p(ω).    (3)

This is what is accessible to experiments when we measure f . The mean of f is


determined by x_i and p_f:

    E_p[f] := \sum_ω p(ω) f(ω) = \sum_i x_i p_f(i),  also written p.f.    (4)

Given two random variables f, g on (Ω, p), we define their joint distribution, denoted
p_{f,g}(i, j), to be

    p_{f,g}(i, j) := p{ω : f(ω) = x_i and g(ω) = y_j}.    (5)

We say two r. v. are independent if the events {f(ω) = x_i} and {g(ω) = y_j} are
independent for all i, j. This is equivalent to the frequentists' version: p_{f,g}(i, j) =
p_f(i) p_g(j). The joint distribution determines p_f and p_g as its marginals, and also all
moments; e.g. the cross-moment E_p[fg] can be shown to be \sum_{ij} x_i y_j p_{f,g}(i, j).
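
A short numerical check of these definitions; the joint table below is invented purely for illustration:

```python
import numpy as np

# A toy joint distribution p_{f,g}(i, j) for f taking values x and g taking
# values y; the entries sum to 1.
x = np.array([0.0, 1.0])
y = np.array([-1.0, 0.0, 2.0])
p_fg = np.array([[0.10, 0.20, 0.10],
                 [0.15, 0.25, 0.20]])   # rows indexed by i, columns by j

p_f = p_fg.sum(axis=1)                  # marginal distribution of f
p_g = p_fg.sum(axis=0)                  # marginal distribution of g
cross = x @ p_fg @ y                    # E_p[fg] = sum_ij x_i y_j p_fg(i, j)
independent = np.allclose(p_fg, np.outer(p_f, p_g))
print(p_f, p_g, cross, independent)
```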

A probability p defines a linear functional on the set A(Ω) by the expectation,


(4): f 7→ Ep [f ]. We shall call any such functional a state: it is linear and positive,
taking the value 1 on the sure function I. The dual space Ad is the set of all linear
functionals, so the states form a subset of Ad ; it is denoted Σ(Ω). Given two states p1
and p2 and 0 < λ < 1, their mixture with probabilities λ and 1−λ, p = λp1 +(1−λ)p2 ,
is again a state. So the states form a convex set.
Whether Ω is countable or not, for a random variable f on (Ω, p) the probability
of the occurrence of a single value f0 might be zero, even when there is an ω0 ∈ Ω with
f (ω0 ) = f0 ; for, p(ω0 ) might be zero. This often happens when Ω is not countable, and
f takes continuous values. Then, more information about the probability measure is
provided by the ‘cumulative’ distribution function

    P_f(x) = p{ω : f(ω) < x}.    (6)

This is an increasing function of x, going from 0 at x = −∞ to 1 at x = ∞. We say


that f possesses a density ρ_f if P_f(x) is differentiable, and we write

    ρ_f(x) = dP_f(x)/dx.    (7)
It is clear that we cannot cope with this subject without a certain amount of real
analysis.
A cumulative probability distribution P_f(x) is determined by its characteristic
function

    C_f(λ) := \int e^{iλx} dP_f(x).    (8)

Here we use the Stieltjes integral. Any characteristic function satisfies

1. C(λ) is continuous;

2. C(0) = 1;

3. C is of positive type:

    \sum_{ij} \bar{z}_i z_j C(λ_j − λ_i) ≥ 0.

Conversely, any function C obeying 1, 2 and 3 is the characteristic function of a


probability distribution; this is Bochner’s theorem. In terms of the original (Ω, p)
and random variable f , the characteristic function is

    C_f(λ) := E_p[e^{iλf}].    (9)

If C_f is analytic in λ around λ = 0, we can easily justify the expansion

    C_f(λ) = \sum_n (iλ)^n E_p[f^n]/n! = \sum_n (iλ)^n M_n/n!;

here, M_n is the n-th moment of f; for this reason, C_f acts as a moment generating
function for the r. v. f. An important variant of this is the cumulant generating
function

    log C_f(λ) = \sum_n (iλ)^n κ_n/n!.

We prefer to keep the imaginary unit in these formulas, since if we drop it the mean
C_f(λ) might not be finite. The cumulants κ_n are determined by induction from the
system

    M_n = \sum_k \sum_{I_k} κ_{n_1} . . . κ_{n_k}.    (10)

Here, {1, 2, . . . , n} = I_1 ∪ I_2 ∪ . . . ∪ I_k is an arbitrary partition into k parts,


including the identity partition, and nj = |Ij |, j = 1, . . . , k. The condition for inde-
pendence, pf,g (i, j) = pf (i)pg (j) for all i, j is equivalent to Cf +g = Cf Cg ; it follows
that then the cumulants of f + g are the sums of those of f and g.
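
The system (10) can be inverted recursively. A sketch, using the standard equivalent recursion M_n = \sum_{k=1}^{n} C(n−1, k−1) κ_k M_{n−k}, which is one common way of organising the partition sum; the test moments are those of a standard Gaussian:

```python
from math import comb

def cumulants_from_moments(M):
    """Given moments M[0..n] with M[0] = 1, return cumulants kappa[0..n]
    via M_m = sum_{k=1}^{m} C(m-1, k-1) * kappa_k * M_{m-k}."""
    n = len(M) - 1
    kappa = [0.0] * (n + 1)
    for m in range(1, n + 1):
        kappa[m] = M[m] - sum(comb(m - 1, k - 1) * kappa[k] * M[m - k]
                              for k in range(1, m))
    return kappa

# Moments of a standard Gaussian: 1, 0, 1, 0, 3, 0, 15.
print(cumulants_from_moments([1, 0, 1, 0, 3, 0, 15]))
# kappa_2 = 1; all other cumulants vanish, as the text asserts.
```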
For a Gaussian distribution, all the cumulants beyond the second are zero. There
are results of the following kind: if all the cumulants κn of a distribution are zero for
n ≥ N , then they are zero beyond n = 2, and so the distribution is Gaussian. These
results use the positivity of the mean of a positive polynomial in f :

    \sum_{ij} \bar{z}_i z_j M_{i+j} = E[\sum_{ij} \bar{z}_i z_j f^{i+j}] = E[|\sum_j z_j f^j|²] ≥ 0.    (11)

Given a set of real numbers {Mn } satisfying the positivity condition in (11), it is not
obvious that Mn is the nth moment of a random variable f , or that if so, f is unique.
This has led to a body of work called the moment problem.
The distribution of a random variable f determines that of any differentiable
function g(f ) of f ; this is also a random variable; the density of the distribution of
g is determined by the usual rule: if g is bijective, so that f is a function of g, the
probability that g lies between y and y + dy is ρg (y)dy, and this occurs if and only
if f lies between x and x + dx, where y = g(x). Therefore ρg dy = ρf dx, giving the
relation
    ρ_g = |df/dg| ρ_f.    (12)

If g is not bijective, but has a local inverse with various branches f_i, then we have
to sum over the contributions |df_i/dg| ρ_f(f_i) of each branch.
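
A numerical check of eq. (12); the choice g = f³ with f uniform on (0, 1) is our own, giving ρ_g(y) = (1/3) y^{−2/3}:

```python
import numpy as np

# Empirical density of g = f^3 versus the change-of-variables formula (12).
rng = np.random.default_rng(1)
f = rng.uniform(0.0, 1.0, size=1_000_000)   # rho_f = 1 on (0, 1)
g = f ** 3

counts, edges = np.histogram(g, bins=50, range=(0.01, 1.0))
hist = counts / (len(g) * np.diff(edges))   # empirical density of g
centres = 0.5 * (edges[:-1] + edges[1:])
predicted = centres ** (-2.0 / 3.0) / 3.0   # |df/dg| * rho_f
print(np.max(np.abs(hist - predicted) / predicted))  # small relative error
```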
The remarkable thing is that the methods of probability theory give good results
in many cases that are not governed by chance, such as the distribution of digits in
π. Another example is the configuration of a chaotic system at the time t, where
t is large, given the initial configuration at time zero. If the initial state is not
specified sufficiently accurately, then the configuration at time t seems to be governed
by chance, although it is not. It was suggested by Krylov [5] that statistical physics
is a successful method exactly in the cases when the underlying dynamics is chaotic.
This will occur when nearby initial points become exponentially far apart as time
progresses, and this is signalled by a positive real part to the dominant eigenvalue of
the linearised dynamics. This largest real part is called the Lyapunov index. We are
talking here about a chaotic theory; actual experimental measurements will always
have further uncertainty, influenced by small effects omitted from the theory. In a non-
chaotic system small forces can be omitted in the first few approximations. However,
in a chaotic system, the inclusion of one such small force can change the outcome of the
calculation at the large time t, making it appear to be random. This is well modelled
by omitting any attempt to include all the actual forces, replacing those omitted by
a ‘noise’, that is, a random term. Thus, we expect chaos to be well-modelled by a
system with increasing uncertainty, as measured by entropy. Kolmogorov, and then
Sinai took up Krylov’s cause, and were able to relate the rate of ‘entropy’ production
to the Lyapunov exponent of the dynamics. However, Ruelle interprets this [6] as an
increase in information, available as time goes by.
Laplace’s problem, of whether to assign equal probabilities to each energy-level of
a system, arises in quantum theory. Krylov takes von Neumann to task for assuming
that the density matrix for the state of a particle with spin produced by a quantum
process should, in the absence of any theory or experiment, be taken to be totally
unpolarised. Krylov says that this is not true for most known processes, as the
polarisation is found to be nonzero, small for some and large for others. Krylov’s
view is that it should be assigned a general density matrix; we can then estimate this
matrix in the light of experiments. This leads to the subject of quantum estimation, for
which there is a body of theory. Krylov believed that physics is not in the gambling
business; we do not second guess the state of the system and follow a strategy of
hedging against wrong guesses; rather, in physics we predict what will happen (with
various probabilities) at a later time, when the initial state is known.
Estimation theory has received an impetus from a modern development, informa-
tion theory. Shannon introduced the entropy of the random variable f taking values
x_i as

    S_f := − \sum_i p_f(x_i) log p_f(x_i).    (13)

Note that Sf does not depend on the actual values that f takes. The distribution with
the maximum possible entropy is easily proved to be the uniform distribution. The
school of probability known as Bayesian therefore argues that if we know nothing
whatever about f it must be assigned the uniform distribution, called the prior.
Thus, Laplace’s intuition gets very respectable support. There is one big problem
with this: the uniform distribution for f is not in general consistent with the uniform
distribution for say g = f 3 , as we see from eq. (12); so the prior depends on the
random variable we choose to name as the one we know nothing about. This echoes
Bertrand’s paradox.
A quantum version of entropy was earlier given by von Neumann. For the classical
case (Ω, p) with Ω countable it reduces to
    S(p) := − \sum_ω p(ω) log p(ω).    (14)

It does not make any reference to a random variable. We may obtain Shannon’s
entropy of a random variable f as the von Neumann entropy of pf regarded as a
probability on the space of values that f takes. Note that Sf = S(p) if f takes
different values at different points of Ω, that is, if f separates the points of Ω. We
then say that f is a sufficient statistic. Sf is in general less than S(p), and it reduces
to zero when f takes only one value. The entropies of Shannon and von Neumann
are not the same concepts, and this difference reflects their different interpretations;
the point ω is the message, and Sf is the information about the message that is on
average conveyed by measuring f ; it cannot exceed S(p), which is the entropy (missing
information) in the original probability space. Naturally, if f is sure it conveys no information
at all. Since Sf depends on the random variable f only through its distribution, it
has a meaning in the frequentist approach. To compute S(p), the model (Ω, p) must
be given, and it does not depend on f . More generally, we can define the Shannon
entropy of a set (f1 , . . . , fn ) as the von Neumann entropy of their joint distribution
on the sample space of their values. Some authors regard the Shannon entropy as the
physical entropy of a reduced description of a physical model. The trouble with this
idea is that the introduction of noise in the measurement of f causes the Shannon
entropy to decrease, instead of to increase as we would want.
A simple example of noise is that caused by a mapping T : Ω → Ω. This defines
a co-action on the set of random variables, f ↦ T*f := f ∘ T, which is in fact
an endomorphism of A. If T is not bijective, there might be points that can be
distinguished by measuring f, but not by measuring T*f; thus S_{T*f} ≤ S_f. This
also holds more generally, when T* is a convex linear sum of such maps, thus: T* =
\sum_i λ_i T_i*. It can be shown that this is the most general stochastic map on A, that
is, a linear map taking I to I and non-negative functions to non-negative functions.


The reduction in the information carried by f in the presence of noise is natural
in telephony. The von Neumann entropy, on the other hand, increases if we add
noise. This is achieved by a bistochastic map T (a stochastic map whose adjoint
is also stochastic). We write it as a right action, thus: p ↦ pT. By the deep
theorem of Birkhoff [7, 8], a bistochastic matrix is a mixture of permutations. Since a
permutation of Ω does not alter S(p), and −p log p is concave, we see that S(pT) ≥ S(p).
Moreover, the von Neumann entropy is not decreased by a reduced description, unlike
the Shannon version. Thus S(p) is the correct concept to represent physical entropy
[9].
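
A quick numerical illustration of this inequality; the permutations, weights and initial state below are arbitrary choices:

```python
import numpy as np

# By Birkhoff's theorem a bistochastic matrix is a mixture of permutations,
# so we build T that way and check S(pT) >= S(p).
rng = np.random.default_rng(2)
n = 5
perms = [np.eye(n)[rng.permutation(n)] for _ in range(4)]
weights = rng.dirichlet(np.ones(4))
T = sum(w * P for w, P in zip(weights, perms))  # bistochastic by construction

def S(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p = rng.dirichlet(np.ones(n))   # a random probability on n points
print(S(p), S(p @ T))           # the second entropy is never smaller
```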
If we are given information about which ω has occurred, the probability on Ω,
called the prior, changes. Suppose that p is the prior. If the information is that the
sample lies in a known subset (event) Ω0 ⊆ Ω, then Bayes’s theorem on conditional
probability is used; the conditional probability is
    p(E|Ω_0) := p(E ∩ Ω_0)/p(Ω_0).    (15)
This is called the posterior probability, and correctly describes the probability among
outcomes all of which lie in Ω0 . A conditional probability satisfies the axioms of
probability, def. (1.1).
This use of information to modify the probability p should not be confused with
estimation theory. There, we do not change p, since after the measurement of in-
dependent samples, we continue to assume that new samples are governed by the
original p. The method of estimation using the principle of maximum entropy pro-
ceeds as follows. Suppose that we know Ω, and f , with |Ω| < ∞ and we are also
told the average of f over a number of independent trials. We can vary p over the
simplex Σ(Ω) to find the point that maximises the entropy of the probability, given
the observed mean value, η say. Thus we use the method of Lagrange multipliers to
maximise

    − \sum_ω p(ω) log p(ω)  subject to  E_p[f] = η.
Gibbs knew that the solution to this is

    p(ω) = Z^{−1} exp(−β f(ω)),    (16)

where Z = \sum_ω e^{−βf(ω)} is the Lagrange multiplier for the normalisation condition
\sum_ω p(ω) = 1, and is called the partition function. The parameter β is the Lagrange
multiplier for the condition p.f := E_p[f] = η, and is determined by it. Then (16) is
the least prejudiced estimate for the probability, given the mean [10, 11].
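
In practice β is found numerically from the constraint E_p[f] = η. A minimal sketch, with an illustrative six-point f and observed mean:

```python
import numpy as np

# Maximum-entropy estimate (16): solve for beta so that E_p[f] = eta,
# using bisection (E_p[f] is monotone decreasing in beta).
f = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
eta = 4.5                                  # the given mean (illustrative)

def mean_f(beta):
    w = np.exp(-beta * f)
    return (f @ w) / w.sum()               # E_p[f] with p = e^{-beta f} / Z

lo, hi = -20.0, 20.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if mean_f(mid) < eta else (mid, hi)

beta = 0.5 * (lo + hi)
p = np.exp(-beta * f); p /= p.sum()
print(beta, p, f @ p)                      # f @ p reproduces eta
```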
The method of maximum entropy solves an important problem in the theory of
estimation. Let X be a variate of which the distribution is known to be one of a
family, M = {pη (i)}η∈R ; we hope to estimate η by measuring X independently m
times. An estimator f is a function of the data x1 , x2 , . . . , xm that is used for this
estimate. Thus f is a function of X, and so is a random variable. Since we do not
know η, to be useful, the estimator must be independent of η. We say an estimator
is unbiased if its mean is the desired parameter, thus:
    p_η.f := \sum_i p_η(i) f(x_i) = η.    (17)

Apart from being unbiased, a good estimator should have a small chance of being
far from the mean; so we are interested in estimators of minimum variance, V =
p_η.[(f − η)²]. For any probability p_η ∈ M, define the Fisher information as [12, 13]

    G = p_η.[(∂ log p_η/∂η)²].    (18)

We recognise this as the variance of the random variable Y = ∂ log p_η/∂η. The Cramer-
Rao theorem puts limits on the smallness of the variance V of an estimator f:
Theorem 1.2
    V ≥ G^{−1}.    (19)
For the proof, differentiate (17) with respect to η, to get
    \sum_i (∂p_η(i)/∂η) f(x_i) = 1.

Now use ∂/∂η[\sum_i p_η(i)] = 0, and rearrange, to get

    \sum_i p_η(i) (∂ log p_η(i)/∂η) (f(x_i) − η) = 1.    (20)

This is the correlation between the random variables Y and f ; the (positive-semidefinite)
covariance matrix is therefore

    \begin{pmatrix} G & 1 \\ 1 & V \end{pmatrix}.    (21)

Schwarz’s inequality then gives (19).
The minimum variance allowed by (19) occurs when the Schwarz inequality is an
equality, which happens when the factors in (20) are proportional (with ratio dependent
on η). Calling this factor −∂ξ/∂η, we see that the distribution of minimum variance
must satisfy

    log p_η(i) = − \int^η (∂ξ/∂η)(f(x_i) − η) dη = −ξ f(x_i) − ψ,    (22)

showing that a necessary and sufficient condition is that {pη } be the exponential
family.
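
For a one-parameter example the bound can be checked directly; the Bernoulli family is exponential, so the estimator f(x) = x should saturate (19):

```python
import numpy as np

# Cramer-Rao for a Bernoulli variate with unknown mean eta.
eta = 0.3
p = np.array([1 - eta, eta])                   # p_eta(0), p_eta(1)
x = np.array([0.0, 1.0])                       # f(x) = x is unbiased

dlogp = np.array([-1 / (1 - eta), 1 / eta])    # d(log p_eta)/d(eta) pointwise
G = p @ dlogp**2                               # Fisher information, eq. (18)
V = p @ (x - eta) ** 2                         # variance of the estimator
print(G, V, 1 / G)                             # V equals G^{-1} exactly
```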
In the case of several parameters η1 , . . . ηn , we have estimators f1 , . . . , fn , which
can be taken to be linearly independent, but need not be functionally independent.
The state of maximum entropy, given the means Ep [fi ] = ηi , i = 1, . . . , n is easily
shown to be of the form

    p(ω) = Z^{−1} exp(−{ξ^1 f_1(ω) + . . . + ξ^n f_n(ω)}) = Z^{−1} exp(−f), say,    (23)

where the Lagrange multipliers ξi are determined by the given conditions on the
means. The set of probabilities of the form eq. (23) form the set M called the info
manifold, or the exponential family, determined by Span{fi }. We can regard {ξi }
or indeed f as coordinates, called canonical; or we can regard {ηi } or indeed p as
coordinates, called the expectation coordinates. In this case the Fisher information
matrix is defined to be

    G^{ij} := p.[(∂ log p/∂η_i)(∂ log p/∂η_j)].    (24)

Then the Cramer-Rao inequality (19) becomes a matrix inequality, where V is the
covariance matrix Vij := p.[(fi − ηi )(fj − ηj )]. Equality holds only if G = V −1 , which
leads to the exponential family.
Rao showed that G defines a Riemannian metric on the tangent spaces of M [14];
as such, its components depend on the coordinates chosen for the tangent space and it
transforms as a tensor under changes in variables. At the point p ∈ M, a vector in the
tangent space is given in canonical coordinates by a random variable f in the span
of the ‘score variables’ f̂_j := f_j − η_j. Writing f = \sum_k ξ^k f̂_k introduces contravariant
components ξ^k. These are dual to the η_j, which are covariant components. The
covariant metric is the covariance matrix

    G_{ij} = G(f̂_i, f̂_j) = E_p[f̂_i f̂_j].    (25)

It is the inverse of the contravariant G^{ij}, which explains why we get equality in (19).
The Massieu function ψ := log Z, where Z is the partition function, is related to
the free energy; it is the generating function for the cumulants; so we have
    η_j = −∂ψ/∂ξ^j,    (26)
    G_{ij} := V_{ij} = ∂²ψ/∂ξ^i∂ξ^j.    (27)
The entropy is the Legendre transform of ψ, and its second variation is the Fisher
information matrix G^{ij}, the metric in the coordinates η.
Amari showed that M is furnished with a pair of affine flat connections, for which
the global affine coordinates are ξ i and ηi [15]. These connections are not metric
connections, but are dual relative to G. An important role in information geometry
is played by the relative information S(p|p′) := \sum_ω p(ω)(log p(ω) − log p′(ω)). This
distinguishes between the points p and p′ in M, in that S(p|p′) ≥ 0 and vanishes only
when p = p′ . For a modern version, see [16].
The observables form the algebra A(Ω) in which multiplication is pointwise:
(f g)(ω) := f (ω)g(ω); the states lie in its dual. Thus, states and observables are
not the same kind of thing, and they transform as duals under stochastic maps. How-
ever, states like observables are functions of ω; to distinguish them we can write p(ω)
for a state and (ω)f for an observable. If |Ω| < ∞, either can be identified with an
element of the formal vector space spanned by Ω, thus: \sum_ω α(ω) ω ↔ α, whether α is
regarded as an observable or a state. Then M is the interior of the convex hull of Ω.


The permutation group of Ω acts by right action ω 7→ ωT . Its inverse ω 7→ ωT −1 is a
co-action of the group (its product law is the opposite of that of the group) and so can
be written as a left action: ω 7→ T ω := ωT −1 . These induce a right action on prob-
abilities, and a left action on observables, by pT (ω) := p(T ω) and (ω)T f := (ωT )f ,
the latter written without the dual symbol ∗ . These express associativity, as does the
dual relation pT.f = p.T f .

These definitions can be extended to any map T : Ω → Ω, whether invertible or
not: we define the action on probabilities using pT(ω) := p({ω}T^{−1}), where {ω}T^{−1} is the inverse image
of the point-set {ω}. Every algebraic endomorphism of A is of the form f 7→ T f for
some map T : Ω → Ω, and these make up exactly the extreme points of the convex
set of stochastic maps.
In infinite dimensions, there is more than one useful topology on the states and
observables. The modern view [16] is that the state p and the observable − log p are
merely alternative coordinates for a point in the info manifold. The natural class of
charts are related by monotone, convex functions, of which the stochastic maps, [17],
as well as the non-linear maps p 7→ − log p and p 7→ pα , 0 < α < 1 are examples.
An active field of research is to set up quantum analogues of all this [18, 19, 20, 21].

2 From Bachelier to Wiener


In 1900, Bachelier proposed a random model of the stock market [22]; the idea was
that the decision to buy or sell a stock is randomly taken by independent investors.
Let us suppose that the chance λ that the price goes up one unit dx is the same as that
for going down, during any unit trading period dt. Let X ∈ Z be the random price,
and p(x, t) be the probability that the price is x at time t; then the new probability
p(x, t + dt) can be unchanged, or can change due to a movement down from x + dx
or a movement up from x − dx. The probabilities of these are, respectively, 1 − 2λ, λ
and λ. Thus we get the relation

p(x, t + dt) = (1 − 2λ)p(x, t) + λp(x + dx, t) + λp(x − dx, t). (28)

Let T be the tridiagonal infinite matrix {λ, 1 − 2λ, λ}. Then the row (T_{xy})_{y∈Z} gives the
conditional probability that the price will be y at time (N + 1)dt, given that it is x
at time N dt. In fact, T is a stochastic matrix, which happens to be symmetric.
Suppose that at t = 0 the price is x0 ; then in time N dt, the price will follow the
path γ := x0 7→ x1 7→ . . . 7→ xN with probability

p(γ) = T (x0 , x1 )T (x1 , x2 ) . . . T (xN −1 , xN ). (29)

This is called the random walk on Z determined by T , starting at x0 . The set of


allowed paths starting at x_0 is a finite subset of Ω = Z^N. p(γ) is a probability on
Ω, and the structure is called a Markov chain. An alternative point of view is to
start with p0 ∈ Σ(Z), and to follow the path in Σ(Z) given by the time evolution.
By Bayes’s law, the probability that at time t = 1 the particle is at x_1, whatever its
initial position, is p(x_1, 1) = \sum_{x_0} p(x_0, 0) T(x_0, x_1); this can be written as the matrix
product p_0 T, where p_t is a row vector made from the components of p(x_t, t). By
induction, the probability that at time N the particle is at x is p0 T N . In this way, a
Markov chain is described by a semi-group of stochastic maps T(t) := T^t acting on
Σ(Λ). Obviously

    T(0) = 1,    (30)
    T(s)T(t) = T(s + t),  s, t ∈ N.    (31)

One of the themes of probability theory is the relationship between a semi-group


of stochastic maps and a probability on the corresponding path space. The latter
is called a dilation of the former. Since T is independent of time, the chain is said
to be stationary; if we limit the allowed space to be a finite set Λ ⊆ Z, we get a
finite Markov chain, in which case there is at least one stationary distribution p∗ ; this
means that p∗ T = p∗ , so that 1 is a left eigenvalue of T . If some power of T has all
its matrix elements positive, then the Perron-Frobenius theorem tells us that 1 is a
simple eigen-value, and all the others have modulus less than 1. One can then show
that pT n → p∗ as n → ∞; the system converges exponentially to equilibrium. We
then say that the dynamics is mixing. There are similar results in infinite dimensions,
but to get exponential convergence we need to show that there is a spectral gap. This
means that 1 is simple and lies a finite distance from the next eigenvalue of T T ∗ .
To prove this in the case at hand is usually the key to the study of the long-time
behaviour. The Markov property is that the probability of getting to x at time t + 1
depends only on where the particle was at time t, and not on the previous path. The
study of Markov chains was started in the 19th century, and is a huge subject.
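
A small finite example of this convergence; the reflecting random walk below is our own choice of chain:

```python
import numpy as np

# Random walk on 5 sites with reflecting ends (lambda = 1/4): eigenvalue 1
# of T is simple, the rest lie inside the unit disc, and p T^n -> p*.
lam, n = 0.25, 5
T = np.zeros((n, n))
for x in range(n):
    T[x, x] = 1 - 2 * lam
    T[x, max(x - 1, 0)] += lam        # step down, reflected at the left end
    T[x, min(x + 1, n - 1)] += lam    # step up, reflected at the right end

p = np.zeros(n); p[0] = 1.0           # start surely at site 0
for _ in range(500):
    p = p @ T
print(p)                                       # close to the uniform p*
print(sorted(abs(np.linalg.eigvals(T)))[-2])   # second eigenvalue modulus < 1
```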
Fick obtained an equation similar to (28) for the diffusion of particles in one
dimension. If dx and dt become small such that (dx)2 /dt → a, a finite limit, we
say the system is following the diffusion limit. Rearranging, and taking the diffusion
limit, Fick obtained the heat equation for the probability density, which we call ρ:

    ∂ρ/∂t = κ ∂²ρ/∂x².    (32)
Here, κ = aλ. This is not a very good model of the market; apart from the omission
of drift, the gains in price should grow with the overall price. As it is, negative prices
are possible.
The heat equation (32) can be written in the form of a conservation law:

    ∂ρ/∂t + div j(x, t) = 0,    (33)
where j(x, t) = −κ∇ρ. At this stage, mathematicians did not have the continuous
version of the sample space Ω; this was to be Wiener’s great construction.
In his celebrated work of 1905 [23], Einstein also used (32) to describe the Brow-
nian motion of small particles in a warm liquid. He was mindful of Stokes’s law of
diffusion; this says that in a viscous liquid a small particle under a constant force,
such as gravity, will increase its speed towards a terminal velocity v say, which is
proportional to the force. Einstein required that in equilibrium the current vρ due
to this flow should balance the diffusion due to the density gradient, so that steady
state should obey
− κ∇ρ + vρ = 0. (34)
The solution to this in the case of gravity, where v = −|v| in the z-direction, is

    ρ(x, y, z) = const. e^{−|v|z/κ},    (35)

and this should be the Maxwell-Boltzmann law at the temperature Θ of the liquid,

    ρ(x) = Z^{−1} e^{−mgz/(k_B Θ)}.    (36)

Einstein thus obtained the famous Einstein relation

    F = k_B Θ v/κ.    (37)

His treatment is not complete, since he omitted the drift term in the diffusion equa-
tion! See §4, (1) in [23]. In a detailed study, Smoluchowski [24] wrote down the
diffusion equation with drift
    ∂ρ/∂t = κ ∂²ρ/∂x² − v ∂ρ/∂x,    (38)

now known as the Smoluchowski equation; it is a special case of the Fokker-Planck
or backward Kolmogorov equation. He solved this by using the method of images
for several systems with boundaries, such as the mass of air above the ground, and
obtained the approach to the stationary state expected by Einstein.
It was known that one can solve eq. (38) exactly, to fit a more or less arbitrary
initial function ρ(x, 0) = f (x), by using the Green function (in one dimension)
    G(x, t) := [4πκt]^{−1/2} e^{−(x−vt)²/(4κt)}.    (39)

This satisfies eq. (38), and converges in the sense of distributions to the Dirac δ-
function as t → 0. Then
    ρ(x, t) = \int_{−∞}^{∞} G(x − y, t) f(y) dy    (40)

satisfies eq. (38) and the boundary condition. The operator whose kernel is G is the
continuum analogue of the matrix T^n of the Markov chain.
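
A sketch of the solution formula (40) by direct numerical convolution; κ, v, t and the initial profile are all illustrative:

```python
import numpy as np

# Evolve an initial density under the Smoluchowski equation (38) by
# convolving it with the Green function (39).
kappa, v, t = 0.5, 1.0, 0.8
x = np.linspace(-10, 10, 401)
dx = x[1] - x[0]

def G(x, t):
    return np.exp(-(x - v * t) ** 2 / (4 * kappa * t)) / np.sqrt(4 * np.pi * kappa * t)

f = np.where(np.abs(x) < 1.0, 0.5, 0.0)   # initial density, total mass 1
rho = np.array([np.sum(G(xi - x, t) * f) * dx for xi in x])   # eq. (40)
print(np.sum(rho) * dx)   # the total probability is conserved (about 1)
```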
When the force and temperature are slowly varying, we get the coupled system
    ∂ρ/∂t + div J = 0;    (41)
    ∂Θ/∂t = κ′ div ∇Θ + κ F·F/(k_B Θ).    (42)
Here, J(x, t) = −κ∇(ρ + V ρ/Θ), where V is the potential giving rise to the force F .
The source term in the heat equation is F.J, the power of the external force supplied
to the particle, all of which is converted into heat. This system obeys the first and
second laws of thermodynamics [25].
Consider now the solution (40) to (38). Because G is positive, the density remains
positive for all time, and the conservation law shows that the integral of ρ over space
is constant. So we get a flow through the space of probabilities. The question arises,
is there a process in continuous time associated with the Smoluchowski equation?
The answer is yes, and this was the result of the work of Wiener, and later, Ito. An
alternative idea was introduced by Langevin, who considered Newton’s laws, in which
a part of the external force, denoted F , is random; friction enters as a damping force
proportional to the velocity, parametrised by γ > 0. Thus his equation is

    d²x/dt² = −∂V/∂x − γẋ + F(t).    (43)
This is the equation for a single particle, but as F is random, the position x(t) becomes
random as time goes by, even if its initial condition is given. Statistical properties of x
are determined by those of F ; the relation of these to the Smoluchowski equation were
studied by Fokker and Planck, but were fully understood only in terms of stochastic
calculus. One might assume that F is Gaussian distributed, and is of mean zero, with
independent values at different times. This would now be described as white noise.
Langevin’s work started the enormous field of stochastic differential equations.
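
A standard way to simulate (43) is the Euler-Maruyama scheme; the sketch below takes V = x²/2 and models F(t) dt as σ dW with dW ∼ N(0, dt), all constants being illustrative:

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equation (43) with V = x^2/2.
rng = np.random.default_rng(3)
gamma, sigma, dt, steps = 1.0, 0.5, 1e-3, 100_000

x, v = 1.0, 0.0
xs = np.empty(steps)
for k in range(steps):
    v += (-x - gamma * v) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x += v * dt
    xs[k] = x
print(xs.mean(), xs.var())   # x relaxes to a stationary distribution
```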
In 1904 Lebesgue tried to set up a general theory in which every subset of [0, 1] is
assigned a measure [26]. The very next year, G. Vitali showed that the scheme was
inconsistent [27]. Hausdorff [28] and Banach and Tarski showed that the measure
could not be additive [29]. The point is that some sets are so bad they cannot be
assigned a measure, even a finitely additive one. This led to the concept of measurable
set. Let us start with the Borel measurable sets on [0, 1].
Let Ω = [0, 1]; let us say that a collection B of subsets of Ω form a tribe if

1. Ω ∈ B

2. whenever B ∈ B, we have B c := Ω − B ∈ B;

3. whenever A ∈ B and B ∈ B, we have A ∪ B ∈ B.

Such a collection of subsets is also called a Boolean ring, or a Boolean algebra. The
collection B is actually a ring, with multiplication given by intersection, and addition
given by symmetric difference, that is A + B := A ∪ B − A ∩ B. It is also an algebra
in the technical modern sense, but trivially in that any ring is an algebra over the
field consisting of two numbers, 0 and 1. Since this ring structure plays no role in the
theory, we prefer not to furnish B with the extra structure ‘+’, and will use the word
‘tribe’ instead.
We define a σ-tribe to be a collection B of sets Bi ⊆ Ω such that 3. above is
replaced by
3∞ . if Bi ∈ B is a countable family of disjoint sets, then ∪ Bi ∈ B.

The set of all subsets of a set Ω = [0, 1] is obviously a σ-tribe, and indeed satisfies
uncountable additivity as well. This σ-tribe is called the power set of Ω. But, as we
saw, there are no useful definitions of measure on the power set. Another easy case
is the collection of all countable subsets of Ω: the union of a countable collection of
countable subsets is countable. However, any countable set has length zero, since it
can be covered by a sequence of intervals of lengths ≤ ǫ/2, ǫ/4, ǫ/8, . . ., of total length
ǫ. Since ǫ can be anything, the set has length zero. To get some sets of non-zero
length, let us consider the tribe B0 of all finite disjoint unions of open, closed and
half-open intervals. We could add to B0 all countable unions of sets in B0 , and all
complements in Ω of sets in the tribe so obtained. Call this B1 . Then we would need
to consider the collection of countable unions of sets in B1 , and their complements,
to get a new tribe B2 , and so on. Does this end up with a well-defined σ-tribe? The
following argument does the trick. Let G be any σ-tribe containing all sets in B0 , and
let C be the set of all such σ-tribes. Then C is non-empty, as it contains the power
set at least. Then form

    B = \bigcap_{G∈C} G.    (44)

That is, B contains those subsets of Ω that lie in all σ-tribes G, and no other subsets.
In particular, B contains all subsets in B0 , B1 etc. In fact, by using the techniques of
set theory, one can prove that B is the smallest σ-tribe containing all the open intervals
in Ω = [0, 1]; it is called the Borel tribe. One can ask whether we have arrived at the
power set after all, or have something without the pathological sets. That B contains
only nice sets follows from the construction of a countably additive measure on its sets, namely the
Lebesgue measure.
A finitely additive measure on a tribe B is a map µ : B → R+ ∪ {+∞} such that

µ(A ∪ B) = µ(A) + µ(B) for all disjoint A, B ∈ B.

If µ(Ω) = 1, it is a finitely additive probability measure. To do analysis, we must be


able to take some limits, and so we now assume that B is a σ-tribe.
A probability measure on (Ω, B) is a map µ : B → R+ such that

1. µ(B) ≥ 0 for all B ∈ B;

2. µ(Ω) = 1;

3. if Bi is a countable collection of disjoint sets in B, then


    µ(∪_i B_i) = \sum_i µ(B_i).

Considering the tribe B0 of finite unions of disjoint open, closed and half-open inter-
vals, we can define the Lebesgue measure of B ∈ B0 to be the sum of the usual lengths
of the intervals involved. It is then proved that there is a countably additive measure
on the Borel σ-tribe, which agrees with the length on the intervals. This measure is
called the Lebesgue measure.
It is sometimes useful to extend the concept of measure to unbounded sets such
as R, whose total length is infinite. For this, we just drop axiom 2. above.
So much for the measure; integration theory needs a remark as well. Suppose
that we have a function y = f (x), where x ∈ [0, 1] and y is real-valued and bounded,
and we seek a way of finding the area under the graph of y against x. In Riemann’s
method of integration we divide the x-axis into a large number of small intervals,
[0, x1 ], (x1 , x2 ], . . . , (xN , 1], and define yi to be the smallest value of y in the interval
(xi , xi+1 ] and Yi to be the largest value. Now define the two approximations to
the area, known as the upper sum and the lower sum, R_+ = \sum_i Y_i(x_{i+1} − x_i) and
R_− = \sum_i y_i(x_{i+1} − x_i). As we refine the subdivision, R_+ decreases and R_− increases.

If the limits of these are equal, we say that the function is Riemann-integrable, and
take their common value as the area under the curve y = f (x), 0 ≤ x ≤ 1. One shows
that continuous functions are integrable, and can establish the fundamental theorem
of the calculus; a generalisation, called the Riemann-Stieltjes integral, can be defined,
if we replace x_{i+1} − x_i by P(x_{i+1}) − P(x_i), where P is an increasing function of bounded
variation, continuous from the left. We write the integral as \int y(x) dP(x). To define
the integral of unbounded functions, various limiting methods were invented. The
theory is not really satisfactory.
Lebesgue introduced a new form of integration: compared with Riemann’s method,
it is done the other way round. As the first step, only positive functions are consid-
ered. Then, we divide the y-axis into intervals ([0, y1 ], (y1 , y2 ], . . . , (yN , ∞)), and for
each interval, look for the inverse image of each interval under the map f . That is, we
consider the subset of the x-axis consisting of x such that f (x) ∈ (yi , yi+1 ]. This set,
denoted by f −1 (yi , yi+1 ] := Bi , may consist of many pieces, and so will not always be
an interval. We require, however that it should be a set in the Borel σ-tribe B; if this
holds for every subdivision of the y-axis into intervals, we say that the function f is
B-measurable. The set Bi will have a ‘length’, namely, its Lebesgue measure, µ(Bi ).
We approximate the area under the graph of f by the sum
    L(f) := \sum_i y_i µ(B_i).

This is positive and increases as we refine the partition of the y-axis. If its supremum
over all partitions is finite, we say that f is Lebesgue-integrable, and write
    \int_0^1 f(x) dx = sup L(f).    (45)

We can integrate functions that are not positive, provided that the positive and neg-
ative parts are separately integrable, and we integrate complex functions by treating
the real and imaginary parts separately. This generalises the Riemann integral in
that any Riemann-integrable function is Lebesgue integrable, and then both versions
give the same answer.
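
A crude numerical rendering of Lebesgue's recipe for f(x) = x² on [0, 1], with a fine grid standing in for the measure µ:

```python
import numpy as np

# Lebesgue-style lower sum: slice the y-axis, measure the level sets
# B_i = f^{-1}(y_i, y_{i+1}] and form L(f) = sum_i y_i mu(B_i).
f = lambda x: x ** 2
x = np.linspace(0.0, 1.0, 1_000_001)
mu = 1.0 / (len(x) - 1)                  # measure carried by each grid cell
y_edges = np.linspace(0.0, 1.0, 201)     # partition of the y-axis

vals = f(x[:-1])
L = sum(y_lo * np.count_nonzero((y_lo < vals) & (vals <= y_hi)) * mu
        for y_lo, y_hi in zip(y_edges[:-1], y_edges[1:]))
print(L)   # increases towards 1/3 as the y-partition is refined
```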

Lebesgue integration has the following easy generalisation, which is important for
probability. Suppose that Ω is any set, provided with a σ-tribe B; the pair (Ω, B) is
called a measurable space. A real-valued function is said to be B-measurable if the
inverse image of every open interval lies in B:

f −1 (y1 , y2 ) := {ω ∈ Ω : y1 < f (ω) < y2 } ∈ B.

A random variable is then simply a real-valued B-measurable function on Ω. Given


a measure µ on (Ω, B), not necessarily of finite total measure, we can regard as the
same random variable f two that differ only on a set of µ-measure zero; they are
called versions of f . The set of all bounded random variables forms a commutative
algebra A(Ω) with norm ||f||_∞ := inf sup_ω |f(ω)|; here, the inf is taken over all versions
of f . The sets in B are called events. The integral of a positive measurable function
(with respect to the measure µ) is defined similarly to the case when Ω = R. If µ is
a probability measure, this integral is called the mean µ.f of f in the state µ, and
if µ.|f| < ∞ we write f ∈ L¹(Ω, B, µ). More generally, we write f ∈ L^p(Ω, B, µ),
1 ≤ p < ∞, if f is B-measurable and |f|^p is integrable. These are Banach spaces
with norm ||f||_p := (\int |f(ω)|^p dµ)^{1/p}. The probability of an event B is taken to be
µ(B). Each measure µ defines an element of the dual space of A by the linear form
f ↦ \int f dµ.
We have remarked that the original motivation for introducing the σ-tribe was
to avoid pathology. However, the concept has been very useful in a heuristic way,
to describe the information carried by events and observables in a random theory
based on a measure space (Ω, B, µ); in particular, it is useful to consider a sub-tribe
or sub-σ-tribe, of B. Suppose that B ∈ B is an event; it is determined by its indicator
function χ_B(ω), which is 1 if ω ∈ B and zero outside B. If µ(B) ≠ 0, 1 and f is
measurable, we can define the conditional expectation

    E[f|B] = \sum_ω f(ω) µ(ω|B).    (46)

We may also find the conditional probability of A ∈ B, given that B did not happen:
µ(A|B^c) = µ(A ∩ B^c)/µ(B^c), and the corresponding conditional expectation

    E[f|B^c] = \sum_ω f(ω) µ(ω|B^c).    (47)

We may regard the pair of numbers {E[f|B], E[f|B^c]} as defining a simple mea-
surable function on Ω, equal to E[f|B] if ω ∈ B and to E[f|B^c] if ω ∉ B. Let us
now generalise this idea. Let B_1, . . . , B_n ∈ B be disjoint measurable sets such that
µ(B_j) ≠ 0 for all j, and µ(∪_j B_j) = 1. These sets generate a tribe, say B_0 (by various
unions; there are 2^n such unions). If f is measurable, the functions on Ω defined by

    F_f(ω) = E[f|B_j]  if ω ∈ B_j    (48)
are measurable relative to B0 . They take constant values, E[f |Bj ] on each Bj and so
can be written

    F_f(ω) = \sum_j χ_j(ω) c_j,  where c_j = E[f|B_j].    (49)

Conversely, every function F , measurable relative to B0 , has this form for some {cj }.
The map, f 7→ Ff , is linear and is called the conditional expectation of f given B0 .
This map leaves invariant the vector space of B0 -measurable functions, and indeed is
the orthogonal projection of L2 (Ω, B, µ) onto L2 (Ω, B0 , µ).
The tribe B0 tells how fine was the division into the sets Bj , and determines how
much detail can be obtained from the functions that are B0 -measurable. From the
fact that F_f is the orthogonal projection, we see that E[f|B_0] is the best approximation
(in the L²-sense) to f by functions that are B_0-measurable.
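
A small numerical illustration of eqs. (48), (49) and the projection property; the sample space, measure, function and partition are all invented:

```python
import numpy as np

# Conditional expectation given the tribe B_0 generated by a partition:
# E[f|B_0] is constant on each block B_j, with value c_j = E[f|B_j].
mu = np.array([0.10, 0.20, 0.10, 0.25, 0.20, 0.15])   # probability on 6 points
f = np.array([3.0, -1.0, 4.0, 0.0, 2.0, 5.0])
blocks = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]   # the B_j

F = np.empty_like(f)
for B in blocks:
    F[B] = (mu[B] @ f[B]) / mu[B].sum()   # c_j = E[f|B_j]

# F is the orthogonal projection of f in L^2(mu): the residual f - F is
# orthogonal to every B_0-measurable function, in particular to F itself.
print(F, mu @ ((f - F) * F))              # the second number is ~ 0
```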
Consider for example the price of a stock at time t, where t is a non-negative
integer; {S(t)}_{t≥0} is then a family of random variables on Ω, and while we can find
out the prices up to the present time, we cannot know the future. Suppose that t = N
is the present. The information contained in the knowledge of the prices at N + 1
previous times, namely S(t = 0) = s0 , S(t = 1) = s1 , . . . , S(t = N ) = sN , selects in
Ω a particular level set of these functions: this is the event

{ω ∈ Ω : S(0)(ω) = s0 . . . S(N )(ω) = sN }

Since we assume that S(t) are B-measurable, this set lies in B, by the intersection
property. The same for any other possible set of values of these observations. There
is a smallest σ-tribe with respect to which all these functions are measurable, and in
fact, this σ-tribe is generated by all the level sets described above. Call this B≤N . The
set of all random variables that are B≤N -measurable is exactly the set of functions
of the data S(0), . . . , S(N ), measurable in the Lebesgue sense; they can therefore be
computed from the data we have access to.
The increasing family {B≤n } is called the filtration generated by the process. It
provides a neat formulation of the Markov condition for a process Xn ; let Bn be the
σ-tribe generated by the r. v. Xn . Then a process is called Markovian if

E[Xn |B≤m ] = E[Xn |Bm ] if n ≥ m. (50)

The idea is that the information contained in Xm , the present value, tells us as much
about the future as the whole previous history. Consider again the semigroup {T n }
of stochastic maps, acting on Σ(Z) in one time-step as in eq. (28). One can check
that if p0 is the initial probability distribution of the initial point of the path, then

    p_0 T^n = E_{p_n}[x_n | B_0].    (51)

Here, γ = (x0 , . . . , xn ) and pn (γ) = p0 (x0 )p(γ) where p(γ) is as in (29).


Wiener [30] was able to put the Bachelier-Einstein diffusion theory on a rigorous
footing. He has to define, first, the sample space Ω; then he needs a σ-tribe B
and a measure on it; he also needs a family of B-measurable functions Xt (ω) whose
distribution has density of probability equal to ρ(x, t) obeying the diffusion equation.
Finally, he needs to get the continuum version of eq. (51).
Let Ω be the set of all continuous functions ω of t ≥ 0 with ω(0) = 0; these are
called ‘Brownian paths’. Let (x1 , y1 ) be an interval of the real line, which we call a
gate; we now consider the subset of paths which pass through the gate at time t1 .
This set is called the cylinder set based on (x1 , y1 ). In symbols, it is

{ω ∈ Ω : x1 < ω(t1 ) < y1 }

The ω(t) for various t are coordinates of the point ω; we have a condition on only
one of the coordinates; the rest run over the real line. Consider another cylinder set,
similarly constructed at time t2 > t1 , based on another open interval (x2 , y2 ). The
intersection of these sets is a cylinder set based on rectangle (x1 , y1 ) × (x2 , y2 ) in the
plane made by the coordinates ω(t1 ), ω(t2 ). The path ω(t) passes through the first
gate at time t1 and the second at time t2 ; it is a slalom. Consider the collection of
subsets of Ω consisting of all these cylinder sets defined by slaloms with any finite
number of gates, at any selection of different positive times. The finite unions of
these form a tribe. The smallest σ-tribe B containing all these is the one we choose,
so obtaining the measurable space (Ω, B).
We first define a finitely additive measure on the tribe of cylinder sets. It is enough
to give the measure of a general cylinder set, and to use the finite additivity. Starting
at x = 0, the probability density that a diffusing particle reaches x1 at time t1 is
taken to be the Gaussian given by the Green function; thus the probability of lying
in the interval (x_1, y_1) is

    Prob{ω(t_1) ∈ (x_1, y_1) | ω(0) = 0} = (4πκt_1)^{−1/2} \int_{x_1}^{y_1} e^{−x²/(4κt_1)} dx
                                         = \int_{x_1}^{y_1} G(x, t_1) dx.    (52)

The probability that the path goes through two gates, (x1 , y1 ) at t1 and (x2 , y2 ) at t2
is defined to be

    Prob{ω(t_1) ∈ (x_1, y_1) and ω(t_2) ∈ (x_2, y_2) | ω(0) = 0}
        = \int_{x_1}^{y_1} dx \int_{x_2}^{y_2} dx′ G(x, t_1) G(x′ − x, t_2 − t_1).    (53)

This can be interpreted as Bayes’s theorem, in which G is the conditional probability


density. Similarly, the probability of any cylinder set, based on a finite set of gates,
can be given. The probability is the same, whether the gates are open, closed or
half-open. We would like the measure we are constructing to be at least finitely
additive. Thus we take the measure of the union of two disjoint cylinder sets to be
the sum of the measures we have just given them individually. A possible problem
arises if we add together infinitely many gates at time t1 to make up the whole line;
for, we would like our measure to be countably additive, and we need the consistency
condition between the two ways to define the probability of reaching the gate (x2 , y2 ):
from 0 directly, with no gate at t1 , as given by eq. (52), or as the sum over all paths
going through any complete set of disjoint gates at t1 , as got by summing eq. (53).
Indeed, we do get the same answer, because of the propagating property of G:
    \int_{−∞}^{∞} dx′ G(x − x′, t_1) G(x′ − y, t_2 − t_1) = G(x − y, t_2).    (54)

This is a continuous version of the obvious property of the stochastic matrices T n


of a Markov chain, namely T m T n = T m+n , in which the matrix product, expressed
as the sum over an intermediate index, is replaced by the integral over the point x′ .
Thus our equation just expresses the semi-group property of the time-evolution of a
first-order equation, here the heat equation. It is seen here as the main point which
establishes the additivity of the finitely additive measure we have constructed on the
tribe of cylinder sets.
Let us define B[s,t] as the σ-tribe generated by the cylinder sets labelled by times
in the interval [s, t], and B that generated by all of these. Then Wiener proved that
there exists a unique measure on the measurable space (Ω, B) that coincides with
the measure above on the tribe of unions of such cylinder sets. This measure is now
called Wiener measure. The Wiener process starting at 0 is then the family of random
variables defined by W_t(ω) := ω(t), t ≥ 0. The process has the following properties:

1. Wt − Ws is Gaussian with mean zero and variance t − s, for t > s.

2. Wt − Ws is independent of Wv − Wu if 0 ≤ u ≤ v ≤ s ≤ t.

3. W0 = 0.

These properties characterise the process. By requiring that W_0^x = x we get the
Wiener process W_t^x starting at x ∈ R.
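
The three properties above translate directly into a sampling scheme; a sketch, with arbitrary grid sizes:

```python
import numpy as np

# Sample Brownian paths from the defining properties: W_0 = 0 and
# independent Gaussian increments W_t - W_s ~ N(0, t - s).
rng = np.random.default_rng(4)
dt, steps, n_paths = 1e-3, 1000, 10_000
dW = np.sqrt(dt) * rng.standard_normal((n_paths, steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

t = steps * dt
print(W[:, -1].mean(), W[:, -1].var())   # approximately 0 and t = 1
```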
We now need the concept of the symmetric Fock space Γ(H) of a Hilbert space
H. The n-fold tensor product ⊗^n H is the completed span of the symbols ⊗_{i=1}^n ψ_i =
ψ_1 ⊗ . . . ⊗ ψ_n with the scalar product

    ⟨⊗_i ψ_i, ⊗_i φ_i⟩ := \prod_{i=1}^n ⟨ψ_i, φ_i⟩.

The symmetric tensor product H^n := ⊗_S^n H is the subset of symmetric tensors, called
the n-particle space; the zeroth tensor power is taken to be C. The Fock space Γ(H)
is the direct sum ⊕_{n=0}^∞ H^n. This has the functorial property

    Γ(H_1 ⊕ H_2) = Γ(H_1) ⊗ Γ(H_2).

As a special case, Γ(C) = C ⊕ C ⊕ . . .

There is a unitary map L²(Ω, B, µ) → Γ(L²([0, ∞), dt)), in such a way that ⊕_{j=0}^n H_j is identified with the L²-completed span of the polynomials in W_t of degree ≤ n [31]. The n-particle space is then identified successively by Gram-Schmidt orthogonalisation with the part orthogonal to the k-particle spaces, k < n. This is Wiener's chaos expansion [32]. In particular, the one-particle space is spanned by {W_t}_{t≥0}.
For each fixed t, the space L²(Ω, B, µ) contains the random variables I, W_t, …, W_t^n, … . They can act as multiplication operators successively on the vector 1, to get n
vectors. Suppose we orthogonalise them by the Gram-Schmidt procedure. Since Wt is
Gaussian, we get the Hermite polynomials in successive spaces, and any L2 function
of Wt has a convergent expansion as a sum of its components in these spaces. The
subspace we get can be identified as the Fock space over the one-dimensional space
spanned by Wt . We shall see that these polynomials are Wick-ordered powers [33] of
Wt , and that they are martingales.
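Since W_t is Gaussian, this Gram-Schmidt procedure can be carried out concretely. A minimal sketch (numpy assumed; we take t = 1, so that W_1 ~ N(0,1), and orthogonalise the monomials under the Gaussian inner product):

    import numpy as np

    # Gauss-Hermite nodes integrate against exp(-x^2); rescale so that
    # inner(p, q) = E[p(X) q(X)] for X ~ N(0, 1), the law of W_1.
    nodes, weights = np.polynomial.hermite.hermgauss(60)
    x = nodes * np.sqrt(2.0)
    w = weights / np.sqrt(np.pi)

    def inner(p, q):
        return np.sum(w * np.polyval(p, x) * np.polyval(q, x))

    monomials = [np.array([1.0]), np.array([1.0, 0.0]),
                 np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0, 0.0])]
    basis = []
    for m in monomials:                  # Gram-Schmidt under the Gaussian law
        p = m
        for b in basis:
            p = np.polysub(p, (inner(m, b) / inner(b, b)) * b)
        basis.append(p)

    for p in basis:
        print(np.round(p, 8))            # 1, x, x^2 - 1, x^3 - 3x

The orthogonal polynomials that appear are the Hermite polynomials, as claimed.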
Now suppose that u > s; since W_s is independent of W_u − W_s, and they are Gaussian, they are orthogonal in the one-particle space, which can thus be written
as the direct sum L2 ([0, ∞), dt) = L2 ([0, s), dt) ⊕ L2 ([s, ∞), dt). By the functorial
property of Fock space, we therefore can write

L2 (Ω, B, µ) = L2 (Ω, B[0,s] , µ) ⊗ L2 (Ω, B≥s , µ). (55)

We can similarly split Fock space into arbitrarily many factors, corresponding to any
partition of the time axis into intervals: it has the property of a continuous tensor
product.
The continuous analogue of the semigroup (γT^n)_m := γ_{m+n}, m, n = 0, 1, 2, …, of the random walk is the left-shift of the paths: (ωT_s)(t) = ω(s + t). This induces the
dual action on the observables:

Ts∗ : L2 (Ω, B, µ) → L2 (Ω, B≥s , µ), s ≥ 0. (56)

This operator is isometric but not invertible. We can also embed L2 (Ω, B, µ) in the
two-sided space Γ(L2 (−∞, ∞)), on which the left shift is unitary, and induces the
action of the group R rather than the semigroup R+ . In that case the paths are not
conditioned to pass through the origin, and only the differences W_t − W_s make sense
as vectors or operators.

3 The Quantum Leap


The remarkable discovery of matrix mechanics by Heisenberg in 1925 is comparable
to that of the theory of relativity in 1917. Clifford had speculated that the world
might have chosen a geometry other than Euclidean. It was agreed that it was an
experimental question, and that the data agreed with Einstein’s theory. Though the
classical axioms were yet to be written down by Kolmogorov, Heisenberg, with the help of
the Copenhagen interpretation, invented a generalisation of the concept of probability,

and physicists showed that this was the model of probability chosen by atoms and
molecules.
According to Einstein et al. [34] a concept is deemed to be an element of reality
within a specified theory if there is a mathematical object in the theory which is
assigned to the concept, and which takes a definite value (when the state of the
system is given). This is now called an observable. For example, the choice of the
zero-level of a potential function is not an observable, since it is not determined by
the state of the system. They are not here discussing random samples, which at
the time would have been described as an ensemble. In that case, they might have
conceded that a concept could be regarded as an element of reality if, in a random
selection of the system from an ensemble, there is a definite random variable assigned
to the physical concept. The interpretation of a theory is not complete unless it is
specified at the outset which mathematical objects arising in the theory correspond
to observables. Thus in a theory with randomness in classical physics, there is a
space (Ω, B) and an observable is a random variable, and an ensemble is a probability
measure on Ω. A non-random state is given by a point-measure. In this state any r.v. has zero variance, thus satisfying the EPR criterion of reality.
In quantum mechanics, this is not the case; an observable is a Hermitian matrix
A, or in modern terms, a self-adjoint operator on a given Hilbert space H; the possible
values one can find in a measurement are the eigenvalues of A. A wave-function is
determined by a vector ψ ∈ H; but only unit vectors are used, and eiθ ψ represents
the same state as ψ. Thus the state is the equivalence class {ψ} = {eiα ψ, α ∈ R}. If
dim H = n < ∞, such equivalence classes make up the projective space CP n−1 . An
element of CP n−1 determines the expectation value of any observable A by hψ, Aψi,
which according to the Copenhagen interpretation, is the mean value of A if measured
many times in the state {ψ}. It is seen to be independent of the representative vector
ψ ∈ {ψ}. Such a state is called a vector state. The concept of state was generalised
by von Neumann to include random mixtures of vector states. Let B(H) denote the
set of bounded operators on H; this is a complex vector space, and also ∗ -algebra,
where conjugation is given by the adjoint and multiplication is the usual product
of operators. A state is given by a positive operator ρ of trace 1, called a density
operator, and the expectation of an observable A is taken to be m1 (A) := Tr (ρA).
Any density operator determines an element of the dual space to B(H) by the map
A 7→ m1 (A). We also can define mn (A) := Tr ρAn to be the nth moment of A, and
κ2 (A) := m2 (A) − m1 (A)2 to be the second cumulant, the variance, uncertainty or
dispersion of A in the state ρ. Von Neumann showed that there are no dispersion-free
states. Thus, quantum mechanics is intrinsically random. Heisenberg’s uncertainty
relation, which is a theorem, not a postulate, is the best-known facet of this:
Theorem 3.1 Let A, B, C ∈ B(H) be such that [A, B] := AB − BA = C; then in any state ρ, we have κ₂(A)κ₂(B) ≥ |m₁(C)|²/4.
There is no uncertainty relation for commuting operators A, B, and such observables are said to be compatible. If [A, B] ≠ 0, we say that A and B are complementary.
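The theorem is easy to test numerically. A minimal sketch (numpy assumed; the state-generation recipe is ours), with A = σ_x, B = σ_y, so that C = [σ_x, σ_y] = 2iσ_z:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def variance(rho, A):
        m1 = np.trace(rho @ A).real          # first moment m_1(A)
        m2 = np.trace(rho @ A @ A).real      # second moment m_2(A)
        return m2 - m1**2                    # kappa_2(A)

    rng = np.random.default_rng(1)
    C = sx @ sy - sy @ sx                    # = 2i sigma_z
    for _ in range(1000):
        V = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        rho = V @ V.conj().T                 # a random density operator
        rho /= np.trace(rho).real
        m1C = np.trace(rho @ C)              # purely imaginary here
        assert variance(rho, sx) * variance(rho, sy) >= abs(m1C)**2 / 4 - 1e-12
    print("uncertainty relation verified on 1000 random states")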

Segal has emphasised that the bounded observables in any quantum theory should
form the Hermitian part of a C ∗ -algebra with identity. This is a complex vector space
A with

1. a product AB is defined for all A, B ∈ A, which is distributive and associative, but not necessarily commutative;

2. a conjugation A 7→ A∗ , which is complex-antilinear, is specified;

3. A is provided with a norm ‖ • ‖ which obeys Gelfand's condition

    ‖A*A‖ = ‖A‖²;  (57)

4. A is complete in the topology given by this norm.

This concept includes all the examples we have seen so far; the set Mn (C), denoting
n × n matrices, with matrix addition and product, is a C ∗ -algebra. The ∗ operation is
Hermitian conjugate, and the norm ‖A‖ is the maximum eigenvalue of |A| = (A*A)^{1/2}.
For any Hilbert space, B(H) is also a C ∗ -algebra, and more generally, so is any von
Neumann algebra, which can be defined as any weakly closed ∗ -subalgebra of B(H)
containing the identity. Another notable example is the subset C(n) of M(n) con-
sisting of real diagonal matrices A = diag (a1 , . . . , an ). This is clearly commutative,
and the diagonal elements are the eigenvalues. Thus, each A ∈ C determines uniquely
a function i 7→ ai , 1 ≤ i ≤ n from the set Ωn = (1, 2, . . . , n) to R. Conversely,
any random variable f on Ωn defines a unique diagonal matrix diag (f (1), . . . , f (n)).
So the classical observables on Ωn can be described as a special type of quantum
mechanics, namely, the diagonal matrices. Moreover, the interpretation in classical
theory, of the values of the random variables f as possible observed values, coincides
with the quantum interpretation of the eigenvalues. Also, each n × n density ma-
trix ρ defines a unique probability measure p on Ωn , by using the diagonal elements:
p(i) = ρii , 1 ≤ i ≤ n. Clearly, a probability p can define a density matrix by the
same formula, but there are other, non-diagonal density matrices giving the same p.
If all the observables are contained in C, then the off-diagonal elements of the density
matrix are of no relevance, and all the information on the state of the system is con-
tained in p. A concept that captures the essentials of this idea, removing redundant
description, is due to Segal. Given the algebra of observables, A, we say a state on A
is a positive, normalised linear map ρ : A → C. Thus

1. ρ is complex linear;

2. ρ(I) = 1;

3. ρ(A∗ A) ≥ 0 for all A ∈ A.
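The diagonal correspondence described above, and the fact that a density matrix restricted to the diagonal subalgebra is just a probability p, can be checked directly. A minimal sketch (numpy assumed):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    f = rng.normal(size=n)        # a random variable on Omega_n = {1, ..., n}
    A = np.diag(f)                # the corresponding diagonal matrix

    V = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = V @ V.conj().T          # a density matrix with off-diagonal entries
    rho /= np.trace(rho).real

    p = np.diag(rho).real         # the induced probability p(i) = rho_ii
    assert np.isclose(p.sum(), 1.0)
    # On diagonal observables the state is the classical expectation E_p[f]:
    assert np.isclose(np.trace(rho @ A).real, p @ f)
    print("state restricted to C(n) is the probability p")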

Naturally, two constructions that lead to the same map are said to define the same
state. We should note that we only need the expectations, i.e. the first moments,
of the observables, because A itself contains all powers of A, and (as it is complete),
also elements such as eiA ; so if we know the state we know the characteristic function
of every observable, and so its distribution too.
More generally, the classical measure theory (Ω, B, µ), where µ is a positive mea-
sure, can be written as a (commutative) quantum theory by using the von Neumann
algebra L∞ (Ω, µ) acting as multiplication operators on L2 (Ω, B, µ); its normal states
correspond to (countably additive) probability measures, which vanish on µ-null sets.
Indeed, given a state ρ we can define the corresponding measure of a set B ∈ B as
ρ(χ_B). In this, sets B and B′ are indistinguishable if they differ by a µ-null set; we do
not really need Ω itself, but only the σ-tribe B, modulo this equivalence.
The set of states of a C ∗ -algebra A forms a convex set, which we shall call Σ(A)
or just Σ. The convex sum

ρ = λρ1 + (1 − λ)ρ2 , where 0 < λ < 1 (58)


represents the random mixing of the states ρ1 and ρ2 with weights λ and 1 − λ. All
expectations in the state ρ are then the same mixtures of the expectations in the
states ρ₁ and ρ₂. If ρ₁ ≠ ρ₂ we say that ρ is a mixed state. If ρ cannot be written as a mixed state (so that in any relation such as eq. (58) we must have ρ₁ = ρ₂), then we say that ρ is a pure state. Every C∗-algebra possesses many pure states. For the
full matrix algebra Mn , every pure state is given by a unit ray {ψ} in the Hilbert
space Cn , using the usual quantum-mechanical expression; every density operator is
a mixture of such. This is an example of the Krein-Milman theorem, which says that
a weak∗ -compact convex set in the dual of a Banach space is generated by its extreme
points. The representation of a mixed state as eq. (58) is, in general, not unique.
For example, if H = C², the fully unpolarised state is (1/2)I, and this is the equal
mixture of the pure states, the eigen-vectors of J3 , the spin operator in the direction
of quantisation, as well as the equal mixture of the eigenstates of J1 , or any other
spin direction. This means that all statistical properties of the observables are the
same however the state was made up. We express this by saying that the state-space
in quantum probability is in general not a simplex: in a simplex, any mixed state has
only one decomposition into pure states. In classical probability, in contrast, the state
space Σ(Ω) is a simplex. This is true in quantum probability only if A is abelian.
The density matrix contains all the information there is. Our inability to distinguish
the history of how the state was made is due to the quantum phenomenon of coherent
sums of wave-functions.
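A minimal sketch of this non-uniqueness (numpy assumed): the equal mixture of the J₃ eigenstates and the equal mixture of the J₁ eigenstates produce one and the same density matrix, so no observation can recover the preparation history.

    import numpy as np

    up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # J_3 eigenstates
    px, mx = (np.array([1.0, 1.0]) / np.sqrt(2),
              np.array([1.0, -1.0]) / np.sqrt(2))            # J_1 eigenstates

    mix_z = 0.5 * np.outer(up, up) + 0.5 * np.outer(dn, dn)
    mix_x = 0.5 * np.outer(px, px) + 0.5 * np.outer(mx, mx)

    assert np.allclose(mix_z, np.eye(2) / 2)
    assert np.allclose(mix_x, np.eye(2) / 2)
    print("two different mixtures, one state: (1/2) I")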
There is an important connection between states and representations of a C ∗ -
algebra. A representation π of A is a ∗ -homomorphism from A into B(H) for some
Hilbert space H. Thus, π(A) is an operator on H and the map π satisfies, for all
A, B ∈ A,
1. π(λA + B) = λπ(A) + π(B), for all λ ∈ C (linearity);

2. π(A∗ ) = (π(A))∗ (hermiticity).

A representation is said to be faithful if π(A) is non-zero whenever A ≠ 0. A state ρ is said to be faithful if ρ(A*A) = 0 only for A = 0. To each state ρ there is a representation πρ, on a Hilbert space Hρ, and a unit vector ψρ ∈ Hρ, such that ρ is the vector state given by ψρ;
that is,
ρ(A) = hψρ , πρ (A)ψρ i, A ∈ A. (59)
If the state ρ is faithful, then so is the corresponding representation πρ . Moreover, π
is irreducible if and only if ρ is pure.
The proof of this theorem, which asserts the existence of Hρ and the homomor-
phism πρ , follows the common mathematical trick: we construct these objects out of
the material at hand. Let us do this when A has an identity and ρ is faithful. We
start with the vector space A and provide it with the scalar product

hA, Bi := ρ(A∗ B).

The completion of this space is then taken to be Hρ. The operator πρ(A) is taken to be left-multiplication by A, thus: πρ(A)B := AB. This defines πρ(A) on the dense set A ⊆ Hρ, and it can be shown to be bounded. We take ψρ = I, the identity
in the algebra. One can then verify that (Hρ , πρ , ψρ ) satisfy eq. (59). A slightly more
elaborate construction can be given if there is no identity or the state is not faithful.
This realisation of the algebra is called the GNS construction, based on ρ.
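For a finite-dimensional algebra the construction can be exhibited concretely. A minimal sketch (numpy assumed; the choice A = M₂(C) with a faithful state ρ(A) = Tr(ρ̂A) is ours): the Hilbert space is the algebra itself with ⟨A, B⟩ = ρ(A*B), πρ acts by left multiplication, and the cyclic vector is I.

    import numpy as np

    rho_hat = np.diag([0.7, 0.3])            # a faithful density matrix
    def state(A):                            # rho(A) = Tr(rho_hat A)
        return np.trace(rho_hat @ A)
    def ip(X, Y):                            # GNS scalar product rho(X* Y)
        return np.trace(rho_hat @ X.conj().T @ Y)

    rng = np.random.default_rng(3)
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    psi = np.eye(2)                          # the cyclic vector psi_rho = I

    # eq. (59): rho(A) = <psi, pi(A) psi>, with pi(A)B := AB
    assert np.isclose(state(A), ip(psi, A @ psi))
    # hermiticity of the representation: <X, pi(A) Y> = <pi(A*) X, Y>
    assert np.isclose(ip(B, A @ psi), ip(A.conj().T @ B, psi))
    print("GNS identities verified")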
It took some time before it was understood that quantum theory is a generalisation
of probability, rather than a modification of the laws of mechanics. This was not
helped by the term quantum mechanics; moreover, the Copenhagen interpretation is given in terms of probability, as the word was understood at the time. Bohr has said
[35] that the interpretation of microscopic measurements must be done in classical
terms, because the measuring instruments are large, and are therefore described by
classical laws. It is true that the springs and cogs making up a measuring instrument themselves obey classical laws; but this does not mean that the information held on the instrument, in the numbers indicated by the dials, obeys classical statistics. If the
instrument faithfully measures an atomic observable, then the numbers indicated by
the dials should be analysed by quantum probability, however large the instrument
is.
We now present Gelfand’s theorem, which shows that any commutative quantum
theory can be viewed as a classical probability theory. We give a proof in finite
dimensions.

Theorem 3.2 Given a commutative ∗-algebra C of finite dimension, there exists a (finite) space Ω and an algebraic ∗-isomorphism J from C onto A(Ω), such that for
any state ρ on C there exists a probability p on Ω, such that for any element A ∈ C
we have
ρ(A) = Ep [J(A)]. (60)

Proof
Since dim C = n < ∞, the dimension of the dual space is the same. There is a faithful
state ω on C; this could be for example a mixture of a basis of the state-space with
non-zero coefficients. We can therefore construct a faithful realisation of C as a matrix
algebra. In this, the GNS construction, the Hilbert space is built out of C and so is
of dimension n. A commutative collection of normal matrices can be simultaneously
diagonalised, so there is a basis in the Hilbert space such that each element of C is a
diagonal n × n matrix. Since exactly n of these diagonal matrices make up a linearly
independent set, every diagonal matrix appears. Every element of C is therefore a
sum of multiples of units {e_j} of the algebra, satisfying e_j² = e_j and e_i e_j = 0, i ≠ j.
In the above matrix realisation, e_j is the matrix with 1 on the diagonal in position j, and zero elsewhere. Thus A = Σ_j a_j e_j. So let Ω = {e_j}_{j=1,...,n}, and let JA be the
function JA(ej ) = aj . Then one verifies that J is an algebraic ∗ -isomorphism. To the
state ρ we associate the probability p(ej ) = ρ(ej ), and see easily that eq. (60) holds.
□
In this proof, instead of identifying Ω with the collection of elements ej in the
algebra, we could have taken the dual, and identified Ω with the set of characters on
C. This is the set of multiplicative states, that is, states ω obeying ω(AB) = ω(A)ω(B)
for all A, B ∈ C. The set of characters of a C ∗ -algebra is called its spectrum. Our
proof shows that there are exactly n of these, defined by ωj (ek ) = δjk . Putting A = B
we see that any character is dispersion-free. This is why the spectrum is taken by
Gelfand to be the definition of Ω in the infinite-dimensional case:

Theorem 3.3 Let C be a commutative C ∗ -algebra with identity. Then the set of
characters can be given a topology so as to form a compact Hausdorff space Ω such
that C is C ∗ -isomorphic to C(Ω), and every state on C corresponds to a finitely additive
measure on Ω (with the Borel tribe).

Bohm asked whether the observed statistics, agreeing with experiment, can be
obtained from a larger, more complicated classical theory. This is the idea behind the
attempts to introduce hidden variables. This is certainly true of the statistics of any
fixed complete commuting set of observables; for they form an abelian algebra, and so
can be represented by the classical statistics of multiplication operators on a sample
space (the spectrum of the algebra). Obviously the full non-abelian algebra cannot
be a subalgebra of an abelian algebra, so the way hidden variables are introduced
must be more elaborate than extending the algebra by adding them. However, the
deep result of J. S. Bell shows (if the dimension is 4 or higher) that the full set of
statistics predicted by quantum theory cannot be got from any underlying classical
theory. In the quantum model of two spin-half systems, Bell constructs a sum of four correlations which in a certain state is equal to 2√2, a factor √2 larger than the greatest value allowed in any classical theory.
Let us follow [36, 37]. Let P, Q be complementary projections, and also let P ′ , Q′
be complementary projections, while P is compatible with P ′ and with Q′ , and Q is

compatible with P ′ and Q′ . Define a = 2P − I, b = 2Q − I, and similarly for a′ and
b′ . For any state ρ define R by

R := ρ(aa′ + ab′ + bb′ − ba′ ) = ρ(C)

where C = a(a′ + b′ ) + b(b′ − a′ ). Then a2 = b2 = a′2 = b′2 = 1, so

C 2 = 4 + [a, b][a′ , b′ ] = 4 + 16[P, Q][P ′ , Q′ ]. (61)

Since ‖a‖ = ‖b‖ = ‖a′‖ = ‖b′‖ = 1, it follows that

‖[a, b][a′, b′]‖ ≤ 4,

so C² ≤ 8 and |R|² = |ρ(C)|² ≤ ρ(C²) ≤ 8. So in quantum theory, |R| ≤ 2√2.
If there is a joint probability space on which we can describe a, . . . , b′ by the r. v.
f, . . . g′ taking the values ±1, and a measure p on it, then R = Ep [h] where

h = f (f ′ + g′ ) + g(g′ − f ′ ).

Then these r. v. commute, so eq. (61) becomes h2 = 4, and

|R|² = E_p[h]² ≤ E_p[h²] = 4.

So |R| ≤ 2 (Bell's inequality). Bell showed that the entangled states of the Bohm-EPR set-up give a ρ such that R = 2√2, violating this. Thus no description by classical probability is possible.
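A minimal sketch of the violation (numpy assumed; the singlet state and the angles are the standard choices): spin observables in the x-z plane on the two sides of an entangled pair give |R| = 2√2.

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    def spin(theta):                          # spin observable in the x-z plane
        return np.cos(theta) * sz + np.sin(theta) * sx

    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet state
    def corr(A, Ap):                          # rho(a a') for compatible a, a'
        return (psi.conj() @ np.kron(A, Ap) @ psi).real

    a, b = spin(0.0), spin(np.pi / 2)                # one particle
    ap, bp = spin(-np.pi / 4), spin(np.pi / 4)       # the other particle

    R = corr(a, ap) + corr(a, bp) + corr(b, bp) - corr(b, ap)
    print(abs(R), 2 * np.sqrt(2))             # 2.828... = 2*sqrt(2) > 2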
The famous Aspect experiment tested Bell’s inequalities. This involves observing
a system (in a pure entangled state) in a long run of measurements; the correlations
singled out by Bell, between several compatible pairs of spin observables, were measured. The experiments showed that R was just less than 2√2, in agreement with the quantum predictions.
The upshot is that in quantum probability there is no sample space; we have the C∗-algebra A, and this plays the rôle of the space of bounded functions.
Let us now examine Bohm’s claim that there is a hidden assumption in Bell’s
proof, that of ‘locality’. It is now generally agreed that the term ‘local’, referring
to the space-localisation, is not the best, and that ‘non-contextual’ is a better term;
namely that the choice of random variable f assigned to represent a certain observable
a which is being measured, does not depend on which of the other observables, a′ or b′ ,
is being measured at the same time. This is now called a non-contextual assignment.
Bohm suggested that we should allow a contextual choice of assignment of random
variable, so that the r.v. representing the observable a when a′ is also measured is
not the same as the choice of r.v. for a when it is measured with b′ . The two choices
will, however, have the same distribution. It should be said straight away that this
idea is contrary to the practice of probabilists, who would expect there to be a unique
random variable representing an observable. It also goes against the definition of

‘element of reality’ of EPR as extended by us to the random case. The quantum
version does not suffer from this unreality, since the mathematical object assigned to
the observable, the Hermitian matrix, does not depend on the context, i.e. is local in
Bohm’s language.
Bohm’s idea leads to a theory with very few rules. However there are some
restrictions, since the choice must be done so that all statistical measurements of
compatible observables (means, correlations, third moments etc) of the model can
be arranged to give the same answers as the quantum theory. This is achieved as
follows. Let a, a′ be compatible, generating a commutative C ∗ -algebra, C and let ρ
be a state on the full algebra A. By restriction, ρ defines a state on C. By Gelfand’s
isomorphism, we can construct a space Ω, the spectrum of C, and a measure µ on it,
such that a, a′ can be represented as multiplication operators on C(Ω), so they are
random variables, f, g. The joint probability distribution of f, g is the same as that of
the (diagonal) matrices a, a′ in the state ρ. On the other hand, Ω, µ depends on the
set a, a′ . Let us record this by denoting this Gelfand representation by Ωa,a′ , µa,a′ . If
we measure a and b′ , and proceed as Bohm suggests, then we get a different space
Ωa,b′ , the spectrum of a different algebra Ca,b′ , say. The state ρ leads to a different
measure µa,b′. The r.v. assigned to a cannot be f this time; it must be a function on
Ωa,b′ , a different space; it has the same distribution in µa,b′ as the f had in µa,a′ . In
this set-up, there is no obvious definition of a′ + b′ , as they are not functions on the
same space. This problem does not arise in the quantum formulation: there is an
underlying C ∗ -algebra, in which we can add the operators.
Bohm’s suggestion might be said to be an interpretation of quantum mechanics
in terms of classical probability [38]. However, the construction is not a probability
theory in the sense of Kolmogorov, as there is no single sample space; the theory
is preKolmogorovian, in the tradition of the frequentist school. One can generalise
the frequentist point of view, and specify that certain collections of observations
are compatible, and others are not; then we can by observation construct the joint
probabilities of each compatible set, and have no need of the sample space (the space
of joint values). A different compatible set need have no analytic relation to the first,
even though it contains common observables. Bell's inequality need not hold, but then neither need the quantum version, which is √2 times more generous. It is a feeble theory, not much more than data collection, and has no predictive power. Mere data give us no more than mere data.
Another variant of quantum mechanics, a new form of algebra called ‘quantum
logic’, was developed in [39]. New rules by which propositions can be manipulated
are given. This was worked on later by Jauch and coworkers [40], culminating in
Piron’s thesis. This says that the propositions form a lattice isomorphic to the lattice
of subspaces in a Hilbert space (but not necessarily over the complex field). Apart
from this result, quantum logic has not been very successful, and it is more productive
to keep to classical logic, but to generalise the concept of probability algebra from
commutative to non-commutative. Another alternative to quantum probability is

stochastic mechanics, founded but now abandoned by Nelson [41] as not being correct
physics. Thus Segal’s approach is the one we adopt here. It is well explained in
[42, 43, 44].
Quantum theory has its version of estimation theory [45, 46]. In finite dimensions,
the method of maximum likelihood is to find the density matrix ρ that maximises the
entropy, subject to given values for the means, {ηi } of observables in the subspace
of hermitian operators spanned by a named list {X1 , . . . , Xn } of slow variables. So
ηi = Tr[ρXi ]. It is well known that the answer is the Gibbs state

ρ = Z −1 exp(−H) = Z −1 exp −[ξ 1 X1 + . . . + ξ n Xn ], Z = Tr[exp(−H)]. (62)

Again, log Z is strictly convex, and its Hessian gives a Riemannian metric on the
manifold M of all faithful density operators [47, 17, 48]. In this case we get the
Kubo-Mori-Bogoliubov metric; in terms of the centred variables X̂i := Xi − ηi , the
metric is

$$g(\hat{X}_i, \hat{X}_j) = \int_0^1 \mathrm{Tr}\left[\rho^{\lambda} \hat{X}_i\, \rho^{1-\lambda} \hat{X}_j\right] d\lambda. \qquad (63)$$
This is the closest point on M to any state with the given means, where ‘distance’ is measured by the relative entropy S(ρ|ρ′) := Tr ρ[log ρ − log ρ′]. Again, the ξ^j are uniquely determined by the measured means η_i.
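A minimal sketch of the construction (numpy and scipy assumed; the choice of slow variables and means is ours): solve numerically for the ξ in eq. (62) that reproduce the measured means.

    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import fsolve

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    X = [sx, sz]                       # the slow variables X_1, X_2
    eta = np.array([0.3, -0.2])        # the measured means eta_i

    def gibbs(xi):
        H = sum(x * A for x, A in zip(xi, X))
        rho = expm(-H)
        return rho / np.trace(rho).real

    def mean_error(xi):
        rho = gibbs(xi)
        return [np.trace(rho @ A).real - e for A, e in zip(X, eta)]

    xi = fsolve(mean_error, x0=[0.0, 0.0])      # the xi are unique, as stated
    rho = gibbs(xi)
    print([np.trace(rho @ A).real for A in X])  # recovers eta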

4 Kolmogorov and Ito


A stochastic process over a set T is a family {Xt , t ∈ T } of random variables on a
measure space (Ω, B, µ). We might have T = {0, 1, 2 . . .}, or T = R+ , when we inter-
pret t as time. From the frequentist point of view, we can observe Xt1 , Xt2 , . . . XtN
at finitely many points of time. In this way, we can test any model as to what the
joint distribution of these r.v. is.
Kolmogorov’s existence theorem says that a family of joint (cumulative) distribu-
tions F1,2...n (x1 , . . . , xn ), given for all finite subsets of T , is the set of joint distributions
of a stochastic process over T if and only if the consistency conditions hold. Thus,
(the hatted variable is omitted):

1. For any permutation π of (1, 2, . . . , n), we have

F1,...n (x1 , . . . , xn ) = Fπ(1),...,π(n) (xπ(1) , . . . , xπ(n) )

2. For any j, we have

F1,...,n (x1 , . . . , xj = ∞, xj+1 , . . . , xn ) = F1,...,ĵ,...,n (x1 , . . . , xˆj , . . . , xn ).

If these hold, he shows that the sample space Ω may be taken to be R^T, an enormous space (of all functions ω of t); the r.v. X_t is then the function X_t(ω) = ω(t).

He proved the existence of a measure µ, which reproduces the given joint distributions;
the σ-tribe B has the following structure. Let Bt be the smallest σ-tribe such that all
Xs , for s ≤ t, are measurable; then this is an increasing family of σ-tribes, called a
filtration. Then B is the smallest σ-tribe containing all the Bt .
Apply this to the Brownian paths, and the measures defined by a finite set of gates
as in the last section; this proves that there is a probability theory underlying the
finite joint distributions. However, it does not prove Wiener’s theorem, in that the
sample space obtained by the Kolmogorov construction is the huge set of all functions
of time. It is then a hard problem to show that the subset of continuous functions has
measure 1. This fact is very important for specialists in Brownian motion, but is not
a general feature of processes covered by Kolmogorov’s theorem, and is not needed
to construct the usual Lp spaces of functional analysis. Without Wiener’s version
we lose the power of the path-wise methods, and also lots of intuition. The modern
method is to get the cow off the ice using Kolmogorov, and supplement it with further
estimates, on tightness and radonifying maps, if we need to find smaller carrier spaces
for the measure [49, 50]. After Kolmogorov’s treatise, the subject could develop ‘in
the usual professional mathematical way’, to use Segal’s phrase. That is, theorems
could be stated and proved, and then sharpened. The most important of these were
the laws of large numbers, the zero-one laws, the central limit theorems, the theory
of large deviations, the classification of all processes with independent increments,
martingales, and stochastic integration.
The conditional expectation Et := E[•|Bt ] takes a random variable in L2 (Ω, B, µ)
into one in L2 (Ω, Bt , µ); since it is the identity on the latter space, and is Hermitian,
it must be the orthogonal projection onto L2 (Ω, Bt , µ). None of these ideas depends
on which version of the sample space we have.
The concept of conditional expectation can be extended to integrable r.v., thus:
Definition 4.1 Let (Ω, B, µ) be a probability space, and let B₀ be a sub-σ-tribe of B. Let X be a random variable with E[|X|] < ∞. Then there exists a B₀-measurable r.v. Y, written E[X|B₀], such that for each set B ∈ B₀ we have

$$\int_B Y\, d\mu = \int_B X\, d\mu. \qquad (64)$$

Further, if Ŷ is another r.v. with these properties, then Ŷ = Y almost everywhere.


See [29] for a proof, and other things.
A martingale is a stochastic process X_t on (Ω, B_{t≥0}, µ) such that X_t is integrable, and

E[X_t | B_s] = X_s for all t ≥ s.  (65)
A martingale is a fair game. For example, consider the independent tosses of a fair coin, and let X_n = H_n − T_n, where H_n is the number of heads and T_n is the number of tails at the nth toss. Let S_N = Σ_{j=1}^N X_j. Then S_N is a martingale [51], p. 202.

There are four concepts of convergence of a sequence {Xn } to X in the space of
random variables on a probability space (Ω, B, µ).

1. We say Xn → X almost surely (or, almost everywhere) if

µ{ω : Xn (ω) → X(ω)} = 1.

2. We say X_n → X in ‖ • ‖_r if

    ‖X_n − X‖_r → 0 as n → ∞.

3. We say that Xn → X in probability if

µ{ω : |Xn (ω) − X(ω)| > ǫ} → 0 as n → ∞ for all ǫ > 0.

4. We say Xn → X in law if

µ{ω : Xn ≤ x} → µ{ω : X ≤ x} for all x at which FX (x) := µ{X ≤ x}

is continuous.

These concepts are not equivalent; (1) and (2) are not comparable, but (1) or (2)
imply (3), which implies (4) [51]. Convergence in law can be related to convergence
of the characteristic functions of X_n to that of X; we see that X_n converging to X in law implies that X_n also converges to Y in law if Y has the same distribution as X. This shows that convergence in law is a very feeble concept. The four concepts
of convergence do not depend on the version of sample space adopted, and so are the
same whether we use Wiener space or Kolmogorov’s abstract construction.
For a given µ, we can complete the σ-tribe B to include all subsets of sets of µ-
measure zero; then the events that can happen are described by the quotient σ-tribe,
in which events which differ by a set of measure zero are identified. This idea is
not wise when we are interested in measures with different sets of zero measure, as
happens when we condition a Wiener path to pass through a given point. The Dirac
measure on R is a simple example of the trouble we get into. If two measures µ1 , µ2
have the same sets of zero measure in (Ω, B), we say that they are equivalent. If
µ₁(B) = 0 whenever µ₂(B) = 0 we say that µ₁ is absolutely continuous relative to µ₂; in that case there exists a function w ∈ L¹(Ω, B, µ₂) such that µ₁(B) = ∫_B w(ω) dµ₂ for all B ∈ B. We write w = dµ₁/dµ₂, the Radon-Nikodym derivative. This is the
abstract version of eq. (12). We shall be interested in other measures, singular relative
to a given one. Then the best formalism is to start with an abelian C ∗ -algebra A and
consider its states.
Estimation is assisted by the law of large numbers. Let X be a random variable
on a probability space, whose mean η we wish to find, making use of a random
experiment which is believed to be well modelled by X. We set up a sequence of

independent copies Xn of X, and consider the stochastic process {Xn } on e.g. the
probability space constructed by the theorem of Kolmogorov. The strong law of large
numbers says that if E(X) = η and E(X²) < ∞, then putting S_N = Σ_{j=1}^N X_j we have

S_N/N → ηI in ‖ • ‖₂.
If E|X| < ∞, we get almost sure convergence. These are necessary and sufficient
conditions. Weaker conditions ensure that the sum converges in probability [51]. This
is called the weak law. Note that the meaning of convergence uses the measure on the
Kolmogorov space, so for almost all sequences, randomly chosen, we get convergence
to the mean. It does not say how fast the convergence is. For example if Xn is the
number of heads minus the number of tails, at the nth toss of a fair coin, then SN
is the number of heads in N tosses minus the number of tails, and SN /N converges
almost surely to zero. If we know that S_m ≠ 0 after m results, the law does not say that the bias evens out in the long run. S_N is a martingale, and its expected value for N > m is its present value S_m. It is S_N/N which converges; the bias at time m gets divided by N, and so goes away for large N.
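A minimal sketch of both statements (numpy assumed): S_N/N goes to zero, while the conditional mean of S_N given an early bias stays at that bias; the bias is diluted by N, not cancelled.

    import numpy as np

    rng = np.random.default_rng(4)
    paths, m, extra = 100_000, 1000, 1000      # N = m + extra tosses

    def tosses(k):                             # +1 for heads, -1 for tails
        return (2 * rng.integers(0, 2, size=(paths, k), dtype=np.int8) - 1).sum(axis=1)

    S_m = tosses(m)
    S_N = S_m + tosses(extra)                  # later increments are independent

    print(np.abs(S_N).mean() / (m + extra))    # small: S_N / N is near zero
    sel = (S_m == 10)                          # condition on an early bias of +10
    print(S_N[sel].mean())                     # ~ 10: E[S_N | S_m] = S_m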
Another famous limit law is the central limit theorem; if the standard deviation of X is 1 and the mean is zero, then one can show that

S_n/√n → N(0, 1) in law.
Versions of this were known to Bernoulli and Gauss, if we assume that the moment-
generating function exists. It explains the ubiquity of the normal distribution; many
random processes are the sums of small and independent random things, and so tend
to be Gaussian. The theory of large deviations tells us something about the rate of
convergence of Sn ; this stuff is deeper [52, 53, 54, 55]. There is also a large body of
work on sums of nearly independent random variables, and also on the cases where
the variances are not all equal.
Doob proved that martingales often converge; e. g.,
Theorem 4.2 If {Sn } is a martingale with E(Sn2 ) < M < ∞ for some M and all n,
then there exists a random variable S such that Sn → S almost surely.
Consider now a process (Xt , Ω, Bt≥0 , µ) in continuous time with independent in-
crements; that is, Xt − Xs is independent of Xr for r < s < t. Since we can write
Xt − Xs as the sum of more and more independent differences, we might expect that
Xt − Xs must be Gaussian, by the central limit theorem. However, this is not the case
since the distributions of the difference Xt − Xs might change as the interval is made
smaller. This question led Levy to characterise all processes that are stationary and
have independent increments. This can be done by showing that the characteristic
function
C(λ) := E[exp i(Xt − Xs )λ]
should not only be of positive type, but so should any fractional power. Such a
function is called infinitely divisible, and so is the corresponding random variable.

The necessity is easy to see; we can write Xt − Xs as the sum of N identical and
independent random variables, namely, the increments for time intervals (t − s)/N ;
then the characteristic function of this sum is the product of the N characteristic
functions (which are all equal, by stationarity) of these increments, and so C has an
N th root that is of positive type. This condition is also sufficient, to which we shall
return. The characteristic function of the Gaussian is infinitely divisible, and so is
that for the Poisson distribution. This means that Gaussian and Poisson processes
with independent increments exist. Levy found that by mixing these he got some new
processes (Levy processes), and he found the most general form of the characteristic
function, which is

$$\log C(\lambda) = -a\lambda^2 + ib\lambda + \int d\sigma(\alpha)\left[e^{i\alpha\lambda} - (1 + i\alpha\lambda)Z(\alpha)\right] \qquad (66)$$

where a ≥ 0, b is real, and dσ(α) ≥ 0 obeys ∫_{−1}^{1} α² dσ(α) < ∞. There are some further
conditions on the weight dσ at infinity [56]. If σ = 0 we get the Gaussian, and if σ
has a discontinuity, we get a Poisson process. These can be understood in terms of
Hilbert space cohomology of R, as in §(5).
During this period, physicists and engineers studied stochastic differential equa-
tions, similar to the Langevin equation. Often the random force was chosen to be
the derivative of Brownian motion, called white noise. Since Bt is at best continuous,
this work lacked rigour, and remained poorly defined even after appeal to Dirac’s
generalised concept of function. This sorry state of affairs was cleared up by Ito.
Let W (t) be Brownian motion starting at zero. At first sight, an equation for an
unknown X(t) similar to the Langevin equation, of the form eq. (43)

$$\frac{dX_t}{dt} = a(t) + b\,\frac{dW_t}{dt}, \quad \text{for almost all } \omega$$
makes no sense, since for almost all ω ∈ Ω, W(t) is not differentiable. The equation does make sense if written in the form: find a family of random variables {X(t)}, such that for a given initial random variable X(0), the r.v. X(t) − X(0) − ∫₀ᵗ a(s) ds is the known r.v. bW_t for almost all ω. This does not prove that there is such a process, but it does make sense. For the more general case when a, b depend on the unknown
X(t), the integral form is
$$X(t) - X(0) = \int_0^t a(s, X(s))\,ds + \int_0^t b(s, X(s))\,dW(s). \qquad (67)$$

The last expression, called a stochastic integral, looks like a Stieltjes integral, but the needed condition of bounded variation on W(s) does not hold. Solve the equation by
iteration (Picard’s method); we see that at each stage, the approximation to X(t)
is a function of W (s) only for s ≤ t. So it would appear that we need only give a
meaning to the stochastic integral for the cases where X(t) is a function of W (s) for
s ≤ t, and so the same holds for b(t, X(t)). This can be neatly put in terms of the

filtration Bt generated by the Wiener process: for all t ≥ 0, X(t) and so b(t, X(t)) is
measurable relative to the σ-tribe Bt . This makes sense physically; it says that we
can know the present configuration X(t) if we know the initial configuration X(0)
and the outcomes of all the randomness, Ws , s ≤ t, so far. A random function of
time, f is said to be adapted to the filtration Bt if f (t) is Bt -measurable for all t ≥ 0.
Let f(t) be an adapted process in the time interval 0 ≤ t ≤ T, which is simple: that is, there is a finite partition 0 = t₀ < t₁ < … < t_N = T such that f(t) = f_j for t_j ≤ t < t_{j+1}, for all integers j ∈ (0, …, N − 1). Here, f_j is a random variable independent of time, and
equality of random variables means almost everywhere. Following Ito, we can define
the stochastic integral of an adapted simple function f to be the random variable
$$\int_0^T f(t)\,dW_t := \sum_j f_j\,(W_{t_{j+1}} - W_{t_j}). \qquad (68)$$

Note that the increment dW is in the future of the random variable f_j that multiplies it. The mapping f ↦ ∫₀ᵀ f(t) dW_t takes the linear space of simple adapted functions into the space of random variables, and is clearly a linear map. The brilliant remark
of Ito is then that the following identity, called Ito’s isometry, holds:
$$E\left[\,\Big|\int_0^T f(t)\,dW_t\Big|^2\,\right] = \int_0^T E[|f(t)|^2]\,dt. \qquad (69)$$

Proof:

$$E\left[\,\Big|\int_0^T f(t)\,dW_t\Big|^2\,\right] = \sum_i \sum_j E[f_i\,(W_{t_{i+1}} - W_{t_i})\; f_j\,(W_{t_{j+1}} - W_{t_j})]$$
$$= \sum_i E[|f_i|^2 (W_{t_{i+1}} - W_{t_i})^2] + 2\sum_{i<j} E[f_i f_j\,(W_{t_{i+1}} - W_{t_i})(W_{t_{j+1}} - W_{t_j})].$$

Now, the future increment Wti+1 − Wti is independent of fi , which is adapted, i.e. a
function of earlier W (s). So the expectation value in the first term factorises:

$$E[|f_i|^2 (W_{t_{i+1}} - W_{t_i})^2] = E[|f_i|^2]\,E[(W_{t_{i+1}} - W_{t_i})^2] = E[|f_i|^2]\,(t_{i+1} - t_i),$$

by the property of Brownian motion. This gives the desired term in eq. (69). It
remains to show that the remaining double sum vanishes. This is true, because the
factor (Wtj+1 −Wtj ) for j > i is independent of the remaining factors fi fj (Wti+1 −Wti )
and so the expectation of the product is the product of the expectations; but the
expectation of the future increment of W_t is zero. □
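A minimal sketch of the isometry (numpy assumed), with the simple adapted integrand f_j = W_{t_j}, for which the exact answer is ∫₀ᵀ E[W_t²] dt = T²/2:

    import numpy as np

    rng = np.random.default_rng(5)
    T, n_steps, paths = 1.0, 200, 50_000
    dt = T / n_steps

    dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n_steps))
    W = np.cumsum(dW, axis=1) - dW       # W at the left endpoint of each step

    # Adapted Ito sum: each increment lies in the future of its coefficient.
    I = np.sum(W * dW, axis=1)

    print(np.mean(I**2))                 # ~ T^2/2 = 0.5, Ito's isometry
    print(np.mean(I))                    # ~ 0: the Ito integral is a martingale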
Ito’s isometry is a mapping from the set of simple adapted processes to random
variables; by a simple theorem of normed spaces, it can be extended by continuity to

35
a linear isometry (unitary transformation) between the completions of both sides in
the norms given. The completion of simple functions in the norm
$$\|f\|^2 = \int_0^T E[|f(t)|^2]\,dt \qquad (70)$$

is the space of processes such that E[|f |2 ] is Lebesgue integrable; so Ito can define
the stochastic, or Ito integral, of all processes with this property; it is the limit in this
norm of simple adapted processes approximating it. Naturally, we must prove that
the adapted simple processes are L2 -dense in the square-integrable adapted processes;
this is not difficult, since the projection Et is a bounded operator and maps onto the
space of Bt -adapted square-integrable processes.
We can now give a meaning to the question, do there exist solutions to the stochas-
tic differential equation
$$\frac{dX_t}{dt} = a(X_t, t) + b(X_t, t)\,\frac{dW_t}{dt}\;? \qquad (71)$$

We say that a process X_t satisfies this equation if, on substituting X_s in the integrals in eq. (67), we get back X_t − X_0.
For a wide class of functions a and b of two variables we can then get a convergent
iterated approximation, the Picard series, which converges to a process Xt obeying
the (integral form of) the stochastic differential equation. This holds for example if
a(x, y) is uniformly Lipschitz in y in a region, and b(x, y) is uniformly elliptic in y
and measurable in x, y. This result can be improved and generalised, so that vector-
valued stochastic processes can be studied, and the noise can be a much more general martingale than W_t. This can be reworded as a ‘martingale problem’ [57].
The converse to Ito integration should be a form of differentiation: it is called
(Ito) stochastic differentiation; we may say that the process f(W_t, t) is the stochastic derivative of ∫₀ᵗ f(W_s, s) dW_s. The Ito integral is always a martingale, and every martingale is a stochastic integral, and so has a stochastic derivative, namely the integrand in its representation as an Ito integral. One can show that this is unique.
It is interesting to form the repeated stochastic integrals
$$W_t = \int_0^t dW_s$$
$$:W_t^2:\; = W_t^2 - t = 2\int_0^t W_s\,dW_s$$
$$:W_t^3:\; = 3\int_0^t :W_s^2:\,dW_s$$
$$\dots$$

in which the Wick-ordered powers (Hermite polynomials) occurring in the Wiener chaos are the successive stochastic integrals. They are all contained in the exponential martingale e^{λW_t − λ²t/2}. The second one illustrates the Doob-Meyer decomposition:

W_t² is a submartingale, and is written as the sum of a martingale, :W_t²:, and an increasing function, t, of bounded variation.
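The identity :W_t²: = W_t² − t = 2∫₀ᵗ W_s dW_s can be checked path by path. A minimal sketch (numpy assumed):

    import numpy as np

    rng = np.random.default_rng(6)
    t, n = 1.0, 100_000
    dt = t / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    W_left = np.cumsum(dW) - dW           # W at the left endpoint of each step
    W_t = W_left[-1] + dW[-1]             # the endpoint value W_t

    lhs = 2 * np.sum(W_left * dW)         # 2 * (Ito integral of W dW)
    rhs = W_t**2 - t                      # the Wick square :W_t^2:
    print(lhs, rhs)                       # agree as n -> infinity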
Manipulation of stochastic integration can be summarised by the Ito multipli-
cation table: keep all differentials in dt up to first order, using dt.dW = 0 and
dW.dW = dt. From this, we can get the important relation between a certain
parabolic partial differential equation, known as Kolmogorov's forward equation,
and the corresponding stochastic differential equation. Suppose that Xt satisfies the
stochastic differential equation eq. (71), with initial r.v. equal to X0 , which has law
p(x). Let p(x, t) be the law of Xt ; then it can be shown that p(x, t) is smooth and
satisfies the parabolic equation
$$\frac{\partial p}{\partial t} = \frac{1}{2}\,\frac{\partial}{\partial x}\left(b(x,t)^2\,\frac{\partial p}{\partial x}\right) + \frac{\partial}{\partial x}\,(a(x,t)\,p), \qquad (72)$$

with initial condition p(x, 0) = p(x). To see why this is, we note that if f(x) is any smooth function, we can apply Ito's lemma to the random process f(X_t). We recover ∫ p(x, t) f(x) dx as E[f(X_t)]. We now expand f(X_{t+dt}) in a Taylor series about X_t up to second order in dW:

$$f(X_{t+dt}) = f(X_t) + \frac{\partial f}{\partial x}\,dX + \frac{1}{2}\,\frac{\partial^2 f}{\partial x^2}\,(dX)^2. \qquad (73)$$
Eq. (71) tells us that (dX_t)² = b² dt and dX = a dt + b dW. Here, dW is the forward difference. Then the expectation vanishes: E[f′b dW | X_t = x] = E[f′b | X_t = x] E[dW | X_t = x] = 0, since dW is independent of f′b at time t, and has zero expectation. So, taking the conditional expectation of eq. (73),

$$E[f(X_{t+dt}) - f(X_t) \mid X_t = x] = E\left[a\,\frac{\partial f}{\partial x}\,\Big|\; X_t = x\right] dt + \frac{1}{2}\,E[b^2 f'' \mid X_t = x]\,dt. \qquad (74)$$
Since a, b, f, f′, f″ are functions of X_t, t, they become sure functions, evaluated at x, under the conditioning; thus we get the equation for the increment f(x, t+dt) := E[f(X_{t+dt}) | X_t = x]:

$$(f(x, t+dt) - f(x))/dt =: Lf = (1/2)\,b(x,t)^2 f'' + a(x,t)\,f'.$$

This is Kolmogorov’s backward equation, which applies to the dynamics of the process.
To get the dynamics of the probability density, we take the dual operator L∗ , defined
by

$$\int p(x,t)\,Lf(x,t)\,dx = \int L^* p(x,t)\,f(x,t)\,dx,$$

which on integration by parts, and discarding the boundary term at ∞ gives


$$L^* f := \frac{1}{2}\,\frac{\partial}{\partial x}\left(b(x,t)^2\,\frac{\partial f}{\partial x}\right) + \frac{\partial}{\partial x}\,(a(x,t)\,f).$$

Since f was arbitrary, we see that p(x, t) satisfies the forward equation in the weak
sense (after smoothing with a test-function f ). It is known from the theory of elliptic
regularity, that any weak solution is a strong solution. If a and b are constants, we
arrive at the Smoluchowski equation, and the continuum version of (51):

p(x, t) = E[p(Xt , 0)|X(0) = x].

This representation for the solution of the pde gives an immediate proof that the
solution remains non-negative if the initial condition is non-negative, since p(Xt , 0) ≥
0; also, one sees that the time-evolution must be a contraction in the L∞ -norm, and
the L2 -norm as the conditional expectation is a projection.
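A minimal sketch of the backward generator (numpy assumed; the coefficients a, b and the test function are ours): one Euler step from X_t = x estimates E[f(X_{t+dt}) − f(x) | X_t = x]/dt, which should match Lf(x) = (1/2)b²f″ + af′.

    import numpy as np

    rng = np.random.default_rng(7)
    a = lambda x: -x                      # drift
    b = lambda x: 1.0 + 0.1 * x**2        # diffusion coefficient

    x0, dt, paths = 0.5, 1e-3, 2_000_000
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    X = x0 + a(x0) * dt + b(x0) * dW      # one Euler step of eq. (71)

    lhs = (np.sin(X).mean() - np.sin(x0)) / dt          # Monte Carlo generator
    Lf = 0.5 * b(x0)**2 * (-np.sin(x0)) + a(x0) * np.cos(x0)
    print(lhs, Lf)                        # agree up to O(dt) and sampling error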
Sometimes, we can rewrite the solution Xt in terms of time-translation ω 7→ ωTt if
we modify the measure [58]. Suppose µ′ is absolutely continuous relative to µ. Then
there exists an adapted process u(t) in (Ω, B, µ) such that

dXt = dWt + u(t)dt, X0 = 0, (75)

has a weak solution Xt whose law is the same as Yt (ω) := ω(t) as a r. v. on (Ω, B, µ′ ).
Then the Radon-Nikodym derivative is
$$d\mu'/d\mu = \exp\left(\int_0^t u(s)\,dX_s - \frac{1}{2}\int_0^t \|u(s)\|^2\,ds\right). \qquad (76)$$

Conversely, if u is such that the r. h. s. of (76) has Wiener expectation 1, (as will
happen if u is bounded), then there exists an absolutely continuous measure µ′ given
by (76), such that Tt∗ on (Ω, B, µ′ ) produces a weak solution to (75). This is the
Girsanov-Cameron-Martin theorem.
This change of measure is closely linked to the change of ground state in the
corresponding quantum theory, when an interaction is introduced. We see this in the
Feynman-Kac formula, below.
One can, using similar methods, integrate adapted functions relative to dM ,
where M is any martingale. The stochastic integral has other variants, such as the
Stratonovitch version [32, 41]; one can also integrate non-adapted processes, subject
to other conditions (Skorokhod), or use another noise which is not quite a martingale
[59, 60]. The Ito version has an interesting interpretation in mathematical finance.
Suppose that the price of an asset is a random process St , and it obeys the Ito equation

dSt = a(St , t)dt + b(St , t)dWt .

If we choose to hold ϕ(t) units of this asset, our portfolio at time t is worth ϕ(t)St .
The change in the value of our portfolio in time dt is d(ϕ(t)St ), and we evaluate this
as ϕ(t)dSt , because we do not change our holding ϕ(t) until after we have seen the
change in the asset price. Here dS_t = S_{t+dt} − S_t, so the total change in the value of our holding over the time-interval [0, T] is the Ito integral ∫₀ᵀ ϕ(s) dS_s, in which ϕ is adapted and the stochastic increment is the forward difference.

We now give a brief account of the Feynman-Kac formula [61]. Feynman related the quantum transition amplitude ⟨ψ, e^{−iHt}φ⟩ to the integral over histories of ⟨ψ, e^{i∫L(s) ds}φ⟩, where L is the Lagrangian [62]. The trouble is, the Feynman ‘integral’
over histories is not based on measures, but on oscillatory integrals, and these rarely
converge. In quantum physics, the spectrum of the energy is bounded below (at least
at zero temperature). This expresses the stability of the theory. It follows that the
unitary time-evolution group e−iHt has an analytic continuation to complex times
with negative imaginary part. In particular this is true of all the matrix elements of
this operator. This is the underlying fact used in Euclidean quantum field theory, but
also holds for quantum systems without any large symmetry group; only invariance
under time-evolution is needed. In particular, we can consider the group for negative
imaginary times, giving a semigroup e−Ht . The large-time behaviour of this is very
good. This was used by Nelson [63] to study certain perturbations of the free Hamil-
tonian: it is easier to study perturbations of a contraction semigroup than a unitary
group.
Theorem 4.3 Let H₀ = −(1/2) ∂²/∂x² and let V be a real-valued C^∞-function of x ∈ R, vanishing at ∞. Then H₀ + V is self-adjoint on Dom H₀, and

$$\langle \psi, e^{-(H_0+V)t}\,\varphi \rangle = \int \psi(\omega(0))\,\varphi(\omega(t))\,\exp\left(-\int_0^t V(\omega(s))\,ds\right) d\mu. \qquad (77)$$

For the proof, see [63] or [64]. For a version within quantum probability, see [65].
In this way, we construct an interacting theory in terms of a path integral using the
Wiener measure µ, weighted with an exponential function. The similarity with the
Gibbs state of a system of paths in a potential V is noteworthy. Suppose that V = 0
outside a region Λ, and converges to +∞ inside Λ. Then we see from Feynman-Kac
formula that the measure vanishes on all paths that enter the region Λ. After a
normalisation, the weighted measure thus becomes the conditional Wiener measure,
µ( · | ω(t) ∉ Λ for all t). The formula then solves the heat equation subject to the con-
dition of no-flow through the boundary ∂Λ. We do not need to find this conditioned
measure to use the formula; we can, for example, use the Monte Carlo method, and
sample paths by computer, rejecting any that enter Λ; we can also use the conditioned
measure to get results on monotonicity, since e. g. if the region Λ is enlarged, obvi-
ously more paths are allowed, and so the integral of a positive integrand is increased.
This relation with pde’s has developed into the subject called potential theory [51],
and is one of the tools used in constructive quantum field theory [66, 67, 68, 69].
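A minimal sketch of this Monte Carlo use of eq. (77) (numpy assumed; the potential and the region are ours): weight each sampled path by exp(−∫V ds), and reject the paths that enter a forbidden region Λ.

    import numpy as np

    rng = np.random.default_rng(8)
    t, n_steps, paths = 1.0, 200, 50_000
    dt = t / n_steps
    V = lambda x: np.exp(-x**2)           # a smooth potential

    steps = rng.normal(0.0, np.sqrt(dt), size=(paths, n_steps))
    W = np.concatenate([np.zeros((paths, 1)), np.cumsum(steps, axis=1)], axis=1)

    # exp(-int_0^t V(w(s)) ds), by a left-endpoint Riemann sum along each path:
    weight = np.exp(-V(W[:, :-1]).sum(axis=1) * dt)
    print(weight.mean())   # E[e^{-int V}] over paths from 0: the value at x = 0
                           # of the solution of u_t = u_xx/2 - V u with u(., 0) = 1

    alive = np.all(np.abs(W) < 1.0, axis=1)   # reject paths entering Lambda = R \ (-1, 1)
    print(alive.mean())    # fraction of paths never entering Lambda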
Dyson saw the usefulness of using imaginary time in quantum field theory [70].
Schwinger [71] had introduced the idea of the Euclidean quantum field as a way
of avoiding the difficulties of Lorentz invariance; these are replaced by invariance
under O(4), the orthogonal group; since we analytically continue all the time-ordered
functions to imaginary time, time t gets replaced by it, often attributed to Minkowski.
In fact, Minkowski did not know about the consequences of positive energy; he did not

analytically continue anything, but simply replaced time by −ix4 , where x4 = it. This
means that he considered the complex O(4), and the invariance group was a particular
subgroup L of it consisting of matrices some of whose entries were complex. In fact,
L is isomorphic to the real Lorentz group, and is thus non-compact. Nothing has
been gained by Minkowski’s trick. Indeed, lots of confusion arose in electromagnetic
texts up until recently, where other four-vectors such as Aµ were regarded as having
a complex zeroth component. Schwinger’s programme of Euclidean field theory is a
special case of a theory developed by Wightman [72, 73], in which the expectation
values of the field are proved to have an analytic continuation in all the space-time
components, into a domain that includes real position variables and purely imaginary
time.
Symanzik [74] started the mathematical programme of Euclidean quantum field
theory. Glimm and Jaffe developed constructive quantum field theory using their
theory of the perturbation of contraction semigroups. This is almost a Euclidean
point of view. A beautiful probabilistic version of the subject resulted from Nelson’s
rewrite of Symanzik’s programme. Let us outline this for the quantum mechanics of
an oscillator.
We start with the self-adjoint Hamiltonian

$$H = H_0 + V = \frac{1}{2}\left(-\frac{\partial^2}{\partial q^2} + q^2 - 1\right) \qquad (78)$$

Then the lowest eigenvalue, say 0, is simple; let U (t) = e−iHt and let ψ0 be the
eigenfunction of the eigenvalue 0. Then ψ0 > 0 holds. That is, there are no nodes
in the ground state, a kind of Perron-Frobenius theorem. It is then convenient to
replace the Hilbert space of the theory, H = L2 (R, dq) by the unitarily equivalent
space H′ = L2 (R, |ψ0 (q)|2 dq). The unitary map W : H → H′ is given by (W ψ)(q) =
ψ(q)/ψ0 (q). This is obviously organised so that W ψ0 = 1, the unit constant function
in H′ . An observable A, acting on H, is converted to A′ = W AW −1 . The operator q
commutes with W , so is unchanged; but its canonical conjugate, p does not commute
with W , and neither does q(t) := U (t)qU (−t), so these operators do not take the
usual Schrödinger form on H′ .
The positivity of the energy ensures that the Wightman function h1, q(t1 ) . . . q(tn )1i
has an analytic continuation to purely imaginary times,

tj = isj , such that sj − sj+1 > 0, sj ∈ R, j = 1, . . . n − 1. (79)

Define the Schwinger function

Sn (s1 , . . . , sn ) = Wn (is1 , . . . , isn ) (80)

at points given by eq. (79); we take S_n to be defined by symmetry in the other regions; since the W_n are symmetric at real points, the n! analytic functions coincide at a common boundary of real dimension n. So by the edge-of-the-wedge theorem

[73] there is one common analytic function coinciding with these Schwinger functions.
Obviously, Sn determines Wn , by the uniqueness of analytic continuation.
Then two properties hold: there is a stochastic process X(t) such that Sn is the
nth moment:
Sn (s1 , . . . , sn ) = E[X(s1 ) . . . X(sn )];
Moreover, the process is stationary and Markovian; that is

E[Xt |B≤s ] = E[Xt |Bs ], for t ≥ s. (81)

Here, B≤s is the σ-tribe generated by Xr , r ≤ s, and Bs that generated by Xs .


Neither of these properties is true for a general Hamiltonian theory, so they reflect
somehow the Lagrangian origins of the theory.
We can recover the physical Hilbert space as the initial space, L²(Ω, B₀, µ), generated by powers of X(0) acting on the vacuum ψ₀, which is the function 1. Also q is then multiplication by X(0). The Hamiltonian can be recovered by the identity (c.f. (51))
e−Ht P (q)ψ0 = E[P (X(t))|B0 ] (82)
for any polynomial P . This is the continuous version of the fact that the transition
matrix of a Markov chain can be recovered as the conditional probability of one
time-step. We find

hψ0 , q(t1 )q(t2 )ψ0 i = (1/2) exp{i(t1 − t2 )}. (83)

This leads by analytic continuation to

S(s1 , s2 ) = (1/2) exp{−|s1 − s2 |} = E[X(s1 )X(s2 )] (84)

where X(t) is the Ornstein-Uhlenbeck process.
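A minimal sketch of eq. (84) (numpy assumed), using the exact one-step transition of the Ornstein-Uhlenbeck process started in its stationary law N(0, 1/2):

    import numpy as np

    rng = np.random.default_rng(9)
    dt, n_steps, paths = 0.01, 200, 100_000
    decay = np.exp(-dt)
    step_sd = np.sqrt((1.0 - np.exp(-2.0 * dt)) / 2.0)

    X = rng.normal(0.0, np.sqrt(0.5), size=paths)   # X(0) ~ N(0, 1/2)
    X0 = X.copy()
    for _ in range(n_steps):                        # exact OU transitions
        X = decay * X + step_sd * rng.normal(size=paths)

    s = n_steps * dt                                # separation s = 2.0
    print(np.mean(X0 * X), 0.5 * np.exp(-s))        # E[X(0)X(s)] = (1/2)e^{-|s|}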


Nelson was able to follow this programme for the free quantised field, and so
rewrite the problem of finding solutions to relativistic quantum fields in terms of
generalised random fields. A selection of good reading on this subject is [75, 76, 77,
78, 79].

5 Quantum Processes
Is friction a classical concept? ‘There is no friction in quantum systems: the ground
state of the atom does not grind to a halt. The introduction of friction, e. g. the
term −γ ẋ in Newton’s laws, is to account for atomic phenomena such as radiation
of moving charges, in a very crude way. Such effects are treated exactly in quantum
mechanics, and therefore frictional terms do not appear’. The view is still widespread
but not universal among physicists. Friction does not appear in classical mechanics
either if it is not put in.

A quantum process is, in a general way, a Hilbert space H and a family of self-
adjoint operators {A(t)}t≥0 on H. A quantum field used as noise appeared in [80].
Senitzky obtained the approximate dynamics of a quantum oscillator by reduction
from the dynamics of a larger conservative system. He arrived at the following
quantum Langevin equation with a Gaussian positive-energy quantum driving term
(ϕ(t), π(t)) (the noise):

$$\frac{dQ(t)}{dt} = \omega P(t) - \gamma Q(t) + \varphi(t), \qquad \frac{dP(t)}{dt} = -\omega Q(t) - \gamma P(t) + \pi(t). \qquad (85)$$
He noticed that without the ‘noise’, the Heisenberg commutation relations fade with
time: [Q(t), P (t)] = ie−2γt ; he considered this to be inconsistent with quantum me-
chanics. With the noise, the solutions obey [Q(t), P (t)] ≈ i for all time. The noise
was a free quantum field with constant energy spectrum from 0 to ∞. This does not
quite satisfy the requirement that the Heisenberg commutation relations should hold
for all time. In [81] we found the general exact solution to this problem. A special
case is
$$\varphi(t) = 2^{-1/2}(a(t) + a^*(t)), \qquad \pi(t) = i2^{-1/2}(a(t) - a^*(t)),$$

where

$$a(t) = (2\gamma/\pi)^{1/2} \int_\omega^\infty e^{-ikt}\,a(k)\,dk, \qquad [a(k), a^*(k')] = \delta(k - k').$$
This has a constant energy spectrum from ω to ∞. The feature of this solution, and
Senitzky’s approximate solution, is the relationship between the dissipation γ and the
correlation of the quantum noise, which at zero temperature is
$$\langle a(s)\,a^*(t) \rangle = \frac{2\gamma}{\pi}\; e^{i\omega(t-s)}\; \frac{1}{t - s + i\epsilon}.$$
This is called the fluctuation-dissipation theorem.
Lax [82] used noise with all frequencies, with two-point function
$$\langle a(s)\,a^*(t) \rangle = \frac{\gamma}{\pi}\,\delta(t - s).$$
This is closer to the classical white noise, in that the increments to the process are
independent, and the field obeys a quantum version of the Markov property. It was to
be used later by Hudson and Parthasarathy in a rigorous body of theory [83, 84]. As
physics, it was criticised by Kubo and others, as violating the KMS condition, which
comes from the axiom of positive energy [43]. The correct treatment (at non-zero
temperature) was obtained by Ford et al., [85] by taking the limit of one oscillator
coupled to a large system of oscillators (or a string [86]). This was truly the quantum
Langevin equation, in that the noise is added only to the equation for P and not
to Q. This can also be obtained [87] as a singular limit of the asymmetric solution
given in [81]. The quantum noises in [85, 81] are not martingales, and have not got
independent increments. They do fit into the axiomatic scheme offered in [88]. In

[89], Ford emphasizes the role played by causality. Instead of eq. (43), he considers
the equation with memory

$$m\ddot{x} + \int_{-\infty}^{t} \mu(t - s)\,\dot{x}(s)\,ds + V'(x) = F(t). \qquad (86)$$

The fact that the dissipation due to the future must be zero leads us to consider
only those µ which vanish for negative argument. Perhaps this is a lesson for those
[90, 91, 92, 65, 83, 84] who like to work on Lax’s version.
The first work to use the words ‘continuous tensor products’ (CTP) was [90]. The
notable conclusion was that the theory can always be embedded in a boson Fock space;
the Wiener chaos is an example of this. We start with a definition of current algebra,
or better, current group. Let G be a Lie group, with Lie algebra G, and denote by
D(G) the set of C ∞ -maps from Rn into G, being the identity outside a compact set.
We can furnish D(G), the current group, with a group law by pointwise multiplication:
f g(x) := f (x)g(x). This group has a Lie algebra, denoted D(G), which is the set of
all C ∞ -maps F : Rn → G, of compact support, under the pointwise bracket [92]

[F (f ), G(g)] := [F, G](f g).

The problem is to find representations of the current groups and the current algebras,
by unitary or self-adjoint operators respectively.
Guichardet [93] proposed a construction for the tensor product of Banach spaces
or algebras, labelled by a continuous index. The first thing is to define, if possible,
the continuous product of f (x) over x ∈ Rn , when f has compact support. He tries
∏_x f(x) := exp( ∫ log f(x) dx ). (87)

For Hilbert spaces, we wish to define the scalar product between two fields of vectors
ψ(x) and φ(x). We put f(x) = ⟨ψ(x), φ(x)⟩ and use eq. (87), provided that f(x) = 1
outside a compact set and we take log 1 = 0 (the principal branch). We then need
to be able to extend the scalar product to linear combinations of product vectors. In
[94], we give an example of a non-existent Hilbert continuous product, in that the
positivity fails on linear combinations. Guichardet presents a class of Hilbert spaces
for which the construction works, and writes the Fock representation of the free field
in these terms. To explain his examples, let H be a Hilbert space, and Γ(H) the Fock
space over H. We define the map exp : H → Γ(H) by

exp φ := 1 ⊕ φ ⊕ 2^{−1/2} φ ⊗ φ ⊕ . . . ⊕ (n!)^{−1/2} φ^{⊗n} ⊕ . . . (88)

The exp φ ∈ Γ(H) is called the coherent state determined by the one-particle state φ.
One shows that they form a total set (their span is dense) in Γ(H); clearly,

⟨exp φ, exp ψ⟩ = exp⟨φ, ψ⟩. (89)
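For one-dimensional H this can be checked numerically: exp φ then has components φ^n/√(n!) in the n-particle subspaces. A small sketch (an illustration under that assumption, with a truncated Fock space; not part of the original text):

# Check <exp(phi), exp(psi)> = exp(<phi, psi>) for dim H = 1, truncating at N levels;
# the n-particle component of exp(phi) is phi**n / sqrt(n!).
import numpy as np
from math import factorial

def coherent(z, N=40):
    return np.array([z**n / np.sqrt(factorial(n)) for n in range(N)])

phi, psi = 0.3 + 0.2j, -0.1 + 0.5j
lhs = np.vdot(coherent(phi), coherent(psi))   # np.vdot conjugates its first argument
rhs = np.exp(np.conj(phi) * psi)              # exp(<phi, psi>)
print(abs(lhs - rhs))                         # ~1e-16: truncation error is negligible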

In [93], the Hilbert space H_x at each point is itself the Fock space Γ(H) of a Hilbert
space H, and the family I consists of coherent states at each point. This is a special
case of the construction given below.
The case of current groups was treated in [95, 90]. We give here a special case
when the continuous label is R, interpreted as time; we start with (H, U, ψ), where H
is a Hilbert space, ψ ∈ H, and U is a representation of G on H such that {U(g)ψ : g ∈ G}
has dense span. The triple (H, U, ψ) is called a cyclic representation of G.
We say that (H_1, U_1, ψ_1) and (H_2, U_2, ψ_2) are cyclically equivalent if there exists a
unitary isomorphism W : H_1 → H_2 such that for all g ∈ G,

W U_1(g) W^{−1} = U_2(g);    Wψ_1 = ψ_2.

A cyclic representation gives us a function on the group, analogous to the characteristic
function of a random variable. Indeed, it reduces to the characteristic function
when the group is R. Thus

C(g) := ⟨ψ, U(g)ψ⟩. (90)
Let Span G denote the complex vector space of finite formal sums of elements of G.
Then C is continuous and of positive type on Span G, which determines (H, U, ψ)
up to cyclic equivalence. Conversely, a continuous function C of positive type on G
determines a cyclic representation (H, π, ψ) related to C by eq. (90). The construction
is very similar to the proof of the GNS representation. First, we construct the vector
space, Span G, and furnish it with the scalar product, determined by its values on the
linearly independent elements g1 , g2 , . . ., by

⟨g_i, g_j⟩ = C(g_i^{−1} g_j);

we complete Span G in the norm (or, if a semi-norm with kernel K, we complete the
quotient Span G/K), giving the space H. Then we choose ψ to be the identity of
the group. The operator U (g) can be defined on Span G as left multiplication; this
is easily shown to be unitary, and so can be extended to the whole space to get the
representation U of G.
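For a finite group the whole construction can be carried out in a few lines. The sketch below (a hypothetical example, not from the original: G = Z_N with C the Fourier transform of a probability vector, so that the Gram matrix is a positive semi-definite circulant) builds the scalar product on Span G and checks that left multiplication is unitary and that ⟨ψ, U(g)ψ⟩ recovers C(g):

# GNS-style construction of (H, U, psi) from a positive-type function on G = Z_N.
import numpy as np

N = 6
rng = np.random.default_rng(0)
p = rng.random(N); p /= p.sum()          # a probability vector on the dual group
omega = np.exp(2j * np.pi / N)
C = np.array([sum(p[x] * omega**(g * x) for x in range(N)) for g in range(N)])

# Gram matrix M[g, h] = C(h - g); positive type <=> M is positive semi-definite.
M = np.array([[C[(h - g) % N] for h in range(N)] for g in range(N)])
assert np.linalg.eigvalsh(M).min() > -1e-12

def U(g):                                 # left multiplication permutes the basis e_h
    P = np.zeros((N, N))
    for h in range(N):
        P[(g + h) % N, h] = 1
    return P

g = 2
assert np.allclose(U(g).T @ M @ U(g), M)  # U(g) preserves the scalar product
psi = np.eye(N)[0]                        # psi = the class of the identity e
print(np.allclose(psi @ M @ U(g) @ psi, C[g]))   # <psi, U(g) psi> = C(g): True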
In an infinite tensor product over a discrete index, von Neumann was able to end
up with a separable Hilbert space only by labelling a special vector, say ψx in each
factor Hx , and then considering products ⊗φx of vectors in a subset ∆ that at infinity
are close to ψ_x. Only then does the infinite product ∏⟨φ_x, ψ_x⟩ converge. The tensor
product then carries the labels {ψ(x), ∆}. Guichardet used a similar idea for the
continuous product. We are less ambitious, in that we ask for the tensor product of
a cyclic representation (H, U, ψ) of a group. We use the same representation at each
point of the time axis, because we want to get a stationary quantum process. We
then define the function C : D(G) → C as
C(g(·)) := ∏_x ⟨ψ, U(g(x))ψ⟩, (91)

which is well defined if we choose at each x one branch of the logarithm. To get
a representation of the current group, it is necessary and sufficient that this be of
positive type on the current group, in which case we say that the CTP exists. We
also want the function to be extendable to step functions, constant in an interval [s, t]
and the identity outside. For such a g( . ), we divide an interval [s, t] into an arbitrary
number, N , of equal intervals; then C(g) is the product of N equal factors, each a
characteristic function on G. Thus C has the property that it has an N th root that is
also a characteristic function. Such C is called infinitely divisible. By the relation of
characteristic functions to cyclic representations, we are able to transfer the concept
of ∞-divisibility to cyclic representations:

Definition 5.1 Let (H, U, ψ) be a cyclic representation of a group G. We say [95]
that it is ∞-divisible if, for any integer N > 0, there is another cyclic representation
(H^{1/N}, U^{1/N}, ψ^{1/N}), called the N-th root, such that (H, U, ψ) is cyclically equivalent to

(⊗H^{1/N}, ⊗U^{1/N}, ⊗ψ^{1/N})

where the tensor product is over N factors, and the resulting representation is re-
stricted to the cyclic subspace spanned by the group acting on the product vector
⊗ψ^{1/N}.

We see immediately that if for some n the n-th root of the representation exists, then it
is unique (up to cyclic equivalence). For, the characteristic functions of two n-th roots,
C_1, C_2 say, both satisfy C_i^n = C, and so their ratio is ω_n, an n-th root of unity. But
this violates positivity unless ω_n = 1. The converse also holds: if C is the product of
n functions of positive type, then C itself is of positive type. In [95] we assumed that
C(g) never vanishes; we prove this later.
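A standard example for orientation (added here): for G = R the Gaussian characteristic function C(λ) = e^{−λ²/2} is ∞-divisible, since C^{1/N}(λ) = e^{−λ²/(2N)} is the characteristic function of a Gaussian of variance 1/N, so each root is again of positive type and (C^{1/N})^N = C.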
Following [95] we can now give the criterion for the positivity of the scalar product
in a continuous tensor product ⊗_{ψ,∆} H_x of cyclic group representations, relative to
the cyclic vector ψ and the set of states ∆ := {U (g)ψ : g ∈ G}.

Theorem 5.2 The following are equivalent.

1. The function C(g) is a continuous function of positive type on G with C(e) = 1, and is ∞-divisible.

2. There exists an ∞-divisible cyclic representation (H, U, ψ) of G such that C(g) = ⟨ψ, U(g)ψ⟩.

3. ⊗_{ψ,∆} H_x exists.

4. C(e) = 1 and a branch of log C(g) is a conditionally positive function on G.

In (3) and (4) the branch of the logarithm is determined by which root of C is of
positive type. Only the item (4) needs explanation. A function F (g) on a group is

said to be conditionally positive if

Σ_{ij} z̄_i z_j F(g_i^{−1} g_j) ≥ 0

for all n-tuples (g_1, . . . , g_n) of group elements and all complex n-tuples
(z_1, . . . , z_n) summing to zero: Σ_i z_i = 0.

To sketch the proof: if C is ∞-divisible, and C = e^F, then C^s is also of positive
type, for all small s > 0. Then

Σ_{ij} z̄_i z_j (1 + sF_{ij} + . . .) ≥ 0, (92)

and so if Σ_i z_i = 0, we get that F is conditionally positive semidefinite. For the
converse, if F is conditionally positive semidefinite, then e^{sF} is of positive type for all
s > 0, see [56], page 280.
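For G = R and the Gaussian branch F(λ) = −λ²/2, conditional positivity can be seen in closed form: when Σ_i z_i = 0 the quadratic form collapses to Σ_{ij} z̄_i z_j F(λ_j − λ_i) = |Σ_i z_iλ_i|² ≥ 0. A numerical sketch of this check (illustrative only, not part of the original text):

# Conditional positivity of F(lam) = -lam**2/2 on G = R: the form
# sum_{ij} conj(z_i) z_j F(l_j - l_i) is >= 0 whenever sum_i z_i = 0.
import numpy as np

rng = np.random.default_rng(1)
n = 8
l = rng.normal(size=n)                      # group elements l_i in R
z = rng.normal(size=n) + 1j * rng.normal(size=n)
z -= z.mean()                               # enforce sum(z) = 0

F = -0.5 * (l[None, :] - l[:, None])**2     # F[i, j] = F(l_j - l_i)
quad = np.conj(z) @ F @ z
print(quad.real, abs(np.sum(z * l))**2)     # the two numbers agree
assert quad.real >= -1e-12 and abs(quad.imag) < 1e-12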
The following result is called an Araki-Woods embedding theorem [95], because of
the similarity with [90], (but with different hypotheses). We remark that under the
above conditions F is conditionally positive semidefinite; then the function

⟨g, h⟩ := F(g^{−1}h) − F(g) − F(h^{−1}) (93)

is of positive type, and so can be used to define a semi-definite form on Span G by
sesquilinearity.
Let K be the (separated, completed) Hilbert space formed using this as scalar
product on Span G. Let G_0 be the subgroup of G such that U(g)ψ = e^{iλ}ψ for some
real λ. We see that ⟨g, h⟩ vanishes on Span G_0, and defines a scalar product on
Span G/(Span G0 ), (perhaps after identifying vectors of zero norm with zero). We
then complete this to give a Hilbert space, K. We see that the equivalence class of
the identity e ∈ G is the zero vector in K. The original cyclic representation (H, U, ψ)
can then be embedded in the Fock space over K, as follows: define the map W from
H to Γ(K) by its action on the total set U (G)ψ:

W (U (g)ψ) = C(g) exp[g], g ∈ G. (94)

One easily sees that this preserves the scalar product, using (93). Thus it can be
extended by linearity and continuity to H. We see that the cyclic vector ψ is mapped
to the ‘vacuum’ vector ψ0 of the Fock space. As for the group action, we use the fact
that G/G_0 is a G-space, with left multiplication τ_g[h] = [gh]. This defines an action
exp{τg } on the Fock space as usual, by its actions on the coherent vectors:

exp{τg } exp[h] := exp[gh].

Define an operator U ′ closely related to exp{τg }:

U ′ (g)C(h) exp[h] := C(gh) exp[gh] (95)

Then by calculation one sees that (H, U, ψ) is cyclically equivalent to the cyclic sub-
space of (Γ(K), U′, ψ_0); W intertwines U and U′ and maps ψ to ψ_0 = exp[e]. From the
unitarity of U′ we see that |C(g)|² = e^{−⟨[g],[g]⟩} ≠ 0.
The Gaussian measure is ∞-divisible, and the representation of the translation
group, U(λ), with Gaussian cyclic vector ψ(x) = (2π)^{−1/4} e^{−x²/4}, is ∞-divisible. The
corresponding CTP contains the Brownian motion of §2; the continuous product ⊗_0^t U(λ) is
the exponential martingale. A representation of the oscillator group is ∞-divisible,
and the CTP of this is the free non-relativistic quantised field [92].
H. Araki independently obtained similar results [96]. Instead of ∞-divisible cyclic
representations of groups, Araki started with a factorizable representation of current
algebra. He remarked that, putting [g] = φ_g, the map V(g)φ_h := φ_{gh} − φ_g is a unitary
representation of G; this is proved on the vectors φ_h, φ_k by use of (93). The equation
expresses that the map g ↦ φ_g ∈ K is a one-cocycle of the group, with values in K.
We briefly explain this.
So, let G be a group, and let K be a Hilbert space on which G acts by unitary
operators g 7→ V (g). We shall write the left action φ 7→ V (g)φ as left multiplication,
φ 7→ gφ. The right action, which appears in the general theory of group cohomology,
is taken to be trivial: φg = φ. An n-cochain with values in K is a map from Gn into
K, that is, it is a function of n group elements with values in K, thus: φ(g1 , . . . , gn ).
We shall need only the 0-cochains, which make up the space C 0 := K of vectors
independent of g, and the 1-cochains, which are vector fields φ(g) ∈ K defined on the
group. These make up the vector space C 1 . We shall also need the 2-co-chains, when
K = C; these are complex-valued functions of two group elements. We see that the
cochains of any degree k form a vector space C k . Fundamental to any cohomology
theory is the coboundary operator, which is a linear map, δ : C k → C k+1 , so increasing
the degree of the cochain. It obeys δ² = 0. In the case of a group G and a left and
right action of G on K, δ is the linear map defined on C 0 by

(δφ0 )(g) = gφ0 − φ0 g.

On C 1 , δ is the linear map defined by

(δφ1 )(g1 , g2 ) = g1 φ1 (g2 ) − φ1 (g1 g2 )+ φ1 (g1 )g2 .

On C 2 , δ is the linear map defined by

(δφ2 )(g1 , g2 , g3 ) = g1 φ2 (g2 , g3 ) − φ2 (g1 g2 , g3 ) + φ2 (g1 , g2 g3 ) − φ2 (g1 , g2 )g3 .

The vector space of cocycles of degree k in a vector space K, with left and right
actions τ_1, τ_2, is denoted Z^k(G, K, τ_1, τ_2). One checks that δ² = 0. A coboundary of
degree k is a vector function of the form δφ, where φ is a cochain of degree k − 1.
The coboundaries of degree k form the vector space B^k(G, K, τ_1, τ_2). Since δ² = 0, we
see that every coboundary is a cocycle. If the converse holds, the cohomology group
H^k := Z^k/B^k is trivial. One sees that if φ is a one-cocycle in C¹(G, K, V), then
⟨φ(g_1^{−1}), φ(g_2)⟩ is a two-cocycle in C²(G, C, I).
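The identity δ² = 0 at the lowest level is a two-line computation; it can also be checked numerically, as in the sketch below (a hypothetical finite-dimensional example, not from the text: unitary matrices for the left action, trivial right action as above):

# Check that delta^2 = 0 from C^0 to C^2, for matrices acting on K = C^d
# with trivial right action (phi . g = phi).
import numpy as np

rng = np.random.default_rng(2)
d = 4
def random_unitary():
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

g1, g2 = random_unitary(), random_unitary()
phi0 = rng.normal(size=d) + 1j * rng.normal(size=d)    # a 0-cochain: a vector in K

d_phi0 = lambda g: g @ phi0 - phi0                     # (delta phi0)(g) = g.phi0 - phi0
# (delta phi1)(g1, g2) = g1.phi1(g2) - phi1(g1 g2) + phi1(g1)
dd = g1 @ d_phi0(g2) - d_phi0(g1 @ g2) + d_phi0(g1)
print(np.linalg.norm(dd))                              # 0 up to rounding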

A 2-cocycle σ(g, h) with values in the unit circle is also called a multiplier for
the group. A multiplier representation of a group G is a map g 7→ U (g), g ∈ G,
such that U (g)U (h) = σ(g, h)U (gh) for all g, h ∈ G. Although Wigner’s analysis of
symmetry in quantum mechanics leads naturally to multiplier representations, their
occurrence is sometimes called an ‘anomaly’ by physicists. When the CTP exists, we
can represent the element g( . ) of the current group by the operator (⊗U )g , defined
on the product vectors ⊗x U (h(x))ψx by
(⊗U )g (⊗x U (h(x))ψx ) := ⊗x U (g(x)h(x))ψx , (96)
The space of the CTP is then Γ(∫^⊕ K dx) and ∆ consists of coherent states of the
form exp φ_{g(·)}. So we obtain a local representation of the current algebra. We get a
multiplier when the branch of the logarithm in (91) obtained by the group law differs
from the one needed to give a function of positive type on the group. This gives rise
to an anomaly.
Araki showed that if φ is the cocycle defined by the ∞-divisible representation
U, then it is necessary that Im⟨φ(g_1^{−1}), φ(g_2)⟩ be a coboundary. Conversely, given
a cocycle φ with this property, it comes from an ∞-divisible representation. He
proved that if G is compact, then any cocycle is a coboundary, i. e. of the form
φg = (V (g) − I)χ for some χ ∈ K. Use of a coboundary leads to a CTP of the
form assumed by Guichardet [93]. Araki was able to obtain analogues of the Levy
formula (66) for various groups; for the group R this takes on a new meaning, as
the decomposition of a cocycle into its parts coming from primitive cocycles, some
algebraic and some topological. The topological cocycles are of the form (V (g) − I)χ;
it is not a coboundary because χ is not in K, but lies in a larger space that admits an
extension of V ; the V (g) − I brings the vector back into K. Some groups, e. g. R,
also have cocycles called algebraic by Araki. For example, in the case G = R, take
K = C, and V(a) = I for all a ∈ G. The cocycle is φ(a) = a. Then ⟨φ_a, φ_b⟩ = ab is
real, and C(λ) = exp{−λ²/2}, the characteristic function of the Gaussian distribution.
The Poisson part of the Levy formula comes from the coboundaries, and the Levy
processes from the topological cocycles.
The question arises, given K, V and a cocycle g 7→ φg , can we construct a CTP?
We can construct (H, U (g), ψ) from C, which can be regarded as a function such that
C(e) = 1 and the map C(h) exp φh 7→ C(gh) exp φgh is unitary. The next big step
was by Parthasarathy and Schmidt [97], who showed that given a cocycle there is
indeed an ∞-divisible representation associated with it, but that it is a multiplier
representation, with an ∞-divisible multiplier σ. The corresponding function C(g) is
σ-positive. This means that

Σ_{ij} z̄_i z_j σ(g_i^{−1}, g_j) C(g_i^{−1} g_j) ≥ 0. (97)

Naturally, this gives rise to a multiplier representation of the current group in general,
and they found the multiplier; this leads to a tidier theory than [96], since the condi-
tion for the absence of a multiplier can be dropped. Since the physical interpretation

of a symmetry group leads (according to Wigner[98]) to the ambiguity of the induced
unitary representation up to a coboundary, the projective theory is certainly the right
setting. Holevo has presented some similar concepts at the level of the algebra of ob-
servables, and found applications in quantum theory [45]. Notable in the development
was the work of Gelfand, et al. [99] who used a cocycle of SL(2, R) to construct a
factorisable representation of the corresponding current group. The whole theory is
well explained in [100, 101].
A theory of processes with independent increments and values in a Lie algebra
G was developed in [102], extended to multiplier representations by Mathon [103]
and to Clifford algebras in [104]. Corresponding central limit theorems were proved
by Hudson, and Cushen and Hudson [105, 106]. A Lie process can be obtained
by differentiation of the corresponding object for a Lie group. For example, near
the identity any group element g lies on a one-parameter subgroup generated by an
X ∈ G, and we write (Exp means the exponential map from G to G, not the Fock
map) g(t) = Exp tX, g(0) = e, g(1) = g; given a representation U(g) we get a
representation of G by π(X) = d/dt[U(g(t))]_{t=0}. By Stone's theorem, π(X) is self-adjoint.
However, given a cyclic vector ψ for U it does not follow that ψ is cyclic for π,
because of domain questions. Let E be the universal enveloping algebra of G. This is
the nonabelian polynomial algebra, modulo the ideal generated by the commutators
XY − Y X − [X, Y ]. Here, [X, Y ] ∈ G is the Lie product, a polynomial of degree 1. A
cyclic representation (H, π, ψ) is determined (up to equivalence) by a positive linear
functional, or state, on E:

X_1X_2 . . . X_n ↦ ⟨ψ, π(X_1)π(X_2) . . . π(X_n)ψ⟩ = W_n(X_1 . . . X_n).

These are the noncommutative moments, or Wightman functions; they determine a
representation, by the Wightman reconstruction theorem [73]. They are generated
by the characteristic function

C(λ) = ⟨ψ, U(Exp λ_1X_1) . . . U(Exp λ_nX_n)ψ⟩, λ ∈ R^n. (98)

Here, {Xj } is a basis of the Lie algebra, and any moment out of order is determined by
a derivative of C and use of the commutation relations. The truncated functions WT
are generated by log C, [107] and are related to W by a formula similar to eq. (10), re-
lating cumulants to the moments. Two cyclic representations with the same W , or the
same W_T, are cyclically equivalent. The cumulants of exp U (the Fock construction) are
the same as the moments of U; this follows from (exp U)(g) exp(U(h)ψ) = exp(U(gh)ψ)
and (98).
Given two representations U1 , U2 of G, their tensor product U1 ⊗ U2 , restricted
to the diagonal subgroup of G × G, gives the representation π1 ⊗ I + I ⊗ π2 of G.
This led to the use of a coproduct, though it was not recognised as such until [108].
Whereas a product on an algebra A is a linear map A ⊗ A → A, a coproduct is a
map A → A ⊗ A. For Lie algebras the coproduct is X 7→ X ⊗ I + I ⊗ X. Then we
say that a cyclic representation (H, π, ψ) is ∞-divisible if for each N there is another,

(H^{1/N}, π^{1/N}, ψ^{1/N}), such that (H, π, ψ) is cyclically equivalent to π^{1/N} ⊗ I + I ⊗ π^{1−1/N}.
Starting at N = 2 this gives the concept of rational powers of π.
The differentiation of a CTP representation ⊗t Ut (g(t)) of the current group leads
to an ultralocal field [109, 110]. These are such that the truncated Wightman functions
have the form
W_T(X_1(f_1) . . . X_n(f_n)) = κ_n(X_1 . . . X_n) ∫ f_1(t) . . . f_n(t) dt. (99)

Here, {κn } are the cumulants of π = dU . The commutative analogue was analysed
in [56]. For Lie algebras, we found [102]:
Theorem 5.3 The following are equivalent:
1) Eq. (99) defines a representation of D(G).
2) The κ_n are the cumulants of some ∞-divisible cyclic representation of G.
3) The κ_n are positive semi-definite on E_1, the subalgebra of E with identity omitted.
We note that (3) is the expression of conditional positivity at the algebraic level.
Since the cumulants of exp U are the moments of U , we can get a set of κn that obey
the positivity (3) by using the moments of exp U . These happen to have a positive
extension to E: any conditionally positive functional is positive. Th. (5.3) has a
cohomological version, which we outline.
Let E be an associative algebra with identity, K a linear space and τ a represen-
tation of E on K. The p-cochain group C p (E, K, τ ) is the linear space of p-multilinear
maps φ : E × . . . × E → K. The coboundary operator δ : C^p → C^{p+1} is given by

(δφ)(X_1, . . . , X_{p+1}) = τ(X_1)φ(X_2, . . . , X_{p+1}) + Σ_j (−1)^j φ(X_1, . . . , X_jX_{j+1}, . . . , X_{p+1}).

Then δ² = 0 and we define as usual the cocycle group Z* := ker δ and the coboundary
group B ∗ := Ran δ, and the cohomology as H ∗ := Z ∗ /B ∗ . (∗ means for any p). We
see that a 1-cocycle is a map φ : E → K that satisfies φ(XY ) = τ (X)φ(Y ), and a
1-coboundary is a cocycle of the form φ(X) = τ (X)φ0 for some φ0 ∈ K.
The states on E1 are positive elements of B 2 (E1 , C, 0). Thus if (H, π, ψ) is ∞-
divisible, then its cumulants WT define a state on E1 , and thus a scalar product:
⟨X, Y⟩ := W_T(X*Y). Here we define X* = −X, since we want π to represent
the generators iX of one-parameter subgroups by hermitian operators. Define K as
the separated prehilbert space obtained from E1 as usual. Let φ : E1 → K be the
embedding obtained from this, and define a *-action τ of G on monomials by
τ (X)φ(X1 . . . Xn ) := φ(XX1 . . . Xn ).
This states that φ is a 1-cocycle. We then show that there is a bijection between the
set of ∞-divisible cyclic representations (H, π, ψ) of G and the triples (τ, φ, χ), where
τ is a hermitian representation of G on a prehilbert space K, χ is a real character,
and φ ∈ Z 1 (E1 , K, τ ) such that
γ := Im⟨φ(X), φ(Y)⟩ ∈ B²(E_1, R, 0). (100)

In this bijection, H is embedded in Γ(K), ψ is mapped to the Fock vacuum, and π is
related to exp τ [102]. So this is the Araki-Woods embedding theorem in this case.
If (100) fails then we get a projective representation of G, with multiplier σ related
to the cocycle γ [103, 101]. We see that a cocycle for R is defined by a function
χ ∈ L1 (R) such that xχ ∈ L2 (R). We thus see the origin of the condition near α = 0
in (66).
In [104] we show that for Clifford algebras, the only possible ∞-divisible states
are Gaussian (all cumulants above the second vanish). Here the coproduct is that of
Chevalley, A ↦ A ⊗ I + (−1)^F I ⊗ A, where F is the degree of A, for elements of even
or odd degree.
The algebraic theory was extended to associative algebras (that were not envelop-
ing algebras of Lie algebras) by Hegerfeldt, who applied it to classify ∞-divisible
quantum fields [107].
Goldin et al. have, independently of this work, constructed representations of a
vector form of charge-current algebra, starting with the Fock space creation-annihilation
operators [111]; they have been able to identify the representations in terms of the
general analysis of semi-direct products.
Schürmann [108] introduced the concept of infinite divisibility for a representation
of a Hopf algebra, and obtained essentially all the results of [102, 103, 104] in this
more general setting. Stochastic integrals for these processes were also constructed.
For a clear account, see [112].
Voiculescu developed the algebraic side into a subject called ‘free probability’
[113], as it lives in full Fock space, without symmetry or antisymmetry.
Albeverio and Hoegh-Krohn [114] have constructed representations of current
groups, and been able to replace the independence at every point by a covariance
similar to the Nelson free field.

6 Quantum Stochastic Semigroups


These models of non-commutative noise, or quantum noise, are possible driving ran-
dom terms for noisy quantum dynamics. What should we be looking for in a nonequi-
librium stochastic quantum dynamics? From 1970, E. B. Davies made progress in
formulating stochastic quantum dynamics [115]. Suppose that the C ∗ -algebra of ob-
servables is A. We look at the Fokker-Planck equation in the classical case, and we see
that we might expect a quantum stochastic process to be determined by a semigroup
(in continuous or discrete time) of maps Tt from the state space Σ(A) to itself. It
must map positive operators, the density matrices, to positive operators, and preserve
the trace. We also do not want it to map a normal state to one of the finitely additive
ones, so we require a stochastic map to obey

1. T maps Σ to itself;

2. T is linear;

3. In continuous time, ‖(T_t − I)A‖₁ → 0 as t → 0.

We can throw the action onto the algebra, to get the dual action T* : A → A, by the
requirement that for A ∈ A,

⟨Tρ, A⟩ = ⟨ρ, T*A⟩ for all ρ ∈ Σ.

T* is automatically normal. We see that if A is abelian, then our conditions reduce
to the properties needed for a classical stochastic process. It is obvious that a uni-
tary time-evolution gives us a one-parameter family of stochastic maps, which can be
extended to a group by including the inverses. We can get a large class of stochas-
tic maps by forming mixtures of unitary groups; thus if τ_j is a family of invertible
dynamics, then T = Σ_j λ_j τ_j is stochastic if λ_j ≥ 0 and Σ_j λ_j = 1. Any stochastic
map is non-invertible if it is not unitary, and so is in this sense dissipative [115], p. 25.
In addition, in the quantum case, Kraus [116] has argued that to get a satisfactory
interpretation of the semigroup, T must be completely positive. We say that a map
T : A → A is n-positive if T ⊗ I_n is positive on the algebra A ⊗ M_n. This is needed,
since if our quantum system is described by the algebra A, and there is an n-state
quantum system far away, then the combined system will be described by A ⊗ Mn ,
and the dynamics on the combined system could be T ⊗ In . This must be positivity
preserving, or else some state of the combined system will evolve to give negative
probabilities. Since we want to avoid this for all n, we want T to be n-positive for
all n = 1, 2 . . .. Such a condition is called complete positivity. It should be said that
any positive map on an abelian algebra is always completely positive, so this concept
only seriously arises in quantum probability.
Kraus showed that a map T is completely positive if and only if T (A) is a sum of
maps of the form S_n* A S_n, where the S_n are bounded; [115], p. 140.
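In finite dimensions complete positivity can be tested concretely through the Choi matrix Σ_{ij} E_ij ⊗ T(E_ij), which is positive semi-definite exactly when T is completely positive; the following sketch (an illustration, not from the text) checks this for a map in Kraus form:

# Choi-matrix test of complete positivity for T(A) = sum_k S_k* A S_k on M_n.
import numpy as np

rng = np.random.default_rng(3)
n = 3
S = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(2)]

def T(A):
    return sum(s.conj().T @ A @ s for s in S)

# Choi matrix: block (i, j) is T(E_ij); C >= 0 iff T is completely positive.
C = np.zeros((n * n, n * n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = 1
        C[i*n:(i+1)*n, j*n:(j+1)*n] = T(E)
print(np.linalg.eigvalsh(C).min() >= -1e-10)   # True: the Kraus form is cp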
The great result in the subject is the classification of continuous semi-groups
of completely positive maps. In finite dimensions this was achieved in [117], and
independently, by Lindblad, [118] whose result holds for norm-continuous dynamics
on C ∗ algebras. Their result is the quantum analogue of the heat equation, i. e. it is
a dynamical equation for the density matrix. For a simple derivation, see [119]. The
result is:

Theorem 6.1 Let T_t be a semigroup of completely positive stochastic maps on M_n.
Then there exists a Hermitian matrix H and matrices S_j such that the generator of
the semigroup has the form

Z(A) = i[H, A] − (1/2)(RA + AR) + Σ_j S_j* A S_j,    where R = Σ_j S_j* S_j. (101)

This can be thrown onto the density matrices by duality. The first term i[H, A] is non-
dissipative, and is called the hamiltonian term. The second term is the dissipation.
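As a concrete sketch (ours, not from the text), the dual action on density matrices is L(ρ) = −i[H, ρ] + Σ_j (S_jρS_j* − (1/2){S_j*S_j, ρ}), and the trace-preservation of this flow is the dual statement of Z(I) = 0 in eq. (101):

# Dual (Schrodinger-picture) Lindblad generator on density matrices; we check
# Tr L(rho) = 0, the dual of Z(I) = 0 in eq. (101).
import numpy as np

rng = np.random.default_rng(4)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                        # Hermitian Hamiltonian
S = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(3)]

B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = B @ B.conj().T
rho /= np.trace(rho).real                       # a random density matrix

def lindblad(rho):
    out = -1j * (H @ rho - rho @ H)
    for s in S:
        ss = s.conj().T @ s
        out += s @ rho @ s.conj().T - 0.5 * (ss @ rho + rho @ ss)
    return out

print(abs(np.trace(lindblad(rho))))             # ~0: the trace is preserved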

It is very interesting that the first two terms of the Heisenberg expansion of the
dynamics are of this form. Thus,
(e^{iHt} A e^{−iHt} − A) t^{−1} = i[H, A] − (1/2)[H, [H, A]]t + O(t²)
= i[H, A] − (1/2)(AS² + S²A) + SAS,    where S = Ht^{1/2},

up to O(t), so it is of the form eq. (101) with R = S². In the anti-van Hove limit [9]
we replace S by λH.
It has been remarked that the commutator A 7→ i[H, A] is a derivation of the
operator algebra, and so has many of the properties of a derivative. The double
commutator has many of the properties of the second derivative, including some
positivity, which mimics the positive spectrum of −∆ and the positivity improving
properties of e∆t . Lindblad has analysed continuous semigroups of cp maps, with
generator L, in terms of the ‘dissipation operator’, being minus the coboundary of L:

D(A, B) := −δL(A, B) = L(AB) − L(A)B − AL(B). (102)

He proves that T_t := exp(Lt) is a continuous semigroup of cp maps if and only if D
is positive in the sense that

Σ_{ij} C_i* D(A_i*, A_j) C_j ≥ 0    for all A_i, C_j ∈ A. (103)
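For the generator (101) the dissipation operator can be computed in closed form (a short check added here; the Hamiltonian term drops out since A ↦ i[H, A] is a derivation): D(A, B) = Σ_j [S_j*, A][B, S_j], and since [S_j*, A_i*] = [A_i, S_j]*,

Σ_{ij} C_i* D(A_i*, A_j) C_j = Σ_j ( Σ_i [A_i, S_j]C_i )* ( Σ_k [A_k, S_j]C_k ) ≥ 0,

which exhibits the positivity (103) directly.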

Note the formal similarity with [96, 95, 102, 97]. Fannes and Quaegebeur [120] have
defined the concept of ∞-divisible completely positive mappings on groups, in which
the function C(g) is replaced by a cp operator. They prove an Araki-Woods embed-
ding theorem for such structures.
Recall that for Markov chains, Brownian motion and Euclidean field theory, we
can express the given semigroup as an isometric time-translation, followed by the
conditional expectation onto the initial space. By using two-sided time, the isometries
can be replaced by a unitary group. The finding of the appropriate unitary group is
called the dilation of the semi-group. It is not unique, but there is a unique minimal
one [121]. It would be nice to interpret the dilated system as representing the full
physics of system plus environment, with a unitary evolution; the projection onto
a subspace represents our loss of information due to incomplete knowledge. The
ambiguity of the dilation then shows that several different models give the same
(crude) coarse-grained dynamics. However, it will rarely be the case that a dilation
has the good properties, such as positivity of the energy, needed for this interpretation.
This is illustrated in the quantum case, which in finite dimensions takes the form
[115]

Theorem 6.2 Let Tt be a semigroup of cp stochastic maps on Mn acting on H. Then
there exists a Hilbert space K, a pure state ρ on H ⊗ K and a one parameter unitary
group Vt on H ⊗ K such that

Tt (A) = Eρ [Vt∗ (A ⊗ I)Vt ]

for all A ∈ M and all t ∈ R.

This is proved by putting together Theorem (4.2) and §7.2 of [115]. Note that
the Hilbert space K is constructed by adding Wiener noise, and so is not finite-
dimensional. The semi-group has been dilated to a unitary group on the Wiener
space with two-sided time; the generator of time-evolution is not bounded below,
since it has white spectrum. This does not represent an environment at any finite
temperature. A special case is the dilation of the semigroup given by the anti-van
Hove limit. In that case the process is given by
X(t) = (2πtλ²)^{−1/2} ∫ e^{−s²/(2λ²t)} U(s + t) X U(−s − t) ds. (104)

This has the interpretation as the Heisenberg evolution, but with the time t slightly
uncertain, and getting more uncertain in the future. This interpretation is only a
slight variation on the methods used in the justification of the microcanonical state
by ergodic theory. There, it is said that the atomic times are so small that we
never measure an observable at a particular time; rather, we measure the average
over the time 0 ≤ s ≤ t of the measurement thus: Ā = t^{−1} ∫_0^t A(s) ds. Since t is so
large compared with the atomic processes, we take the limit t → ∞. This idea is
a non-starter for non-equilibrium statistical mechanics, since if the limit exists it is
time-independent. Instead, we may say that we cannot measure an observable at an
exact time, but form the weighted average, with Gaussian weight, around the desired
time t. The uncertainty in the Gaussian is λ²t, growing with time. λ is the dissipation
parameter. In models it turns out to be the hopping parameter of the atomic system.
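For a Hamiltonian with discrete spectrum eq. (104) can be evaluated in closed form: the matrix element between energy eigenstates j, k acquires the factor e^{i(E_j−E_k)t} e^{−(E_j−E_k)²λ²t/2}, so coherences decay while the diagonal is untouched. A numerical sketch (a hypothetical two-level example, not from the text):

# Gaussian time-smearing (eq. (104)) with H = diag(E): the (j,k) element of
# X(t) is exp(i(E_j-E_k)t - (E_j-E_k)**2 lam**2 t / 2) X[j,k] (pure dephasing).
import numpy as np

E = np.array([0.0, 1.3])                         # an illustrative two-level spectrum
lam, t = 0.4, 2.0
X = np.array([[0.2, 0.7 - 0.1j], [0.7 + 0.1j, -0.5]])

s = np.linspace(-10, 10, 20001)                  # quadrature grid for eq. (104)
w = np.exp(-s**2 / (2 * lam**2 * t)) / np.sqrt(2 * np.pi * lam**2 * t)
phase = np.subtract.outer(E, E)                  # phase[j, k] = E_j - E_k

factor = (w[:, None, None]
          * np.exp(1j * phase[None, :, :] * (s + t)[:, None, None])).sum(0)
factor *= s[1] - s[0]
Xt = factor * X                                  # the average of U(s+t) X U(-s-t)

exact = np.exp(1j * phase * t - 0.5 * (phase * lam)**2 * t) * X
print(np.max(np.abs(Xt - exact)))                # agreement to quadrature accuracy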
Some authors limit the concept of quantum stochastic process to the case where
the possible observed path of measurements themselves make up a classical process.
The grounds for this is that the observations (in a set of repeated experiments) have
actually been seen; these form the quantum record; take them to form a sample
space. However, this is not true. The process X(t) at different times might not
commute, so the measurement of X(t) alters the state (by collapse), and subsequent
measurements are not those predicted by X(t+s), s > 0, as computed using the given
initial state. It needs conditioning to the new information, and quantum conditional
expectations only commute on abelian subalgebras. Moreover, one can measure X(t)
in one sampling and Y (t) in another, where X and Y do not commute. No classical
model would predict the statistics of the process; the classical theorist is liable to be
hit by the EPR paradox in acute form. We regard X(t) as the observable seen at time
t when no measurement has been made in {s : 0 < s < t}. So we cannot agree with

the idea that the randomness itself is caused by the reduction of the wave-function
due to continuous measurement; it might be due to interaction with a large other
body, but not one designed to measure any particular observable.
Davies’s dilation of the Lindblad semigroup uses a number of independent Wiener
processes to provide the set-up. The question arises whether there is a relation be-
tween quantum dynamical semigroups and a class of quantum stochastic differential
equations, similar to the relation between the Fokker-Planck equation (72) and the
sde (71). For this, we need a quantum version of Ito’s integral. In 1956, Umegaki
defined the concept of conditional expectation in non-commutative integration theory
[122]. Let A be a von Neumann algebra with a semi-finite trace, and say an oper-
ator A is integrable if Tr |A| < ∞. The vector space of integrable operators can be
completed to form the space L1 (A). Segal and Nelson showed that there is a closed
operator representing an element of the completion. Let At be an increasing family
of subalgebras which generate A and are right continuous [123], such that the trace,
restricted to each At is semi-finite. Then a conditional expectation relative to the
trace is a linear map M_t : L¹(A) → L¹(A_t), t ≥ 0, such that

Tr(XA) = Tr(M_t(X)A) for all A ∈ A_t, X ∈ L¹(A).

A martingale is a process Xt of integrable operators such that

Ms Xt = Xs

for all 0 ≤ s ≤ t. This concept can be generalised to a filtration of algebras with
specified state, rather than trace.
Cuculescu [124] proved a martingale convergence theorem for discrete time. Bar-
nett [123] obtained a martingale theorem for continuous time. This work persuaded
us to look for examples of noncommuting martingales. Soon we found plenty within
the theory of continuous tensor products [125]. Let (H, U, ψ) be an ∞-divisible rep-
resentation of a Lie group G, and consider ⊗_{t=0}^∞ H_t relative to the vector ⊗ψ_t and the
set ∆ of coherent vectors. Here, all factors are the same. To g ∈ G we associate the
family of unitary operators

V_t(g) := (⊗_0^t U(g)) ⊗ (⊗_t^∞ I). (105)

We call such an operator simple, localised in [0, t]. Let At be the algebra generated by
{Vs (g)} with 0 ≤ s ≤ t and g ∈ G. Then for s < t define the map Ms : At → As by
continuous linear extension of M_s(⊗_{r=0}^t V_r(g)) = ⊗_{r=0}^s V_r(g). Then M_s is a conditional
expectation, and relative to Ms , the family Vt is a martingale. Applied to G = R
with ψ a Gaussian state, Vt is the exponential martingale of Brownian motion. When
G is the oscillator group, the Lie algebra is spanned by P, Q, H and a central element
I. There is a representation by self-adjoint operators on L2 (R), with the ground state
of the harmonic oscillator as cyclic vector. This is infinitely divisible, and the unitary
operators in (105) are copies of the exponential martingale e^{iW_t} for the subgroups

generated by P and Q, and is the Poisson exponential martingale for the subgroup
generated by H [126]. This became known as the gauge process [84]. All these
martingales are defined on the total set of coherent states. Since they are unitary,
they can be extended to an everywhere-defined unitary group, the generators of which
are self-adjoint operators. This is the main technique of the Hudson-Parthasarathy
calculus [127, 83, 84].
Examples of martingales with trace were given in [128]. Consider the Fock Fermi
operators b(f), b*(g) with anticommutation relations [b(f), b*(g)]_+ = ⟨f, g⟩ for f, g ∈
L²(R_+). The algebra generated by these and the Fock condition b(f)|0⟩ = 0 is
represented on antisymmetric Fock space over L2 (R+ ) as the W ∗ -algebra generated
by the Fermi field ψ(f) = b(f) + b*(f) acting on the Fock vacuum |0⟩. The Clifford
process is the set of operators

Ψ(t) := ψ(ξ[0,t] ). (106)

The non-commutative integration theory [129, 130, 131], taking the place of measure
theory, is that based on the hyperfinite von Neumann factor of type II1 , which is
furnished with a faithful trace ϕ(A) = ⟨0|A|0⟩. The completion of A in the norm
‖A‖ = ϕ(A*A)^{1/2} is denoted L²(A, ϕ). The projection M_t from L²(A, ϕ) onto
L²(A_t, ϕ) is the same as the projection from Γ(L²[0, ∞)) onto Γ(L²[0, t]); it obeys
the laws for a conditional expectation, and Ψ(t) is a martingale.
The increments of Ψ(t) are independent, but anti-commute. Otherwise, all the
properties are analogous to Brownian motion. The isometric time-evolution analogous
to the left shift of the classical theory is that given by the map Us : Ψ(t) 7→ Ψ(s + t).
The antisymmetric Fock space over L2 (R) carries a unitary extension of Us , namely
the second quantisation of translation in R. We define an adapted process h(t) to be
a family of operators such that h(t) ∈ At ; it is simple if it can be expressed as
h = Σ_{k=1}^n h_{k−1} χ_{[t_{k−1}, t_k)} on [0, t). (107)

We then define the stochastic integral of any simple adapted process, relative to Ψ,
to be that constructed in the manner of Ito, with the forward difference in dΨ:
∫_0^t h(s) dΨ(s) := Σ_{k=1}^n h_{k−1} (Ψ(t_k) − Ψ(t_{k−1})). (108)

As in Ito's theory, what makes it work is an isometry property:

Theorem 6.3 If h(t) is a simple process made up of L² operators, then ∫_0^t h(s)dΨ(s) ∈ L², and

‖∫_0^t h(s) dΨ(s)‖₂² = ∫_0^t ‖h(s)‖₂² ds.
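To indicate where the isometry comes from (a sketch added here, using only the trace property of ϕ and (ΔΨ_k)² = Δt_k, which follows from ψ(f)² = ⟨f, f⟩I): for a simple h,

‖Σ_k h_{k−1}ΔΨ_k‖₂² = Σ_{j,k} ϕ(ΔΨ_j h_{j−1}* h_{k−1} ΔΨ_k);

the diagonal terms give ϕ(h_{k−1}* h_{k−1}(ΔΨ_k)²) = ‖h_{k−1}‖₂²(t_k − t_{k−1}) by the trace property, while the cross terms vanish because ΔΨ_k has mean zero and is independent (in the graded sense appropriate to the Clifford algebra) of the past algebra A_{t_{k−1}}.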

The proof [128] is similar to Ito’s. We use this to construct the integral of square-
integrable adapted processes, and some Lp processes, by extension to the completion
of the space of simple adapted processes. The stochastic integral is the quantised field
Ψ, smeared with an operator h rather than a test-function. There is a Doob-Meyer
theorem: M_t² is the sum of a martingale, denoted by [M_t, M_t] in classical theory
(NOT the commutator!), and an increasing process of bounded variation, denoted
⟨M_t, M_t⟩. Any stochastic integral is a martingale, and we show the converse, that
any L² martingale of mean zero is a stochastic integral. We also define the stochastic
integral N(t) = ∫_0^t h(s) dM(s), where h is adapted and square-integrable relative to
⟨M_t, M_t⟩. Here, M is an L²-martingale. This representation of N is unique; we then
write h as the stochastic derivative: h = ∂N/∂M. We show that we can change
variables in the integral: the stochastic Radon-Nikodym theorem [123].
We are able to show [132] that the quantum sde

dXt = F (Xt , t)dMt + dMt G(Xt , t) + H(Xt , t)dt (109)

has a solution in L²(A, ϕ) for F, G, H continuous, adapted and locally uniformly
Lipschitz, for any martingale M_t of degree n, and that the solution obeys the Markov
property [133].
Manipulations of differentials are similar to the Ito calculus: (dt)² = 0 = (dt)(dΨ);
(dΨ)² = dt. Pisier and Xu have obtained ‘Burkholder-Gundy’ inequalities within this
theory [134].
The central state ϕ of the Clifford algebra corresponds physically to an infinite
temperature. For the CCR and CAR algebras, we constructed the stochastic integrals
starting with quasifree states with no Fock part, using the non-central state in place
of the trace [135, 136]. This theory is somewhat technical (‘unreadable’ [137]).
The general Lindblad semigroup can be dilated [138] using the flow defined by a
solution to a quantum stochastic equation in the sense of Hudson and Parthasarathy
[83, 84, 139]. It was extended to some unbounded cases by Belavkin [140]. We now
give a brief account of this, following Frigerio [141].
Let Tt = exp(Lt) be a semigroup of completely positive normal stochastic maps
on the algebra B(H).
Theorem 6.4 There exists a Hilbert space F, a group {αt : t ∈ R} of ∗ -automorphisms
of B(H ⊗ F) and a conditional expectation E0 of B(H ⊗ F) onto B(H) ⊗ IF such that

T_t(X) ⊗ I_F = E_0[α_t(X ⊗ I_F)], X ∈ B(H), t ∈ R. (110)

The evolution αt is a perturbation of the ‘free evolution’ α0t on B(H), of the form

α_t( · ) = U(t) α^0_t( · ) U(t)*, (111)

where {U (t) : t ∈ R} satisfies the cocycle condition

U(t) α^0_t(U(s)) = U(s + t), t, s ∈ R, (112)

is unitary and is the solution of a qsde in the sense of [83, 84]. We give the details in
the simplest case, eq. (101) with only one term S in the sum. We take F = Γ(L²(R)),
with total set the coherent vectors {exp φ : φ ∈ L²(R) ∩ L¹(R)}. We define the
annihilation process, creation process and gauge process on this total set by

A(t) exp φ = (∫_0^t φ(s) ds) exp φ (113)

A*(t) exp φ = (d/dǫ) exp(φ + ǫχ_{[0,t]}) |_{ǫ=0} (114)

Λ(t) exp φ = (d/dǫ) exp(e^{ǫχ_{[0,t]}} φ) |_{ǫ=0}. (115)

The conditional expectation M_t is as for the CTP, ⊗_{s=0}^t (F_s), based on the Fock
vacuum, and A(t), A∗ (t) are the creators and annihilators defined by the generators
P, Q of the Heisenberg subgroup of the oscillator group; Λ is the number operator.
We identify any operator X in B(H) with its ampliation X ⊗ IF , and any operator
Y with domain D ⊆ F with the algebraic tensor product IH ⊗ Y . A family U (t) is
found by solving the qsde

dU (t) = U (t) [iS ∗ dA(t) + iS dA∗ (t) + (iH − S ∗ S/2)dt] , (116)

with the initial condition U (0) = I. The structure of the equation is designed to
ensure that the solution, defined on the set of coherent states, is continuous, unitary,
and adapted. The term S ∗ S/2 arises as the Ito correction, or as due to the Wick
ordering [142]. To ensure that αt obeys the group law, the usual free evolution α0 on
F, the second quantisation of the translation group on L2 (R), is chosen. It is then
proved that α^0_{−s}[U*(s)U(s + t)] satisfies the same qsde as U(t), and so, by uniqueness,
must be U(t). So U satisfies the cocycle condition. On multiplying out, we see that
α_t obeys the group law.
The theorem for a semigroup with a finite number of operators S_j follows a similar
line. □
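As a check (a sketch added here) that the Wick/Ito term S*S/2 in (116) is exactly what unitarity requires: write dU = U dL with dL = iS*dA + iS dA* + (iH − S*S/2)dt, and suppose U*U = I up to time t. Then

d(U*U) = dL* + dL + dL*dL,

and the quantum Ito rule dA dA* = dt leaves only (−iS*dA)(iS dA*) = S*S dt in dL*dL; the noise terms of dL* + dL cancel, the dt terms give −S*S dt, and so d(U*U) = 0.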
There is a fermionic version of this dilation [143].
Quantum stochastic calculus has become a mature field of mathematics. The
approach of [83], rather than [128], has the disadvantage that the stochastic integrals
are defined as operators only on a dense set. It is not always clear that they have a
unique closed extension. This is overcome in [83, 84] by limiting the class of equations
to those with unitary solutions. Another help in the analysis is by the use of Maassen
kernels [144]. Alternatively, [145] one may give a meaning to these objects as maps
between test-functions and distributions, using white-noise analysis.
One problem with this work, and this includes [123] as well, is that the spectrum
of the noise is white, so that random negative energy is added as well as positive
energy. We saw that positive energy seems to exclude martingales [80, 81]. In fact,
the KMS condition excludes the existence of a conditional expectation except in
trivial cases. It has been remarked that it also excludes the Markov property and

the regression theorem [146]. Lindblad has remarked [147] that for the oscillator, the
KMS condition is not compatible with the axioms of dynamical semigroups. So to
model random external forces in a real system, coupled to a heat-bath, the white noise
sde is an approximation, that might be good if the time-interval is large compared
with the memory time. These ideas are used to describe quantum systems like lasers,
which are subject to external forces; this was the original intention of Senitzky and
Lax. The modern version is described in [148]. Since external forces introduce energy
and entropy into a system, such models have two drawbacks:

1. The first law of thermodynamics is not obeyed.

2. The second law of thermodynamics is not obeyed.

This is the starting point of [149, 150, 9]. One step of the linear dynamics is given
by a bistochastic map ρ 7→ ρT , so that entropy increases. We require that T ∗ maps
any spectral projection of the energy to itself; this will preserve energy. To reduce
the description, we then project the new state ρT onto the information manifold M
defined by the set of slow variables, to get the state ρT Q. To preserve mean energy,
the energy must be a slow variable. The map Q is nonlinear and is interpreted as the
thermalisation of the fast variables. Thus, after the map T , the system itself decides
to find the best estimate ρT Q to ρT within M. The resulting map gives a nonlinear
motion through the manifold, obeying the first and second laws of thermodynamics.
This theory, called statistical dynamics, is still being explored [25, 152, 9].

References
[1] Milligan, Spike. ‘Adolf Hitler: my part in his downfall’, Penguin Books, 1972.

[2] Denker, M., and W. A. Woyczynski, Introductory Statistics and Random
Phenomena, Birkhäuser, 1998.

[3] Tolman, R. C., Principles of Statistical Mechanics, Oxford Univ. Press,


1938.

[4] Kolmogorov, A. N., Grundbegriffe der Wahrscheinlichkeitsrechnung,


Springer-Verlag, Berlin, 1933.

[5] Krylov, N. S., Works on the Foundations of Statistical Physics, Princeton


University Press, 1979.

[6] Ruelle, D., ‘Characteristic Exponents for a viscous fluid...’, Commun. Math. Phys.,
93, 285-300, 1984. Thermodynamic Formalism, Encyc. of Math., 5, Addison-
Wesley, 1978.

[7] Birkhoff, G. D., Univ. Nac. Tucuman Rev., Ser. A5, 147-, 1946.

[8] Ando, T. Linear Algebra and its Appl., 118, 163-248, 1983.
[9] R. F. Streater, Statistical Dynamics, Imperial College Press, 1995.
[10] Ingarden, R. S., ‘Information theory and variational principles in statistical
physics’, Bull. Acad. Pol. Sci., 11, 541-547, 1963.

[11] Jaynes, E. T., ‘Information theory and statistical mechanics”, Phys. Rev., 106,
620-630 and 108, 171-190, 1957.
[12] Rao, C. R. ‘Information and accuracy attainable in the estimation of statistical
parameters’, Bull. Calcutta Math. Soc., 37, 81-91, 1945.
[13] Fisher, R. A., ‘Theory of statistical estimation’, Proc. Camb. Phil. Soc., 22,
700-725, 1925.

[14] Ingarden, R. S., ‘Information geometry in functional spaces’ Int. J. Engineering


Sci., 19, 1609-1633, 1981.
[15] Amari, S.-I., Differential-Geometric Methods in Statistics, Lecture Notes
in Statistics, 28, Springer-Verlag, 1985.
[16] Pistone, G. and C. Sempi, Annals of Stats., 23, 1543-1561, 1995.

[17] Chentsov, N. N., Statistical Decision and Optimal Inference, Nauka,


Moscow, 1972; Trans. Amer Math Soc, 53, 1982.

[18] Hasegawa, H. and D. Petz, ‘Non-commutative extension of information geometry,


II’, pp 109-118 in Quantum Communication, Computing and Measure-
ment, Eds. Hirota et al., Plenum Press, N. Y., 1997.
[19] Streater, R. F. ‘Information manifold for relatively bounded forms’, to appear in
the commemoration volume for N. N. Bogoliubov, (ed. A. A. Slavnov), Steklov
Institute. Archive: math-ph/9910035.
[20] Streater, R. F., ‘The analytic quantum info manifold’, to appear in Stochastic
processes, physics and geometry: new interplays, eds. F. Geszetesy, S.
Paycha and H. Holden; Can. Math. Soc. Archive math-ph/9910036.
[21] Grasselli, M., and R. F. Streater, ‘Quantum info manifold for epsilon-bounded
forms’, Archive math-ph/9910031.

[22] Bachelier, L., ‘Théorie de la spéculation’, Ann. Sci. École Norm. Sup., 17, 21-86,
1900. Reprint edited by Jacques Gabay, Gauthier-Villars, Paris, 1995.
[23] Einstein, A., Annalen der Physik, 17, 549-560, 1905.
[24] Chandrasekhar, S., M. Kac and R. Smoluchowski, Marian Smoluchowski,
Polish Scientific Publ., Warsaw, 1986.

[25] Streater, R. F. Jour. Stat. Phys., 88, 447-469, 1997. Rep. on Math. Phys., 40,
557-564, 1997.

[26] H. Lebesgue, ‘Leçons sur l’intégration et la recherche des fonctions primitives’,
Gauthier-Villars, Paris, 1904.

[27] G. Vitali, Sul problema della misura dei gruppi di punti di una retta, Bologna,
1905.

[28] F. Hausdorff, Grundzüge der Mengenlehre, W. de Gruyter, Leipzig, 1914.


Reprint, Chelsea, N. Y. 1949, 1965.

[29] Williams, D., Probability with Martingales, Cambridge Univ. Press, 1991.

[30] Wiener, N., J. Maths. and Phys., 2, 132, 1923.

[31] Segal, I. E., ‘Tensor algebras over Hilbert space I’, Trans. Amer. Math. Soc., 81,
106-143, 1956.

[32] Wiener, N., ‘The homogeneous chaos’, Amer. J. Math., 60, 897-936, 1938.

[33] Wick, G. C., ‘Evaluation of the collision matrix’, Phys. Rev., 80, 268-, 1950.

[34] Einstein, A., B. Podolsky, and N. Rosen, Phys. Rev. 47, 777-80, 1935.

[35] Schilpp, P. A., (ed) Albert Einstein: Philosopher-Scientist, Open Court,


La Salle Ill., 1969. p. 210.

[36] Landau, L. J., ‘On the Violation of Bell’s inequality in quantum theory’, Phys.
Lett., A120, 54-56, 1987.

[37] Kochen, S., and E. P. Specker, Jour. Math. Mech. 17, 59-67, 1967.

[38] Garden, R. Modern Logic and Quantum Mechanics, Adam Hilger, Bristol,
1984.

[39] Birkhoff, G. D., and J. von Neumann, ‘Logic of quantum mechanics’, Ann. of
Math., 37, 823-843, 1936.

[40] Jauch, J. M., Foundations of Quantum Mechanics, Addison Wesley, 1968.

[41] Nelson, E., Dynamical Theories of Brownian Motion, Princeton Univ.


Press, 1967.

[42] Emch, G. G., Algebraic Methods in Statistical Mechanics and Quantum


Field Theory, Wiley, 1972.

[43] Haag, R., Local Quantum Physics, 2nd ed., Springer-Verlag, 1996

[44] Horuzhy, S. S., Introduction to Algebraic Quantum Field Theory, Kluwer
Academic, 1990.

[45] Holevo, A. S. Probabilistic and Statistical Aspects of Quantum Theory,


North Holland, 1982.

[46] Ohya, M., and D. Petz, Quantum Entropy and its Use, Springer-Verlag,
Heidelberg, 1993.

[47] Ingarden, R. S., H. Janyszek, A. Kossakowski and T. Kawaguchi, Ibid, 37, 105-
111, 1982.

[48] Petz, D., and G. Toth, Lett. Math. Phys., 27, 205-216, 1993.

[49] Gross, L., ‘Abstract Wiener spaces’, Proc. 5th Berkeley Symp. in math. stat. and
prob. theory, 31-42, 1965-66.

[50] Schwartz, L., Radon Measures on Arbitrary Topological Spaces and


Cylindrical Measures, Oxford Univ. Press, 1973.

[51] Grimmett, G. R., and D. R. Stirzaker, Probability and Random Processes,


Oxford, 1982.

[52] Varadhan, S. R. S., Large Deviations and Applications, SIAM, Philadelphia,


1984.

[53] Donsker, M. D., and S. R. S. Varadhan, Phys. Rep., 3, 235-237, 1981.

[54] Lewis, J. T. ‘Large deviation principle in statistical mechanics’. pp 141-155, LNM


1325, Ed. A. Truman and I. M. Davies, Springer-Verlag, 1988.

[55] Stroock, D. W., An Introduction to the Theory of Large Deviations,


Springer-Verlag, 1984.

[56] Gelfand, I. M. and N. Ya. Vilenkin, Generalised Functions IV, Academic


Press, 1964.

[57] Stroock, D. and S. R. S. Varadhan, Multidimensional Diffusion Processes,


Springer-Verlag, 1979.

[58] Williams, D., ‘To begin at the beginning’, 1-55 in Stochastic Integrals, Ed. D.
Williams, LNM 851, Springer-Verlag, 1981.

[59] McShane, E. J., Stochastic Calculus and Stochastic Models, Academic


Press, N. Y., 1974.

[60] Barnett, C., R. F. Streater and I. F. Wilde, ‘Quantum stochastic integrals under
standing hypotheses’, J. of Math. Anal. and Applications, 127, 181-192, 1987.

[61] Kac, M., ‘Some connections between probability theory and differential and in-
tegral equations’, Proc. 2nd Berkeley Symp. J. Neyman (ed.), University of Calif
Press, Berkeley, 1951.

[62] Feynman, R. P., Rev. Mod. Phys., 20, 267-, 1948.

[63] Nelson, E., ‘Feynman Integrals and the Schrödinger Equation’, Jour. Math.
Phys., 5, 332-343, 1964.

[64] Simon, B., Functional Integration and Quantum Physics, Academic Press,
N. Y., 1979.

[65] Hudson, R. L., P. D. F. Ion and K. R. Parthasarathy, ‘Time-orthogonal unitary


dilations...’Commun. Math. Phys., 83, 261-280, 1982.

[66] Glimm, J., and A. M. Jaffe, Quantum Physics, Springer-Verlag, Second Ed.,
1987.

[67] Glimm, J., and A. M. Jaffe, ‘Probability applied to physics’, Univ. Arkansas
Lect. Notes in Math., 2, Fayetteville, 1978.

[68] Fröhlich, J., ‘Schwinger functions and their generating functionals, I’, Helv. Phys.
Acta, 47, 265-306, 1974. II, ‘Markovian and generalized path space measures on
S’, Adv. in Math., 23, 119-180, 1977.

[69] Guerra, F., L. Rosen and B. Simon, ‘The P (ϕ)2 Euclidean quantum field theory
as classical statistical mechanics’, Annals of Math., 101, 111-259, 1975.

[70] Dyson, F. J., ‘The S-matrix in quantum electrodynamics’, Phys. Rev., 75, 1736-
1755, 1949.

[71] Schwinger, J. Phys. Rev, 82, 664-, 1951.

[72] Jost, R. General Theory of Quantized Fields, Amer. Math. Soc., Providence,
1965.

[73] Streater, R. F., and A. S. Wightman, PCT, Spin and Statistics, and All
That, Benjamin-Cummings, 1964.

[74] Symanzik, K., ‘Euclidean quantum field theory, pp 152-226 in Local Quantum
Theory, R. Jost (ed.), Academic Press, 1969.

[75] Minlos, R. A. Trudy Mosk. Mat. Obs. 8, 497-518, 1959.

[76] Wong, E., Ann. Math. Stat., 40, 1625-1634, 1969.

[77] Nelson, E., ‘The Free Markov Field’, J. Functl. Anal., 12, 211-227, 1973.

[78] Simon, B., The P (ϕ)2 Euclidean (Quantum) Field Theory, Princeton Univ.
Press, 1974.

[79] Gross, L., ‘The free Euclidean Proca and electromagnetic fields’, pp 69-82 in:
Functional Integration and its Applications, ed. A. M. Arthurs, Oxford,
1975.

[80] Senitzky, I. R., ‘Dissipation in Quantum Mechanics: The harmonic oscillator


I,II’. Phys. Rev., 119, 670-679, 1960; ibid, 124, 642-648, 1961.

[81] Streater, R. F., ‘Damped oscillator with quantum noise’, J. Phys., A15, 1477-
1486, 1982.

[82] Lax, M., Phys. Rev., 145, 111-129, 1965.

[83] Hudson, R. L., and K. R. Parthasarathy, ‘Quantum Ito’s formula and stochastic
evolutions’, Commun. Math. Phys., 93, 301-303, 1984.

[84] Parthasarathy, K. An Introduction to Quantum Stochastic Calculus,


Birkhäuser, Basel, 1992.

[85] Ford, G. W., M. Kac and P. Mazur, J. Math. Phys., 6, 505-515, 1965.

[86] Lewis, J. T., and L. C. Thomas, ‘How to make a heat bath’, 97-123 in Functional
Integration and its Applications, Ed. A. M. Arthurs, Clarendon Press, Ox-
ford, 1975.

[87] Hasegawa, H., J. R. Klauder and M. Lakshmanan, J. of Phys., A14, L123-L128,


1985.

[88] Accardi, L., A. Frigerio and J. T. Lewis, ‘Quantum stochastic processes’, Proc.
Res. Inst. Math. Sci., 18, 97-133, 1982.

[89] Ford, G. W., ‘Temperature-dependent Lamb shift of a quantum oscillator’, pp


202-206 in Quantum Probability and Applications II, LNM 1136, Eds. L.
Accardi and W. von Waldenfels, Springer-Verlag, 1985.

[90] Araki, H., and E. J. Woods, Proc. R. I. M. S., Kyoto, 2, No.2, 1966.

[91] Streater, R. F. Nuovo Cimento, 53, 487-495, 1968.

[92] Streater, R. F. Nuovo Cimento 53, 487-495, 1968.

[93] Guichardet, A., Commun. Math. Phys., 5, 262-, 1967.

[94] Dubin, D. A. and R. F. Streater, Nuovo Cimento 50, 154-157, 1967.

[95] Streater, R. F., ‘Current commutation relations, continuous tensor products, and
infinitely divisible group representations’, pp 247-263 in Local Quantum The-
ory, Ed. R. Jost, Academic Press, 1969.

[96] Araki, H. ‘Factorizable representations of current algebras’ Proc. R. I. M. S.,


Kyoto, 5, 361-422, 1970/71.

[97] Parthasarathy, K., and Schmidt, K., ‘Positive definite kernels, continuous tensor
products, and central limit theorems of probability theory’, Lecture Notes
in Maths., 272, 1972.

[98] Wigner, E. P., Group Theory and Applications to Atomic Spectroscopy,


First Ed. (German), Vieweg, Braunschweig, 1931.

[99] Gelfand, I. M., M. I. Graev and A. M. Vershik, ‘Representations of the group


SL(2, R), where R is a ring of functions.’ Uspehi-Mat-Nauk 28, 83-128, 1973.

[100] Guichardet, A., Symmetric Hilbert Space and Related Topics, LNM 261,
Springer-Verlag, 1972.

[101] Erven, J., and B.-J. Falkowski, Low Order Cohomology and Applications,
Lecture Notes in Mathematics, 877, Springer-Verlag, 1981.

[102] Streater, R. F. ‘Infinitely divisible representations of Lie algebras’, Wahrschein.


ver. Geb., 19, 67-80, 1971.

[103] Mathon, D., ‘Infinitely divisible projective representations of the Lie algebras’,
Proc. Camb. Phil. Soc., 72, 357-368, 1972.

[104] Mathon, D. and R. F. Streater, ‘Infinitely divisible representations of Clifford


algebras’, Zeits Wahr. verw. Geb., 20, 308-316, 1971.

[105] Cushen, C. D., and R. L. Hudson, ‘A quantum mechanical central limit theo-
rem’, J. Appl. Prob., 8, 454-469, 1971.

[106] Hudson, R. L., ‘A quantum-mechanical central limit theorem for anti-


commuting variables’, J. Appl. Prob., 10, 502-509, 1973.

[107] Hegerfeldt, G. C., ‘Prime field decompositions and infinitely divisible states on
Borchers’ tensor algebra’, Commun. Math. Phys., 45, 137-151, 1975.

[108] Schürmann, M., ‘Positive and conditionally positive linear functionals on coal-
gebras’, 475-492 in: Quantum Probability II, Eds. L. Accardi and W. von Walden-
fels, LNM 1136, 1985. ‘Infinitely divisible states on cocommutative bialgebras’,
Proc. probability Measures on Groups IX, Oberwohlfach, 1988. White Noise on
Bialgebras, Lecture Notes in Math., 1544, Springer-Verlag, 1993.

[109] Araki, H., Ph. D. Thesis, Princeton, 1960 (unpublished).

[110] Klauder, J. R., ‘Fock space revisited’, Jour. Math. Phys., 11, 609-630, 1970;
‘Ultralocal scalar field models’, Commun. Math Phys., 18, 307-318, 1970.
[111] Goldin, G. A., R. Menikoff and D. H. Sharp, J. Math. Phys., 21, 650-, 1980.
[112] Meyer, P.-A., Quantum Probability for Probabilists, Lecture Notes in Mathematics 1538, Springer-Verlag, 1993.

[113] Voiculescu, D., 'Addition of certain non-commuting random variables', J. Functl. Anal., 66, 323-346, 1986.

[114] Albeverio, S., and R. Høegh-Krohn, 'Some Markov processes and Markov fields', pp 497-540 in Stochastic Integrals, Ed. D. Williams, LNM 851, Springer-Verlag, 1981.
[115] Davies, E. B., Quantum Theory of Open Systems, Academic Press, 1976.

[116] Kraus, K., ‘General State Changes in Quantum Theory’, Ann. Phys., 64, 311-
335, 1971.

[117] Gorini, V., A. Kossakowski and E. C. G. Sudarshan, 'Completely positive dynamical semigroups of N-level systems', J. Math. Phys., 17, 821-825, 1976.
[118] Lindblad, G., Commun. Math. Phys., 48, 119-130, 1976.
[119] Landau, L. J., and R. F. Streater, 'On Birkhoff's theorem...', Linear Algebra and its Applications, 193, 107-127, 1993.

[120] Fannes, M., and J. Quaegebeur, 'Infinite divisibility and central limit theorems for completely positive mappings', in Quantum Probability and Applications II, LNM 1136, Eds. L. Accardi and W. von Waldenfels, Springer-Verlag, 1985.
[121] Stinespring, W. F., 'Positive functions on C∗-algebras', Proc. Amer. Math. Soc., 6, 211-216, 1955.
[122] Umegaki, H., ‘Conditional expectation in an operator algebra’, Tohoku Math.
J., 8, 86-100, 1956.

[123] Barnett, C., 'Supermartingales on semi-finite von Neumann algebras', J. Lond. Math. Soc., 24, 175-181, 1981.

[124] Cuculescu, I., 'Martingales on von Neumann algebras', J. Multivariate Analysis, 1, 17-27, 1971.
[125] Hudson, R. L. and R. F. Streater, ‘Examples of Quantum Martingales’, Phys.
Lett., 85A, 64-67, 1981.
[126] Streater, R. F., and A. Wulfsohn, ‘Continuous tensor products of Hilbert spaces
and generalised random fields’, Nuovo Cimento, 57, 330-339, 1968.

[127] Hudson, R. L. and R. F. Streater, 'Noncommutative martingales and stochastic integrals in Fock space', pp 216-222 in Stochastic Processes in Quantum Theory and Statistical Physics, Lecture Notes in Physics, 173, Eds. S. Albeverio, Ph. Combe and M. Sirugue-Collin, Springer-Verlag, 1981.

[128] Barnett, C., R. F. Streater and I. F. Wilde, ‘The Ito-Clifford integral’, J. Functl.
Anal., 48, 172-212, 1982.

[129] Segal, I. E., 'A non-commutative extension of abstract integration', Ann. of Math., 57, 401-457 and 595-596, 1953.

[130] Segal, I. E., ‘Tensor algebras over Hilbert space II’, Annals of Math., 63, 160-
175, 1956.

[131] Kunze, R. A., and I. E. Segal, Integrals and Operators, Springer-Verlag, 1978.

[132] Barnett, C., R. F. Streater and I. F. Wilde, 'The Ito-Clifford integral II', J. Lond. Math. Soc., 27, 373-384, 1983.

[133] Barnett, C., R. F. Streater and I. F. Wilde, 'The Ito-Clifford integral IV', J. Operator Theory, 11, 255-271, 1984.

[134] Pisier, G., and Q. Xu, 'Non-commutative martingale inequalities', Commun. Math. Phys., 189, 667-698, 1997.

[135] Barnett, C., R. F. Streater and I. F. Wilde, 'Quasi-free quantum stochastic integrals for the CAR and CCR', J. Funct. Anal., 52, 19-47, 1983.

[136] Lindsay, M. and I. F. Wilde, ‘On non-Fock boson stochastic integrals’, J. Functl.
Anal., 65, 76-82, 1986.

[137] Meyer, P.-A., Private communication, Oberwolfach, 1985.

[138] Evans, M. P., and R. L. Hudson, 'Multidimensional quantum diffusions', LNM 1303, 69-88, Springer-Verlag, 1988.

[139] Vincent-Smith, G. F., 'Unitary quantum stochastic evolutions', Proc. London Math. Soc., 63, 401-425, 1991.

[140] Belavkin, V. P., 'Quantum stochastic positive evolutions', Commun. Math. Phys., 184, 533-566, 1997; 'On stochastic generators of completely positive cocycles', Russian J. of Math. Phys., 3, 523-528, 1995.

[141] Frigerio, A., 'Construction of stationary quantum Markov processes', pp 207-222 in Quantum Probability and Applications II, LNM 1136, Eds. L. Accardi and W. von Waldenfels, Springer-Verlag, 1985.

[142] Hudson, R. L. and R. F. Streater, ‘Ito’s formula is the chain rule with Wick
ordering,’ Phys. Lett., 86A, 277-279, 1981.

[143] Applebaum, D. and R. L. Hudson, 'Fermion Ito's formula and stochastic evolutions', Commun. Math. Phys., 96, 473-496, 1984.

[144] Maassen, H., 'Quantum Markov processes on Fock space described by integral kernels', pp 361-374 in Quantum Probability and Applications II, Lecture Notes in Mathematics 1136, Eds. L. Accardi and W. von Waldenfels, Springer-Verlag, 1985.

[145] Obata, N., 'Wick product of white noise operators and quantum stochastic differential equations', J. Math. Soc. Japan, 51, 613-641, 1999.

[146] Talkner, P. ‘Failure of the quantum regression theorem’, Ann. of Phys., 167,
390-436, 1986.

[147] Lindblad, G., 'Brownian motion of quantum harmonic oscillators', J. Math. Phys., 39, 2763-2780, 1998.

[148] Alicki, R. and K. Lendi, Quantum Dynamical Semigroups and Applications, Lecture Notes in Physics, 286, Springer-Verlag, 1987.

[149] Alicki, R., and J. Messer, ‘Nonlinear quantum dynamical semigroups for many-
body open systems’, J. Stat. Phys., 32, 299-312, 1983.

[150] Balian, R., Y. Alhassid and H. Reinhardt, 'Dissipation in many-body systems: a geometric approach based on information theory', Physics Reports, 131, 1-146, 1986.

[151] Ingarden, R. S., Y. Sato, K. Sagura and T. Kawaguchi, Tensor, 33, 347-, 1979.

[152] Streater, R. F., 'Onsager symmetry in statistical dynamics', Open Systems and Information Dynamics, 6, 87-100, 1999; 'The Soret and Dufour Effects in Statistical Dynamics', Proc. Roy. Soc., A 456, 205-221, 2000, arXiv:math-ph/9910043.

