Non-Equilibrium Dynamics, Thermalization and Entropy Production
E-mail: [email protected]
STATPHYS-KOLKATA VII Conference Proceedings
Abstract. This paper addresses fundamental aspects of statistical mechanics such as the
motivation of a classical state space with spontaneous transitions, the meaning of non-
equilibrium in the context of thermalization, and the justification of these concepts from the
quantum-mechanical point of view. After an introductory part we focus on the problem of
entropy production in non-equilibrium systems. In particular, the generally accepted formula
for entropy production in the environment is analyzed from a critical perspective. It is shown
that this formula is only valid in the limit of separated time scales of the system’s and the
environmental degrees of freedom. Finally, we present an alternative simple proof of the
fluctuation theorem.
1. Introduction
Classical non-relativistic statistical physics is based on a certain set of postulates. The starting point is a physical entity, called the system, which is characterized by a set Ωsys of possible configurations.
Although this configuration space could be continuous, it is useful to think of it as a countable set
of discrete microstates s ∈ Ωsys . Being classical means that the actual configuration of the system
is a matter of objective reality, i.e. at any time t the system is in a well-defined configuration
s(t). The system is assumed to evolve in time by spontaneous transitions s → s′ which
occur randomly with certain transition rates wss′ > 0. This commonly accepted framework
is believed to subsume the emerging classical behavior of a complex quantum system subjected
to decoherence.
Although in a classical system the trajectory of microscopic configurations is in principle
measurable, an external observer is usually not able to access this information in detail. The
observer would instead express his/her partial knowledge in terms of the probability Ps (t) to
find the system at time t in the configuration s. In contrast to the unpredictable trajectory s(t),
the probability distribution Ps (t) evolves deterministically according to the master equation
$$\dot{P}_s(t) \;=\; \sum_{s' \in \Omega_{\rm sys}} \bigl( P_{s'}(t)\, w_{s's} - P_s(t)\, w_{ss'} \bigr).$$
Figure 1. Cartoon of a complex statistical system as a space Ωsys of configurations (red dots). At any time
the system is in one of the configurations and evolves by spontaneous transitions (indicated by arrows) selected
randomly according to specific transition rates.
In summary, the basic assumptions of this classical framework can be stated as follows:
(i) The system is characterized by a certain set Ωsys of configurations s ∈ Ωsys , also called
microstates. Usually the configuration space is implicitly specified by the definition of
a model. For example, in a reaction-diffusion model this space is the set of all possible
particle configurations while in a growth model the microstates are identified with the
possible configurations of an interface.
(ii) The states are classical, i.e., at any time the system is in one particular configuration s(t).
(iii) The system evolves randomly by instantaneous transitions s → s′ occurring spontaneously
with certain transition rates wss′ ≥ 0. In numerical simulations, this dynamics is
approximated by random-sequential update algorithms.
Starting with an initial configuration s0 the system evolves randomly through an unpredictable
sequence of configurations s0 → s1 → s2 → . . . by instantaneous transitions. These transitions
take place at certain points of time t1 , t2 , . . . which follow a Poisson process, like shot noise.
Such a sequence of transitions is called a stochastic path.
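As a minimal illustration, such a stochastic path can be generated with the standard continuous-time scheme: draw an exponentially distributed waiting time from the total escape rate of the current configuration, then select the target configuration with probability proportional to its rate. The following Python sketch uses a made-up three-state system; all rates are arbitrary choices for the example.

```python
import numpy as np

# Hypothetical example: three configurations 0, 1, 2 with made-up rates w[s, s'].
w = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 2.0],
              [1.0, 0.3, 0.0]])

def stochastic_path(w, s0, t_max, seed=0):
    """Generate one stochastic path s0 -> s1 -> s2 -> ... up to time t_max."""
    rng = np.random.default_rng(seed)
    s, t = s0, 0.0
    path = [(t, s)]
    while True:
        rates = w[s]                          # rates w_{s s'} out of the current configuration
        total = rates.sum()                   # total escape rate
        t += rng.exponential(1.0 / total)     # exponentially distributed waiting time
        if t > t_max:
            return path
        s = int(rng.choice(len(rates), p=rates / total))  # next configuration, chosen prop. to its rate
        path.append((t, s))

print(stochastic_path(w, s0=0, t_max=5.0))
```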
Although the actual stochastic path of the system is unpredictable, the probability Ps (t) to
find the system in configuration s at time t evolves deterministically according to the master
equation
$$\frac{d}{dt} P_s(t) \;=\; \sum_{s' \in \Omega_{\rm sys}} \bigl[ J_{s's}(t) - J_{ss'}(t) \bigr], \qquad (1)$$
where
$$J_{ss'}(t) = P_s(t)\, w_{ss'} \qquad (2)$$
is the probability current flowing from configuration s to configuration s′ . The system is said
to be stationary if the probability distribution Ps (t) has become time-independent, i.e. if the
gain and loss terms in Eq. (1) cancel for every configuration.
For simplicity we will assume that this stationary state is unique and independent of the initial
state. Note that systems with a finite configuration space always relax into a stationary state
while for systems with an infinite or continuous configuration space a stationary state may not
exist. Moreover, we will assume that the dynamics of the system under consideration is ergodic,
i.e., the network of transitions is connected so that each configuration can be reached.
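For a configuration space small enough to be enumerated, both statements can be checked directly: the master equation can be integrated numerically, and the stationary distribution is obtained as the null eigenvector of the rate matrix. A minimal sketch, reusing the made-up rates of the previous example:

```python
import numpy as np

w = np.array([[0.0, 1.0, 0.5],       # same made-up rates as in the previous sketch
              [0.2, 0.0, 2.0],
              [1.0, 0.3, 0.0]])

# Generator matrix L with L[s', s] = w_{s s'} for s != s' and L[s, s] = -sum_{s'} w_{s s'},
# so that dP/dt = L @ P reproduces the master equation (1).
L = w.T - np.diag(w.sum(axis=1))

P = np.array([1.0, 0.0, 0.0])        # start with certainty in configuration 0
dt = 0.01
for _ in range(100_000):             # simple Euler integration of the master equation
    P = P + dt * (L @ P)

# Stationary state: normalized eigenvector of L with eigenvalue closest to zero.
eigval, eigvec = np.linalg.eig(L)
P_stat = np.real(eigvec[:, np.argmin(np.abs(eigval))])
P_stat /= P_stat.sum()
print(P, P_stat)                     # the relaxed distribution agrees with the stationary one
```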
Detailed balance
A system is said to thermalize if it evolves into a stationary state which obeys detailed
balance. This means that the probability currents between all pairs of configurations cancel, i.e.
$$J_{ss'} = J_{s's} \qquad \forall s, s' \qquad (3)$$
or equivalently
$$\frac{w_{ss'}}{w_{s's}} \;=\; \frac{P_{s'}}{P_s} \qquad \forall s, s', \qquad (4)$$
where Ps is the stationary probability distribution. It is worth noting that detailed balance
can be defined in an alternative way without knowing the stationary probability distribution.
To see this let us consider a closed loop of three transitions s1 → s2 → s3 → s1 . For these
transitions Eq. (4) provides a system of three equations. By multiplying all equations one can
eliminate the probabilities Psi , arriving at the condition ws1 s2 ws2 s3 ws3 s1 = ws1 s3 ws3 s2 ws2 s1 . A
similar result is obtained for any closed loop of transitions, hence the condition of detailed
balance can be recast as
$$\prod_i w_{s_i s_{i+1}} \;=\; \prod_i w_{s_{i+1} s_i} \qquad (5)$$
for all closed loops in Ωsys . A system with this property is said to have a balanced dynamics.
Note that in a system with balanced dynamics we may rescale a pair of opposite rates
by wss′ → Λwss′ and ws′ s → Λws′ s without breaking the detailed balance condition. This
intervention changes the dynamics of the model and therewith its relaxation, but the stationary
state (if existent and unique) remains the same. This statement holds even if pairs of opposite
rates are set to zero as long as this manipulation does not break ergodicity.
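Condition (5) can be tested numerically without knowing the stationary distribution. The sketch below uses hypothetical rates constructed from an arbitrary distribution; for a fully connected rate matrix it suffices to check all three-cycles. It also illustrates that rescaling a pair of opposite rates preserves the property, while rescaling a single rate destroys it.

```python
import itertools
import numpy as np

def is_balanced(w, tol=1e-12):
    """Kolmogorov criterion (5): compare the two orientations of every 3-cycle."""
    n = len(w)
    for i, j, k in itertools.combinations(range(n), 3):
        forward = w[i, j] * w[j, k] * w[k, i]
        backward = w[i, k] * w[k, j] * w[j, i]
        if abs(forward - backward) > tol * max(forward, backward, 1.0):
            return False
    return True

# Hypothetical balanced rates, constructed as w_{ss'} = P_{s'} for P = (0.5, 0.3, 0.2).
P = np.array([0.5, 0.3, 0.2])
w_eq = np.tile(P, (3, 1))
np.fill_diagonal(w_eq, 0.0)
print(is_balanced(w_eq))          # True
w_eq[0, 1] *= 3.0; w_eq[1, 0] *= 3.0
print(is_balanced(w_eq))          # still True: a pair of opposite rates was rescaled
w_eq[0, 1] *= 3.0
print(is_balanced(w_eq))          # False: detailed balance is now broken
```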
Entropy
Entropy is probably the most fundamental concept of statistical physics. From the information-
theoretic point of view, the entropy of a system is defined as the amount of information (measured
in bits) which is necessary to describe the configuration of the system. Since the description of
a highly ordered configuration requires less information than a disordered one, entropy can be
viewed as a measure of disorder.
The amount of information which is necessary to describe a configuration depends on the
already existing partial knowledge of the observer at a given time. For example, deterministic
systems with a given initial configuration have no entropy because the observer can compute
the entire trajectory in advance, having complete knowledge of the configuration as a function
of time even without measuring it. Contrarily, in stochastic systems the observer has only a
partial knowledge about the system expressed in terms of the probability distribution Ps (t). In
this situation the amount of information needed to specify a particular configuration s ∈ Ωsys is
− log2 Ps (t) bits, meaning that rare configurations have more entropy than frequent ones.
Different scientific communities define entropy with different prefactors. In information
science one uses the logarithm to base 2 so that entropy is directly measured in bits.
Mathematicians instead prefer a natural logarithm, while physicists are accustomed to putting a
historically motivated prefactor kB in front, giving entropy the unit of an energy. In what follows
we set kB = 1, defining the entropy of an individual configuration s as
$$S_{\rm sys}(t, s) = -\ln P_s(t). \qquad (7)$$
Since this entropy depends on the actual configuration s, it will fluctuate along the stochastic
path. However, its expectation value, expressing the observer’s average lack of information,
evolves deterministically and is given by
$$S_{\rm sys}(t) = \langle S_{\rm sys}(t, s)\rangle_s = -\sum_{s \in \Omega_{\rm sys}} P_s(t) \ln P_s(t) \,, \qquad (8)$$
where ⟨. . .⟩ denotes the ensemble average over independent realizations of randomness. Apart
from the prefactor, this is just the usual definition of Shannon’s entropy [1, 2].
Up to this point entropy is just an information-theoretic concept for the description of
configurations. The point where entropy takes on a physical meaning is the equal a priori
probability postulate, stating that an isolated system thermalizes in such a way that the entropy takes the
largest possible value Ssys = ln |Ωsys |. As is well-known, all other thermodynamic ensembles can
be derived from this postulate.
The numerical determination of entropies is a nontrivial task because of the highly non-linear
influence of the logarithm. To measure an entropy numerically, one first has to estimate the
probabilities Ps (t). The resulting symmetrically distributed sampling errors in finite data sets
are amplified by the logarithm, leading to a considerable systematic bias in entropy estimates.
Various methods have been suggested to reduce this bias at the expense of the statistical error,
see e.g. [3, 4].
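As an illustration of this bias, the following sketch compares the naive plug-in estimator with the simple Miller-Madow correction on synthetic data; the underlying distribution, the sample sizes and the number of repetitions are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
P_true = np.array([0.5, 0.25, 0.125, 0.125])      # made-up distribution
S_true = -np.sum(P_true * np.log(P_true))         # exact entropy (k_B = 1)

def plugin_entropy(counts):
    """Naive estimator: plug the estimated probabilities into -sum p ln p."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def miller_madow(counts):
    """Miller-Madow correction: add (K - 1)/(2N) for K observed states and N samples."""
    k = np.count_nonzero(counts)
    return plugin_entropy(counts) + (k - 1) / (2 * counts.sum())

for n_samples in (10, 100, 1000):
    naive, corrected = [], []
    for _ in range(2000):                         # many independent finite data sets
        counts = rng.multinomial(n_samples, P_true)
        naive.append(plugin_entropy(counts))
        corrected.append(miller_madow(counts))
    print(n_samples, S_true, np.mean(naive), np.mean(corrected))
```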
Subsystems
In most physical situations the system under consideration is not isolated, instead it interacts
with the environment. In this case the usual approach of statistical physics is to consider the
system combined with the environment as a composite system. This superordinate total system
is then assumed to be isolated, following the same rules as outlined above. To distinguish the
total system from its parts, we will use the suffix ‘tot’ for the total system, while ‘sys’ and ‘env’
refer to the embedded subsystem and its environment, respectively.
The total system is characterized by a certain space Ωtot of classical configurations c ∈ Ωtot
(not to be confused with system configurations s ∈ Ωsys ). The number of these configurations
may be enormous and they are usually not accessible in experiments, but in principle there
should be a corresponding probability distribution Pc (t) evolving by a master equation
$$\frac{d}{dt} P_c(t) \;=\; \sum_{c' \in \Omega_{\rm tot}} \bigl[ J_{c'c}(t) - J_{cc'}(t) \bigr], \qquad J_{cc'}(t) = P_c(t)\, w_{cc'}. \qquad (9)$$
Let us now consider an embedded subsystem. Obviously, for every classical configuration
c ∈ Ωtot of the total system we will find the subsystem in a well-defined unique configuration
s ∈ Ωsys . Conversely, for a given configuration of the subsystem s ∈ Ωsys the environment (and
therewith the total system) can be in many different states. This relationship can be expressed
in terms of a surjective map π : Ωtot → Ωsys which projects every configuration c of the total
system onto the corresponding configuration s of the subsystem, as sketched schematically in
Fig. 2.
The projection π divides the space Ωtot into sectors π^{-1}(s) ⊂ Ωtot which consist of all
configurations which are mapped onto the same s. Therefore, the probability to find the
subsystem in configuration s ∈ Ωsys is the sum over all probabilities in the corresponding sector,
i.e.
$$P_s(t) \;=\; \sum_{c(s)} P_c(t) \,, \qquad (10)$$
where the sum runs over all configurations c ∈ Ωtot with π(c) = s. Likewise, the projected
probability current Jss′ in the subsystem flowing from configuration s to configuration s′ is the
sum of all corresponding probability currents in the total system:
$$J_{ss'}(t) \;=\; \sum_{c(s)} \sum_{c'(s')} J_{cc'}(t) \;=\; \sum_{c(s)} \sum_{c'(s')} P_c(t)\, w_{cc'} \,. \qquad (11)$$
In contrast to the transition rates of the total system, which are usually assumed to be constant,
the effective transition rates in the subsystem may depend on time,
$$w_{ss'}(t) \;=\; \frac{J_{ss'}(t)}{P_s(t)} \;=\; \frac{\sum_{c(s)} P_c(t) \sum_{c'(s')} w_{cc'}}{\sum_{c(s)} P_c(t)} \,. \qquad (12)$$
With these time-dependent rates the subsystem evolves according to the master equation
$$\frac{d}{dt} P_s(t) \;=\; \sum_{s' \in \Omega_{\rm sys}} \bigl[ J_{s's}(t) - J_{ss'}(t) \bigr], \qquad J_{ss'}(t) = P_s(t)\, w_{ss'}(t) \,. \qquad (13)$$
From the subsystem’s point of view this time dependence reflects the unknown dynamics in the
environment. Moreover, ergodicity plays a subtle role: Even if the dynamics of the total system
is ergodic, the dynamics within the sectors π^{-1}(s) is generally non-ergodic and may decompose
into several ergodic subsectors. As we will see in Sect. 4, this allows the environmental entropy
to increase even if the subsystem is stationary.
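Eqs. (10)–(13) are straightforward to evaluate numerically whenever the total system is small enough to be enumerated. The following sketch assumes a made-up total system of six configurations projected onto two subsystem states and computes, at one instant of time, the projected distribution, the projected currents and the effective rates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical total system: 6 configurations with random rates, projected onto 2 subsystem states.
n_tot, n_sys = 6, 2
w_tot = rng.uniform(0.1, 1.0, size=(n_tot, n_tot))
np.fill_diagonal(w_tot, 0.0)
pi = np.array([0, 0, 0, 1, 1, 1])            # projection pi(c): subsystem configuration of c
P_c = rng.dirichlet(np.ones(n_tot))          # some instantaneous distribution P_c(t)

P_s = np.zeros(n_sys)                        # Eq. (10): sum of P_c over each sector
J = np.zeros((n_sys, n_sys))                 # Eq. (11): projected probability currents
for c in range(n_tot):
    P_s[pi[c]] += P_c[c]
    for c2 in range(n_tot):
        J[pi[c], pi[c2]] += P_c[c] * w_tot[c, c2]
np.fill_diagonal(J, 0.0)

w_eff = J / P_s[:, None]                     # Eq. (12): effective rates w_{ss'}(t) = J_{ss'}(t)/P_s(t)
print(P_s, J, w_eff, sep="\n")
```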
Since in Nature isolated systems are expected to thermalize, we can conclude that conversely
a non-thermalizing system must always interact with the environment. This means that an
external drive is needed to prevent the system from thermalizing, maintaining its non-vanishing
probability currents. On the other hand, the total system composed of laboratory system and
environment should thermalize. This raises the question of how a thermalizing total system can
contain a non-thermalizing subsystem.
The answer to this question is given in Eq. (12). Even if the total system was predetermined
to thermalize, meaning that the rates wcc′ obey Eq. (5), one can easily show that the effective
rates wss′ of transitions in the subsystem are generally not balanced. Therefore, a thermalizing
‘Universe’ may in fact contain subsystems out of thermal equilibrium. The apparent contradiction
is resolved by the observation that the projected rates wss′ (t) depend on time: Although these
rates may initially violate detailed balance, they will slowly change as the ‘Universe’ continues to
thermalize, eventually converging to values where they do obey detailed balance. This process
reflects our everyday experience that any non-thermalizing system will eventually thermalize
when the external drive runs out of power.
Figure 3. Schematic depiction of the two objections against irreversibility in systems based on Newtonian
mechanics. According to Loschmidt’s time reversal objection, for every process that increases the entropy of a
system, say the spreading of particles in a container after removal of a wall, there exists a corresponding time-
reversed process, which would reduce the entropy of the system. According to Poincaré’s recurrence objection,
whenever the entropy of a system increased, the recurrent nature of Newtonian evolution implies that the entropy
has to decrease after a sufficiently long period of time. The two objections illustrate that the Second Law in its
usual form is incompatible with Newtonian mechanics. Equivalent objections apply to quantum mechanics which
is also time reversal invariant and recurrent.
in the subsystem. The result can be expressed as a rigorous inequality bounding the off-diagonal
entries in the density matrix of the subsystem. Its proof does not rely on any special properties
of the interaction, it rather follows directly from the full unitary evolution of the total system
without any approximations. The inequality is meaningful only if the interaction with the
environment is weak, meaning that it must be smaller (or at least not significantly larger) than
the gaps between the energy levels of the system.
Can this result explain why the energy eigenbasis is the correct choice for the state space
in the classical description of an atom absorbing and emitting light? Thinking of the atom
plus the surrounding light field as a quantum system subjected to decoherence by interaction
with other atoms of the gas, it does. In fact, the energy gaps of a few eV are several orders of
magnitude larger than the thermal energy of the atoms (about 25 meV at room temperature),
which sets the relevant energy scale for the coupling to other atoms of the gas. The weak
perturbation caused by thermal scattering in the gas naturally leads to a decoherence into the
energy eigenbasis, meaning that the classical description in terms of a probability Ps (t) to be in
the energy eigenstate s at time t describes the state of the system almost completely.
Thermalization
One of the most important applications of equilibrium statistical physics is to calculate the
properties of systems at a well-defined temperature. The standard assumption going into these
calculations is that the state of such a system is described by a Gibbs state. The Gibbs state
and the canonical ensemble can be derived from the equal a priori probability postulate under
certain assumptions about the density of states of a bath with which the system can exchange
energy. Alternatively it is possible to justify the Gibbs state by using Jaynes’ maximum entropy
principle, showing that the Gibbs state is the state that maximizes the conditional entropy given
a fixed energy expectation value. However, it remains unclear how, and under which conditions,
subsystems of quantum systems actually thermalize, by which, in this section, we mean that they
equilibrate towards a Gibbs state with a well-defined temperature. Note that it is not easy to
relate the notion of thermalization we use in this section to the detailed balance condition used
throughout the rest of this article.
Earlier works attempting to solve this problem [18–20] either rely on certain unproven
hypotheses such as the so-called eigenstate thermalization hypothesis, or they are restricted
to quite special situations such as coupling Hamiltonians of a special form, or they merely prove
typicality arguments instead of dynamical relaxation towards a Gibbs state. Although the results
obtained in these papers are very useful and have significantly improved our understanding of
the process of thermalization, they do not yet draw a complete and coherent picture.
An attempt to settle the question of thermalization will be made in a forthcoming paper [21].
As discussed above we already know conditions under which we can rigorously guarantee
equilibration [12, 15, 16]. What remains to be done is to identify a set of conditions under
which one can guarantee that the equilibrium state of a subsystem is close to a Gibbs state.
By using a novel perturbation theory argument and carefully bounding all the errors in an
approximation similar to that of [20] one can indeed identify such a set of sufficient (and more
or less necessary) conditions, that can be summarized in a non-technical way as follows:
(i) The energy content and the Hilbert space dimension of the bath must be much larger than
the respective quantities of the system.
(ii) The coupling between them must be strong enough, in particular much stronger than the
gaps of the decoupled Hamiltonian. This ensures that the eigenbasis of the full Hamiltonian
is sufficiently entangled (a lack of entanglement provably prevents thermalization [17]).
(iii) At the same time, the coupling must be weak in the sense that it is much smaller than the
energy uncertainty of the initial state. This is a natural counterpart to the weak coupling
assumption known from classical statistical mechanics.
(iv) The energy uncertainty of the initial state must be small compared to the energy content
and at the same time large compared to the level spacing. Moreover, the energy distribution
must satisfy certain technical smoothness conditions.
(v) The spectrum of the bath must be well approximable by an exponential on the scale of
the energy uncertainty and the density of states must grow faster than exponential. This
property of the bath is ultimately the reason for the exponential form of the Gibbs state and
is also required in the classical derivation of the canonical ensemble. Most natural many
particle systems have this property.
Figure 4. Entropy production: A non-thermalizing system cannot exist on its own but must be driven from
the outside. The external drive, that keeps the system away from thermal equilibrium, inevitably increases the
entropy in the environment.
In summary, one can say that more or less the same conditions that are used in the classical
derivation of the canonical ensemble appear naturally in the proof of dynamical thermalization.
Time scales
The most important open problem for the approach described above is that rigorous bounds on
the time scales for decoherence/equilibration/thermalization are not yet known. The results
derived in [12–17] only tell us that decoherence/equilibration/thermalization must eventually
happen under the given conditions, but they do not tell us how long it takes. In general this
seems to be a tough question, but for exactly solvable models the time scales can be derived [22].
4. Entropy production
Returning to the classical framework, let us now study the problem of entropy production. As
outlined in Sect. 2, thermalizing systems (i.e. systems with balanced rates relaxing into thermal
equilibrium) can contain subsystems which are out of thermal equilibrium in the sense that
the transition rates wss′ do not obey detailed balance. The apparent contradiction is resolved
by observing that the effective rates in the subsystem are generally time-dependent and will
eventually adjust in such a way that the subsystem thermalizes as well. However, for a limited
time it is possible to keep them constant in such a way that they violate detailed balance. This
is exactly what happens in experiments far from equilibrium – typically they rely on external
power and will quickly thermalize as soon as power is turned off.
The external drive which is necessary to keep a subsystem away from thermal equilibrium
will on average increase the entropy in the environment, as sketched in Fig. 4. In the following
we discuss various attempts to quantify this entropy production.
Entropy changes
For a subsystem embedded in an environment we distinguish three types of configurational
entropies, namely, the configurational entropy of the total system (‘Universe’), the entropy of
the subsystem (experiment) and the entropy in its environment:
$$\bar S_{\rm env}(t) = \langle S_{\rm env}(t, c)\rangle_s = \bar S_{\rm tot}(t) - \bar S_{\rm sys}(t) \,. \qquad (19)$$
According to the master equation, the averaged entropies of the total system and of the subsystem change as
$$\frac{d}{dt} \bar S_{\rm tot}(t) \;=\; \sum_{c,c' \in \Omega_{\rm tot}} J_{cc'}(t) \ln \frac{P_c(t)}{P_{c'}(t)} \,, \qquad (20)$$
$$\frac{d}{dt} \bar S_{\rm sys}(t) \;=\; \sum_{s,s' \in \Omega_{\rm sys}} J_{ss'}(t) \ln \frac{P_s(t)}{P_{s'}(t)} \,. \qquad (21)$$
Let us now consider a particular stochastic path of the total system,
$$\Gamma:\; c_0 \to c_1 \to c_2 \to \ldots \qquad \text{at times } t_0, t_1, t_2, \ldots \qquad (22)$$
Whenever π(ci ) ≠ π(ci+1 ) a transition in the total system implies a transition in the subsystem,
as sketched in Fig. 2. Denoting the corresponding transition times by tni , the projected stochastic
path of the subsystem γ = π[Γ] reads
$$\gamma:\; s_0 \to s_1 \to s_2 \to \ldots \qquad \text{at times } t_{n_1}, t_{n_2}, t_{n_3}, \ldots \qquad (23)$$
where $s_i = \pi(c_{n_i})$. Along their respective stochastic paths the configurational entropies of the
total system and the subsystem are given by
$$S^{\Gamma}_{\rm tot}(t) = -\ln P_{c(t)}(t) \,, \qquad (24)$$
$$S^{\gamma}_{\rm sys}(t) = -\ln P_{s(t)}(t) \,. \qquad (25)$$
How do these quantities change with time? Following Ref. [23] the temporal evolution of the
configurational entropy is made up of a continuous contribution caused by the deterministic
evolution of the master equation and a discontinuous contribution occurring whenever the system
hops to a different configuration. This means that the time derivative of the system’s entropy is
given by
$$\frac{d}{dt} S^{\gamma}_{\rm sys}(t) \;=\; -\frac{\dot P_{s(t)}(t)}{P_{s(t)}(t)} \;-\; \sum_j \delta(t - t_{n_j}) \ln \frac{P_{s_j}(t)}{P_{s_{j-1}}(t)} \,. \qquad (26)$$
Similarly, the entropy of the environment along the stochastic path is expected to change as
$$\frac{d}{dt} S^{\gamma}_{\rm env}(t) \;=\; -\sum_j \delta(t - t_{n_j}) \ln \frac{w_{s_j s_{j+1}}(t)}{w_{s_{j+1} s_j}(t)} \,. \qquad (29)$$
This formula tells us that each transition s → s′ in the subsystem causes an instantaneous
change of the environmental entropy by the log ratio of the forward rate wss′ divided by the
backward rate ws′ s . Together with Eq. (26) this formula would imply that the total entropy
changes according to
This expression differs significantly from the exact formula (27), so that it can only be meaningful
in an effective sense under certain conditions or in a particular limit.
Before discussing these underlying assumptions in detail, we would like to note that Eq. (29) is
indeed very elegant. It does not require any knowledge about the nature of the environment;
instead it depends exclusively on the stochastic path of the subsystem and the corresponding
transition rates. Moreover, this quantity can be computed very easily in numerical simulations:
Whenever the program selects the move s → s′ , all that has to be done is to increase the
environmental entropy variable by ln(wss′ /ws′ s )¹. Note that the logarithmic ratio of the rates
requires each transition to be reversible.
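A minimal sketch of this bookkeeping (the driven three-state system and its rates are made up for the example; the trajectory is generated with the same continuous-time scheme as in the introductory sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical driven three-state system whose made-up rates violate detailed balance.
w = np.array([[0.0, 2.0, 0.1],
              [0.1, 0.0, 2.0],
              [2.0, 0.1, 0.0]])

# Precomputed log ratios ln(w_ss' / w_s's), cf. footnote 1.
log_ratio = np.zeros_like(w)
mask = (w > 0) & (w.T > 0)
log_ratio[mask] = np.log(w[mask] / w.T[mask])

s, t, S_env = 0, 0.0, 0.0
while True:
    dt = rng.exponential(1.0 / w[s].sum())                 # waiting time until the next transition
    if t + dt > 1000.0:
        break
    t += dt
    s_new = int(rng.choice(len(w), p=w[s] / w[s].sum()))   # selected move s -> s'
    S_env += log_ratio[s, s_new]                           # increase the environmental entropy variable
    s = s_new

print("average environmental entropy production rate:", S_env / t)
```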
To motivate formula (29) heuristically, Seifert argues that the corresponding average of the
entropy production reproduces a well-known result in the literature. More specifically, he shows
that Eq. (29), averaged over many possible paths, gives the expression
$$\frac{d}{dt} \bar S_{\rm env}(t) \;=\; \sum_{s,s' \in \Omega_{\rm sys}} J_{ss'}(t) \ln \frac{w_{ss'}(t)}{w_{s's}(t)} \,. \qquad (31)$$
Combined with Eq. (21) one obtains the average entropy production in the total system
$$\frac{d}{dt} \bar S_{\rm tot}(t) \;=\; \sum_{s,s' \in \Omega_{\rm sys}} J_{ss'}(t) \ln \frac{P_s(t)\, w_{ss'}(t)}{P_{s'}(t)\, w_{s's}(t)} \,. \qquad (32)$$
¹ In order to avoid unnecessary floating point operations, it is useful to store all possible log ratios of the rates
in an array.
This formula was first introduced by Schnakenberg [25] and has been frequently used in chemistry
and physics [26]. It is in fact very interesting to see how Schnakenberg derived this formula. As
described in detail in Appendix A, he considered a fictitious chemical system of homogenized
interacting substances which resemble the dynamics of the master equation in terms of particle
concentrations. Applying standard methods of thermodynamics, he was able to prove Eq. (32).
The rationale behind this derivation is to assume that the environment is always close to thermal
equilibrium.
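For a system with known rates, the averages (31) and (32) are easy to evaluate. In the stationary state they coincide, since by Eq. (21) the subsystem entropy no longer changes; the following sketch computes both for the made-up driven three-state rates used in the previous example, and its output should roughly agree with the stochastic estimate obtained there.

```python
import numpy as np

w = np.array([[0.0, 2.0, 0.1],       # same made-up driven rates as in the previous sketch
              [0.1, 0.0, 2.0],
              [2.0, 0.1, 0.0]])

# Stationary distribution: null eigenvector of the generator matrix.
L = w.T - np.diag(w.sum(axis=1))
eigval, eigvec = np.linalg.eig(L)
P = np.real(eigvec[:, np.argmin(np.abs(eigval))])
P /= P.sum()

J = P[:, None] * w                   # stationary currents J_ss' = P_s w_ss'
mask = (w > 0) & (w.T > 0)
S_env_rate = np.sum(J[mask] * (np.log(w[mask]) - np.log(w.T[mask])))   # Eq. (31)
S_tot_rate = np.sum(J[mask] * (np.log(J[mask]) - np.log(J.T[mask])))   # Eq. (32)
print(S_env_rate, S_tot_rate)        # equal in the stationary state and non-negative
```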
Obviously, only those terms in the second sum will contribute where π(cn−1 ) ≠ π(cn ), i.e. where
n = nj , hence the sum can be reorganized as
which now depends only on the stochastic path γ of the subsystem. This result, saying that
the entropy increase is given by the logarithmic ratio of the number of available configurations,
is very plausible under the assumption of instantaneous thermalization of the environment. It
Figure 5. Configurational entropy of an isolated system, its temporal derivative and the corresponding
probability distribution of entropy differences.
remains to be shown that this ratio is related to the ratio of the effective rates. In fact, inserting
(33) into (12) we obtain
$$w_{ss'}(t) \;=\; \frac{\sum_{c(s)} P_c(t) \sum_{c'(s')} w_{cc'}}{\sum_{c(s)} P_c(t)} \;=\; \frac{\sum_{c(s)} \sum_{c'(s')} w_{cc'}}{N_s(t)} \,, \qquad (37)$$
hence wss′ /ws′ s = Ns′ /Ns . Inserting this relationship into Eq. (36) we arrive at the formula for
the effective entropy production (29). This proves that this formula is valid under the assumption
that the environmental degrees of freedom thermalize instantaneously after each transition of
the subsystem.
This means that the sum of random variables obeying the fluctuation relation will again obey
the fluctuation relation. The remaining proof consists of two steps:
(i) First we prove the fluctuation theorem for a single transition. According to Eq. (27), an
individual transition c → c′ changes the configurational entropy of an isolated system by
∆Stot = ln Jcc′ − ln Jc′ c . Since this transition occurs with frequency Pc wcc′ = Jcc′ , we have
P (∆Stot ) = Jcc′ and similarly P (−∆Stot ) = Jc′ c for the reverse transition. Therefore the
fluctuation relation
$$\Delta S_{\rm tot} \;=\; \ln \frac{P(\Delta S_{\rm tot})}{P(-\Delta S_{\rm tot})} \qquad (40)$$
holds trivially for a single transition.
(ii) The entropy change ∆S over a finite time is the sum of entropy changes caused by individual
transitions. Summing random variables means convolving their probability distributions.
Since the fluctuation theorem is invariant under convolution, it follows that this sum will
automatically obey the fluctuation relation as well.
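This invariance is also easy to verify numerically: any discrete distribution of entropy increments of the form (even profile) × e^{∆S/2} satisfies the fluctuation relation, and so does its convolution with itself. A small sketch with made-up numbers:

```python
import numpy as np

h = 0.1                                   # grid spacing of the entropy increments, Delta S = k*h
x = np.arange(-20, 21) * h

# Made-up single-transition distribution obeying P(x)/P(-x) = exp(x):
# an arbitrary even profile multiplied by exp(x/2) has exactly this property.
p = np.exp(-x**2) * np.exp(x / 2)
p /= p.sum()

q = np.convolve(p, p)                     # distribution of the sum of two increments
xq = np.arange(-40, 41) * h               # support of the convolved distribution

# Check the fluctuation relation ln[P(dS)/P(-dS)] = dS for the convolved distribution.
ratio = np.log(q / q[::-1])
print(np.max(np.abs(ratio - xq)))         # numerically zero: the relation survives the convolution
```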
6. Concluding remarks
In this paper we have addressed several aspects of classical non-equilibrium statistical physics,
describing its general setup and its justification from the quantum perspective. In particular,
we have focused on the problem of entropy production. As we have pointed out, the commonly
accepted formula for entropy production in the environment ∆Senv = ln(wss′ /ws′ s ) holds only
in situations where the environment thermalizes almost immediately after each transition of the
subsystem. Whether this separation of time scales is valid in realistic situations remains to be
seen. Moreover, we have conjectured that this formula gives a lower bound to the average
entropy production in the environment.
Appendix A. Tracing the historical route to entropy production
It is instructive to retrace how the formula for entropy production was derived by Schnakenberg
in 1976 [25]. To quantify entropy production, Schnakenberg considers a fictitious chemical
system that mimics the dynamics of the master equation. This fictitious system is based on the
following assumptions:
Under isothermal and isochoric conditions the chemical reactions change the particle numbers
Ni in such a way that the Helmholtz free energy F is maximized. In chemistry the corresponding
thermodynamic current is called the extent of reaction ξcc′ , which is defined as the expectation
value of the accumulated number of forward reactions Xc → Xc′ minus the number of backward
reactions Xc ← Xc′ . Note that ξcc′ does not account for fluctuations, instead it is understood
as a macroscopic deterministic quantity that grows continuously as
$$\dot\xi_{cc'} \;=\; N_c\, w_{cc'} - N_{c'}\, w_{c'c} \,. \qquad (A.2)$$
$$\dot F \;=\; \sum_{cc'} A_{cc'}\, \dot\xi_{cc'} \,. \qquad (A.4)$$
The affinity is related to the chemical potentials of the involved substances as follows. On the one
hand, the reaction changes the particle numbers by $\dot N_c = -\dot\xi_{cc'}$ and $\dot N_{c'} = +\dot\xi_{cc'}$. On the other
hand, the change of the free energy can be expressed as $\dot F = \sum_c \frac{\partial F}{\partial N_c}\, \dot N_c = \sum_c \mu_c \dot N_c$. Comparing
this expression with Eq. (A.4) the affinity can be expressed as
$$A_{cc'} \;=\; \mu^0_c - \mu^0_{c'} + k_B T \ln \frac{N_{c'}}{N_c} \,. \qquad (A.8)$$
The fictitious chemical system relaxes towards an equilibrium state that corresponds to the
stationary state of the original master equation. In this state the particle numbers Nc attain
certain stationary equilibrium values Nceq . Moreover, the thermodynamic flux and its conjugate
force vanish in equilibrium:
$$A^{\rm eq}_{cc'} = \dot\xi^{\rm eq}_{cc'} = 0 \,. \qquad (A.9)$$
Because of $A^{\rm eq}_{cc'} = 0$ we have
$$\mu^0_c - \mu^0_{c'} \;=\; k_B T \ln \frac{N^{\rm eq}_c}{N^{\rm eq}_{c'}} \,, \qquad (A.10)$$
which allows one to express the affinity as
$$A_{cc'} \;=\; k_B T \ln \frac{N_{c'}\, N^{\rm eq}_c}{N_c\, N^{\rm eq}_{c'}} \,. \qquad (A.11)$$
On the other hand, $\dot\xi^{\rm eq}_{cc'} = 0$ implies that
$$\frac{N^{\rm eq}_c}{N^{\rm eq}_{c'}} \;=\; \frac{w_{c'c}}{w_{cc'}} \,. \qquad (A.12)$$
Inserting this relation into Eqs. (A.11) and (A.4) the change of the free energy (caused by all
reactions Xc ⇋ Xc′ ) is given by
$$\dot F \;=\; k_B T \sum_{cc'} \dot\xi_{cc'} \ln \frac{N_{c'}\, w_{c'c}}{N_c\, w_{cc'}} \,. \qquad (A.13)$$
Since temperature T and internal energy U of the mixture remain constant the variation of the
free energy F = U − T S is fully absorbed in a change of the entropy, i.e. Ḟ = −T Ṡ. This allows
one to derive a formula for the entropy production
$$\dot S \;=\; k_B \sum_{cc'} \dot\xi_{cc'} \ln \frac{[X_c]\, w_{cc'}}{[X_{c'}]\, w_{c'c}} \,, \qquad (A.14)$$
which reproduces the entropy production formula (32) with the concentrations [Xc ] playing the
role of the probabilities.
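As a numerical consistency check, one can integrate the kinetics Ṅc = Σc′ (Nc′ wc′ c − Nc wcc′ ) for made-up balanced rates and arbitrary initial particle numbers and monitor (A.14): the entropy production stays non-negative and vanishes as the equilibrium concentrations are approached. A sketch:

```python
import numpy as np

# Made-up balanced rates constructed as w_{cc'} = p_{c'} with p = (0.5, 0.3, 0.2),
# so that the fictitious chemical system possesses a genuine equilibrium state.
p = np.array([0.5, 0.3, 0.2])
w = np.tile(p, (3, 1))
np.fill_diagonal(w, 0.0)
mask = (w > 0) & (w.T > 0)

N = np.array([100.0, 1.0, 1.0])                 # arbitrary initial particle numbers
dt = 0.01
for step in range(3001):
    flux = N[:, None] * w - N[None, :] * w.T    # Eq. (A.2): net flux of every reaction c -> c'
    # Eq. (A.14) with k_B = 1; the factor 1/2 compensates double counting of each
    # reaction pair when summing over ordered index pairs (c, c').
    S_dot = 0.5 * np.sum(flux[mask] * (np.log((N[:, None] * w)[mask])
                                       - np.log((N[None, :] * w.T)[mask])))
    if step % 1000 == 0:
        print(f"t = {step * dt:5.1f}   entropy production rate = {S_dot:.6f}")
    N += dt * flux.sum(axis=0)                  # N_c grows by the incoming minus the outgoing fluxes
```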
References
[1] Jaynes E, 1957, Information theory and statistical mechanics I, Phys. Rev. 106 620.
[2] Jaynes E, 1957, Information theory and statistical mechanics II, Phys. Rev. 108 171.
[3] Schürmann T and Grassberger P, 1996 Chaos 6 414.
[4] Bonachela JA, Hinrichsen H and Muñoz MA, 2008 J. Phys. A: Math. Theor. 41 202001.
[5] Uffink J, Compendium of the foundations of classical statistical physics,
http://philsci-archive.pitt.edu/2691/.
[6] Schrödinger E, 1927 Ann. Phys. 388, 956; von Neumann J, 1929 Z. Phys. 57, 30.
[7] Bloch I, Dalibard J and Zwerger W, 2008 Rev. Mod. Phys. 80 885.
[8] Trotzky S, Chen Y-A, Flesch A, McCulloch IP, Schollwöck U, Eisert J, and Bloch I, 2011 preprint
arXiv:1101.2659.
[9] Zurek WH, 2003 Rev. Mod. Phys. 75 715.
[10] Joos E, Zeh H, Kiefer C, Giulini D, Kupsch J, Stamatescu IO, 1996 Decoherence and the Appearance of a
Classical World in Quantum Theory (Springer, Berlin).
[11] Schlosshauer M, 2007 Decoherence and the Quantum to Classical Transition (Springer, Berlin).
[12] Linden N, Popescu S, Short AJ, and Winter A, 2009 Phys. Rev. E 79, 061103.
[13] Linden N, Popescu S, Short AJ, and Winter A, 2010 New J. Phys. 12, 055021.
[14] Gogolin C, 2010 Phys. Rev. E 81, 051127.
[15] Reimann P, 2008 Phys. Rev. Lett. 101, 190403.
[16] Short AJ, 2010 preprint arXiv:1012.4622.
[17] Gogolin C, Müller MP, and Eisert J, 2011 Phys. Rev. Lett. 106, 040401.
[18] Srednicki M, 1994 Phys. Rev. E 50 888.
[19] Tasaki H, 1998 Phys. Rev. Lett. 80 1373.
[20] Goldstein S, 2006 Phys. Rev. Lett. 96 050403.
[21] Riera A, Gogolin C, and Eisert J, in preparation.
[22] Cramer M, Dawson CM, Eisert J, and Osborne TJ, 2008 Phys. Rev. Lett. 100, 030602; Cramer M and Eisert
J, 2010 New J. Phys. 12, 055020.
[23] Seifert U, 2005 Phys. Rev. Lett. 95, 040602.
[24] Andrieux D and Gaspard P 2004 J. Chem. Phys. 121 6167
[25] Schnakenberg J 1976 Rev. Mod. Phys. 48 571
[26] Jiu-li L, Van den Broeck C, and Nicolis G, 1984 Z. Phys. B: Condens. Matter 56, 165.
[27] Evans D J, Cohen E G D and Morriss GP, 1993 Phys. Rev. Lett. 71 2401.
[28] Evans D J and Searles D J, 1994 Phys. Rev. E 50 1645.
[29] Gallavotti G and Cohen E G D, 1995 Phys. Rev. Lett. 74 2694.
[30] Kurchan J, 1998 J. Phys. A: Math. Gen. 31 3719.
[31] Lebowitz J L and Spohn H, 1999 J. Stat. Phys. 95 333.
[32] Maes C, 1999 J. Stat. Phys. 95 367.
[33] Jiang D-Q, Qian M and Qian M-P, 2004 Mathematical Theory of Nonequilibrium Steady States (Berlin:
Springer)
[34] Harris R J and Schütz G M, 2007 J. Stat. Mech. P07020.
[35] Kurchan J, 2007 J. Stat. Mech. P07005.