Entropy. 2020 Oct 7;22(10):1137. doi: 10.3390/e22101137

Intrinsic and Extrinsic Thermodynamics for Stochastic Population Processes with Multi-Level Large-Deviation Structure

Eric Smith 1,2,3,4
PMCID: PMC7597283  PMID: 33286906

Abstract

A set of core features is set forth as the essence of a thermodynamic description, which derive from large-deviation properties in systems with hierarchies of timescales, but which are not dependent upon conservation laws or microscopic reversibility in the substrate hosting the process. The most fundamental elements are the concept of a macrostate in relation to the large-deviation entropy, and the decomposition of contributions to irreversibility among interacting subsystems, which is the origin of the dependence on a concept of heat in both classical and stochastic thermodynamics. A natural decomposition that is known to exist, into a relative entropy and a housekeeping entropy rate, is taken here to define respectively the intensive thermodynamics of a system and an extensive thermodynamic vector embedding the system in its context. Both intensive and extensive components are functions of Hartley information of the momentary system stationary state, which is information about the joint effect of system processes on its contribution to irreversibility. Results are derived for stochastic chemical reaction networks, including a Legendre duality for the housekeeping entropy rate to thermodynamically characterize fully-irreversible processes on an equal footing with those at the opposite limit of detailed-balance. The work is meant to encourage development of inherent thermodynamic descriptions for rule-based systems and the living state, which are not conceived as reductive explanations to heat flows.

Keywords: non-equilibrium thermodynamics, large-deviation theory, stochastic processes, chemical reaction networks, emergence of macroworlds

1. Introduction

The statistical derivations underlying most thermodynamic phenomena are understood to be widely applicable, and are mostly developed in general terms. Yet, where thermodynamics is offered as an ontology to understand new patterns and causes in nature—the thermodynamics of computation [1,2,3,4] or stochastic thermodynamics [5], or where these methods are taken to define a foundation for the statistical physics of reproduction or adaptation [6,7]—these problems are framed in terms of two properties particular to the domain of mechanics: the conservation of energy and microscopic reversibility. Other applications of the same mathematics, with designations such as intensity of choice [8,9] tipping points, or early warning signs [10], are recognized as analogies to thermodynamics, on the understanding that they only become the “thermodynamics of” something when they derive its causes from energy conservation connected to the entropies of heat.

Particularly active today, stochastic thermodynamics is the modern realization of a program to create a non-equilibrium thermodynamics that began with Onsager [11,12] and took much of its modern form under Prigogine and coworkers [13,14]. Since the beginning, its core method has been to derive rules or constraints for non-stationary thermal dynamics from dissipation of free energies defined by Gibbs equilibria, from any combination of thermal baths or asymptotic reservoirs.

The parallel and contemporaneous development of thermodynamics of computation can be viewed as a quasistatic analysis with discrete changes in the boundary conditions on a thermal bath corresponding to logical events in algorithms. Early stochastic thermodynamics combined elements of both traditions, with the quantities termed “non-equilibrium entropies” corresponding to the information entropies over computer states, distinct from quasistatic entropies associated with heat in a locally-equilibrium environment, with boundary conditions altered by the explicitly-modeled stochastic state transitions. More recently, stochastic thermodynamics incorporated time-reversal methods originally developed for measures in dynamical systems [15,16,17,18] leading to a variety of fluctuation theorems [19,20,21] and non-equilibrium work relations [22,23,24], still, however, relating path probability ratios either to dissipation of heat or to differences in equilibrium free energies.

This accepted detachment of mathematics from phenomenology, with thermodynamic phenomena interpreted in their historical terms and mathematics kept interpretation-free, contrasts with the way statistical mechanics was allowed to expand the ontological categories of physics at the end of the 19th century. Although the large-deviation rate function underlies the much earlier solution of the gambler’s ruin problem of Pascal, Fermat, and Bernoulli [25], the kinetic theory of heat [26,27,28] was not put forth as an analogy to gambling, used merely to describe patterns in the “actual” fluid caloric. Thermodynamics, instead, took the experienced phenomena involving heat, and where there had formerly been only names and metaphors to refer to them, it brought into existence concepts capturing their essential nature, not dissolving their reality as phenomena [29], but endowing it with a semantics.

(Note, however, that as late as 1957, Jaynes [30,31] needed to assert that the information entropy of Shannon referred to the same quantity as the physical entropy of Gibbs, and was not merely the identical mathematical function. Even the adoption of “entropy production” to refer to changes in the state-variable entropy by irreversible transformations—along the course of which the state-variable entropy is not even defined—is a retreat to a substance syntax. Lieb and Yngvason [32] observe that “caloric” accounts for physics at a macroscopic level just as well as “heat” does. This author would prefer the less euphonious but categorically better expression “loss of large-deviation accessibility” to replace “entropy production”. In limits where the large-deviation scale factor becomes infinite, a notion of adiabatic accessibility as an all-or-nothing distinction has been used as a formal foundation for a classical thermodynamics of equilibrium [32] and non-equilibrium [33,34] systems. Here the attracting manifold of the exponential family that defines large-deviation scale factors will be the central object of interest, precisely so that the scale factor does not need to be made infinite, so “loss of large-deviation accessibility” is meant in probability and not in absolute terms.)

The distinction is between formalization in the service of reduction to remain within an existing ontology, and formalization as the foundation for discovery of new conceptual primitives. Through generalizations and extensions that could not have been imagined in the late 19th century [35,36,37,38] (revisited in Section 7), statistical mechanics went on to do the same for our fundamental theory of objects and interactions, the nature of the vacuum and the hierarchy of matter, and the presence of stable macro-worlds at all.

This paper is written with the view that the essence of a thermodynamic description is not found in its connection to conservation laws, microscopic reversibility, or the equilibrium state relations they entail, despite the central role those play in the fields mentioned [2,5,6,7]. At the same time, it grants an argument that has been maintained across a half-century of enormous growth in both statistical methods and applications [39,40]: that thermodynamics should not be conflated with its statistical methods. The focus will therefore be on the patterns and relations that make a phenomenon essentially thermodynamic, which statistical mechanics made it possible to articulate as concepts. The paper proposes a sequence of these and exhibits constructions of them unconnected to energy conservation or microscopic reversibility.

Essential concepts are of three kinds: (1) the nature and origin of macrostates; (2) the roles of entropy in relation to irreversibility and fluctuation; and (3) the natural apportionment of irreversibility between a system and its environment, which defines an intrinsic thermodynamics for the system and an extrinsic thermodynamics that embeds the system in its context, in analogy to the way differential geometry defines an intrinsic curvature for a manifold distinct from extrinsic curvatures that may embed the manifold in another manifold of higher dimension. (This analogy is not a reference to natural information geometries [41,42] that can also be constructed, though perhaps those geometries would provide an additional layer of semantics to the constructions here.)

All of these concepts originate in the large-deviation properties of stochastic processes with multi-level timescale structure. They will be demonstrated here using a simple class of stochastic population processes, further specified as chemical reaction network models as more complex relations need to be presented.

To summarize briefly and intuitively, both microstate and macrostate are concepts applied relative to the dynamical level under consideration in a stochastic process. Within a given level, they are different classes of entities: macrostates are a special class of distributions over microstates, specifiable by a fixed collection of statistical central tendencies decoupled from the exact scale that separates micro and macro. The separation of scale from structure is the defining scaling relation for large deviations [43]. The more formal construction is explained in Section 3.2.

The roles of entropy are more difficult to summarize in short-hand, as a number of distinct patterns are present in the relation of thermal descriptions at the mesoscale and its associated macroscale. It will be necessary in the following to distinguish the roles of entropy as a functional on general distributions, introduced in Section 2.3, versus entropy as a state function on macrostates, and on the Lyapunov role of entropy in the 2nd law [28,39] versus its large-deviation role in fluctuations [44], through which the state-function entropy is most generally definable [43]. The latter two roles and their difference are treated starting in Section 4.2. Both Shannon’s information entropy [45] and the older Hartley function [46] will appear here as they do in stochastic thermodynamics generally [5]. The different meanings these functions carry, which sound contradictory when described and can be difficult to compare in derivations with different aims, become clear as each appears in the course of a single calculation. Most important, they have unambiguous roles in relation to either large deviations or system decomposition without need of a reference to heat.

1.1. On the History of the Development of Ideas Used Here, and a Transition from Mechanical to Large-Deviation Perspectives

This section provides a brief summary of ideas and literature where results presented below were first derived, and an explanation of what the current paper contributes in addition. In the interest of brevity, some technical references are made to methods that have not been didactically introduced above, but are demonstrated in later sections. It may be easier to read the constructive sections of the paper and then revisit this summary to follow the motivation for connections claimed here.

1.1.1. On the Additive Decomposition of Total Entropy Change

The large body of work accumulated since the late 1990s in what could be called the thermodynamics of the mesoscale contains most technical results on entropy decomposition used below, derived in the language of fluctuation theorems and generally framed in energy equivalents. Fluctuation theorems and non-equilibrium work relations connect historically salient questions about limits to work that grow naturally out of mechanics, with a distinct set of questions about partitioning of sources of irreversibility that are more central from a large-deviation perspective.

Several authors have argued [21,32,47,48,49] that the founding efforts to formalize a 2nd law recognize not one, but two categories or origins of irreversibility. One is the formulation of Clausius, related to irreversibility of transients; the other is due to Kelvin and Planck and related to constitutional irreversibility of transitions in driven systems.

Recognizing that constitutional irreversibility interferes with extension of the usual Gibbs free energy change to characterize transients, Oono and Paniconi [50] argued for an additive separation of the two components of dissipation, introducing housekeeping heat (constitutive) phenomenologically in terms of quasistatic transformations through non-equilibrium steady states, with excess heat (transient) left to extend Gibbs free energy to non-equilibrium relaxation. The non-negativity of housekeeping heat, independent of the presence or rate of transients, was established with a fluctuation theorem by Hatano and Sasa [51], with subsequent treatment by Speck and Seifert [52] and others [21,53]. (Understanding of the essence of a non-equilibrium or generalized thermodynamics has developed on one hand within a deliberately phenomenological track as in [40,50] and many parts of [54], and in parallel with the support of underlying statistical mechanics as in [21,51,52,53] and most literature on stochastic thermodynamics.)

The division of total entropy change into two individually non-negative terms, and the perspective it gives on the generalization from equilibrium to non-equilibrium thermodynamics, were subsequently studied by numerous authors [21,47,48,49,54,55,56,57,58]. Chemical reaction systems of the kind studied here in Section 4 were considered in [54,57], and multi-level systems defined by separation of timescales were considered in [55]. The centrality of the large-deviation function for the stationary state was emphasized in all of [49,54,55,56,57]. While these treatments are carried out within a context of energy conservation and develop interpretations in terms of work and heat, it was noted that the fluctuation theorems themselves do not depend on conservation laws. The work overall represents a shift from a primarily mechanical framing to one emphasizing fluctuation accessibility.

Within classical thermodynamics, the lack of a definite sign for changes in Shannon entropy is the key feature of the theory [59] enabling thermal or chemical work or refrigeration. The Kelvin–Planck formulation of irreversibility does not arise, as driven systems are excluded from the scope of the theory, and the Clausius formulation does not arise, as the theory is organized around the sub-manifold of adiabatically interconvertible macrostates [39]. (For a modern treatment of equilibrium see [32], followed by discussion of the limits of equivalent methods for non-equilibrium in [33,34].)

When an explicit irreversible dynamics is introduced, however, the lack of definite sign change for both Shannon entropy and rejected environmental heat appears not as a feature but as a “deficiency that these quantities themselves do not obey a fluctuation theorem” [21]. Retrospectively, we see that for classical thermodynamics the deficiency was remedied by Gibbs [60] in the construction of the free energy as a relative entropy with a measure defined in terms of the constitutive parameters [61]. Adiabatic transformations exchange Shannon entropy for entropy associated with support in the Gibbs measure, now regarded as a property of the system, while the Gibbs free energy as a whole evolves unidirectionally in a spontaneous process under detailed balance. With respect to the construction of relative entropies, the Hartley function of non-equilibrium steady states of [50,51] then becomes the correct generalization of the Gibbs measure away from detailed balance.

It is emphasized in [48] that the log-steady-state distribution is a “non-local” function of the constitutive parameters, possibly never realized by the dynamics, and in [48,55] that thermodynamics derived from the rate equations within a system become conditionally independent of the origins of these constitutive parameters outside the system or at faster timescales, given the steady-state measure. The current paper encapsulates these observations in the view of entropy relative to the stationary state as the intrinsic thermodynamics closing the description of a mesoscale system, and of the housekeeping heat as embedding the system extrinsically in its environment. The intrinsic/extrinsic characterization substitutes for the non-adiabatic/adiabatic dichotomy emphasized in [21,50] that grew out of the organization of equilibrium thermodynamics in terms of adiabatic accessibility.

The same shift away from the classical thermodynamic emphasis on potentials generalizing the mechanical potential energy underlies our use of the steady-state Hartley measure as a connection rather than as a term in a time-dependent potential. In [47], the time derivative of the steady-state measure needed to be separated from the relative entropy in order to arrive at a fluctuation theorem. Within an energy framework, the reference measure is called a “boundary entropy” (their Equation (15)), and its time derivative a “driving entropy production” (their Equation (18)). The steady-state measure, used here as a connection, is not a potential term changing in time, but rather a tangent vector defining the null direction for changes in Shannon entropy of a system.

1.1.2. On Subsuming All Entropy Concepts within a Uniform Large-Deviation Paradigm

The definition of entropy in terms of large deviations, with the Lyapunov property for relaxations derived as a consequence, follows the approach of [43]. The Hamilton–Jacobi and eikonal methods used to approximate large-deviation functions follow the established treatments in [40,57] and [62], respectively.

The emphasis in [43] on the large-deviation scaling relation and invariant manifold (related to the invariant manifold of renormalization-group flow [36,38]), rather than on the asymptotic limit of infinite scale separation, together with the use of eikonal methods, permits us a somewhat more flexible and more complete treatment of scale separation than that of [55], due to the following algebraic observation: In the analysis of multiscale systems, the large-deviation scaling parameter (e.g., volume or particle number) becomes the inverse of some weak coupling that defines the convergence of approximation methods (as in the derivation of the central-limit theorem). As emphasized by Cardy [63] and Coleman [64] Ch. 7, expansions in moments of fluctuations are perturbation series in these weak couplings, but naïvely constructed they are only asymptotic expansions. The instanton trajectories associated with first passages underlie an expansion in essential singularities (terms of the form e^{-1/g} in the perturbative coupling g), which must first be extracted to regularize perturbation expansions to render them convergent power series. The resulting analytic distinction formally separates the within-scale effects in perturbative expansions from the essential singularities carrying cross-level scale separations, for finite as well as asymptotic scale separations.

Thus, to compare approaches: in [55], fast and slow variables of indefinitely separated timescale were declared by hand, and the fast variables were then averaged over distributions conditioned on the values of slow variables. In the eikonal expansion developed here, the fixed points in the large-deviation analysis, and the connection of other states to them by relaxation rays, define the conditional structure of a chain rule for entropies in Section 3.2.2 and Section 4.2.2. Perturbative corrections can in principle be computed about instanton backgrounds that are merely rapid and need not be instantaneous, though in practice such calculations tend to be technically difficult [65]. We note, however, that a guiding criterion in the approach of [55] also applies here: the hierarchical conditional structure and the defining relations for a thermodynamic description are self-similar when applied to any scale. Thermal properties are instantiated distinctly at different scales, but they are conceptualized without reference to any particular scale.

Finally, the derivation of fluctuating state-function entropies from a single multiscale large-deviation function clarifies the use of the Hartley function as a random variable on trajectories, addressed in [58]. From an approach originating in accounting consistency between mesoscale and macroscale thermodynamics [5], the log-probability either of the distribution under study, or of a reference distribution, must be evaluated as a random variable along trajectories, so that the average will recover a Shannon entropy or relative entropy of the distribution of trajectory endpoints.

Though it has become a standard usage to refer to both the random variable and its average merely as “entropies”, four distinct meanings are in play in the meso-to-macro relation: (1) A relative entropy function of arbitrary multiscale distributions that increases deterministically; state-function entropies either as (2) large-deviation or (3) Lyapunov functions that fluctuate (with distinct dynamics for the two cases under finite-rate transitions, as shown in Section 4.2), and (4) Hartley information of a reference measure that may never be attained by the system’s dynamics [48]. The fourth form, interpreted in Section 7 as a measure derived from the generating parameters of the system with information about their integration, illustrates concretely the value of an informational interpretation of entropies as random variables, distinct from the dynamical interpretation [58].

1.2. Main Results and Order of the Derivation

Section 2 explains what is meant by multi-level systems with respect to a robust separation of timescales, and introduces a family of models constructed recursively from nested population processes. Macrostates are related to microstates within levels, and if timescale separations arise that create new levels, they come from properties of a subset of long-lived, metastable macrostates. Such states within a level, and transitions between them that are much shorter than their characteristic lifetimes, map by coarse-graining to the elementary states and events at the next level.

The environment of a system that arises in some level of a hierarchical model is understood to include both the thermalized substrate at the level below, and other subsystems with explicit dynamics within the same level. Section 2.4 introduces the problem of system/environment partitioning of changes in entropy, and states the first main claim: that the natural partition is the one into a relative entropy [66] within the system and a housekeeping entropy rate [51] to embed the system in the environment. (The technical requirements are either evident or known from fluctuation theorems [52,53]; variant proofs are given in later sections. Their significance for system decomposition is the result of interest here.) It results in two independently non-negative entropy changes (plus a third entirely within the environment that is usually ignored), which define the intensive thermodynamics of the focal system and the extensive, or embedding, thermodynamics of the system within the system/environment whole.

Section 3 expresses the relation of macrostates to microstates in terms of the large-deviations concept of separation of scale from structure [43]. Large-deviations scaling, defined as the convergence of distributions for aggregate statistics toward exponential families, creates a formal concept of a macroworld having definite structure, yet separated by an indefinite or even infinite range of scale from the specifications of micro-worlds.

Large-deviations for population processes are handled with the Hamilton–Jacobi theory [40] following from the time dependence of the cumulant-generating function (CGF), and its Legendre duality [41] to a large-deviation function called the effective action [67]. Macrostates are identified with the distributions that can be assigned large-deviation probabilities from a system’s stationary distribution. Freedom to study any CGF makes this definition, although concrete, flexible enough to require choosing what Gell-Mann and Lloyd [68,69] term “a judge”. The resulting definition, however, does not depend on whether the system has any underlying mechanics or conservation laws, explicit or implicit.
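For reference, the generic large-deviation objects invoked here can be written compactly. The notation below (scale factor V, tilt θ, macrostate variable x, scaled CGF Λ, rate function I) is chosen only for illustration and is not the paper's; it is the standard Gärtner–Ellis pairing to which the text refers:

\Lambda(\theta) \;\equiv\; \lim_{V \to \infty} \frac{1}{V} \log \mathbb{E}\!\left[ e^{V \theta \cdot x} \right], \qquad I(x) \;=\; \sup_{\theta} \left[ \theta \cdot x - \Lambda(\theta) \right],

so that macrostate probabilities scale as e^{-V I(x)}, with Λ and I the Legendre-dual pair referred to in the text as the cumulant-generating function and the effective action.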

To make contact with multi-level dynamics and problems of interest in stochastic thermodynamics [5,70], while retaining a definite notation, Section 4 assumes generators of the form used for chemical reaction networks (CRNs) [71,72,73,74]. While numerous treatments of CRNs simply write down the stoichiometry and proceed directly to solve the chemical master equation [54,57,70], we appeal specifically to the ontology of species and complexes put forward by Horn and Jackson [71] and Feinberg [72], as the basis for this treatment. It both characterizes hypergraphs as generators for this distinctive class of systems, and when combined with the Doi operator algebra in Section 4.1, expresses the projection between the transition matrix in the state space, and the complex network in the generator, that will be the basis for cycle decompositions in Section 5. The finite-to-infinite mapping that relates macrostates to microstates has counterparts for CRNs in maps from the generator matrix to the transition matrix, and from mass-action macroscopic currents to probability currents between microstates.

This section formally distinguishes the Lyapunov [39] and large-deviation [44] roles of entropy, and shows how the definition of macrostates from Section 3 first introduces scale-dependence in the state-function entropy that was not present in the Lyapunov function for arbitrary multi-scale models of Section 2. In the Hamilton–Jacobi representation, Lyapunov and large-deviation entropy changes occur within different manifolds and have different interpretations. These differences are subordinate to the more fundamental difference between the entropy state function and entropy as a functional on arbitrary distributions over microstates. A properly formulated 2nd law, which is never violated, is computed from both deterministic and fluctuation macrostate-entropies. The roles of Hartley informations [46] for stationary states in stochastic thermodynamics [5,52,75] enter naturally as macrostate relative entropies.

Section 5 derives the proofs of monotonicity of the intrinsic and extrinsic entropy changes from Section 2, using a cycle decomposition of currents in the stationary distribution. The decomposition is related to that used by Schnakenberg [76] to compute dissipation in the stationary state, but more cycles are required in order to compute dynamical quantities. Whether only cycles or more complex hyperflows [77] are required in a basis for macroscopic currents distinguishes complexity classes for CRNs. Because cycles are always a sufficient basis in the microstate space, the breakdown of a structural equivalence between micro- and macro-states for complex CRNs occurs in just the terms responsible for the complex relations between state-function and global entropies derived in Section 4.

Section 6 illustrates two uses of the intensive/extensive decomposition of entropy changes. Some of the formalism in the earlier sections may seem more grounded with an example in mind, and readers who prefer this approach are encouraged to browse this section to see a fairly conservative example, still within domains commonly treated with stochastic thermodynamics [54,57,70], where the large-deviation emphasis provides more flexibility in handling irreversibility.

The section studies a simple model of polymerization and hydrolysis under competing spontaneous and driven reactions in an environment that can be in various states of disequilibrium. First, a simple linear model of the kind treated by Schnakenberg [76] is considered, in a limit where one elementary reaction becomes strictly irreversible. Although the housekeeping entropy rate becomes uninformative because it is referenced to a diverging chemical potential, this is a harmless divergence analogous to a scale divergence of an extensive potential. A Legendre dual to the entropy rate remains regular, reflecting the existence of a thermodynamics of events on which energy conservation, although present on the path to the limit, asymptotically imposes no relevant constraints.

A second example introduces autocatalysis into the coupling to the disequilibrium environment, so that the system can become bistable. The components of entropy rate usually attributed to the “environment” omit information about the interaction of reactions responsible for the bistability, and attribute too much of the loss of large-deviation accessibility to the environment. Housekeeping entropy rate gives the correct accounting, recognizing that part of the system’s irreversibility depends on the measure for system relative entropy created by the interaction.

Section 7 offers an alternative characterization of thermodynamic descriptions when conservation and reversibility are not central. In 20th century physics the problem of the nature and source of macroworlds has taken on a clear formulation, and provides an alternative conceptual center for thermodynamics to relations between work and heat. System decomposition and the conditional independence structures within the loss of large-deviation accessibility replace adiabatic transformation and heat flow as the central abstractions to describe irreversibility, and define the entropy interpretation of the Hartley information. Such a shift in view will be needed if path ensembles are to be put on an equal footing with state ensembles to create a fully non-equilibrium thermodynamics. The alternative formulation is meant to support the development of a native thermodynamics of stochastic rule-based systems and a richer phenomenology of living states.

2. Multi-Level Systems

To provide a concrete class of examples for the large-deviation relations that can arise in multi-scale systems, we consider systems with natural levels, such that within each level a given system may be represented by a discrete population process. The population process in turn admits leading-exponential approximations of its large-deviation behavior in the form of a continuous dynamical system. These dynamical systems are variously termed momentum-space WKB approximations [78], Hamilton–Jacobi representations [40], or eikonal expansions [62]. They exist for many other classes of stochastic process besides the one assumed here [79,80], and may be obtained directly from the Gärtner-Ellis theorem [44] for cumulant-generating functions (the approach taken here), by direct WKB asymptotics, or through saddle-point methods in 2-field functional integrals [67,81].

A connection between levels is made by supposing that the large-deviation behavior possesses one or more fixed points of the dynamical system. These are coarse-grained to become the elementary states in a similar discrete population process one level up in the scaling hierarchy. Nonlinearities in the dynamical system that produce multiple isolated, metastable fixed points are of particular interest, as transitions between these occur on timescales that are exponentially stretched, in the large-deviation scale factor, relative to the relaxation times. The resulting robust criterion for separation of timescales will be the basis for the thermodynamic limits that distinguish levels and justify the coarse-graining.
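The stretching can be stated as an order-of-magnitude estimate (a standard Freidlin–Wentzell or Arrhenius-type scaling, written here with illustrative symbols: V for the large-deviation scale factor and ΔI for the rate-function barrier between fixed points):

\frac{ \tau_{\text{first-passage}} }{ \tau_{\text{relaxation}} } \;\sim\; e^{\, V \Delta I} \quad \text{(up to prefactors algebraic in } V\text{)},

so any finite barrier produces an arbitrarily large ratio of transition to relaxation times as V grows, which is the robust criterion for timescale separation used to distinguish levels.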

2.1. Micro to Macro, within and between Levels

Models of this kind may be embedded recursively in scale through any number of levels. Here we will focus on three adjacent levels, diagrammed in Figure 1, and on the separation of timescales within a level, and the coarse-grainings that define the maps between levels. The middle level, termed the mesoscale, will be represented explicitly as a stochastic process, and all results that come from large-deviations scaling will be derived within this level. The microscale one level below, and the macroscale one level above, are described only as needed to define the coarse-graining maps of variables between adjacent levels. Important properties such as bidirectionality of escapes from metastable fixed points in the stationary distribution, which the large-deviation analysis in the mesoscale supplies as properties of elementary transitions in the macroscale, will be assumed self-consistently as inputs from the microscale to the mesoscale.

Figure 1.


A multiscale population process with a lattice of fixed points. Middle layer is the grid of microstates and transitions in the mesoscale. Surface logρ_ for a stationary distribution is indicated by the color gradient. Fixed points (asterisks) are at well bottoms. Modes of ρ_ concentrated near fixed points are indicated in greyscale beneath the surface. A classical deterministic trajectory (black, with arrow) starts at the open circle and relaxes to the fixed point for that basin. First-passage trajectories (colors) then mediate well switching by rare large deviations. Fixed points of trajectory equations in the mesoscale are coarse-grained to become elementary microstates (black dots) in the macroscale (top layer), and first passages become elementary transition events (arrows).

Within the description at a single level, the discrete population states will be termed microstates, as is standard in (both classical and stochastic) thermodynamics. Microstates are different in kind from macrostates, which correspond to a sub-class of distributions (defined below), and from fixed points of the dynamical system. There is no unique recipe for coarse-graining descriptions to reduce dimensionality in a multi-scale system, because the diversity of stochastic processes is vast. Here, to obtain a manageable terminology and class of models, we limit to cases in which the dynamical-system fixed points at one level can be put in correspondence with the microstates at the next higher level. Table 1 shows the terms that arise within, and correspondences between, levels.

Table 1.

Structures from micro to macro within and between levels. H-J refers to the Hamilton–Jacobi dynamical system that arises in the large-deviation approximation. Note that the distinction between microstate and macrostate is a distinction of kind, which occurs within a level, whereas microscale, mesoscale, and macroscale are designations of relative levels with respect to large-deviation timescale separations. Terms in the same row are representations of the same quantity in different levels, related through coarse-graining.

Macroscale | Mesoscale | Microscale
microstate | H-J fixed point |
thermalization of the microscale | H-J relaxation trajectories |
elementary transitions between microstates | H-J first-passages |
arbitrary distribution on microstates | coarse-grained distribution on fixed points |
macrostate (distribution) | ↔ H-J variables |
| microstate | H-J fixed point

2.2. Models Based on Population Processes

The following elements furnish a description of a system:

The multiscale distribution:

The central object of study is any probability distribution ρ, defined down to the smallest scale in the model, and the natural coarse-grainings of ρ produced by the dynamics. To simplify notation, we will write ρ for the general distribution, across all levels, and let the indexing of ρ indicate which level is being used in a given computation.

Fast relaxation to fixed points at the microscale:

The counterpart, in our analysis of the mesoscale, to Prigogine’s [13,14] assumption of local equilibrium in a bath, is fast relaxation of the distribution on the microscale, to a distribution with modes around the fixed points. (For an example allowing relaxation to local stationary non-equilibrium, in the same spirit as the approach here, see the averaging over fast degrees of freedom in [55].) In the mesoscale these fixed points become the elementary population states indexed n, and the coarse-grained probability distribution is denoted ρ_n. If a population consists of individuals of types indexed p ∈ 1, …, P, then n ≡ [n_p] is a vector in which the non-negative integer coefficient n_p counts the number of individuals of type p.

Elementary transitions in the mesoscale:

First-passages occurring in the (implicit) large-deviations theory, between fixed points n and n′ at the microscale, appear in the mesoscale as elementary transitions n′ → n. In Figure 1, the elementary states are grid points and elementary transitions occur along lines in the grid in the middle layer. We assume as input to the mesoscale the usual condition of weak reversibility [70], meaning that if an elementary transition n′ → n occurs with nonzero rate, then the transition n → n′ also occurs with nonzero rate. Weak reversibility is a property we will derive for first passages within the mesoscale, motivating its adoption at the lower level.

Thermalization in the microscale:

The elementary transitions n′ → n are separated by typical intervals exponentially longer in some scale factor than the typical time in which a single transition completes. (In the large-deviation theory, they are instantons.) That timescale separation defines thermalization at the microscale, and makes microscale fluctuations conditionally independent of each other, given the index n of the basin in which they occur. Thermalization also decouples components n_p in the mesoscale at different p except through the allowed elementary state transitions.

A System/Environment partition within the mesoscale:

Generally, in addition to considering the thermalized microscale a part of the “environment” in which mesoscale stochastic events take place, we will choose some partition of the type-indices p to distinguish one subset, called the system (s), from one or more other subsets that also form part of the environment (e). Unlike the thermalized microscale, the environment-part in the mesoscale is slow and explicitly stochastic, like the system. The vector n indexes a tensor product space s ⊗ e, so we write n ≡ (n_s, n_e).

The notation here for nested population processes lends itself directly to examples such as large chemical reaction networks, in which a subset of species and reactions are regarded as the system s and the remainder serve as an environment e of chemostats, or to Darwinian populations in which a subset s of focal species evolve under frequency-dependent selection with background populations e treated only in aggregate. The reader is asked to imagine as well a variety of heterogeneous multi-level systems that could be handled in similar fashion with suitable notations for various case-specific state spaces. Examples of current interest include the use of active media such as chemotactic bacteria to drive mesomechanical loads [82,83]. The load is s, undergoing biased Brownian motion, while the active medium is e, which must be modeled dynamically on the same timescale as the load. Active media with biological components are also most-naturally modeled with completely irreversible elementary events, and provide a large part of our motivation to demonstrate how to handle such limits formally in Section 6.1.3.

Marginal and conditional distributions in system and environment:

On s ⊗ e, ρ_n is a joint distribution. A marginal distribution ρ^s for the system is defined by ρ^s_{n_s} ≡ Σ_{n ∋ n_s} ρ_n, where n ∋ n_s fixes the s component of n in the sum. From the joint and the marginal, a conditional distribution ρ^{e|s} at each n_s is given by ρ_{n=(n_s,n_e)} = ρ^s_{n_s} ρ^{e|s}_n. The components of ρ^{e|s} can fill the role often given to chemostats in open-system models of CRNs. Here we keep them as explicit distributions, potentially having dynamics that can respond to changes in s.

Notations involving pairs of indices:

Several different sums over the pairs of indices associated with state transitions appear in the following derivations. To make equations easier to read, the following notations are used throughout:

  • ⟨n, n′⟩ denotes an unordered pair of indices.

  • Σ_{n′ ≠ n} counts every pair in both orders.

  • Σ_{⟨n, n′⟩} counts every unordered pair once.

  • Therefore, for any function f(n, n′), Σ_{n′ ≠ n} f(n, n′) = Σ_{⟨n, n′⟩} [ f(n, n′) + f(n′, n) ].

  • Σ_{n ∋ n_s} is a sum on the n_e component of n = (n_s, n_e).

  • Σ_{⟨n, n′⟩ ∋ n_s} counts all unordered pairs n, n′ with common s-component n_s.

2.3. Stochastic Description within the Mesoscale

The coarse-grained distribution ρn at the mesoscale evolves in time under a master equation

\dot{\rho}_n = \sum_{n'} T_{n n'} \rho_{n'} = \sum_{n' \neq n} \left( w_{n n'} \rho_{n'} - w_{n' n} \rho_n \right). (1)

Here and below, (˙) indicates the time derivative. The generator T is a stochastic matrix on the left, which we write 1^T T = 0, where 1^T is the row-vector on index n corresponding to the uniform (unnormalized) measure. w_{n n′} (following a standard notation [70]) is the component of T giving the transition rate from state n′ to state n.

For all of what follows, it will be necessary to restrict to systems that possess a stationary, normalizable distribution denoted ρ̲, satisfying Tρ̲ = 0. The stationary distribution will take the place of conservation laws as the basis for the definition of macrostates. Moreover, if ρ̲ is everywhere continuous, and the number of distinct events generating transitions in the mesoscale (in a sense made precise below) is finite, first passages between fixed points corresponding to modes of ρ̲ will occur at rates satisfying a condition of detailed balance. The joint, marginal, and conditional stationary distributions are denoted ρ̲, ρ̲^s, and ρ̲^{e|s}, related by ρ̲_{n=(n_s,n_e)} = ρ̲^s_{n_s} ρ̲^{e|s}_n.

The marginal stochastic process on s:

The system-marginal distribution ρ^s evolves under a master equation ρ̇^s = T^s(ρ^{e|s}) ρ^s, for which the transition matrix T^s has components that are functions of the instantaneous environmental distribution, given by

w^s_{n_s n'_s}\!\left( \rho^{e|s} \right) \equiv \sum_{n \ni n_s} \sum_{n' \ni n'_s} w_{n n'} \rho^{e|s}_{n'}. (2)

The only assumption on the separation of timescales underlying Equation (2) is that the same treatment to eliminate fast variables has been used across the system s and environment e that remain in the mesoscale. In practice this is not restrictive other than to assume that timescale separations are sparse enough to leave well-defined scales between them to describe with fixed collections of stochastic variables; for a given level of description we eliminate all faster variables than those left explicit. As noted in the summary Section 1.1, the separation need not be asymptotic or absolute: as long as eikonal methods identify fixed points in the microscale that can serve as elementary states in the mesoscale, and as long as an instanton method (with or without perturbative corrections) exists to compute effective rate constants (see Section 3.2.2, Section 4.2.1 and Section 4.2.2 later in the paper), the mesoscale projection has a well-defined construction by elimination of faster degrees of freedom.
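As a concrete reading of Equation (2), the short sketch below computes marginal system rates from joint rates and a conditional environment distribution. The dimensions, rates, and array names are hypothetical, chosen only to show the index bookkeeping:

```python
# Sketch of Equation (2): marginal system rates from joint rates and the
# conditional environment distribution.  All sizes and values are illustrative.
import numpy as np

NS, NE = 2, 3                      # number of system and environment states
rng = np.random.default_rng(1)

# w_joint[a, e, b, f]: joint rate from n' = (b, f) to n = (a, e)
w_joint = rng.uniform(0.0, 1.0, size=(NS, NE, NS, NE))

# rho_cond[b, f]: conditional environment distribution rho^{e|s} at system state b
rho_cond = rng.dirichlet(np.ones(NE), size=NS)

# Eq. (2): w_sys[a, b] = sum over e, f of w_joint[a, e, b, f] * rho_cond[b, f]
w_sys = np.einsum('aebf,bf->ab', w_joint, rho_cond)

print(w_sys)                       # effective mesoscale rate from n_s' = b to n_s = a
```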

Time-independent overall stationary distribution as a reference:

To maintain a clean abstraction in which all dynamics is kept explicit within the written distribution ρ, the parameters defining the multiscale stochastic process are assumed time-homogeneous. This assumption makes possible the definition of a Lyapunov function for the entire multi-level system distribution, and is tantamount to formally closing the system description. (Equivalently, the stochastic process abstraction is the only one assumed; any mechanical control parameters or other degrees of freedom are to be internal degrees of freedom simply possessing restricted distributions.) Since fast degrees of freedom at the microscale are assumed to relax to unique fixed points except for rare transitions that are left as the explicit events in the mesoscale, time-homogeneity of the overall description implies time-homogeneity of the effective coefficients in the mesoscale transition matrix T. Residual mesoscale dynamics is then carried exclusively in the distributions ρs and ρes.

Detailed balance propagating up from the microscale:

Finally we assume, propagating up from the microscale, a condition of detailed balance that we will prove as a property of first-passages in the mesoscale in Section 4.2.2, and then apply recursively:

w_{n n'} \underline{\rho}_{n'} = w_{n' n} \underline{\rho}_{n}. (3)

Note that, as was a central theme in [55], condition (3) is not an assumption of microscopic reversibility in whatever faster stochastic process is operating below in the microscale. To understand why, using constructions that will be carried out explicitly within the mesoscale, see that even with rate constants satisfying Equation (3), the system-marginal transition rates (2) need not satisfy a condition of detailed balance. Indeed we will want to be able to work in limits for the environment’s conditional distributions ρ^{e|s} in which some transitions can be made completely irreversible: that is, w^s_{n_s n′_s} ≠ 0 but w^s_{n′_s n_s} = 0. Even from such irreversible dynamics among microstates, first-passage rates with detailed balance in the stationary distribution will result, and it is that property that is assumed in Equation (3).

From these assumptions on T and ρ_, it follows that the relative entropy of any distribution ρ from the stationary distribution ρ_ is non-decreasing,

\dot{D}\!\left( \rho \,\|\, \underline{\rho} \right) = \sum_{n' \neq n} \log \frac{ (\rho / \underline{\rho})_{n'} }{ (\rho / \underline{\rho})_{n} } \, w_{n n'} \rho_{n'}
= \sum_{n' \neq n} \log \frac{ w_{n n'} \rho_{n'} }{ w_{n' n} \rho_{n} } \, w_{n n'} \rho_{n'}
= \sum_{\langle n, n' \rangle} \log \frac{ w_{n n'} \rho_{n'} }{ w_{n' n} \rho_{n} } \left( w_{n n'} \rho_{n'} - w_{n' n} \rho_{n} \right) \geq 0. (4)

The result (4) appears as an integral fluctuation theorem in all stochastic treatments of the total entropy change [21,47,48,49,54,55,57]; as a single-time result it is elementary for systems with detailed balance, because each term in the third line of Equation (4) is individually non-negative. We use relative entropy to refer to minus the Kullback–Leibler divergence of ρ from ρ_ [66], to follow the usual sign convention for a non-decreasing entropy.
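The monotonicity in Equation (4) is easy to check numerically. The sketch below builds a hypothetical three-state generator obeying detailed balance (3) with respect to a chosen stationary distribution, evolves an arbitrary distribution by Equation (1), and confirms that the relative entropy (minus the Kullback–Leibler divergence) never decreases; all states, rates, and the random seed are illustrative.

```python
# Numerical check of Equation (4) for a small detailed-balance generator.
import numpy as np

rng = np.random.default_rng(0)

rho_stat = np.array([0.5, 0.3, 0.2])           # chosen stationary distribution
A = rng.uniform(0.5, 1.5, size=(3, 3))
A = 0.5 * (A + A.T)                            # symmetric prefactor
np.fill_diagonal(A, 0.0)

w = A / rho_stat[np.newaxis, :]                # w[n, m]: rate m -> n; Eq. (3) holds by construction
T = w - np.diag(w.sum(axis=0))                 # generator of Eq. (1); columns sum to zero
assert np.allclose(T @ rho_stat, 0.0)          # rho_stat is stationary

def rel_entropy(rho):
    # D(rho || rho_stat) = minus the Kullback-Leibler divergence
    return -np.sum(rho * np.log(rho / rho_stat))

rho = np.array([0.05, 0.05, 0.90])             # arbitrary initial distribution
dt, D_prev = 1e-3, None
for _ in range(5000):
    rho = rho + dt * (T @ rho)                 # Euler step of the master equation (1)
    D = rel_entropy(rho)
    assert D_prev is None or D >= D_prev - 1e-12   # non-decreasing, per Eq. (4)
    D_prev = D

print("final D(rho || rho_stat):", D_prev)     # approaches 0 from below
```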

2.4. System-Environment Decompositions of the Entropy Change

The exchange of heat for work is central to classical thermodynamics because energy conservation is a constraint on joint configurations across sub-systems, either multiple thermal systems in contact or a mechanical subsystem having only deterministic variables, some of which set the boundary conditions on thermal subsystems that also host fluctuations. Only the state-function entropy, however, is a “function” of energy in any sense, so the only notion of a limiting partition of irreversible effects between subsystems derivable from energy conservation is the one defined by adiabatic transformations passing through sequences of macrostates.

In more general cases, with or without conservation laws, the boundary conditions on a system are imposed only through the elements of the marginal transition matrix Ts. The problem remains, of understanding how one subsystem can limit entropy change in another through a boundary, but it is no longer organized with reference to adiabatic transformations.

We wish to understand what constitutes a thermodynamically natural decomposition of the mesoscale process into a system and an environment. A widely-adopted decomposition [5,70] for systems with energy conservation separates a Shannon entropy of ρns from heat generation associated with terms logwnsnss by the local equilibrium assumption for the bath. (The decomposition is the same one used to define an energy cost of computation [2,3] by constructing logical states through the analogues to heat engines [1,61].) We begin by writing down this information/heat decomposition, and arguing that it is not the natural partition with respect to irreversibility. An equivalent argument, not framed as one regarding inherent system/environment identities, but rather as making fullest use of available fluctuation theorems, is made in [21].

Entropy relative to the stationary state rather than Shannon entropy:

The following will differ from the usual construction in replacing Shannon entropy with a suitable relative entropy, without changing the essence of the decomposition. As noted in [49], the natural entropy for a stochastic process will generally be a relative entropy for which a measure must be specified. Two arguments may be given for this claim: It would be clear, for a system with a continuous state space in which ρn would become a density, that the logarithm of a dimensional quantity is undefined. Hence some reference measure is always implicitly assumed. A uniform measure is not a coordinate-invariant concept, and a measure that is uniform in one coordinate system makes those coordinates part of the system specification. (The same argument is made in [84] with respect to Bayesian statistics, and as a critique of arguments in “objective Bayesianism” that one can evade the need to choose.) Since discrete processes are often used as approximations to continuum limits, the same concerns apply. The more general lesson is that a logarithmic entropy unit is always given meaning with respect to some measure. Only for systems such as symbol strings, for which a combinatorial measure on integers is the natural measure, is Shannon entropy the corresponding natural entropy. For other cases, such as CRNs, the natural entropy is relative entropy referenced to the Gibbs equilibrium [61], and its change gives the dissipation of chemical work. For the processes described here, the counterpart to Shannon entropy that solves these consistency requirements, but does not yet address the question of naturalness, is the relative entropy referenced to the steady-state marginal ρ_s. Its time derivative is given by

D˙ρsρ_s=ns,nslogρ/ρ_nssρ/ρ_nsswnsnssρnsswnsnssρnss. (5)

The quantity (5) need not be either positive or negative in general.

A second term that separates out of the change in total relative entropy (4) comes from changes in environmental states through events that do not result in net change of the system state. (Note that w_{n n′} may depend on the system index-component n_s shared by both n and n′, so these rates can depend on system state. Catalysis acts through such dependencies.) The relative entropy of the conditional distribution ρ^{e|s} at a particular index n_s from its stationary reference ρ̲^{e|s} has time derivative

\dot{D}^{e|s}_{n_s}\!\left( \rho^{e|s} \,\|\, \underline{\rho}^{e|s} \right) = \sum_{\langle n, n' \rangle \ni n_s} \log \frac{ (\rho / \underline{\rho})^{e|s}_{n'} }{ (\rho / \underline{\rho})^{e|s}_{n} } \left( w_{n n'} \rho^{e|s}_{n'} - w_{n' n} \rho^{e|s}_{n} \right)
= \sum_{\langle n, n' \rangle \ni n_s} \log \frac{ w_{n n'} \rho^{e|s}_{n'} }{ w_{n' n} \rho^{e|s}_{n} } \left( w_{n n'} \rho^{e|s}_{n'} - w_{n' n} \rho^{e|s}_{n} \right). (6)

Unlike the change of system relative entropy (5), Equation (6) is non-negative term-by-term, in the same way as Equation (4).

The remaining terms to complete the entropy change (4) come from joint transformations in system and environment indices n_s and n_e, and in usual treatments have the interpretation of dissipated heats. (When more than one environment transition couples to the same system transition, there can be reasons to further partition these terms; an example is given in Section 6.1.) They are functions of the pair of indices (n_s, n′_s). An average change in relative entropy of the environment, over all processes that couple to a given system state-change, is

\sigma_{n_s n'_s}\!\left( \rho^{e|s} \right) \equiv \frac{1}{w^s_{n_s n'_s}} \sum_{n \ni n_s} \sum_{n' \ni n'_s} \log \frac{ (\rho / \underline{\rho})^{e|s}_{n'} }{ (\rho / \underline{\rho})^{e|s}_{n} } \, w_{n n'} \rho^{e|s}_{n'}. (7)

Note that if we had wished to use the un-referenced Shannon entropy −Σ_{n_s} ρ^s_{n_s} log ρ^s_{n_s} in place of the relative entropy (5)—for instance, in an application to digital computing—we could shift the measures ρ̲^s to the dissipation term to produce what is normally considered the “environmental” heat dissipation, given by

\sigma^{env}_{n_s n'_s} \equiv w^s_{n_s n'_s} \sigma_{n_s n'_s} - \log \frac{ \underline{\rho}^s_{n'_s} }{ \underline{\rho}^s_{n_s} } \, w^s_{n_s n'_s} = \sum_{n \ni n_s} \sum_{n' \ni n'_s} \log \frac{ w_{n n'} \rho^{e|s}_{n'} }{ w_{n' n} \rho^{e|s}_{n} } \, w_{n n'} \rho^{e|s}_{n'}. (8)

The quantity (8) is regarded as a property of the environment (both slow variables and the thermal bath) because it is a function only of the transition rates w_{n n′} and of the marginal distributions ρ^{e|s}.

2.4.1. The Information/Heat Decomposition of Total Relative-Entropy Change

Equation (4) is decomposed in terms of the quantities in Equations (5)–(7) as

\dot{D}\!\left( \rho \,\|\, \underline{\rho} \right) = \dot{D}\!\left( \rho^s \,\|\, \underline{\rho}^s \right) + \sum_{n'_s \neq n_s} \sigma_{n_s n'_s} \, w^s_{n_s n'_s} \rho^s_{n'_s} + \sum_{n_s} \rho^s_{n_s} \dot{D}^{e|s}_{n_s}\!\left( \rho^{e|s} \,\|\, \underline{\rho}^{e|s} \right). (9)

The total is non-negative and the third summand is independently non-negative, as already mentioned. The sum of the first two summands is also non-negative, a result that can be proved as a fluctuation theorem for what is normally called “total entropy change” [21]. (More detailed proofs for a decomposition of the same sum will be given below.)

Here we encounter the first property that makes a decomposition of a thermal system “natural”. The term in Ḋ^{e|s}_{n_s}(ρ^{e|s} ‖ ρ̲^{e|s}) is not generally considered, and it does not need to be considered, because thermal relaxation at the microscale makes transitions in n_e at fixed n_s conditionally independent of transitions that change n_s. Total relative entropy changes as a sum of two independently non-decreasing contributions.

The first and second lines in Equation (9) are not likewise independently non-negative. Negative values of the second or first line, respectively, describe phenomena such as randomization-driven endothermic reactions, or heat-driven information generators. To the extent that they do not use thermalization in the microscale to make the system and environment conditionally independent, as the third summand in Equation (9) is independent, we say they do not provide a natural system/environment decomposition.

2.4.2. Relative Entropy Referencing the System Steady State at Instantaneous Parameters

Remarkably, a natural division does exist, based on the housekeeping heat introduced in a phenomenological treatment by Oono and Paniconi [50], for which a fluctuation theorem was subsequently derived by Hatano and Sasa [51] followed by Speck and Seifert [52]. (In related proofs later in Section 5.2.2, we will more nearly follow the treatment by Harris and Schütz [53].)

The decomposition uses the solution ρ̄^s to T^s ρ̄^s = 0, which would be the stationary marginal distribution for the system s at the instantaneous value of ρ^{e|s}. As for the whole-system stationary distribution ρ̲, we restrict to cases in which ρ̄^s exists and is normalizable. (There are multiple ways in which a normalizable marginal distribution ρ̄^s does not restrict normalizability elsewhere in the description. Environment distributions ρ^{e|s} may be taken to large-system limits to model chemostats, for which a normalizable distribution on an unbounded state space may have asymptotically unbounded entropy and also have the same effect in s as a non-normalizable distribution in a covering space that counts cycles traversed within s [49]. A more serious restriction is faced if chemostats permit no normalizable distribution within the system, as might occur quite naturally in polymerization models admitting indefinite growth. In such cases, other methods of analysis must be used [85].)

Treating ρ̄^s as fixed and considering only the dynamics of ρ^s with ρ̄^s as a reference, we may consider a time derivative Ḋ(ρ^s ‖ ρ̄^s) in place of Equation (5). Note that the rates w^s_{n_s n′_s} in the transition matrix T^s no longer need to satisfy any simplified balance condition in relation to ρ̄^s, such as detailed balance. Non-negativity of Ḋ(ρ^s ‖ ρ̄^s) was proved by Schnakenberg [76] by an argument that applies to discrete population processes with normalized stationary distributions of the kind assumed here. We will derive a slightly more detailed decomposition proving this result for the case of CRNs, in a later section.

The dissipation term that complements the change in Ḋ(ρ^s ‖ ρ̄^s) is obtained by shifting the “environmental” entropy change (8) by log ρ̄^s to obtain the housekeeping heat

\sigma^{HK}_{n_s n'_s} \, w^s_{n_s n'_s} \equiv \sigma^{env}_{n_s n'_s} + \log \frac{ \bar{\rho}^s_{n'_s} }{ \bar{\rho}^s_{n_s} } \, w^s_{n_s n'_s} = \sum_{n \ni n_s} \sum_{n' \ni n'_s} \log \frac{ \bar{\rho}^s_{n'_s} \rho^{e|s}_{n'} / \underline{\rho}_{n'} }{ \bar{\rho}^s_{n_s} \rho^{e|s}_{n} / \underline{\rho}_{n} } \, w_{n n'} \rho^{e|s}_{n'}. (10)

(These terms are used because this is how they are known. No energy interpretation is assumed here, so a better term would be “housekeeping entropy rate”). Non-negativity of Equation (10) is implied by a fluctuation theorem [52], and a time-local proof with interesting further structure for CRNs will be given below.

The total change in relative entropy (4) is then the sum

\dot{D}\!\left( \rho \,\|\, \underline{\rho} \right) = \dot{D}\!\left( \rho^s \,\|\, \bar{\rho}^s \right) + \sum_{n'_s \neq n_s} \sigma^{HK}_{n_s n'_s} \, w^s_{n_s n'_s} \rho^s_{n'_s} + \sum_{n_s} \rho^s_{n_s} \dot{D}^{e|s}_{n_s}\!\left( \rho^{e|s} \,\|\, \underline{\rho}^{e|s} \right). (11)

Each summand in Equation (11) is now independently non-negative. The first measures a gain of entropy within the system s, conditionally independent of changes in the environment given the marginal transition matrix T^s. The third measures a gain of entropy in the environment e independent of any changes in the system at all. The second, housekeeping entropy rate, measures a change of entropy in the environment that is conditionally independent of changes of entropy within the system, given T^s as represented in ρ̄^s. Any of the terms may be changed, holding the conditioning data ρ̄^s or n_s, or omitted, without changing the limits for the others, as observed in [48,55]. They respect the conditional independence created by thermalization in the microscale, and by that criterion constitute a natural decomposition of the system.
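A numerical sketch may make the bookkeeping concrete. The example below builds a hypothetical two-state system coupled to a three-state environment with randomly chosen joint rates obeying detailed balance (3), evaluates the three summands of Equation (11) as written above for an arbitrary joint distribution, and checks that they sum to the total rate of change of relative entropy and that each is separately non-negative. Array names and dimensions are illustrative only.

```python
# Sketch of the decomposition in Equation (11) for a small joint process.
import numpy as np

NS, NE = 2, 3
rng = np.random.default_rng(3)

# Joint stationary distribution and detailed-balance rates w[a, e, b, f] : (b, f) -> (a, e).
rho_stat = rng.dirichlet(np.ones(NS * NE)).reshape(NS, NE)
A = rng.uniform(0.5, 1.5, size=(NS * NE, NS * NE))
A = 0.5 * (A + A.T)
np.fill_diagonal(A, 0.0)
w = (A / rho_stat.reshape(-1)[np.newaxis, :]).reshape(NS, NE, NS, NE)

rho = rng.dirichlet(np.ones(NS * NE)).reshape(NS, NE)    # arbitrary joint distribution
rho_s = rho.sum(axis=1)                                  # marginal rho^s
rho_cond = rho / rho_s[:, np.newaxis]                    # conditional rho^{e|s}
stat_cond = rho_stat / rho_stat.sum(axis=1, keepdims=True)

# Marginal system rates, Eq. (2), and the instantaneous stationary state rho_bar^s.
w_sys = np.einsum('aebf,bf->ab', w, rho_cond)
T_sys = w_sys - np.diag(w_sys.sum(axis=0))
evals, evecs = np.linalg.eig(T_sys)
bar_s = np.real(evecs[:, np.argmin(np.abs(evals))])
bar_s = bar_s / bar_s.sum()

# Total rate of change of D(rho || rho_stat) under the joint master equation (1).
w_flat = w.reshape(NS * NE, NS * NE)
T = w_flat - np.diag(w_flat.sum(axis=0))
rho_dot = (T @ rho.reshape(-1)).reshape(NS, NE)
D_dot_total = -np.sum(rho_dot * np.log(rho / rho_stat))

# Summand 1: intrinsic term, change of D(rho^s || rho_bar^s) with rho_bar^s held fixed.
term1 = -np.sum(rho_dot.sum(axis=1) * np.log(rho_s / bar_s))

# Summand 3: environment-only transitions at fixed n_s, Eq. (6), weighted by rho^s.
term3 = 0.0
for a in range(NS):
    x = rho_cond[a] / stat_cond[a]
    for e in range(NE):
        for f in range(NE):
            if e != f:
                term3 += np.log(x[f] / x[e]) * w[a, e, a, f] * rho[a, f]

# Summand 2: housekeeping term, Eq. (10), summed over system-changing transitions.
y = bar_s[:, np.newaxis] * rho_cond / rho_stat           # reference measure appearing in Eq. (10)
term2 = 0.0
for a in range(NS):
    for b in range(NS):
        if a == b:
            continue
        for e in range(NE):
            for f in range(NE):
                term2 += np.log(y[b, f] / y[a, e]) * w[a, e, b, f] * rho[b, f]

print(D_dot_total, term1 + term2 + term3)    # the two totals agree, per Eq. (11)
print(term1, term2, term3)                   # each summand is non-negative
```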

2.4.3. Intrinsic and Extrinsic Thermodynamics

We take Ts and Dρsρ¯s as a specification of the intrinsic thermodynamics of the system s, analogous to the role of intrinsic curvature of a manifold in differential geometry. The vector (indexed by pairs of system indices) of housekeeping entropy differentials, σnsnsHK, correspondingly defines the way s is thermodynamically embedded in the mesoscale system, analogous to the role of components of an embedding curvature for a sub-manifold within a larger manifold. In differential geometry, the embedding curvature is termed extrinsic curvature, and by analogy we refer to σnsnsHK as defining an extrinsic thermodynamics for s within the larger se mesoscale world. The rates we separate as intrinsic and extrinsic correspond respectively to the “non-adiabatic” and “adiabatic” entropy production designations in [21,50].

2.4.4. System Hartley Information as a Temporal Connection

The natural decomposition (11) differs from the information/heat decomposition (9) in what is regarded as inherent to the system versus the environment, as mentioned briefly above in Section 1.1. The attribution of Shannon entropy as a “system” property follows from the fact that it involves only ρnss, and its change counts only actual transitions nsns with rate wnsnss. Likewise, the “environment” heat (8) is a function only of the actual distribution ρes and the realized currents.

Those events that occur within s or e, however, fail to capture the additional features of Ts: that specific transitions are coupled as a system, and that they have the dependence on ρes of Equation (2). The stationary distribution ρ¯s is the function of Ts reflecting its status as a system. In the degenerate case where irreversibility is removed and the system obeys detailed balance, the stationary distribution ρ¯s reduces to the marginal Gibbs measure on the system degrees of freedom, but as part of the relative entropy it is treated as part of the system description, in contrast to the mechanical view where it appears as an intensive boundary condition associated with the environment.

The differences between the three entropy-change terms (7,8,10) are differences of the Hartley informations [46], respectively for ρ_s or ρ¯s. In the natural decomposition (11), they are not acted upon by the time derivative, but rather define the tangent plane of zero change for logρ terms that are acted upon by the transition matrix, and resemble connection coefficients specifying parallel transport in differential geometry.

3. Hamilton–Jacobi Theory for Large Deviations

The central concepts in thermodynamics are that of the macrostate, and of the entropy as a state function from which properties of macrostates and constraints on their transformations are derived. In classical thermodynamics [39,86], macrostates are introduced in association with average values of conserved quantities (e.g., energy, particle numbers). Conservation laws provide the essential element of conditioning in probability. Energy conservation alone, in the context of detailed balance that results from microscopic reversibility under conditions of time-translation symmetry, enables specification of invariant marginal distributions conditionally independent of, and without a need to specify, the entire state space. The generality of energy conservation also makes it a robust property on which to condition.

Here, we wish to separate the construction that defines a macrostate from the properties that make one or another class of macrostates dynamically robust in a given system. The property of conditioning will be provided not by conservation, but by reference to stationary distributions and the tilting procedure by which generating functions are defined from these. The defining construction can be quite general, but it must in all cases create a dimensional reduction by an indefinite (or in the limit infinite) factor, from the dimensionality of the microstate space that is definite but arbitrarily large, to the dimensionality of a space of macrostate variables that is fixed and independent of the dimensionality of microstates. Only dimensional reductions of this kind are compatible with the large-deviation definition of macroworlds as worlds in which structure can be characterized asymptotically separate from scale [43]. Robustness can then be characterized separately within the large-deviation analysis in terms of closure approximations or spectra of relaxation times for various classes of macrostates.

Dimensional reduction by an indefinite degree is achieved by associating macrostates with particular classes of distributions over microstates: namely, those distributions produced in exponential families to define generating functions. The coordinates in the tilting weights that define the family are independent of the dimension of the microstate space for a given family and become the intensive state variables (see [41]). They are related by Legendre transform to deviations that are the dual extensive state variables. Relative entropies such as Dρρ_, defined as functionals on arbitrary distributions, and dual under Legendre transform to suitable cumulant-generating functions, become state functions when restricted to the distributions for macrostates. The extensive state variables are their arguments and the intensive state variables their gradients. Legendre duality leads to a system of Hamiltonian equations [40] for time evolution of macrostate variables, and from these the large-deviation scaling behavior, timescale structure, and moment closure or other properties of the chosen system of macrostates are derived.

In the treatment of Section 2, the relative entropy increased deterministically without reference to any particular level of system timescales, or the size-scale factors associated with various levels. As such, it fulfilled the Lyapunov role of (minus) the entropy, but not the large-deviation role that is the other defining characteristic of entropy [43,44]. The selection of a subclass of distributions as macrostates introduces level-dependence, scale-dependence, and the large-deviation role of entropy, and lets us construct the relation between the Lyapunov and large-deviation roles of entropy for macroworlds, which are generally distinct. As a quantity capable of fluctuations, the macrostate entropy can decrease along subsets of classical trajectories; these fluctuations are the objects of study in stochastic thermodynamics. The Hamiltonian dynamical system is a particularly clarifying representation for the way it separates relaxation and fluctuation trajectories for macrostates into distinct sub-manifolds. In distinguishing the unconditional versus conditional nature of the two kinds of histories, it shows how these macro-fluctuations are not “violations” of the 2nd law, but rather a partitioning of the elementary events through which the only properly formulated 2nd law is realized. (See a brief discussion making essentially this point in Section 1.2 of [5]. Seifert refers to the 2nd law as characterizing “mean entropy production”, in keeping with other interpretations of entropies such as the Hartley information in terms of heat. The characterization adopted here is more categorical: the Hartley function and its mean, Shannon information, are not quantities with the same interpretation; likewise, the entropy change (11) is the only entropy relative to the boundary conditions in T that is the object of a well-formulated 2nd law. The “entropy productions” resulting from the Prigogine local-equilibrium assumption are conditional entropies for macrostates defined through various large-deviation functions, shown explicitly below.)

3.1. Generating Functions, Liouville Equation, and the Hamilton–Jacobi Construction for Saddle Points

3.1.1. Relation of the Liouville Operator to the Cumulant-Generating Function

The P-dimensional Laplace transform of a distribution ρn on discrete population states gives the moment-generating function (MGF) for the species-number counts n_p, p ∈ 1, …, P. For time-dependent problems, it is convenient to work in the formal Doi operator algebra for generating functions, in which a vector of raising operators a† ≡ (a_p†) are the arguments of the MGF, and a conjugate vector of lowering operators a ≡ (a_p) are the formal counterparts to ∂/∂a†. MGFs are written as vectors |ρ⟩ ≡ Σ_n ρ_n (a†)^n |0⟩ ≡ Σ_n ρ_n |n⟩ in a Hilbert space built upon a ground state |0⟩, and the commutation relations of the raising and lowering operators acting in the Hilbert space are [a_p, a_q†] = δ_pq.

The basis vectors corresponding to specific population states are denoted |n⟩. They are eigenvectors of the number operators a_p† a_p with eigenvalues n_p:

a_p^{\dagger} a_p \left| n \right\rangle = n_p \left| n \right\rangle. (12)

Through by-now-standard constructions [67], the master equation ρ˙=Tρ is converted to a Liouville equation for time evolution of the MGF,

ddtρ=La,aρ, (13)

in which the Liouville operator La,a is derived from the elements of the matrix T.

To define exponential families and a cumulant-generating function (CGF), it is convenient to work with the Laplace transform with an argument that is a vector zzp of complex coefficients. The corresponding CGF, Γlogz, for which the natural argument is logz, is constructed in the Doi Hilbert space as the inner product with a variant on the Glauber norm,

eΓlogz=0ezaρ. (14)

z will be called the tilt of the exponential family, corresponding to its usage in importance sampling [87]. (We adopt the sign for Γ corresponding to the free energy in thermodynamics. Other standard notations, such as ψ for the CGF [41], are unavailable because they collide with notations used in the representation of CRNs below.)

An important quantity will be the vector of expectations of the number operators in the tilted distribution

n_{\rho \, p}(z) \equiv \frac{\langle 0 | \, e^{z \cdot a} \, a_p^{\dagger} a_p \, | \rho \rangle}{\langle 0 | \, e^{z \cdot a} \, | \rho \rangle}. (15)
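As a small numerical illustration (the one-species Poisson stationary state with mean 8 is an assumption made only for this sketch, not a system from the text), the tilted mean of Equation (15) can be evaluated by direct summation; for a Poisson distribution it reduces to the closed form λz.

```python
import numpy as np
from scipy.stats import poisson

# Direct-summation sketch of the tilted mean n_rho(z) = sum_n n z^n rho_n / sum_n z^n rho_n
# for a single species.  For a Poisson state with mean lam the exact answer is lam * z.
lam = 8.0
n = np.arange(0, 100)
rho = poisson.pmf(n, lam)
for z in (0.5, 1.0, 1.5):
    w = z**n * rho
    print(z, np.sum(n * w) / np.sum(w), lam * z)   # numerical sum vs closed form
```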

Time evolution of the CGF follows from Equation (13), as

teΓlogz=0ezaLa,aρ. (16)

Under the saddle-point or leading-exponential approximation that defines the large-deviation limit (developed further below), the expectation of L in Equation (16) is replaced by the same function at classical arguments

tΓlogz=Lz,nρz/z. (17)

A systematic though indirect way to derive Equation (17) is by means of coherent-state expansions and 2-field functional integral representations of the generating function, reviewed didactically with a simple example in [67]. The flavor of the construction is easy to understand, however. The operator e^{z·a} acts, through the commutation relations [a_p, a_q†] = δ_pq, as a shift operator taking a† → a† + z on all instances of a† in L(a†, a). The left null state ⟨0| in the inner product is annihilated by all a†, so in the ordering of L called normal order, which places all a† operators to the left of all a operators, the only remaining argument in place of a† is the number z. The substitution for a of n_ρ(z)/z relies on the replacement of all instances of a_p† a_p by factors of n_p, followed by the mean-field approximation—another name for the saddle-point approximation—in which higher-order correlations are approximated by equivalent powers of the mean. The factor a in these number expansions differs from the number operator a†a by the additional factors of a† replaced by z as explained above. The saddle-point evaluation of a with the coherent-state parameter that is its eigenvalue is more direct in the functional-integral construction [67], where coherent-state and number fields are related through a canonical transformation [88] in the field variables of integration.

From coherent-state to number-potential coordinates:

With normal ordering of operators in L (all lowering operators to the right of any raising operator), z is the exact value assigned to a† in the expectation (16), and only the value n_ρ(z)/z for a depends on saddle-point approximations. In special cases, where |ρ⟩ are eigenstates of a known as coherent states, the assignment to a is also exact. Therefore, the arguments of L in Equation (17) are called coherent-state coordinates.

However, logz, which we henceforth denote by θθp, is the affine coordinate system in which the CGF is locally convex, and it will be preferable to work in coordinates θ,nρ, which we term number-potential coordinates, because for applications in chemistry, θ has the dimensions of a chemical potential. (θ also provides the affine coordinate system in the exponential family, which defines contravariant coordinates in information geometry [41].) We abbreviate exponentials and other functions acting component-wise on vectors as zeθ, and simply assign nρθnρz. Likewise, L is defined in either coordinate system by mapping its arguments: L(NP)θ,nL(CS)z,nz/z.

3.1.2. Legendre Transform of the CGF

To establish notation and methods, consider first distributions ρn that are convex with an interior maximum in n. Then, the gradient of the CGF

Γθ=nρθ (18)

gives the mean (15) in the tilted distribution.

The stochastic effective action is the Legendre transform of Γ, defined as

Seffn=maxθθn+Γθ. (19)

For distributions over discrete states, to leading exponential order, Seff is a continuously-indexed approximation to minus the log-probability: Seff(n) ≈ −log ρ_n |_{n→n}. Its gradient recovers the tilt coordinate θ,

Seffn=θρn, (20)

and the CGF is obtained by inverse Legendre transform.

Γθ=maxnθnSeffn. (21)
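The following sketch illustrates the Legendre pair numerically for a one-species Poisson distribution with mean 8 (an illustrative stand-in, not a model from the text). It uses the ordinary cumulant-generating function Λ(θ) = log⟨e^{θn}⟩; under the free-energy sign convention adopted above, Γ corresponds to Λ up to sign bookkeeping, which does not affect the value of the transform. The result agrees with −log ρ_n up to the sub-leading Stirling corrections expected of a leading-exponential approximation.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

# Numerical Legendre transform for a Poisson "stationary state" with mean 8 (illustrative).
# Lambda(theta) = log sum_n e^{theta n} rho_n, and
# S_eff(n) = max_theta { theta*n - Lambda(theta) } ~= -log rho_n to leading exponential order.
lam = 8.0
grid = np.arange(0, 200)
rho = poisson.pmf(grid, lam)

def Lambda(theta):
    return np.log(np.sum(np.exp(theta * grid) * rho))

def S_eff(n):
    res = minimize_scalar(lambda th: -(th * n - Lambda(th)), bounds=(-5.0, 2.0), method="bounded")
    return -res.fun

for n in (3, 8, 15):
    # agreement is up to O(log n) Stirling corrections, as expected at leading order
    print(n, round(S_eff(n), 3), round(-np.log(poisson.pmf(n, lam)), 3))
```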

The time evolution of Seff can be obtained by taking a total time derivative of Equation (19) along any trajectory, and using Equation (18) to cancel the term in θ˙. The partial derivative that remains, evaluated using Equation (17), gives

Sefftn=Lθρn,n. (22)

Equation (22) is of Hamilton–Jacobi form, with L filling the role of the Hamiltonian. (The reason for this sign correspondence, which affects nothing in the derivation, will become clear below.)

Multiple modes, Legendre–Fenchel transform, and locally-defined extrema

Systems with interesting multi-level structure do not have Seff or ρ globally convex, but rather only locally convex. For these, the Legendre–Fenchel transform takes the place of the Legendre transform in Equation (19), and if constructed with a single coordinate θ, Γ may have discontinuous derivative.

For these one begins, rather than with the CGF, with Seff, evaluated as a line integral of Equation (20) in basins around the stationary points nρ0 at θ=0. Each such basin defines an invertible pair of functions θρnnρ0 and nρθnρ0. We will not be concerned with the large-deviation construction for general distributions in this paper, which is better carried out using a path integral. We return in a later section to the special case of multiple metastable fixed points in the stationary distribution, and the modes associated with the stationary distribution ρ_, and provide a more complete treatment.

3.1.3. Hamiltonian Equations of Motion and the Action

Partial derivatives with respect to t and n commute, so the relation (20), used to evaluate /n of Equation (22), gives the relation

θρnt=Ln. (23)

The dual construction for the time dependence of n from Equation (18) gives

nρθt=Lθ. (24)

The evolution Equations (23) and (24) describe stationary trajectories of an extended-time Lagrange–Hamilton action functional which may be written in either coherent-state or number-potential coordinates, as

S=dtdtzn/z+L(CS)z,n/z=dtdtθn+L(NP)θ,n. (25)

The same action functionals are arrived at somewhat more indirectly via 2-field functional integral constructions such as the Doi–Peliti method. From the form of the first term in either line, it is clear that the two sets of coordinates relate to each other through a canonical transformation [88].

Circulation-free vector field of θ for the stationary distribution

In order for Seff to be a continuum approximation to logρ, if ρ exists and is smooth everywhere, the vector field θ obtained from stationary trajectories of the action (25) must have zero circulation in order to be a well-defined gradient through Equation (20). To check that this is the case, consider the increment of θ under a small interval dt under Equation (23):

dθpdtLnp. (26)

The gradient in n of θ therefore increments in time as

θp+dθpnq=θpnq+dtL2npnq. (27)

Contraction of Equation (27) with the antisymmetric symbol in p and q vanishes

\frac{d}{dt} \left[ \epsilon^{pq} \, \frac{\partial \theta_p}{\partial n_q} \right] = 0, (28)

so the circulation of θ is the same everywhere as at the fixed points.

From Equation (20), it is required to be the case that ∂θ_p/∂n_q = ∂²S_eff/∂n_p∂n_q ≡ (g⁻¹)_pq, the inverse of the Fisher metric [41], symmetric by construction and thus giving ε^{pq}(g⁻¹)_pq = 0. The only difference between the fixed point and any other point is that for distant points we are relying on Hamiltonian trajectories to evaluate θ, whereas at the fixed point, the Fisher metric may be calculated by means not relying on the large-deviation saddle-point approximation. Therefore, Equation (28) may be read as a check that symmetry of the Fisher metric is preserved by Hamiltonian trajectories, directly from the symmetric partial derivative of L in Equation (27).

3.2. The Stationary Distribution and Macrostates

Up to this point only the Lyapunov role (4) of the relative entropy has been developed. While the increase of Dρρ_ has the appearance of the classical 2nd law, we can understand from three observations that this relative entropy is not the desired generalization of the entropy state function of classical thermodynamics to express the phenomenology of multi-level systems:

  1. The relative entropy Dρρ_ is a functional on arbitrary distributions, like the Shannon entropy that is a special case. It identifies no concept of macrostate, and has no dependence on state variables.

  2. In a multi-level system that may have arbitrarily fine-grained descriptions, there is no upper limit to Dρρ_, and no appearance of the system scale at any particular level, which characterizes state-function entropies.

  3. Equation (4) describes a deterministic increase of relative entropy; the large-deviation role of entropy as a log-probability for macrostate fluctuations [44] does not appear.

The step that has not yet been taken in our construction is, of course, the identification of a macrostate concept. Here, we depart from the usual development based on conservation laws, and follow Gell-Mann and Lloyd [68,69] in claiming that the concept of macrostate is not inherent in features of a system’s dynamics, but requires one to explicitly choose a procedure for aggregation or coarse-graining—what they call a “judge”—as part of the commitment to which phenomenology is being described.

We will put forth the definition of macrostates as the tilted distributions arising in generating functions for the stationary distribution ρ_. In the case of generating functions for number, these are the distributions appearing in Equation (14), which we will denote by ρn¯. They are the least-improbable distributions with a given non-stationary mean to arise through aggregate microscopic fluctuations, and therefore dominate the construction of the large-deviation probability.

(The justification for this interpretation, as one entailed by the leading-exponential definition of the large-deviation approximation, is found in the conditional dependence structure of the Hamilton–Jacobi representation. If the large-deviation function for ρ_ exists and is continuous, its gradient is the vector field formed from the conjugate momentum variable θ to n at each point. The value logρ_ is reached cumulatively as θ·dn along the stationary Hamiltonian contours, meaning that the leading probability to reach each point is conditionally dependent only on earlier points on the contour. Concentration of conditional dependence within the stationary Hamiltonian trajectories identifies the bundles of microtrajectories that pass through the distributions indexed by the macrostates as carrying the leading probability for all further escapes, which is what we mean by these distributions being “least-improbable” under fluctuations. The classic reference for eikonal treatment of escapes for boundary-value problems and first passages is [62]. Illustrations of ways the ray method can fail to estimate a large-deviation function for non-equilibrium distributions are found in [89,90]. Further discussion of stationary Hamiltonian rays as reductions to equivalent one-dimensional problems, which may be easily understood in terms of balance of up-going and down-going probability flows, may be found in [91].)

The extensive state variable associated with this definition of macrostate is the tilted mean from Equation (18), which we will denote n¯=nρ_θ. If ρ_ and ρn¯ are sharply peaked—the limit in which the large-deviation approximation is informative—the relative entropy of the macrostate is dominated at the saddle point of ρn¯, where logρn¯0, and thus

D\left(\rho^{\bar{n}} \,\|\, \underline{\rho}\right) \approx - \log \underline{\rho}_{n} \Big|_{n \to \bar{n}} = \underline{S}_{\mathrm{eff}}(\bar{n}). (29)

The general relative entropy functional, applied to the macrostate, becomes the entropy state function S_eff, which takes as its argument the extensive state variable n¯. Moreover, because probability under ρ_ is concentrated on configurations with the scale that characterizes the system, tilted means n¯ that are not suppressed by exponentially small probabilities will have comparable scale. If n¯ is also the scale factor in the large-deviations function (a property that may or may not hold, depending on the system studied), then S_eff(n¯) is of comparable order in scale, and the entropy state function now has the characteristic scale of the mesoscale level of the process description.

The three classes of distributions that enter a thermodynamic description are summarized in Table 2.

Table 2.

Notation conventions adopted for three classes of distributions arising in large-deviation systems with multi-scale relaxation structure.

Notation: Definition (Comment)
ρ_, ρ_s, ρ_es: whole-system, s-marginal, and e|s-conditional distributions in the global stationary state.
ρ¯s: marginal system steady-state distribution (a function of the instantaneous environment conditional distribution ρes).
ρn¯: macrostate tilted to the saddle-point value n¯ (defined relative to the global stationary distribution ρ_; may be defined for the whole system, s, or e|s).

3.2.1. Coherent States, Dimensional Reduction, and the f-Divergence

A special case, which is illustrative for its simplicity and which arises for an important sub-class of stochastic CRNs, is the case when ρ_ is a coherent state, an eigenvector of the lowering operator a. Coherent states are the generating functions of product-form Poisson distributions, or cross-sections through such products if the transitions in the population process satisfy conservation laws. They are known [92] to be general solutions for CRN steady states satisfying a condition termed complex balance, and the fixed points associated with such stationary distributions are also known to be unique and interior (no zero-expectations for any np) [72].

Let n_ be the eigenvalue of the stationary coherent state: aρn_=n_ρn_. Then the mean in the tilted distribution lies in a simple exponential family, n¯nρ_θ=eθn_ (component-wise), and the tilted macrostate ρn¯ is also a coherent state: aρn¯=n¯ρn¯.

The logarithm of a product-form Poisson distribution in Stirling’s approximation is given by

\log \rho^{\bar{n}}_{n} \approx - \left[ n \cdot \log \frac{n}{\bar{n}} - \left( n - \bar{n} \right) \cdot 1 \right] \equiv - D_f\left(n \,\|\, \bar{n}\right). (30)

Dfnn¯, known as the f-divergence, is a generalization of the Kullback–Leibler divergence to measures such as n which need not have a conserved sum. The Lyapunov function from Equation (4) reduces in the same Stirling approximation to

Dρn¯ρ_Dfn¯n_, (31)

giving S_effn¯ in Equation (29). On coherent states, the Kullback–Leibler divergence on distributions, which may be of arbitrarily large dimension, reduces to the f-divergence on their extensive state variables which have dimension P.
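A quick numerical confirmation of this reduction (with hypothetical two-species mean vectors chosen only for illustration): for product-form Poisson distributions, the Kullback-Leibler divergence computed over microstates equals the f-divergence of the means exactly, so the P-dimensional state variables carry the full divergence.

```python
import numpy as np
from scipy.stats import poisson

# For product-form Poissons with means n_bar and n_under, KL over microstates equals
#   D_f(n_bar || n_under) = sum_p [ n_bar_p log(n_bar_p / n_under_p) - (n_bar_p - n_under_p) ].
n_bar = np.array([3.0, 7.0])
n_und = np.array([5.0, 4.0])

f_div = float(np.sum(n_bar * np.log(n_bar / n_und) - (n_bar - n_und)))

grid = np.arange(0, 60)
kl = 0.0
for p in range(2):                                 # KL factorizes over species
    pa, pb = poisson.pmf(grid, n_bar[p]), poisson.pmf(grid, n_und[p])
    kl += float(np.sum(pa * np.log(pa / pb)))

print(kl, f_div)                                   # the two numbers coincide
```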

The coherent states play a much broader role than as exact solutions for the restricted case of complex-balanced CRNs. In the Doi–Peliti 2-field functional integral formalism [93,94,95,96] for generating functionals over discrete-state stochastic processes, the coherent states form an over-complete basis in the Peliti representation of unity. The saddle-point approximation on trajectories, which yields the classical actions (25) and the resulting Hamilton–Jacobi equations, approximates expectations in the exact distribution by those in the nearest coherent-state basis element. Observables in macrostates are thus mapped to observables in coherent states; although, in cases when the coherent state is not an exact solution, the saddle-point condition may be sensitive to which observable is being evaluated.

3.2.2. Multiple Fixed Points and Instantons

Systems with multiple metastable fixed points correspond to non-convex ρ_ and thus multiple modes. For these, monotone decrease of D(ρ‖ρ_) in Equation (4) does not entail monotonicity of the f-divergence in Equation (31). In such systems, first passages between basins of attraction are solutions to the Hamiltonian Equations (23) and (24) with momentum coordinate θ ≠ 0. Along these, S_eff(n¯) increases, and that increase is what is sometimes termed the “violation of the 2nd law”.

For unimodal ρ_, the θ ≠ 0 large-deviation trajectories have a separate use and interpretation from the relaxation trajectories at θ=0 that give the classical 2nd law in Equation (49). For multi-modal ρ_, a special sub-class of θ ≠ 0 trajectories, those known as instantons, responsible for first-passages between fixed points [63,64], must be used to refine the interpretation of classical relaxation trajectories. That refinement relates the transient increases in the large-deviation function to the deterministic 2nd law (4) that continues to apply.

This section briefly introduces the Legendre duality that defines first-passage probabilities in metastable systems, arriving at the chain rule for entropy that separates the roles of classical and instanton trajectories. Let n_ be a fixed point of the Hamiltonian equations for ρ_, and denote by nρ_(θ; n_) the values of classical state variables n¯ obtained along θ ≠ 0 trajectories from n_. Call these escape trajectories. The set of all n¯ is partitioned among basins of repulsion from fixed points. Saddle points and escape separatrices are limit points of escapes from two or more basins.

Within one such basin, we may construct S_eff as a Legendre transform of a summand Γn_ in the overall CGF, as

S_effn¯=maxθθn¯+Γn_θ. (32)

θ ranges only over the values that arise on escape trajectories from n_, which generally are bounded [91], and within that range

Γn_θθn¯=n¯. (33)

Next, let the function n_(n¯) denote the fixed point to which a trajectory with θ = 0 relaxes, starting from n¯. Note that the maps n_(n¯) and nρ_(θ; n_) need not be reflexive. That is, we may have nρ_(θ; n_(n¯)) ≠ n¯ for any θ, because escape and relaxation separatrices may differ. From the large-deviation identification S_eff(n) ≈ −log ρ_n |_{n→n}, recognize that the large-deviation function for the stationary distribution is given by S_eff(n_) = D(1_{n_}‖ρ_), where 1_{n_} denotes the special macrostate for which the stationary value is a fixed-point value n_.

Then, the expression (29) for the Kullback–Leibler divergence appearing in Equation (4) may be approximated to leading exponential order as

D\left(\rho^{\bar{n}} \,\|\, \underline{\rho}\right) \approx \underline{S}_{\mathrm{eff}}(\bar{n}) - \underline{S}_{\mathrm{eff}}\left(\underline{n}(\bar{n})\right) + D\left(1_{\underline{n}(\bar{n})} \,\|\, \underline{\rho}\right). (34)

Equation (34) is the chain rule for relative entropy. S_effn_n¯S_effn¯ is the conditional entropy of the macrostate n¯ relative to the macrostate 1n_n¯ to which it is connected by a θ=0 Hamiltonian trajectory. D1n_n¯ρ_ is the unconditioned entropy of 1n_n¯ relative to ρ_.

Relaxation along θ=0 trajectories describes a classical 2nd law only for the conditional part of the relative entropy. Deterministic relaxation of the unconditioned entropy is derived from the refinement of the (as it turns out, only apparent) classical trajectory with an instanton sum. The general method is described in depth in [63] and [64] Ch.7, using path integrals that require too much digression to fit within the scope of this paper. The structure of the instanton sum, and in particular the way it creates a new elementary stochastic process at the macroscale for which D1n_n¯ρ_ is the Lyapunov function, will be explained in Section 4.2.2 following [67], after the behavior of S_eff along θ=0 and θ0 trajectories has been characterized.

4. Population Processes with CRN-Form Generators

The results up to this point apply to general processes with discrete state spaces, normalizable stationary distributions, and some kind of system-environment tensor-product structure on states. For the next steps, we restrict to stochastic population processes that can be written in a form equivalent to chemical reaction networks [71,72,97]. CRNs are an expressive enough class to include many non-mechanical systems such as evolving Darwinian populations [81], and to implement algorithmically complex processes [98]. Yet they possess generators with a compact and simple structure [73,74], in which similarities of microstate and macrostate phenomena are simply reflected in the formalism.

4.1. Hypergraph Generator of State-Space Transitions

Population states n give counts of individuals, grouped according to their types p which are termed species. Stochastic chemical reaction networks assume independent elementary events grouped into types termed reactions. Each reaction event removes a multiset (a collection of distinct individuals, which may contain more than one individual from the same species) of individuals termed a complex from the population, and places some (generally different) multiset of individuals back into it. The map from species to complexes is called the stoichiometry of the CRN. Reactions occur with probabilities proportional per unit time to rates. The simplest rate model, used here, multiplies a half-reaction rate constant by a combinatorial factor for proportional sampling without replacement from the population to form the complex, which in the mean-field limit leads to mass-action kinetics.

Complexes will be indexed with subscripts i, j, …, and an ordered pair such as ji labels the reaction removing i and creating j. With these conventions, the Liouville-operator representation of the generator from Equation (13) takes the form [73,74]

L=jikjiψjaψiaψiaψTaAψa. (35)

kji are half-reaction rate constants, organized in the second line into an adjacency matrix A on complexes. ψ ≡ (ψ_i) is a column vector with components ψ_i(a) ≡ Π_p a_p^{y_ip} ≡ a^{Y_i} that, in the Doi algebra, produce the combinatorial factors and state shifts reflecting proportional sampling without replacement. Here, Y ≡ (y_ip) is the matrix of stoichiometric coefficients, with entry y_ip giving the number of individuals of species p that make up complex i. Further condensing notation, Y_i ≡ (y_ip) are column vectors in the index p, and Y^p ≡ (y_ip) are row vectors in the index i. a^{Y_i} is understood as the component-wise product of powers a_p^{y_ip} over the species index p. ψ^T is a transpose (row) vector, and vector and matrix products are used in the second line of Equation (35).

The generator (35) defines a hypergraph [99] in which A is the adjacency matrix on an ordinary graph over complexes known as the complex network [97,100]. The stoichiometric vectors Yi, defining complexes as multisets, make the edges in A directed hyper-edges relative to the population states that are the Markov states for the process. (Properly, the generator should be called a “directed multi-hypergraph” because the complexes are multisets rather than sets.) The concurrent removal or addition of complexes is the source of both expressive power and analytic difficulty provided by hypergraph-generators.

A master Equation (1) acting in the state space rather than on the generating function may be written in terms of the same operators, as

ρ˙n=ψTe/nAdiagψe/nΨnρn. (36)

Here, a formal shift operator e^{∂/∂n} ≡ (e^{∂/∂n_p}) is used in place of an explicit sum over shifted indices, and ψ_i(e^{±∂/∂n}) = e^{±Y_i^T ∂/∂n} creates shifts by the stoichiometric vector Y_i. diagψ refers to the matrix with diagonal entries given by the components ψi.

In the master equation, the combinatorial factors must be given explicitly. These are written as a vector Ψ ≡ (Ψ_i) with components Ψ_i(n) ≡ Π_p n_p!/(n_p − y_ip)!, the falling factorials of n to the powers Y_i, denoted in the falling-factorial notation with an underlined exponent.

The matrix elements of Equation (1) may be read off from Equation (36) in terms of elements in the hypergraph, as

w^{ji}_{n'n} \equiv k_{ji} \, \Psi_i(n), \qquad \text{where } n' = Y_j - Y_i + n, (37)

between all pairs n,n separated by the stoichiometric difference vector YjYi. If multiple reactions produce transitions between the same pairs of states, the aggregate rates become

w_{n'n} \equiv \sum_{ji \,:\, Y_j - Y_i = n' - n} w^{ji}_{n'n}. (38)

In this way, a finite adjacency matrix A on complexes may generate an infinite-rank transition matrix T, which is the adjacency matrix for an ordinary graph over states. We will see that for CRNs, the hypergraph furnishes a representation for macrostates similar to the representation given by the simple graph for microstates.
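The mapping of Equations (36)-(38) is easy to realize concretely. The sketch below (a hypothetical one-species network with reactions ∅ → A and 2A → A, truncated at n_max; all rate constants are illustrative) assembles the microstate transition matrix T from the stoichiometric data and falling-factorial activities, and checks that its columns sum to zero.

```python
import numpy as np

# Assemble T per Equation (37) for a 1-species toy CRN (illustrative rates).
# Complexes: 0, A, 2A with Y = 0, 1, 2; reactions: 0 -> A (k = 1.0), 2A -> A (k = 0.1).
n_max = 30
Y = {0: 0, 1: 1, 2: 2}
reactions = [(0, 1, 1.0), (2, 1, 0.1)]        # (i, j, k_ji): complex i -> complex j

def falling_factorial(n, y):                  # Psi_i(n) for a single species
    out = 1
    for m in range(y):
        out *= (n - m)
    return out

T = np.zeros((n_max + 1, n_max + 1))
for i, j, k in reactions:
    for n in range(n_max + 1):
        n_prime = n + Y[j] - Y[i]
        if 0 <= n_prime <= n_max:             # transitions leaving the truncation are disabled
            rate = k * falling_factorial(n, Y[i])
            T[n_prime, n] += rate             # off-diagonal element w_{n', n}
            T[n, n] -= rate                   # probability outflow on the diagonal

print(np.allclose(T.sum(axis=0), 0.0))        # columns of a generator sum to zero
```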

From Equation (38), marginal transition rates wnsnss for the system may be defined using the average (2) over ρes. Note that the dependence of the activity products Ψi on species p within the system remains that of a falling factorial, even if the average over activities of species in the environment is complicated. Denote by Ψsns and ψse±/ns the restrictions of the activity and shift functions to the species in the system s, and by kjisρes the rate constants after computing the sum in Equation (2) over the index for species in the environment.

4.1.1. Descaling of Transition Matrices for Microstates

Proofs of monotonic change, whether of total relative entropy (1) or for the components of entropy change partitioned out in Section 2.4, take a particularly simple form for CRNs generated by finitely many reactions. They make use of finite cycle decompositions of the current through any microstate or complex, which are derived from descaled adjacency matrices.

The descaling that produces a transition matrix that annihilates the uniform measure on both the left and the right is

T^ψTe/nAdiagψe/nΨndiagρ_n (39)

for the whole mesoscale, or

T^sψsTe/nsAsdiagψse/nsΨsnsdiagρ¯nss (40)

for the subsystem s, by definition of the stationary states. As in Equation (40) is the adjacency matrix on complexes that gives rate constants wnsnss from Equation (2). These descalings are familiar as the ones leading to the dual time-reversed generators in the fluctuation theorem for housekeeping heat [52].

We return to use the descaled microstate transition matrices (39) and (40) in monotonicity proofs for general CRNs in Section 5; but, before doing that, we use the descaling in the state space to motivate an analogous and simpler descaling for macrostates at the level of the hypergraph. That descaling illustrates the cycle decomposition on finitely many states, though it only yields the f-Divergence of Equation (31) as a Lyapunov function for the complex-balanced CRNs.

4.1.2. Descaling of Transition Matrices for Macrostates

The coherent states, which are the moment-generating functions of Poisson distributions and their duals in the Doi Hilbert space, are defined as

ϕeϕϕ0eϕaϕeaϕ0, (41)

where ϕϕp is a vector of (generally complex) numbers and ϕ is its Hermitian conjugate. They are eigenstates respectively of the raising and lowering operators with eigenvalues ϕap=ϕϕp* and apϕ=ϕpϕ.

On the constant trajectories corresponding to a fixed point of the Hamiltonian Equations (23) and (24), ϕ† ≡ 1 and ϕ ≡ ϕ_ = n_, the fixed-point number. We may descale the coherent-state parameters ϕ and ϕ†, which correspond to classical state variables, by defining

ϕpϕ_pφpϕp*ϕ_pφp*. (42)

In vector notation, ϕdiagϕ_φ, ϕdiagϕ_φ. The scaling (42) was introduced by Baish [101] to study dualities of correlation functions in Doi–Peliti functional integrals.

The compensating descaling of the adjacency matrix

A^Adiagψϕ_ (43)

may be compared with Equation (39) for T. A similar descaling may be done for As using ρ¯s. Henceforth, we omit the duplicate notation, and carry out proofs with respect to whatever is the stationary distribution for a given adjacency matrix and hypergraph.

The coherent-state action (25) with Liouville operator (35) has a symmetric form in descaled variables that is particularly useful for understanding the relaxation and escape trajectories in the Hamilton–Jacobi system:

S=dtϕdiagϕ_dtφψTϕ*A^ψφ. (44)

4.1.3. Equations of Motion and the L=0 Manifold

The Hamiltonian equations derived by variation of the action (44) can be written

diagϕ_φ˙=ψTϕA^ψφϕ=1YA^ψφϕ˙diagϕ_=ψTϕ*A^ψφφ=1ψTϕ*A^YT. (45)

The general solution is shown first in each line and then particular limiting forms are shown.

The sub-manifold ϕ† ≡ 1 contains all relaxation trajectories for any CRN. It is dynamically stable because A is a stochastic matrix on the left, and thus ψ^T(1)Â = 0 in the second line of Equation (45). The image of YA is called the stoichiometric subspace, and its dimension is denoted s ≡ dim im YA.

An important simplifying property of some CRNs is known as complex balance of the stationary distributions. It is the condition that the fixed point satisfies ψ(n_) ∈ ker A and not only ψ(n_) ∈ ker YA. Since n_ = ϕ_ corresponds to φ = 1 and thus ψ(φ) = 1, 1 ∈ ker Â and φ̇ = 0 in the first line of Equation (45) at any ϕ†. Complex-balanced CRNs (with suitable conditions on A to ensure ergodicity on the complex network [97]) always possess unique interior fixed points [72,102] and simple product-form stationary distributions at these fixed points [92]. Complex balance can be ensured at all parameters if a topological character of the CRN known as deficiency equals zero, but may also be true for nonzero-deficiency networks for suitably tuned parameters. In this work, nothing requires us to distinguish these reasons for complex balance.

For non-complex-balanced stationary solutions, although escapes may have φ = 1 as initial conditions, that value is not dynamically maintained. Recalling the definition (12) of the number operator, the field n_p = ϕ_p† ϕ_p, so non-constant ϕ† is required for instantons to escape from stable fixed points and terminate in saddle fixed points, in both of which limits ϕ† → 1.

All relaxations and also the escape trajectories from fixed points (the instantons) share the property that L ≡ 0 for a CRN with time-independent parameters. This sub-manifold separates into two branches, with ϕ† ≡ 1 for relaxations and ϕ† ≠ 1 for escapes. (We can see that this must be true because L is a Hamiltonian conserved under the equations of motion, and because instantons trace the values of n and θ in the stationary distribution ρ_, which must then have L=0 to satisfy the Hamilton–Jacobi Equation (22).)

4.1.4. The Schlögl Cubic Model to Illustrate

The features of CRNs with multiple metastable fixed points are exhibited in a cubic 1-species model introduced by Schlögl [103], which has been extensively studied [73,74,90] as an example of dynamical bistability.

The reaction schema in simplified form is

\emptyset \; \underset{\bar{k}_d}{\overset{k_d}{\rightleftharpoons}} \; A, \qquad 2A \; \underset{\bar{k}_c}{\overset{k_c}{\rightleftharpoons}} \; 3A. (46)

We choose rate constants so that the mass-action equation for the number of A particles, given by n, is

\dot{n} = \left( n_3 - n \right) \left( n_2 - n \right) \left( n_1 - n \right). (47)

The three fixed points are n_n1,n2,n3, of which n1 and n3 are stable, and n2 is a saddle.

The relaxation and escape branches of the L=0 manifold are shown in Figure 2. Because the Schlögl model is 1-dimensional, the condition L=0 fixes θn along escape trajectories. Because the stochastic model is a birth-death process, it is also exactly solvable [104]. We will return to the example in Section 6.2 to study properties of the intensive and extensive thermodynamic potentials for it.
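Because the model is a one-species birth-death process, its stationary distribution can be written down exactly by the standard detailed-balance recursion for one-dimensional chains. The sketch below uses rate constants chosen here only so that the mass-action cubic has roots n₁ = 40, n₂ = 80, n₃ = 160 (an illustrative choice, not values from the text); it exhibits the bistable structure as two local maxima of the stationary distribution separated by a minimum.

```python
import numpy as np

# Exact stationary distribution of the Schloegl birth-death chain, truncated at n_max.
# Rate constants (illustrative) chosen so that the mass-action equation is
#   n_dot = (n3 - n)(n2 - n)(n1 - n)  with  n1, n2, n3 = 40, 80, 160.
n1, n2, n3 = 40.0, 80.0, 160.0
k_d, kbar_d = n1 * n2 * n3, n1 * n2 + n1 * n3 + n2 * n3     # 0 -> A and A -> 0
k_c, kbar_c = n1 + n2 + n3, 1.0                             # 2A -> 3A and 3A -> 2A

def birth(n):   # total rate n -> n + 1 (mass-action, falling-factorial activities)
    return k_d + k_c * n * (n - 1)

def death(n):   # total rate n -> n - 1
    return kbar_d * n + kbar_c * n * (n - 1) * (n - 2)

n_max = 250
log_rho = np.zeros(n_max + 1)
for n in range(n_max):                       # one-dimensional detailed-balance recursion
    log_rho[n + 1] = log_rho[n] + np.log(birth(n) / death(n + 1))
rho = np.exp(log_rho - log_rho.max())
rho /= rho.sum()

modes = [n for n in range(1, n_max) if rho[n - 1] < rho[n] > rho[n + 1]]
print("local maxima at n =", modes)          # two modes; finite-size corrections shift
                                             # them somewhat from the mean-field n1, n3
```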

Figure 2. The two branches of the L = 0 manifold, corresponding to the rate Equation (47): ϕ† ≡ 1 (blue); ϕ† ≠ 1 (red). Trajectories at common n have the same value on the vertical axis. Time is shown moving toward the front.

4.2. Large-Deviation and Lyapunov Roles of the Effective Action

The role of entropy in the understanding of Boltzmann and Gibbs was that of a Lyapunov function [39,86], accounting for unidirectionality from microscopically reversible mechanics. The much later understanding of entropy in relation to fluctuations [43,44]—precisely the opposite of deterministic evolution—is that of a large-deviation function.

The chain rule in Equation (34) of Section 3.2.2 relates the conditional relative entropy associated with quasi-deterministic relaxation to an additional unconditioned entropy of metastable states that are stable fixed points of the Hamiltonian dynamical system. Here, we complete the description of the relation between the Lyapunov and large-deviation roles of the macrostate entropy (29), and show how the sum over instantons rather than a single Hamiltonian trajectory results in deterministic increase of the unconditioned relative entropy D1n_ρ_. In the Hamilton–Jacobi representation, this means constructing relations between the ϕ† ≡ 1 and the ϕ† ≠ 1 branches of the L=0 manifold. For special cases, such as CRNs with detailed balance or one-dimensional systems, the mapping is one of simple time reversal of the n fields in trajectories. More generally, even for complex-balanced CRNs where the Lyapunov and large-deviation functions are the same, the relations between relaxation and escape trajectories become more variable.

4.2.1. Convexity Proof of the Lyapunov Property of Macrostate Entropy in Hamilton–Jacobi Variables

Begin with the dynamics of the relative entropy S_effn¯ from Equation (29), as the state variable n¯ evolves along a relaxation solution to the Hamiltonian equations of motion. The properties of S_effn¯ follow from its construction as a large-deviation function along escape trajectories. From Equation (20), the time derivative of S_eff along an escape trajectory is given by

\frac{\partial \underline{S}_{\mathrm{eff}}}{\partial n} \bigg|_{n} \cdot \dot{n}_{\mathrm{esc}} = \theta_{\underline{\rho}}(n) \cdot \dot{n}_{\mathrm{esc}} \equiv \dot{\underline{S}}_{\mathrm{eff}} \Big|_{\mathrm{esc}}. (48)

Hamiltonian trajectories are least-improbable paths of fluctuation, with escapes conditionally dependent along trajectories (see Section 3.2). The conditional probability to extend a path, having reached any position along the path, is always positive, giving Ṡ_eff|esc ≥ 0 in Equation (48).

Next compute S_˙eff along a relaxation trajectory, for simplicity considering A (an equivalent construction exists for As in terms of ρ¯s). The continuum limit of the relative entropy from Equation (4) replaces Σ_n → ∫dn, and the continuously-indexed logρnn¯ and logρ_n are defined through the large-deviation functions.

Writing the CRN Liouville operator (35) in coherent-state arguments, the time dependence is evaluated as

D˙ρn¯ρ_dsnlogρnn¯ρ_nρ˙nn¯dsnlogρnn¯ρ_nψTzAψnzρnn¯=jikjidsnlogρnn¯ρ_neθYjYiψinρnn¯jikjidsnnplogρnn¯ρ_nn¯nn¯p×nn¯q2logρnn¯nqnrn¯YjYirψinρnn¯=jikjilogρ_nnpn¯YjYipψin¯=logρ_nnn¯·YA^ψin¯=θρ_n¯·n˙ρn¯0=S_˙effn¯rel. (49)

In Equation (49), z abbreviates zρn¯nexpθρn¯n from Equation (22), for the Seff that is the continuum approximation of logρn¯. The third through fifth lines expand A explicitly in terms of rate constants kji following Equation (35), to collocate all terms in θlogz in the Liouville operator. The fourth line expands logρn¯/ρ_ and θ to linear order in nn¯ in neighborhoods of the saddle point n¯ of ρn¯. The matrix 2logρ/nqnr is the inverse of the Fisher metric that is the variance nn¯pnn¯q [41], so the product of the two is just the identity δpr.

In the penultimate line of Equation (49), θρ_n¯ is the value of S_eff/n for the escape trajectory passing through n¯, and now n˙ρn¯0 is the velocity along the relaxation trajectory rather than the Hamilton–Jacobi escape solution at n¯. Therefore, the net effect of the large-deviation approximation on relative entropy has been to replace escape with relaxation velocity vectors at a fixed value of θ Legendre dual to n¯.

Lemma 1.

\theta_{\underline{\rho}}(\bar{n}) \cdot \dot{n}_{\rho^{\bar{n}}}(0) \leq 0.

Proof. 

The proof follows from four observations:

  1. ℒ = 0: As noted, for both escapes and relaxations, Lθ,n=0.

  2. Convexity: Both the potential value θρ_ for the escape trajectory, and the velocity n˙ of the relaxation trajectory, are evaluated at the same location n¯=nρn¯0. The Liouville function L=jikjieθYjYi1ψin, with all kji>0, is convex on the s-dimensional sub-manifold of fixed n. L is bounded above at fixed n, and in order for cycles to be possible, shift vectors YjYi giving positive exponentials must exist for all directions of θ in the stoichiometric subspace. Therefore, L diverges at large θ in every direction, and the region L>0 at fixed n is bounded. The boundary Lθ,n=0 at fixed n is likewise convex with respect to θ as affine coordinates, and L>0 is its interior.

  3. Chord: The vector θ0 is thus a chord spanning the L=0 sub-manifold of co-dimension 1 within the s-dimensional manifold of fixed n.

  4. Outward-directedness: The equation of motion n˙=L/θ gives n˙θ,n as the outward normal function to the surface Lθ,n=0. The outward normal at θ=0 is the classical relaxation trajectory. Every chord θ0 of the surface lies in its interior, implying that θρ_n·n˙0,n<0 for any n, and thus θρ_n¯·n˙ρn¯00.

 □

The conditional part of the relative entropy, Dρn¯ρ_, is thus shown to be monotone increasing along relaxation trajectories, which is the Lyapunov role for the entropy state function familiar from classical thermodynamics. That increase ends when n¯ terminates in the trajectory fixed point n_n¯.
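For the one-dimensional Schlögl model the Lemma can be checked in closed form. With the standard eikonal Hamiltonian for a birth-death process, H(θ, n) = b(n)(e^θ − 1) + d(n)(e^{−θ} − 1) (an assumed convention here, which may differ in sign bookkeeping from the L used in the text), the nontrivial root of H = 0 is θ_esc = log(d/b) and the relaxation velocity is b − d, so their product is automatically non-positive. The sketch below samples this over a range of n, using the same illustrative Schlögl rate constants as in the earlier sketches.

```python
import numpy as np

# 1-d check of the Lemma, theta_esc(n) * n_dot_rel(n) <= 0, for a birth-death process
# with the standard eikonal Hamiltonian H = b(e^theta - 1) + d(e^{-theta} - 1)
# (assumed convention).  H = 0 gives theta_esc = log(d/b); relaxation velocity is b - d.
n1, n2, n3 = 40.0, 80.0, 160.0                      # illustrative Schloegl fixed points
b = lambda n: n1 * n2 * n3 + (n1 + n2 + n3) * n**2          # smoothed (mean-field) birth rate
d = lambda n: (n1 * n2 + n1 * n3 + n2 * n3) * n + n**3      # smoothed (mean-field) death rate

for n in np.linspace(1.0, 200.0, 100):
    theta_esc = np.log(d(n) / b(n))
    n_dot_rel = b(n) - d(n)
    assert theta_esc * n_dot_rel <= 1e-12          # log(d/b) and (b - d) have opposite signs
print("theta_esc * n_dot_rel <= 0 over the sampled range")
```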

4.2.2. Instantons and the Loss of Large-Deviation Accessibility from First Passages

The deterministic analysis of Equation (49) is refined by the inclusion of instanton trajectories through the following sequence of observations, completing the discussion begun in Section 3.2.2. Relevant trajectories are shown in Figure 1.

  1. The 2nd law as formulated in Equation (4) is approximated in the large-deviation limit not by a single Hamiltonian trajectory, but by the sum of all Hamiltonian trajectories, from an initial condition. Along a single trajectory, Dρn¯ρ_ could increase or decrease.

  2. Dρn¯ρ_ increases everywhere in the sub-manifold θ=0 of the manifold L=0, by Equation (49). This is the classical increase of (relative) entropy of Boltzmann and Gibbs. Dρn¯ρ_ decreases everywhere in the submanifold θ0 of the manifold L=0, by Equation (48). This is the construction of the log-probability for large deviations. These escape paths, however, simply lead to the evaluations of S_effnlogρ_nnn, the stationary distribution.

  3. If a CRN has a single fixed point n_, there is a unique θ=0 trajectory from any starting n¯ to it, and Dρn¯ρ_ increases deterministically by Equation (49) along that single path. The black trajectory with arrow converging exactly in a domain fixed point is such a path in Figure 1.

  4. If a CRN has multiple fixed points and instantons, all trajectories are exponentially close to the exact θ=0 trajectory before they enter a small neighborhood around the terminus n_n¯ of the exact trajectory; that is, they give the appearance of being the black deterministic trajectory in Figure 1. The trajectory sum is correspondingly close to the deterministic relaxation that increases the conditional entropy S_effn_n¯S_effn¯ in Equation (34).

  5. On longer times, however, the infinite sum of formally distinct Hamiltonian trajectories disperses into a sum over series of instantons making a random walk among fixed points, with an integral for each passage over the possible times at which the escape occurs. (See [64] Ch.7.) Such a sum is shown as a tree of colored first-passage trajectories in Figure 1. The “cross-sectional” sum at a single observation time over instantons distinguished by their escaping times gives the same result as a longitudinal line integral along a single instanton between the start time and the observation. That integral of logρnn¯/nn˙ through a full passage (escape instanton + succeeding relaxation) gives S_effn_S_effn_=logwn_n_macro/wn_n_macro. The escape from fixed point n_ to a saddle between n_ and a fixed point n_ in an adjacent basin, which we denote n¯=n_n_, is an integral over Equation (48), while the relaxation from the saddle n¯ to the fixed point n_ is an integral over Equation (49). These are the classical “entropy fluctuations” of stochastic thermodynamics.

  6. The contribution to the probability of a trajectory from each instanton comes only from the θ0 sub-manifold, and is given by wn_n_macroeS_effn¯S_effn_eΔS_eff, just the leaving rate from the macrostate 1n_. The result, upon coarse-graining to the macroscale (see Table 1 and the top diagram in Figure 1) where first-passages become instantaneous elementary events, is a new stochastic process on discrete states corresponding to the mesoscale Hamiltonian fixed points n_,n_,. The coarse-grained counterpart to D1n_n¯ρ_ from Equation (34) is the Lyapunov function reduced by a transition matrix Tmacro with matrix elements wn_n_macro. The coarse-graining and the reduction of D1n_n¯ρ_ are described in detail for a 2-basin example in [67].

  7. The properties of Tmacro are exactly those we have assumed for T as inputs to the mesoscale, completing the self-consistency of our recursively-defined multi-level model universe.

5. Cycle Decompositions and Non-Decrease of Intrinsic and Extrinsic Relative Entropies

A diagram technique was devised by Terrell Hill and co-authors [105,106] (summarized in [107]), to both construct steady-state densities and decompose the associated heat dissipation into contributions from fluxes around a basis of cycles [108]. The expression for dissipation and the diagram method to obtain a basis of cycles were adopted in the network theory of master equations by Schnakenberg [76].

Both the mean and the fluctuations in the environmental or housekeeping entropy changes may be approximated in either the long-time or the ensemble limit from the counting statistics of directed cycle completions [108,109], together with methods to account for the relative occupancy of states within each cycle. The latter, single-time ensemble method will be used here to derive the positive-semidefiniteness of the relative entropy and housekeeping terms in Equation (11). The form of the CRN generator leads to a decomposition of entropy changes into sums of densities convected around a basis of cycles by the stationary-state currents. Positivity is proved by convexity arguments similar to those for the standard fluctuation theorems [21,52,53], and the cycle decomposition expresses the duality relations of those theorems in terms of shifts forward or backward around cycles.

Section 4 constructed the relation between the hypergraph adjacency matrix A and stoichiometry Y, and the microstate transition matrix T. Each reaction in the hypergraph acts regularly as a rule on indefinitely many microstates, making CRNs an example of rule-based systems [110]. This section extends that mapping to cycle decompositions. For finitely-generated CRNs, there is a finite basis of either cyclic flows through complexes or stoichiometrically coupled cycles called hyperflows [77], with the interpretation of mass-action currents. From these, a (generally infinite) basis of finite-length cycles can be constructed for flows in the microstate space, which projects onto the basis in the hypergraph. We first use the finite basis of macrostate currents to prove monotonicity results for f-divergences where those exist, showing how the projection of cycles can result in a projection of the Lyapunov property between levels.

The parallelism between microstates and macrostates only holds, however, when the bases for currents at both levels contain only cycles. More generally, when macrostate currents include non-cyclic hyperflows, the analogy between the two levels is broken, leading to loss of the f-divergence as a Lyapunov function on macrostate variables and generally to much greater difficulty of solving for the stationary distribution on microstates. The conditions for different classes of flows on the hypergraph are the basis for a complexity classification of CRNs, and both the form of the large-deviation function and the symmetry between its Lyapunov and large-deviation roles differ across classes.

5.1. Complexity Classes and Cycle Decompositions of Stationary Currents on the Hypergraph

Remarkably, the properties of classical fixed points in the Hamilton–Jacobi representation are sufficient to classify CRNs in a hierarchy with three nested levels of complexity, according to the mass-action currents at the fixed points. Let n_ again denote the fixed point.

  1. The CRNs with detailed balance are those in which kjiψin_=kijψjn_, for all pairs of complexes j,i connected by a reaction. Under the descaling (43), this condition is that A^T=A^.

  2. The CRNs with complex balance only require ikjiψin_=ikijψjn_, for each complex j, or under descaling, A^ψ1=0.

  3. The general case requires only YAψ(n_) = 0, the condition of current balance at each species p, or under descaling, only YÂ1 = 0.

The classes are summarized in Table 3,

Table 3.

Hierarchical categories of stationary distributions of chemical reaction networks (CRNs).

Case: Complexity Class
YÂ1 = 0: general case
Â1 = 0: complex-balanced
Âᵀ = Â: detailed-balanced
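These conditions are mechanical to test once Y, the rate constants, and a candidate fixed point are in hand. The sketch below uses the Schlögl stoichiometry with the illustrative rate constants from the earlier sketches; at the mass-action fixed point n = 40 it reports current balance without complex or detailed balance, the general case of Table 3.

```python
import numpy as np

# Classify a fixed point per Table 3 for the Schloegl complexes (0, A, 2A, 3A).
# Rate constants are the illustrative ones used above; k[j, i] is the constant for i -> j.
Y = np.array([0, 1, 2, 3])                     # stoichiometric coefficients (one species)
k = np.zeros((4, 4))
k[1, 0], k[0, 1] = 512000.0, 22400.0           # 0 -> A, A -> 0
k[3, 2], k[2, 3] = 280.0, 1.0                  # 2A -> 3A, 3A -> 2A

A = k - np.diag(k.sum(axis=0))                 # generator-form adjacency matrix on complexes
n_fix = 40.0                                   # candidate mass-action fixed point
psi = n_fix ** Y                               # classical activities psi_i = n^{Y_i}

print("current balance :", np.isclose(Y @ (A @ psi), 0.0))     # Y A psi = 0
print("complex balance :", np.allclose(A @ psi, 0.0))          # A psi = 0
print("detailed balance:", all(np.isclose(k[j, i] * psi[i], k[i, j] * psi[j])
                               for i in range(4) for j in range(4) if k[j, i] > 0))
```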

5.1.1. Complex Balance and Relations of the Lyapunov and Large-Deviation Roles of Seff

It is known that complex-balanced CRNs (with technical conditions to ensure ergodicity on the complex network under A [97,111]) possess unique interior fixed points [72], and moreover that their exact steady states ρ_ are products of Poisson distributions or sections through such products [92] defined by conserved quantities under the stoichiometry. Their generating functions are the coherent states from Section 3.2.1. It is immediate that all classical states obtained by exponential tilts with parameter z=eθ are likewise coherent states with state variables

n¯peθpn_p, (50)

and that the f-divergence of Equation (30) is the macrostate relative entropy function.

From the equations of motion (45), within either the ϕ† ≡ 1 or ϕ† ≠ 1 sub-manifolds, the time derivatives of S_eff along escape and relaxation trajectories, Equations (49) and (48) respectively, take the forms

S_˙effrel=lognn_·n˙rel=lognn_·YA^ψnn_S_˙effesc=lognn_·n˙esc=lognn_·YA^Tψnn_. (51)

By Equation (50), ψi appearing in Equation (51) evaluate to

\psi_i\left( \bar{n} / \underline{n} \right) = e^{\theta \cdot Y_i}. (52)

Finite cycle decomposition of the steady-state current:

By definition of complex balance, A^1=0, in addition to the general condition that 1TA^=0T. Any such matrix acting over finitely many complexes may be written as a sum of adjacency matrices for cyclic flows, which we index α, with positive coefficients ȷ¯α equal to the currents around those cycles in the stationary state. For the subclass with detailed balance, a decomposition in cycles of length 2 is always possible.
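The decomposition into cyclic flows can be carried out by simple greedy peeling: follow positive-flux edges until a complex repeats, subtract the limiting current around that loop, and repeat. The sketch below does this for a hypothetical balanced three-complex flow matrix (all numbers illustrative); F[j, i] is the stationary flux from complex i to complex j.

```python
import numpy as np

# Greedy peeling of a balanced flow into directed cycles with currents j_alpha.
# F[j, i] >= 0 is the stationary flux i -> j; inflow equals outflow at every complex.
F = np.array([[0.0, 0.2, 0.8],
              [0.7, 0.0, 0.0],
              [0.3, 0.5, 0.0]])
assert np.allclose(F.sum(axis=0), F.sum(axis=1))   # complex balance at every node

cycles, flow = [], F.copy()
while flow.max() > 1e-12:
    path = [int(np.unravel_index(flow.argmax(), flow.shape)[1])]   # start at a source node
    while True:
        nxt = int(np.argmax(flow[:, path[-1]]))    # follow the largest remaining flux
        if nxt in path:
            cyc = path[path.index(nxt):] + [nxt]
            break
        path.append(nxt)
    j_alpha = min(flow[cyc[m + 1], cyc[m]] for m in range(len(cyc) - 1))
    for m in range(len(cyc) - 1):
        flow[cyc[m + 1], cyc[m]] -= j_alpha        # remove the cycle's current
    cycles.append((cyc, j_alpha))

for cyc, j_alpha in cycles:
    print("cycle", cyc, "current", j_alpha)
```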

Letting jiα denote the sum over directed links ji in order around the cycle α, the trajectory derivatives (51) may be decomposed as

S_˙effrel=θ·n˙rel=αȷ¯αjiαeθYi·logeθYieθYjS_˙effesc=θ·n˙esc=αȷ¯αjiαeθYj·logeθYjeθYi. (53)

(· indicates the vector inner product over the species index p.)

Letting iα denote the sum over all complexes in the cycle α, the complex activities (52) may be normalized to vectors pαpiα with unit measure, as

p^{\alpha}_i \equiv \frac{e^{\theta \cdot Y_i}}{\sum_{j \in \alpha} e^{\theta \cdot Y_j}}. (54)

Then the trajectory derivatives (53), themselves the time derivatives of the f-divergence (34), may be written in terms of positive semidefinite KL divergences of the measures pα from their own images advanced or retarded by one complex around each cycle:

θ·n˙rel=αȷ¯αiαeθYijiαpiα·logpiαpjαθ·n˙esc=αȷ¯αiαeθYijiαpjα·logpjαpiα. (55)

Non-negativity of the KL divergences in every term of Equation (55) recovers the monotonicities of S_eff along relaxation and escape trajectories from Section 4.2.1. The total time derivatives are decomposed into terms over finitely many cycles, each independently having the same sign, a stronger decomposition than can be obtained from the convexity proofs for increase of relative entropy [76] for general CRNs.

Locally linear coordinates for classical entropy change in complex-balanced CRNs

Note that the cycle-KL divergences, multiplied by the density factors iαeθYi, define a coordinate system that is locally invertible for the coordinates θ except at isolated points:

x_{\alpha}^{-} \equiv \sum_{ji \in \alpha} e^{\theta \cdot Y_i} \log \frac{e^{\theta \cdot Y_i}}{e^{\theta \cdot Y_j}}, \qquad x_{\alpha}^{+} \equiv \sum_{ji \in \alpha} e^{\theta \cdot Y_j} \log \frac{e^{\theta \cdot Y_j}}{e^{\theta \cdot Y_i}}. (56)

The time derivatives of S_eff from Equation (53) may be written in these coordinates as a linear system,

S_˙effrel=D˙eθY1rel=θ·n˙rel=αȷ¯αxαS_˙effesc=D˙eθY1esc=θ·n˙esc=αȷ¯αxα+. (57)

Figure 3 shows examples of the coordinates (56) for a 2-cycle and a 3-cycle in the simplex of normalized activities ψi/jψj for three species iA,B,C.

Figure 3. Simplex for a probability distribution p = (pA, pB, pC), with two coordinates xAB and xABC of the form in Equation (56) on a 2-cycle and a 3-cycle.

5.1.2. Vorticity in the Flowfield of Stationary Trajectories

Recall (Table 3) that detailed balance means A^T=A^. In this case, Equation (51) implies equal and opposite change of S_eff along relaxations and escapes, because the two trajectories are time-reverses of each other. Detailed balance thus generalizes the time-reversal symmetry of 1-dimensional systems to any number of dimensions, and is a correspondingly restrictive condition. This is the assumption in classical thermodynamics that identifies the Lyapunov and large-deviation roles of the entropy state function.

Already for complex-balanced CRNs, exact time reversal is generally broken. The gradients of the tangent vectors to relaxation and escape trajectories, with respect to the exponential-family coordinates θ in Equation (50), evaluate to

$\frac{\partial\,\dot{\bar n}_p\big|_{\mathrm{rel}}}{\partial\theta_q} = Y_p\,\hat{\mathbb{A}}\;\mathrm{diag}\!\left(e^{\theta\cdot Y}\right)Y_q^{\,T}$
$\frac{\partial\,\dot{\bar n}_p\big|_{\mathrm{esc}}}{\partial\theta_q} = -\,Y_p\,\hat{\mathbb{A}}^{T}\,\mathrm{diag}\!\left(e^{\theta\cdot Y}\right)Y_q^{\,T}$. (58)

A measure of the time-asymmetry of the CRN is the vorticity, defined as the antisymmetric part of the matrices (58). This vorticity equals the gradient of the sum of tangent vectors to relaxations and escapes at the same point,

$\frac{\partial\left(\dot{\bar n}_{\mathrm{esc}}+\dot{\bar n}_{\mathrm{rel}}\right)_p}{\partial\theta_q} = Y_p\left(\hat{\mathbb{A}}-\hat{\mathbb{A}}^{T}\right)\mathrm{diag}\!\left(e^{\theta\cdot Y}\right)Y_q^{\,T}$. (59)

At the fixed point where $\theta = 0$, the vorticity is the antisymmetric part of the matrix $Y\hat{\mathbb{A}}\,Y^T$.
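A small numerical illustration of the vorticity (a hypothetical toy example of my own, not taken from the text): a single driven 3-cycle is complex-balanced but not detailed-balanced, and the antisymmetric part of $Y\hat{\mathbb{A}}\,Y^T$ is nonzero.

import numpy as np

# Toy complex-balanced, non-detailed-balanced CRN: one directed 3-cycle
# carrying unit stationary current. Columns of Y are complex compositions.
Y = np.array([[1, 0, 1],
              [0, 2, 1]], dtype=float)
A = np.zeros((3, 3))
for i, j in [(0, 1), (1, 2), (2, 0)]:
    A[j, i] += 1.0
    A[i, i] -= 1.0

M = Y @ A @ Y.T                       # gradient of the relaxation flow at theta = 0
vorticity = 0.5 * (M - M.T)           # antisymmetric part, Section 5.1.2
print(vorticity)                      # nonzero: exact time-reversal symmetry is broken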

5.1.3. Hyperflow Decomposition for Non-Complex-Balanced CRNs

For complex-balanced CRNs, $\hat{\mathbb{A}}$ describes flows on the complex network, which is an ordinary directed graph. For non-complex-balanced flows, $Y\hat{\mathbb{A}}\,\mathbb{1} = 0$ but $\hat{\mathbb{A}}\,\mathbb{1} \neq 0$. $\hat{\mathbb{A}}$ therefore cannot be written as a sum of adjacency matrices for currents on cycles in the complex graph. However, because the net current still equals zero for every species, a basis for the currents at any fixed point still exists in balanced hyperflows. If the elements of Y are all integer values (as they are for chemistry or for biological population processes), the basis elements are balanced integer hyperflows [77]. Each such flow is an assignment of a set of non-negative integers to every current with nonzero $k_{ji}$, such that the net currents at each species vanish.

We may continue to refer to basis hyperflows with index α by extension of the complex-balanced case, and for each basis element there is a non-negative current ȷ¯α which is the coefficient of that flow in the stationary-state current solution. Integer hyperflows cannot be used to express S_˙eff as sums of KL divergences with uniform sign, in keeping with the variety of complications that arise for S_˙eff in non-complex-balanced CRNs.
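As a sketch of how candidate basis flows might be computed in practice (my own illustration with a hypothetical reaction set; note that a true balanced integer hyperflow additionally requires non-negative coefficients on each directed reaction, which this sketch does not enforce), one can take integer-scaled null vectors of the net stoichiometric matrix:

from functools import reduce
from sympy import Matrix, lcm

# Columns: net species change of each directed reaction (toy network).
# Integer-scaled kernel vectors are candidate balanced flows.
S = Matrix([[ 1, -1,  1, -1],
            [-1,  1,  1, -1]])

basis = S.nullspace()
for v in basis:
    denominators = [term.q for term in v]       # denominators of the rational entries
    scale = reduce(lcm, denominators)            # clear them to obtain integer vectors
    print((v * scale).T)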

5.2. Cycle Decomposition in the Microstate Space, and Non-Decrease of Relative Entropy Components

Next we apply a cycle decomposition similar to the one in the previous section, but in the microstate space rather than on the hypergraph, to prove non-negativity of the first two terms in Equation (11). The crucial distinction between the two levels is that balanced integer flows can always be mapped to cycles in the state space. For cycles in the hypergraph, a natural map is unique; for more general flows many mappings may be possible. Relative entropies thus give Lyapunov functions more generally for distributions over microstates than for macrostate variables. However, unlike the hypergraph where basis decompositions are finite, in the state space they generally require solution for infinitely many cycle currents and are difficult except in special cases.

5.2.1. The System-Marginal Relative Entropy from ρ¯s

The system s and environment e must now be considered explicitly, because the housekeeping heat is a property of the embedding of s in the joint system se. We therefore work in system indices $n_s$ and descale with the steady state $\bar\rho^s$ at the instantaneous $\rho^{e|s}$. In the following derivations $\rho^s$ and $\rho^{e|s}$ can be any normalizable distributions; macrostates are no longer singled out. The time-change of $D\!\left(\rho^s\,\middle\|\,\bar\rho^s\right)$, fixing $\bar\rho^s$ as before, follows from Equation (1) and the form (40), as

$\dot D\!\left(\rho^s\,\middle\|\,\bar\rho^s\right) = \sum_{n_s}\log\!\left(\frac{\rho^s_{n_s}}{\bar\rho^s_{n_s}}\right)\dot\rho^s_{n_s} = \sum_{n_s}\log\!\left(\frac{\rho^s_{n_s}}{\bar\rho^s_{n_s}}\right)\psi^{sT}\!\left(e^{\partial/\partial n_s}\right)\mathbb{A}^s\,\mathrm{diag}\!\left[\psi^s\!\left(e^{-\partial/\partial n_s}\right)\right]\Psi^s_{n_s}\,\rho^s_{n_s} = \sum_{n_s'}\sum_{n_s}\log\!\left(\frac{\rho^s_{n_s}}{\bar\rho^s_{n_s}}\right)\hat{\mathbb{T}}^s_{n_s,n_s'}\,\frac{\rho^s_{n_s'}}{\bar\rho^s_{n_s'}}$. (60)

From the definition (40), $\mathbb{1}^T\hat{\mathbb{T}}^s = 0$ and $\hat{\mathbb{T}}^s\,\mathbb{1} = 0$, as was the case for complex-balanced $\mathbb{A}^s$. Therefore, $\hat{\mathbb{T}}^s$ can be written as a sum of adjacency matrices for cycles, weighted by the currents around those cycles in the steady state. For a general CRN, it would not be assured that these cycles were all of finite length or that there were only finitely many of them passing through any given state $n_s$. However, for a CRN in which both the dimensions of A and Y and their matrix elements are finite, the integer hyperflow decomposition under $\mathbb{A}^s$ and $Y^s$ is in turn finite, and the basis flows can be embedded in the microstate space to give a decomposition of $\hat{\mathbb{T}}^s$.

Let $\alpha$ index any basis of integer hyperflows spanning $\ker\!\left(Y\mathbb{A}\right)$. The cyclic flows spanning $\ker\mathbb{A}$ embed without ambiguity in the state space, as images of the cycles on the complex network. Each cycle $\alpha$ defines a sequence of state shifts $n \to n + Y_j - Y_i$ when the transitions $(ji) \in \alpha$ in the cycle are activated in order.

Non-complex-balanced integer flows also create cycles through states, because by construction the sum of stoichiometric shifts in a balanced flow is zero. However, there may be a choice in the way a given flow projects into the state space, depending on the realization, defined as the order in which the reaction events in the flow are activated. Any order is acceptable, as long as each state in the cycle can be reached by a reaction that can execute. (This condition is called reachability. It can be violated if there are boundary states with too few individuals to permit the input complex to some reaction to be formed. States may be grouped into equivalence classes under reachability, as a means to study network features such as autocatalysis. Reachability is beyond the scope of this paper. For the large-population limits to which large-deviation scaling applies, under finitely generated reactions, all states will have arbitrarily many members and will be equivalent under reachability, so all embeddings of a finite integer hyperflow will always complete.) We then extend the indexing α from the images of the complex-balanced cycles to include all the integer flows.

Once a cycle realization has been chosen for each integer hyperflow, an embedding of these flows in the state space is defined as follows. Pick an arbitrary starting complex in the cycle, and for each ns, embed an image of the cycle with the starting complex sampled at ns. Then every state is passed through by every cycle with each complex in the cycle sampling from that state exactly once. Every link has a finite number of cycles passing through it, because the number of integer flows spanning kerYA and the length of each flow are both finite.

A set of currents ȷ¯α,n that sum to the steady-state currents over each transition is then assigned to the cycles, indexed by the state n where cycle α samples at its starting complex. Solving for the values of the ȷ¯α,n is of course equivalent to solving for ρ¯, and is not generally a finite problem.

With these notations in place, Equation (60) becomes

$\dot D\!\left(\rho^s\,\middle\|\,\bar\rho^s\right) = -\sum_{n_s}\sum_\alpha \bar\jmath_{\alpha,n_s}\sum_{\left(n_s''\,n_s'\right)\in\alpha} \frac{\rho^s_{n_s'}}{\bar\rho^s_{n_s'}}\,\log\frac{\rho^s_{n_s'}/\bar\rho^s_{n_s'}}{\rho^s_{n_s''}/\bar\rho^s_{n_s''}}$. (61)

Equation (61) is a non-positive sum of KL divergences of probability ratios $\rho^s_{n_s}/\bar\rho^s_{n_s}$ referenced to their own values advanced around cycles, of the same form as Equation (53) for complex activities $e^{\theta\cdot Y_i}$ for macrostates on the hypergraph. The state-space divergence involves an additional sum over the reference state $n_s$. Because the sum over $\alpha$ is finite, the contribution at each $n_s$ is finite and proportional by finite factors to $\bar\rho^s_{n_s}$ within a finite neighborhood of $n_s$. Therefore, the sum (61) is finite. This proves that the first line of Equation (11) is non-decreasing.
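The monotonicity just proved is easy to probe numerically. The sketch below is a generic finite-state master equation with randomly chosen rates (an illustration of my own, not the CRN embedding itself); it computes $\dot D\!\left(\rho^s\middle\|\bar\rho^s\right) = \sum_n \log\!\left(\rho_n/\bar\rho_n\right)\dot\rho_n$ and confirms that it is non-positive for an arbitrary distribution.

import numpy as np

rng = np.random.default_rng(0)
N = 6

# Random irreducible generator: off-diagonal rates > 0, columns sum to zero.
T = rng.uniform(0.1, 1.0, size=(N, N))
np.fill_diagonal(T, 0.0)
np.fill_diagonal(T, -T.sum(axis=0))

# Stationary distribution: null vector of T.
w, V = np.linalg.eig(T)
rho_bar = np.abs(np.real(V[:, np.argmin(np.abs(w))]))
rho_bar /= rho_bar.sum()

rho = rng.dirichlet(np.ones(N))            # arbitrary normalizable distribution
rho_dot = T @ rho
D_dot = np.sum(np.log(rho / rho_bar) * rho_dot)
print(D_dot)                               # <= 0, as in Section 5.2.1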

5.2.2. Non-Negativity of the Housekeeping Entropy Rate

The cycle decomposition on the state space may also be used to prove non-negativity of the housekeeping entropy rate (10), by what amounts to a generating-function method similar to those in [51,52]. Here, rather than work with entropy changes over extended-time paths, we prove positivity moment by moment from the master equation.

Unlike the relative entropy within the system, the housekeeping entropy rate is not only a function of aggregated within-system transition rates wnsnss, but must be disaggregated to all distinct system-environment interactions. Begin by expressing the sum of all transition currents both in terms of elementary events and in the previous cycle decomposition:

Jρsnsρnssnsnsnnsnnsjiyjyi=nnwnnjiρnesnsρnssnsnswnsnss=nsαȷ¯α,nssnsnsαρnssρ¯nss, (62)

where $w^{ji}_{n'n}$ is labeled by the particular reaction connecting n and n′. (For notational simplicity, we suppose that each ordered pair of complexes ji is connected by at most one reaction. This assumption still allows a common difference vector $Y_j - Y_i$ to be mediated by several distinct pairs ji, and the generalization to more complex cases is straightforward.)

We follow the construction of correlation functions for a counting process from [53], for a quantity r with event-dependent values rji defined as

$r^{ji}_{n'n} \equiv \log\frac{w^{ji}_{n'n}\,\bar\rho^s_{n_s}\,\rho^{e|s}_{n}}{w^{ij}_{nn'}\,\bar\rho^s_{n_s'}\,\rho^{e|s}_{n'}}$. (63)

A total housekeeping entropy rate denoted S˙HK is a sum over pairs of system states, of the quantity in Equation (10). Written as an expectation of r, it is

S˙HKnsnsnsσnsnsHKwnsnss=nsαȷ¯α,nssnsnsαρnsρ¯ns1wnsnssnnsnnsjiyjyi=nnwnnjiρnesrnnji. (64)

The sign of Equation (64) can be deduced from convexity of the observable $e^{-r}$, as in the usual fluctuation theorems. r is nonzero only on transitions, the terms in the sum in Equation (62). Let dt be a short time interval. Then the expectation of any function of r on the interval will be $\left(1 - dt\,J(\rho^s)\right)$ times its value in an unchanging state, plus a term of order dt from transitions. Extracting the contribution of order dt for the expectations of 1 and $e^{-r}$ over the interval, denoted $\langle\cdot\rangle_{dt}$, gives

1dt1dtJρs=dtnsαȷ¯α,nsnsnsαρnssρ¯nsserdt1dtJρs=dtnsρnssρ¯nssnsnsnnsnnsjiyjyi=nnwnnjiρnesρ¯nssernnji=dtnsρnssρ¯nssnsnsnnsnnsjiyjyi=nnwnnjiρnesρ¯nss=dtnsρnssρ¯nssnsnswnsnssρ¯nss=dtnsαȷ¯α,nsnsnsαρnssρ¯nss. (65)

Between the third and fourth lines in the evaluation of $\left\langle e^{-r}\right\rangle_{dt}$, the index labels n and n′ are switched. Because the steady-state currents decompose into cycles, whether $\rho/\bar\rho$ is summed over the first index or over the second index along a cycle, the sum is the same. Hence, the two expressions in Equation (65) are the same. By Jensen's inequality, $\langle dr\rangle_{dt}/dt = \dot S_{\mathrm{HK}} \geq 0$, proving that Equation (64), which is the second line of Equation (11), is non-decreasing.
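A direct numerical counterpart of this argument (illustrative only, for a generic finite master equation rather than the CRN embedding used above): the housekeeping entropy rate, written per transition as $\sum_{n,n'} w_{n'n}\,\rho_n\,\log\!\left[\left(w_{n'n}\bar\rho_n\right)/\left(w_{nn'}\bar\rho_{n'}\right)\right]$, is non-negative for any distribution $\rho$.

import numpy as np

rng = np.random.default_rng(1)
N = 5
w = rng.uniform(0.2, 1.0, size=(N, N))       # w[n2, n1]: rate of the jump n1 -> n2
np.fill_diagonal(w, 0.0)

T = w.copy()
np.fill_diagonal(T, -w.sum(axis=0))           # generator, columns sum to zero
vals, vecs = np.linalg.eig(T)
rho_bar = np.abs(np.real(vecs[:, np.argmin(np.abs(vals))]))
rho_bar /= rho_bar.sum()

rho = rng.dirichlet(np.ones(N))               # arbitrary distribution
S_HK = 0.0
for n1 in range(N):
    for n2 in range(N):
        if n1 != n2:
            S_HK += w[n2, n1] * rho[n1] * np.log(
                (w[n2, n1] * rho_bar[n1]) / (w[n1, n2] * rho_bar[n2]))
print(S_HK)                                   # >= 0: housekeeping entropy rate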

Remark 1.

The Hatano–Sasa generating function is constructed to replace the transition matrix $\hat{\mathbb{T}}$ with its adjoint [53]. In Equation (65), this relation is reflected in the way $\left\langle e^{-r}\right\rangle_{dt}$ switches $\rho/\bar\rho$ to the tail position of links, whereas in $\left\langle 1\right\rangle_{dt}$ it is in the head position. Exactly the same sum arises if positivity of $\dot D\!\left(\rho^s\middle\|\bar\rho^s\right)$ is proved by a similar convexity argument, from the expectation of $\left(\rho^s_{n_s}/\bar\rho^s_{n_s}\right)\big/\left(\rho^s_{n_s'}/\bar\rho^s_{n_s'}\right)$ over an interval dt, which is the inverse exponential of the log-ratio of Hartley information appearing in Equation (61). Thus the two fluctuation theorems are for generating functions spanned by the same chord between two probability measures, though the counting observables in the two generating functions are distinct.

6. Examples

6.1. Dualizing the Housekeeping Embedding Thermodynamics

In the first example, the system s is elementary: a 2-state model of polymerization and hydrolysis. Our interest is in how the embedding of such a model in a 1-parameter family of environments is captured in the intensive and extensive thermodynamic parameters, when these produce identical distributions ρ¯s, but with differing housekeeping entropy rate.

The motivation for the model is a question from astrobiology: to what extent can two environments be considered analogues simply because they produce similar distributions for a material considered to be of biological interest? For instance, is Titan an analogue to early Earth if both are believed to support significant polymerization of small organic molecules [112,113,114], even if polymers on Titan are stable and near equilibrium at low water activity, whereas Earth produced them (putatively) through a competition between ligation driven by disequilibrium leaving groups such as phosphates [115,116] or thioesters [117,118] and disequilibrium hydrolysis?

The polymerization/hydrolysis mechanism, though elementary, can be a foundation for more complex heterogeneous polymerization models [85], in which polymers once formed may be sensitive to other consequences of water activity, such as propensity to fold or to aggregate. We ask, to what extent can the rate balance that governs polymerization, and water activity per se, be decoupled as measures of environmental similarity. The example will show that in the limit where one driving buffer goes to zero concentration and a subset of reactions become strictly irreversible, the two parameters can be made independent.

In this example, all influences on what we should call thermodynamic order come from kinetics in buffered non-equilibrium settings. Total entropy production is uninformative because the entropy associated with the polymer distribution sits atop a housekeeping heat that can be varied freely. Even knowing that housekeeping heat depends on a difference of chemical potentials gives no information when that difference is taken to infinity in an irreversible limit. Thus, none of the conservation laws linking the model to microscopically reversible mechanics provide information beyond what is in the transition matrix. Yet the thermodynamics of the process retains a regular representation: dualization of the housekeeping heat introduces an intensive state variable that is just the regular current through the polymerization/hydrolysis cycle, even in the strictly irreversible limit.

6.1.1. One System, Families of Environments

Elementary reactions:

For simplicity, we let environmental distributions $\rho^{e|s}$ be Poisson with large capacity as a model of chemostats, and omit them from the explicit notation. Half-reaction rate constants are denoted $\hat k_i$ and $\hat{\bar k}_i$, where i is a label.

The first reaction type in the model is reversible dehydrating polymerization and hydrolysis, with schema

$A \;\underset{\hat{\bar k}_1}{\overset{\hat k_1}{\rightleftharpoons}}\; A^* + H_2O$. (66)

A are monomers capable of some oriented bonding (as seen for peptide or nucleotide bonds), and we will refer to one side of the oriented link as the bond donor and the other as the acceptor. Unmarked A are monomers unligated at the donor site, which may be free monomers in solution, or monomers ligated at other sites in a polymer, and which we take to be buffered to unit activity. A* are monomers ligated at the donor site within polymers. n denotes the number of A*, which will be the counting index and is the number of anhydrous bonds available to be hydrolyzed. These may cleave free monomers from the end of a chain, or may cleave a chain internally; the same rate is assumed for either. For the calculation in this example, the distribution of polymer sizes formed is not of interest, and we take n to be the system state, which changes under a Poisson process. Appendix A briefly relates the Poisson model to ensembles of distributions for linear polymers. Water is also buffered, at activity $a_{H_2O}$, which is varied as a control parameter across a family of environments.

A second process, involving only environment species to define a reference scale for chemical potentials, is hydrolysis of a phosphoanhydride bond, with schema

$P^* + H_2O \;\underset{\hat{\bar k}_2}{\overset{\hat k_2}{\rightleftharpoons}}\; P$. (67)

P* is a bound form with activity aP*, and P a hydrolyzed form, with activity aP. (Equation (67) is a stand-in for a variety of phosphoryl group-transfer processes, some linear as shown and some higher-order in the hydrolyzed species [116]. For higher-order reactions, appropriate powers of the activity would replace aP in the expressions below, without otherwise changing the results.)

Monomer ligation driven by phosphate hydrolysis is the stoichiometrically coupled reaction

$A + P^* \;\underset{\hat{\bar k}_3}{\overset{\hat k_3}{\rightleftharpoons}}\; A^* + P$. (68)

Equilibrium constants for the three reactions (66)–(68) are denoted $\hat K_i = \hat k_i/\hat{\bar k}_i$. In actual chemistry, detailed balance implies the relation $\hat K_3 = \hat K_1\,\hat K_2$. To simplify notation, we will choose activity units to set $\bar k_3/\bar k_1 = 1$. Reaction 2 contributes only to dissipation internal to the environment (the third summand in Equation (11)), and we omit it, so the scale of $\bar k_2$ never enters. The system of coupled reactions (66) and (68), and the resulting state space for n, are diagrammed in Figure 4.

Figure 4. Illustration of the reaction schema (68). Species P* are modeled as anhydrous phosphates with any context beyond the ligating P. Monomers A could be hydroxyacids, amino acids, carbohydrates, or any other molecules with complementary H and OH groups subject to dehydrating condensation. R and R′ are any ligands at sites other than the focal bond. Harpoons ⇌ remind that both reactions are bidirectional; grey arrows indicate net flux for driven polymerization. The state space (shown below) is indexed by the number of hydrolyzable A–A bonds n. Three independent cycles exist between any two adjacent states, in the cycle decomposition of Section 5.2 for non-equilibrium relaxation.

Poisson stationary distribution over polymer lengths:

The stationary distribution ρ¯s for the number n of anhydrous AA bonds under the joint action of reactions (66) and (68) is Poisson with parameter Keff, given by

$\frac{\bar\rho^s_{n+1}\,(n+1)}{\bar\rho^s_n} \equiv K_{\mathrm{eff}} = \frac{\hat K_1 + \hat K_3\,a_{P^*}}{a_{H_2O} + a_P} = \underline n$. (69)

As shown in Equation (A7), the typical length of polymers thus formed is $\langle l\rangle = 1 + \underline n$.
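A quick numerical check of Equation (69) (with toy parameter values of my own choosing): iterating the one-step balance $\bar\rho^s_{n+1}(n+1) = K_{\mathrm{eff}}\,\bar\rho^s_n$ reproduces a Poisson distribution with parameter $K_{\mathrm{eff}}$, from which the mean polymer length $1 + \underline n$ follows.

import numpy as np
from scipy.stats import poisson

# Assumed toy activities and equilibrium constants.
K1, K3 = 2.0, 4.0                  # \hat K_1 and \hat K_3 = \hat K_1 \hat K_2
a_P_star, a_H2O, a_P = 1.5, 0.8, 0.5

K_eff = (K1 + K3 * a_P_star) / (a_H2O + a_P)    # Equation (69)

nmax = 200
rho = np.zeros(nmax)
rho[0] = 1.0
for n in range(nmax - 1):
    rho[n + 1] = K_eff * rho[n] / (n + 1)        # one-step stationary balance
rho /= rho.sum()

print(np.abs(rho - poisson.pmf(np.arange(nmax), K_eff)).max())   # ~ 0
print("mean length:", 1.0 + np.dot(np.arange(nmax), rho))         # = 1 + K_eff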

The varieties of chemical disequilibrium:

We are interested in families of environments that share the same value of Keff and so are indistinguishable from within the system. Chemical-potential coordinates within such a family are

$\log\frac{K_{\mathrm{eff}}\,a_{H_2O}}{\hat K_1} \equiv \mu_H, \qquad \log\frac{\hat K_2\,a_{P^*}\,a_{H_2O}}{a_P} \equiv \mu_P$. (70)

μH is the chemical potential for hydrolysis in the stationary polymer distribution. μP is the chemical potential of the phosphate system for hydrolysis.

Within surfaces of constant Keff we consider variation of phosphate and water activities along contours of fixed aP/aH2O, and vary this ratio later to define routes to irreversibility. (aP* is a nonlinear function of the value of aP along this contour.) The thermodynamic equilibrium corresponds to μP=μH=0, or

K^2aP*_=aPaH2OaH2O_K^1=1Keff. (71)

At equilibrium, the sub-system stationary distribution ρ¯s is also the marginal ρ_s of the whole-system equilibrium; to emphasize that it is fixed as μH and μP are varied across a family of environments, we reference distributions to ρ_s.

The activities governing reaction fluxes then depend on the coordinates (70) as

$a_{H_2O} = \frac{\hat K_1}{K_{\mathrm{eff}}}\,e^{\mu_H}, \qquad a_P = \frac{\hat K_1}{K_{\mathrm{eff}}}\,e^{\mu_H}\,\frac{e^{\mu_H}-1}{e^{\mu_P}-e^{\mu_H}}, \qquad a_{P^*} = \frac{1}{\hat K_2}\,e^{\mu_P}\,\frac{e^{\mu_H}-1}{e^{\mu_P}-e^{\mu_H}}$. (72)

Fixed aP/aH2O contours near and far from equilibrium satisfy

$\frac{\mu_P-\mu_H}{\mu_H}\;\xrightarrow{\;\mu_P\to 0\;}\;\frac{a_{H_2O}}{a_P}, \qquad \mu_P-\mu_H\;\xrightarrow{\;\mu_P\to\infty\;}\;\log\frac{a_{H_2O}}{a_P}$. (73)

Cycle decomposition of steady-state currents:

If environmental marginals ρes are chemostats at the indicated chemical potentials, then currents in the stationary system distribution at a state n are proportional to ρ_ns by factors that do not depend on n, and can be decomposed into three “specific currents” around cycles, which are functions of the chemostat activities

$\hat\jmath_1 = \frac{\hat K_1}{2}\left(e^{\mu_H}+1\right), \qquad \hat\jmath_3 = \frac{\hat K_1}{2}\,\frac{\left(e^{\mu_H}-1\right)\left(e^{\mu_P}+e^{\mu_H}\right)}{e^{\mu_P}-e^{\mu_H}}, \qquad \hat\jmath_\delta = \hat K_1\left(e^{\mu_H}-1\right)$. (74)

ȷ^1 is the average of forward and reverse currents in reaction (66), and ȷ^3 the average of forward and reverse currents in reaction (68), divided by ρ_ns. ȷ^δ is the difference of forward and reverse currents, which must be equal and opposite in reactions (66) and (68) in stationary state, also divided by ρ_ns; it is the only current in Schnakenberg’s fundamental graph [76] for this CRN. ȷ^δ is so named because we have called it a “δ-flow” in [73,74]. The number of possible independent net flows in the stationary state of a CRN equals Feinberg’s [72] topological characteristic termed deficiency and denoted δ. The CRN of this example has δ=1. The fundamental graph omits currents ȷ^1 and ȷ^3 because they do not lead to dissipation in the stationary state, but they do for more general states.

6.1.2. The Housekeeping Entropy Rate

The housekeeping entropy rate (64), for an arbitrary system distribution ρs, evaluates in terms of the specific currents (74) and the density ρ_s, to

S˙HKnnnσnnHKwnnsρns=nρρ_n+1sȷ^1+ȷ^δ2μHȷ^3ȷ^δ2μPμHn+ρρ_nsȷ^3+ȷ^δ2μPμHȷ^1ȷ^δ2μHρ_nsȷ^δμP+dnρ_nsnρρ_nsȷ^1μH+ȷ^3μHμP. (75)

The second expression is a continuum approximation to first order in derivatives, with logρns understood as usual to be approximated by the appropriate large-deviation function Seff.

To check that the exact (discrete) form of Equation (75) is positive semidefinite for arbitrary ρs, define measures

$p \equiv \frac{\hat\jmath_1-\hat\jmath_\delta/2}{\hat\jmath_1+\hat\jmath_3}, \qquad 1-p \equiv \frac{\hat\jmath_3+\hat\jmath_\delta/2}{\hat\jmath_1+\hat\jmath_3}, \qquad q \equiv \frac{\hat\jmath_1+\hat\jmath_\delta/2}{\hat\jmath_1+\hat\jmath_3}, \qquad 1-q \equiv \frac{\hat\jmath_3-\hat\jmath_\delta/2}{\hat\jmath_1+\hat\jmath_3}$. (76)

From the formulae (74), it follows that

$\frac{p}{q} = e^{-\mu_H}, \qquad \frac{1-p}{1-q} = e^{\mu_P-\mu_H}$, (77)

and thus Equation (75) can be written

$\dot S_{\mathrm{HK}} = \sum_n \underline\rho^s_n\left(\hat\jmath_1+\hat\jmath_3\right)\left[\left(\frac{\rho}{\underline\rho}\right)^{\!s}_{n+1} D\!\left(q\,\middle\|\,p\right) + \left(\frac{\rho}{\underline\rho}\right)^{\!s}_{n} D\!\left(p\,\middle\|\,q\right)\right]$, (78)

with $D\!\left(q\,\middle\|\,p\right)$ a Kullback–Leibler divergence as elsewhere.
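The relations (76) and (77) can be confirmed numerically from the specific currents (74). The sketch below (arbitrary assumed values for $\mu_H$, $\mu_P$, and $\hat K_1$) checks that $p/q = e^{-\mu_H}$ and $(1-p)/(1-q) = e^{\mu_P-\mu_H}$, which is what makes the form (78) vanish only at $\mu_P = \mu_H = 0$.

import numpy as np

K1 = 1.0
mu_H, mu_P = 0.6, 1.9        # assumed drive values (mu_P > mu_H > 0)

# Specific cycle currents, Equation (74)
j1 = 0.5 * K1 * (np.exp(mu_H) + 1.0)
j3 = (0.5 * K1 * (np.exp(mu_H) - 1.0) * (np.exp(mu_P) + np.exp(mu_H))
      / (np.exp(mu_P) - np.exp(mu_H)))
jd = K1 * (np.exp(mu_H) - 1.0)

# Measures of Equation (76)
p = (j1 - jd / 2.0) / (j1 + j3)
q = (j1 + jd / 2.0) / (j1 + j3)

print(p / q, np.exp(-mu_H))                          # Equation (77), first relation
print((1.0 - p) / (1.0 - q), np.exp(mu_P - mu_H))    # Equation (77), second relation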

Housekeeping entropy rate as an embedding vector:

Equation (78) can be put in the form

$\dot S_{\mathrm{HK}} = \sum_n \rho^s_n\,\dot\sigma^{\mathrm{HK}}_n$. (79)

$\dot\sigma^{\mathrm{HK}} \equiv \left[\dot\sigma^{\mathrm{HK}}_n\right]$ is a vector with positive-semidefinite components, which by Equation (77) equals zero only at $\mu_P = \mu_H = 0$.

As noted, $\bar\rho^s$ is an extremal vector for the intrinsic relative entropy $D\!\left(\rho^s\,\middle\|\,\bar\rho^s\right)$ in the simplex of distributions $\rho^s$. By Equation (75), at this extremal point of s, $\sum_n \bar\rho^s_n\,\dot\sigma^{\mathrm{HK}}_n = \hat\jmath_\delta\,\mu_P$.

6.1.3. Legendre Duality for Housekeeping Entropy Rate

The chemical potential $\mu_P \to \infty$ if the activity $a_P \to 0$ at fixed $a_{P^*}$ and $a_{H_2O}$. In this limit, schema (68) becomes an irreversible reaction. Phenomenologically, all "thermodynamic" characteristics of the system remain regular; only the energetic accounting breaks down, because it is referenced to the equilibrium state variable $\log a_P$ in a system that nowhere couples to an equilibrium environment.

The divergence of S˙HK in this limit is like the divergence of any extensive thermodynamic potential in the limit that one of the extensive state variables diverges, except that S˙HK is an inherently non-equilibrium potential, and μP only behaves like an extensive variable with respect to dissipation. (Note that S˙HK has no status with respect to equilibrium; while entropies are extensive, S˙HK depends inherently on a rate.)

From the view that thermodynamics is about statistical organization and not fundamentally about energy, it is natural to Legendre transform S˙HK to expose the dynamically intensive current that is dual to μP. For dualization, we work within the tangent space to the family of constant Keff, but now vary μP independently of μH, rather than varying within the non-linear contours (71) at fixed aP/aH2O.

The gradient of S˙HK with respect to μP at fixed μH is

S˙HKμP=K^1nρ_nseμH1eμPeμHeμPρρ_nseμHρρ_n+1seμPeμHeμP+eμHK^1Keff+eμPeμHeμP+eμHρρ_n+1sρρ_n+1sμHμPȷ^δdnρ_nsnρρ_nsȷ^3+ȷ^3μPμPμH. (80)

By Equation (74), $\hat\jmath_3\left(\mu_P-\mu_H\right)$ is convex in $\mu_P$, so its derivative, the term in square brackets in the final line of Equation (80), is invertible to a value for $\mu_P$. The Legendre dual potential to $\dot S_{\mathrm{HK}}$ on $\mu_P$, which we denote $\dot F_P$, is then given by

F˙PS˙HKμP,μHμPS˙HKμPS˙HKdnρ_nsnρρ_nsȷ^3μPμPμPμH+ȷ^3+ȷ^1μH. (81)

Independent variation of control parameters in the irreversible limit:

We may now consider the effects of varying $\mu_H$, which remains finite, across the one-parameter family of contours of different $a_P/a_{H_2O}$ as $a_P \to 0$. $\hat\jmath_3 + \hat\jmath_1$ approaches a constant independent of $\mu_P$, and the only term in Equation (81) including multiples of diverging $\mu_P$ evaluates to

ȷ^3μPμPμPμHK^1aP+aH2OaH2OμH113!aH2OaP2μH2:μH02ȷ^3eK^1/KeffaPμHμPμPμH:μH1. (82)

It approaches $\hat K_1\,\mu_H$ within a range around $\mu_H = 0$ that becomes vanishingly small as $a_P \to 0$, and converges exponentially to zero outside that range.

Thus, the susceptibility that is the Legendre dual to $\mu_P$, $\partial\dot S_{\mathrm{HK}}/\partial\mu_P \to \hat\jmath_\delta$, a function only of $\mu_H$, almost everywhere. The potential $\dot F_P$ retains only the $\mu_P$-independent terms in Equation (81). If, for example, we choose for $\rho^s$ a macrostate with mean $\bar n$, then $\int dn\,\underline\rho^s_n\,\partial\!\left(\rho/\underline\rho\right)^s_n/\partial n = \bar n - \underline n$ and $\dot F_P \to \left(\bar n - \underline n\right)\left(\hat\jmath_3+\hat\jmath_1\right)\mu_H$.

In the irreversible limit the problem is seen to separate. ρ¯s dictates the intrinsic thermodynamics of the polymer system, while along a contour that fixes Keff, the dual potential F˙P describes the non-trivial dependence of entropy change in the environment on ρs. S˙HK diverges harmlessly as an extensively-scaling potential independent of ρs with a regular susceptibility ȷ^δ.

6.2. Metastability and ρ¯s as the Reference Measure for Relative Entropy

The second example uses the same buffered driving environment as the first, but separates phosphate-driven ligation into a distinct, self-catalyzed channel while leaving the reaction (66) with water uncatalyzed. The model for system catalysis is a variant on the cubic Schlögl model of Figure 2. It is chosen to highlight the difference between the entropy rate (8) typically assigned as “environmental” and the housekeeping entropy rates (10). A quantity S˙env is a function only of the realized distributions ρs and ρes. Nowhere does it reflect the joint participation of uncatalyzed and catalyzed reactions within the same system.

The difference $\dot S_{\mathrm{HK}} - \dot S_{\mathrm{env}}$ may have either sign. We illustrate the case where phosphate activities are chosen to put the system in its bistable regime, and $\mu_P$ is chosen to make the unstable root $n_2$ of the rate Equation (47) a fixed point. For equivalent rate constants a Poisson distribution $\rho^s$ at mean $n_2$ would be the driven stationary distribution in the previous example, and the $\dot S_{\mathrm{env}}$ is the same in the two models. However, $\dot S_{\mathrm{HK}} - \dot S_{\mathrm{env}} < 0$, because the loss of large-deviation accessibility in the system results not only from its Shannon entropy change, but from the measure of that entropy relative to the true, bistable stationary distribution $\bar\rho^s$.

The autocatalytic model replaces the schema (68) with

$2A^* + A + P^* \;\underset{\hat{\bar k}_4}{\overset{\hat k_4}{\rightleftharpoons}}\; 3A^* + P$. (83)

Catalysis does not change energetics, so the rate constants must satisfy K^4=K^1K^2.

In the mass-action rate law (47), the roots are $n_1 < n_2 < n_3$, and $n_2$ is unstable. We set a characteristic scale for transitions between water and phosphate control by choosing relative rate constants $\bar k_4/\bar k_1 = 1/n_2^2$.

The Schlögl model is a birth–death process, so ρ¯s is exactly solvable. In place of Keff from Equation (69) adjacent indices are related by a position-dependent effective equilibrium constant

$\frac{\bar\rho^s_{n+1}\,(n+1)}{\bar\rho^s_n} \equiv K_n = \frac{\hat K_1 + \hat K_4\,a_{P^*}\,n(n-1)/n_2^2}{a_{H_2O} + a_P\,n(n-1)/n_2^2}$. (84)

Note that $K_{n_2} \approx K_{\mathrm{eff}}$ to integer rounding error, at all activity levels.

In terms of the same roots, choose water activity to set the lower limit of Equation (84) at n1 to

$\frac{\hat K_1}{a_{H_2O}} = \frac{1}{\sum_i 1/n_i} \equiv \underline n$. (85)

We will vary the activities in the phosphorus system so that ρ¯s moves from a lower stable mode near equilibrium, through a bistable phase, to an upper stable mode driven by phosphorylation farther from equilibrium.

The cubic Schlögl model has an appealing interpretation as a phenomenological model. Note from Appendix A that for a Poisson distribution with parameter K corresponding to a root $n_i$, the typical polymer length is $\langle l\rangle = 1 + K$ by Equation (A7). If $K \approx n_1$ characterizes polymers too short to fold, and $K \approx n_3$ characterizes polymers in which folding with some catalytic consequence occurs with non-vanishing frequency ($\sim l^2$, suggesting self-interactions within a string), then the catalyzed channel for ligation first arises in proportion to the occurrence of folded polymers.

Equilibrium is again defined as in Equation (71), and there $K_n = \underline n$ for all n. The marginal distribution for s in the whole-system equilibrium, $\underline\rho^s$, is then Poisson with parameter $\underline n$. Recall that, as a solution in detailed balance, it does not and cannot reflect the fact that s is a CRN capable of bistability. To match rate constants to the roots $n_1, n_2, n_3$ of the Schlögl model, we define a reference equilibrium for the phosphorus system with activity levels at

$\hat K_2\,\underline a_{P^*} = \frac{\underline a_P}{a_{H_2O}} = 1$. (86)

We now consider a family of driving environments in which the chemical potential μP is partitioned between the activities aP* and aP in the proportions

$\log\frac{a_{P^*}}{\underline a_{P^*}} = \frac{\log\!\left(\sum_i n_i\; n_2^2\big/\prod_i n_i\right)}{\log\!\left(\sum_i n_i\,\sum_i 1/n_i\right)}\;\mu_P, \qquad \log\frac{a_P}{\underline a_P} = -\,\frac{\log\!\left(\prod_i n_i\,\sum_i 1/n_i\big/n_2^2\right)}{\log\!\left(\sum_i n_i\,\sum_i 1/n_i\right)}\;\mu_P$. (87)

The two potentials (87) are coordinates in an exponential family for the phosphorus system. Where $\mu_P = \log\!\left(\sum_i n_i\,\sum_i 1/n_i\right)$, the three roots of the mass action law pass through the values $n_1, n_2, n_3$ from Figure 2.
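Equation (84) makes the bistable stationary state easy to tabulate. The following sketch (an illustration of my own; the roots are assumed values, here (10, 40, 90) so that the bistability is pronounced, and the activities are set directly from the root-matching, which at the special value of $\mu_P$ just mentioned is equivalent to the choices in Equations (85)–(87)) computes $\bar\rho^s$ by the exact birth–death recursion and exhibits its two modes.

import numpy as np

# Assumed roots of the mass-action cubic, and activities placing the roots there.
n1, n2, n3 = 10.0, 40.0, 90.0
K1, K2 = 1.0, 1.0
K4 = K1 * K2
a_H2O = K1 * (1/n1 + 1/n2 + 1/n3)            # Equation (85)
a_P = K1 * n2**2 / (n1 * n2 * n3)            # places the cubic's roots at n1, n2, n3
a_P_star = a_P * (n1 + n2 + n3) / K4

nmax = 200
rho = np.zeros(nmax)
rho[0] = 1.0
for n in range(nmax - 1):
    Kn = ((K1 + K4 * a_P_star * n * (n - 1) / n2**2)
          / (a_H2O + a_P * n * (n - 1) / n2**2))
    rho[n + 1] = Kn * rho[n] / (n + 1)        # Equation (84)
rho /= rho.sum()

modes = [n for n in range(1, nmax - 1) if rho[n] > rho[n - 1] and rho[n] > rho[n + 1]]
print(modes)                                  # two local maxima, near n1 and n3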

The difference in relative dissipations between the driven steady state ρ¯s and ρ_s, at adjacent n indices, is given by the ratio of effective rate constants

$\log\frac{\bar\rho^s_{n+1}/\underline\rho^s_{n+1}}{\bar\rho^s_{n}/\underline\rho^s_{n}} = \log\frac{K_n}{\underline n} = \log\frac{1+\left(a_{P^*}/\underline a_{P^*}\right)n(n-1)/n_2^2}{1+\left(a_P/\underline a_P\right)n(n-1)/n_2^2} = \log\frac{1+e^{\mu_P}\left(a_P/\underline a_P\right)n(n-1)/n_2^2}{1+\left(a_P/\underline a_P\right)n(n-1)/n_2^2}$. (88)

Equation (88) is a sigmoidal function of $\log\left(n/n_2\right)$, with $K_n/\underline n$ in the range $\left(1, e^{\mu_P}\right)$, graphed versus $\mu_P$ in Figure 5. The hydrolysis potential $\mu_H$ from Equation (70) for the linear system is replaced in the autocatalytic case by a position-dependent chemical potential

$\mu_n \equiv \log\frac{a_{H_2O}\,\bar\rho^s_{n+1}\,(n+1)}{\hat K_1\,\bar\rho^s_n} = \log\frac{K_n}{\underline n}$. (89)

Figure 5. $K_n$ on a sequence of increasing $\mu_P$ values with proportions (87). The curve at $\mu_P = \log\!\left(\sum_i n_i\,\sum_i 1/n_i\right)$ is black. Boundaries of the bistable range are red. Fixed points $K_n = n$ are shown with markers.

In the cycle decomposition, the specific currents exiting and entering a site n, now functions of n, depend on activities as

$\hat\jmath_1 = \frac{1}{2}\left(K_n\,a_{H_2O} + \hat K_1\right), \qquad \hat\jmath_3 = \frac{1}{2}\,\frac{n(n-1)}{n_2^2}\left(\hat K_4\,a_{P^*} + K_n\,a_P\right), \qquad \hat\jmath_\delta = K_n\,a_{H_2O} - \hat K_1 = \frac{n(n-1)}{n_2^2}\left(\hat K_4\,a_{P^*} - K_n\,a_P\right)$. (90)

In the chemical-potential coordinates, these become

$\hat\jmath_1 = \frac{\hat K_1}{2}\left(e^{\mu_n}+1\right), \qquad \hat\jmath_3 = \frac{\hat K_1}{2}\,\frac{n(n-1)}{n_2^2}\left(e^{\mu_P}+e^{\mu_n}\right)\frac{a_P}{\underline a_P}, \qquad \hat\jmath_\delta = \hat K_1\left(e^{\mu_n}-1\right) = \hat K_1\,\frac{n(n-1)}{n_2^2}\left(e^{\mu_P}-e^{\mu_n}\right)\frac{a_P}{\underline a_P}$. (91)

Positive-semidefiniteness of S˙HK again follows by using the ratios (76) to produce the form (78), where now instead of Equation (77), we have

$\frac{p}{q} = e^{-\mu_n}, \qquad \frac{1-p}{1-q} = e^{\mu_P-\mu_n}$. (92)

For this example, we separate out a term S˙env, which is the sum over transitions of Equation (8), from S˙HK, to give

S˙HK=nK^1nn1n22aP*aP*_ρnsaPaP_ρn+1sn+1n_μPρρ¯nsn+ȷ^1+ȷ^3ρn+1sn+1Knρnsμn=nK^1nn1n22ρ_nsaP*aP*_ρρ_nsaPaP_ρρ_n+1sμPρρ¯nsn+ȷ^1+ȷ^3ρ¯nsρρ¯n+1sρρ¯nsμn=S˙env+nȷ^1+ȷ^3ρ¯nsρρ¯n+1sρρ¯nsμn. (93)

S˙env does not depend on ρ¯s, and differs from the sum of terms multiplying μP in Equation (75) only by the n-dependence now multiplying K^1 in Equation (93).

All dependence of $\dot S_{\mathrm{HK}}$ on $\bar\rho^s$ comes from the second term involving $\mu_n$. The shape of $\bar\rho^s_n$ is shown as a function of $\mu_P$ in Figure 6, and the lumped coefficient $\left(\hat\jmath_1+\hat\jmath_3\right)\bar\rho^s_n\,\mu_n$ multiplying the difference term in $\rho^s/\bar\rho^s$ in Equation (93) is shown in Figure 7.

Figure 6. Densities $\bar\rho^s$ for a scale factor $4\times(1,4,9)$ for the roots.

Figure 7. The combination $\left(\hat\jmath_1+\hat\jmath_3\right)\bar\rho^s_n\,\mu_n$.

As an example, consider the value $\mu_P = \log\!\left(\sum_i n_i\,\sum_i 1/n_i\right)$ that makes $n_1, n_2, n_3$ the three fixed points, and evaluate the embedding entropies for a Poisson distribution $\rho^s$ with mean $n_2$, the unstable fixed point. The "environmental" component

$\dot S_{\mathrm{env}} = \hat K_1\sum_n\rho^s_n\left[\frac{n(n-1)}{n_2^2}\,e^{\mu_P}-\frac{n_2}{\underline n}\right]\frac{a_P}{\underline a_P}\,\mu_P = \hat K_1\left(e^{\mu_P}-e^{\mu_{n_2}}\right)\frac{a_P}{\underline a_P}\,\mu_P = \hat\jmath_\delta(n_2)\,\mu_P$. (94)

is the same as the value $\hat\jmath_\delta\,\mu_P$ following from Equation (75) for the non-catalytic CRN, if we set $\mu_H = \log\left(n_2/\underline n\right)$ in Equation (70), where $K_{\mathrm{eff}} = n_2$ and this $\rho^s$ is a driven steady state. It is also the value of $\dot S_{\mathrm{HK}}$ in Equation (79) at the extremal distribution $\rho^s = \bar\rho^s$ for the linear model.

For the same $\rho^s$, recognizing that $\mu_n - \mu_H = \log\left(K_n/n_2\right)$, the gradient relative to the bistable reference distribution $\bar\rho^s$ in Equation (93) expands to $\bar\rho^s_n\left[\left(\rho/\bar\rho\right)^s_{n+1}-\left(\rho/\bar\rho\right)^s_n\right] \approx -\rho^s_n\left(\mu_n-\mu_H\right)$. The difference of environment from housekeeping entropy changes then evaluates to

$\dot S_{\mathrm{HK}} - \dot S_{\mathrm{env}} \approx -\sum_n\left(\hat\jmath_1+\hat\jmath_3\right)\rho^s_n\,\mu_n\left(\mu_n-\mu_H\right)$. (95)

The combination $\left(\hat\jmath_1+\hat\jmath_3\right)\rho^s_n\,\mu_n$ is increasing through $n_2$ (see Figure 7), and $\mu_n - \mu_H$ is antisymmetric, so Equation (95) is negative.

Hence, not all of the entropy change by $\dot S_{\mathrm{env}}$ reflects a loss of large-deviation accessibility attributable to the environment. A quantity equal to $\dot S_{\mathrm{env}} - \dot S_{\mathrm{HK}}$ is assigned in the natural decomposition (11) to the system, because a Poisson distribution around $n_2$ is unstable and has excess net probability to relax toward the bistable distribution $\bar\rho^s$.

7. Discussion

7.1. The Problem of Macroworlds as an Alternative Central Contribution

A suggestion that thermodynamics should not be mainly about the relation between work and heat flows would have been a non sequitur at the turn of the 20th century. Not only were applications centered around interactions with a thermal system through mechanical boundaries [59]; the nature of irreversibility and dissipation as scientific problems was framed in their relation to mechanical principles of time-reversibility [119] and conservation of energy [26,120].

Innovations in scaling methods, however, in the context of the unification of statistical and quantum theories in the second half of the 20th century, have exposed understanding the existence of macroworlds as one of the central problems in science. Section 3 explained how large-deviations scaling formalizes a concept of macrostate in terms of selected distributions over microstates. When methods derived for phase transitions and critical phenomena in condensed matter [36] were merged with the renormalization-group approach [35] to vacuum quantum field theory [38], it became clear that renormalization-group flow, an operation of dimensional reduction closely related to large-deviations scaling (through the correspondence of Keldysh to 2-field statistical field methods, the algebras of the two can often be interconverted [80,121]), provides a concept of effective fields describing the robust properties of macroworlds through phenomenological Lagrangians [37]. Effective fields are the counterparts under renormalization to classical states under large-deviations scaling.

In effective field theories nested through phase transitions (as in the models of this paper), microstates are assimilated back to macrostates. The result has been a theory of the vacuum and the hierarchy of matter in which all earlier, partly phenomenological concepts of elementary and composite objects and interactions have been subsumed within a semantics of distributions. While these physics examples continue to be organized by symmetries and conservation laws, similar scaling methods developed independently for reliable coding [45,66] offer non-mechanical counterparts (see [122], Ch. 7).

The alternative characterization offered, then, is that thermodynamics is not fundamentally the theory of the movements of heat, but rather the theory of the emergence of macroworlds from microworlds.

7.2. The End of Entropy Flow, the Natural Partition, and Hartley Information

In the absence of an energy-mediated constraint (adiabatic accessibility [32]) on the states jointly realizable by a system and its environment, the notion of "entropy flow" between subsystems may never arise. (Even in classical thermodynamics, it is only well-defined for adiabatic transformations, in which both subsystems remain in classical states constrained by energy at every moment.) The default description of entropy change ceases to be the metaphor of a fluid and becomes the literal one: it is the loss of large-deviation accessibility of classical states due to relaxation of the whole-system distribution. The role of a tangent surface delimiting the amount of entropy gain in one component that can be influenced by the state of another, instead of the adiabatic transformation, is filled by the vector of Hartley informations $\log\bar\rho^s$ of the stationary state for the $\hat{\mathbb{T}}^s$ at that moment. The natural system-environment partition (11) encodes the conditional independence of components of the loss of states accessible by fluctuations.

Hartley information appears as a random variable along trajectories in the construction of the generating functional for housekeeping heat [5,52]. In standard treatments, its interpretation as an entropy is taken to depend on its association with a dissipated heat. (The exact statement, referring to Equation (123) in [5], is: “In the absence of a first law for the master equation dynamics, which would require further physical input not available at this general stage, this identification is by analogy only.”) Here, logρ¯s gains its interpretation as an entropy through its role as a carrier of information (an interpretation independent of dynamics advocated in [58]), about the interaction of processes within a system on its own limiting large-deviation function and on its capacity to limit large-deviations in the environment.

7.3. Making Trajectories First-Class Citizens

The persistence over 90 years of Onsager’s program [11,12] of making equilibrium Gibbs entropies the foundation of non-equilibrium thermodynamics has built an asymmetry into this concept even within physics, at the same time as growth in statistical methods for path ensembles has reduced the reason for an asymmetry to exist, by defining the same tools for paths as for states. The equilibrium Gibbs entropy is defined from the large-deviation function on ensembles of states. The stochastic effective action (25) [67] is the corresponding functional for trajectories. However, the role that energy conservation plays in stochastic thermodynamics, as a constraint on which states can be jointly occupied by components within a system and within its environment, will not generally have a counterpart for the jointly realizable trajectories involving these components. (Consider, as examples, non-Markovian noise sources and error-correcting codes exploiting time correlations to optimize against those environments [123].)

Borrowing a term from computer science, the privileged roles of equilibrium entropy and energy conservation in current non-equilibrium thermodynamics makes spaces of states “first-class citizens” [124], and admits thermodynamic interpretations for ensembles only in so far as those derive from equilibrium heat. Jaynes anticipated [125,126] a self-contained thermodynamic interpretation for path ensembles, though still only in a limited form referenced to the calibers of cross-sections of states. If there is to be a thermodynamics in which trajectories become first-class citizens on par with states, it will need a foundation in more general primitives such as large-deviation accessibility and separation of scales, as in the domain-agnostic framework presented here.

7.4. Rule-Based Systems and Life

This paper’s formula for a self-contained and substrate-agnostic thermodynamics is meant to support concept discovery in two domains that should clearly have such a thermodynamics, and for which energy conservation should play a limited role or no role at all for many interesting questions. (A well-known example of a constraint that is proved by a fluctuation theorem, but not binding because a more proximal constraint exists, is that of total entropy production versus the “excess heat” in a driven system. [127] The former bounds the non-equilibrium entropy change within a distribution, but is uninformative because it diverges on long timescales. The tight bound, under some conditions, is the finite excess heat.)

The first domain is rule-based modeling [110] which recognizes the finitary-to-infinitary mappings exhibited here between CRN generators and state spaces as a widely generalizable organizing principle. The algebra and combinatorics of non-commutative rule systems is a very active area of study [77,128,129,130,131] which connects to chemistry, systems biology, theories of algorithms, process calculus, and much more. A thermodynamics of rule-based systems that expands the conceptual scope of science will not be a reduction to their consequences for heat generation.

The second domain is the thermodynamic nature of life [122], encompassing rule-based order in its chemical substrate, the multi-level control systems and error correction required to maintain complex dynamical phases, and the nested population dynamics and historical contingency of evolution. There will be order in such systems that derives, sometimes cryptically, from microscopic time-reversibility. It is inconceivable that there will not be much more order, of novel kinds, that originates from the abundance of architectures—most of them irreversible—standing between the microscopic substrate and robust macrophenomena.

8. Concluding Remarks

The presentation above is meant to encapsulate a shift in emphasis that has been building, from questions about work and heat that were germane in mechanics and classical thermodynamics, to a concern with distinguishing sources of irreversibility in a large-deviation sense. That shift was enabled by the discovery of fluctuation theorems that identify the natural modularization of a system into a focal subsystem and its environmental context. The use here of the Hartley information defining the housekeeping entropy rate as a connection, rather than as a term in a potential function, is meant to provide a natural information-based interpretation in place of analogies of thermodynamic potentials to mechanical energy surfaces.

The derivation of all roles of entropy by suitable restrictions or conditionings of a single overall large-deviation function is meant to obviate formulations such as “violation of the 2nd law”, which turn on an overly coarse semantics and are unhelpful in separating different categories of probabilistic effects. Doing so has required, first and foremost, a systematic approach to the concept and construction of macrostates, and to the treatment of the associated entropy state functions. The decomposition in Equation (34) of Dρn¯ρ_ in terms of the Lyapunov entropy state function Seff and a remainder D1n_n¯ρ_ that must be relaxed by instantons, retains an un-violated 2nd law for relative entropy, and also the Lyapunov and large-deviation roles of the classical state-function entropy, in the proper relation.

The dualization by Baish [101] of observable and response fields in Doi–Peliti theory stands in obvious analogy to the construction of the adjoint transition matrix in the fluctuation theorem for housekeeping heat [52,53]. The standardization of the CRN generator (35) makes the analogy explicit in the descaling (39) of T and the descaling (43) of A. Moreover, it shows the analogy as an expression of the general relation of microstates to macrostates, and relates the preservation or loss of information in the classical limit to CRN complexity classes.

The distinction between relaxation and escape trajectories in terms of formal momenta is widely developed in momentum-space WKB theory [78], as Hamilton–Jacobi theory [40], and in relation to information geometry [132]. The convexity proof in Equation (49) relating the Lyapunov to the large-deviation role of the classical-state entropy, and to the structure of the L=0 manifold, is novel.

Cycle decompositions of steady-state entropy rates now exist in many forms since Hill [76,107,109]. Here, a slightly more fine-grained cycle decomposition (keeping detailed-balanced currents) unifies all proofs of monotonicity of entropies for non-stationary distributions, relating the Lyapunov and large-deviation changes in macrostate entropy on complex-balanced CRNs in Equation (53), and its counterpart (61) in the state space for all CRNs. It also provides an interesting view of the (known) equivalence of the adjoints constructed in the generating function (64) for housekeeping heat in the environment, and an equivalent function for the relative entropy (61) in the system.

The Legendre duality of S˙HK to the potential F˙P in Equation (81) defines extensivity and intensivity with respect to a different scaling than that for entropy, but a natural one if time-reversal and energy conservation are not the only basis for taking macroscopic limits.

Acknowledgments

The author wishes to thank four anonymous referees for very useful suggestions as well as many relevant references.

Appendix A. On Distributions of Strings in a Mixture with Random Ligation and Hydrolysis

In a model concerned with linear (as opposed to branched) polymers, each monomer must have the capacity for double ligation. Suppose here that the monomers make directional links, as amino acids and nucleotides in standard 5′–3′ anhydrides do.

Further, suppose that one ligation site of the pair on each monomer, which we term the donor, is activated (whether by an ambient process such as deprotonation or by an out-of-equilibrium leaving group) to initiate a ligation. Then, in an ensemble of polymers of variable length l1 (with l=1 corresponding to free monomers) the activity of donor sites is the total count of all polymers.

Denote by $\pi_l$ a polymer of length l, and let $n_l$ be the expected number of polymers of length l. Then, for any two lengths $l_1 \neq l_2$, there are two channels interconverting polymers at those lengths with a polymer of length $l_1 + l_2$, through ligation (in either order) or hydrolysis (at either asymmetric position), as

$\pi_{l_1} + \pi_{l_2} \;\underset{2\bar k}{\overset{2k}{\rightleftharpoons}}\; \pi_{l_1+l_2}$. (A1)

The corresponding steady-state rate-balance condition, with Kk/k¯, is

$K\,n_{l_1}\,n_{l_2} = n_{l_1+l_2}$. (A2)

We will consider the range of large l, and not give separate consideration to the singular case l1=l2 where the hydrolysis rate is halved due to symmetry, or other special cases such as ligation to a ring.

The relations (A2) are satisfied for all pairs if the expected numbers have the form

$n_l = \frac{1}{K}\left(K\alpha\right)^{l}$ (A3)

for any α. Real α>0 are normalized for K<1 and α<0 are normalized for K>1.

The total polymer count and average length in the distribution (A3) are

$\sum_{l\geq 1} n_l = \frac{\alpha}{1-K\alpha}, \qquad \langle l\rangle = \frac{1}{1-K\alpha}$ (A4)

The number of anhydrous bonds in the ensemble, available to be cleaved, is

$\sum_{l\geq 1} n_l\left(l-1\right) = \sum_{l\geq 1} l\,n_l - \sum_{l\geq 1} n_l = \frac{1}{K}\left(\frac{K\alpha}{1-K\alpha}\right)^{2}$ (A5)

If this total bond number (A5) is set equal to K, then α is given by

$-\log\!\left(K\alpha\right) = \log\!\left(1 + \frac{1}{K}\right)$. (A6)

The activity of donor sites (free string ends) and expected length then become

$\sum_{l\geq 1} n_l = 1, \qquad \langle l\rangle = 1+K$ (A7)

If K is set equal to Keff from Equation (69), then Equation (A7) recovers the input assumption of unit activity of species A and total ligated donor sites equal to Keff. Note that while in the Poisson process, n indexes total bonds, by Equation (A7), its expectation also characterizes expected string length.
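The closed forms above are easy to confirm numerically. In the sketch below (K is an arbitrary assumed value, and the geometric form of $n_l$ with $\alpha = 1/(1+K)$ is taken as reconstructed in Equations (A3) and (A6)), the ensemble has unit polymer count, K anhydrous bonds, and mean length $1 + K$, consistent with Equation (A7).

import numpy as np

K = 6.15                        # assumed value (e.g., K_eff from Equation (69))
alpha = 1.0 / (1.0 + K)         # solves -log(K*alpha) = log(1 + 1/K), Equation (A6)

l = np.arange(1, 2000)
n_l = (1.0 / K) * (K * alpha) ** l       # geometric length distribution, Equation (A3)

print(n_l.sum())                          # ~ 1   : total polymer count, Equation (A7)
print((n_l * (l - 1)).sum())              # ~ K   : anhydrous bonds, Equation (A5) set to K
print((n_l * l).sum() / n_l.sum())        # ~ 1+K : mean length, Equation (A7)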

Funding

The work was supported in part by NASA Astrobiology CAN-7 award NNA17BB05A, PI Frank Rosenzweig, through the College of Science, Georgia Institute of Technology, and in part by the Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT) through the World Premiere International program.

Conflicts of Interest

The author declares no conflict of interest.

References

  • 1.Szilard L. Uber die entropieverminderung in einem thermodynamischen system bei eingriffen intelligenter wesen. Z. Phys. 1929;53:840–856. doi: 10.1007/BF01341281. [DOI] [Google Scholar]
  • 2.Landauer R. Irreversibility and Heat Generation in the Computing Process. IBM J. Res. Dev. 1961;3:183–191. doi: 10.1147/rd.53.0183. [DOI] [Google Scholar]
  • 3.Bennett C.H. The Thermodynamics of Computation—A Review. Int. J. Theor. Phys. 1982;21:905–940. doi: 10.1007/BF02084158. [DOI] [Google Scholar]
  • 4.Wolpert D.H. The stochastic thermodynamics of computation. J. Phys. A Math. Theor. 2019;52:193001. doi: 10.1088/1751-8121/ab0850. [DOI] [Google Scholar]
  • 5.Seifert U. Stochastic thermodynamics, fluctuation theorems, and molecular machines. Rep. Prog. Phys. 2012;75:126001. doi: 10.1088/0034-4885/75/12/126001. [DOI] [PubMed] [Google Scholar]
  • 6.England J.L. Statistical physics of self-replication. J. Chem. Phys. 2013;139:121923. doi: 10.1063/1.4818538. [DOI] [PubMed] [Google Scholar]
  • 7.Perunov N., Marsland R., England J. Statistical physics of adaptation. Phys. Rev. X. 2015;6:021036. doi: 10.1103/PhysRevX.6.021036. [DOI] [Google Scholar]
  • 8.Luce R.D. Individual Choice Behavior. Wiley; New York, NY, USA: 1959. [Google Scholar]
  • 9.McFadden D. Quantal choice analysis: A survey. Ann. Econ. Soc. Meas. 1976;5:363–390. [Google Scholar]
  • 10.Lenton T.M., Livina V.N., Dakos V., van Nes E.H., Scheffer M. Early warning of climate tipping points from critical slowing down: Comparing methods to improve robustness. Philos. Trans. R. Soc. A. 2011;370:1185–1204. doi: 10.1098/rsta.2011.0304. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Onsager L. Reciprocal Relations in Irreversible Processes. I. Phys. Rev. 1931;37:405–426. doi: 10.1103/PhysRev.37.405. [DOI] [Google Scholar]
  • 12.Onsager L. Reciprocal Relations in Irreversible Processes. II. Phys. Rev. 1931;38:2265–2279. doi: 10.1103/PhysRev.38.2265. [DOI] [Google Scholar]
  • 13.Glansdorff P., Prigogine I. Thermodynamic Theory of Structure, Stability, and Fluctuations. Wiley; New York, NY, USA: 1971. [Google Scholar]
  • 14.Kondepudi D., Prigogine I. Modern Thermodynamics: From Heat Engines to Dissipative Structures. Wiley; New York, NY, USA: 1998. [Google Scholar]
  • 15.Evans D.J., Searles D.J. Fluctuation theorem for stochastic systems. Phys. Rev. E. 1999;60:159–164. doi: 10.1103/physreve.60.159. [DOI] [PubMed] [Google Scholar]
  • 16.Evans D.J., Searles D.J. The fluctuation theorem. Adv. Phys. 2002;51:1529–1585. doi: 10.1080/00018730210155133. [DOI] [Google Scholar]
  • 17.Gallavotti G., Cohen E.D.G. Dynamical ensembles in non-equilibrium statistical mechanics. Phys. Rev. Lett. 1995;74:2694–2697. doi: 10.1103/PhysRevLett.74.2694. [DOI] [PubMed] [Google Scholar]
  • 18.Gallavotti G., Cohen E.D.G. Dynamical ensembles in stationary states. J. Stat. Phys. 1995;80:931–970. doi: 10.1007/BF02179860. [DOI] [Google Scholar]
  • 19.Jarzynski C. Nonequilibrium work relations: Foundations and applications. Eur. Phys. J. B. 2008;64:331–340. doi: 10.1140/epjb/e2008-00254-2. [DOI] [Google Scholar]
  • 20.Chetrite R., Gawedzki K. Fluctuation relations for diffusion processes. Commun. Math. Phys. 2008;282:469–518. doi: 10.1007/s00220-008-0502-9. [DOI] [Google Scholar]
  • 21.Esposito M., Van den Broeck C. Three Detailed Fluctuation Theorems. Phys. Rev. Lett. 2010;104:090601. doi: 10.1103/PhysRevLett.104.090601. [DOI] [PubMed] [Google Scholar]
  • 22.Crooks G.E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E. 1999;6:2721–2726. doi: 10.1103/PhysRevE.60.2721. [DOI] [PubMed] [Google Scholar]
  • 23.Crooks G.E. Path-ensemble averages in systems driven far from equilibrium. Phys. Rev. E. 2000;61:2361–2366. doi: 10.1103/PhysRevE.61.2361. [DOI] [Google Scholar]
  • 24.Kurchan J. Non-equilibrium work relations. J. Stat. Mech. 2007;2007:P07005. doi: 10.1088/1742-5468/2007/07/P07005. [DOI] [Google Scholar]
  • 25.Hald A. A History of Probability and Statistics and Their Applications before 1750. Wiley; New York, NY, USA: 1990. [Google Scholar]
  • 26.Joule J.P. The Scientific Papers of James Prescott Joule. Physical Society; Open Library; London, UK: 1844. [(accessed on 30 July 2020)]. Available online: https://openlibrary.org/books/OL239730M/The_scientific_papers_of_James_Prescott_Joule. [Google Scholar]
  • 27.Hirst T.A., editor. The Mechanical Theory of Heat. John van Voorst; London, UK: 1865. [Google Scholar]
  • 28.Boltzmann L. Populäre Schriften. J. A. Barth; Leipzig, Germany: 1905. The second law of thermodynamics; pp. 25–50. [Google Scholar]
  • 29.Rota G.C. Lectures on Being Time (1998) New Yearb. Phenomenol. Phenomenol. Philos. 2008;8:225–319. [Google Scholar]
  • 30.Jaynes E.T. Information theory and statistical mechanics. Phys. Rev. 1957;106:620–630. doi: 10.1103/PhysRev.106.620. [DOI] [Google Scholar]
  • 31.Jaynes E.T. Information theory and statistical mechanics. II. Phys. Rev. 1957;108:171–190. doi: 10.1103/PhysRev.108.171. [DOI] [Google Scholar]
  • 32.Lieb E.H., Yngvason J. The physics and mathematics of the second law of thermodynamics. Phys. Rep. 1999;310:1–96. doi: 10.1016/S0370-1573(98)00082-9. [DOI] [Google Scholar]
  • 33.Lieb E.H., Yngvason J. The entropy concept for non-equilibrium states. Proc. R. Soc. A. 2013;469:20130408. doi: 10.1098/rspa.2013.0408. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Lieb E.H., Yngvason J. Entropy meters and the entropy of non-extensive systems. Proc. R. Soc. A. 2014;470:20140192. doi: 10.1098/rspa.2014.0192. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35.Gell-Mann M., Low F. Quantum electrodynamics at small distances. Phys. Rev. 1954;95:1300–1312. doi: 10.1103/PhysRev.95.1300. [DOI] [Google Scholar]
  • 36.Wilson K.G., Kogut J. The renormalization group and the ε expansion. Phys. Rep. Phys. Lett. 1974;12C:75–200. doi: 10.1016/0370-1573(74)90023-4. [DOI] [Google Scholar]
  • 37.Weinberg S. Phenomenological Lagrangians. Phys. A. 1979;96:327–340. doi: 10.1016/0378-4371(79)90223-1. [DOI] [Google Scholar]
  • 38.Polchinski J.G. Renormalization group and effective lagrangians. Nucl. Phys. B. 1984;231:269–295. doi: 10.1016/0550-3213(84)90287-6. [DOI] [Google Scholar]
  • 39.Fermi E. Thermodynamics. Dover; New York, NY, USA: 1956. [Google Scholar]
  • 40.Bertini L., De Sole A., Gabrielli D., Jona-Lasinio G., Landim C. Towards a nonequilibrium thermodynamics: A self-contained macroscopic description of driven diffusive systems. J. Stat. Phys. 2009;135:857–872. doi: 10.1007/s10955-008-9670-4. [DOI] [Google Scholar]
  • 41.Amari S.I., Nagaoka H. Methods of Information Geometry. Oxford University Press; Oxford, UK: 2000. [Google Scholar]
  • 42.Ay N., Jost J., Lê H.V., Schwachhöfer L. Information Geometry. Springer International; Cham, Switzerland: 2017. [Google Scholar]
  • 43.Touchette H. The large deviation approach to statistical mechanics. Phys. Rep. 2009;478:1–69. doi: 10.1016/j.physrep.2009.05.002. [DOI] [Google Scholar]
  • 44.Ellis R.S. Entropy, Large Deviations, and Statistical Mechanics. Springer; New York, NY, USA: 1985. [Google Scholar]
  • 45.Shannon C.E., Weaver W. The Mathematical Theory of Communication. University of Illinois Press; Urbana, IL, USA: 1949. [Google Scholar]
  • 46.Hartley R.V.L. Transmission of information. Bell Syst. Tech. J. 1928;7:535–563. doi: 10.1002/j.1538-7305.1928.tb01236.x. [DOI] [Google Scholar]
  • 47.Esposito M., Harbola U., Mukamel S. Entropy fluctuation theorems in driven open systems: Application to electron counting statistics. Phys. Rev. E. 2007;76:031132. doi: 10.1103/PhysRevE.76.031132. [DOI] [PubMed] [Google Scholar]
  • 48.Ge H., Qian H. Physical origins of entropy production, free energy dissipation, and their mathematical representations. Phys. Rev. E. 2010;81:051133. doi: 10.1103/PhysRevE.81.051133. [DOI] [PubMed] [Google Scholar]
  • 49.Wang Y., Qian H. Mathematical Representation of Clausius’ and Kelvin’s Statements of the Second Law and Irreversibility. J. Stat. Phys. 2020;179:808–837. doi: 10.1007/s10955-020-02556-6. [DOI] [Google Scholar]
  • 50.Oono Y., Paniconi M. Steady state thermodynamics. Prog. Theor. Phys. Suppl. 1998;130:29–44. doi: 10.1143/PTPS.130.29. [DOI] [Google Scholar]
  • 51.Hatano T., Sasa S.i. Steady state thermodynamics of Langevin systems. Phys. Rev. Lett. 2001;86:3463–3466. doi: 10.1103/PhysRevLett.86.3463. [DOI] [PubMed] [Google Scholar]
  • 52.Speck T., Seifert U. Integral fluctuation theorem for the housekeeping heat. J. Phys. A Math. Gen. 2005;38:L581–L588. doi: 10.1088/0305-4470/38/34/L03. [DOI] [Google Scholar]
  • 53.Harris R.J., Schütz G.M. Fluctuation theorems for stochastic dynamics. J. Stat. Mech. 2007:P07020. doi: 10.1088/1742-5468/2007/07/P07020. [DOI] [Google Scholar]
  • 54.Ge H., Qian H. Mesoscopic kinetic basis of macroscopic chemical thermodynamics: A mathematical theory. Phys. Rev. E. 2016;94:052150. doi: 10.1103/PhysRevE.94.052150. [DOI] [PubMed] [Google Scholar]
  • 55.Santillán M., Qian H. Irreversible thermodynamics in multiscale stochastic dynamical systems. Phys. Rev. E. 2011;83:041130. doi: 10.1103/PhysRevE.83.041130. [DOI] [PubMed] [Google Scholar]
  • 56.Qian H. A decomposition of irreversible diffusion processes without detailed balance. J. Math. Phys. 2013;54:053302. doi: 10.1063/1.4803847. [DOI] [Google Scholar]
  • 57.Ge H., Qian H. Mathematical Formalism of Nonequilibrium Thermodynamics for Nonlinear Chemical Reaction Systems with General Rate Law. J. Stat. Phys. 2017;166:190–209. doi: 10.1007/s10955-016-1678-6. [DOI] [Google Scholar]
  • 58.Qian H., Cheng Y.C., Thompson L.F. Ternary Representation of Stochastic Change and the Origin of Entropy and Its Fluctuations. arXiv. 20201902.09536v1 [Google Scholar]
  • 59.Carnot S. In: Reflections on the Motive Power of Fire. Mendoza E., editor. Dover; New York, NY, USA: 1960. [Google Scholar]
  • 60.Gibbs J.W. The Scientific Papers of J. Willard Gibbs, Vol. 1: Thermodynamics. Ox Bow Press; New Haven, CT, USA: 1993. [Google Scholar]
  • 61.Smith E. Thermodynamics of natural selection II: Chemical Carnot cycles. J. Theor. Biol. 2008;252:198–212. doi: 10.1016/j.jtbi.2008.02.008. [DOI] [PubMed] [Google Scholar]
  • 62.Freidlin M.I., Wentzell A.D. Random Perturbations in Dynamical Systems. 2nd ed. Springer; New York, NY, USA: 1998. [Google Scholar]
  • 63.Cardy J.L. Electron localisation in disordered systems and classical solutions in Ginzburg-Landau field theory. J. Phys. C. 1987;11:L321–L328. doi: 10.1088/0022-3719/11/8/006. [DOI] [Google Scholar]
  • 64.Coleman S. Aspects of Symmetry. Cambridge University Press; New York, NY, USA: 1985. [Google Scholar]
  • 65.Kirsten K., McKane A.J. Functional determinants by contour integration methods. Ann. Phys. 2003;308:502–527. doi: 10.1016/S0003-4916(03)00149-0. [DOI] [Google Scholar]
  • 66.Cover T.M., Thomas J.A. Elements of Information Theory. Wiley; New York, NY, USA: 1991. [Google Scholar]
  • 67.Smith E. Large-deviation principles, stochastic effective actions, path entropies, and the structure and meaning of thermodynamic descriptions. Rep. Prog. Phys. 2011;74:046601. doi: 10.1088/0034-4885/74/4/046601. [DOI] [Google Scholar]
  • 68.Gell-Mann M., Lloyd S. Information measures, effective complexity, and total information. Complexity. 1996;2:44–52. doi: 10.1002/(SICI)1099-0526(199609/10)2:1&#x0003c;44::AID-CPLX10&#x0003e;3.0.CO;2-X. [DOI] [Google Scholar]
  • 69.Gell-Mann M., Lloyd S. Effective complexity. In: Gell-Mann M., Tsallis C., editors. Nonextensive Entropy—Interdisciplinary Applications. Oxford U. Press; New York, NY, USA: 2004. pp. 387–398. [Google Scholar]
  • 70.Polettini M., Esposito M. Irreversible Thermodynamics of Open Chemical Networks. I. Emergent Cycles and Broken Conservation Laws. J. Chem. Phys. 2014;141:024117. doi: 10.1063/1.4886396. [DOI] [PubMed] [Google Scholar]
  • 71.Horn F., Jackson R. General mass action kinetics. Arch. Ration. Mech. Anal. 1972;47:81–116. doi: 10.1007/BF00251225. [DOI] [Google Scholar]
  • 72.Feinberg M. Lectures on Chemical Reaction Networks. Lecture Notes; 1979. Available online: https://crnt.osu.edu/LecturesOnReactionNetworks (accessed on 8 May 2016).
  • 73.Krishnamurthy S., Smith E. Solving moment hierarchies for chemical reaction networks. J. Phys. A Math. Theor. 2017;50:425002. doi: 10.1088/1751-8121/aa89d0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Smith E., Krishnamurthy S. Flows, scaling, and the control of moment hierarchies for stochastic chemical reaction networks. Phys. Rev. E. 2017;96:062102. doi: 10.1103/PhysRevE.96.062102. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Seifert U., Speck T. Fluctuation-dissipation theorem in nonequilibrium steady states. Europhys. Lett. 2010;89:10007. doi: 10.1209/0295-5075/89/10007. [DOI] [Google Scholar]
  • 76.Schnakenberg J. Network theory of microscopic and macroscopic behavior of master equation systems. Rev. Mod. Phys. 1976;48:571–585. doi: 10.1103/RevModPhys.48.571. [DOI] [Google Scholar]
  • 77.Andersen J.L., Flamm C., Merkle D., Stadler P.F. Generic strategies for chemical space exploration. Int. J. Comput. Biol. Drug Des. 2014;7:225–258. doi: 10.1504/IJCBDD.2014.061649. [DOI] [PubMed] [Google Scholar]
  • 78.Assaf M., Meerson B. WKB theory of large deviations in stochastic populations. J. Phys. A. 2017;50:263001. doi: 10.1088/1751-8121/aa669a. [DOI] [Google Scholar]
  • 79.Martin P.C., Siggia E.D., Rose H.A. Statistical Dynamics of Classical Systems. Phys. Rev. A. 1973;8:423–437. doi: 10.1103/PhysRevA.8.423. [DOI] [Google Scholar]
  • 80.Kamenev A. Keldysh and Doi-Peliti Techniques for out-of-equilibrium Systems. In: Lerner I.V., Althsuler B.L., Fal′ko V.I., Giamarchi T., editors. Strongly Correlated Fermions and Bosons in Low-Dimensional Disordered Systems. Springer; Berlin/Heidelberg, Germany: 2002. pp. 313–340. [Google Scholar]
  • 81.Smith E., Krishnamurthy S. Symmetry and Collective Fluctuations in Evolutionary Games. IOP Press; Bristol, UK: 2015. [Google Scholar]
  • 82.Angelani L., Di Leonardo R., Ruocco G. Self-Starting Micromotors in a Bacterial Bath. Phys. Rev. Lett. 2009;102:048104. doi: 10.1103/PhysRevLett.102.048104. [DOI] [PubMed] [Google Scholar]
  • 83.Mallory S.A., Valeriani C., Cacciuto A. Curvature-induced activation of a passive tracer in an active bath. Phys. Rev. E. 2014;90:032309. doi: 10.1103/PhysRevE.90.032309. [DOI] [PubMed] [Google Scholar]
  • 84.Gelman A., Shalizi C.R. Philosophy and the practice of Bayesian statistics. Br. J. Math. Stat. Psychol. 2013;66:8–38. doi: 10.1111/j.2044-8317.2011.02037.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85.Esposito M., Lindenberg K., Van den Broeck C. Extracting chemical energy by growing disorder: Efficiency at maximum power. J. Stat. Mech. 2010:P01008. doi: 10.1088/1742-5468/2010/01/P01008. [DOI] [Google Scholar]
  • 86.Kittel C., Kroemer H. Thermal Physics. 2nd ed. Freeman; New York, NY, USA: 1980. [Google Scholar]
  • 87.Siegmund D. Importance sampling in the Monte Carlo study of sequential tests. Ann. Stat. 1976;4:673–684. doi: 10.1214/aos/1176343541. [DOI] [Google Scholar]
  • 88.Goldstein H., Poole C.P., Safko J.L. Classical Mechanics. 3rd ed. Addison Wesley; New York, NY, USA: 2001. [Google Scholar]
  • 89.Maier R.S., Stein D.L. Effect of focusing and caustics on exit phenomena in systems lacking detailed balance. Phys. Rev. Lett. 1993;71:1783–1786. doi: 10.1103/PhysRevLett.71.1783. [DOI] [PubMed] [Google Scholar]
  • 90.Dykman M.I., Mori E., Ross J., Hunt P.M. Large fluctuations and optimal paths in chemical kinetics. J. Chem. Phys. 1994;100:5735–5750. doi: 10.1063/1.467139. [DOI] [Google Scholar]
  • 91.Smith E., Krishnamurthy S. Eikonal solutions for moment hierarchies of Chemical Reaction Networks in the limits of large particle number. J. Phys. A Math. Theor. 2020 submitted. [Google Scholar]
  • 92.Anderson D.F., Craciun G., Kurtz T.G. Product-form stationary distributions for deficiency zero chemical reaction networks. Bull. Math. Biol. 2010;72:1947–1970. doi: 10.1007/s11538-010-9517-4. [DOI] [PubMed] [Google Scholar]
  • 93.Doi M. Second quantization representation for classical many-particle system. J. Phys. A. 1976;9:1465–1478. doi: 10.1088/0305-4470/9/9/008. [DOI] [Google Scholar]
  • 94.Doi M. Stochastic theory of diffusion-controlled reaction. J. Phys. A. 1976;9:1479. doi: 10.1088/0305-4470/9/9/009. [DOI] [Google Scholar]
  • 95.Peliti L. Path-integral approach to birth-death processes on a lattice. J. Phys. 1985;46:1469. doi: 10.1051/jphys:019850046090146900. [DOI] [Google Scholar]
  • 96.Peliti L. Renormalization of fluctuation effects in A + A → A reaction. J. Phys. A. 1986;19:L365. doi: 10.1088/0305-4470/19/6/012. [DOI] [Google Scholar]
  • 97.Gunawardena J. Chemical Reaction Network Theory for In-Silico Biologists. Lecture Notes; 2003. Available online: vcp.med.harvard.edu/papers/crnt.pdf (accessed on 2 June 2016).
  • 98.Andersen J.L., Flamm C., Merkle D., Stadler P.F. Maximizing output and recognizing autocatalysis in chemical reaction networks is NP-complete. J. Syst. Chem. 2012;3:1. doi: 10.1186/1759-2208-3-1. [DOI] [Google Scholar]
  • 99.Berge C. Graphs and Hypergraphs. Rev. ed. North-Holland; Amsterdam, The Netherlands: 1973. [Google Scholar]
  • 100.Baez J.C., Fong B. Quantum Techniques for Studying Equilibrium in Reaction Networks. J. Complex Netw. 2014;3:22–34. doi: 10.1093/comnet/cnu013. Available online: https://academic.oup.com/comnet/article-abstract/3/1/22/490572/Quantum-techniques-for-studying-equilibrium-in?redirectedFrom=fulltext (accessed on 26 October 2017). [DOI] [Google Scholar]
  • 101.Baish A.J. Bachelor’s Thesis. Bucknell University; Lewisburg, PA, USA: 2015. Deriving the Jarzynski Relation from Doi-Peliti Field Theory. [Google Scholar]
  • 102.Feinberg M. Chemical reaction network structure and the stability of complex isothermal reactors—I. The deficiency zero and deficiency one theorems. Chem. Eng. Sci. 1987;42:2229–2268. doi: 10.1016/0009-2509(87)80099-4. [DOI] [Google Scholar]
  • 103.Schlögl F. On thermodynamics near a steady state. Z. Phys. 1971;248:446–458. doi: 10.1007/BF01395694. [DOI] [Google Scholar]
  • 104.van Kampen N.G. Stochastic Processes in Physics and Chemistry. 3rd ed. Elsevier; Amsterdam, The Netherlands: 2007. [Google Scholar]
  • 105.Hill T.L., Kedem O. Studies in irreversible thermodynamics III. models for steady state and active transport across membranes. J. Theor. Biol. 1966;10:399–441. doi: 10.1016/0022-5193(66)90136-6. [DOI] [PubMed] [Google Scholar]
  • 106.Hill T.L. Studies in Irreversible Thermodynamics IV. Diagrammatic Representation of Steady State Fluxes for Unimolecular Systems. J. Theor. Biol. 1966;10:442–459. doi: 10.1016/0022-5193(66)90137-8. [DOI] [PubMed] [Google Scholar]
  • 107.Hill T.L. Free Energy Transduction in Biology: The Steady-State Kinetic and Thermodynamic Formalism. Academic Press; New York, NY, USA: 1977. [Google Scholar]
  • 108.Hill T.L., Chen Y.D. Stochastics of Cycle Completions (Fluxes) in Biochemical Kinetic Diagrams. Proc. Nat. Acad. Sci. USA. 1975;72:1291–1295. doi: 10.1073/pnas.72.4.1291. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 109.Qian M., Qian M. Circulation for recurrent Markov chain. Z. Wahrsch. Verw. Geb. 1982;59:203–210. [Google Scholar]
  • 110.Danos V., Feret J., Fontana W., Harmer R., Krivine J. Rule-based modelling, symmetries, refinements. Form. Methods Syst. Biol. Lect. Notes Comput. Sci. 2008;5054:103–122. [Google Scholar]
  • 111.Baez J.C. Quantum Techniques for Reaction Networks. 2013. Available online: https://arxiv.org/abs/1306.3451 (accessed on 14 October 2017).
  • 112.Yung Y., Allen M., Pinto J.P. Photochemistry of the atmosphere of Titan: Comparison between model and observations. Astrophys. J. Suppl. Ser. 1984;55:465–506. doi: 10.1086/190963. [DOI] [PubMed] [Google Scholar]
  • 113.Hébrard E., Dobrijevic M., Loison J.C., Bergeat A., Hickson K.M. Neutral production of hydrogen isocyanide (HNC) and hydrogen cyanide (HCN) in Titan’s upper atmosphere. Astron. Astrophys. 2012;541:A21. doi: 10.1051/0004-6361/201218837. [DOI] [Google Scholar]
  • 114.Turse C., Leitner J., Firneis M., Schulze-Makuch D. Simulations of prebiotic chemistry under post-impact conditions on Titan. Life. 2013;3:538–549. doi: 10.3390/life3040538. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 115.Westheimer F.H. Why nature chose phosphates. Science. 1987;235:1173–1178. doi: 10.1126/science.2434996. [DOI] [PubMed] [Google Scholar]
  • 116.Pasek M.A., Kee T.P. On the Origin of Phosphorylated Biomolecules. In: Egel R., Lankenau D.H., Mulkidjanian A.Y., editors. Origins of Life: The Primal Self-Organization. Springer; Berlin/Heidelberg, Germany: 2011. pp. 57–84. [Google Scholar]
  • 117.Goldford J.E., Hartman H., Smith T.F., Segrè D. Remnants of an Ancient Metabolism without Phosphate. Cell. 2017;168:1126–1134. doi: 10.1016/j.cell.2017.02.001. [DOI] [PubMed] [Google Scholar]
  • 118.Goldford J.E., Hartman H., Marsland R., III, Segrè D. Environmental boundary conditions for early life converge to an organo-sulfur metabolism. Nat. Ecol. Evol. 2019;3:1715–1724. doi: 10.1038/s41559-019-1018-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119.Laplace P.S. In: A Philosophical Essay on Probabilities. 6th ed. Truscott F.W., Emory F.L., editors. Dover; New York, NY, USA: 1951. [Google Scholar]
  • 120.Clausius R. On the Application of the Theorem of the Equivalence of Transformations to Interior Work. John van Voorst; London, UK: 1867. pp. 215–250. Fourth Memoir. [Google Scholar]
  • 121.Smith E. Quantum-classical correspondence principles for locally non-equilibrium driven systems. Phys. Rev. E. 2008;77:021109. doi: 10.1103/PhysRevE.77.021109. [DOI] [PubMed] [Google Scholar]
  • 122.Smith E., Morowitz H.J. The Origin and Nature of Life on Earth: The Emergence of the Fourth Geosphere. Cambridge U. Press; London, UK: 2016. [Google Scholar]
  • 123.Verdú S. Fifty years of Shannon theory. IEEE Trans. Inf. Theor. 1998;44:2057–2078. doi: 10.1109/18.720531. [DOI] [Google Scholar]
  • 124.Abelson H., Sussman G.J., Sussman J. Structure and Interpretation of Computer Programs. 2nd ed. MIT Press; Cambridge, MA, USA: 1996. [Google Scholar]
  • 125.Jaynes E.T. The minimum entropy production principle. Annu. Rev. Phys. Chem. 1980;31:579–601. doi: 10.1146/annurev.pc.31.100180.003051. [DOI] [Google Scholar]
  • 126.Jaynes E.T. Probability Theory: The Logic of Science. Cambridge U. Press; New York, NY, USA: 2003. [Google Scholar]
  • 127.Speck T., Seifert U. Restoring a fluctuation-dissipation theorem in a nonequilibrium steady state. Europhys. Lett. 2006;74:391–396. doi: 10.1209/epl/i2005-10549-4. [DOI] [Google Scholar]
  • 128.Harmer R., Danos V., Feret J., Krivine J., Fontana W. Intrinsic information carriers in combinatorial dynamical systems. Chaos. 2010;20:037108. doi: 10.1063/1.3491100. [DOI] [PubMed] [Google Scholar]
  • 129.Behr N., Danos V., Garnier I. Stochastic mechanics of graph rewriting. In: Shankar N., editor. Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science. ACM Press; New York, NY, USA: 2016. pp. 46–55. [Google Scholar]
  • 130.Behr N., Danos V., Garnier I. Combinatorial Conversion and Moment Bisimulation for Stochastic Rewriting Systems. Log. Methods Comput. Sci. 2020;16:lmcs:6628 [Google Scholar]
  • 131.Behr N., Krivine J. Rewriting Theory for the Life Sciences: A Unifying Theory of CTMC Semantics. In: Gadducci F., Kehrer T., editors. Graph Transformation: 13th International Conference, ICGT 2020, Proceedings. Springer International Publishing; Cham, Switzerland: 2020. pp. 185–202. Volume 12150 of Theoretical Computer Science and General Issues. [Google Scholar]
  • 132.Smith E. The information geometry of 2-field functional integrals. Inf. Geom. 2019. submitted. doi: 10.1007/s41884-022-00071-z. Available online: http://arxiv.org/abs/1906.09312 (accessed on 21 June 2019). [DOI] [PMC free article] [PubMed]
