Significance
Many biological, chemical, and social phenomena that involve the interaction of large numbers of substances or agents are modeled as reaction networks. Mathematically, reaction networks are high-dimensional dynamical systems, deterministic or stochastic. Numerical simulations have revealed a rich and diverse landscape, one that poses a nontrivial challenge for analysts. In this paper, we consider a class of linear stochastic reaction networks designed to capture certain salient characteristics of real biological networks, yet sufficiently idealized to permit analytical approaches. We present rigorous results on two network phenomena: exponential growth of network size and depletion of one of the substances involved. Both phenomena occur naturally and are known to have biological consequences.
Keywords: reaction networks, exponential growth, depletion, mean-field approximation
Abstract
This paper is about a class of stochastic reaction networks. Of interest are the dynamics of interconversion among a finite number of substances through reactions that consume some of the substances and produce others. The models we consider are continuous-time Markov jump processes, intended as idealizations of a broad class of biological networks. Reaction rates depend linearly on “enzymes,” which are among the substances produced, and a reaction can occur only in the presence of sufficient upstream material. We present rigorous results for this class of stochastic dynamical systems, the mean-field behaviors of which are described by ordinary differential equations (ODEs). Under the assumption of exponential network growth, we identify certain ODE solutions as being potentially traceable and give conditions under which network trajectories, suitably rescaled, can with high probability be approximated by these ODE solutions. This leads to a complete characterization of the ω-limit sets of such network solutions (as points or random tori). A dimension reduction to the subnetwork of enzymes is also noted. The second half of this paper is focused on depletion dynamics, i.e., dynamics subsequent to the “phase transition” that occurs when one of the substances becomes unavailable. The picture can be complex, for the depleted substance can be produced intermittently through other network reactions. Treating the model as a slow–fast system, we offer a mean-field description, a first step to understanding what we believe is one of the most natural bifurcations for reaction networks.
By a reaction network, we refer loosely to a collection of substances or entities that interact through reactions. In each reaction, some subsets of materials are consumed and others are produced, following rules of interconversion that often depend on concentrations of the substances present. Reaction networks occur ubiquitously in nature: They include chemical networks (1), metabolic networks, which describe the metabolic and physical processes that govern cellular function (2–4), microbial food webs (5), ecological networks that describe interspecies competition and cross-feeding (6, 7), epidemiological networks (8), and economic networks modeling exchanges of products and services (9), to give just a few examples. Questions surrounding growth and depletion are of fundamental interest in the theory of reaction networks. Many types of networks, such as those describing biological systems, have the capacity for sustained growth; see ref. 10 and the references therein. Depletion also occurs naturally; the depletion of certain substances, e.g., dopamine, is known to lead to abnormal brain activity.
Mathematically, reaction networks are modeled by high-dimensional dynamical systems, deterministic or stochastic. Numerical simulations have revealed rich and diverse phenomena. Some rigorous results, most of them in idealized settings, have also appeared; see, e.g., refs. 11–13. A challenge in this emerging area is the development of analytical tools to quantify and elucidate new phenomena.
In this paper, we present some rigorous results for a class of stochastic reaction networks. Our models are also idealized but more realistic than those in many rigorous studies. Informally, our network consists of N substances present in quantities X1(t),…,XN(t) at time t ≥ 0 and a collection of reactions {Jk} which alter the Xn(t). Each Jk is facilitated by a number of “enzymes” (borrowing language from metabolic networks) which are among the N substances. Mathematically, the reaction network is modeled by a continuous-time Markov process on the nonnegative orthant, where each reaction Jk takes place after an exponentially distributed waiting time, at a rate that depends on its associated enzymes. An important constraint is that Xn cannot be negative, so that when its clock rings, a reaction takes place only if all of the required upstream materials are present.
The following questions arise naturally:
1. Can the dynamics of such stochastic reaction networks be described by mean-field approximations?
2. In the case of sustained network growth, how can its large-time behavior be described?
3. What happens if one of the substances depletes?
In this paper, we answer these questions for linear stochastic reaction networks, i.e., the rates at which reactions occur depend linearly on the quantities of enzymes present. Before presenting our results, we first recall some previous work.
Related Literature.
An important early result is ref. 14, which considered a setting more general than ours and answered Question 1 in the affirmative on finite time intervals. This paper laid the groundwork for how network trajectories, when suitably rescaled, are approximated by the system’s mean-field ODE solution. The theory initiated in ref. 14 in fact goes well beyond this mean-field ODE picture; see, e.g., ref. 15.
Among reaction networks, a special case is when all the Xn increase with time or when the matrix B in the limiting mean-field linear ODE has nonnegative entries. This special case can be thought of as a continuous-time multitype branching process; see the classical reference (16). In ref. 17, the author carried out a detailed study of Question 2 under these conditions. Here, the linear mean-field ODE achieves maximum exponential growth along a unique eigendirection, and all initial conditions converge to this leading eigendirection by the Perron–Frobenius Theorem.
It seems that for some special matrices B, Question 2 can be answered by techniques related to those used in generalized urn models and stochastic approximations; see, e.g., the survey paper (18) (and references therein) and the techniques of ref. 19.
Two other papers to which our work is related are ref. 10, which explored the origin of exponential growth in (nonlinear) scalable reaction networks, modeling them as (stochastic) differential equations, and ref. 13, which studied large deviations in mean-field approximations in the large volume limit of statistical mechanics.
Our Results.
Theorems 1–3 answer Questions 1 and 2 above in the setting of linear stochastic networks. Our setup generalizes that in ref. 17 as we assume only exponential growth in norm, allowing the matrix B to have negative entries. For such a matrix, solutions of the ODE need not remain positive for all t ≥ 0, i.e., depletion can occur, at least on the level of the ODE. In Theorems 1 and 2, we prove that with high probability, mean-field approximation in the sense of ref. 14 holds for as long as the network trajectory remains positive. Our proof of this result follows techniques in ref. 17. For initial conditions for which the solutions remain positive for all time, Theorem 3 gives a complete characterization of the limiting behavior of network trajectories as t → ∞. We observe also that under certain conditions, ω-limit sets of the full system can be deduced from the dynamics of enzymes alone—a substantial dimension reduction when only a small fraction of the substances are enzymes.
Theorem 4 describes postdepletion dynamics, offering a partial answer to Question 3. As discussed above, depletion in reaction networks can be a consequential phenomenon. To our knowledge, it has not been studied before aside from ref. 10, which noted that depletion leads to different invariant measures. In other previous work, a number of authors, e.g., refs. 17 and 13, imposed conditions to avoid depletion.
When a substance depletes, one might think that the resulting dynamics are equivalent to those obtained by removing that substance from the equation altogether, but it is not that simple. The depletion of substance N on the level of the mean-field equation means that on average substance N is consumed faster than it is produced. When it is used up, reactions that rely on it as upstream material are paused, but these reactions will resume once substance N becomes available again, such as when it is produced by some other viable reactions. Thus, the depletion of one substance can cause a subset of reactions to become unreliable, in the sense that they may or may not go forward when their clocks ring depending on the availability of the “depleted substance” at that moment in time. In Theorem 4, we show that such dynamics, suitably rescaled, can be approximated by a (nonlinear) ODE that takes into consideration this availability. A crucial observation here is that following depletion, the rescaled network behaves like a slow–fast system.
Results
We begin with a description of the models considered in this paper. This is followed by statements of the main results.
A. Model Description.
We consider a stochastic reaction network described by a continuous-time Markov process
X(t) = (X1(t), …, XN(t)), t ≥ 0, on the nonnegative orthant, where N ≥ 1 is the number of substances in the network. For n = 1, 2, …, N, Xn(t) is the quantity of substance n present at time t.
There are K possible chemical reactions J1, …, JK, each involving a subset of the N substances. When Jk occurs, it instantaneously changes the value of Xn by an amount ak, n, i.e., Xn is replaced by Xn + ak, n for each n.
Here, ak, n can be positive, zero, or negative. In the case ak, n < 0, we say that substance n is upstream for reaction k. If ak, n > 0, we say that substance n is downstream for reaction k. In the case ak, n = 0, substance n is not involved in reaction k.
Associated with each Jk is a time-dependent exponential clock Rk = Rk(X). When this clock rings, reaction k occurs provided that all upstream substances are available in sufficient quantity, i.e., provided that Xn ≥ max{0, −ak, n} for all n. If this condition is not satisfied, reaction k does not occur. This ensures that Xn ≥ 0 at all times, an important biological constraint.
We view the substances involved in the definition of Rk as facilitating reaction Jk and refer to them as “enzymes” for this reaction. Notice that enzymes are themselves produced by the network.
We do allow substances to enter or leave the network. That is, it is not required that ∑n ak, n = 0 for each k. For example, we may have ak, n = 1 (or −1) and ak, m = 0 for all other indices m. In this case, the reaction Jk is to be thought of as a unit of substance n entering from (or dissipating to) the environment external to the network.
Equivalently to the above description, we can define X(t) through its Markov generator G. Let f be a bounded function on the state space. Then
(Gf)(X) = ∑k = 1K Rk(X) 1Ak(X) [f(X + ak) − f(X)], | [1] |
where ak = (ak, 1, …, ak, N) and
Ak = {X : Xn ≥ max{0, −ak, n} for all n}. | [2] |
Remark 1:
The case of rational ak, n can easily be reduced to the case of integer ak, n by changing the units of all of the Xn (e.g., multiplying by a common denominator of all the rational numbers involved). The case when some ak, n are irrational could lead to nonlattice distributions, which could possibly have different behaviors; we do not consider that here.
We have defined above a general stochastic reaction network. In this paper, we study the linear case, where for each reaction Jk, the clock rate has the form
Rk(X) = ∑n = 1N αk, n Xn,  αk, n ≥ 0.
It is easy to check that, by renaming the reactions, one may assume that each reaction k is fueled by exactly one enzyme Ek, so that Rk = αkXEk.
To summarize, the following is assumed throughout.
(*) We consider a linear stochastic reaction network defined by a Markov process X(t)=(X1(t),…,XN(t)), t ≥ 0, where Xn(t) is the quantity of substance n present at time t, and X(t) is changed by K reactions J1, …, JK according to the following rules: Each Jk is mediated by an enzyme Ek (which is one of the N substances); it carries an exponential clock with rate αkXEk. When the clock for Jk rings, Xn is changed instantaneously by an amount equal to ak, n, provided that there is sufficient upstream material for Jk to proceed.
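The rules above can be made concrete with a small simulation. The following is a minimal Python sketch, not part of the paper; the function `simulate`, the encoding of reactions as triples (αk, Ek, ak), and the toy two-substance network at the end are our own illustrative choices. It implements the exponential-clock mechanism together with the availability constraint Xn ≥ max{0, −ak, n}:

```python
import random

def simulate(N, reactions, x0, t_max, seed=0):
    """Gillespie-style simulation of a linear stochastic reaction network.

    `reactions` is a list of triples (alpha, E, a): reaction k carries an
    exponential clock with rate alpha * X[E]; when the clock rings, the
    jump vector a is applied only if it leaves every coordinate >= 0.
    """
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    history = [(t, tuple(x))]
    while t < t_max:
        rates = [alpha * x[E] for alpha, E, _ in reactions]
        total = sum(rates)
        if total == 0:          # no clock can ring; the network is dead
            break
        t += rng.expovariate(total)
        # pick the reaction whose clock rang, proportionally to its rate
        r, k = rng.uniform(0.0, total), 0
        while r > rates[k]:
            r -= rates[k]
            k += 1
        a = reactions[k][2]
        # availability constraint: X_n >= max(0, -a_{k,n}) for every n
        if all(x[n] + a[n] >= 0 for n in range(N)):
            for n in range(N):
                x[n] += a[n]
        history.append((t, tuple(x)))
    return history

# toy network: substance 0 replicates itself and converts into substance 1
rxns = [(1.0, 0, (1, 0)),    # * -> U0, enzyme 0
        (0.5, 0, (-1, 1))]   # U0 -> U1, enzyme 0
hist = simulate(2, rxns, [10, 0], t_max=2.0)
```

Because every jump is vetoed unless it keeps all coordinates nonnegative, the constraint X(t) ≥ 0 holds along the entire simulated trajectory.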
B. Stochastic Tracing of Mean-Field Solutions.
Let B = (bn1, n2) be the N × N matrix with entries
bn1, n2 = ∑k : Ek = n2 αk ak, n1, | [3] |
and set bn1, n2 = 0 when the sum is empty, i.e., when n2 is not an enzyme. Under positivity assumptions, it is reasonable to expect solutions of the initial value problem
dx/dt = Bx, x(0) = x0, | [4] |
to provide mean-field approximations of X(t). Below we make these ideas precise.
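To make the construction of B in Eq. 3 concrete, here is a short sketch (ours; the encoding of reactions as triples (αk, Ek, ak) and the two-substance example are hypothetical). Each reaction k contributes αk ak, n1 to the column indexed by its enzyme Ek:

```python
def mean_field_matrix(N, reactions):
    """Assemble B with b[n1][n2] = sum over reactions k with enzyme n2
    of alpha_k * a_{k, n1}; columns of non-enzymes stay zero."""
    B = [[0.0] * N for _ in range(N)]
    for alpha, E, a in reactions:
        for n1 in range(N):
            B[n1][E] += alpha * a[n1]
    return B

# toy network: substance 0 replicates itself and converts into substance 1
rxns = [(1.0, 0, (1, 0)), (0.5, 0, (-1, 1))]
B = mean_field_matrix(2, rxns)
```

For this toy network B = [[0.5, 0], [0.5, 0]]; the second column vanishes because substance 1 is not an enzyme.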
Let ℝN>0 denote the open positive orthant, i.e., ℝN>0 = {x ∈ ℝN : xn > 0 for n = 1, …, N}.
Given x0 ∈ ℝN>0, we say x(t) leaves ℝN>0 transversally at t = T > 0 if x(t) ∈ ℝN>0 for t < T, x(T) ∈ {xn = 0 and xk > 0 for k ≠ n} for some n, and x(t) meets {xn = 0} transversally at t = T. Recall that by definition, X(t) ≥ 0 for all t ≥ 0. We call T the first depletion time if it is the smallest t for which either Xn(t) = 0 for some n or the clock for a reaction rings but the reaction cannot go forward due to insufficient upstream material.
A first example of stochastic tracing is the classical result by Kurtz (14), which implies in our setting the result stated as Theorem 1 below.
Theorem 1.
(follows from ref. 14) Let x(t), t ∈ [0, T], be a solution of Eq. 4 with x(0) = x0.
- (a) Assume x(t) ∈ ℝN>0 for all t ∈ [0, T]. Then, for all ε, ε′ > 0, there exists L0 = L0(x0, T, ε, ε′) so that for every L > L0, with X(0) = Lx0,
P( sup0 ≤ t ≤ T ∥X(t)/L − x(t)∥ < ε ) > 1 − ε′. | [5] |
- (b) Assume x(t) ∈ ℝN>0 for t < T, and x(t) exits ℝN>0 transversally at time T. Then, for all t > T and for all ε′ > 0, there exists L0 = L0(x0, t, ε′) so that for every L > L0, with X(0) = Lx0, the first depletion time TL of X satisfies P(TL ≤ t) > 1 − ε′.
Theorem 1, the proof of which we omit, compares the rescaled dynamics of the stochastic reaction network with solutions of the ODE for finite time. Further assumptions are needed to extend these results to infinite time as we now discuss.
Let
λ1 = max{ℜ(λ) : λ ∈ Λ}, | [6] |
where Λ is the set of eigenvalues of B and ℜ(λ) is the real part of λ. Since X(t) deviates less, after rescaling, from its ODE mean-field solution when ∥X(t)∥ is large, it is natural, to facilitate stochastic tracing for infinite time, to assume
Condition 1 (*):
λ1 > 0.
A second condition, also necessary for traceability, is that the ODE solution remains in ℝN>0. We will, in fact, assume a little more.
For a solution x(t) of Eq. 4 with x(0) = x0, let ϕ(t) = x(t)/∥x(t)∥ be its projection to the unit sphere SN − 1. It is easy to check that the flow defined by Eq. 4 projects to a well-defined flow on SN − 1. Let ϕ0 = ϕ(0). Recall that the ω-limit set of ϕ0 is given by
ω(ϕ0) = {φ ∈ SN − 1 : ϕ(tj) → φ for some sequence tj → ∞}.
We say ϕ0 is ω-stable if for any open neighborhood U of ω(ϕ0), there is some δ > 0 so that for all ϕ0′ δ-close to ϕ0, ω(ϕ0′) ⊂ U.
Given x0 ∈ ℝN>0, we say the solution x(t) of Eq. 4 is potentially traceable (for all times) if
- (i) x(t) ∈ ℝN>0 for all t ≥ 0, and
- (ii) ϕ0 = x0/∥x0∥ is ω-stable and ω(ϕ0) ⊂ ℝN>0.
The following is our first result on infinite-time stochastic tracing.
Theorem 2.
Assume (*), and let x0 be such that x(t) is potentially traceable. Then, for all ε, ε′ > 0, there exists L0 = L0(x0, ε, ε′) so that for every L > L0, with X(0) = Lx0,
P( supt ≥ 0 ∥X(t)/L − x(t)∥/∥x(t)∥ < ε ) > 1 − ε′. | [7] |
The potential-traceability condition above can be seen as a relaxation of the Perron–Frobenius assumption: if the matrix B is strictly positive, i.e., bn1, n2 > 0 for all n1, n2, then (*) holds and all solutions in ℝN>0 are potentially traceable.
C. Asymptotic Behavior in the Case of Exponential Growth.
Next, we turn to the behavior of X(t) as t → ∞, discussing separately the asymptotic behavior of Φ(t):=X(t)/∥X(t)∥ and ∥X(t)∥. For Φ0 = X(0)/∥X(0)∥, the ω-limit set ω(Φ0) is defined as above.
Theorem 3.
Assume (*), and let B, x0, ε′, and L0 be as in Theorem 2. Then, the following hold with probability 1 − ε′:
- (a) ω(Φ0) is either
- (i) a (nonrandom) point,
- (ii) a random point, or
- (iii) a random manifold diffeomorphic to a torus.
- (b) For some k ≥ 1 depending on B, e−λ1tt−(k − 1)∥X(t)∥ converges to a random variable as t → ∞.
Theorem 3 has the following interpretation: For initial conditions X(0) = Lx0 where L > 0 is sufficiently large and x0 is such that x(t) is potentially traceable, Theorem 3(b) together with the fact that ω(Φ0) lies in the positive orthant implies that with probability close to 1, all substances in the network grow exponentially as t → ∞, with Xi(t) ∼ eλ1ttk − 1 for all i. Theorem 3(a) implies that the proportions of the substances present, i.e., the ratios of the Xi(t), either tend to a fixed configuration (scenario i) or a random configuration (scenario ii), or fluctuate in a periodic or quasiperiodic manner (scenario iii).
It seems plausible, even likely, that under the condition λ1 > 0, the following in fact holds for all initial conditions: With probability one, either one of the substances depletes in finite time, or the dynamics of X(t) as t → ∞ are as described in Theorem 3. A proof will require a different set of considerations than those in this paper.
Not surprisingly, ω-limit sets of stochastic trajectories are related to those of their mean-field ODEs. Leaving details for later, we comment here on the factors that determine the latter. An eigenvalue λ of B is called a leading eigenvalue if ℜ(λ) = λ1. Decomposing B into real Jordan blocks {Ji}, we say that Ji is a leading block if it is associated with a leading eigenvalue and call Ji a maximal leading block if it has the largest algebraic multiplicity among all leading blocks. Then, given ϕ0, ω(ϕ0) is determined by whether the eigenvalues associated with the maximal leading blocks are real or complex, the dimension and geometry of the subspaces corresponding to maximal leading blocks, and the location of ϕ0 in relation to these structures; the value k in part (b) is the algebraic multiplicity of the maximal leading blocks.
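The role of the algebraic multiplicity k of a maximal leading block can be seen in a small numerical check (ours, not from the paper), using the closed form of etB for a single 2 × 2 Jordan block, for which k = 2:

```python
import math

# For B = [[lam, 1], [0, lam]] (one real Jordan block, k = 2), the matrix
# exponential is exp(tB) = e^{lam*t} * [[1, t], [0, 1]] in closed form.
def flow(lam, t, x0):
    e = math.exp(lam * t)
    return (e * (x0[0] + t * x0[1]), e * x0[1])

lam, x0 = 2.0, (1.0, 1.0)
# e^{-lam*t} * t^{-(k-1)} * ||x(t)|| should converge as t grows (here to
# |x0[1]| = 1, the coefficient of the dominant direction)
vals = []
for t in (20.0, 40.0, 80.0):
    x = flow(lam, t, x0)
    vals.append(math.exp(-lam * t) * t ** (-1) * math.hypot(*x))
```

The scaled norms are approximately 1.051, 1.025, 1.013, approaching 1, which matches the normalization e−λ1tt−(k − 1)∥X(t)∥ in Theorem 3(b).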
Reduction to Network of Enzymes.
We will show that under certain conditions, the large-time behavior of X(t) is captured by a reduced model involving only the dynamics of enzymes. This is a significant dimension reduction when only a small fraction of the N substances act as enzymes. Writing x = (x̃, x̂),
where the first M substances (the coordinates of x̃) are enzymes and the remaining N − M are not, we observe that the matrix B defined in Eq. 3 takes the block form
B = [ B̃ 0 ; C 0 ], | [8] |
where B̃ = (bn1, n2)1 ≤ n1, n2 ≤ M and C = (bn1, n2)M < n1 ≤ N, 1 ≤ n2 ≤ M. Positivity issues aside,
dx̃/dt = B̃x̃ | [9] |
gives the mean-field approximation of the dynamics in the network of enzymes. We claim that this subnetwork captures the large-time behavior of the full system provided that the nonenzyme substances are always present in sufficient quantities.
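This claim about the block structure can be checked numerically. The sketch below (ours; the matrices are hypothetical, with M = 2 enzymes and one nonenzyme) verifies that the nonzero spectrum of B coincides with that of the enzyme block:

```python
import numpy as np

# Columns of B indexed by nonenzymes vanish, so B = [[Btilde, 0], [C, 0]]
Btilde = np.array([[2.0, 1.0],
                   [1.0, 2.0]])          # enzyme block; eigenvalues 1 and 3
C = np.array([[1.0, 0.0]])               # coupling of the nonenzyme to enzymes
B = np.block([[Btilde, np.zeros((2, 1))],
              [C,      np.zeros((1, 1))]])

eig_B = sorted(np.linalg.eigvals(B).real)        # nonzero part matches Btilde
eig_Bt = sorted(np.linalg.eigvals(Btilde).real)
```

The leading eigenvalue (here 3) and its block are unchanged by passing to B̃, which is what drives the dimension reduction.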
Corollary 1.
Assuming (*) and writing B as in Eq. 8, let x̃0 ∈ ℝM>0 be such that x̃(t) is potentially traceable with respect to Eq. 9. Let ω̃ = ω(x̃0/∥x̃0∥), and assume that the nonenzyme coordinates of the corresponding mean-field solutions remain positive for all t ≥ 0. Then the full system has an open set of potentially traceable solutions, the ω-limit sets associated to which are diffeomorphic to ω̃.
This corollary, the proof of which is left to the reader, follows immediately from the algebraic structures that determine ω-limit sets (as explained in the proof of Proposition 1), together with the fact that the leading Jordan blocks of B are the same as those of B̃.
D. Dynamics Following Depletion.
To motivate our result, consider the following situation. Let x0 ∈ ℝN>0 be such that x0 = X(0)/L for some large L, and let x(t) be the solution of Eq. 4 with x(0) = x0. We assume x(t) ∈ ℝN>0 for t < T0, with x(t) leaving ℝN>0 transversally at time T0. Theorem 1 implies that X(t)/L is well approximated by x(t) up to a little before time T0. We assume further that xN(T0) = 0 and xn(T0) > 0 for all n ≠ N, and let
Γ = {y ∈ ℝN − 1>0 : (B(y, 0))N < 0}, | [10] |
i.e., Γ consists of those y > 0 for which the vector field defining the ODE Eq. 4 is transversal to {xN = 0} at (y, 0) and points away from ℝN>0. Observe that Γ is open, and if Π is the orthogonal projection from ℝN to its first N − 1 coordinates, then Πx(T0) ∈ Γ.
Informal discussion.
When XN(t) = 0, reactions relying on substance N either as an enzyme or as upstream material cannot take place. Suppose further that no reaction produces substance N. Then, it is easy to see that XN(t) = 0 for all t > T0, and the dynamics of ΠX(t) are approximated by an ODE as in Eq. 4 with the set of reactions replaced by those not involving substance N. That is, the analysis is as before except that it now involves only N − 1 substances.
But the depletion of substance N does not imply that the remaining reactions cannot produce it. If substance N is produced, then the paused reactions can, in principle, resume, though it is reasonable to expect XN(t) to hover around 0, as the mean drift is for XN to deplete. Since we may assume Xn(t) = O(L) for n ≠ N, reactions k with Ek = N take place so rarely relative to other reactions that they can be ignored, but there may be a nontrivial number of reactions requiring substance N as upstream material, and these reactions may take place a nonnegligible fraction of the time when their clocks ring, enough to affect the course of events. One can thus interpret the situation as one in which the ability of the system to carry out the reactions requiring substance N is impaired but failure is not necessarily total, and that must be factored into the subsequent dynamics of X(t).
To formulate Theorem 4, we let Γ be as in Eq. 10. For y ∈ Γ, consider the Markov process Zy(s), s ≥ 0, on ℤ≥0 with generator
(Gyg)(z) = ∑k : Ek ≠ N αkyEk 1{z + ak, N ≥ 0} [g(z + ak, N) − g(z)], | [11] |
where g is a bounded test function on ℤ≥0. Note that Zy(s) depends solely on y and on network parameters, not on X(t). Assuming
GCD{|ak, N| : ak, N ≠ 0, Ek ≠ N} = 1, | [12] |
where GCD denotes greatest common divisor, the process Zy has a unique invariant probability measure μy. This is because for y ∈ Γ and large m, Zy has a negative drift from m, and Eq. 12 guarantees irreducibility, hence ergodicity by Foster’s Theorem. For simplicity, Eq. 12 is assumed throughout, though it is not a necessary condition; see Remark 2.
For 1 ≤ n1, n2 ≤ N − 1, we define the (N − 1)×(N − 1) matrix
B̄(y)n1, n2 = ∑k : Ek = n2 αk ak, n1 μy{z ≥ max(0, −ak, N)}. | [13] |
Theorem 4.
(a) Let Γ be as above. Then, the solution y(t) of the ODE
dy/dt = B̄(y)y | [14] |
exists and is unique for as long as y(t) ∈ Γ.
(b) Assume for some y0 ∈ Γ and T > 0 that y(t) ∈ Γ for all t ∈ [0, T]. Then, for any z ∈ ℤ≥0 and any ε, ε′ > 0, there exists L0 = L0(y0, z, T, ε, ε′) such that the following holds: Given X(t) with L > L0, let YL(t) = ΠX(t)/L. If YL(0) = y0 and XN(0) = z, then
P( sup0 ≤ t ≤ T ∥YL(t) − y(t)∥ < ε ) > 1 − ε′.
E. Examples.
Before we proceed to the proofs of our main results, we provide three examples to illustrate some of the main points of the present work. The first example is a specific stochastic reaction network illustrating Theorem 3(ii). The second example is the explicit computation of the mean-field ODE following depletion in a simple situation, while the third example illustrates multiple phenomena related to depletion.
Example 1
An example with random asymptotic direction of growth. For illustration, we present a specific example with 3 substances and 4 reactions, φ1, …, φ4. Below “*” denotes the environment external to the network, Ei is the enzyme facilitating reaction i, and Ui is a unit of substance i.
We find that the matrix
[15] as defined in Eq. 3, has the leading eigenvalue λ = 16 with a two-dimensional eigenspace E16 spanned by the vectors [1, 1, −1]T and [1, −1, 1]T. The other eigenvalue is −2. Sample trajectories are shown in Fig. 1.
Fig. 1.
Six sample trajectories of the stochastic reaction network in Example 1. The simulations started from the initial condition x0 = [1, 1, 1]T with t ∈ [0, 0.25] and L = 5,000. According to Theorem 3(ii), with high probability, a trajectory will converge to the plane E16, and after some initial uncertainty, it will tend to infinity along a random direction.
Example 2
Explicit computation of Eq. 14 in a special case. We compute explicitly the ODE Eq. 14 in the case where substance N depletes and
ak, N ∈ {−1, 0, 1} and Ek ≠ N for all k. | [16] |
Let B0+ and B0− be the matrices given by
(B0+)n1, n2 = ∑k : Ek = n2, ak, N ≥ 0 αk ak, n1,  (B0−)n1, n2 = ∑k : Ek = n2, ak, N = −1 αk ak, n1, | [17] |
1 ≤ n1, n2 ≤ N − 1. When their clocks ring, reactions k in the sum B0+ always occur, whereas those in B0− occur with probability μy{z ≥ 1} according to Theorem 4. We need therefore to compute μy{z ≥ 1}.
For the Markov process Zy(s), the rates of going from z to z + 1 and from z to z − 1 are r+ and r−, respectively, where
r+ = ∑k : ak, N = 1 αkyEk,  r− = ∑k : ak, N = −1 αkyEk,
and 0 cannot go to −1; note that r+ < r− for y ∈ Γ. A simple computation (which we leave to the reader) gives μy{z ≥ 1} = r+/r−.
It follows that when Eq. 16 holds, the ODE Eq. 14 is given by
dy/dt = (B0+ + (r+/r−)B0−) y.
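The value μy{z ≥ 1} = r+/r− can be checked directly: the chain is a birth–death process, so its stationary weights satisfy detailed balance and are geometric with ratio r+/r−. The sketch below (ours; the truncation level is an arbitrary numerical choice) computes the stationary mass above 0:

```python
# Birth-death chain on {0, 1, 2, ...}: up-rate r_plus from every state,
# down-rate r_minus from every state z >= 1.  Detailed balance gives
# pi(z) proportional to (r_plus/r_minus)^z, so mu{z >= 1} = r_plus/r_minus.
def mass_above_zero(r_plus, r_minus, cutoff=200):
    rho = r_plus / r_minus          # must satisfy rho < 1 (negative drift)
    weights = [rho ** z for z in range(cutoff)]
    total = sum(weights)
    return sum(weights[1:]) / total
```

For example, mass_above_zero(1.0, 2.0) returns 0.5 (up to a negligible truncation error), in agreement with r+/r− = 1/2.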
Example 3
Qualitatively different behaviors after depletion. A simple but important point is that knowing only the ODE describing the mean-field dynamics of a reaction network, one cannot know the properties, even qualitatively, of the Markov process after depletion. This is because B carries much less information than the network itself.
For example, let 𝒩 be an arbitrary network, and let 𝒩′ be obtained from 𝒩 by adding two more reactions JK + 1, JK + 2, with reaction rates αK + 1 = αK + 2 = 1, enzymes EK + 1 = EK + 2 = 1, and
aK + 1, a = p, aK + 1, N = 1, aK + 2, a = −p, aK + 2, N = −1, all other aK + 1, n = aK + 2, n = 0, | [18] |
for some p ≠ 0 and a < N. Note that the mean-field equations for the two networks are identical before depletion, because the outcomes of reactions K + 1 and K + 2 cancel each other in mean. But the ODE Eq. 14 can differ substantially upon depletion of substance N; reaction K + 2, which occurred with the same probability as K + 1 before depletion, may now occur less frequently. If, e.g., p ≫ 1 and no other substance depletes, then upon depletion of substance N, the combined effect of reactions K + 1 and K + 2 produces substance a in much larger quantity than it is consumed.
We present in Fig. 2 an example that illustrates a number of scenarios discussed in this paper. Consider a network of two substances; the matrix B describing its mean-field dynamics has a stable and an unstable eigendirection, each having a branch that points into ℝ2>0. Depending on which side of the stable eigendirection the initial condition lies, one of the following may occur: i) The ODE solution y(t) is potentially traceable; it stays in ℝ2>0 for all t > 0 and becomes asymptotically close to the unstable eigendirection, with ∥y(t)∥ → ∞ as t → ∞. ii) The solution curve exits ℝ2>0 through the x2-axis, i.e., X1 depletes.
In case (ii), let us assume for definiteness that the ODE solution y(t) exits transversally on the vertical axis, i.e., x1 = 0, with a downward drift for x2, and that the viable network reactions postdepletion do not change the downward trend for X2. We now modify the network as above (except that it is X1 that is depleted and a = 2).
Depending on the magnitude of p and consequently the failure rate of reaction K + 2 when its clock rings, the subsequent dynamics can cause X2(t) to grow or decrease exponentially with high probability. Indeed by choosing p large enough, X2(t) can grow faster than the unstable eigenvalue of B, i.e., it is possible for the depletion of a substance to lead to faster network growth!
If X2(t) decreases toward 0, several scenarios are possible: it can happen that X(t0) = (0, 0) for some t0 and the network stops forever; trajectories that do not reach (0, 0) may hover around (0, 0) for some time; under most conditions, they will eventually cross over to the other side of the stable eigendirection, “reviving” the network. For smaller values of ∥X(t)∥, trajectories of the Markov process are not described by mean-field dynamics, and a complete analysis is beyond the scope of the present paper.
Fig. 2.
Sample trajectories of the original and the modified network. For the original network of two substances, stable and unstable directions of B are indicated in red. Network trajectories (green and blue) exemplify scenarios (i) and (ii) in the text. Also shown is a sample trajectory (orange) of the modified network with the same initial condition as the blue one. The mean-field pictures in ℝ2>0 are the same for the two networks (but the trajectories of the modified network oscillate more around the mean). Furthermore, the modification can be chosen so that X2 grows exponentially upon depletion of X1, leading to qualitatively different behaviors between the blue and orange trajectories.
Main Ideas of Proofs
F. Geometry of Potentially Traceable ODE Solutions (Preliminaries).
Theorems 1 and 2 assert that under suitable conditions, trajectories of X(t) can be seen as stochastic perturbations of solutions of linear ODEs. We consider the initial value problem Eq. 4, where B is an arbitrary N × N matrix. As before, we let ϕ(t) = x(t)/∥x(t)∥ denote the projection of solutions of Eq. 4 to the unit sphere SN − 1. Recall that potential traceability (defined just before the statement of Theorem 2) entails two conditions: positivity and ω-stability of ϕ0. We begin with a discussion of the latter, giving a complete characterization of ω(ϕ0) and showing that for almost all initial conditions x0, ϕ0 is ω-stable.
Proposition 1
Given B, there is an exceptional set 𝒮 ⊂ ℝN, a subspace of dimension < N, such that for every x0 ∉ 𝒮, ϕ0 is ω-stable. Moreover, ω(ϕ0) is either
- i) a fixed point,
- ii) diffeomorphic to a circle on which the dynamics are periodic, or
- iii) diffeomorphic to a d-dimensional torus, 1 < d ≤ N/2, on which the dynamics are quasiperiodic.
The proof of Proposition 1 is elementary; it involves treating Jordan blocks individually and then combining the results. Details are given in SI Appendix, S1. The Perron–Frobenius picture, where all the entries of B are strictly positive and ω(ϕ0) is a fixed point, is well known. Cases (ii) and (iii) are associated with leading complex eigenvalues. We give a sense of how the invariant tori picture comes about:
Suppose the complex Jordan form of B is diagonal with eigenvalues λ ± iμ1, …, λ ± iμN/2. Let Vk denote the 2D subspace corresponding to the eigenvalue λ ± iμk, and assume for simplicity that the Vk are mutually orthogonal. Consider x0 with ∥x0∥ = 1, and denote its coordinates in Vk by (xk, 1, xk, 2). Let rSk be the circle of radius r in Vk, and let
T(x0) = r1S1 × ⋯ × rN/2SN/2, where rk = ∥(xk, 1, xk, 2)∥.
Then, the rescaled flow e−λ1tx(t) leaves invariant both the unit sphere in ℝN and T(x0). That is, SN − 1 is foliated by N/2-dimensional tori left invariant by the projected flow ϕ(t). It follows that for each ϕ0 ∈ SN − 1, ω(ϕ0) ⊂ T(ϕ0).
As to whether ω(ϕ0) is the whole invariant torus containing it or only part of it, note that restricted to this torus, ϕ(t) is a linear flow. Recall that a set {s1, …, sl} ⊂ {μ1, …, μN/2} is called rationally independent if c1s1 + ⋯ + clsl ≠ 0 for all (c1, …, cl) ∈ ℤl ∖ {0}. By a standard result in dynamical systems (20), the closure of any orbit of a linear flow on such a torus is a submanifold diffeomorphic to a torus of dimension d, where d is the cardinality of the largest rationally independent subset of {μ1, …, μN/2}.
For arbitrary x0 ∈ ℝN, since the linear flow commutes with scalar multiplication and ϕ(t) = x(t)/∥x(t)∥, ω(ϕ0) is determined by x0/∥x0∥. This completes our discussion of the invariant tori picture.
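The dependence of the orbit closure on rational independence can be illustrated with a toy linear flow on the 2-torus (our sketch; the frequencies are arbitrary choices):

```python
import math

def orbit_point(mus, t):
    # linear flow on the torus: theta_k(t) = t * mu_k  (mod 1)
    return tuple((t * m) % 1.0 for m in mus)

# rationally dependent frequencies (ratio 2): the orbit closes after t = 1,
# so its closure is a circle (d = 1)
p0 = orbit_point((1.0, 2.0), 0.0)
p1 = orbit_point((1.0, 2.0), 1.0)

# an irrational ratio (here sqrt(2)) gives an orbit that never closes; its
# closure is the whole 2-torus (d = 2)
q = orbit_point((1.0, math.sqrt(2)), 1.0)
```

Here p0 and p1 coincide, witnessing the periodic case, while the orbit with the irrational frequency ratio never returns to its starting point.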
We comment on the geometry of B in relation to traceability. In addition to the exceptional subspace in Proposition 1, a second subspace of ℝN, consisting of initial conditions whose ω-limit sets are not those of typical nearby points, is also relevant. Both subspaces are identifiable given B.
For intuition on these subspaces, write ℝN = E+ ⊕ E*, where E+ is the sum of the generalized eigenspaces associated with the leading eigenvalues of B and E* is the subspace associated with all other eigenvalues. Then, x0 ∈ E* cannot be ω-stable, because ω(ϕ0) is trapped inside E* while a small perturbation of x0 will give rise to an ω-limit set contained in E+. It follows that the exceptional set contains E*, i.e., the set of x0 = (x0+, x0*) ∈ E+ ⊕ E* with x0+ = 0.
To properly identify these subspaces, however, we need to consider also polynomial growth. Suppose for definiteness that B consists of a single Jordan block with real eigenvalue λ of algebraic multiplicity k > 1, i.e., there is a basis of unit vectors {u1, u2, …, uk} such that
Bu1 = λu1 and Buj = λuj + uj − 1 for j = 2, …, k.
Then, for x0k ≠ 0, ω(ϕ0) = sign(x0k)u1, where x0 = ∑jx0juj. For x0k = 0, ϕ0 cannot be ω-stable, because perturbing to x0k > 0 gives ω(ϕ0) = u1, while perturbing to x0k < 0 gives ω(ϕ0) = −u1.
Turning next to positivity, a precondition is clearly that x0 ∈ ℝN>0. This does not imply that x(t) ∈ ℝN>0 for all t ≥ 0, and even when ω(ϕ0) lies in ℝN>0, x(t) may venture outside of ℝN>0 before coming back in, unlike the Perron–Frobenius situation studied in ref. 17. When x0 is potentially traceable, however, all x0′ in a neighborhood of it are as well, so when the set of potentially traceable initial conditions is nonempty, it is an open set.
Observe that when the leading eigenvalues consist of a single pair of numbers λ ± iμ, there can be no potentially traceable trajectories, because x(t) cannot remain in for all time. Indeed, let mr and mc denote the maximal algebraic multiplicity among the real and complex leading blocks, respectively, and let be the eigenspace corresponding to the real leading blocks with the largest algebraic multiplicity.
Lemma 1
When both real and complex leading blocks are present, the set of potentially traceable initial conditions is nonempty when and mr ≥ mc. Furthermore,
- i)
when mr > mc, ω(x0) is a point;
- ii)
when mr = mc, ω(x0) is diffeomorphic to a D-dimensional torus, where D is at most the number of complex leading blocks with multiplicity equal to mc.
We omit the proof, which is straightforward though notationally tedious. The following example in three dimensions (3D) captures the main idea. Suppose B has a single leading real eigenvalue and a pair of complex conjugate leading eigenvalues. Assume for definiteness that λ1 > 0. Letting Vr and Vc denote the real and complex eigenspaces of B, an initial condition x0 = (x0r, x0c) with |x0r|> ∥x0c∥ gives rise to a solution x(t) that stays inside a cone around Vr, i.e., it projects to a neighborhood of in , as ∥x(t)∥ → ∞. Such a solution can be positive if Vr points into and |x0r|/∥x0c∥ is sufficiently large. It is easy to see that ω(ϕ0) is then a circle centered at , and ϕ0 is ω-stable.
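The 3D example above can be made concrete. In the sketch below (a hypothetical matrix chosen for illustration), B is the direct sum of the scalar λ1 and a rotation–scaling block with the same real part λ1 and rotation speed μ; the projected solution ϕ(t) stays at a fixed distance from the pole (1, 0, 0), so its ω-limit set is precisely the circle described in the text.

```python
import numpy as np

lam1, mu = 1.0, 2.0
x0 = np.array([1.0, 0.3, 0.1])     # |x0r| large relative to ||x0c||

def phi(t):
    """Normalized solution of x' = Bx with B = diag(lam1) (+) [[lam1,-mu],[mu,lam1]],
    using the explicit solution: both blocks grow like e^{lam1 t}."""
    c, s = np.cos(mu * t), np.sin(mu * t)
    x = np.exp(lam1 * t) * np.array([x0[0],
                                     c * x0[1] - s * x0[2],
                                     s * x0[1] + c * x0[2]])
    return x / np.linalg.norm(x)

# distance of phi(t) from the pole (1,0,0) is constant: the projected
# trajectory moves on the omega-limit circle
rad = [np.linalg.norm(phi(t)[1:]) for t in np.linspace(0.0, 10.0, 50)]
print(round(min(rad), 6), round(max(rad), 6))
```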
Since solutions do leave —and that is the topic of Theorem 4—we record here a result that says that when x(t) leaves , it is typical for it to exit transversally as defined earlier, before the statement of Theorem 1.
Lemma 2
Let be the nth component of B, and assume that Bn({xn = 0}) ≠ {0} for all n. Then, there is a set that is the union of N immersed codimension-1 submanifolds of such that for all , if x(t) leaves , it does so transversally.
Proof:
There are two ways x(t) can leave nontransversally. One is through the interior of for some n; the other is through {xn = xk = 0} for some n, k with n ≠ k.
To treat the first, since Bn|{xn = 0} is a linear map the image of which is ≠{0}, its kernel Zn := {x ∈ {xn = 0}:Bn(x)=0} is a codim 1 subspace of {xn = 0}. No orbit can leave through z ∈ {xn = 0} with Bn(z)> 0, and orbits that leave through z ∈ {xn = 0} with Bn(z)< 0 leave transversally assuming z ∉ {xk = 0} for every k ≠ n. Nontransversal exits can occur only, therefore, for , where , and that is an immersed codimension-1 submanifold of .
To finish, {e−Btz : z ∈ {xn = xk = 0}} is also an immersed codimension-1 submanifold of .
G. Dynamics of Exponentially Growing Networks (Proofs of Theorems 2 and 3).
Our strategy for proving Theorems 2 and 3 given a potentially traceable ODE solution x(t) consists of the following two steps: The first is to apply Theorem 1 to ensure that X(t), rescaled, follows x(t) until the projection of x(t) to the unit sphere is sufficiently close to its ω-limit set (see Proposition 1). The second is to control deviations from x(t) for all times from there on.
Our techniques for large-time deviation control follow closely (17), which assumes a strong condition on the matrix B but gives a detailed analysis of the dynamics associated with nonleading eigenvalues; those are the techniques we have emulated. A departure from (17) is that with no assumptions on the structure of leading Jordan blocks, network trajectories will, in general, have a positive probability of reaching the boundary of even when the corresponding ODE solution is potentially traceable. Our analysis of deviation control, therefore, has to be conditioned on nondepletion.
We assume throughout Condition (*), i.e.,
The following notation is used:
–Eλ is the generalized eigenspace corresponding to the eigenvalue λ,
–Pλ is the projection to Eλ with kernel ∑μ ≠ λEμ, so that I = ∑λ ∈ ΛPλ.
Variance bounds for certain vector-valued martingales.
We make a small, technical modification of the definition of X(t) to allow the first jump that makes one coordinate nonpositive to take place, and then, the process is stopped forever. More precisely, we define Y(t) by the generator
Let τ be the stopping time defined by
| [19] |
Then, with X(0)=Y(0), we have X(t)=Y(t) for all 0 ≤ t < τ. If τ < ∞, define Y(t)=Y(τ) for all t ≥ τ.
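The stopped process Y(t) can be sketched as a Gillespie simulation in which the first jump that makes a coordinate nonpositive is allowed to occur, after which the process is frozen forever. The two-substance network, its rates, and its stoichiometry below are hypothetical, chosen only to illustrate the construction of the stopping time τ in Eq. 19.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-substance linear network (illustration only):
#   reaction 1: consumes one X2, produces one X1, rate 0.8 * X1 (X1 the enzyme)
#   reaction 2: consumes one X1, produces two X2, rate 0.5 * X2 (X2 the enzyme)
stoich = np.array([[1, -1], [-1, 2]])
rates = lambda x: np.array([0.8 * x[0], 0.5 * x[1]])

def run_stopped(x0, t_max):
    """Gillespie simulation of Y(t): the first jump that makes a coordinate
    nonpositive is taken, and the process is then stopped (tau of Eq. 19)."""
    x, t = np.array(x0, float), 0.0
    while True:
        r = rates(x)
        total = r.sum()
        if total == 0.0:
            return x, t_max
        dt = rng.exponential(1.0 / total)
        if t + dt > t_max:
            return x, t_max           # no depletion before t_max
        t += dt
        k = rng.choice(len(r), p=r / total)
        x = x + stoich[k]
        if (x <= 0).any():            # tau reached: freeze forever
            return x, t

x, tau = run_stopped([20, 20], 5.0)
print(x, tau)
```

Note that reaction 1 has a positive rate even when X2 = 0, which is exactly why the boundary jump must be handled explicitly.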
We now use a standard approach to multitype branching processes following the proof in ref. 17, Theorem 3.1, except that in ref. 17, τ = ∞ and we need to index the martingales below with the time t ∧ τ := min{t, τ} as opposed to just t.
For , we define the process
| [20] |
where Y(0)=z. For z fixed, is a vector-valued martingale. Indeed, if t < τ, then the martingale property follows from a direct computation as in ref. 17, Lemma 9.2. If t ≥ τ, then the process is constant in t and so the martingale property is trivial.
What makes the stochastic tracing possible is exponential network growth together with the following variance bounds:
Lemma 3
[cf. (17)] Let λ ∈ Λ. Then, given δ > 0, there exists C4 independent of z such that
- (a)
If ℜ(λ)≤λ1/2, then
[21] - (b)
If ℜ(λ)> λ1/2, then
[22]
A proof sketch for this result is given in SI Appendix, S2.
Let us fix also a constant C5 depending only on B so that for any λ ∈ Λ,
| [23] |
Lemma 3 will be used a number of times. First, we use it to reduce the problem of deviation control to directions associated with leading eigenvalues. As before, we define
Lemma 4
Let λ′ be such that where λ2 := max{ℜ(λ):λ ∈ Λ, λ ≠ λ1}. Then, there is a constant R for which the following holds for all eigenvalues λ with ℜ(λ)< λ1: Given ε, ε′> 0, there is L0(ε, ε′) so that for all L > L0, if z satisfies
[24] for some v ∈ E+ with ∥v∥=1, then
[25]
The proof of this result, which is quite similar to that of Proposition 2 below, is included in SI Appendix, S3.
Proof of Theorems 2 and 3 in a Special Case.
For expositional clarity, we first present—assuming Lemmas 3 and 4—a complete proof of Theorems 2 and 3 for the case
where I is the identity matrix, i.e., all leading eigenvalues are real, and no leading Jordan block has off-diagonal 1s. There is no restriction on the dimension of Eλ1.
For and ε > 0, writing , we use
to denote the ε-cone centered at v.
Proposition 2
Assume (**), and let be such that ∥v∥=1. Then, for every ε, ε′> 0 for which , there exists L0 = L0(ε, ε′) so that for all L > L0,
Proof:
We further decompose Pλ1, the projection onto Eλ1, as follows. Let v1 = v, and let {v1, …, vk1} be an orthonormal basis of Eλ1. For i = 1, …, k1, let Πi denote the projection of to the line spanned by vi, so that
[26] We have shown in Lemma 4 that deviations of X(t) from Eλ1 are inconsequential. We now examine more closely projections in the first sum above.
With z = X(0), we have ∥Πi∥≤C5 for all i,
[27] while
[28] Moreover, with as defined in Eq. 20, we have, for all i,
[29]
[30] Thus for i = 2, …, k1,
[31] By Doob’s maximal inequality together with Eq. 22, this last quantity is
which can be made arbitrarily small by choosing L0 large. Similarly,
can be made arbitrarily small.
Choosing L sufficiently large, we find
[32] We now use the condition to argue that X(t) must violate ∥X(t)−Leλ1tv∥< εLeλ1t for some time before τ is reached. Hence, t ∧ τ in Eq. 32 can be replaced by t, which is the desired result.▫
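The Doob maximal inequality step used above can be illustrated on the simplest martingale. The sketch below (ours, not from the paper) compares the empirical probability that a ±1 random walk ever exceeds a level a by time N with the L2 bound E[M_N²]/a² = N/a².

```python
import numpy as np

rng = np.random.default_rng(3)

# Doob's L2 maximal inequality for a martingale M_n (simple +-1 walk):
#   P( max_{n <= N} |M_n| >= a )  <=  E[M_N^2] / a^2  =  N / a^2
N, a, trials = 400, 60, 20000
steps = rng.choice([-1, 1], size=(trials, N))
M = np.cumsum(steps, axis=1)                     # trials independent walks
emp = (np.abs(M).max(axis=1) >= a).mean()        # empirical sup probability
bound = N / a**2
print(round(emp, 4), round(bound, 4))
```

The bound is far from tight here (a is 3 standard deviations), which is harmless: in the proof the bound only needs to be made small by taking L0 large.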
Proof of Theorem 2:
Theorem 2 is a corollary of Theorem 1 and Proposition 2. Let x(t) be the ODE solution with x(0)=x0. Since we assume potential traceability of x0 and (**), the projected solution ϕ(t) converges to some . We pick so that , let T be large enough that , and use Theorem 1 to guarantee that X(t)/L follows x(t) up to time T. When applying Theorem 1, choose ε small enough to guarantee with high probability. We are now in a position to apply Proposition 2 to ensure tracing for all t ≥ T on a set of probability > 1 − ε′.▫
Proof of Theorem 3:
Recall that the martingale
is bounded in L2 by Eq. 22. Thus, is uniformly integrable, so Doob’s martingale convergence theorem implies the almost-sure convergence of as t → ∞ to a random variable . By Proposition 2, is bounded away from 0 off a set of arbitrarily small probability.
We have argued in the proofs of Theorem 2 and Proposition 2 that assuming (**) and the potential traceability of x0, can be made as small as we wish by choosing L0 large. The statements below are true on a set of probability 1 − ε′, namely, for those sample paths for which τ = ∞, and are not in the set on the left-hand side of Eq. 25.
Let Π denote projection to ⊕λ ∈ Λ, λ ≠ λ1Eλ. Then,
As t → ∞, the first term on the right side converges to , and the second converges to 0 by Lemma 4. Thus, e−tλ1X(t) converges to a nonzero point in Eλ1. It follows that Φ(t):=X(t)/∥X(t)∥ converges a.s. to a point in .
In general, limΦ(t) is a random point. In the case where the leading Jordan block is unique, consists of a single point, namely the unique attractive fixed point for the deterministic flow ϕ(t). This must, therefore, be the point to which Φ(t) converges.▫
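The almost-sure convergence of e−tλ1X(t) in Theorem 3 can be observed on the simplest exponentially growing network, a pure-birth (Yule) process; this toy example is ours, not the paper's. The scaled process e−λtX(t) is a nonnegative martingale and settles down to a random limit W > 0.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0

def yule(n0, n_max):
    """Yule (pure-birth) process: each unit splits at rate lam.
    Returns jump times and counts; e^{-lam t} X(t) is a martingale."""
    t, n = 0.0, n0
    ts, ns = [0.0], [n0]
    while n < n_max:
        t += rng.exponential(1.0 / (lam * n))   # next split
        n += 1
        ts.append(t)
        ns.append(n)
    return np.array(ts), np.array(ns)

ts, ns = yule(100, 20000)
w = np.exp(-lam * ts) * ns                      # scaled process
# late-time values of w barely move: the martingale has converged
print(round(w[-1], 3))
```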
Complex Leading Eigenvalues.
As noted earlier, in order to have potentially traceable ODE solutions, maximal leading blocks with complex eigenvalues must be accompanied by a maximal leading block with real eigenvalue. The following example is illustrative of the phenomenon:
| [33] |
Let u* be such that Bu* = λu*, and let Ecx be the eigenspace associated with the block J. We assume and identify Ecx with . Let x0 = (x0*, z0)∈⟨u*⟩⊕Ecx be such that x0* > 0, and assume |z0|/x0* is small enough to ensure that x(t) is potentially traceable. For such an x0, we claim that if for L large enough, then with high probability stays close to x(t) for all t > 0.
We write X(t)=(X*(t),Z(t)), where Z(t) is the component in Ecx, and let be as defined in Eq. 20. Focusing on Z(t), we see that since z(t)=z0e(λ + iμ)t and , for any given η > 0,
Arguing as above, i.e., by Doob’s maximal inequality together with Eq. 22, this quantity can be made small by choosing L large. This proves Theorem 2.
To prove Theorem 3, the same martingale argument as before tells us that for a set of sample paths of probability close to 1, i) e−λtX*(t) converges a.s. to a random variable , and ii) provided that z0 ≠ 0, e−(λ + iμ)tZ(t) converges a.s. to a random variable . For these good sample paths, e−λtZ(t) traces out a circle of radius in Ecx as t → ∞. Projected to , i) and ii) then imply that with high probability, e−λtX(t) traces out a random circle if Ecx is perpendicular to u*, and a random ellipse otherwise.
In the case of multiple maximal blocks with complex eigenvalues, projected network trajectories Φ(t) converge not to a random point but to a random manifold diffeomorphic to a D-dimensional torus, following their mean-field ODE solutions.
A complete proof of Theorems 2 and 3, which includes a discussion of polynomial growth and multiple leading Jordan blocks, is given in SI Appendix, S4.
H. Dynamics Following Depletion.
We explain here the ideas in the proof of Theorem 4.
Slow–fast dynamics.
Given X(t) and L > 0, the process
where Π is the orthogonal projection from to its first N − 1 coordinates, is a slow–fast system: we think of YL(t) as the “slow process” because it moves a distance with each clock ring while Z(t) moves a distance O(1). Freezing the slow coordinate at y, the fast dynamics are, modulo a factor of L in their clock rates, described by the Markov process Zy(s),s ≥ 0, on as defined in Dynamics Following Depletion.
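The slow–fast scaling can be sketched on a toy pair (our illustration; the rates are hypothetical): all clocks ring at rate O(L), the fast coordinate moves O(1) per ring, and the slow coordinate moves only 1/L per ring, so it drifts at speed O(1) in real time.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 500

# Toy slow-fast pair (not the paper's network):
# fast Z is a birth-death chain, birth rate 2 + y, per-particle death rate 1;
# slow Y jumps by 1/L at unit rate. All clocks are sped up by the factor L.
def simulate(T):
    y, z, t = 1.0, 0, 0.0
    while t < T:
        rates = np.array([2.0 + y,       # fast: z -> z + 1
                          float(z),      # fast: z -> z - 1
                          1.0])          # slow: y -> y + 1/L
        total = rates.sum()
        t += rng.exponential(1.0 / (L * total))
        k = rng.choice(3, p=rates / total)
        if k == 0:
            z += 1
        elif k == 1:
            z -= 1
        else:
            y += 1.0 / L
    return y, z

y, z = simulate(1.0)                     # slow drift is exactly 1, so y is near 2
print(round(y, 2), z)
```

Over [0, T] the fast coordinate makes O(L) moves while the slow one changes by O(1), which is the separation of scales exploited in the proof of Theorem 4.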
Theorem 4 asserts that the limiting behavior of the slow process is governed by solutions of a specific ODE. Our first order of business is to prove the existence and uniqueness of solutions of this ODE. That requires controlling the regularity (in y) of the invariant probability distributions μy of the processes Zy(s) that comprise the fast dynamics.
Lemma 5
Every has an open neighborhood U = U(y0) on which the following hold.
- (a)
There exists κ = κ(U) and C = C(U) so that for any y ∈ U, any , and for any t ≥ 0,
- (b)
The functions y ↦ μy(z) are Lipschitz on U for all .
A proof of this lemma is given in SI Appendix, S5. In brief, we apply a theorem of Harris to verify that Zy(s) has a uniform spectral gap for y ∈ U. Following (21), this boils down to showing the existence of a Lyapunov function V (in the stochastic sense) together with a Doeblin-type condition on the set {V < C} for some C. Checking these conditions is straightforward. The regularity of μy, meaning the existence of a constant C0 such that for all ϕ with ∥ϕ∥< ∞ and for all y1, y2 ∈ U,
| [34] |
is then deduced from the presence of a spectral gap following the argument in ref. 22.
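The exponential convergence in Lemma 5(a) can be checked numerically on a concrete fast chain. Below, Zy is taken (for illustration only) to be an M/M/∞-type birth–death chain with birth rate b = b(y) and unit per-particle death rate, whose invariant law is Poisson(b); the total-variation distance to μy decays geometrically, consistent with a spectral gap.

```python
import math
import numpy as np

# Fast chain Z_y for a frozen slow coordinate y (illustration only):
# birth rate b, per-particle death rate 1; invariant law is Poisson(b).
b, n = 3.0, 60                       # birth rate; truncation of the state space

Q = np.zeros((n, n))                 # generator on {0, ..., n-1}
for z in range(n - 1):
    Q[z, z + 1] = b                  # birth
for z in range(1, n):
    Q[z, z - 1] = z                  # death
np.fill_diagonal(Q, -Q.sum(axis=1))

Lam = -Q.diagonal().min() + 1.0      # uniformization constant
P = np.eye(n) + Q / Lam              # row-stochastic one-step kernel

mu = np.array([math.exp(-b) * b**k / math.factorial(k) for k in range(n)])
mu /= mu.sum()                       # truncated Poisson(b)

p = np.zeros(n)
p[0] = 1.0                           # start at z = 0
tv = []
for _ in range(1000):                # time 1000 / Lam in the original clock
    p = p @ P
    tv.append(0.5 * np.abs(p - mu).sum())
print(round(tv[99], 4), tv[-1] < 1e-4)
```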
An immediate corollary of Lemma 5(b) is that the function
| [35] |
where D(y) is as defined in Dynamics Following Depletion is locally Lipschitz. The first assertion in Theorem 4, namely the existence and uniqueness of solutions of Eq. 14, follows.
Remark 2:
We remark that Theorem 4 continues to hold without the assumption Eq. 12. This assumption is used to guarantee the uniqueness of μy. If GCD =d > 1, then there are d invariant measures μy, i, i = 0, …, d − 1, with μy, i supported on . But because the sets of possible jumps from l and l + i are the same, μy, i(l + i)=μy, 0(l) for all and i = 0, …, d − 1. Thus, we obtain the same ODE for all i = 0, …, d − 1, and the proof of Theorem 4 we give is equally valid in this case.
Convergence of YL(t).
First, we associate YL(t) with a measure νL on Skorokhod space. Let be the piece of ODE trajectory in Theorem 4, and let . Let denote the closure of the ςd neighborhood of S for ς < 1, so that is a compact subset of containing S in its interior.
For L ≫ 1, we consider XL(t) for which
satisfies YL(0)=y0 and ZL(0)=z for some fixed . The superscript L is omitted when there is no ambiguity. We introduce the following stopping time: Let , where
and write
| [36] |
as well as . This stopping time ensures in particular that for all t ∈ [0, T], , with c, C > 0 depending only on . For much of the proof, we will be working exclusively with these stopped processes. In the interest of notational simplicity, we will omit the “hat” in and .
The process YL(t),t ∈ [0, T], generates a measure on the Skorokhod space which we denote by νL. We will show
| [37] |
where ν∞ is the measure on supported on the solution y(t),t ∈ [0, T], of the initial value problem Eq. 14 and “⇒” stands for weak convergence of measures as L → ∞. The theorem claims convergence of νL to ν∞ in probability, but since ν∞ is supported on one point, weak convergence is equivalent to convergence in probability.
A usual first step is to establish tightness.
Lemma 6
The sequence νL is tight.
A proof of Lemma 6 is given in SI Appendix, S6.
To identify the limit point, we use the martingale characterization of Stroock and Varadhan (23, 24). In our case, this method can be summarized as follows. Let denote the set of C2 functions on with support on , and let be the operator defined by , where is as defined in Eq. 35. Then, is the infinitesimal generator of the solution curve for Eq. 14, which we view as a (degenerate) Markov process.
Applying the martingale characterization of Stroock and Varadhan to this very simple process, we see that ν∞ is the unique probability measure ν on satisfying
(i) ν(ξ : ξ(0) = y0) = 1
-
(ii) for all , the process
with respect to the probability measure dν(ξ) is a martingale on [0, T].
To prove the weak convergence Eq. 37, then, it suffices to verify that in the limit L → ∞, the measures νL satisfy (i) and (ii). In our case, (i) obviously holds. Having established tightness, to prove (ii), it suffices to show that for all 0 ≤ t1 < ... < tl < tl + 1 < T and for all test functions h1, …, hl,
| [38] |
where IL is defined as
| [39] |
see, e.g., ref. 25, Theorem 8.2.
The following is our main technical lemma:
Lemma 7
For any function , and for any ε > 0, there is a positive number R so that for any t ∈ [0, T], there is L0 = L0(A, ε, t) so that for all L > L0,
and
where M(Y)=M(Y)(t) is given by
Proofs of Lemma 7 and Theorem 4, which follows quite readily from Lemma 7, are given in SI Appendix. We indicate below the main steps in the proof of the technical lemma:
-
1.
Reduction to a local-in-time problem: Fix α = −3/4 and t ∈ [0, T]. We divide the interval [0, t] into subintervals Im = [tm, tm + 1], tm = mLα, m = 0, ..., tL−α − 1. Define
so that
To show as L → ∞, we will prove that for some α′< α,
[40] uniformly in m and in X(tm).
-
2.
Approximation by a process with constant clock rates: Recall that for X(t), the clock rate for reaction k, k = 1, ..., K is αkXEk(t). We wish to approximate X(t) with a process X′(t) with the properties that X′(tm)=X(tm), and the clock rates for X′(t) are constant on each Im. If, e.g., we set the clock rates for X′(t) to be
on Im, then it can be shown that with probability close to 1, X′(t)=X(t) ∀t ∈ Im.
-
3.
Completing the proof: Continuing to work with one Im at a time and defining Y′(t) and Mm(Y′) analogously to Y(t) and Mm(Y), one deduces from the proof of Item 2 that
is very small, so the problem is reduced to estimating . Observe first that by Lemma 5(a), Z(s) converges very quickly to its invariant distribution, so that assuming Z(tm) is distributed according to this distribution (instead of Z(tm)=z for some z) incurs only a negligible error. Assuming that, the result follows from a few fairly straightforward approximations.
Discussion
In the context of linear stochastic reaction networks, Theorems 2 and 3 in this paper can be seen as a generalization of the results in ref. 17, which imposed a global condition of Perron–Frobenius type on the mean-field ODE system to ensure that all rescaled solutions converge to a single attractive fixed point. We replaced this global condition with a local one, namely the potential traceability of the ODE solution. This enabled us to remove all assumptions on the linear network while limiting the applicability of our results to suitable initial conditions. Another related result is (14), which treated stochastic tracing for a finite time under general conditions. We extended the tracing to infinite time for our networks, taking advantage of the simplicity of the ω-limit sets for the rescaled ODEs.
All this can be seen as preparation for dealing with nonlinear reaction networks, where linearized solutions of (nonlinear) ODEs replace the linear system considered here. Overcoming the effects of nonlinearities and the time-dependent nature of the linearized ODE, one could hope for similar results under appropriate conditions on large-time behaviors.
Scalable reaction networks provide a suitable framework for this generalization; see ref. 10. To leverage existing ergodic theory to study large-time dynamics of biological networks, one is confronted immediately with the following issue: biological entities grow, while mathematical theories tend to require some form of stationarity; sustained growth of biomass and stationarity seem to be at odds with one another. Scalable networks permit us to get around that. Roughly speaking, these are networks for which the time evolution of ∥X(t)∥ can be decoupled from the dynamics of proportions among the substances present. Mathematically, this translates into the fact that the mean-field ODE on projects to a well-defined flow on . Existing dynamical systems theory such as heteroclinic behavior can then be applied, in principle, to the projected flow, as can smooth ergodic theory tools such as Lyapunov exponents, and the resulting phenomena can, in principle, be translated into stochastic network behavior assuming the exponential growth of ∥X(t)∥. We qualified our statements above with an “in principle” because we do not know what types of dynamics are natural for scalable reaction networks. That is another question to be explored.
In a different direction, Theorem 4 of the present paper touches upon a topic that we believe deserves greater attention than it has received: dynamics following depletion of one of the substances. For reaction networks, depletion is a natural phenomenon, possibly one of the most natural bifurcations, or “phase transitions.” We treated only the finite-time problem, but with deviations from the mean-field equation dying out quickly, it is likely that one can extend the mean-field approximation, leading possibly to descriptions of what may typically happen next. More ambitious are conditions for scenarios such as eventual replenishment of the depleted substance, recurring depletions, and cascading failures. Though not new to general networks theory, these phenomena have not been systematically and analytically explored in the context of reaction networks.
Finally, there are many examples of reaction networks that, with varying degrees of realism, depict real-world situations. A number of such networks are mentioned in the Introduction. Applications of a rigorous mathematical theory to these examples could be valuable.
Methods
Main ideas of the proofs are explained above, and technical proofs are given in SI Appendix.
Supplementary Material
Appendix 01 (PDF)
Acknowledgments
This research was partially supported by NSF Grants DMS-1901009 (to L.-S.Y.), DMS-1952876, and DMS-2154725 (to P.N.). Part of the work was done when the authors held visiting positions at the Institute for Advanced Study, Princeton.
Author contributions
P.N. and L.-S.Y. designed research, performed research, contributed new reagents/analytic tools, and wrote the paper.
Competing interest
The authors declare no competing interest.
Footnotes
Reviewers: C.L., Università degli Studi di Roma Tor Vergata; and L.R.-B., University of Massachusetts Amherst.
Data, Materials, and Software Availability
There are no data underlying this work.
References
- 1. Feinberg M., Foundations of Chemical Reaction Network Theory, Applied Mathematical Sciences (Springer, 1988), vol. 202.
- 2. Duarte N. C., et al., Global reconstruction of the human metabolic network based on genomic and bibliomic data. Proc. Natl. Acad. Sci. U.S.A. 104, 1777–1782 (2007).
- 3. Barenholz B., et al., Design principles of autocatalytic cycles constrain enzyme kinetics and force low substrate saturation at flux branch points. eLife (2017).
- 4. Kondo Y., Kaneko K., Growth states of catalytic reaction networks exhibiting energy metabolism. Phys. Rev. E 84, 011927 (2011).
- 5. Becks L., Hilker F. M., Malchow H., Jürgens K., Arndt H., Experimental demonstration of chaos in a microbial food web. Nature 435, 1226–1229 (2005).
- 6. Posfai A., Taillefumier T., Wingreen N. S., Metabolic trade-offs promote diversity in a model ecosystem. Phys. Rev. Lett. 118, 028103 (2017).
- 7. Allesina S., Pascual M., Network structure, predator–prey modules, and stability in large food webs. Theor. Ecol. 1, 55–64 (2008).
- 8. Keeling M. J., Eames K. T. D., Networks and epidemic models. J. R. Soc. Interface 2, 295–307 (2005).
- 9. ten Raa T., The Economics of Input–Output Analysis (Cambridge University Press, 2006).
- 10. Lin W.-H., Kussell E., Young L.-S., Jacobs-Wagner C., Origin of exponential growth in nonlinear reaction networks. Proc. Natl. Acad. Sci. U.S.A. 117, 27795–27804 (2020).
- 11. Bunimovich L. A., Sinai Ya. G., Spacetime chaos in coupled map lattices. Nonlinearity 1, 491–516 (1988).
- 12. Chazottes J.-R., Fernandez B., Dynamics of coupled map lattices and of related spatially extended systems. Lect. Notes Phys. 617 (2005).
- 13. Agazzi A., Dembo A., Eckmann J.-P., On the geometry of chemical reaction networks: Lyapunov function and large deviation. J. Stat. Phys. 172, 321–352 (2018).
- 14. Kurtz T. G., Solutions of ordinary differential equations as limits of pure jump Markov processes. J. Appl. Probab. 7, 49–58 (1970).
- 15. Kurtz T. G., Limit theorems for sequences of jump Markov processes approximating ordinary differential processes. J. Appl. Probab. 8, 344–356 (1971).
- 16. Athreya K. B., Ney P. E., Branching Processes (Springer, 1972).
- 17. Janson S., Functional limit theorems for multitype branching processes and generalized Pólya urns. Stoch. Process. Their Appl. 110, 177–245 (2004).
- 18. Pemantle R., A survey of random processes with reinforcement. Probab. Surv. 4, 1–79 (2007).
- 19. Pemantle R., Nonconvergence to unstable points in urn models and stochastic approximations. Ann. Probab. 18, 698–712 (1990).
- 20. Katok A., Hasselblatt B., Introduction to the Modern Theory of Dynamical Systems, Encyclopedia of Mathematics and Its Applications (Cambridge University Press, 1995).
- 21. Hairer M., Majda A. J., A simple framework to justify linear response theory. Nonlinearity 23, 909–922 (2010).
- 22. Hairer M., Mattingly J. C., “Yet another look at Harris’ ergodic theorem for Markov chains” in Seminar on Stochastic Analysis, Random Fields and Applications VI, R. Dalang, M. Dozzi, F. Russo, Eds. (Springer Basel, 2011), pp. 109–117.
- 23. Stroock D. W., Varadhan S. R. S., Diffusion processes with boundary conditions. Commun. Pure Appl. Math. 24, 147–225 (1971).
- 24. Stroock D. W., Varadhan S. R. S., Multidimensional Diffusion Processes (Springer-Verlag, 2006).
- 25. Ethier S. N., Kurtz T. G., Markov Processes: Characterization and Convergence (John Wiley & Sons, 1986).