Abstract
In this paper we generalize the Linear Chain Trick (LCT; aka the Gamma Chain Trick) to help provide modelers more flexibility to incorporate appropriate dwell time assumptions into mean field ODEs, and help clarify connections between individual-level stochastic model assumptions and the structure of corresponding mean field ODEs. The LCT is a technique used to construct mean field ODE models from continuous-time stochastic state transition models where the time an individual spends in a given state (i.e., the dwell time) is Erlang distributed (i.e., gamma distributed with integer shape parameter). Despite the LCT’s widespread use, we lack general theory to facilitate the easy application of this technique, especially for complex models. Modelers must therefore choose between constructing ODE models using heuristics with oversimplified dwell time assumptions, using time-consuming derivations from first principles, or instead using non-ODE models (like integro-differential or delay differential equations) which can be cumbersome to derive and analyze. Here, we provide analytical results that enable modelers to more efficiently construct ODE models using the LCT or related extensions. Specifically, we provide (1) novel LCT extensions for various scenarios found in applications, including conditional dwell time distributions; (2) formulations of these LCT extensions that bypass the need to derive ODEs from integral equations; and (3) a novel Generalized Linear Chain Trick (GLCT) framework that extends the LCT to a much broader set of possible dwell time distribution assumptions, including the flexible phase-type distributions, which can approximate distributions on $[0,\infty)$ and can be fit to data.
Keywords: Gamma chain trick, Linear Chain Trick, Distributed delay, Mean field model, Phase-type distributions, Time lag
Introduction
Many scientific applications involve systems that can be framed as continuous time state transition models (e.g., see Strogatz 2014), and these are often modeled using mean field ordinary differential equations (ODE) of the form
1 | $\frac{d}{dt}\mathbf{x}(t) = \mathbf{f}(t, \mathbf{x}(t); \boldsymbol{\theta})$
where $\mathbf{x}(t) \in \mathbb{R}^n$, parameters $\boldsymbol{\theta} \in \mathbb{R}^m$, and $\mathbf{f}$ is smooth. The abundance of such applications, and the accessibility of ODE models in terms of the analytical techniques and computational tools for building, simulating, and analyzing ODE models [e.g., straightforward simulation methods and bifurcation analysis software like MatCont (Dhooge et al. 2003) and XPPAUT (Ermentrout 1987, 2019)], have made ODEs one of the most popular modeling frameworks in scientific applications.
Despite their widespread use, one shortcoming of ODE models is their inflexibility when it comes to specifying probability distributions that describe the duration of time individuals spend in a given state. The basic options available for assuming specific dwell time distributions within an ODE framework can really be considered as a single option: the 1st event time distribution for a (nonhomogeneous) Poisson process, which includes the exponential distribution as a special case.
To illustrate this, consider the following SIR model of infectious disease transmission by Kermack and McKendrick (1927),
2a | $\frac{d}{dt} S(t) = -\lambda(t)\, S(t)$
2b | $\frac{d}{dt} I(t) = \lambda(t)\, S(t) - \gamma\, I(t)$
2c | $\frac{d}{dt} R(t) = \gamma\, I(t)$
where S(t), I(t), and R(t) correspond to the number of susceptible, infected, and recovered individuals in a closed population at time t, $\gamma > 0$ is the per-capita recovery rate, and $\lambda(t)$ is the per-capita infection rate (also called the force of infection by Anderson and May (1992) and others). This model can be thought of as the mean field model for some underlying stochastic state transition model where a large but finite number of individuals transition from state S to I to R [see the Appendix, Kermack and McKendrick (1927) for a derivation, and see Armbruster and Beck (2017), Banks et al. (2013), and references therein for examples of the convergence of stochastic models to mean field ODEs].
Although multiple stochastic models can yield the same mean field deterministic model, it is common to consider a stochastic model based on Poisson processes. For the SIR model above, for example, this stochastic analog would assume that, over the time interval $(t, t+\Delta t]$ (for very small $\Delta t$), an individual in state S or I at time t is assumed to transition from S to I with probability $\lambda(t)\,\Delta t$, or from I to R with probability $\gamma\,\Delta t$, respectively. Taking the limit $\Delta t \to 0$ yields the desired continuous time stochastic model. Here, the linear rate of transitions from I to R ($\gamma\, I$) arises from assuming the dwell time for an individual in the infected state (I) follows an exponential distribution with rate $\gamma$ (i.e., the 1st event time distribution for a homogeneous Poisson process with rate $\gamma$). Similarly, assuming the time spent in state S follows the 1st event time distribution under a nonhomogeneous (also called inhomogeneous) Poisson process with rate $\lambda(t)$ yields a time-varying per capita transition rate $\lambda(t)$. This association of a mean field ODE with a specific underlying stochastic model provides very valuable intuition in an applied context. For example, it allows modelers to ascribe application-specific (e.g., biological) interpretations to parameters and thus estimate parameter values (e.g., for $\gamma$ above, the mean time spent infectious is $1/\gamma$), and it provides intuition and a clear mathematical foundation from which to construct and evaluate mean field ODE models based on individual-level, stochastic assumptions.
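For concreteness, the following Python sketch simulates one realization of the individual-level stochastic SIR model described above using the Gillespie (stochastic simulation) algorithm. The specific force of infection $\lambda(t) = \beta I(t)/N$, the parameter values, and the function name are illustrative assumptions for this sketch, not taken from the text; averaging many such realizations for large populations approximates the mean field ODEs (2).

```python
import numpy as np

def gillespie_sir(S0=990, I0=10, R0=0, beta=0.3, gamma=0.1, t_max=160, seed=0):
    """Exact stochastic simulation of the SIR state-transition model whose
    mean field limit is the ODE system (2). Assumes a frequency-dependent
    force of infection lambda(t) = beta*I(t)/N (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    S, I, R = S0, I0, R0
    N = S + I + R
    t, times, states = 0.0, [0.0], [(S, I, R)]
    while t < t_max and I > 0:
        rate_inf = beta * S * I / N   # S -> I transitions (Poisson process rate)
        rate_rec = gamma * I          # I -> R transitions (Poisson process rate)
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)      # time to the next event
        if rng.random() < rate_inf / total:    # which event occurred?
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        times.append(t)
        states.append((S, I, R))
    return np.array(times), np.array(states)

times, states = gillespie_sir()
print("final sizes (S, I, R):", states[-1])
```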
To construct similar models using other dwell time distributions, a standard approach is to formulate a continuous time stochastic model and from it derive mean field distributed delay equations, typically represented as integro-differential equations (IDEs) or integral equations (IEs) (e.g., see Kermack and McKendrick 1927; Hethcote and Tudor 1980; Feng et al. 2007, 2016, or see the example derivation in the Appendix). Readers unfamiliar with IEs and IDEs are referred to Burton (2005) or similar texts. IEs and IDEs have proven to be quite useful models in biology, e.g., they have been used to model chemical kinetics (Roussel 1996), gene expression (Smolen et al. 2000; Takashima et al. 2011; Guan and Ling 2018), physiological processes such as glucose-insulin regulation (Makroglou et al. 2006, and references therein), cell proliferation and differentiation (Özbay et al. 2008; Clapp and Levy 2015; Yates et al. 2017), cancer biology and treatment (Piotrowska and Bodnar 2018; Krzyzanski et al. 2018; Câmara De Souza et al. 2018), pathogen and immune response dynamics (Fenton et al. 2006), infectious disease transmission (Anderson and Watson 1980; Lloyd 2001a, b; Feng and Thieme 2000; Wearing et al. 2005; Lloyd 2009; Feng et al. 2007; Ciaravino et al. 2018), and population dynamics (MacDonald 1978a; Blythe et al. 1984; Metz and Diekmann 1986; Boese 1989; Nisbet et al. 1989; Cushing 1994; Wolkowicz et al. 1997; Gyllenberg 2007; Wang and Han 2016; Lin et al. 2018; Robertson et al. 2018). See also Campbell and Jessop (2009) and the applications reviewed therein.
However, while distributed delay equations are very flexible, in that they can incorporate arbitrary dwell time distributions, when compared to ODEs they also can be more challenging to derive, to analyze mathematically, and to simulate (Cushing 1994; Burton 2005). Thus, many modelers face a trade-off between building appropriate dwell time assumptions into their mean field models (i.e., opting for an IE or IDE model) and constructing parsimonious models that are more easily analyzed both mathematically and computationally (i.e., opting for an ODE model). For example, the following system of integral equations generalizes the SIR example above by incorporating an arbitrary distribution for the duration of infectiousness (i.e., the dwell time in state I):
3a | $S(t) = S(0)\, e^{-\int_0^t \lambda(u)\,du}$
3b | $I(t) = I(0)\, S_I(t) + \int_0^t \lambda(u)\, S(u)\, S_I(t-u)\, du$
3c | $R(t) = N - S(t) - I(t)$
where $N = S(t)+I(t)+R(t)$ is the (constant) population size, $e^{-\int_s^t \lambda(u)\,du}$ is the survival function for the distribution of time spent in susceptible state S [i.e., for the 1st event time under a Poisson process with rate $\lambda(t)$], and $S_I$ is the survival function for the time spent in the infected state I (related models can be found in, e.g., Feng and Thieme 2000; Ma and Earn 2006; Krylova and Earn 2013; Champredon et al. 2018). A different choice of the CDF $G_I = 1 - S_I$ allows us to generalize the SIR model to other dwell time distributions that describe the time individuals spend in the infected state. Integral equations like those above can also be differentiated (assuming the integrands are differentiable) and represented as integrodifferential equations (e.g., as in Hethcote and Tudor 1980).
There have been some efforts in the past to identify which categories of integral and integro-differential equations can be reduced to systems of ODEs (e.g., MacDonald 1989; Metz and Diekmann 1991; Ponosov et al. 2002; Jacquez and Simon 2002; Burton 2005; Goltser and Domoshnitsky 2013; Diekmann et al. 2017, and references therein), but in practice the most well known case is the reduction of IEs and IDEs that assume Erlang distributed dwell times. This is done using what has become known as the Linear Chain Trick (LCT, also referred to as the Gamma Chain Trick; MacDonald 1978b; Smith 2010) which dates at least back to Fargue (1973) and earlier work by Theodore Vogel (e.g., Vogel 1961, 1965, according to Câmara De Souza et al. 2018). However, for more complex models that exceed the level of complexity that can be handled by existing “rules of thumb” like the LCT, the current approach is to derive mean field ODEs from mean field integral equations that might themselves first need to be derived from system-specific stochastic state transition models (e.g., Kermack and McKendrick 1927; Feng et al. 2007; Banks et al. 2013; Feng et al. 2016, and see the Appendix for an example). Unfortunately, modelers often avoid these extra (often laborious) steps in practice by assuming (sometimes only implicitly) very simplistic dwell time distributions based on Poisson process 1st event times as in the SIR example above.
In light of the widespread use of ODE models, these challenges and trade-offs underscore a need for a more rigorous theoretical foundation to more effectively and more efficiently construct mean field ODE models that include more flexible dwell time distribution assumptions (Wearing et al. 2005; Feng et al. 2016; Robertson et al. 2018). The goal of this paper is to address these needs by (1) providing a theoretical foundation for constructing the desired system of ODEs directly from “first principles” (i.e., stochastic model assumptions), without the need to derive ODEs from intermediate IDEs or explicit stochastic models, and by (2) providing similar analytical results for novel extensions of the LCT which allow more flexible dwell time distributions, and conditional relationships among dwell time distributions, to be incorporated into ODE models. We also aim to clarify how underlying (often implicit) stochastic model assumptions are reflected in the structure of corresponding mean field ODE model equations.
The remainder of this paper is organized as follows. An intuitive description of the Linear Chain Trick (LCT) is given in Sect. 1.1 as a foundation for the extensions that follow. In Sect. 2 we review key notation and properties of Poisson processes and certain probability distributions needed for the results that follow. In Sect. 3.1.1 we highlight the association between Poisson process intensity functions and per capita rates in mean field ODEs, and in Sect. 3.1.2 we introduce what we call the weak memorylessness property of (nonhomogeneous) Poisson process 1st event time distributions. In Sects. 3.2 and 3.3 we give a formal statement of the LCT and in Sect. 3.4 a generalization that allows time-varying rates in the underlying Poisson processes. We then provide similar LCT generalizations for more complex cases: In Sect. 3.5 we provide results for multiple ways to implement transitions from one state to multiple states (which arise from different stochastic model assumptions and lead to different systems of mean field ODEs), and we address dwell times that obey Erlang mixture distributions. In Sect. 3.6 we detail how to construct mean field ODEs in which a sub-state transition (e.g., from infected to quarantined) does or does not alter the overall dwell time distribution in that state (e.g., the duration of infection). Lastly, in Sect. 3.7 we present a Generalized Linear Chain Trick (GLCT) which describes how to construct mean field ODEs from first principles based on assuming a very flexible family of dwell time distributions that include the phase-type distributions, i.e., hitting time distributions for continuous time Markov chains (Reinecke et al. 2012a; Horváth et al. 2016) which we address more specifically in Sect. 3.7.1. Tools for fitting phase-type distributions to data, or using them to approximate other distributions, are also mentioned in Sect. 3.7.1.
Intuitive description of the Linear Chain Trick
To begin, an intuitive understanding of the Linear Chain Trick (LCT) based on properties of Poisson processes, is helpful for drawing connections between underlying stochastic model assumptions and the structure of corresponding mean field ODEs. Here we consider a very basic case (detailed in Sect. 3.2): the mean field ODE model for a stochastic process in which particles in state X remain there for an Erlang(r, k) distributed amount of time before exiting to some other state.
In short, the LCT exploits a natural stage structure within state X imposed by assuming an Erlang(r, k) distributed dwell time [i.e., a gamma(r, k) distribution with integer shape k]. Recall that the time to the kth event under a homogeneous Poisson process with rate r is Erlang(r, k) distributed. In that context, each event is preceded by a length of time that is exponentially distributed with rate r, and thus the time to the kth event is the sum of k independent and identically distributed exponential random variables [i.e., the sum of k iid exponential random variables with rate r is Erlang(r, k) distributed]. Particles in state X at a given time can therefore be classified by which event they are awaiting next, i.e., a particle is in sub-state $X_i$ if it is waiting for the ith event to occur, and $X_1, \dots, X_k$ is a partition of X. The dwell time distribution for each sub-state $X_i$ is exponential with rate r, and particles leave the last sub-state $X_k$ (and thus X) upon the occurrence of the kth event.
This sub-state partition is useful to impose on X because the corresponding mean field equations for these sub-states are linear (or nearly linear) ODEs. Specifically, if $x_i(t)$ is the amount in $X_i$ at time t, these mean field ODEs are
4 | $\frac{d}{dt} x_1(t) = \mathcal{I}_X(t) - r\,x_1(t), \qquad \frac{d}{dt} x_i(t) = r\,x_{i-1}(t) - r\,x_i(t) \quad \text{for } i = 2, \dots, k,$
where $\mathcal{I}_X(t)$ denotes the inflow rate into state X, and the total amount in X at time t is $x(t) = \sum_{i=1}^{k} x_i(t)$.
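As a quick numerical check of this construction (a sketch with arbitrary illustrative values of r and k, not taken from the text), one can place a unit pulse of particles into the first sub-state at time 0, integrate the linear chain (4) with no further inflow, and verify that the total amount x(t) remaining in X equals the Erlang(r, k) survival function.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import gamma

r, k = 2.0, 4  # illustrative rate and shape

def chain_rhs(t, x):
    # Linear chain of k sub-states, each with an exponential(rate r) dwell time
    dx = np.empty(k)
    dx[0] = -r * x[0]
    dx[1:] = r * x[:-1] - r * x[1:]
    return dx

x0 = np.zeros(k)
x0[0] = 1.0  # unit pulse of particles entering X_1 at t = 0
ts = np.linspace(0, 6, 200)
sol = solve_ivp(chain_rhs, (0, 6), x0, t_eval=ts, rtol=1e-8, atol=1e-10)

total = sol.y.sum(axis=0)                      # x(t) = sum of sub-state amounts
erlang_sf = gamma(a=k, scale=1.0 / r).sf(ts)   # Erlang(r, k) survival function
print("max abs error:", np.max(np.abs(total - erlang_sf)))  # ~0 up to integration error
```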
As we show below, this Poisson process based perspective allows us to generalize the LCT to other more complex scenarios where we partition a focal state X in a similar fashion. Those results are further extended from exponential dwell times to 1st event time distributions under a nonhomogeneous Poisson process with time varying rate r(t), allowing for time-varying (or state-dependent) dwell time distributions to be used in extensions of the LCT.
Model framework
The context in which we consider applications of the Linear Chain Trick (LCT) is the derivation of continuous time mean field model equations for stochastic state transition models with a distributed dwell time in a focal state, X. Such mean field models might otherwise be modeled as integral equations (IEs) or integro-differential equations (IDEs), and we seek to identify generalizations of the LCT that allow us to replace mean field integral equations with equivalent systems of 1st order ODEs.
Distributions and notation
Below we will extend the LCT from Erlang(r, k) distributions (i.e., kth event time distributions under homogeneous Poisson processes with rate r) to event time distributions under nonhomogeneous Poisson processes with time varying rate r(t), and related distributions like the minimum of multiple Erlang random variables. In this section we will first review some basic properties of these distributions.
Gamma distributions can be parameterized by two strictly positive quantities: rate r and shape k. The Erlang family of distributions are the subset of gamma distributions with integer-valued shape parameters, or equivalently, the distributions resulting from the sum of k iid exponential random variables. That is, if a random variable $T = \sum_{i=1}^{k} T_i$, where the $T_i$ are independent exponential random variables with rate r, then T is Erlang(r, k) distributed. Since the inter-event times under a homogeneous Poisson process with rate r are exponentially distributed with rate r, the time to the kth event is thus Erlang(r, k). This construction is foundational to a proper intuitive understanding of the LCT and its extensions.
If random variable T is gamma(r, k) distributed, then its mean $\mu$, variance $\sigma^2$, and coefficient of variation $c_v$ are given by
5 | $\mu = \frac{k}{r}, \qquad \sigma^2 = \frac{k}{r^2}, \qquad c_v = \frac{\sigma}{\mu} = \frac{1}{\sqrt{k}}.$
Note that gamma distributions can be parameterized by their mean and variance by rewriting rate r and shape k using Eq. (5):
6 | $r = \frac{\mu}{\sigma^2}, \qquad k = \frac{\mu^2}{\sigma^2}.$
However, to ensure this gamma distribution is also Erlang (i.e., to ensure the shape parameter k is an integer) one must adjust the assumed variance up or down by rounding the value of k in Eq. (6) down or up, respectively, to the nearest integer (see Appendix B for details, and alternatives).
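For convenience, the following hypothetical helper function (not part of the original text) implements this mean-variance parameterization and the integer rounding of k, re-deriving the rate so that the mean is preserved exactly while the realized variance shifts as described above.

```python
import math

def erlang_from_mean_var(mean, var, round_mode="nearest"):
    """Convert a desired mean and variance into Erlang(rate=r, shape=k)
    parameters via Eq. (6), rounding the shape k to an integer.
    Rounding k changes the realized variance (the mean is preserved)."""
    k_exact = mean**2 / var          # shape from Eq. (6)
    if round_mode == "down":
        k = max(1, math.floor(k_exact))
    elif round_mode == "up":
        k = max(1, math.ceil(k_exact))
    else:
        k = max(1, round(k_exact))
    r = k / mean                     # re-derive rate so the mean stays k/r = mean
    realized_var = k / r**2
    return r, k, realized_var

print(erlang_from_mean_var(mean=5.0, var=7.0))  # e.g. (0.8, 4, 6.25)
```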
The Erlang density function ($g^k_r$), CDF ($G^k_r$), and survival function ($S^k_r$; also called the complementary CDF) are given by
7a | $g^k_r(t) = \dfrac{r^k\, t^{k-1}}{(k-1)!}\, e^{-r t},$
7b | $G^k_r(t) = 1 - \sum_{j=0}^{k-1} \dfrac{(r t)^j}{j!}\, e^{-r t},$
7c | $S^k_r(t) = 1 - G^k_r(t) = \sum_{j=0}^{k-1} \dfrac{(r t)^j}{j!}\, e^{-r t}.$
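A minimal sketch of Eq. (7) in code is given below (the function names are ours; for production use one would typically call scipy.stats.gamma with an integer shape).

```python
import numpy as np
from math import factorial

def erlang_pdf(t, r, k):
    """Erlang(r, k) density, Eq. (7a): r^k t^(k-1) e^(-r t) / (k-1)!"""
    t = np.asarray(t, dtype=float)
    return r**k * t**(k - 1) * np.exp(-r * t) / factorial(k - 1)

def erlang_sf(t, r, k):
    """Erlang(r, k) survival function, Eq. (7c): sum_{j=0}^{k-1} (r t)^j e^(-r t) / j!"""
    t = np.asarray(t, dtype=float)
    return sum((r * t)**j * np.exp(-r * t) / factorial(j) for j in range(k))

def erlang_cdf(t, r, k):
    """Erlang(r, k) CDF, Eq. (7b)."""
    return 1.0 - erlang_sf(t, r, k)
```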
The results in Sect. 3 use (and generalize) the property of Erlang distributions detailed in Lemma 1 (eqs. 7.11 in Smith 2010, restated here without proof), which is the linchpin of the LCT.
Lemma 1
Erlang density functions $g^j_r(t)$, with rate r and shape j, satisfy
8a | $\frac{d}{dt} g^1_r(t) = -r\, g^1_r(t), \qquad g^1_r(0) = r,$
8b | $\frac{d}{dt} g^j_r(t) = r\left( g^{j-1}_r(t) - g^j_r(t) \right), \qquad g^j_r(0) = 0, \quad \text{for } j \ge 2.$
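The recursion in Lemma 1 is easy to verify numerically; the following sketch (with arbitrary rate, shape, and evaluation point) compares a central finite-difference estimate of the left-hand side of Eq. (8b) with its right-hand side.

```python
from scipy.stats import gamma

r, j, t, h = 1.7, 3, 2.0, 1e-6
g = lambda shape, s: gamma(a=shape, scale=1.0 / r).pdf(s)   # Erlang density with rate r

lhs = (g(j, t + h) - g(j, t - h)) / (2 * h)        # d/dt g_j(t), central difference
rhs = r * (g(j - 1, t) - g(j, t))                  # Lemma 1, Eq. (8b)
print(lhs, rhs)                                    # the two agree up to finite-difference error
```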
Since homogeneous Poisson processes are a special case of nonhomogeneous Poisson processes, from here on we will use “Poisson process” or “Poisson process with rate r(t)” to refer to cases that apply to both homogeneous [i.e., constant rate] and nonhomogeneous Poisson processes.
The more general kth event time distribution under a Poisson process with rate r(t), starting from some time s, has a density function ($g^k(s,t)$), survival function ($S^k(s,t)$), and CDF ($G^k(s,t)$) given by
9a | $g^k(s,t) = r(t)\, \dfrac{\left(\int_s^t r(u)\,du\right)^{k-1}}{(k-1)!}\, e^{-\int_s^t r(u)\,du},$
9b | $S^k(s,t) = \sum_{j=0}^{k-1} \dfrac{\left(\int_s^t r(u)\,du\right)^{j}}{j!}\, e^{-\int_s^t r(u)\,du},$
9c | $G^k(s,t) = 1 - S^k(s,t).$
For an arbitrary dwell time distribution we write $S(s,t)$ for the survival function of a particle entering at time s, evaluated at time $t \ge s$ (i.e., over the period from s to t). In some instances, when the entry time is $s = 0$, we also use the shorthand notation $S(t) = S(0,t)$.
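A hedged numerical sketch of the kth event time survival function in Eq. (9b) for a time-varying rate r(t) is given below; the numerical quadrature and the example rate function are illustrative choices, not part of the text.

```python
import numpy as np
from math import factorial

def nhpp_kth_event_sf(r, t0, t, k, n_grid=2000):
    """Survival function of the kth event time of a nonhomogeneous Poisson
    process with rate r(u), started at t0:  sum_{j<k} m^j e^{-m} / j!,
    where m = integral of r(u) from t0 to t (computed numerically)."""
    u = np.linspace(t0, t, n_grid)
    m = np.trapz(r(u), u)                # cumulative intensity over (t0, t]
    return sum(m**j * np.exp(-m) / factorial(j) for j in range(k))

# Example: a seasonally varying rate (purely illustrative)
r = lambda u: 1.0 + 0.5 * np.sin(u)
print(nhpp_kth_event_sf(r, t0=0.0, t=3.0, k=2))
```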
Lastly, in the context of state transition models, it is common to assume that, upon leaving a given state (e.g., state X) at time t, individuals are distributed across multiple recipient states according to a generalized Bernoulli distribution (also known as the categorical distribution or the multinomial distribution with one trial) defined on the integers 1 through k, where the probability of a particle entering the jth of k recipient states is $p_j$, and these probabilities define the probability vector $\mathbf{p} = (p_1, \dots, p_k)^T$ with $\sum_{j=1}^{k} p_j = 1$.
Results
The results below focus on one or more states, within a potentially larger state transition model, for which we would like to assume a particular dwell time distribution and derive a corresponding system of mean field ODEs using the LCT or a generalization of the LCT. In particular, the results below describe how to construct those mean field ODEs directly from stochastic model assumptions without needing to derive them from equivalent mean field integral equations (which themselves might need to be derived from an explicit continuous-time stochastic model).
Preliminaries
Before presenting extensions of the LCT, we first illustrate in Sect. 3.1.1 how mean field ODEs include terms that reflect underlying Poisson process rates. In Sect. 3.1.2, we highlight a key property of these Poisson process 1st event time distributions that we refer to as a weak memorylessness property since it is a generalization of the well known memorylessness property of the exponential and geometric distributions.
Transition rates in ODEs reflect underlying Poisson process rates
To build upon the intuition spelled out in Sect. 1.1, where particles exit X following an exponentially distributed dwell time, we now assume that particles exit X following the 1st event time under nonhomogeneous Poisson processes with rate r(t) (recall the 1st event time distribution is exponential if r(t) is constant). As illustrated by the corresponding mean field equations given below, the rate function r(t) can be viewed as either the intensity function for the Poisson process governing when individuals leave state X, or as the (mean field) per-capita rate of loss from state X as shown in Eq. (11). This dual perspective provides valuable intuition for critically evaluating mean field ODEs.
Example 1
(Equivalence between Poisson process rates and per capita rates in mean field ODEs) Consider the scenario described above. The survival function for the dwell time distribution for a particle entering X at time s is $S^1(s,t) = e^{-\int_s^t r(u)\,du}$, and it follows that the expected proportion of such particles remaining in X at time $t \ge s$ is given by $S^1(s,t)$. Let x(t) be the total amount in state X at time t, let $\mathcal{I}(t)$ be the inflow rate into X, and assume that $\mathcal{I}(t)$ and r(t) are integrable, non-negative functions of t. Then the corresponding mean field integral equation for this scenario is
10 | $x(t) = x(0)\, e^{-\int_0^t r(u)\,du} + \int_0^t \mathcal{I}(s)\, e^{-\int_s^t r(u)\,du}\, ds,$
and the integral equation (10) above is equivalent to
11 | $\frac{d}{dt} x(t) = \mathcal{I}(t) - r(t)\, x(t).$
Proof
The Leibniz rule for differentiating integrals, Eq. (10), and Lemma 1 yield
12 | $\frac{d}{dt} x(t) = \mathcal{I}(t) - r(t)\left[ x(0)\, e^{-\int_0^t r(u)\,du} + \int_0^t \mathcal{I}(s)\, e^{-\int_s^t r(u)\,du}\, ds \right] = \mathcal{I}(t) - r(t)\, x(t).$
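The equivalence in Example 1 can also be checked numerically; in the sketch below (inflow rate, rate function, initial condition, and final time are all illustrative assumptions) the solution of the ODE (11) is compared against direct quadrature of the integral equation (10).

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Illustrative choices (assumptions): inflow I(t) and per-capita rate r(t)
inflow = lambda t: 2.0 + np.cos(t)
rate   = lambda t: 0.5 + 0.25 * np.sin(t)
x0 = 1.0

# ODE form, Eq. (11): x'(t) = I(t) - r(t) x(t)
sol = solve_ivp(lambda t, x: inflow(t) - rate(t) * x, (0, 10), [x0],
                t_eval=[10.0], rtol=1e-9, atol=1e-11)

# Integral form, Eq. (10): x(t) = x(0) S(0,t) + int_0^t I(s) S(s,t) ds,
# with S(s,t) = exp(-int_s^t r(u) du)
def surv(s, t):
    return np.exp(-quad(rate, s, t)[0])

t = 10.0
x_integral = x0 * surv(0, t) + quad(lambda s: inflow(s) * surv(s, t), 0, t)[0]
print(sol.y[0, -1], x_integral)   # the two agree to numerical tolerance
```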
Weak memoryless property of Poisson process 1st event time distributions
The intuition behind the LCT, and the history-independent nature of ODEs, are related to the memorylessness property of exponential distributions. For example, when particles accumulate in a state with an exponentially distributed dwell time distribution, then at any given time all particles currently in that state have iid exponentially distributed amounts of time left before they leave that state regardless of the duration of time already spent in that state. Each of these can be viewed as a reflection of the history-independent nature of Poisson processes.
Accordingly, the familiar memorylessness property of exponential and geometric distributions can, in a sense, be generalized to (nonhomogeneous) Poisson process 1st event time distributions. Recall that if an exponentially distributed (rate r) random variable T represents the time until some event, then if the event has not occurred by time s the remaining duration of time until the event occurs is also exponential with rate r. The analogous weak memorylessness property of nonhomogeneous Poisson process 1st event time distributions is detailed in the following definition.
Definition 1
(Weak memorylessness property of Poisson process 1st event times) Assume T is a Poisson process 1st event time starting at time $t_0$, which has rate r(t) and CDF $G^1(t_0, t)$ [see Eqs. (9) and (9c)]. If the event has not occurred by time $s > t_0$, the distribution of the remaining time $T - s$ follows a shifted but otherwise identical Poisson process 1st event time distribution with CDF $G^1(s, s+\tau)$. If r(t) is a positive constant we recover the memorylessness property of the exponential distribution.
Proof
The CDF of $T - s$ (for $\tau \ge 0$, conditional on $T > s$) is given by
13 | $P(T - s \le \tau \mid T > s) = \dfrac{G^1(t_0, s+\tau) - G^1(t_0, s)}{1 - G^1(t_0, s)} = 1 - e^{-\int_s^{s+\tau} r(u)\,du} = G^1(s, s+\tau).$
In other words, Poisson process 1st event time distributions are memoryless up to a time shift in their rate functions. In the context of multiple particles entering a given state X at different times and leaving according to independent Poisson process 1st event times with identical rates r(t) (i.e., t is absolute time, not time since entry into X), for all particles in state X at a given time the remaining time in state X is (1) independent of how much time each particle has already spent in X and (2) follows iid Poisson process 1st event time distributions with rate r(t).
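The weak memorylessness property in Definition 1 can be illustrated by simulation; the sketch below assumes a simple linear rate function (an illustrative choice) so that 1st event times can be sampled by analytically inverting the cumulative intensity.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.5, 0.3                            # illustrative linear rate r(u) = a + b*u
M = lambda t: a * t + 0.5 * b * t**2       # cumulative intensity from time 0

def sample_first_event(n):
    """First event times: solve M(T) = E with E ~ Exp(1) (inverse method)."""
    E = rng.exponential(size=n)
    return (-a + np.sqrt(a**2 + 2 * b * E)) / b

s, tau = 2.0, 1.5
T = sample_first_event(1_000_000)
remaining = T[T > s] - s                   # condition on no event by time s

# Empirical vs. theoretical survival of the remaining time
empirical = np.mean(remaining > tau)
theoretical = np.exp(-(M(s + tau) - M(s)))  # exp(-int_s^{s+tau} r(u) du)
print(empirical, theoretical)               # should agree closely
```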
Simple case of the LCT
To illustrate how the LCT follows from Lemma 1, consider the simple case of the LCT illustrated in Fig. 1, where a higher dimensional model includes a state transition into, then out of, a focal state X. Assume the time spent in that state, $T_X$, is Erlang(r, k) distributed [i.e., $T_X \sim$ Erlang(r, k)]. Then the LCT provides a system of ODEs equivalent to the mean field integral equations for this process as detailed in the following theorem:
Theorem 1
(Simple LCT) Consider a continuous time state transition model with inflow rate $\mathcal{I}(t)$ (an integrable function of t) into state X, which has an Erlang(r, k) distributed dwell time [with survival function $S^k_r$ from Eq. (7c)]. Let x(t) be the (mean field) amount in state X at time t and assume $x(0) = 0$.
The mean field integral equation for this scenario (see Fig. 1a) is
14 | $x(t) = \int_0^t \mathcal{I}(s)\, S^k_r(t - s)\, ds.$
State X can be partitioned (see Fig. 1b) into k sub-states $X_i$, $i = 1, \dots, k$, where particles in $X_i$ are those awaiting the ith event as the next event under a homogeneous Poisson process with rate r. Let $x_i(t)$ be the amount in $X_i$ at time t, and $x(t) = \sum_{i=1}^{k} x_i(t)$. Equation (14) is equivalent to the mean field ODEs
15a | $\frac{d}{dt} x_1(t) = \mathcal{I}(t) - r\, x_1(t),$
15b | $\frac{d}{dt} x_i(t) = r\, x_{i-1}(t) - r\, x_i(t), \quad i = 2, \dots, k,$
with initial conditions $x_i(0) = 0$ for $i = 1, \dots, k$, and
16 | $x_i(t) = \int_0^t \mathcal{I}(s)\, \dfrac{(r(t-s))^{i-1}}{(i-1)!}\, e^{-r(t-s)}\, ds, \quad i = 1, \dots, k.$
Proof
Substituting Eq. (7c) into Eq. (14) and then substituting Eq. (16) yields
17 | $x(t) = \int_0^t \mathcal{I}(s) \sum_{i=1}^{k} \dfrac{(r(t-s))^{i-1}}{(i-1)!}\, e^{-r(t-s)}\, ds = \sum_{i=1}^{k} x_i(t).$
Differentiating Eq. (16) (for $i = 1, \dots, k$) yields Eq. (15) as follows.
For $i = 1$, Eq. (16) reduces to
18 | $x_1(t) = \int_0^t \mathcal{I}(s)\, e^{-r(t-s)}\, ds.$
Differentiating using the Leibniz integral rule and substituting (18) yields
19 | $\frac{d}{dt} x_1(t) = \mathcal{I}(t) - r \int_0^t \mathcal{I}(s)\, e^{-r(t-s)}\, ds = \mathcal{I}(t) - r\, x_1(t).$
Similarly, for $i = 2, \dots, k$, Lemma 1 yields
20 | $\frac{d}{dt} x_i(t) = \int_0^t \mathcal{I}(s)\, r \left[ \dfrac{(r(t-s))^{i-2}}{(i-2)!} - \dfrac{(r(t-s))^{i-1}}{(i-1)!} \right] e^{-r(t-s)}\, ds = r\, x_{i-1}(t) - r\, x_i(t).$
The $X_i$ dwell time distributions are exponential with rate r. To see why, let $y_i(t)$ (where $i = 1, \dots, k$) be the (mean field) number of particles in state X at time t that have not yet reached the ith event. Then
21 |
and by Eq. (7) we see from Eqs. (18) and (21) that $y_i(t) = \sum_{j=1}^{i} x_j(t)$. That is, particles in state $X_j$ are those for which the $(j-1)$th event has occurred, but not the jth event. Thus, by properties of Poisson processes the dwell time in each state $X_j$ must be exponential with rate r.
Standard LCT
The following theorem and corollary together provide a more general, formal statement of the standard Linear Chain Trick (LCT) as used in practice. These extend Theorem 1 (compare Figs. 1, 2) to explicitly include that particles leaving X enter state Y and then remain in Y according to an arbitrary distribution with survival function $S_Y$, and to include transitions into Y from other sources.
Theorem 2
(Standard LCT) Consider a continuous time dynamical system model of mass transitioning among various states, with inflow rate $\mathcal{I}_X(t)$ into a state X and an Erlang(r, k) distributed delay before entering state Y. Let x(t) and y(t) be the amount in each state, respectively, at time t. Further assume an inflow rate $\mathcal{I}_Y(t)$ into state Y from other non-X states, and that the underlying stochastic model assumes that the duration of time spent in state Y is determined by survival function $S_Y$. Assume $\mathcal{I}_X(t)$ and $\mathcal{I}_Y(t)$ are integrable non-negative functions of t, and assume non-negative initial conditions $x(0) = 0$ and $y(0) \ge 0$.
The mean field integral equations for this scenario are
22a |
22b |
Equations (22) are equivalent to
23a |
23b |
23c |
where $x(t) = \sum_{i=1}^{k} x_i(t)$, $x_i(0) = 0$ for $i = 1, \dots, k$, and
24 |
Proof
Eqs. (23a), (23b) and (24) follow from Theorem 1. Equation (23c) follows from substituting (24) into (22b). The definition of $x_i(t)$ and the initial condition $x(0) = 0$ together imply $x_1(0) = 0$ and $x_i(0) = 0$ for the remaining $i = 2, \dots, k$.
Corollary 1
Integral equations like Eq. (23c) may have ODE representations depending on the Y dwell time distribution $S_Y$, for example:
- If particles leave Y after a Poisson process 1st event time distributed dwell time [i.e., the per-capita loss rate from Y is the corresponding rate function], then Theorem 2 yields
25 |
- As implied by 1 and 2 above, if the per-capita loss rate is constant or time spent in Y is otherwise exponentially distributed, then
27 |
- Any of the more general cases considered in the sections below.
Example 2
To illustrate how the Standard LCT can be used to substitute an implicit exponential dwell time distribution with an Erlang distribution, consider the SIR example discussed in the Introduction [Eqs. (2) and (3), see also Anderson and Watson 1980; Lloyd 2001a, b], but assume the dwell time distribution for the infected state I is Erlang (still with mean $1/\gamma$) with variance $\sigma^2$, i.e., by Eq. (6), Erlang with rate $r = 1/(\gamma\,\sigma^2)$ and shape $k = 1/(\gamma^2\,\sigma^2)$.
By Theorem 2 and Corollary 1, with $\mathcal{I}_X(t) = \lambda(t)\, S(t)$ and $\mathcal{I}_Y(t) = 0$, the corresponding mean field ODEs are
28a | $\frac{d}{dt} S(t) = -\lambda(t)\, S(t)$
28b | $\frac{d}{dt} I_1(t) = \lambda(t)\, S(t) - r\, I_1(t)$
28c | $\frac{d}{dt} I_j(t) = r\, I_{j-1}(t) - r\, I_j(t), \quad j = 2, \dots, k$
28d | $\frac{d}{dt} R(t) = r\, I_k(t)$
Notice that $I(t) = \sum_{j=1}^{k} I_j(t)$, and that if $k = 1$ (i.e., $\sigma^2 = 1/\gamma^2$), the dwell time in I is exponentially distributed with rate $r = \gamma$, $I(t) = I_1(t)$, and Eqs. (28) reduce to Eqs. (2).
This example nicely illustrates how using Theorem 2 to relax an exponential dwell time assumption implicit in a system of mean field ODEs is much more straightforward than constructing them after first deriving the integral equations, like Eq. (3), and then differentiating them using Lemma 1. In the sections below, we present similar theorems intended to be used for constructing mean field ODEs directly from stochastic model assumptions.
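For illustration, a hedged sketch of the Erlang-chain SIR system (28) is given below; the parameter values and the frequency-dependent force of infection $\lambda(t) = \beta I(t)/N$ are illustrative assumptions, not taken from the text. Setting k = 1 recovers the classical SIR model (2).

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, N, k = 0.4, 0.2, 1000.0, 5   # illustrative values; k = Erlang shape
r = k * gamma                              # chain rate, so the mean stays 1/gamma

def erlang_sir(t, y):
    S, R = y[0], y[-1]
    I_sub = y[1:-1]                        # the k infected sub-states I_1, ..., I_k
    I = I_sub.sum()
    lam = beta * I / N                     # assumed force of infection
    dS = -lam * S
    dI = np.empty(k)
    dI[0] = lam * S - r * I_sub[0]
    dI[1:] = r * I_sub[:-1] - r * I_sub[1:]
    dR = r * I_sub[-1]
    return np.concatenate(([dS], dI, [dR]))

y0 = np.concatenate(([N - 10], [10], np.zeros(k - 1), [0]))
sol = solve_ivp(erlang_sir, (0, 200), y0, rtol=1e-8)
print("final epidemic size:", sol.y[-1, -1])
```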
Extended LCT for Poisson process kth event time distributed dwell times
The Standard LCT assumes an Erlang(r, k) distributed dwell time, i.e., a kth event time distribution under a homogeneous Poisson process with rate r. Here we generalize the Standard LCT by assuming the X dwell time follows a more general kth event time distribution under a Poisson process with rate r(t).
First, observe that the relations in Eq. (8) of Lemma 1 are more practical when written in the form used in the proof of Theorem 1,
29a |
29b |
where and .
Lemma 2
A similar relationship to Eq. (29) above (i.e., to Lemma 1) holds true for the Poisson process jth event time distribution density functions given by Eq. (9a). Specifically,
30a |
30b |
where and for . Note that, if $r(t) = 0$ for some t, this relationship can be written in terms of
31 |
as shown in the proof below, where , , and for .
Proof
For $j = 1$,
32 |
Likewise, for $j \ge 2$, we have
33 |
Lemma 2 allows us to generalize Erlang-based results like Theorem 2 to their time-varying counterparts with a time-dependent (or state-dependent) rate r(t), as in the following generalization of the Standard LCT (Theorem 2).
Theorem 3
(Extended LCT for dwell times distributed as Poisson process event times) Consider the Standard LCT in Theorem 2 but assume the X dwell time is a Poisson process kth event time with rate r(t). The corresponding mean field integral equations, where $g^k(s,t)$ and $S^k(s,t)$ are given in Eq. (9), are
34a |
34b |
The above Eqs. (34), with $x(t) = \sum_{i=1}^{k} x_i(t)$, are equivalent to
35a |
35b |
35c |
with initial conditions $x_i(0) = 0$ for $i = 1, \dots, k$, and
36 |
Equation (35c) may be further reduced to ODEs, e.g., via Corollary 1.
Proof
Substituting Eq. (9b) into Eq. (34a) and substituting Eq. (36) yields $x(t) = \sum_{i=1}^{k} x_i(t)$. Differentiating Eq. (36) with $i = 1$ using the Leibniz integration rule as well as Eq. (30a) from Lemma 2 yields Eq. (35a). Likewise, for $i = 2, \dots, k$, differentiation of Eq. (36) and Lemma 2 yields
37 |
Equation (35c) follows from substituting (36) into (34b). The definition of $x_i(t)$ and the initial condition $x(0) = 0$ together imply $x_1(0) = 0$ and $x_i(0) = 0$ for the remaining $i$. If $r(t) = 0$ for some t, Eqs. (35) still hold, since Eqs. (36) and (37) can be rewritten using u as in the proof of Lemma 2.
Having generalized the Standard LCT (Lemma 1 and Theorem 2) to include Poisson process kth event time distributed dwell times, we may now address more complex stochastic model assumptions and how they are reflected in the structure of corresponding mean field ODEs.
Transitions to multiple states
Modeling the transition from one state to multiple states following a distributed delay (as illustrated in Fig. 3) can be done under different sets of assumptions about the underlying stochastic processes, particularly with respect to the rules governing how individuals are distributed across multiple recipient states upon exiting X, and how those rules depend on the dwell time distribution(s) for individuals in that state. Importantly, those different sets of assumptions can yield very different mean field models (e.g., see Feng et al. 2016) and so care must be taken to make those assumptions appropriately for a given application.
While modelers have some flexibility to choose appropriate assumptions, in practice modelers sometimes make unintended and undesirable implicit assumptions, especially when constructing ODE models using “rules of thumb” instead of deriving them from first principles. In this section we present results aimed at helping guide (a) the process of picking appropriate dwell time distribution assumptions, and (b) directly constructing corresponding systems of ODEs without deriving them from explicit stochastic models or intermediate integral equations.
Each of the three cases detailed below yields a different mean field ODE model for the scenario depicted in Fig. 3.
First, in Sect. 3.5.1, we consider the extension of Theorem 3 where upon leaving X particles are distributed across recipient states according to a generalized Bernoulli distribution with (potentially time varying) probabilities/proportions $p_j(t)$, $j = 1, \dots, m$. Here the outcome of which state a particle transitions to is independent of the time spent in the first state.
Second, in Sects. 3.5.2 and 3.5.3, particles entering the first state (X) do not all follow the same dwell time distribution in X. Instead, upon entering X they are distributed across sub-states of X, $X_i$ ($i = 1, \dots, N$), according to a generalized Bernoulli distribution, and each sub-state $X_i$ has a dwell time given by a Poisson process $k_i$th event time distribution with rate $r_i(t)$. That is, the X dwell time is a finite mixture of Poisson process event time distributions. Particles transition out of X into m subsequent states $Y_j$ according to the probabilities/proportions $p_{ij}$ (the probability of going to $Y_j$ from $X_i$), where $\sum_{j=1}^{m} p_{ij} = 1$. Here the determination of which recipient state $Y_j$ a particle transitions to depends on which sub-state of X the particle was assigned to upon entering X (see Fig. 5).
Third, in Sect. 3.5.4, the outcome of which recipient state a particle transitions to upon leaving X is determined by a “race” between multiple competing Poisson process kth event time distributions, and is therefore not independent of the time spent in the first state (as in Sect. 3.5.1), nor is it pre-determined upon entry into X (as in Sects. 3.5.2 and 3.5.3). This result is obtained using yet another novel extension of Lemma 1 in which the dwell time in state X is the minimum of n independent Poisson process event times.
Lastly (Sect. 3.5.5), we describe an equivalence between (1) the more complex case addressed in Sect. 3.5.4, assuming a dwell time given by the minimum of n independent Poisson process 1st event times before particles are distributed across m recipient states, and (2) the conceptually simpler case in Sect. 3.5.1, where the dwell time follows a single Poisson process 1st event time distribution before particles are distributed among m recipient states. This is key to understanding the scope of the Generalized Linear Chain Trick in Sect. 3.7.
Transition to multiple states independent of the X dwell time distribution
Here we extend the case in the previous section and assume that, upon leaving state X, particles can transition to one of m states (call them $Y_1, \dots, Y_m$), and that a particle leaving X at time t enters state $Y_j$ with probability $p_j(t)$, where $\sum_{j=1}^{m} p_j(t) = 1$ [i.e., particles are distributed across all $Y_j$ following a generalized Bernoulli distribution with parameter vector $\mathbf{p}(t)$]. See Fig. 4 for a simple example with constant probabilities and $m = 2$. An important assumption in this case is that the determination about which state a particle goes to after leaving X is made once it leaves X, and thus the state it transitions to upon exiting X is determined independent of the dwell time in X. Examples from the literature include Model II in Feng et al. (2016), where infected individuals (state X) either recovered ($Y_1$) or died ($Y_2$) after an Erlang distributed time delay.
Theorem 4
(Extended LCT with proportional output to multiple states) Consider the case addressed by Theorem 3, and further assume particles go to one of m states (call them $Y_j$, $j = 1, \dots, m$) with $p_j(t)$ being the probability of going to $Y_j$. Let $S_{Y_j}$ be the survival functions for the dwell times in $Y_j$.
The mean field integral equations for this case, with and , are
38a |
38b |
These integral equations are equivalent to the following system of equations:
39a |
39b |
39c |
where , , for , and
40 |
Equations (39c) may be further reduced to ODEs, e.g., via Corollary 1.
Proof
Equations (39a), (39b) and (40) follow from Theorem 3. Equation (39c) follows from substitution of Eq. (40) into (38b). The derivation of Eq. (38b) is similar to the derivation in Appendix A.1 but accounts for the expected proportion entering each $Y_j$ at time t being equal to $p_j(t)$.
Example 3
Consider the example shown in Fig. 4, where the dwell time distribution for X is Erlang(r, k) and the dwell times in Y and Z follow 1st event times under nonhomogeneous Poisson processes with respective rates $r_Y(t)$ and $r_Z(t)$. By Theorem 4 the corresponding mean field ODEs are
41a |
41b |
41c |
41d |
Transition from sub-states of X with differing dwell time distributions and differing output distributions across states Y
In this second case, particles in state X can be treated as belonging to a heterogeneous population, where each remains in that state according to one of N possible dwell time distributions, the ith of these being the $k_i$th event time distribution under a Poisson process with rate $r_i(t)$. Each particle is assigned one of these N dwell time distributions (i.e., it is assigned to sub-state $X_i$) upon entry into X according to a generalized Bernoulli distribution with a (potentially time varying) probability vector $\mathbf{p}(t) = (p_1(t), \dots, p_N(t))^T$. In contrast to the previous case, here the outcome of which recipient state a particle transitions to is not necessarily independent of the dwell time distribution.
Note that the dwell time distribution in this case is a finite mixture of N independent Poisson process event time distributions. If a random variable T is a mixture of Erlang distributions, or more generally a mixture of N independent Poisson process event time distributions, then the corresponding density function (f) and survival function (S) are
42a |
42b |
where the (potentially time varying) parameters of the N distributions that constitute the mixture are the mixing probabilities $p_i(t)$, rates $r_i(t)$, and shapes $k_i$. Note that if all $r_i(t)$ are constant, this is a mixture of Erlang distributions, or if also all $k_i = 1$, a mixture of exponentials.
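A minimal code sketch of the mixture survival function in Eq. (42b) for the constant-rate (Erlang mixture) case is given below; the weights, rates, and shapes are arbitrary illustrative values.

```python
import numpy as np
from math import factorial

def erlang_sf(t, r, k):
    """Erlang(r, k) survival function, Eq. (7c)."""
    t = np.asarray(t, dtype=float)
    return sum((r * t)**j * np.exp(-r * t) / factorial(j) for j in range(k))

def erlang_mixture_sf(t, probs, rates, shapes):
    """Survival function of a finite mixture of Erlang distributions, Eq. (42b):
    S(t) = sum_i p_i * S_{r_i, k_i}(t), with sum_i p_i = 1."""
    return sum(p * erlang_sf(t, r, k) for p, r, k in zip(probs, rates, shapes))

t = np.linspace(0, 20, 5)
print(erlang_mixture_sf(t, probs=[0.3, 0.7], rates=[0.5, 1.5], shapes=[2, 4]))
```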
Theorem 5
(Extended LCT for dwell times given by mixtures of Poisson process event time distributions and outputs to multiple states) Consider a continuous time state transition model with inflow rate $\mathcal{I}_X(t)$ into state X. Assume that the duration of time spent in state X follows a finite mixture of N independent Poisson process event time distributions. That is, X can be partitioned into N sub-states $X_i$, $i = 1, \dots, N$, each with a dwell time distribution given by a Poisson process $k_i$th event time distribution with rate $r_i(t)$. Suppose the inflow to state X at time t is distributed among this partition according to a generalized Bernoulli distribution with probabilities $p_i(t)$, where $\sum_{i=1}^{N} p_i(t) = 1$, so that the input rate to $X_i$ is $p_i(t)\,\mathcal{I}_X(t)$. Assume that particles leaving sub-state $X_i$ then transition to state $Y_j$ with probability $p_{ij}(t)$, $\sum_{j=1}^{m} p_{ij}(t) = 1$, where the duration of time spent in state $Y_j$ follows a delay distribution given by survival function $S_{Y_j}$. Then we can partition each $X_i$ into sub-states $X_{i\ell}$, $\ell = 1, \dots, k_i$, according to Theorem 3 and let x(t), $x_i(t)$, $x_{i\ell}(t)$, and $y_j(t)$ be the amounts in states X, $X_i$, $X_{i\ell}$, and $Y_j$ at time t, respectively. Assume non-negative initial conditions, with $x_{i\ell}(0) = 0$ for all i and $\ell$, and $y_j(0) \ge 0$.
The mean field integral equations for this scenario are
43a |
43b |
The above system of equations (43) are equivalent to
44a |
44b |
44c |
with initial conditions , for , where , and . The amounts in X and X are
45 |
46 |
Equations (44c) may be reduced to ODEs, e.g., via Corollary 1.
Proof
Substituting Eq. (42b) into Eq. (43a) and then substituting Eq. (45) yields $x(t) = \sum_{i=1}^{N} x_i(t)$. Applying Theorem 3 to each $X_i$ [i.e., to each Eq. (45)] then yields Eqs. (46), (44a) and (44b). (Alternatively, one could prove this directly by differentiating Eq. (46) using Eq. (30) from Lemma 2). The equations (44c) are obtained from (43b) by substitution of Eq. (46).
Example 4
Consider the scenario in Fig. 5, where particles entering state X at rate $\mathcal{I}_X(t)$ enter sub-state $X_i$ with probability $p_i$, and the $X_i$ dwell time is Erlang distributed. Particles exiting the first two sub-states transition to Y with probability 1, while particles exiting the remaining sub-state transition either to state Y or Z with the indicated probabilities. Assume particles may also enter states Y and Z from sources other than state X, and the dwell times in those two states follow the 1st event times of independent nonhomogeneous Poisson processes. Theorem 5 yields the following mean field ODEs (see Fig. 5).
47a |
47b |
47c |
47d |
Extended LCT for dwell times given by finite mixtures of Poisson process event time distributions
It’s worth noting that in some applied contexts one may want to approximate a non-Erlang delay distribution with a mixture of Erlang distributions (see Sect. 3.7.1 and Appendix B for more details on making such approximations). Theorem 5 above details how assuming such a mixture distribution would be reflected in the structure of the corresponding mean field ODEs. This case can also be addressed in the more general context provided in Sect. 3.7.1.
Transition to multiple states following “competing” Poisson processes
We now consider the case where T, the time a particle spends in a given state X, follows the distribution given by $T = \min\{T_1, \dots, T_n\}$, the minimum of n independent random variables $T_i$, where $T_i$ has either an Erlang($r_i$, $k_i$) distribution or, more generally, a Poisson process $k_i$th event time distribution with rate $r_i(t)$. Upon leaving state X, particles have the possibility of transitioning to any of m recipient states $Y_j$, $j = 1, \dots, m$, where the probability of transitioning to state $Y_j$ depends on which of the n random variables $T_i$ was the minimum. That is, if a particle leaves X at time $T = T_i$, then the probability of entering state $Y_j$ is $p_{ij}$.
The distribution associated with T is not itself an Erlang distribution or a Poisson process event time distribution; however, its survival function is the product of the corresponding survival functions, i.e.,
48 |
As detailed below, we can further generalize the recursion relation in Lemma 1 for the distributions just described above, which can then be used to produce a mean field system of ODEs based on appropriately partitioning X into sub-states.
Before considering this case in general, it is helpful to first describe the sub-states of X imposed by assuming the dwell time distribution described above, particularly the case where the distribution for each $T_i$ is based on 1st event times (i.e., all $k_i = 1$). Recall that the minimum of n exponential random variables (which we may think of as 1st event times under homogeneous Poisson processes) is exponential with a rate that is the sum of the individual rates $r_i$. More generally, it is true that the minimum of n 1st event times under independent Poisson processes with rates $r_i(t)$ is itself distributed as the 1st event time under a single Poisson process with rate $r^*(t) = \sum_{i=1}^{n} r_i(t)$. Additionally, if particles leaving state X are then distributed across the recipient states $Y_j$ as described above, then this scenario is equivalent to the proportional outputs case described in Theorem 4 with a dwell time that follows a Poisson process 1st event time distribution with rate $r^*(t)$ and a probability vector whose jth entry is $\sum_{i=1}^{n} \frac{r_i(t)}{r^*(t)}\, p_{ij}$, since the probability that the ith Poisson process is the one whose event occurs first at time t is $r_i(t)/r^*(t)$. (This mean field equivalence of these two cases is detailed in Sect. 3.5.5.) Thus, the natural partitioning of X in this case is into sub-states with dwell times that follow iid 1st event time distributions with rate $r^*(t)$.
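This equivalence is simple to check by simulation for constant rates; the sketch below (with arbitrary rates) verifies both that the minimum of independent exponentials is exponential with the summed rate and that the index attaining the minimum is distributed proportionally to the rates.

```python
import numpy as np

rng = np.random.default_rng(2)
rates = np.array([0.5, 1.2, 2.3])          # illustrative rates r_1, ..., r_n
n_samples = 1_000_000

# Draw each competing exponential 1st event time and take the minimum
samples = rng.exponential(1.0 / rates, size=(n_samples, len(rates)))
T_min = samples.min(axis=1)
which = samples.argmin(axis=1)

print("mean of min:", T_min.mean(), "vs 1/sum(rates):", 1.0 / rates.sum())
print("P(argmin = i):", np.bincount(which) / n_samples,
      "vs rates/sum:", rates / rates.sum())
```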
We may now describe the mean field ODEs for the more general case using the following notation. To index the sub-states of X, consider the ith Poisson process and its $k_i$th event time distribution, which defines the distribution of $T_i$. Let $j_i$ denote the event number a particle is awaiting under the ith Poisson process. Then we can describe the particle’s progress through X according to its progress along each of these n Poisson processes using the index vector $\mathbf{j} = (j_1, \dots, j_n)$, where
49 |
Let denote the subset of indices where (where we think of particles in these sub-states as being poised to reach the th event related to the ith Poisson process, and thus poised to transition out of state X).
To extend Lemma 2 for these distributions, define
50 |
where , and if and otherwise. Note that (c.f. Lemma 2). Applying Eq. (9b) to in Eq. (48), the survival function given by Eq. (48) [c.f. Eq. (31) and (9b)] can be written
51 |
We will also refer to the quantities u and with the jth element of each product (in u) removed using the notation
52a |
52b |
This brings us to the following lemma, which generalizes Lemma 1 and Lemma 2 to distributions that are the minimum of n different (independent) Poisson process event times. As with the above lemmas, Lemma 3 will allow one to partition X into sub-states corresponding to each of the event indices in describing the various stages of progress along each Poisson process prior to the first of them reaching the target event number.
Lemma 3
For u as defined in Eq. (50), differentiation with respect to t yields
53 |
where the notation denotes the index vector generated by decrementing the jth element of , [assuming ; for example, ], and the indicator function is 1 if and 0 otherwise.
Proof
Using the definition of u in Eq. (50) above, it follows that
54 |
The next theorem details the LCT extension that follows from Lemma 3.
Theorem 6
(Extended LCT for dwell times given by competing Poisson processes) Consider a continuous time dynamical system model of mass transitioning among multiple states, with inflow rate $\mathcal{I}_X(t)$ into a state X. The distribution of time spent in state X (call it T) is the minimum of n random variables, i.e., $T = \min\{T_1, \dots, T_n\}$, where the $T_i$ are either Erlang($r_i$, $k_i$) distributed or follow the more general (nonhomogeneous) Poisson process $k_i$th event time distribution with rate $r_i(t)$. Assume particles leaving X can enter one of m states $Y_j$, $j = 1, \dots, m$. If a particle leaves X at time t because the ith Poisson process reached its $k_i$th event first (i.e., $T = T_i$), then the particle transitions into state $Y_j$ with probability $p_{ij}(t)$. Let x(t) and $y_j(t)$ be the amount in each state, respectively, at time t, and assume non-negative initial conditions.
The mean field integral equations for this scenario, for and , are
55a |
55b |
Equations (55) above are equivalent to
56a |
56b |
56c |
for all , , , and
57 |
The equations (56c) may be further reduced to a system of ODEs, e.g., via Corollary 1.
Proof
Substituting Eq. (51) into Eq. (55a) yields
58 |
Differentiating (57) yields equations Eqs. (56a) and (56b) as follows. First, if then by Lemma 3
59 |
Next, if has any , differentiating Eq. (57) and applying Lemma 3 yields
60 |
Note that, by the definitions of and u that initial condition becomes and for the remaining .
Eqs. (55b) become (56c), where , by substituting Eqs. (52), (57), and , which yields
61 |
Example 5
See Fig. 6. Suppose the X dwell time is $T = \min\{T_1, T_2\}$, where $T_1$ and $T_2$ are the $k_1$th and $k_2$th event times under independent Poisson processes (call these PP1 and PP2) with rates $r_1$ and $r_2$, respectively. Assume that, upon leaving X, particles transition to $Y_1$ if $T = T_1$ or to $Y_2$ if $T = T_2$. By Theorem 6, we can partition X into sub-states defined by which event (under each Poisson process) particles are awaiting next. Upon entry into X, all particles enter sub-state $X_{11}$ where they each await the 1st events under PP1 and PP2. If the next event to occur for a given particle is from PP1, the particle transitions to $X_{21}$ where it awaits a 2nd event from PP1 or a 1st event from PP2 (hence the subscript notation). Likewise, if PP2’s 1st event occurs before PP1’s 1st event, the particle would transition to $X_{12}$, and so on. Particles would leave these two states to either $X_{22}$, $Y_1$, or $Y_2$ depending on which event occurs next. Under these assumptions, with inflow rate $\mathcal{I}_X(t)$ and exponential $Y_j$ dwell times, the corresponding mean field equations (using $r^* = r_1 + r_2$) are
62a |
62b |
62c |
62d |
62e |
62f |
It’s worth pointing out that the dwell times for the above sub-states of X are all identically distributed Poisson process 1st event times (note the loss rates in Eqs. (62a)–(62d), and recall the weak memorylessness property from Sect. 3.1.2). All particles in an X sub-state at a given time will spend a remaining amount of time in that state that follows a 1st event time distribution under a Poisson process with rate $r_1 + r_2$. This is a slight generalization of the familiar fact that the minimum of n independent exponentially distributed random variables (with respective rates $r_i$) is itself an exponential random variable (with rate $\sum_{i=1}^{n} r_i$). The next section addresses the generality of this observation about the sub-states of X.
Mean field equivalence of proportional outputs and competing Poisson processes for 1st event time distributions
The scenarios described in Sect. 3.5.1 (proportional distribution across multiple states after an Erlang dwell time in X) and Sect. 3.5.4 (proportional distribution across multiple states based upon competing Poisson processes) can lead to equivalent mean field equations when the X dwell times follow Poisson process 1st event time distributions, as in Example 5. This equivalence is detailed in Theorem 7, and is an important aspect of the GLCT detailed in Sect. 3.7 because it helps to show how sub-states with dwell times distributed as Poisson process 1st event times are the fundamental building blocks of the GLCT.
Theorem 7
(Mean field equivalence of proportional outputs and competing Poisson processes for 1st event time distributions) Consider the special case of Theorem 6 (the Extended LCT for competing Poisson processes) where X has a dwell time given by $T = \min\{T_1, \dots, T_n\}$, where each $T_i$ is a Poisson process 1st event time with rate $r_i(t)$, and particles transition to $Y_j$ with probability $p_{ij}(t)$ when $T = T_i$. The corresponding mean field model is equivalent to the special case of Theorem 4 (the Extended LCT for multiple outputs) where the X dwell time is a Poisson process 1st event time distribution with rate $r^*(t) = \sum_{i=1}^{n} r_i(t)$, and the transition probability vector for leaving X and entering state $Y_j$ has entries $p_j^*(t) = \sum_{i=1}^{n} \frac{r_i(t)}{r^*(t)}\, p_{ij}(t)$.
Proof
First, in this case the X dwell time survival function is $\exp\!\big(-\int_s^t \sum_{i=1}^{n} r_i(u)\,du\big)$, i.e., that of a Poisson process 1st event time with rate $r^*(t) = \sum_{i=1}^{n} r_i(t)$. Since all $k_i = 1$, the probability that $T = T_i$, given an exit at time t, is $r_i(t)/r^*(t)$; thus the probability that a particle leaving X at t goes to $Y_j$ is $\sum_{i=1}^{n} \frac{r_i(t)}{r^*(t)}\, p_{ij}(t)$. Substituting the above equalities into the mean field Eq. (56a) (where there’s only one possible index vector) and (56c) gives
63a |
63b |
which are the mean field equations for the aforementioned special case of Theorem 4.
Modeling intermediate state transitions: reset the clock, or not?
We next describe how to apply extensions of the LCT in two similar but distinctly different scenarios (see Fig. 7) where the transition to one or more intermediate sub-states either resets an individual’s overall dwell time in state X (by assuming the time spent in an intermediate sub-state X is independent of time already spent in X; see Sect. 3.6.1), or instead leaves the overall dwell time distribution for X unchanged (by conditioning the time spent in intermediate state X on the time already spent in X; see Sect. 3.6.2).
An example of these different assumptions leading to important differences in practice comes from Feng et al. (2016), where individuals infected with Ebola can either leave the infected state (X) directly (either to recovery or death), or after first transitioning to an intermediate hospitalized sub-state which needs to be explicitly modeled in order to incorporate a quarantine effect into the rates of disease transmission (i.e., the force of infection should depend on the number of non-quarantined infected individuals). As shown in Feng et al. (2016), the epidemic model output depends strongly upon whether or not it is assumed that moving into the hospitalized sub-state impacts the distribution of time spent in the infected state X.
To most simply illustrate these two scenarios, consider the simple case in Fig. 7 where a single intermediate sub-state is being modeled, and particles enter X into a base sub-state at rate $\mathcal{I}_X(t)$; X consists of this base sub-state together with the intermediate sub-state. Both cases assume particles transition out of the base sub-state either to the intermediate sub-state or they leave state X directly and enter state Y. Both cases also assume the distribution of time spent in the base sub-state is min($T_1$, $T_2$), where particles transition to the intermediate sub-state if one of the two event times occurs first, or directly to Y otherwise (each $T_i$ being the $k_i$th event time under a Poisson process with rate $r_i$; see Sects. 3.5.4 and 3.5.5). Let $T_{\mathrm{int}}$ denote the distribution of time spent in the intermediate sub-state. The first case assumes $T_{\mathrm{int}}$ is independent of time spent in X (i.e., the transition to the intermediate sub-state ‘resets the clock’; see Sect. 3.6.1). The second case assumes $T_{\mathrm{int}}$ is conditional on the time already spent in X (call it $\tau$), such that the total amount of time spent in X, $\tau + T_{\mathrm{int}}$, is equivalent in distribution to the intended overall X dwell time (i.e., the transition to the intermediate sub-state does not change the overall distribution of time spent in X; see Sect. 3.6.2).
In the next two sections, we provide extensions of the LCT for generalizations of these two scenarios, extended to multiple possible intermediate states with eventual transitions out of X into multiple recipient states.
Intermediate states that reset dwell time distributions
First, we consider the case in which the time spent in the intermediate state X is independent of the time already spent in X (i.e., in the base state X). This is arguably the more commonly encountered (implicit) assumption found in ODE models that aren’t explicitly derived from a stochastic model and/or mean field integro-differential delay equations.
The construction of mean field ODEs for this case is a straightforward application of Theorem 6 from the previous section, combined with the extended LCT with output to multiple states (Theorem 4). Here we have extended this scenario to include intermediate sub-states X where the transition to those sub-states from base state X is based on the outcome of N competing Poisson process event time distributions (), and upon leaving the intermediate states particles transition out of state X into one of possible recipient states Y.
Theorem 8
(Extended LCT with dwell time altering intermediate sub-state transitions) Suppose particles enter X at rate into a base sub-state X. Assume particles remain in X according to a dwell time distribution given by T, the minimum of independent Poisson process th event time distributions with rates , , i.e., . Particles leaving X transition to one of intermediate sub-states X or to one of recipient states according to which . If then the particle leaves X and the probability of transitioning to Y is , where . If for then the particle transitions to X with probability , where . Particles in intermediate state remain there according to the th event times under a Poisson process with rate , and then transition to state Y with probability , where (for fixed t) , and they remain in Y according to a dwell time with survival function .
In this case the corresponding mean field equations are
64a |
64b |
64c |
64d |
64e |
where the amount in base sub-state X is , the amount in the jth intermediate state X is (see Theorem 6 for notation), , , , , and in Eq. (64b) . Note that the equations (64e) may be further reduced to a system of ODEs, e.g., via Corollary 1, and that more complicated distributions for dwell times in intermediate states X (e.g., an Erlang mixture) could be similarly modeled according to other cases addressed in this manuscript.
Proof
This follows from applying Theorem 6 to X and treating the intermediate states X as recipient states, then applying Theorem 4 to each intermediate state to partition each X into X, , yielding Eq. (64).
Example 6
Consider the scenario in Fig. 7. Let the X dwell time be the minimum of Erlang() and Erlang(), with intermediate state dwell time Erlang() and an exponential (rate ) dwell time in Y. Assume the only inputs into X are into X at rate . By Theorem 8 the corresponding mean field ODEs (see Fig. 8) are Eq. (65), where and .
65a |
65b |
65c |
65d |
65e |
65f |
65g |
65h |
Intermediate states that preserve dwell time distributions
In this section we address how to modify the outcome in the previous section to instead construct mean field ODE models that incorporate ‘dwell time neutral’ sub-state transitions, i.e., where the distribution of time spent in X is the same regardless of whether or not particles transition (within X) from some base sub-state X to one or more intermediate sub-states X. This is done by conditioning the dwell time distributions in X on time spent in X in a way that leverages the weak memorylessness property discussed in Sect. 3.1.2.
In applications, this case (in contrast to the previous case) is perhaps the more commonly desired assumption, since modelers often seek to partition states into sub-states where key characteristics, like the overall X dwell time distribution, remain unchanged, but where the different sub-states have functional differences elsewhere in the model (for example, transmission rate reductions from quarantine in SIR type infectious disease models).
One approach to deriving such a model is to condition the dwell time distribution for an intermediate state on the time already spent in X (as in Feng et al. 2016). We take a slightly different approach and exploit the weak memorylessness property of Poisson process 1st event time distributions (see Definition 1 in Sect. 3.1.2, and the notation used in the previous section) to instead condition the dwell time distribution for intermediate states on how many of the events have already occurred when a particle transitions from the base sub-state to an intermediate sub-state (rather than conditioning on the exact elapsed time spent in X). In this case, since each sub-state of X has an iid dwell time distribution that is a Poisson process 1st event time, if i of the k relevant events had occurred prior to the transition out of the base sub-state, then the weak memorylessness property of Poisson process 1st event time distributions implies that the remaining time spent in X should follow a $(k-i)$th event time distribution under a Poisson process with the same rate, thus ensuring that the total time spent in X follows a kth event time distribution with that rate. With this realization in hand, one can then apply Theorem 6 and Theorem 4 as in the previous section to obtain the desired mean field ODEs, as detailed in the following Theorem, and as illustrated in Fig. 8.
Theorem 9
(Extended LCT with dwell time preserving intermediate states) Consider the mean field equations for a system of particles entering state X (into sub-state X) at rate . As in the previous case, assume the time spent in X follows the minimum of independent Poisson process th event time distributions with respective rates , (i.e., ). Particles leaving X at time T transition to recipient state Y with probability if , or if () to the jth of intermediate sub-states, X, with probability . If , we may define a random variable indicating how many events had occurred under the Poisson process associated with at the time of the transition out of X (at time T). In order to ensure that the overall time spent in X follows a Poisson process th event time distribution with rate , particles entering state X will remain there for a duration of time that is conditioned on , such that the conditional dwell time for that particle in X is given by a Poisson process th event time with rate . Finally, assume that particles leaving X via intermediate sub-state X at time t transition to Y with probability , where they remain according to a dwell time determined by survival function .
The corresponding mean field equations are
66a |
66b |
66c |
66d |
where , , , , , are the subset of indices where , are the subset of indices where and , , , and . The equations (66d) may be reduced to ODEs, e.g., via Corollary 1.
Proof
The proof of Theorem 9 parallels the proof of Theorem 8, but with the following modifications. First, each sub-state of X (for all j) has the same dwell time distribution, namely, they are all 1st event time distributions under a Poisson process with rate . Second, upon leaving X where and (i.e., when only events have occurred under the 0th Poisson process; see the definition of K in the text above), particles will enter (with probability ) the jth intermediate state X by entering sub-state X, which (due to the weak memorylessness property described in Theorem 1) ensures that, upon leaving X, particles will have spent a duration of time that follows the Poisson process th event time distribution with rate .
Example 7
Consider Example 6 in the previous section, but instead assume that the transition to the intermediate state does not impact the overall time spent in state X, as detailed in Theorem 9. The corresponding mean field ODEs for this case are given by Eq. (67) below [compare Eqs. (67e)–(67g) to Eqs. (65e)–(65h)].
67a |
67b |
67c |
67d |
67e |
67f |
67g |
Generalized Linear Chain Trick (GLCT)
In the preceding sections we have provided various extensions of the Linear Chain Trick (LCT) that describe how the structure of mean field ODE models reflects the assumptions that define corresponding continuous time stochastic state transition models. Each case above can be viewed as a special case of the following more general framework for constructing mean field ODEs, which we refer to as the Generalized Linear Chain Trick (GLCT).
The cases we have addressed thus far share the following stochastic model assumptions, which constitute the major assumptions of the GLCT.
A1. A focal state (which we call state X) can be partitioned into a finite number of sub-states (e.g., XX), each with independent (across states and particles) dwell time distributions that are either exponentially distributed with rates or, more generally, are distributed as independent 1st event times under nonhomogeneous Poisson processes with rates , . Recall the equivalence relation in Sect. 3.5.5.
A2. Inflow rates into X can be described by non-negative, integrable inflow rates into each of these sub-states (e.g., ), some or all of which may be zero. This includes the case where particles enter X at rate and are distributed across sub-states X according to the probabilities (i.e., we let ) where .
A3. Particles that transition out of a sub-state X at time t transition either into sub-state X with probability , or into one of recipient states Y, , with probability . That is, let denote the probability that a particle leaving state X at time t enters either X if or Y if , where , .
A4. Recipient states Y, , also have dwell time distributions defined by survival functions and integrable, non-negative inflow rates that describe inputs from all other non-X sources.
The GLCT (Theorem 10) describes how to construct mean field ODEs for states X and Y for state transition models satisfying the above assumptions.
Theorem 10
(Generalized Linear Chain Trick) Consider a stochastic, continuous time state transition model of particles entering state X and transitioning to states Y, , according to the above assumptions A1-A4. Then the corresponding mean field model is given by
68a |
68b |
where , and we assume non-negative initial conditions , . Note that the equations might be reducible to ODEs, e.g., via Corollary 1 or other results presented above.
Furthermore, Eq. (68a) may be written in vector form where () is the matrix of (potentially time-varying) probabilities describing which transitions out of X at time t go to X (likewise, one can define , , , which is the matrix of probabilities describing which transitions from X at time t go to Y), , , and which yields
69 |
where indicates the Hadamard (element-wise) product.
Proof
The proof of the theorem above follows directly from applying Theorem 4 to each sub-state.
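For readers who want to experiment numerically, the following Python sketch implements the sub-state balance implied by assumptions A1-A4 in the constant-rate case: each sub-state gains its external inflow plus transfers from other sub-states at rates p_{ji} r_j x_j, and loses mass at rate r_i x_i. The function and variable names (glct_rhs, inflow) are illustrative rather than taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def glct_rhs(t, x, inflow, r, P):
    """dx_i/dt = inflow_i(t) + sum_j P[j, i] r[j] x[j] - r[i] x[i]."""
    flux = r * x                                    # outflow rate from each sub-state (r ∘ x)
    return inflow(t) + (P - np.eye(len(x))).T @ flux

# Hypothetical example: three sub-states in series (an Erlang(r, 3) dwell time in X),
# with a constant inflow of 1 per unit time into the first sub-state.
r = np.array([2.0, 2.0, 2.0])
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])                     # the last sub-state exits X entirely
inflow = lambda t: np.array([1.0, 0.0, 0.0])

sol = solve_ivp(glct_rhs, (0.0, 20.0), np.zeros(3), args=(inflow, r, P))
print(sol.y[:, -1])                                 # approaches the equilibrium 1/r = 0.5 in each sub-state
```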
Example 8
(Dwell time given by the maximum of independent Erlang random variables) Here we illustrate how the GLCT can provide a conceptually simpler framework for deriving ODEs than derivation from mean field integral equations, by assuming the X dwell time follows the maximum of multiple Erlang distributions. While the survival function for this distribution is not straightforward to write down, it is fairly straightforward to construct a Markov Chain that yields such a dwell time distribution (see Fig. 9).
Recall that, in Sect. 3.5.4, we considered a dwell time given by the minimum of N Erlang distributions. Here we instead consider the case where the dwell time distribution is given by the maximum of multiple Erlang distributions, where Erlang(). For simplicity, assume the dwell time in a single recipient state Y is exponential with rate . We again partition X according to which events (under the two independent homogeneous Poisson processes associated with each of and ) particles are awaiting, and index those sub-states accordingly (see Fig. 9). These sub-states are X, X, X, X, X, X, X, and X, where a ‘’ in the ith index position indicates that particles in that sub-state have already had the ith Poisson process reach the th event (in this case, the 2nd event). Each such sub-state has exponentially distributed dwell times, but rates for these dwell time distributions differ (unlike the cases in Sect. 3.5.4 where all sub-states had the same rate): the Poisson process rates for sub-states X, X, X, and X are (see Fig. 9 and compare to Theorem 6 and Fig. 6), but the rate for the states X and X (striped circles in Fig. 9) are , and for X and X (shaded circles in Fig. 9) are .
In the context of the GLCT, let [, , , , , , , then by the assumptions above [r, r, r, , r, , , , and hence . Denote and (à la Theorem 7 in Sect. 3.5.5). Then the first eight rows of matrix are given by
70 |
Thus, by the GLCT (Theorem 10), the corresponding mean field ODEs are
71a |
71b |
71c |
71d |
71e |
71f |
71g |
71h |
71i |
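A minimal simulation sketch of the construction in Example 8 follows (with hypothetical rates, since the parameter values are not reproduced here): the maximum of two Erlang event times can be sampled either directly or by tracking, sub-state by sub-state, how many events each Poisson process still requires, which is the Markov chain view illustrated in Fig. 9.

```python
import numpy as np

rng = np.random.default_rng(2)
r1, r2 = 1.0, 0.5                 # hypothetical rates for the two Poisson processes
n = 100_000

# Direct sampling of the maximum of the two Erlang(r_i, 2) event times
T_direct = np.maximum(rng.gamma(2, 1.0 / r1, n), rng.gamma(2, 1.0 / r2, n))

# Sub-state (Markov chain) view: track how many events each process still needs.
def substate_sample():
    need, rates, t = [2, 2], [r1, r2], 0.0
    while max(need) > 0:
        active = [i for i in (0, 1) if need[i] > 0]
        total = sum(rates[i] for i in active)
        t += rng.exponential(1.0 / total)                      # next event of the merged process
        fired = rng.choice(active, p=[rates[i] / total for i in active])
        need[fired] -= 1
    return t

T_substate = np.array([substate_sample() for _ in range(20_000)])
print(T_direct.mean(), T_substate.mean())                      # the two means should agree
```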
GLCT for phase-type distributions
The GLCT above extends the LCT to a very flexible family of dwell time distributions known as (continuous) phase-type distributions (Asmussen et al. 1996; Pérez and Riaño 2006; Osogami and Harchol-Balter 2006; Thummler et al. 2006; Reinecke et al. 2012a; Horváth et al. 2012; Komárková 2012; Horváth et al. 2016; Okamura and Dohi 2015; Horváth and Telek 2017), i.e., the hitting time distributions for Continuous Time Markov Chains (CTMC). These CTMC hitting time distributions include the hypoexponential distributions, hyper-exponential and hyper-Erlang distributions, and generalized Coxian distributions (Reinecke et al. 2012a; Horváth et al. 2016). Importantly, these distributions can be fit to data or can be used to approximate other named distributions (e.g., see Horváth and Telek 2017; Osogami and Harchol-Balter 2006; Altiok 1985; Pérez and Riaño 2006; Komárková 2012; Reinecke et al. 2012b, and related publications). As detailed below, this enables modelers to incorporate a much broader set of dwell time distributions into ODEs than is afforded by the standard LCT.
Consider the assumptions of Theorem 10 above. Assume vectors and , and matrices , and are all constant. As above, assume the probability of entering states in Y is zero; thus our initial distribution vector for this CTMC (with n + m states) is fully determined by the (length n) vector . Also assume—just to define the CTMC that describes transitions among transient states X up to (but not after) entering states Y—that each state in Y is absorbing (i.e., , ). Then the X dwell time distribution follows the hitting time distribution for a CTMC with initial distribution vector and (n + m) × (n + m) transition probability matrix
72 |
To clearly state the GLCT for phase-type distributions, we must reparameterize the above CTMC. First, there is an equivalent parameterization of this CTMC which corresponds to thinning the underlying Poisson processes so that we only track transitions between distinct states, and ignore events in which an individual leaves and instantly returns to its current state (this thinned process is often called the embedded jump process).
The rate for the thinned process that determines a particle’s dwell time in transient state i goes from to
73 |
If , the transition probabilities out of state i then get normalized to
74 |
The rows for absorbing states (Y) remain unchanged, i.e., for . The resulting transition probability matrix and rate vector define the embedded jump process description of the CTMC with transition probability matrix and rate vector (initial probability vector is the same for both representations of this CTMC).
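Assuming the standard thinning formulas (the thinned rate in transient state i is the original rate times one minus the self-transition probability, and the remaining transition probabilities out of i are renormalized accordingly), the reparameterization just described can be sketched as follows; the function name and example values are hypothetical.

```python
import numpy as np

def embedded_jump_process(r, P):
    """r: length-n rates for the transient states; P: n x (n + m) transition probabilities."""
    n = len(r)
    p_self = np.diag(P[:, :n])                      # probability of an instantaneous return to state i
    r_tilde = r * (1.0 - p_self)                    # thinned dwell rates
    P_tilde = P.astype(float).copy()
    for i in range(n):
        if p_self[i] < 1.0:
            P_tilde[i, :] /= (1.0 - p_self[i])      # renormalize the remaining transition probabilities
            P_tilde[i, i] = 0.0
    return r_tilde, P_tilde

# Hypothetical example: two transient states and one absorbing state, with a 20% self-transition.
r = np.array([5.0, 1.0])
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
print(embedded_jump_process(r, P))   # rates (4, 1); first row renormalizes to (0, 0.625, 0.375)
```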
Lastly, this CTMC can again be reparameterized by combining the jump process transition probability matrix and rate vector to yield this CTMC’s transition rate matrix (also sometimes called the infinitesimal generator matrix or simply the generator matrix) which is defined as follows. Let matrix be the same dimension as (and thus, ) and let the first n terms in the diagonal of be the negative of the jump process rates, (i.e., , ). Let the off diagonal entries of the first n rows of be the jump process transition probabilities multiplied by the ith rate (i.e., , ). Thus, the first row of is
and so on. Since the transition rates out of absorbing states (e.g., the last m rows of ) are 0, has the form
75 |
Note that and can be recovered from using the definitions above.
This third parameterization of the given CTMC, determined solely by initial distribution and transition rate matrix , can now be used to formally describe the phase-type distribution associated with this CTMC, i.e., the distribution of time spent in the transient states X before hitting absorbing states Y. Specifically, the phase-type distribution density function and CDF are
76a |
76b |
where is a vector of ones. Importantly, this distribution depends only on the matrix and length n initial distribution vector .
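The following sketch assumes the standard sub-generator parameterization of a continuous phase-type distribution, i.e., density f(t) = α exp(At)(−A1) and CDF F(t) = 1 − α exp(At)1, where 1 is a vector of ones; the function names and example values are ours. It assembles the transient block of the generator from the jump-process rates and probabilities and then evaluates the density and CDF with a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def transient_generator(r_tilde, P_tilde):
    """Transient block: diagonal entries -r_tilde[i], off-diagonal entries r_tilde[i] * P_tilde[i, j]."""
    n = len(r_tilde)
    A = np.diag(r_tilde) @ P_tilde[:n, :n]
    A[np.diag_indices(n)] = -r_tilde
    return A

def phase_type_pdf_cdf(alpha, A, t):
    ones = np.ones(A.shape[0])
    E = expm(A * t)
    return alpha @ E @ (-A @ ones), 1.0 - alpha @ E @ ones   # density f(t) and CDF F(t)

# Hypothetical 2-phase chain: phase 1 (rate 2) always jumps to phase 2 (rate 3), which then exits.
r_tilde = np.array([2.0, 3.0])
P_tilde = np.array([[0.0, 1.0, 0.0],     # columns: phase 1, phase 2, absorbing state
                    [0.0, 0.0, 1.0]])
A = transient_generator(r_tilde, P_tilde)          # [[-2, 2], [0, -3]]
alpha = np.array([1.0, 0.0])
print(A, phase_type_pdf_cdf(alpha, A, 1.0))
```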
The GLCT for phase-type distributions can now be stated as follows.
Corollary 2
(GLCT for phase-type distributions) Assume particles enter state X at rate and that the dwell time distribution for a state X follows a continuous phase-type distribution given by the initial probability vector and matrix . Let and . Then Eq. (69) in Theorem 10 becomes
77 |
Example 9
(Serial LCT and hypoexponential distributions) Assume the dwell time in state X is given by the sum of independent (not identically distributed) Erlang distributions or, more generally, Poisson process th event time distributions with rates , i.e., , (note the special case where all and are constant, in which case T follows a hypoexponential distribution). Let and further assume particles go to with probability upon leaving X, . Using the GLCT framework above, this corresponds to partitioning X into sub-states X, where , and
78 |
where the first elements of are , the next are , etc., and
79 |
By the GLCT (Theorem 10), using to denote the jth element of in Eq. (78), the corresponding mean field equations are
80a |
80b |
80c |
Note that the phase-type distribution form of the hypoexponential distribution could be used with Corollary 2 to derive the equations in Eq. (9).
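As a concrete instance of this serial construction, the following sketch (with hypothetical shapes and rates) builds an initial distribution vector and sub-generator matrix for a dwell time given by a sum of independent Erlang stages and checks its mean using the standard phase-type mean formula α(−A)⁻¹1.

```python
import numpy as np

def serial_erlang_phase_type(rates, shapes):
    """(alpha, A) for a dwell time that is the sum of Erlang(rates[i], shapes[i]) stages, in order."""
    phase_rates = np.repeat(rates, shapes)                       # one exponential phase per event
    A = np.diag(-phase_rates) + np.diag(phase_rates[:-1], k=1)   # each phase feeds the next in the chain
    alpha = np.zeros(len(phase_rates))
    alpha[0] = 1.0                                               # all particles enter at the first phase
    return alpha, A

alpha, A = serial_erlang_phase_type(rates=[2.0, 0.5], shapes=[3, 2])   # Erlang(2, 3) then Erlang(0.5, 2)
mean_dwell = alpha @ np.linalg.inv(-A) @ np.ones(len(alpha))
print(mean_dwell)   # 3/2 + 2/0.5 = 5.5
```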
Example 10
(SIR model with phase-type duration of infectiousness) Consider the simple SIR model given by Eq. (2) with mass action transmission . Suppose the assumed exponentially distributed dwell times in state I were to be replaced by a phase-type distribution with initial distribution vector and matrix (note that, in some cases, it is possible to match the first three or more moments using only a or matrix, and that Matlab and Python routines for making such estimates are freely available in Horváth and Telek 2017). Then, letting be the vector of sub-states of I and , by the GLCT for phase-type distributions (Corollary 2) the corresponding mean field ODE model, with , is
81a |
81b |
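A minimal simulation sketch of Example 10 follows, assuming the infectious sub-state vector obeys dI/dt = λ(t) S α + Aᵀ I under mass-action transmission, which is one natural reading of the GLCT for phase-type distributions; all parameter values and the particular 2-phase (α, A) pair below are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.3                                      # hypothetical mass-action transmission parameter
alpha = np.array([1.0, 0.0])                    # hypothetical initial-phase distribution for I
A = np.array([[-0.4, 0.3],
              [0.0, -0.2]])                     # hypothetical 2-phase sub-generator for the I dwell time

def sir_phase_type(t, u):
    S, I, R = u[0], u[1:-1], u[-1]
    lam = beta * I.sum()                        # force of infection from all infectious sub-states
    dS = -lam * S
    dI = lam * S * alpha + A.T @ I              # new infections enter phases according to alpha
    dR = (-A @ np.ones_like(I)) @ I             # total recovery flow out of the I phases
    return np.concatenate(([dS], dI, [dR]))

u0 = np.concatenate(([0.99], 0.01 * alpha, [0.0]))
sol = solve_ivp(sir_phase_type, (0.0, 300.0), u0)
print(sol.y[0, -1], sol.y[-1, -1])              # final susceptible and recovered fractions
```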
Discussion
ODEs are arguably the most popular framework for modeling continuous time dynamical systems in applications. This is in part due to the relative ease of building, simulating, and analyzing ODE models, which makes them very accessible to a broad range of scientists and mathematicians. The above results generalize the Linear Chain Trick (LCT), and detail how to construct mean field ODE models for a broad range of scenarios found in applications based on explicit individual-level stochastic model assumptions. Our hope is that these contributions improve the speed and efficiency of constructing mean field ODE models, increase the flexibility to make more appropriate dwell time assumptions, and help clarify (for both modelers and those reading the results of their work) how individual-level stochastic assumptions are reflected in the structure of mean field ODE model equations. We have provided multiple novel theorems that describe how to construct such ODEs directly from underlying stochastic model assumptions, without formally deriving them from an explicit stochastic model or from intermediate integral equations. The Erlang distribution recursion relation (Lemma 1) that drives the LCT has been generalized to include the time-varying analogues of Erlang distributions—i.e., kth event time distributions under nonhomogeneous Poisson processes (Lemma 2)—and distributions that reflect “competing Poisson process event times” defined as the minimum of a finite number of independent Poisson process event times (Lemma 3). These new lemmas, and our generalization of the memorylessness property of the exponential distribution (which we refer to in Sect. 3.1.2 as the weak memorylessness property of nonhomogeneous Poisson process 1st event time distributions) together allow more flexible dwell time assumptions to be incorporated into mean field ODE models. We have also introduced a novel Generalized Linear Chain Trick (GLCT; Theorem 10 in Sect. 3.7) which complements previous extensions of the LCT (e.g., Jacquez and Simon 2002; Diekmann et al. 2017) and allows modelers to construct mean field ODE models for a broad range of dwell time assumptions and associated sub-state configurations (e.g., conditional dwell time distributions for intermediate sub-state transitions), including the phase-type family of distributions and their time-varying analogues as detailed in Sect. 3.7.1. The GLCT also provides a framework for considering other scenarios not specifically addressed by the preceding results, as illustrated by Example 8 in which the dwell time distribution follows the maximum of multiple Erlang distributions.
These results not only provide a framework to incorporate more accurate dwell time distributions into ODE models, but also hopefully encourage more comparative studies, such as Feng et al. (2016), that explore the dynamic and application-specific consequences of incorporating non-Erlang dwell time distributions, and conditional dwell time distributions, into ODE models. More study is needed, for example, comparing integral equation models with non-exponential dwell time distributions and their ODE counterparts based on approximating those distributions with Erlang and phase-type distributions.
The GLCT also permits modelers to incorporate the flexible phase-type distributions (i.e., the hitting-time distributions for Continuous Time Markov Chains) into ODE models. This family of probability distributions includes mixtures of Erlang distributions (a.k.a. hyper-Erlang distributions), the minimum or maximum of multiple Erlang distributions, the hypoexponential distributions, generalized Coxian distributions, and others (Reinecke et al. 2012a; Horváth et al. 2016). While the phase-type distributions are currently mostly unknown to mathematical biologists, they have received some attention in other fields and modelers can take advantage of existing methods that have been developed to fit phase-type distributions to other distributions on and to data (see Sect. 3.7.1 for references). These results provide a flexible framework for approximating dwell time distributions, and incorporating those empirically or analytically derived dwell time distributions into ODE models.
There are some additional considerations, and potential challenges to implementing these results in applications, that are worth addressing. First, the increase in the number of state variables may lead to both computational and analytical challenges; however, we have a growing number of tools at our disposal for tackling higher dimensional systems (e.g., van den Driessche and Watmough 2002; Guo et al. 2008; Li and Shuai 2010; Du and Li 2014; Bindel et al. 2014). We intend to explore the costs associated with this increase in dimensionality in future publications. Second, it is tempting to assume that the sub-states resulting from the above theorems correspond to some sort of sub-state structure in the actual system being modeled. This is not necessarily the case, and we should be cautious about interpreting these sub-states as evidence of, e.g., cryptic population structure. Third, some of the above theorems make a simplifying assumption that, either at time or upon entry into X, the initial distribution of particles is only into the first sub-state. This may not be the appropriate assumption to make in some applications, but it is fairly straightforward to modify these initial condition assumptions within the context of the GLCT. Fourth, it may be appropriate in some applications to avoid mean field models, and instead analyze the stochastic model dynamics directly [e.g., see Allen (2010, 2017) and references therein]. Lastly, the history of successful attempts to garner scientific insights from mean field ODE models may seem to suggest that such distributional refinements are unnecessary. However, this is clearly not always the case, as evidenced by the many instances in which modelers have abandoned ODEs and instead opted to use integral equations to model systems with non-Erlang dwell time distributions. We hope these results will facilitate more rigorous comparisons between such models and their simplified and/or ODE counterparts, like Feng et al. (2016) and Piotrowska and Bodnar (2018), to help clarify when the use of these ODE models is warranted.
In closing, these novel extensions of the LCT help clarify how underlying assumptions are reflected in the structure of mean field ODE models, and provide a means for incorporating more flexible dwell time assumptions into mean field ODE models directly from first principles without a need to derive ODEs from stochastic models or intermediate mean field integral equations. The Generalized Linear Chain Trick (GLCT) provides both a conceptual framework for understanding how individual-level stochastic assumptions are reflected in the structure of mean field model equations, and a practical framework for incorporating exact, approximate, or empirically derived dwell time distributions and related assumptions into mean field ODE models.
Acknowledgements
The authors thank Michael H. Cortez, Jim Cushing, Marisa Eisenberg, Jace Gilbert, Zoe Haskell, Tomasz Kozubowski, Miles Moran, Catalina Medina, Amy Robards, Deena R. Schmidt, Joe Tien, Narae Wadsworth, and an anonymous referee for comments and suggestions that improved this work.
Appendix A Deterministic models as mean field equations
Here we give a brief description of the process of deriving deterministic mean field equations from stochastic first principles.
Consider a stochastic model of discrete particles (e.g., individual organisms in a population, molecules in a solution, etc.) that transition among a finite number of n states in continuous time. Let the state variables of interest be the amount of particles in each state. This set of counts in each state can be thought of as a state vector in the state space, and our model describes the (stochastic) rules governing transitions from one state vector to the next (i.e., from a given state there is some probability distribution across the state space describing how the system will proceed).
Mean field models essentially average that distribution, and thus describe the mean state transitions from any given state of the system. That is, for a given state in , we can think of a probability distribution that describes where the system would move from that point in , and find the mean transition direction in state space, which then defines a deterministic dynamical system on which we refer to as a mean field model for the given stochastic process. To derive a continuous time mean field model, we do this for an arbitrary discrete step size so we can then take the limit as .
A.1 Deriving mean field integral equations and ODEs
Here we illustrate one approach to deriving mean field equations starting from an explicit stochastic model, using a simple toy example. To do this, we begin by describing a discrete time approximation (with arbitrarily small time step ) of a stochastic system of particles transitioning among multiple states, then derive the corresponding discrete-time mean field model, which yields the desired continuous time mean field model upon taking the limit as . Other more systematic approaches exist, e.g., see Kurtz (1970, 1971).
Consider a large number of particles that begin (at time ) in state W and then each transitions to state X after an exponentially distributed amount of time (with rate a). Particles remain in state X according to an Erlang(r, k) distributed delay before entering state Y. They then go to state Z after an exponentially distributed time delay with rate . Let , , , and be the amount in each of the corresponding states at time , with , and . Then, as we will derive below, the corresponding continuous time mean field model is given by Eq. (A1) where the state variables w, x, y, and z correspond to the amount in each of the corresponding states, and we assume the initial conditions and .
A1a |
A1b |
A1c |
A1d |
First, to derive the linear ODEs (A1a) and (A1d), note that the number of particles that transition from state W to state X in a short time interval is binomially distributed: if we think of a transition from W to X as a “success”, then the number of “trials” and the probability of success p is given by the exponential CDF value . For sufficiently small this implies . Let for . Since the expected value of a binomial random variable is np, it follows that
A2 |
Dividing both sides of (A2) by and then taking the limit as yields
A3 |
Similarly, define z(t) in terms of then it follows that
A4 |
Next, we derive the integral equations (A1b) and (A1c) by similarly deriving a discrete time mean field model and then taking its limit as bin width (i.e., as the number of bins ).
Partition the interval [0, t] into equally wide intervals of width (see Fig. 10). Let denote the ith such time interval () and let denote the start time of the ith interval. We may now account for the number transitioning into and out of state X during time interval , and sum these values to compute x(t).
The number in state X at time t () is the number that entered state X between time 0 and t, less the number that transitioned out of X before time t (for now, assume ). A particle that enters state X at time will still be in state X according to a Bernoulli random variable with (the expected proportion under the given gamma distribution). Therefore, to compute we can sum over our M intervals and add up the number that entered state X during interval and, from each of those M cohorts, count how many remain in X at time t. Specifically, the number entering X during the ith interval [] is given by (see Fig. 10), and thus the number remaining in X at time t is the sum of the number remaining at time t from each such cohort (i.e., the sum over to M) where the number remaining in X at t from each cohort follows a compound binomial distribution given by the sum of Bernoulli random variables each with probability . This defines our stochastic state transition model, which yields a mean field model as follows.
The expected amount entering X during [ is , and the expected proportion of the ith cohort remaining at time t is . Thus, the expected number from the ith cohort remaining in X at time t is . Summing these expected values over all intervals, and letting , yields
A5 |
To calculate y(t), let be the number entering state Y during interval that are still in state Y at time t. As above, can be calculated by summing (over to ) the number in each cohort that entered state X during then transitioned to state Y during time interval and are still in state Y at t. Therefore can be written as the sum of compound distributions given by counting how many of the particles that entered state X during then transitioned to state Y during interval and then persisted until time t without transitioning to Z. To count these, notice that each such particle entering state X during , state Y during and persisting in Y at time t follows a Bernoulli random variable with probability
A6 |
Therefore, the number of particles that entered Y at and remain in state Y at time t, , can be written as a compound random variable . The expected is thus
A7 |
Summing over all intervals and letting () gives
A8 |
Applying Theorem 2, the mean field ODEs equivalent to Eq. (A1), where , are given by
A9a |
A9b |
A9c |
A9d |
A9e |
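As a numerical sanity check of the mean field interpretation above, the following Monte Carlo sketch (hypothetical parameter values and variable names) samples the W-to-X-to-Y-to-Z transition times for a large number of particles and reports the empirical fractions in each state at a fixed time; by the law of large numbers these should match the corresponding mean field proportions.

```python
import numpy as np

rng = np.random.default_rng(3)
a, r, k, b = 1.0, 2.0, 3, 0.5          # hypothetical rates: W->X (exp), X->Y (Erlang(r, k)), Y->Z (exp)
N, t = 200_000, 4.0

tWX = rng.exponential(1.0 / a, N)                  # W -> X transition times
tXY = tWX + rng.gamma(k, 1.0 / r, N)               # X -> Y transition times (Erlang dwell time in X)
tYZ = tXY + rng.exponential(1.0 / b, N)            # Y -> Z transition times

fractions = [np.mean(t < tWX),                     # still in W at time t
             np.mean((tWX <= t) & (t < tXY)),      # in X
             np.mean((tXY <= t) & (t < tYZ)),      # in Y
             np.mean(tYZ <= t)]                    # in Z
print(fractions)                                   # should match the mean field proportions at time t
```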
Appendix B Erlang mixture approximation of Gamma()
Here we give a simple example of analytically approximating a gamma distribution with a mixture of two Erlang distributions by moment matching.
Suppose random variable T follows a gamma() distribution, which has mean and variance , and whose shape parameter is not an integer. One can approximate this gamma distribution with a mixture of two Erlang distributions that yields the same mean and variance.
These Erlang distributions are TErlang() and TErlang() where the shape parameters are obtained by rounding down and up, respectively, to the nearest integer ( and ) and the rate parameters are given by () and (). This ensures that T and T have mean .
To calculate their variances, let and (note ). By rounding the shape down (up), the resulting Erlang distribution has higher (lower) variance, i.e.,
B1 |
To calculate the mixing proportion, let the mixture distribution , where is a Bernoulli random variable with and
B2 |
This Erlang mixture has the desired mean and variance , since
B3 |
Numerical comparisons suggest this mixture is a very good approximation of the target gamma() distribution for shape values larger than roughly 3 to 5, depending (to a lesser extent) on .
Alternatively, to approximate a gamma() distribution with a mixture of Erlang distributions as described above, one could select the mixing probabilities using an alternative metric, such as a distance in probability space (e.g., see Rachev 1991), the -norm on their CDFs, or an information-theoretic quantity such as the Kullback–Leibler (KL) or Jensen–Shannon divergence.
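The moment-matching construction described above can be sketched in a few lines (variable names are ours; the routine assumes the shape parameter is not an integer):

```python
import numpy as np

def erlang_mixture(shape, rate):
    """Two-component Erlang mixture matching the mean and variance of gamma(shape, rate)."""
    mu, var = shape / rate, shape / rate**2
    k1, k2 = int(np.floor(shape)), int(np.ceil(shape))     # round the shape down and up
    r1, r2 = k1 / mu, k2 / mu                               # both components keep the target mean mu
    v1, v2 = mu**2 / k1, mu**2 / k2                         # component variances bracket the target
    p = (var - v2) / (v1 - v2)                              # mixing probability matching the variance
    return (k1, r1), (k2, r2), p

print(erlang_mixture(shape=3.4, rate=1.7))   # ((3, 1.5), (4, 2.0), p ~ 0.53)
```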
Funding
This work was supported by funds provided to PJH by the University of Nevada, Reno (UNR) Office of Research and Innovation, and by a grant from the Sloan Scholars Mentoring Network of the Social Science Research Council with funds provided by the Alfred P. Sloan Foundation.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
Footnotes
Erlang distributions are Gamma distributions with integer-valued shape parameters.
They can also be parameterized in terms of their mean and variance (see Appendix B), or with a shape and scale parameters, where the scale parameter is the inverse of the rate.
A useful interpretation of survival functions, which is used below, is that they give the expected proportion remaining after a given amount of time.
...despite the implied exclusivity of the adjective nonhomogeneous.
That is, the probability of a given individual exiting state X during a brief time period [] is approximately .
Here the variance is assumed to have been chosen so that the resulting shape parameter is integer valued. See Appendix B for related details.
It is generally true that the survival function for a minimum of multiple independent random variables is the product of their survival functions.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Allen LJS. An introduction to stochastic processes with applications to biology. 2. Boca Raton: Chapman and Hall/CRC; 2010.
- Allen LJ. A primer on stochastic epidemic models: formulation, numerical simulation, and analysis. Infect Dis Model. 2017;2(2):128–142. doi: 10.1016/j.idm.2017.03.001.
- Altiok T. On the phase-type approximations of general distributions. IIE Trans. 1985;17(2):110–116. doi: 10.1080/07408178508975280.
- Anderson RM, May RM. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press; 1992.
- Anderson D, Watson R. On the spread of a disease with gamma distributed latent and infectious periods. Biometrika. 1980;67(1):191–198. doi: 10.1093/biomet/67.1.191.
- Armbruster B, Beck E. Elementary proof of convergence to the mean-field model for the SIR process. J Math Biol. 2017;75(2):327–339. doi: 10.1007/s00285-016-1086-1.
- Asmussen S, Nerman O, Olsson M. Fitting phase-type distributions via the EM algorithm. Scand J Stat. 1996;23(4):419–441.
- Banks HT, Catenacci J, Hu S. A comparison of stochastic systems with different types of delays. Stoch Anal Appl. 2013;31(6):913–955. doi: 10.1080/07362994.2013.806217.
- Bindel D, Friedman M, Govaerts W, Hughes J, Kuznetsov YA. Numerical computation of bifurcations in large equilibrium systems in MATLAB. J Comput Appl Math. 2014;261:232–248. doi: 10.1016/j.cam.2013.10.034.
- Blythe S, Nisbet R, Gurney W. The dynamics of population models with distributed maturation periods. Theor Popul Biol. 1984;25(3):289–311. doi: 10.1016/0040-5809(84)90011-X.
- Boese F. The stability chart for the linearized Cushing equation with a discrete delay and with gamma-distributed delays. J Math Anal Appl. 1989;140(2):510–536. doi: 10.1016/0022-247X(89)90081-4.
- Burton TA. Volterra integral and differential equations, mathematics in science and engineering. 2. Amsterdam: Elsevier; 2005.
- Câmara De Souza D, Craig M, Cassidy T, Li J, Nekka F, Bélair J, Humphries AR. Transit and lifespan in neutrophil production: implications for drug intervention. J Pharmacokinet Pharmacodyn. 2018;45(1):59–77. doi: 10.1007/s10928-017-9560-y.
- Campbell SA, Jessop R. Approximating the stability region for a differential equation with a distributed delay. Math Model Nat Phenom. 2009;4(2):1–27. doi: 10.1051/mmnp/20094201.
- Champredon D, Dushoff J, Earn D (2018) Equivalence of the Erlang SEIR epidemic model and the renewal equation. bioRxiv 10.1101/319574
- Ciaravino G, García-Saenz A, Cabras S, Allepuz A, Casal J, García-Bocanegra I, Koeijer AD, Gubbins S, Sáez J, Cano-Terriza D, Napp S. Assessing the variability in transmission of bovine tuberculosis within Spanish cattle herds. Epidemics. 2018;23:110–120. doi: 10.1016/j.epidem.2018.01.003.
- Clapp G, Levy D. A review of mathematical models for leukemia and lymphoma. Drug Discov Today Dis Models. 2015;16:1–6. doi: 10.1016/j.ddmod.2014.10.002.
- Cushing JM. The dynamics of hierarchical age-structured populations. J Math Biol. 1994;32(7):705–729. doi: 10.1007/BF00163023.
- Dhooge A, Govaerts W, Kuznetsov YA. MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans Math Softw. 2003;29(2):141–164. doi: 10.1145/779359.779362.
- Diekmann O, Gyllenberg M, Metz JAJ. Finite dimensional state representation of linear and nonlinear delay systems. J Dyn Differ Equ. 2017. doi: 10.1007/s10884-017-9611-5.
- Du P, Li MY. Impact of network connectivity on the synchronization and global dynamics of coupled systems of differential equations. Phys D Nonlinear Phenom. 2014;286–287:32–42. doi: 10.1016/j.physd.2014.07.008.
- Ermentrout B (1987) Simulating, analyzing, and animating dynamical systems: a guide to XPPAUT for researchers and students. Software, environments and tools (Book 14), Society for Industrial and Applied Mathematics
- Ermentrout B (2019) XPP/XPPAUT homepage. http://www.math.pitt.edu/~bard/xpp/xpp.html. Accessed Apr 2019
- Fargue D. Réductibilité des systèmes héréditaires à des systèmes dynamiques (régis par des équations différentielles ou aux dérivées partielles). C R Acad Sci Paris Sér A-B. 1973;277:B471–B473.
- Feng Z, Thieme H. Endemic models with arbitrarily distributed periods of infection I: fundamental properties of the model. SIAM J Appl Math. 2000;61(3):803–833. doi: 10.1137/S0036139998347834.
- Feng Z, Xu D, Zhao H. Epidemiological models with non-exponentially distributed disease stages and applications to disease control. Bull Math Biol. 2007;69(5):1511–1536. doi: 10.1007/s11538-006-9174-9.
- Feng Z, Zheng Y, Hernandez-Ceron N, Zhao H, Glasser JW, Hill AN. Mathematical models of Ebola-Consequences of underlying assumptions. Math Biosci. 2016;277:89–107. doi: 10.1016/j.mbs.2016.04.002.
- Fenton A, Lello J, Bonsall M. Pathogen responses to host immunity: the impact of time delays and memory on the evolution of virulence. Proc Biol Sci. 2006;273(1597):2083–2090. doi: 10.1098/rspb.2006.3552.
- Goltser Y, Domoshnitsky A. About reducing integro-differential equations with infinite limits of integration to systems of ordinary differential equations. Adv Differ Equ. 2013;1:187. doi: 10.1186/1687-1847-2013-187.
- Guan Zhi-Hong, Ling Guang. Emergence, Complexity and Computation. Berlin, Heidelberg: Springer Berlin Heidelberg; 2017. Dynamic Analysis of Genetic Regulatory Networks with Delays; pp. 285–309.
- Guo H, Li MY, Shuai Z. A graph-theoretic approach to the method of global Lyapunov functions. Proc Am Math Soc. 2008;136(08):2793–2802. doi: 10.1090/S0002-9939-08-09341-6.
- Gyllenberg M. Mathematical aspects of physiologically structured populations: the contributions of J. A. J. Metz. J Biol Dyn. 2007;1(1):3–44. doi: 10.1080/17513750601032737.
- Hethcote HW, Tudor DW. Integral equation models for endemic infectious diseases. J Math Biol. 1980;9(1):37–47. doi: 10.1007/BF00276034.
- Horváth G, Telek M (2017) BuTools 2: a rich toolbox for Markovian performance evaluation. In: ValueTools 2016—10th EAI international conference on performance evaluation methodologies and tools, association for computing machinery, pp 137–142. 10.4108/eai.25-10-2016.2266400
- Horváth Gábor, Reinecke Philipp, Telek Miklós, Wolter Katinka. Analytical and Stochastic Modeling Techniques and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012. Efficient Generation of PH-Distributed Random Variates; pp. 271–285.
- Horvath Andras, Scarpa Marco, Telek Miklos. Springer Series in Reliability Engineering. Cham: Springer International Publishing; 2016. Phase Type and Matrix Exponential Distributions in Stochastic Modeling; pp. 3–25.
- Jacquez JA, Simon CP. Qualitative theory of compartmental systems with lags. Math Biosci. 2002;180(1):329–362. doi: 10.1016/S0025-5564(02)00131-1.
- Kermack WO, McKendrick AG. A contribution to the mathematical theory of epidemics. Proc R Soc Lond Ser A Contain Pap Math Phys Character. 1927;115(772):700–721. doi: 10.1098/rspa.1927.0118.
- Komárková Z (2012) Phase-type approximation techniques. Bachelor’s Thesis, Masaryk University, https://is.muni.cz/th/ysfsq
- Krylova O, Earn DJD. Effects of the infectious period distribution on predicted transitions in childhood disease dynamics. J R Soc Interface. 2013. doi: 10.1098/rsif.2013.0098.
- Krzyzanski W, Hu S, Dunlavey M. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression. J Pharmacokinet Pharmacodyn. 2018;45(2):329–337. doi: 10.1007/s10928-018-9575-z.
- Kurtz TG. Solutions of ordinary differential equations as limits of pure jump Markov processes. J Appl Probab. 1970;7(1):49–58. doi: 10.2307/3212147.
- Kurtz TG. Limit theorems for sequences of jump Markov processes approximating ordinary differential processes. J Appl Probab. 1971;8(2):344–356. doi: 10.2307/3211904.
- Li MY, Shuai Z. Global-stability problem for coupled systems of differential equations on networks. J Differ Equ. 2010;248(1):1–20. doi: 10.1016/j.jde.2009.09.003.
- Lin CJ, Wang L, Wolkowicz GSK. An alternative formulation for a distributed delayed logistic equation. Bull Math Biol. 2018;80(7):1713–1735. doi: 10.1007/s11538-018-0432-4.
- Lloyd AL. Destabilization of epidemic models with the inclusion of realistic distributions of infectious periods. Proc R Soc Lond B Biol Sci. 2001;268(1470):985–993. doi: 10.1098/rspb.2001.1599.
- Lloyd AL. Realistic distributions of infectious periods in epidemic models: changing patterns of persistence and dynamics. Theor Popul Biol. 2001;60(1):59–71. doi: 10.1006/tpbi.2001.1525.
- Lloyd AL. Mathematical and Statistical Estimation Approaches in Epidemiology. Dordrecht: Springer Netherlands; 2009. Sensitivity of Model-Based Epidemiological Parameter Estimation to Model Assumptions; pp. 123–141.
- Ma J, Earn DJD. Generality of the final size formula for an epidemic of a newly invading infectious disease. Bull Math Biol. 2006;68(3):679–702. doi: 10.1007/s11538-005-9047-7.
- MacDonald Norman. Time Lags in Biological Models. Berlin, Heidelberg: Springer Berlin Heidelberg; 1978.
- MacDonald Norman. Time Lags in Biological Models. Berlin, Heidelberg: Springer Berlin Heidelberg; 1978. Stability Analysis; pp. 13–38.
- MacDonald N. Biological delay systems: linear stability theory, Cambridge studies in mathematical biology. Cambridge: Cambridge University Press; 1989.
- Makroglou A, Li J, Kuang Y. Mathematical models and software tools for the glucose-insulin regulatory system and diabetes: an overview. Appl Numer Math. 2006;56(3):559–573. doi: 10.1016/j.apnum.2005.04.023.
- Metz J. A. J., Diekmann O., editors. The Dynamics of Physiologically Structured Populations. Berlin, Heidelberg: Springer Berlin Heidelberg; 1986.
- Metz J, Diekmann O (1991) Exact finite dimensional representations of models for physiologically structured populations. I: The abstract formulation of linear chain trickery. In: Goldstein JA, Kappel F, Schappacher W (eds) Proceedings of differential equations with applications in biology, physics, and engineering 1989, vol 133, pp 269–289
- Nisbet R. M., Gurney W. S. C., Metz J. A. J. Applied Mathematical Ecology. Berlin, Heidelberg: Springer Berlin Heidelberg; 1989. Stage Structure Models Applied in Evolutionary Ecology; pp. 428–449.
- Okamura Hiroyuki, Dohi Tadashi. Quantitative Evaluation of Systems. Cham: Springer International Publishing; 2015. mapfit: An R-Based Tool for PH/MAP Parameter Estimation; pp. 105–112.
- Osogami T, Harchol-Balter M. Closed form solutions for mapping general distributions to quasi-minimal PH distributions. Perform Eval. 2006;63(6):524–552. doi: 10.1016/j.peva.2005.06.002.
- Özbay H, Bonnet C, Clairambault J (2008) Stability analysis of systems with distributed delays and application to hematopoietic cell maturation dynamics. In: 2008 47th IEEE conference on decision and control, pp 2050–2055. 10.1109/CDC.2008.4738654
- Pérez JF, Riaño G (2006) jPhase: an object-oriented tool for modeling phase-type distributions. In: Proceeding from the 2006 Workshop on tools for solving structured Markov chains, ACM, New York, NY, USA, SMCtools ’06. 10.1145/1190366.1190370
- Piotrowska M, Bodnar M. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model. Commun Nonlinear Sci Numer Simul. 2018;54:389–415. doi: 10.1016/j.cnsns.2017.06.003.
- Ponosov A, Shindiapin A, Miguel JJ. The W-transform links delay and ordinary differential equations. Funct Differ Equ. 2002;9(3–4):437–469.
- Rachev ST (1991) Probability metrics and the stability of stochastic models. Wiley Series in Probability and Mathematical Statistics, Wiley, New York
- Reinecke Philipp, Bodrog Levente, Danilkina Alexandra. Resilience Assessment and Evaluation of Computing Systems. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012. Phase-Type Distributions; pp. 85–113.
- Reinecke P, Krauß T, Wolter K. Cluster-based fitting of phase-type distributions to empirical data. Comput Math Appl. 2012;64(12):3840–3851. doi: 10.1016/j.camwa.2012.03.016.
- Robertson SL, Henson SM, Robertson T, Cushing JM. A matter of maturity: to delay or not to delay? Continuous-time compartmental models of structured populations in the literature 2000–2016. Nat Resour Model. 2018;31(1):e12160. doi: 10.1111/nrm.12160.
- Roussel MR. The use of delay differential equations in chemical kinetics. J Phys Chem. 1996;100(20):8323–8330. doi: 10.1021/jp9600672.
- Smith H. An introduction to delay differential equations with applications to the life sciences. Berlin: Springer; 2010.
- Smolen P, Baxter DA, Byrne JH. Modeling transcriptional control in gene networks—methods, recent results, and future directions. Bull Math Biol. 2000;62(2):247–292. doi: 10.1006/bulm.1999.0155.
- Strogatz SH (2014) Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering, 2nd edn. Studies in Nonlinearity, Westview Press
- Takashima Y, Ohtsuka T, González A, Miyachi H, Kageyama R. Intronic delay is essential for oscillatory expression in the segmentation clock. Proc Natl Acad Sci. 2011;108(8):3300–3305. doi: 10.1073/pnas.1014418108.
- Thummler A, Buchholz P, Telek M. A novel approach for phase-type fitting with the EM algorithm. IEEE Trans Dependable Secure Comput. 2006;3(3):245–258. doi: 10.1109/TDSC.2006.27.
- van den Driessche P, Watmough J. Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math Biosci. 2002;180(1–2):29–48. doi: 10.1016/S0025-5564(02)00108-6.
- Vogel T (1961) Systèmes déferlants, systèmes héréditaires, systèmes dynamiques. In: Proceedings of the international symposium nonlinear vibrations, IUTAM, Kiev, pp 123–130
- Vogel T (1965) Théorie des Systèmes Évolutifs. No. 22 in Traité de physique théorique et de physique mathématique, Gauthier-Villars, Paris
- Wang N, Han M. Slow-fast dynamics of Hopfield spruce-budworm model with memory effects. Adv Differ Equ. 2016;1:73. doi: 10.1186/s13662-016-0804-8.
- Wearing HJ, Rohani P, Keeling MJ. Appropriate models for the management of infectious diseases. PLOS Med. 2005. doi: 10.1371/journal.pmed.0020174.
- Wolkowicz G, Xia H, Ruan S. Competition in the chemostat: a distributed delay model and its global asymptotic behavior. SIAM J Appl Math. 1997;57(5):1281–1310. doi: 10.1137/S0036139995289842.
- Yates CA, Ford MJ, Mort RL. A multi-stage representation of cell proliferation as a Markov process. Bull Math Biol. 2017;79(12):2905–2928. doi: 10.1007/s11538-017-0356-4.