Infectious Disease Modelling
. 2018 Sep 25;3:192–248. doi: 10.1016/j.idm.2018.08.001

A primer on the use of probability generating functions in infectious disease modeling

Joel C Miller 1
PMCID: PMC6326237  PMID: 30839899

Abstract

We explore the application of probability generating functions (PGFs) to invasive processes, focusing on infectious disease introduced into large populations. Our goal is to acquaint the reader with applications of PGFs, more so than to derive new results. PGFs help predict a number of properties about early outbreak behavior while the population is still effectively infinite, including the probability of an epidemic, the size distribution after some number of generations, and the cumulative size distribution of non-epidemic outbreaks. We show how PGFs can be used in both discrete-time and continuous-time settings, and discuss how to use these results to infer disease parameters from observed outbreaks. In the large population limit, for susceptible-infected-recovered (SIR) epidemics, PGFs lead to survival-function-based models that are equivalent to the usual mass-action SIR models but with fewer ODEs. We use these to explore properties such as the final size of epidemics or even the dynamics once stochastic effects are negligible. We target this primer at biologists and public health researchers with mathematical modeling experience who want to learn how to apply PGFs to invasive diseases, but it could also be used in an applications-based mathematics course on PGFs. We include many exercises to help demonstrate concepts and to give practice applying the results. We summarize our main results in a few tables. Additionally we provide a small Python package which performs many of the relevant calculations.

1. Introduction

The spread of infectious diseases remains a public health challenge. Increased interaction between humans and wild animals leads to increased zoonotic introductions, and modern travel networks allow these diseases to spread quickly. Many mathematical approaches have been developed to give us insight into the early behavior of disease outbreaks. An important tool for understanding the stochastic behavior of an outbreak soon after introduction is the probability generating function (PGF) (Allen, 2010; Wilf, 2005; Yan, 2008).

Specifically, PGFs frequently give insight about the statistical behavior of outbreaks before they are large enough to be affected by the finite-size of the population. In these cases, both susceptible-infected-recovered (SIR) disease (for which nodes recover with immunity) and susceptible-infected-susceptible (SIS) disease (for which nodes recover and can be reinfected immediately) are equivalent. In the case of SIR disease PGFs can also be used to study the dynamics of disease once an epidemic is established in a large population.

We can investigate properties such as the early growth rate of the disease, the probability the disease becomes established, or the distribution of final sizes of outbreaks that fail to become established. Similar questions also arise in other settings where some introduced agent can reproduce or die, such as invasive species in ecological settings (Lewis, Petrovskii, & Potts, 2016), early within-host pathogen dynamics (Conway & Coombs, 2011), and the accumulation of mutations in precancerous and cancerous cells (Antal & Krapivsky, 2011; Durrett, 2015) or in pathogen evolution (Volz, Romero-Severson, & Leitner, 2017). These are all examples of branching processes, and PGFs are a central tool for the analysis of branching processes (Bartlett, 1949; Kendall, 1949; Kimmel & Axelrod, 2002). Except for Section 4 where we develop deterministic equations for later-time SIR epidemics, based on (Miller, 2011; Miller, Slim, & Volz, 2012; Volz, 2008), the approaches we describe here have direct application in these other branching processes as well.

Before proceeding, we define what a PGF is. Let r_i denote the probability of drawing the value i from a given distribution of non-negative integers. Then f(x) = ∑_i r_i x^i is the PGF of this distribution. We should address a potential confusion caused by the name. A “generating function” is a function which is defined from (or “generated by”) a sequence of numbers a_i and takes the form ∑_i a_i x^i. So a “probability generating function” is a generating function defined from a probability distribution on integers. It is not a function that generates probabilities when values are plugged in for x. There are other generating functions, including the “moment generating function”, defined to be ∑_m ⟨i^m⟩ x^m/m! where ⟨i^m⟩ = ∑_i r_i i^m (the moment and probability generating functions turn out to be closely related).

PGFs have a number of useful properties which we derive in Appendix A. We have structured this paper so that a reader can skip ahead now and read Appendix A in its entirety to get a self-contained introduction to PGFs, or wait until a particular property is referenced in the main text and then read that part of the appendix.

As we demonstrate in Table 1, for many important distributions the PGF takes a simple form. We derive this for the Poisson distribution.

Example 1.1

Consider the Poisson distribution with mean λ

r_i = e^{−λ} λ^i/i!.

For this we find

f(x) = ∑_i e^{−λ} (λ^i/i!) x^i = e^{−λ} ∑_i (λx)^i/i! = e^{−λ} e^{λx} = e^{λ(x−1)}.
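As a quick numerical check (not part of the original derivation), the truncated series ∑_i e^{−λ} (λ^i/i!) x^i can be compared against the closed form e^{λ(x−1)}. A minimal sketch in Python:

```python
import math

def poisson_pgf_series(x, lam, terms=150):
    """f(x) = sum_{i>=0} e^{-lam} lam^i / i! * x^i, summed term by term."""
    total = 0.0
    term = math.exp(-lam)          # i = 0 term
    for i in range(terms):
        total += term
        term *= lam * x / (i + 1)  # build e^{-lam} (lam x)^i / i! incrementally
    return total

def poisson_pgf_closed(x, lam):
    """Closed form e^{lam (x - 1)} from Example 1.1."""
    return math.exp(lam * (x - 1))

# The truncated series agrees with the closed form to machine precision.
for x in (0.0, 0.3, 0.7, 1.0):
    assert abs(poisson_pgf_series(x, 2.0) - poisson_pgf_closed(x, 2.0)) < 1e-12
```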

Table 1.

A few common probability distributions and their PGFs.

Distribution PGF f(x) = ∑_i r_i x^i
Poisson, mean λ: r_i = e^{−λ} λ^i/i! e^{λ(x−1)}

Uniform: r_λ = 1 x^λ

Binomial: n trials, with success probability p: r_i = C(n, i) p^i q^{n−i} for q = 1 − p [q + px]^n

Geometric^a: r_i = q^i p for q = 1 − p and i = 0, 1, … p/(1 − qx)

Negative binomial^b: r_i = C(i + r̂ − 1, i) q^r̂ p^i for q = 1 − p (q/(1 − px))^r̂
a

Another definition of the geometric distribution with different indexing, r_i = q^{i−1} p for i = 1, 2, …, gives a different PGF.

b

Typically the negative binomial is expressed in terms of a parameter r, the number of failures at which the experiment stops, assuming each trial has success probability p. For us the symbol r plays an important role elsewhere, so to help distinguish these, we use r̂ rather than r. Then r_i is the probability of i successes.
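The rows of Table 1 can be spot-checked by summing r_i x^i directly from each probability mass function and comparing with the closed-form PGF. A small sketch using the table's indexing conventions (illustrative code, not from the paper's package):

```python
import math

def pgf_from_pmf(pmf, x, imax):
    """Evaluate f(x) = sum_i r_i x^i directly from a pmf."""
    return sum(pmf(i) * x**i for i in range(imax + 1))

def binom_pmf(n, p):
    """Binomial row of Table 1: r_i = C(n, i) p^i q^(n-i)."""
    q = 1 - p
    return lambda i: math.comb(n, i) * p**i * q**(n - i)

def geom_pmf(p):
    """Geometric row of Table 1: r_i = q^i p for i = 0, 1, ..."""
    q = 1 - p
    return lambda i: q**i * p

n, p, x = 10, 0.3, 0.6
q = 1 - p
# Binomial PGF is exactly (q + p x)^n; geometric PGF is p / (1 - q x)
# (the geometric series is truncated, so agreement is to high precision).
assert abs(pgf_from_pmf(binom_pmf(n, p), x, n) - (q + p * x)**n) < 1e-12
assert abs(pgf_from_pmf(geom_pmf(p), x, 500) - p / (1 - q * x)) < 1e-12
```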

In this primer, we explore the application of PGFs to the study of disease spread. We will use PGFs to answer questions about the early-time behavior of an outbreak (neglecting depletion of susceptibles):

  • What is the probability an outbreak goes extinct within g generations (or by time t) in an arbitrarily large population?

  • What is the probability an index case causes an epidemic?

  • What is the final size distribution of small outbreaks?

  • What is the size distribution of outbreaks at generation g (or time t)?

  • How fast is the initial growth for those outbreaks that do not go extinct?

Although we present these early-time results in the context of SIR outbreaks they also apply to SIS outbreaks and many other invasive processes.

We can also use PGFs for some questions about the full behavior accounting for depletion of susceptibles. Specifically:

  • In a continuous-time Markovian SIR or SIS outbreak spreading in a finite population, what is the distribution of possible system states at time t?

  • In the large-population limit of an SIR epidemic, what fraction of the population is eventually infected?

  • In the large-population limit of an SIR epidemic, what fraction of the population is infected or recovered at time t?

We will consider both discrete-time and Markovian continuous-time models of disease. In the discrete-time case each infected individual transmits to some number of “offspring” before recovering. In the continuous-time case each infected individual transmits with a rate β and recovers with a rate γ.

In Section 2 we begin our study investigating properties of epidemic emergence in a discrete-time, generation-based framework, focusing on the probability of extinction and the sizes of outbreaks assuming that the disease is invading a sufficiently large population with enough mixing that we can treat the infections caused by any one infected individual as independent of the others. We also briefly discuss how we might use our observations to infer disease parameters from observed small outbreaks. In Section 3, we repeat this analysis for a continuous-time case treating transmission and recovery as Poisson processes, and then adapt the analysis to a population with finite size N. Next in Section 4 we use PGFs to derive simple models of the large-time dynamics of SIR disease spread, once the infection has reached enough individuals that we can treat the dynamics as deterministic. Finally, in Section 5 we explore multitype populations in which there are different types of infected individuals, which may produce different distributions of infections. We provide three appendices. In Appendix A, we derive the relevant properties of PGFs, in Appendix B we provide elementary (i.e., not requiring Calculus) derivations of two important theorems, and in Appendix C we provide details of a Python package Invasion_PGF available at https://github.com/joelmiller/Invasion_PGF that implements most of the results described in this primer. Python code that uses this package to implement the figures of Section 2 is provided in the supplement.

Our primary goal here is to provide modelers with a useful PGF-based toolkit, with derivations that focus on developing intuition and insight into the application rather than on providing fully rigorous proofs. Throughout, there are exercises designed to increase understanding and help prepare the reader for applications. This primer (and Appendix A in particular) could serve as a resource for a mathematics course on PGFs. For readers wanting to take a deep dive into the underlying theory, there are resources that provide a more technical look into PGFs in general (Wilf, 2005) or specifically using PGFs for infectious disease (Yan, 2008).

1.1. Summary

Before presenting the analysis, we provide a collection of tables that summarize our main results. Table 2 summarizes our notation. Tables 3 and 4 summarize our main results for the discrete-time and continuous-time models. Table 5 shows applications of PGFs to the continuous-time dynamics of SIR epidemics once the disease has infected a non-negligible proportion of a large population, effectively showing how PGFs can be used to replace the most common mass-action models. Finally, Table 6 provides the probability of each finite final outbreak size assuming a population sufficiently large that susceptible depletion never plays a role.

Table 2.

Common function and variable names. When we use a PGF for the number of susceptible individuals, active infections, and/or completed infections, x and s correspond to susceptible individuals, y and i to active infections, and z and r to completed infections.

Function/variable name Interpretation
f(x) = ∑_i p_i x^i, g(x) = ∑_i q_i x^i Arbitrary PGFs.

μ(y) = ∑_i p_i y^i; μ̂(y) = (βy² + γ)/(β + γ); μ̂(y,z) = (βy² + γz)/(β + γ) Without hats: the PGF for the offspring distribution in discrete time.
With hats: the PGF for the outcome of an unknown event in a continuous-time Markovian outbreak: y accounts for active infections and z accounts for completed infections.

α, α_g, α(t) Probability of either eventual extinction, extinction by generation g, or extinction by time t in an infinite population.

Φ_g(y) = ∑_i φ_i(g) y^i; Φ(y,t) = ∑_i φ_i(t) y^i PGF for the number of active infections in generation g or at time t in an infinite population.

Ω(z) = ∑_{r<∞} ω_r z^r + ω_∞ z^∞; Ω_g(z) = ∑_r ω_r(g) z^r; Ω(z,t) = ∑_r ω_r(t) z^r The PGF for the distribution of completed infections at the end of a small outbreak, in generation g, or at time t in an infinite population. If R₀ > 1, then one of the terms in the expansion of Ω(z) is ω_∞ z^∞, where ω_∞ is the probability of an epidemic.

Π_g(y,z) = ∑_{i,r} π_{i,r}(g) y^i z^r; Π(y,z,t) = ∑_{i,r} π_{i,r}(t) y^i z^r The PGF for the joint distribution of current infections and completed infections either at generation g or at time t in an infinite population.

Ξ(x,y,t) = ∑_{s,i} ξ_{s,i}(t) x^s y^i The PGF for the joint distribution of susceptibles and current infections at time t in a finite population of size N (used for continuous time only). In the SIR case we can infer the number recovered from this and the total population size.

χ(x) = ∑_i p_i x^i PGF for the ‘‘ancestor distribution’’, analogous to the offspring distribution.

ψ(x) = ∑_κ P(κ) x^κ PGF for the distribution of susceptibility for the continuous-time model where the rate of receiving transmission is proportional to κ.

β, γ The individual transmission and recovery rates for the Markovian continuous-time model.

Table 3.

A summary of our results for application of PGFs to discrete-time SIS and SIR disease processes in the infinite population limit. The function μ(x) is the PGF for the offspring distribution. The notation [g] in the exponent denotes function composition g times. For example, μ^[2](y) = μ(μ(y)).

Question Section Solution
Basic reproductive number R₀ [the average number of transmissions an infected individual causes early in an outbreak]. Intro to 2 R₀ = μ′(1).

Probability of extinction, α, given a single introduced infection. 2.1 α = lim_{g→∞} μ^[g](0) or, equivalently, the smallest x in [0,1] for which x = μ(x).

Probability of extinction within g generations. 2.1.2 α_g = μ^[g](0).

PGF of the distribution of the number of infected individuals in the g-th generation. 2.2 Φ_g(y) where Φ_g solves Φ_g(y) = μ^[g](y).

Average number of active infections in generation g, and average number if the outbreak has not yet gone extinct. 2.2 R₀^g, and R₀^g/(1 − α_g).

PGF of the number of completed cases at generation g in an infinite population. 2.3.1 Ω_g(z) where Ω_g solves Ω_g(z) = z μ(Ω_{g−1}(z)) with Ω_0(z) = 1.

PGF of the joint distribution of the number of current and completed cases at generation g in an infinite population. 2.3.2 Π_g(y,z) where Π_g solves Π_g(y,z) = z μ(Π_{g−1}(y,z)) with Π_0(y,z) = y.

PGF of the final size distribution. 2.4 Ω(z) where Ω solves Ω(z) = lim_{g→∞} Ω_g(z). It also solves Ω(z) = z μ(Ω(z)). This has a discontinuity at |z| = 1 if epidemics are possible.

Probability an outbreak infects exactly j individuals. 2.4 p_{j−1}^(j)/j where p_i^(j) is the coefficient of y^i in the expansion of [μ(y)]^j.

Probability a disease has a particular set of parameters Θ given a set of observed independent outbreak sizes X = (j_1, …, j_ℓ) and a prior belief P(Θ). 2.4.1 P(Θ|X) = P(j_1|Θ)⋯P(j_ℓ|Θ)P(Θ) / ∑_{Θ′} P(j_1|Θ′)⋯P(j_ℓ|Θ′)P(Θ′), which can be solved numerically using our prior knowledge P(Θ) and our knowledge of the probability of each j_i given Θ.

Table 4.

A summary of our results for application of PGFs to the continuous-time disease process. We assume individuals transmit with rate β and recover with rate γ. The functions μ̂(y) = (βy² + γ)/(β + γ) and μ̂(y,z) = (βy² + γz)/(β + γ) are given in System (14).

Question Section Solution
Probability of eventual extinction α given a single introduced infection. 3.1 α = min(1, γ/β).

Probability of extinction by time t, α(t). 3.1.1 α(t) where α̇ = (β + γ)[μ̂(α) − α] and α(0) = 0.

PGF of the distribution of the number of infected individuals at time t (assuming one infection at time 0). 3.2 Φ(y,t) where Φ(y,0) = y and Φ solves either ∂Φ/∂t = (β + γ)[μ̂(y) − y] ∂Φ/∂y or ∂Φ/∂t = (β + γ)[μ̂(Φ) − Φ].

PGF of the number of completed cases at time t. 3.4 Ω(z,t) where Ω(z,0) = 1 and Ω solves ∂Ω/∂t = (β + γ)[μ̂(Ω,z) − Ω].

PGF of the joint distribution of the number of current and completed cases at time t (assuming one infection at time 0). 3.3 Π(y,z,t) where Π(y,z,0) = y and Π solves either ∂Π/∂t = (β + γ)[μ̂(y,z) − y] ∂Π/∂y or ∂Π/∂t = (β + γ)[μ̂(Π,z) − Π].

PGF of the final size distribution. 3.4 Ω(z) = lim_{t→∞} Ω(z,t). This also solves Ω(z) = μ̂(Ω(z), z). If epidemics are possible this has a discontinuity at |z| = 1.

Probability an outbreak infects exactly j individuals. 3.4 (1/j) [β^{j−1} γ^j/(β + γ)^{2j−1}] C(2j−2, j−1).

PGF for the joint distribution of the number susceptible and infected at time t for SIS dynamics in a population of size N. 3.5.1 Ξ(x,y,t) where Ξ solves ∂Ξ/∂t = (β/N)(y² − xy) ∂²Ξ/∂x∂y + γ(x − y) ∂Ξ/∂y.

PGF for the joint distribution of the number susceptible and infected at time t for SIR dynamics in a population of size N. 3.5.2 Ξ(x,y,t) where Ξ solves ∂Ξ/∂t = (β/N)(y² − xy) ∂²Ξ/∂x∂y + γ(1 − y) ∂Ξ/∂y.

Table 5.

A summary of our results for application of PGFs to the final size and large-time dynamics of SIR disease. The PGFs χ and ψ encode the heterogeneity in susceptibility. The PGF χ is the PGF of the ancestor distribution (an ancestor of u is any individual who, if infected, would infect u). The PGF ψ(x) = ∑_κ P(κ) x^κ encodes the distribution of the contact rates.

Question Section Solution
Final size relation for an SIR epidemic assuming a vanishingly small fraction ρ randomly infected initially with ρN ≫ 1. 4.2 r(∞) = 1 − χ(1 − r(∞)). [For standard assumptions, including the usual continuous-time assumptions, χ(x) = e^{−R₀(1−x)}.]

Discrete-time number susceptible, infected, or recovered in a population with homogeneous susceptibility and given R₀, assuming an initial fraction ρ is randomly infected with ρN ≫ 1. 4.3 For g > 0: S_g = N(1 − ρ) e^{−R₀(1 − S_{g−1}/N)}; I_g = N − S_g − R_g; R_g = R_{g−1} + I_{g−1}, with the initial condition S(0) = (1 − ρ)N, I(0) = ρN, and R(0) = 0.

Discrete-time number susceptible, infected, or recovered in a population with heterogeneous susceptibility for SIR disease after g generations with an initial fraction ρ randomly infected where ρN ≫ 1. 4.3 For g > 0: S_g = N(1 − ρ) χ(S_{g−1}/N); I_g = N − S_g − R_g; R_g = R_{g−1} + I_{g−1}, with the initial condition S(0) = (1 − ρ)N, I(0) = ρN, and R(0) = 0.

Continuous-time number susceptible, infected, or recovered for SIR disease as a function of time with an initial fraction ρ randomly infected where ρN ≫ 1. Assumes u receives infection at rate βIκ_u/(N⟨K⟩). 4.4 For t > 0: S(t) = (1 − ρ)N ψ(θ(t)); I(t) = N − S(t) − R(t); R(t) = −(γN⟨K⟩/β) ln θ(t); θ̇(t) = −(β/(N⟨K⟩)) I θ(t), with the initial condition θ(0) = 1.

Table 6.

The probability of j total infections in an infinite population for different offspring distributions, derived using Theorem 2.7 and the corresponding log-likelihoods. For any one of these, if we sum the probability of j over (finite) j, we get the probability that the outbreak remains finite in an infinite population. This is particularly useful when inferring disease parameters from observed outbreak sizes (Section 2.4.1). The parameters' interpretations are given in Table 1.

Distribution PGF Probability of j infections Log-Likelihood of Parameters given j
Poisson e^{λ(y−1)} (jλ)^{j−1} e^{−jλ}/j! −jλ + (j − 1) log(jλ) − log(j!)

Uniform y^λ 1 if j = 1 and λ = 0; 0 otherwise 0 if j = 1 and λ = 0; −∞ otherwise

Binomial (q + py)^n (1/j) C(nj, j−1) p^{j−1} q^{nj−j+1} log((nj)!) − log((nj−j+1)!) − log(j!) + (j − 1) log p + (nj − j + 1) log q

Geometric p/(1 − qy) (1/j) C(2j−2, j−1) p^j q^{j−1} log((2j−2)!) − log((j−1)!) − log(j!) + j log p + (j − 1) log q

Negative binomial (q/(1 − py))^r̂ (1/j) C(r̂j + j − 2, j−1) q^{r̂j} p^{j−1} log((r̂j+j−2)!) − log((r̂j−1)!) − log(j!) + r̂j log q + (j − 1) log p
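The entries of Table 6 can be checked numerically. For Poisson offspring, for instance, the probabilities (jλ)^{j−1} e^{−jλ}/j! summed over finite j should equal the extinction probability α = lim_{g→∞} μ^[g](0). A sketch (working in log space to avoid overflow; this code is illustrative and not part of Invasion_PGF):

```python
import math

def poisson_final_size_prob(j, lam):
    """P(outbreak size = j) for Poisson(lam) offspring (Table 6, first row),
    computed in log space: (j lam)^(j-1) e^(-j lam) / j!."""
    log_p = (j - 1) * math.log(j * lam) - j * lam - math.lgamma(j + 1)
    return math.exp(log_p)

def extinction_prob(lam, iters=1000):
    """alpha = lim mu^[g](0) for mu(x) = exp(lam (x - 1))."""
    a = 0.0
    for _ in range(iters):
        a = math.exp(lam * (a - 1))
    return a

lam = 2.0
# Summing final-size probabilities over finite j recovers the probability
# that the outbreak stays finite, i.e. the extinction probability alpha.
total = sum(poisson_final_size_prob(j, lam) for j in range(1, 200))
assert abs(total - extinction_prob(lam)) < 1e-8
```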

1.2. Exercises

We end each section with a collection of exercises. We have designed these exercises to give the reader more experience applying PGFs and to help clarify some of the more subtle points.

Exercise 1.1

Except for the Poisson distribution handled in Example 1.1, derive the PGFs shown in Table 1 directly from the definition f(x)=irixi.

For the negative binomial, it may be useful to use the binomial series:

(1 + δ)^η = 1 + ηδ + (η(η − 1)/2!) δ² + ⋯ + (η(η − 1)⋯(η − i + 1)/i!) δ^i + ⋯

using η = −r̂ and δ = −px.

Exercise 1.2

Consider the binomial distribution with n trials, each having success probability p = λ/n. Using Table 1, show that the PGF for the binomial distribution converges to the PGF for the Poisson distribution in the limit n → ∞ with λ fixed.
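Exercise 1.2 can also be explored numerically: with p = λ/n, the binomial PGF (q + px)^n approaches e^{λ(x−1)} as n grows. A brief check (illustrative only):

```python
import math

def binom_pgf(x, n, p):
    """Binomial PGF (q + p x)^n from Table 1."""
    return (1 - p + p * x)**n

lam, x = 1.5, 0.4
poisson = math.exp(lam * (x - 1))
# As n grows with p = lam/n fixed, the error shrinks roughly like 1/n.
errors = [abs(binom_pgf(x, n, lam / n) - poisson) for n in (10, 100, 1000, 10000)]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
assert errors[-1] < 1e-4
```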

2. Discrete-time spread of a simple disease: early time

We begin with a simple model of disease transmission using a discrete-time setting. In the time step after becoming infected, an infected individual causes some number of additional cases and then recovers. We let pi denote the probability of causing exactly i infections (referred to as “offspring”) before recovering. It will be useful to define the PGF for the offspring distribution

μ(y) = ∑_{i=0}^∞ p_i y^i. (1)

For results related to early extinction or early-time dynamics, we will assume that the population is large enough and sufficiently well-mixed that the transmissions in successive generations are all independent events and unaffected by depletion of susceptible individuals. Before deriving our results for the early-time behavior of our discrete-time model, we offer a summary in Table 3.

Often in disease spread we are interested in the expected number of infections caused by an infected individual early in an outbreak, which we define to be R₀:

R₀ = ∑_i i p_i = μ′(1) (2)

where μ′(x) = (d/dx) μ(x). The value of R₀ is related to disease dynamics, but it is not the only important property of μ.
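For a finite offspring distribution stored as a list of probabilities, R₀ = μ′(1) can be computed either directly from the definition or by numerically differentiating the PGF. A small sketch (the bimodal distribution with p_0 = 0.7 and p_3 = 0.3 anticipates Example 2.1; the code itself is not from the paper):

```python
def offspring_mean(p):
    """R0 = sum_i i p_i = mu'(1) for an offspring distribution p[i]."""
    return sum(i * pi for i, pi in enumerate(p))

def mu_prime_at_1(p, h=1e-6):
    """Numerical (backward-difference) derivative of mu(y) = sum_i p_i y^i at y = 1."""
    mu = lambda y: sum(pi * y**i for i, pi in enumerate(p))
    return (mu(1) - mu(1 - h)) / h

bimodal = [0.7, 0.0, 0.0, 0.3]   # p0 = 0.7, p3 = 0.3, so R0 = 0.9
assert abs(offspring_mean(bimodal) - 0.9) < 1e-12
assert abs(mu_prime_at_1(bimodal) - 0.9) < 1e-4
```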

Example 2.1

We demonstrate a few sample outbreaks in Fig. 1. Here we take a bimodal case with R₀ = 0.9 such that a proportion 0.3 of the population causes 3 infections and the remaining 0.7 cause none. Most of the outbreaks die out immediately, but some persist, surviving multiple generations before extinction.

Example 2.2

Throughout Section 2 we compare simulated SIR outbreaks with the theoretical predictions which we calculate using the Python package Invasion_PGF described in Appendix C. We assume that all individuals are equally likely to be infected by any transmission, and we focus on R₀ = 0.75 and R₀ = 2. For each R₀, we consider two distributions for the number of new infections an infected individual causes:

  • a Poisson-distributed number of infections with mean R₀, or

  • a bimodal distribution with either 0 or 3 infections, with the proportions chosen to give a mean of R₀. The probabilities are p_0 = 1 − R₀/3 and p_3 = R₀/3 (so R₀ > 3 is impossible).

The bimodal distribution is similar to that of Fig. 1, but with different probabilities of 0 or 3. After an individual chooses the number of infections to cause, the recipients are selected uniformly at random (with replacement) from the population. If a recipient is susceptible, an infection occurs at the next time step; otherwise nothing happens. We use 5 × 10^5 simulations for each of N = 100 and N = 1000.
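The simulation scheme of Example 2.2 can be sketched in a few lines. The following is a simplified generation-based implementation for the bimodal case (function and variable names are our own; this is not the code used for the paper's figures): each case infects 3 others with probability R₀/3, recipients are drawn uniformly with replacement, and only susceptible recipients become new cases.

```python
import random

def sir_outbreak(N, p3):
    """One generation-based SIR outbreak in a population of size N.
    Each case causes 3 infections with probability p3 (else 0)."""
    susceptible = set(range(1, N))   # node 0 is the index case
    active = [0]
    total = 1
    while active:
        new_cases = []
        for _ in active:
            if random.random() < p3:
                for _ in range(3):
                    target = random.randrange(N)
                    if target in susceptible:   # already-infected targets are ignored
                        susceptible.remove(target)
                        new_cases.append(target)
        total += len(new_cases)
        active = new_cases
    return total

random.seed(1)
R0 = 0.75
sizes = [sir_outbreak(1000, R0 / 3) for _ in range(500)]
assert all(1 <= s <= 1000 for s in sizes)
assert sum(sizes) / len(sizes) < 50   # subcritical: mean final size stays small
```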

Fig. 2 looks at the final size distribution. The distribution of the number infected in small outbreaks (insets) is not significantly affected by the total population size. This is because they do not grow large enough to “see” the system size. They would die out even in an infinite population. Large outbreaks, or epidemics, on the other hand would grow without bound in an infinite population, and their growth is limited by the finiteness of the population. We will see that (assuming homogeneous susceptibility and the large population limit), the proportion infected in an SIR epidemic depends only on R₀.

Fig. 1.

Fig. 1

A sample of 10 outbreaks starting with a bimodal distribution having R₀ = 0.9 in which 3/10 of the population causes 3 infections and the rest cause none. The top row denotes the initial states, showing each of the 10 initial infections. An edge from one row to the next denotes an infection from the higher node to the lower node. Most outbreaks die out immediately.

Fig. 2.

Fig. 2

Simulated outcomes of SIR outbreaks in populations as described in Example 2.2. Outbreaks tend to be either small or large. The typical number infected in small outbreaks (insets) is affected by the details of the offspring distribution, but not the population size. The typical proportion infected in large outbreaks (epidemics) appears to depend on the average number of transmissions an individual causes, but not the population size or the offspring distribution. These observations will be explained later. These simulations are reused throughout this section to show how PGFs capture different properties of the distributions.

2.1. Early extinction probability

A common misconception is that if R₀ > 1 an epidemic is inevitable. In fact, if we are lucky, an outbreak can die out stochastically before the number infected is large. Conversely, if we are unlucky, it may initially grow faster than our deterministic models predict.

In any finite population a disease will eventually go extinct because the disease interferes with its own spread. Our observations show that the typical final outcomes of an outbreak are either an “epidemic” which grows until the number infected is limited by the finiteness of the population or a small outbreak which dies out before it can see the system size. One of our first questions about a possible disease emergence is “what is the probability that an outbreak will grow into an epidemic?” We focus on the equivalent question, “what is the probability the outbreak goes extinct before causing an epidemic?”. We aim to calculate the probability that the disease would go extinct if it never interferes with its own spread, or in other words, if it were spreading through an unlimited population. Throughout we assume that disease is introduced with a single randomly chosen index case.

The theory for the extinction probability in an unbounded population has been developed extensively in the context of Galton–Watson processes (Watson & Galton, 1875). It has been applied to infectious disease many times, e.g. (Easley & Kleinberg, 2010, section 21.8), and (Getz & Lloyd-Smith, 2006; Lloyd-Smith, Schreiber, Kopp, & Getz, 2005).

2.1.1. Derivation as a fixed point equation

We present two derivations of the extinction probability. Our first is quicker, but gives less insight. We start with the a priori observation that the extinction probability takes some value between 0 and 1 inclusive. Our goal is to filter out the vast majority of these options by finding a property of the extinction probability that most values between 0 and 1 do not have.

Let α be the probability of extinction if the spread starts from a single infected individual. Then from Property A.1 of Appendix A we have α = ∑_i p_i α̂^i = μ(α̂), where α̂ is the probability that, in isolation, an offspring of the initial infected individual would not cause an epidemic. Because we assume that the offspring distribution of later cases is the same as for the index case, we must have α̂ = α, and so the extinction probability solves α = μ(α).

We have established:

Theorem 2.1

Assuming that each infected individual produces an independent number of offspring i chosen from a distribution having PGF μ(y), then α, the probability an outbreak starting from a single infected individual goes extinct, satisfies

α=μ(α). (3)

Not all solutions to x=μ(x) must give the extinction probability.

There can be more than one x solving x = μ(x). In fact 1 = μ(1) is always a solution, and from Property A.9 it follows that there is another solution if and only if R₀ = μ′(1) > 1. In this case, our derivation of Theorem 2.1 does not tell us which of the solutions is correct. However, Section 2.1.2 shows that the correct solution is the smaller solution when it exists. More specifically, the extinction probability is α = lim_{g→∞} α_g where α_g = μ(α_{g−1}) starting with α_0 = 0. This gives a condition for a nonzero epidemic probability: namely, R₀ = μ′(1) = ∑_i i p_i > 1.
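The fixed-point iteration α_g = μ(α_{g−1}) starting from α_0 = 0 is easy to implement. A sketch (illustrative, not the Invasion_PGF implementation), applied to the two R₀ = 2 offspring distributions of Example 2.2; for the bimodal case the smallest root of x = 1/3 + (2/3)x³ is (√3 − 1)/2:

```python
import math

def extinction_probability(mu, tol=1e-12, max_iter=10**5):
    """Iterate alpha_g = mu(alpha_{g-1}) from alpha_0 = 0 until convergence."""
    a = 0.0
    for _ in range(max_iter):
        a_next = mu(a)
        if abs(a_next - a) < tol:
            return a_next
        a = a_next
    return a

# Poisson offspring, R0 = 2: mu(x) = exp(2 (x - 1)).
alpha_pois = extinction_probability(lambda x: math.exp(2 * (x - 1)))
# Bimodal offspring, R0 = 2: p0 = 1/3, p3 = 2/3.
alpha_bi = extinction_probability(lambda x: 1 / 3 + (2 / 3) * x**3)

assert abs(alpha_pois - math.exp(2 * (alpha_pois - 1))) < 1e-10  # solves x = mu(x)
assert abs(alpha_bi - (math.sqrt(3) - 1) / 2) < 1e-8             # smallest root
```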

Example 2.3

We now consider the Poisson and bimodal offspring distributions described in Example 2.2. We saw that typically an outbreak either affects a small proportion of the population (a vanishing fraction in the infinite population limit) or a large number (a nonzero fraction in the infinite population limit).

By plotting the cumulative density function (cdf) of the proportion infected in Fig. 3, we extend our earlier observations. The cdf is steep near zero (becoming vertical in the infinite population limit). Then it is effectively flat for a while. Finally, if R₀ > 1 it again grows steeply at some proportion infected well above 0 (the size of epidemic outbreaks).

The plateau’s height is the probability that an outbreak dies out while small. Fig. 3 shows that this is well-predicted by choosing the smaller of the solutions to x=μ(x).

For a fixed R₀ > 1, the plateau's height (i.e., the early extinction probability) depends on the details of the offspring distribution and not simply R₀. However, the critical value at which the cdf increases for the second time depends only on R₀. This suggests that even though the probability of an epidemic depends on the details of the offspring distribution, the proportion infected in an SIR epidemic depends only on R₀, the reproductive number. We explore this in more detail in Section 4.2.

Fig. 3.

Fig. 3

Illustration of Theorem 2.1. The cumulative density function (cdf) for the total proportion ever infected (effectively the integral of Fig. 2). For small R₀, all outbreaks die out without affecting a sizable portion of the population. For larger R₀, there are many small outbreaks and many large outbreaks, but very few outbreaks in between, so the cdf is flat in this range. The height of this plateau is the probability the outbreak dies out while small. This is approximately the predicted extinction probability for an infinite population (dashed). The probability of a small outbreak is different for the different offspring distributions, but the proportion infected corresponding to epidemics is the same (for given R₀).

2.1.2. Derivation from an iterative process

In our second derivation, we calculate the probability that the outbreak dies out within g “generations”. Then the probability the outbreak would die out after a finite number of steps in an infinite population is simply the limit of this as g. In our counting of “generations”, we consider the index case to be generation 0. An individual's generation is equal to the number of transmissions occurring in the chain from the index case to that individual.

We define α_g to be the probability that the longest chain an index case will initiate has fewer than g transmissions. Because there are always at least 0 transmissions, α_0 = 0. The probability that there is no transmission is by definition α_1. Recalling that the probability the index case causes zero infections is p_0, we have

α_1 = p_0 = μ(0) = μ(α_0)

is the probability that the index case does not cause a chain of 1 or more transmissions. The probability that all chains die out after at most 1 transmission (that is, there are no second-generation cases) is the probability that the index case causes i infections, p_i, times the probability that none of those i individuals causes further infections, α_1^i, summed over all i. We introduce the notation μ^[g](x) for the result of iteratively applying μ to x a total of g times, so μ^[1](x) = μ(x) and, for g > 1, μ^[g](x) = μ(μ^[g−1](x)). Then following Property A.1 we have

α_2 = p_0 + p_1 α_1 + p_2 α_1² + ⋯ = μ(α_1) = μ^[2](0).

We generalize this by stating that the probability an initial infection fails to initiate any length g chains is equal to the probability that all of its i offspring fail to initiate a chain of length g1.

α_g = ∑_i p_i α_{g−1}^i = μ(α_{g−1}) = μ^[g](0).

So the probability of not starting a chain of length at least g is found by iteratively applying the function μ to x = 0 a total of g times. Taking g → ∞ gives the extinction probability (Getz & Lloyd-Smith, 2006):

α = lim_{g→∞} μ^[g](0). (4)

The fact that there is a biological interpretation of αg starting with α0=0 is important. It effectively guarantees that the iterative process converges and that the speed of convergence reflects the typical speed of extinction. Iteration appears to be an efficient way to solve x=μ(x) numerically and because of the biological interpretation, we can avoid questions that might arise about whether there are multiple solutions of x=μ(x) and, if so, which of them corresponds to the biological problem. Instead we simply iterate starting from 0 and the result must converge to the probability that in an infinite population the outbreak would go extinct in finite time, regardless of what other solutions x=μ(x) might have.
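The iteration just described can be sketched directly; the sequence α_g is nondecreasing and, for the subcritical bimodal distribution of Example 2.1 (R₀ = 0.9), converges to 1 (illustrative code, not from the paper's package):

```python
def alpha_by_generation(mu, gmax):
    """Return [alpha_0, alpha_1, ..., alpha_gmax] where alpha_g = mu^[g](0)."""
    alphas = [0.0]
    for _ in range(gmax):
        alphas.append(mu(alphas[-1]))
    return alphas

mu = lambda x: 0.7 + 0.3 * x**3      # bimodal offspring, R0 = 0.9 (subcritical)
alphas = alpha_by_generation(mu, 100)

assert alphas[1] == 0.7                                 # alpha_1 = mu(0) = p_0
assert all(b >= a for a, b in zip(alphas, alphas[1:]))  # nondecreasing sequence
assert abs(alphas[-1] - 1.0) < 1e-3                     # extinction is certain
```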

Exercise 2.1 shows that if μ(0) ≠ 0 then the limit of the sequence α_g is 1 if R₀ ≤ 1, and some α < 1 satisfying α = μ(α) if R₀ > 1. This proves:

Theorem 2.2

Assume that each infected individual produces an independent number of offspring i chosen from a distribution having PGF μ(y). Then

  • The probability an outbreak goes extinct within g generations is

α_g = μ^[g](0). (5)
  • The probability of extinction in an infinite population is

α = lim_{g→∞} α_g.
  • If R₀ = μ′(1) ≤ 1 and μ(0) ≠ 0 then α = 1. If R₀ > 1, extinction occurs with probability α < 1.

Example 2.4

We now consider the Poisson and bimodal offspring distributions described in Example 2.2

Fig. 4 shows that starting with α_0 = 0 and defining α_g = μ(α_{g−1}), the values of α_g emerging from the iterative process correspond to the observed probability that outbreaks have gone extinct by generation g, for early values of g.

In the infinite population limit, this provides a match for all g. So this gives the probability the outbreak goes extinct by generation g assuming it has not grown large enough to see the finite size of the population (i.e., assuming it has not become an epidemic). For SIR epidemics in the finite populations we use for simulations, the plateaus eventually give way to extinction because eventually there are not enough remaining susceptibles.

Fig. 4.

Fig. 4

Illustration of Theorem 2.2. Left: Cobweb diagrams showing convergence of iterations to the predicted outbreak extinction probability (see Fig. A.10). Right: Observed probabilities of no infections remaining after each generation for simulations of Fig. 2 showing the probability of extinction by generation g. Thin lines show the relation between the cobweb diagram and the extinction probabilities. The simulated probability initially rises quickly, representing outbreaks that die out early on, then it remains steady at a level representing the probability of outbreaks dying out while small. For ℛ0 > 1 it increases again because the epidemics burn through the finite population (and so the infinite population theory breaks down). The values match the corresponding iteration of the cobweb diagrams.

2.2. Early-time outbreak dynamics

We now explore the number of active infections present in generation g. We will need to use i = √−1 in this subsection, so to avoid confusion we use ℓ as our indexing variable rather than i. Setting φ_ℓ(g) to be the probability that ℓ active infections exist at generation g, we define the PGF Φ_g(y) = ∑_ℓ φ_ℓ(g) y^ℓ. Assuming at generation 0 there is a single infection (φ_1(0) = 1), the initial condition is Φ_0(y) = y. From inductive application of Property A.8 for composition of PGFs (Exercise 2.7) it is straightforward to conclude that for g > 0, Φ_g(y) = μ^{[g]}(y), where μ(y) is the PGF for the offspring distribution.

Theorem 2.3

Assuming that each infected individual produces an independent number of offspring chosen from a distribution with PGF μ(y), the number infected in the g-th generation has PGF

Φ_g(y) = ∑_ℓ φ_ℓ(g) y^ℓ = μ^{[g]}(y) (6)

where φ_ℓ(g) is the probability there are ℓ active infections in generation g. This does not provide information about the cumulative number infected.

It is worth highlighting that for general distributions, calculating the coefficients of Φ_g(y) may seem quite challenging. Luckily, it is not so difficult. Property A.3 states (taking i = √−1)

φ_ℓ(g) ≈ (1/M) ∑_{m=1}^{M} Φ_g(R e^{2πim/M}) (R e^{2πim/M})^{−ℓ}

for large M and any R ≤ 1. For each y_m = R e^{2πim/M} we can calculate Φ_g(y_m) = μ^{[g]}(y_m) by numerically iterating μ g times. Then for large enough M, this gives a remarkably accurate and efficient approximation to the individual coefficients.
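A sketch of this coefficient extraction in Python (our own illustration; the Poisson offspring PGF with ℛ0 = 0.75 is an assumed example, and the function names are ours):

```python
import cmath

R0 = 0.75  # assumed Poisson offspring distribution, mu(y) = exp(R0*(y - 1))

def Phi_g(y, g):
    """Phi_g(y) = mu^{[g]}(y): iterate the offspring PGF g times."""
    for _ in range(g):
        y = cmath.exp(R0 * (y - 1))
    return y

def active_infection_dist(g, num_coeffs, M=512, R=1.0):
    """Approximate the coefficients phi_l(g) of Phi_g(y) by averaging Phi_g
    over M points on a circle of radius R (the summation in Property A.3)."""
    points = [R * cmath.exp(2j * cmath.pi * m / M) for m in range(M)]
    values = [Phi_g(y, g) for y in points]
    return [sum(v * y ** (-l) for v, y in zip(values, points)).real / M
            for l in range(num_coeffs)]

phi = active_infection_dist(g=3, num_coeffs=20)
# phi[l] approximates the probability of l active infections in generation 3;
# phi[0] matches the extinction probability alpha_3 = mu(mu(mu(0)))
```

The same loop structure, with a different function in place of `Phi_g`, recovers coefficients of any PGF that can be evaluated numerically.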

Example 2.5

We demonstrate Theorem 2.3 in Fig. 5, using the simulations from Example 2.2. Simulations and predictions are in excellent agreement.

A mismatch is noticeable for the bimodal distribution with ℛ0 = 2, particularly with N = 100; this is a consequence of the fact that the population is finite. In stochastic simulations, occasionally an individual receives multiple transmissions even early in the outbreak, but in the PGF theory this does not happen.

We are often interested in the expected number of active infections in generation g, ∑_ℓ ℓ φ_ℓ(g) (however, as seen below this is not the most relevant measure to use if ℛ0 > 1). Property A.5 shows that this is given by (d/dy) Φ_g(y)|_{y=1}. To calculate this we use Φ_g(1) = 1 for all g (Property A.4) and μ'(1) = ℛ0. Then through induction and the chain rule we show that (d/dy) Φ_g(y)|_{y=1} = ℛ0^g:

(d/dy) Φ_g(y)|_{y=1} = (d/dy) μ(Φ_{g−1}(y))|_{y=1} = (μ'(Φ_{g−1}(y)) × (d/dy) Φ_{g−1}(y))|_{y=1} = μ'(1) × ℛ0^{g−1} = ℛ0^g.

We initialized the induction with the case g = 1, which is the definition of ℛ0. If ℛ0 < 1, this shows that we expect decay.

Fig. 5.

Fig. 5

Illustration of Theorem 2.3. Comparison of predictions and the simulations from Fig. 2 for the number of active infections in the third generation. The bimodal case with N = 100 shows a clear impact of population size, as a sizable number of transmissions fail because the population is finite. The predictions were made numerically using the summation in Property A.3.

If ℛ0 > 1, there is a more relevant measure. On average we see growth, but a sizable fraction of outbreaks may go extinct, and these zeros are included in the average, which alters our prediction. This is closely related to the “push of the past” effect observed in phylodynamics (Nee, Holmes, May, & Harvey, 1994). For policy purposes, we are more interested in the expected size if the outbreak is not yet extinct, because a response that is scaled to deal with the average size (where the average includes those that are extinct) is either too big (if the disease has gone extinct) or too small (if the disease has become established) (Miller, Davoudi, Meza, Slim, & Pourbohloul, 2010). It is very unlikely to be just right. The expected number infected in generation g conditional on the outbreak not dying out by generation g is ℛ0^g/(1 − α_g). This has an important consequence. We can have different extinction probabilities for different offspring distributions with the same ℛ0. The disease with a higher extinction probability tends to have considerably more infections in those outbreaks that do not go extinct.

We have

Corollary 2.1

In the infinite population limit, the expected number infected in generation g starting from a single infection is

E[I_g] = ℛ0^g (7)

and the expected number starting from a single infection conditional on the disease persisting to generation g is

E[I_g | I_g > 0] = ℛ0^g / (1 − α_g) (8)

We can explore higher moments of the distribution of the number infected by taking more derivatives of Φg(y) and evaluating at y=1.
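The gap between the two expectations in Corollary 2.1 can be sketched numerically (our own illustration; the Poisson offspring PGF with ℛ0 = 2 is an assumed example):

```python
from math import exp

R0 = 2.0
mu = lambda y: exp(R0 * (y - 1))  # Poisson offspring PGF (assumed example)

alpha = 0.0
expected, expected_given_alive = [], []
for g in range(1, 11):
    alpha = mu(alpha)                               # alpha_g: P(extinct by generation g)
    expected.append(R0 ** g)                        # E[I_g], Eq. (7)
    expected_given_alive.append(R0 ** g / (1 - alpha))  # E[I_g | alive], Eq. (8)
# the conditional expectation always exceeds the raw one, because the raw
# average includes the zeros contributed by outbreaks that have gone extinct
```

This is the numerical content of the "push of the past": surviving outbreaks are systematically larger than the unconditional average suggests.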

2.3. Cumulative size distribution

We now look at the total number infected while the outbreak is small. There are multiple ways to calculate how the cumulative size of small outbreaks is distributed. We look at two of these. The first focuses just on the number of completed infections by generation g. The second calculates the joint distribution of the number of completed infections and the number of active infections at generation g. Later we address the distribution of final sizes of small outbreaks.

2.3.1. Focused approach to find the cumulative size distribution

We begin by calculating just the number of completed infections at generation g. We define ω_j(g) to be the probability that there are j completed infections at generation g (by “completed” we only include individuals who are no longer infectious in generation g). We will use PGFs of the variable z when focusing on completed infections.

We define

Ω_g(z) = ∑_j ω_j(g) z^j

to be the PGF for the number of completed infections j at generation g. Although we use j to represent recoveries, this model is still appropriate for SIS disease because we are interested in small outbreak sizes in a well-mixed infinite population for which we can assume no previously infected individuals have been reexposed. If the outbreak begins with a single infection, then

Ω_0(z) = 1  and  Ω_1(z) = z

showing that the first individual (infectious during generation 0) completes his infection at the start of generation 1. For generation 2 we have the initial individual and his direct offspring, so Ω_2(z) = z μ(z).

More generally, to calculate for g>1, the completed infections consist of

  • the initial infection

  • the active infections in generation 1.

  • any descendants of those active infections in generation 1 that will have recovered by generation g.

The distribution of the number of descendants of a generation 1 individual (including that individual) who have recovered by generation g is given by Ω_{g−1}(z). That is, each generation 1 individual and its descendants over the following g−1 generations have the same distribution as an initial infection and its descendants after g−1 generations.

From Property A.8 the number of descendants (not counting the initial infection) that have recovered by generation g is distributed according to μ(Ω_{g−1}(z)). Accounting for the initial individual requires that we increment the count by 1, which requires increasing the exponent of z by 1. So we multiply by z. This yields

Ω_g(z) = z μ(Ω_{g−1}(z))

To sustain an outbreak up to generation g there must be at least one infection in each generation from 0 to g−1. So any outbreak with fewer than g completed infections at generation g must be extinct, and the coefficient of z^j does not change once g > j. Thus we have shown

Theorem 2.4

Assuming a single initial infection in an infinite population, the PGF Ω_g(z) = ∑_j ω_j(g) z^j for the distribution of the number of completed infections at generation g > 1 is given by

Ω_g(z) = z μ(Ω_{g−1}(z)) (9)

with Ω_1(z) = z. Once g > j, the coefficient ω_j(g) is constant.
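The recursion in Theorem 2.4 can be iterated numerically with truncated coefficient lists (a sketch under assumed parameters; the three-point offspring distribution and helper names are ours):

```python
def poly_mul(a, b, nmax):
    """Multiply two polynomials given as coefficient lists, truncating at degree nmax."""
    out = [0.0] * (nmax + 1)
    for i, ai in enumerate(a):
        if ai == 0.0:
            continue
        for j, bj in enumerate(b):
            if i + j > nmax:
                break
            out[i + j] += ai * bj
    return out

def mu_of_poly(p, inner, nmax):
    """Coefficients of mu(inner(z)) where mu(y) = sum_i p[i] * y^i."""
    result = [0.0] * (nmax + 1)
    power = [0.0] * (nmax + 1)
    power[0] = 1.0  # inner^0
    for prob in p:
        result = [r + prob * c for r, c in zip(result, power)]
        power = poly_mul(power, inner, nmax)
    return result

def omega(g, p, nmax=30):
    """Coefficients of Omega_g(z), built from Omega_g = z * mu(Omega_{g-1})."""
    om = [0.0, 1.0] + [0.0] * (nmax - 1)  # Omega_1(z) = z
    for _ in range(g - 1):
        om = [0.0] + mu_of_poly(p, om, nmax)[:nmax]  # the leading 0 multiplies by z
    return om

p = [0.4, 0.3, 0.3]  # assumed offspring distribution: 0, 1 or 2 offspring (R0 = 0.9)
om3, om4 = omega(3, p), omega(4, p)
# by the theorem, the coefficients of z^j with j < 3 are identical in om3 and om4
```

The stabilization of low-order coefficients as g grows is exactly the property used in Section 2.4 to obtain the final size distribution.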

Example 2.6

We test Theorem 2.4 in Fig. 6, using the simulations from Example 2.2. Simulations and predictions are in excellent agreement.

Example 2.7

Expected cumulative size. It is instructive to calculate the expected number of completed infections at generation g. Note that Ω_g(1) = 1, μ(1) = 1, and μ'(1) = ℛ0. We use induction to show that for g ≥ 1 the expected number of completed infections is ∑_{j=0}^{g−1} ℛ0^j:

(d/dz) Ω_g(z)|_{z=1} = (d/dz) [z μ(Ω_{g−1}(z))]|_{z=1} = [μ(Ω_{g−1}(z)) + z μ'(Ω_{g−1}(z)) (d/dz) Ω_{g−1}(z)]|_{z=1} = μ(1) + μ'(1) ∑_{j=0}^{g−2} ℛ0^j = 1 + ℛ0 ∑_{j=0}^{g−2} ℛ0^j = ∑_{j=0}^{g−1} ℛ0^j

This is in agreement with our earlier result that the expected number infected in generation j is ℛ0^j.

Summing the geometric series, this is

(d/dz) Ω_g(z)|_{z=1} = (1 − ℛ0^g)/(1 − ℛ0) if ℛ0 ≠ 1, and g if ℛ0 = 1.

As with our previous results, the sum shows a threshold behavior at ℛ0 = 1. If ℛ0 < 1, then in the limit g → ∞, the expected cumulative outbreak size converges to the finite value 1/(1 − ℛ0). If ℛ0 ≥ 1, it diverges.

Fig. 6.

Fig. 6

Illustration of Theorem 2.4. Comparison of predictions with the simulations from Fig. 2 for the number of completed infections at the start of the third generation. The predictions were calculated using Property A.3.

This example shows

Corollary 2.2

In the infinite population limit the expected number of completed infections at the start of generation g assuming a single randomly chosen initial infection is

(d/dz) Ω_g(z)|_{z=1} = (1 − ℛ0^g)/(1 − ℛ0) if ℛ0 ≠ 1, and g if ℛ0 = 1. (10a)

For ℛ0 ≥ 1 this diverges as g → ∞. Otherwise it converges to 1/(1 − ℛ0).

2.3.2. Broader approach

An alternate approach calculates both the current and cumulative size at generation g. We let π_{i,r}(g) be the probability that there are i actively infected individuals and r completed infections in generation g. We define Π_g(y,z) = ∑_{i,r} π_{i,r}(g) y^i z^r, so y represents the active infections and z the completed infections.

Assume we know the values i_{g−1} and r_{g−1} for generation g−1. Then r_g is simply i_{g−1} + r_{g−1}, and i_g is distributed according to the PGF [μ(y)]^{i_{g−1}}. So given those known i_{g−1} and r_{g−1}, the PGF for the next generation would be [z μ(y)]^{i_{g−1}} z^{r_{g−1}}. Summing over all possible i_{g−1} and r_{g−1} yields

Π_g(y,z) = ∑_{i,r} π_{i,r}(g−1) [z μ(y)]^i z^r = Π_{g−1}(z μ(y), z)

with the initial condition

Π0(y,z)=y

The first few iterations are

Π_1(y,z) = z μ(y),   Π_2(y,z) = z μ(z μ(y))

and we can use induction on this to show that in general

Π_g(y,z) = z μ(Π_{g−1}(y,z))
Theorem 2.5

Given a single initial infection in an infinite population, the PGF Π_g(y,z) = ∑_{i,r} π_{i,r}(g) y^i z^r for the joint distribution of the number of active infections i and completed infections r in generation g is given by

Π_g(y,z) = z μ(Π_{g−1}(y,z)) (11)

with Π0(y,z)=y.
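The two-variable recursion can be iterated with the same polynomial machinery, storing coefficients in a dictionary keyed by (active, completed) counts (a sketch under an assumed offspring distribution; helper names are ours):

```python
def mul2(a, b, nmax):
    """Multiply two 2-variable polynomials stored as {(i, r): coeff} dicts,
    discarding terms whose total degree exceeds nmax."""
    out = {}
    for (i1, r1), c1 in a.items():
        for (i2, r2), c2 in b.items():
            if i1 + i2 + r1 + r2 <= nmax:
                key = (i1 + i2, r1 + r2)
                out[key] = out.get(key, 0.0) + c1 * c2
    return out

def pi_g(g, p, nmax=20):
    """Coefficients of Pi_g(y, z) from Pi_g = z * mu(Pi_{g-1}), Pi_0(y, z) = y."""
    pi = {(1, 0): 1.0}  # Pi_0(y, z) = y
    for _ in range(g):
        power = {(0, 0): 1.0}  # Pi_{g-1}^0
        result = {}
        for prob in p:
            for key, c in power.items():
                result[key] = result.get(key, 0.0) + prob * c
            power = mul2(power, pi, nmax)
        # multiplying by z shifts the completed-infection index r up by one
        pi = {(i, r + 1): c for (i, r), c in result.items() if i + r + 1 <= nmax}
    return pi

p = [0.4, 0.3, 0.3]  # assumed offspring distribution: 0, 1 or 2 offspring
joint = pi_g(3, p)
marginal_completed = {}
for (i, r), c in joint.items():
    marginal_completed[r] = marginal_completed.get(r, 0.0) + c
# setting y = 1 (summing over i) recovers the completed-infection marginal
```

Summing over either index recovers the marginal distributions of Theorems 2.3 and 2.4, which is how Exercise 2.11 proves Theorem 2.4 from Theorem 2.5.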

Example 2.8

We demonstrate Theorem 2.5 in Fig. 7, using the same simulations as in Example 2.2. Simulations and predictions are in excellent agreement.

Fig. 7.

Fig. 7

Illustration of Theorem 2.5. Comparison of predictions and simulations for the joint distribution of the number of current and completed infections at generation g = 3. The predictions were calculated using Property A.3. Left: simulations from Fig. 2 for N = 1000 and Right: predictions (note vertical scales on left and right are the same). Top to Bottom: Poisson ℛ0 = 0.75, Bimodal ℛ0 = 0.75, Poisson ℛ0 = 2, and Bimodal ℛ0 = 2. The predictions match our observations, with some difference for two reasons: 1) because 5×10^5 simulations cannot resolve events with probabilities as small as 10^{−12}, but the PGF approach can, and 2) due to finite-size effects as occasionally an individual receives multiple transmissions even early on. The plots also show the marginal distributions, matching Fig. 5, Fig. 6.

2.4. Small outbreak final size distribution

There are many diseases for which there have been multiple small outbreaks in recent years but no large-scale epidemics (such as Nipah, H5N1 avian influenza, pneumonic plague, monkeypox, and — prior to 2013 — Ebola). A natural question emerges: what can we infer about the epidemic potential of these diseases? The size distribution may help us to infer properties of the disease and in particular to estimate the probability that ℛ0 > 1 (Blumberg & Lloyd-Smith, 2013; Kucharski & Edmunds, 2015; Nishiura, Yan, Sleeman, & Mode, 2012).

We have found that Ω_g(z) gives the PGF for the number of completed infections by generation g. We noted earlier that for a given r, once g > r, the coefficient of z^r in Ω_g(z) is fixed and equal to the probability that the outbreak goes extinct after exactly r infections. Motivated by this, we look for the limit as g → ∞. We define

Ω_∞(z) = lim_{g→∞} Ω_g(z)

We expect this to be the PGF for the final size of the outbreaks.

We can express the pointwise limit1 as

Ω_∞(z) = ∑_{r<∞} ω_r z^r + ω_∞ z^∞

where for r < ∞ the coefficient ω_r is the probability an outbreak causes exactly r infections in an infinite population. We use ω_∞ to denote the probability that the outbreak is infinite in an infinite population (i.e., that it is an epidemic), and we interpret z^∞ as 1 when z = 1 and 0 for 0 ≤ z < 1. So if epidemics are possible, Ω_∞(z) has a discontinuity at z = 1, and the limit as z → 1 from below gives ∑_{r<∞} ω_r = 1 − ω_∞, which is the extinction probability α_∞.

We now look for a recurrence relation for Ω_∞(z) in the infinite population limit. Each offspring of the initial infection independently causes a set of infections. The distribution of the number of these new infections (including the original offspring) also has PGF Ω_∞(z). So the distribution of the number of descendants of the initial infection (but not including the initial infection) has PGF μ(Ω_∞(z)). To include the initial infection, we must increase the exponent of z by one, which we do by multiplying by z. We conclude that Ω_∞(z) = z μ(Ω_∞(z)). Although we have shown that Ω_∞(z) solves f(z) = z μ(f(z)), we have not shown that there is only one function that solves this.

We may be interested in the outbreak size distribution conditional on the outbreak going extinct. For this we are looking at Ω_∞(z)/α_∞ for any z < 1, and at z = 1 this is simply 1. Note that if ℛ0 < 1 then α_∞ = 1.

Summarizing this we have

Theorem 2.6

Given a single initial infection in an infinite population, consider Ω_∞(z), the PGF for the final size distribution: Ω_∞(z) = (∑_{r<∞} ω_r z^r) + ω_∞ z^∞, where z^∞ = 0 if |z| < 1 and 1 if |z| = 1.

  • Then

Ω_∞(z) = z μ(Ω_∞(z)) for 0 ≤ z < 1, and Ω_∞(1) = 1. (12)
  • We have lim_{z→1⁻} Ω_∞(z) = α_∞ = 1 − ω_∞. If ℛ0 > 1 then Ω_∞(z) is discontinuous at z = 1, with a jump discontinuity of ω_∞, the probability of an epidemic.

  • The PGF for the outbreak size distribution conditional on the outbreak being finite is

Ω_∞(z)/α_∞ for 0 ≤ z < 1, and 1 at z = 1.

Perhaps surprisingly, we can often find the coefficients of Ω_∞(z) analytically if μ(y) is known. We use a remarkable result showing that the probability of infecting exactly n individuals is equal to 1/n times the coefficient of z^{n−1} in [μ(z)]^n (Blumberg & Lloyd-Smith, 2013; Dwass, 1969; van der Hofstad & Keane, 2008; Wendel, 1975). The theorem is

Theorem 2.7

Given an offspring distribution with PGF μ(y), for j < ∞ the coefficient of z^j in Ω_∞(z) is (1/j) p^{(j)}_{j−1}, where p^{(j)}_i is defined by [μ(y)]^j = ∑_i p^{(j)}_i y^i.

That is, for j < ∞ the probability of having exactly j infections in an outbreak starting from a single infection is 1/j times the coefficient of y^{j−1} in [μ(y)]^j.

We prove this theorem in Appendix B. The proof is based on observing that if we draw a sequence of j numbers from the offspring distribution, the probability they sum to j−1 (corresponding to j−1 transmissions and hence j infected individuals including the index case) is the coefficient of z^{j−1} in [μ(z)]^j. A fraction 1/j of these satisfy the additional constraints needed to correspond to a valid transmission tree2 and thus the probability of a valid transmission tree with exactly j−1 transmissions is 1/j times p^{(j)}_{j−1}.

Because the coefficient of y^{j−1} in [μ(y)]^j is (1/(j−1)!) (d/dy)^{j−1} [μ(y)]^j |_{y=0} (by Property A.2), we have that the probability of an outbreak of size j is

(1/j!) (d/dy)^{j−1} [μ(y)]^j |_{y=0}
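The coefficient formula is easy to evaluate numerically by building [μ(y)]^j through repeated polynomial multiplication. A sketch (our own illustration; the Poisson offspring distribution with ℛ0 = 0.75 is an assumed example — for Poisson offspring the coefficient of y^{j−1} in e^{jℛ0(y−1)} is e^{−jℛ0}(jℛ0)^{j−1}/(j−1)!, so Theorem 2.7 yields the known closed form (jℛ0)^{j−1} e^{−jℛ0}/j!, the Borel distribution):

```python
from math import exp, factorial

def final_size_probs(p, jmax):
    """P(size = j) = (1/j) * [coefficient of y^(j-1) in mu(y)^j]  (Theorem 2.7),
    where p is a (truncated) list of offspring probabilities."""
    probs = []
    power = [1.0]  # mu(y)^0
    for j in range(1, jmax + 1):
        new_power = [0.0] * jmax  # degrees >= jmax are never needed
        for a, ca in enumerate(power):
            for b, cb in enumerate(p):
                if a + b < jmax:
                    new_power[a + b] += ca * cb
        power = new_power  # now mu(y)^j
        probs.append(power[j - 1] / j)
    return probs

R0 = 0.75
p = [exp(-R0) * R0 ** i / factorial(i) for i in range(40)]  # Poisson offspring pmf
probs = final_size_probs(p, jmax=30)

# closed form obtained from the same theorem for Poisson offspring
closed_form = [(j * R0) ** (j - 1) * exp(-j * R0) / factorial(j)
               for j in range(1, 31)]
```

Since ℛ0 < 1 here, extinction is certain and the probabilities sum to 1 as jmax grows.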

It is enticing to think there may be a similar theorem for coefficients of Π(y,z), but we are not aware of one. The theorem has been generalized to models having multiple types of individuals (Kucharski & Edmunds, 2015).

Example 2.9

We demonstrate Theorems 2.6 and 2.7 in Fig. 8, using the same simulations as in Example 2.2.

Example 2.10

The PGF for the negative binomial distribution with parameters p and rˆ (with q=1p) is

μ(y) = (q/(1 − p y))^{r̂}

We can rewrite this as

μ(y) = q^{r̂} (1 − p y)^{−r̂}

We will use this to find the final size distribution. We expand [μ(y)]^j = q^{r̂ j} (1 − p y)^{−r̂ j} using the binomial series

(1 + δ)^η = 1 + η δ + (η(η−1)/2!) δ^2 + ⋯ + (η(η−1)⋯(η−i+1)/i!) δ^i + ⋯

which holds for integer or non-integer η. Then with −p y, −r̂ j, and j−1 playing the roles of δ, η, and i:

[μ(y)]^j = q^{r̂ j} (1 − p y)^{−r̂ j} = q^{r̂ j} (1 + r̂ j p y + (r̂ j (r̂ j + 1)/2!) p^2 y^2 + ⋯ + (r̂ j (r̂ j + 1)⋯(r̂ j + j − 2)/(j−1)!) p^{j−1} y^{j−1} + ⋯)

[the negatives all cancel]. So the coefficient of y^{j−1} is q^{r̂ j} p^{j−1} (r̂ j + j − 2)!/((r̂ j − 1)!(j − 1)!) = (r̂ j + j − 2 choose j − 1) q^{r̂ j} p^{j−1} (assuming r̂ is an integer). Taking 1/j times this, we conclude that the probability an outbreak infects exactly j individuals is

(1/j) (r̂ j + j − 2 choose j − 1) q^{r̂ j} p^{j−1}

A variation of this result for non-integer r̂ is commonly used in work estimating disease parameters (Blumberg & Lloyd-Smith, 2013; Nishiura et al., 2012). Exercise 2.12 generalizes the formula to that case.
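This formula is easy to evaluate in log-space with the log-gamma function, which also handles non-integer r̂ (a sketch with assumed parameters; `neg_binom_final_size` is our own helper, not from the paper's package):

```python
from math import lgamma, log, exp

def neg_binom_final_size(j, p, r_hat):
    """P(outbreak size = j) = (1/j) * C(r_hat*j + j - 2, j - 1) * q^(r_hat*j) * p^(j-1),
    computed via lgamma to avoid overflow; the 1/j is folded into lgamma(j + 1)."""
    q = 1.0 - p
    log_prob = (lgamma(r_hat * j + j - 1) - lgamma(j + 1) - lgamma(r_hat * j)
                + r_hat * j * log(q) + (j - 1) * log(p))
    return exp(log_prob)

# assumed parameters: p = 0.02, r_hat = 40, giving R0 = r_hat*p/q < 1,
# so the final size probabilities should sum to 1 (extinction is certain)
total = sum(neg_binom_final_size(j, 0.02, 40) for j in range(1, 2001))
```

Working with logarithms here matters: the factorials overflow floating point long before j reaches interesting sizes.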

Fig. 8.

Fig. 8

Illustration of Theorems 2.6 and 2.7. The final size of small outbreaks predicted by Theorem 2.6 and by Theorem 2.7 as calculated using Property A.3 matches observations from the simulations in Fig. 2 (see also insets of Fig. 2).

Applying Theorem 2.7 to several different families of distributions yields Table 6 for the probability of a final size j.

2.4.1. Inference based on outbreak sizes

A major challenge in infectious disease modeling is inferring the parameters of an infectious disease. In Section 2.4 we alluded to the use of PGFs to infer disease properties from observations of the size distribution of small outbreaks. In this section we describe how to do this using a Bayesian approach, using the probabilities given in Table 6. A number of researchers have used this approach to estimate disease parameters (Blumberg & Lloyd-Smith, 2013; Kucharski & Edmunds, 2015; Nishiura et al., 2012).

We assume that we know what type of distribution the offspring distribution is, but that there are some unknown parameters (often it is assumed to be a negative binomial distribution). We also assume that we have some prior belief about the probability of various parameters. For practical purposes, we will assume that we have some finite number of possible parameter values, each with a probability.

We use Bayes' Theorem (Hoff, 2009):

P(Θ|X) = P(Θ,X)/P(X) = P(X|Θ) P(Θ)/P(X) (13)

Here we think of Θ as the specific parameter values and X as the observed data (typically the observed size of an outbreak or sizes of multiple independent outbreaks, in which case P(X|Θ) comes from Theorem 2.7 or Table 6). In our calculations we can simply use the fact that P(Θ|X) ∝ P(X|Θ) P(Θ), with a normalization constant which can be dealt with at the end.

The prior for Θ is the probability distribution we assume for the parameter values before observing the data, given by P(Θ). We often simply assume that all parameter values are equally probable initially.

The likelihood of the parameters Θ is defined to be P(X|Θ), the probability that we would observe X for the given parameter values. If we are choosing between two sets of parameter values Θ1 and Θ2 and the observations have consistently higher likelihood for Θ2, then we intuitively expect that Θ2 is the more probable parameter value.

In practice the likelihood may be very small which can lead to numerical error. It is often useful to instead look at log-likelihood,3 logP(X|Θ). For example, if we have many observed outbreak sizes, the likelihood P(X|Θ) under independence is the product of the probabilities of each individual outbreak size. The likelihood is thus quite small (perhaps less than machine precision), while the log-likelihood is simply the sum of the log-likelihoods of each individual observation.

We know that

log P(Θ|X) − C = log P(X|Θ) + log P(Θ)

where C is the logarithm of the proportionality constant 1/P(X) in Equation (13). If we have a prior and the likelihood, the right hand side can be calculated. It is often possible (and advisable) to calculate the log likelihood logP(X|Θ) directly rather than calculating P(X|Θ) and then taking the logarithm.

Exponentiating the right hand side and then finding the appropriate normalization constant will yield P(Θ|X). Numerically the numbers may be very small when we exponentiate, so prior to exponentiating it is advisable to add a constant value to all of the expressions. This constant is corrected for in the final normalization step.

We now provide the steps for a numerical calculation of P(Θ|X) given the prior P(Θ), the observations X, and the log likelihood logP(X|Θ).

  • 1.

    For each Θ, calculate f(Θ)=logP(X|Θ)+logP(Θ).

  • 2.

    Find the maximum X_max of f(Θ) over all Θ and subtract it to yield f̂(Θ) = log P(X|Θ) + log P(Θ) − X_max. Note that f̂(Θ) ≤ 0, and this brings all of our numbers closer to zero.

  • 3.

    Calculate g(Θ) = e^{f̂(Θ)}. This will be proportional to P(Θ|X). Note that by using e^{f̂(Θ)} rather than e^{f(Θ)} we have reduced the impact of roundoff error.

  • 4.

    Find the normalization constant ∑_Θ g(Θ). Then

P(Θ|X) = g(Θ) / ∑_{Θ′} g(Θ′)

Note that if Θ comes from a continuous distribution rather than a discrete distribution, then the same approach works, except that P is a probability density and the summation in the final step becomes an integral.
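The four steps above can be sketched as a small helper (our own illustration, not from the paper's package; the log-likelihoods in the toy example are invented):

```python
from math import exp, log

def posterior(log_likelihood, prior):
    """Steps 1-4: combine log-likelihoods with a prior, stabilizing the
    exponentiation by subtracting the maximum before normalizing."""
    f = {th: ll + log(prior[th]) for th, ll in log_likelihood.items()}  # step 1
    f_max = max(f.values())                                             # step 2
    g = {th: exp(v - f_max) for th, v in f.items()}                     # step 3
    norm = sum(g.values())                                              # step 4
    return {th: v / norm for th, v in g.items()}

# toy example: two parameter sets, equal priors, likelihoods differing by a factor of 2
post = posterior({"A": -10.0, "B": -10.0 + log(2)}, {"A": 0.5, "B": 0.5})
# post["B"] comes out twice post["A"], and the posterior sums to 1
```

Subtracting the maximum before exponentiating is the standard log-sum-exp trick; without it, log-likelihoods far below machine precision would all exponentiate to zero.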

Example 2.11

A frequent assumption is that the offspring distribution is negative binomial. Let us make this assumption with unknown p and rˆ.

To artificially simplify the problem, we assume that we know that there are only two possible pairs of Θ=(p,rˆ), namely Θ1=(p1,rˆ1)=(0.02,40) or Θ2=(p2,rˆ2)=(0.03,20), and that our a priori belief is that they are equally probable.

After observing 2 independent outbreaks, with total sizes j1=8 and j2=7, we want to use our observations to update P(Θ).

From Table 6, the quantity f(Θ) (log-likelihood plus log-prior) for the two independent observations is

f(Θ) = (log ∏_{j=7,8} (1/j) (r̂ j + j − 2 choose j − 1) q^{r̂ j} p^{j−1}) + log 0.5
= (∑_{j=7,8} log[(1/j) (r̂ j + j − 2 choose j − 1) q^{r̂ j} p^{j−1}]) + log 0.5
= (∑_{j=7,8} [log((r̂ j + j − 2)!) − log(j!) − log((r̂ j − 1)!) + r̂ j log q + (j − 1) log p]) + log 0.5

In problems like this, we will often encounter logarithms of factorials. Many programming languages provide this, typically using Stirling's approximation. For example, Python, R, and C++ all have a special function lgamma which calculates the natural log of the absolute value of the gamma function.4 We find

f(Θ_1) ≈ −8.495,   f(Θ_2) ≈ −9.135

So f̂(Θ_1) = 0 and f̂(Θ_2) ≈ −0.640. Exponentiating, we have

g(Θ_1) = 1,   g(Θ_2) ≈ 0.5277

So now

P(Θ_1|X) ≈ 1/1.5277 ≈ 0.6546,   P(Θ_2|X) ≈ 0.5277/1.5277 ≈ 0.3454

So rather than the two parameter sets being equally probable, Θ2 is now about half as likely as Θ1 given the observed data.
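The numbers in this example can be reproduced directly with lgamma (a sketch; the variable names are ours, the parameter values are those of the example):

```python
from math import lgamma, log, exp

def log_outbreak_prob(j, p, r_hat):
    """log of (1/j) * C(r_hat*j + j - 2, j - 1) * q^(r_hat*j) * p^(j-1)."""
    q = 1.0 - p
    return (lgamma(r_hat * j + j - 1) - lgamma(j + 1) - lgamma(r_hat * j)
            + r_hat * j * log(q) + (j - 1) * log(p))

sizes = [8, 7]
thetas = {"Theta1": (0.02, 40), "Theta2": (0.03, 20)}
f = {name: sum(log_outbreak_prob(j, p, r) for j in sizes) + log(0.5)
     for name, (p, r) in thetas.items()}
f_max = max(f.values())
g = {name: exp(v - f_max) for name, v in f.items()}
norm = sum(g.values())
post = {name: v / norm for name, v in g.items()}
# f["Theta1"] is about -8.495 and f["Theta2"] about -9.135,
# giving posterior probabilities of roughly 0.655 and 0.345
```

Exercise 2.15 can be worked the same way by extending `sizes` or replacing the uniform prior.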

2.5. Generality of discrete-time results

Thus far we have measured time in generations. However, many models measure time differently and different generations may overlap. For both SIS and SIR disease, our results above about final size distribution or extinction probability still apply. To see this, we note first that our results have been derived assuming that the population is infinite and well-mixed so no individuals receive multiple transmissions. Regardless of the clock time associated with transmission and recovery, there is still a clear definition of the length of the transmission chain to an infected individual. Once we group individuals by length of the transmission chain, we get the generation-based model used above. This equivalence is studied more in (Ludwig, 1975; Yan, 2008).

2.6. Exercises

Exercise 2.1

Monotonicity of αg

  • a.

    By considering the biological interpretation of α_g, explain why the sequence of inequalities 0 = α_0 ≤ α_1 ≤ ⋯ ≤ 1 should hold. That is, explain why α_0 = 0, why the α_g form a monotonically increasing sequence, and why all of them are at most 1.

  • b.

    Show that α_g therefore converges to some non-negative limit α_∞ that is at most 1 and that α_∞ = μ(α_∞).

  • c.

    Use Property A.9 to show that if μ(0) ≠ 0 there exists a unique α < 1 solving α = μ(α) if and only if ℛ0 = μ'(1) > 1.

  • d.

    Assuming μ(0) ≠ 0, use Property A.9 to show that if ℛ0 > 1 then α_g converges to the unique α < 1 solving α = μ(α), and otherwise α_g converges to 1.

Exercise 2.2

Use Theorem 2.2 to prove Theorem 2.1.

Exercise 2.3

Show that if μ(0) = 0, then lim_{g→∞} α_g = 0. By referring to the biological interpretation of μ(0) = 0, explain this result.

Exercise 2.4

Find all PGFs μ(y) with ℛ0 ≤ 1 and μ(0) = 0. Why were these excluded from Theorem 2.2?

Exercise 2.5

Larger initial conditions

Assume that disease is introduced with m infections rather than just 1, or that it is not observed by surveillance until m infections are present. Assume that the offspring distribution PGF is μ(y).

  • a.

    If m is known, find the extinction probability.

  • b.

    If m is unknown but its distribution has PGF h(y), find the extinction probability.

Exercise 2.6

Extinction probability

Consider a disease in which p0=0.1, p1=0.2, p2=0.65, and p3=0.05 with a single introduced infection.

  • a.

    Numerically approximate the probability of extinction within 0, 1, 2, 3, 4, or 5 generations up to five significant digits (assuming an infinite population).

  • b.

    Numerically approximate the probability of eventual extinction up to five significant digits (assuming an infinite population).

  • c.

    A surveillance program is being introduced, and detection will lead to a response. But it will not be soon enough to affect the transmissions from generations 0 and 1. From then on p0=0.3, p1=0.4, p2=0.3, and p3=0. Numerically approximate the new probability of eventual extinction after an introduction in an unbounded population [be careful that you do the function composition in the right order – review Properties A.1 and A.8].

Exercise 2.7

We look at two inductive derivations of Φ_g(y) = μ^{[g]}(y). They are similar, but when adapted to the continuous-time dynamics we study later, they lead to two different models. We take as given that Φ_{g−1}(y) gives the distribution of the number of infections caused after g−1 generations starting from a single case. One argument is based on the outcomes attributable to the infectious individuals of generation g−1 in the next generation. The other is based on the outcomes indirectly attributable to the infectious individuals of generation 1 through their descendants after another g−1 generations.

  • a.

    Explain why Property A.8 shows that Φg(y)=Φg1(μ(y)).

  • b.

    (without reference to a) Explain why Property A.8 shows that Φg(y)=μ(Φg1(y)).

Exercise 2.8

Use Theorem 2.3 to prove the first part of Theorem 2.2.

Exercise 2.9

How does Corollary 2.1 change if we start with k infections?

Exercise 2.10

Assume the PGF of the offspring size distribution is μ(y)=(1+y+y2)/3.

  • a.

    What offspring size distribution yields this PGF?

  • b.

    Find the PGF Ωg(z) for the number of completed infections at 0, 1, 2, 3, and 4 generations [it may be helpful to use a symbolic math program once g>2.].

  • c.

    Check that for these cases, once g>r, the coefficient of zr does not change.

Exercise 2.11

By setting y=1, use Theorem 2.5 to prove Theorem 2.4.

Exercise 2.12

Redo Example 2.10 if rˆ is a real number, rather than an integer. It may be useful to use the Γ–function, which satisfies Γ(x+1)=xΓ(x) for any x and Γ(n+1)=n! for integer n.

Exercise 2.13

Except for the negative binomial case done in Example 2.10, derive the probabilities in Table 6.

  • a.

    For the Poisson distribution, use Property A.2.

  • b.

    For the Uniform distribution, use Property A.2.

  • c.

    For the Binomial distribution, use the binomial theorem: (a+b)^c = ∑_{i=0}^{c} (c choose i) a^i b^{c−i}.

  • d.

    For the Geometric distribution, follow Example 2.10 (noting that p and q interchange roles).

Exercise 2.14

To help model continuous-time epidemics, Section 3 will use a modified version of μ, which in some contexts will be written as μ̂(y,z). To help motivate the use of two variables, we reconsider the discrete case. We think of a recovery as an infected individual disappearing and giving birth to a recovered individual and a collection of infected individuals. Look back at the discrete-time calculation of Ω_g and Π_g. Define a two-variable version of μ as μ(y,z) = z ∑_i p_i y^i = z μ(y).

  • a.

    What is the biological interpretation of μ(y,z)=zμ(y)?

  • b.

    Rewrite the recursive relations for Ωg using μ(y,z) rather than μ(y).

  • c.

    Rewrite the recursive relations for Πg using μ(y,z) rather than μ(y).

The choice to use μ(y,z) versus μ(y) is purely a matter of convenience.

Exercise 2.15

Consider Example 2.11. Assume that a third outbreak is observed with 4 infections. Calculate the probability of Θ1 and Θ2 given the data starting

  • a.

    with the assumption that P(Θ1)=P(Θ2)=0.5 and X consists of the three observations j=7, j=8, and j=4.

  • b.

    with the assumption that P(Θ1)=0.6546 and P(Θ2)=0.3454 and X consists only of the single observation j=4.

  • c.

    Compare the results and explain why they should have the relation they do.

Exercise 2.16

Assume that we know a priori that the offspring distribution for a disease has a negative binomial distribution with p=0.02. Assume that our a priori knowledge of rˆ is that it is an integer uniformly distributed between 1 and 80 inclusive. Given observed outbreaks of sizes 1, 4, 5, 6, and 10:

  • a.

    For each rˆ, calculate P(rˆ|X) where X is the observed outbreak sizes. Plot the result.

  • b.

    Find the probability that ℛ0 = μ'(1) is greater than 1.

3. Continuous-time spread of a simple disease

We now develop PGF-based approaches adapting the results above to continuous-time processes. In the continuous-time framework, generations will overlap, so we need a new approach if we want to answer questions about the probability of being in a particular state at time t rather than at generation g. Questions about the final state of the population can be answered using the same techniques as for the discrete case, but the techniques introduced here also apply and yield the same predictions. Unlike Section 2, we do not do a detailed comparison with simulation.

In the continuous-time model, infected individuals have a constant rate of recovery γ and a constant rate of transmission β. Then γ/(β+γ) is the probability that the first event is a recovery, while β/(β+γ) is the probability it is a transmission. If the event is a recovery, then the individual is removed from the infectious population. If the event is a transmission, then the individual is still available to transmit again, with the same rate. If the recipient of a transmission is susceptible, it becomes infectious.

Unlike the discrete-time case, we do not focus on the offspring distribution. Rather, we focus on the resulting number of infected individuals after an event. Early on we treat the process as if each infected individual were removed and replaced by either 2 or 0 new infections. Although this is not the true process (she either recovers or she creates one additional infection and remains present), it is equivalent as far as the number of infections at any early time is concerned. We focus on a PGF for the outcome of the next event.

We define μ̂(y) = ∑_i p̂_i y^i and so

μ̂(y) = (β/(β+γ)) y^2 + γ/(β+γ) (14a)

When we are calculating the number of completed cases, it will be useful to have a two-variable version of μˆ:

μ̂(y,z) = (β/(β+γ)) y^2 + (γ/(β+γ)) z. (14b)

Most of the results in this section are the continuous-time analog of the discrete-time results above for the infinite population limit. In the discrete-time approach we did not attempt to address outbreaks in finite populations. However, we end this section by deriving the equations for Ξ(x,y,t), the PGF for the joint distribution of the number of susceptibles and active infections in a population of finite size N.

3.1. Extinction probability

For the extinction probability, we can apply the same methods derived in the discrete case to μˆ(y). Thus we can find the extinction probability iteratively starting from the initial guess α_0 = 0 and setting α_g = μˆ(α_{g−1}).
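A minimal sketch of this fixed-point iteration (the function name and stopping rule are our own choices):

```python
def extinction_probability(beta, gamma, tol=1e-12, max_iter=100_000):
    """Iterate alpha_g = mu_hat(alpha_{g-1}) starting from alpha_0 = 0,
    stopping once successive updates fall below `tol`."""
    mu_hat = lambda y: (beta * y**2 + gamma) / (beta + gamma)
    alpha = 0.0
    for _ in range(max_iter):
        new = mu_hat(alpha)
        if abs(new - alpha) < tol:
            return new
        alpha = new
    return alpha

print(extinction_probability(2.0, 1.0))   # approaches gamma/beta = 0.5
print(extinction_probability(1.0, 2.0))   # approaches 1 (subcritical case)
```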

Exercises 3.1 and 3.2 each show that

Theorem 3.1

For the continuous-time Markovian model of disease spread in an infinite population, the probability of extinction given a single initial infection is

α=min(1,γ/β) (15)

3.1.1. Extinction probability as a function of time

In the discrete-time case, we were interested in the probability of extinction after some number of generations. When we are using a continuous-time model, we are generally interested in “what is the probability of extinction by time t?”

To answer this, we set α(t) to be the probability of extinction within time t. We will calculate the derivative of α at time t by using some mathematical sleight of hand to find α(t+Δt) − α(t). Then dividing this by Δt and taking Δt → 0 will give the result. Our approach is closely related to backward Kolmogorov equations (described later below).

We choose the time step Δt to be small enough that we can assume that at most one event happens between time 0 and Δt. The probabilities of having 0, 1, or 2 infections are P(I(Δt)=0) = γΔt + O(Δt), P(I(Δt)=1) = 1 − (β+γ)Δt + O(Δt), and P(I(Δt)=2) = βΔt + O(Δt), where the O notation means that the error goes to zero fast enough that O(Δt)/Δt → 0 as Δt → 0. The probability of having 3 or more infections (that is, multiple transmission events in the interval) is O(Δt) as well.

If there are two infected individuals at time Δt, then the probability of extinction by time t+Δt is α(t)². Similarly, if there is one infected at time Δt, the probability of extinction by time t+Δt is α(t); and if there are no infections at time Δt, then the probability of extinction by time t+Δt is 1 = α(t)⁰. So up to O(Δt) we have

α(t+Δt) = ∑_{i≥0} P(I(Δt)=i) α(t)^i = [γΔt] α(t)⁰ + [1 − (β+γ)Δt] α(t) + [βΔt] α(t)² + O(Δt) = α(t) + Δt(β+γ)[μˆ(α(t)) − α(t)] + O(Δt) (16)

Thus

α̇ = lim_{Δt→0} [α(t+Δt) − α(t)]/Δt = (β+γ)[μˆ(α) − α]

and so

Theorem 3.2

Given an infinite population with constant transmission rate β and recovery rate γ, then α(t), the probability of extinction by time t assuming a single initial infection at time 0 solves

α̇ = (β+γ)[μˆ(α) − α] (17)

with μˆ(y) = (βy² + γ)/(β+γ) and the initial condition α(0) = 0.

We could solve this analytically (Exercise 3.4), but most results are easier to derive directly from the ODE formulation.
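Eq. (17) also integrates easily with a basic numerical scheme. The sketch below uses a hand-rolled fixed-step RK4 integrator to avoid external dependencies; the step size and parameter values are arbitrary. For large t the result approaches the eventual extinction probability min(1, γ/β).

```python
def alpha_of_t(beta, gamma, t_max, dt=1e-3):
    """Integrate alpha' = (beta+gamma)*[mu_hat(alpha) - alpha] with
    alpha(0) = 0 using fixed-step RK4; returns alpha(t_max)."""
    # (beta+gamma)*[mu_hat(a) - a] simplifies to beta*a^2 + gamma - (beta+gamma)*a
    f = lambda a: beta * a**2 + gamma - (beta + gamma) * a
    a = 0.0
    for _ in range(int(t_max / dt)):
        k1 = f(a)
        k2 = f(a + dt * k1 / 2)
        k3 = f(a + dt * k2 / 2)
        k4 = f(a + dt * k3)
        a += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return a

print(alpha_of_t(2.0, 1.0, t_max=20.0))   # near min(1, gamma/beta) = 0.5
```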

3.2. Early-time outbreak dynamics

We now explore the number of infections at time t. We define the PGF

Φ(y,t) = ∑_i φ_i(t) y^i

where φi(t) is the probability of i actively infected individuals at time t. We will derive equations for the evolution of Φ(y,t). We assume that Φ(y,0)=y so a single infected individual exists at time 0.

Our goal is to derive equations telling us how Φ changes in time. We will use two approaches which were hinted at in Exercise 2.7, yielding two different partial differential equations. Although their appearance is different, for the appropriate initial condition, their solutions are the same. These equations are called the forward and backward Kolmogorov equations.

We briefly describe the analogy between the forward and backward Kolmogorov equations and Exercise 2.7:

  • Our first approach finds the forward Kolmogorov equations. This is akin to Exercise 2.7 where we found Φ_g(y) by knowing the PGF Φ_{g−1}(y) for the number infected in generation g−1 and recognizing that since the PGF for the number of infections each of them causes is μ(y), we must have Φ_g(y) = Φ_{g−1}(μ(y)).

  • Our second approach finds the backward Kolmogorov equations, which are more subtle and can be derived similarly to how we derived the ODE for extinction probability in Theorem 3.2. This is akin to Exercise 2.7 where we found Φ_g(y) by knowing that the PGF for the number infected in generation 1 is μ(y), and recognizing that after another g−1 generations each of those creates a number of infections whose PGF is Φ_{g−1}(y), and so Φ_g(y) = μ(Φ_{g−1}(y)).

For both approaches, we make use of the observation that for Δt1, we can write the PGF for the number of infections resulting from a single infected individual at time t=0 to be

Φ(y,Δt) = y + (y² − y)βΔt + (1 − y)γΔt + O(Δt).

This says that with probability approximately βΔt a transmission happens and we replace y by y², and with probability approximately γΔt a recovery happens and we replace y by 1. With probability O(Δt) multiple events happen. We can rewrite this as

Φ(y,Δt) = y + (β+γ)[μˆ(y) − y]Δt + O(Δt).

Note that Φ(y,0) = y and ∂Φ(y,0)/∂t = (β+γ)[μˆ(y) − y].

Both of our approaches rely on the observation that Φ(y, t_1+t_2) = Φ(Φ(y, t_2), t_1) by Property A.8. This states that if we take the PGF at time t_1, and then substitute for each y the PGF for the number of descendants of a single individual after t_2 units of time, the result is the PGF for the total number at time t_1 + t_2.

Forward equations. For this we use Φ(y, t_1+t_2) = Φ(Φ(y, t_2), t_1) with t_2 playing the role of Δt and t_1 playing the role of t.

So Φ(y,t+Δt) = Φ(Φ(y,Δt),t). For small Δt (and taking Φ_y(Φ(y,0),t) to be the partial derivative of Φ with respect to its first argument), we have

Φ(y,t+Δt) = Φ(Φ(y,Δt),t) = Φ(Φ(y,0),t) + Δt Φ_y(Φ(y,0),t) ∂Φ(y,0)/∂t + O(Δt) = Φ(y,t) + Δt(β+γ)[μˆ(y) − y] ∂Φ(y,t)/∂y + O(Δt).

Then

Φ̇(y,t) = lim_{Δt→0} [Φ(y,t+Δt) − Φ(y,t)]/Δt = lim_{Δt→0} [Δt(β+γ)(μˆ(y) − y) ∂Φ(y,t)/∂y + O(Δt)]/Δt = (β+γ)[μˆ(y) − y] ∂Φ(y,t)/∂y.

More generally, we can directly apply Property A.10 to get this result. Exercise 3.6 provides an alternate direct derivation of these equations.

Backward equations. In the backward direction we have Φ(y, t_1+t_2) = Φ(Φ(y, t_2), t_1) with t_2 playing the role of t and t_1 playing the role of Δt.

So Φ(y,t+Δt) = Φ(y,Δt+t) = Φ(Φ(y,t),Δt). Note that because Φ(y,0) = y, we have Φ(Φ(y,t),0) = Φ(y,t). Thus for small Δt, we expand Φ as a Taylor series in its second argument t:

Φ(y,t+Δt) = Φ(Φ(y,t),Δt) = Φ(Φ(y,t),0) + Δt Φ_t(Φ(y,t),0) + O(Δt) = Φ(y,t) + Δt Φ_t(Φ(y,t),0) + O(Δt) = Φ(y,t) + Δt(β+γ)[μˆ(Φ(y,t)) − Φ(y,t)] + O(Δt).

To avoid ambiguity, we use Φt above to denote the partial derivative of Φ with respect to its second argument t. So

Φ̇(y,t) = lim_{Δt→0} [Φ(y,t+Δt) − Φ(y,t)]/Δt = (β+γ)[μˆ(Φ(y,t)) − Φ(y,t)].

This result also follows directly from Property A.12.

So we have

Theorem 3.3

The PGF Φ(y,t) for the distribution of the number of current infections at time t assuming a single introduced infection at time 0 solves

∂Φ(y,t)/∂t = (β+γ)[μˆ(y) − y] ∂Φ(y,t)/∂y (18)

as well as

∂Φ(y,t)/∂t = (β+γ)[μˆ(Φ(y,t)) − Φ(y,t)], (19)

both with the initial condition Φ(y,0) = y.

It is perhaps remarkable that such seemingly different equations yield the same solution for the given initial condition.

Example 3.1

The expected number of infections in the infinite population limit is given by [I] = ∑_i i φ_i(t) = ∂Φ(y,t)/∂y |_{y=1}. From this we have

d[I]/dt = ∂/∂t ∂Φ(y,t)/∂y |_{y=1} = ∂/∂y [(β+γ)(μˆ(y) − y) ∂Φ(y,t)/∂y] |_{y=1} = {(β+γ)[μˆ′(y) − 1] ∂Φ(y,t)/∂y + (β+γ)[μˆ(y) − y] ∂²Φ(y,t)/∂y²} |_{y=1} = (β+γ)[μˆ′(1) − 1][I] + (β+γ)[μˆ(1) − 1] ∂²Φ(y,t)/∂y² |_{y=1} = (β+γ)[2β/(β+γ) − 1][I] = (β−γ)[I]

We used μˆ(1) = 1 to eliminate the ∂²Φ(y,t)/∂y² term and replaced μˆ′(1) with 2β/(β+γ). Using this and [I](0) = 1, we have

[I] = e^{(β−γ)t}.

This example proves

Corollary 3.1

In the infinite population limit, if a disease starts with a single infection, then the expected number of active infections at time t is

[I] = e^{(β−γ)t} (20)

3.3. Cumulative and current outbreak size distribution

Let π_{i,r}(t) be the probability of having i currently infected individuals and r completed infections at time t. We define Π(y,z,t) = ∑_{i,r} π_{i,r}(t) y^i z^r to be the PGF at time t. We have Π(y,z,0) = y. As before we assume the population is large enough that the spread of the disease is not limited by the size of the population.

We give an abbreviated derivation of the Kolmogorov equations for Π. A full derivation is requested as an exercise.

Forward Kolmogorov formulation. To derive the forward Kolmogorov equations for the PGF Π(y,z,t), we use Property A.11, noting that all transition rates are proportional to i. The rate of transmission is βi and the rate of recovery is γi. There are no interactions to consider. So

∂Π(y,z,t)/∂t = (β+γ)((β/(β+γ)) y² + (γ/(β+γ)) z − y) ∂Π(y,z,t)/∂y = (β+γ)[μˆ(y,z) − y] ∂Π(y,z,t)/∂y

Backward Kolmogorov formulation. To derive the backward Kolmogorov equations for the PGF Π, we use a modified version of Property A.12 to account for two types of individuals (Exercise A.14, with events proportional only to the infected individuals). We find

Π̇(y,z,t) = (β+γ)[μˆ(Π(y,z,t),z) − Π(y,z,t)].

Combining our backward and forward Kolmogorov equation results, we get

Theorem 3.4

Assuming a single initial infection in an infinite population, the PGF Π(y,z,t) for the joint distribution of the number of current and completed infections at time t solves

∂Π(y,z,t)/∂t = (β+γ)[μˆ(y,z) − y] ∂Π(y,z,t)/∂y (21)

as well as

∂Π(y,z,t)/∂t = (β+γ)[μˆ(Π(y,z,t),z) − Π(y,z,t)] (22)

both with the initial condition Π(y,z,0)=y.

It is again remarkable that these seemingly very different equations have the same solution.

Example 3.2

The expected number of completed infections at time t is

[R] = ∑_{j,k} k π_{j,k} = ∂Π(y,z,t)/∂z |_{y=z=1}

(although we use R, this approach is equally relevant for counting completed infections in the SIS model because of the infinite population assumption). Its evolution is given by

d[R]/dt = ∂/∂t ∂Π(y,z,t)/∂z |_{y=z=1} = ∂/∂z [(β+γ)(μˆ(y,z) − y) ∂Π(y,z,t)/∂y] |_{y=z=1} = (β+γ)[∂μˆ(y,z)/∂z · ∂Π(y,z,t)/∂y + (μˆ(y,z) − y) ∂²Π(y,z,t)/∂z∂y] |_{y=z=1} = (β+γ)[(γ/(β+γ)) ∂Π(y,z,t)/∂y + 0 · ∂²Π(y,z,t)/∂z∂y] |_{y=z=1} = γ[I]

where we use the fact that μˆ(1,1) = 1, ∂μˆ(y,z)/∂z = γ/(β+γ), and [I] = ∂Π(y,z,t)/∂y |_{y=z=1}. Our result says that the rate of change of the expected number of completed infections is γ times the expected number of current infections.

This example proves

Corollary 3.2

In the infinite population limit the expected number of recovered individuals as a function of time solves

d[R]/dt = γ[I] (23)

We will see that this holds even in finite populations.

3.4. Small outbreak final size distribution

We define

Ω(z) = (∑_{j<∞} ω_j z^j) + ω_∞ z^∞

to be the PGF of the distribution of outbreak final sizes in an infinite population, with ω_∞ representing the probability of an epidemic and, for j < ∞, ω_j representing the probability that an outbreak infects exactly j individuals. We use the convention that z^∞ = 0 for z < 1 and 1 for z = 1. To calculate Ω, we observe that the outbreak size coming from a single infected individual is 1 if the first thing that individual does is recover, or it is the sum of the outbreak sizes of two infected individuals if the first thing the individual does is transmit (yielding herself and her offspring).

Thus we have

Ω(z) = (β/(β+γ)) [Ω(z)]² + (γ/(β+γ)) z = μˆ(Ω(z), z)

As for the discrete-time case we may solve this iteratively, starting with the guess Ω(z) = z. Once n iterations have occurred, the first n coefficients of Ω(z) remain constant. Note that unlike the discrete case, here Ω(z) ≠ z μˆ(Ω(z)). This yields

Theorem 3.5

The PGF Ω(z) = ∑_{j<∞} ω_j z^j + ω_∞ z^∞ for the final size distribution assuming a single initial infection in an infinite population solves

Ω(z) = μˆ(Ω(z), z) (24)

with Ω(1) = 1. This function is discontinuous at z = 1 if ω_∞ > 0. For the final size distribution conditional on the outbreak being finite, the PGF is continuous and equals

Ω(z)/α for 0 ≤ z < 1, and 1 for z = 1

As in the discrete-time case, we can find the coefficients of Ω(z) analytically.

Theorem 3.6

Consider continuous-time outbreaks with transmission rate β and recovery rate γ in an infinite population with a single initial infection. The probability the outbreak causes exactly j infections for j < ∞ [that is, the coefficient of z^j in Ω(z)] is

ω_j = (1/j) C(2j−2, j−1) β^{j−1} γ^j / (β+γ)^{2j−1}

where C(·,·) denotes the binomial coefficient.

We prove this theorem in Appendix B. The proof is based on observing that if there are j total infected individuals, this requires j−1 transmissions and j recoveries. Of the sequences of 2j−1 events that have the right number of recoveries and transmissions, a fraction 1/(2j−1) of these satisfy the additional constraints required to be a valid sequence leading to j infections (the sequence cannot lead to 0 infections prior to the last step). Alternately, we can note that the offspring distribution is geometric and use Table 6.
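The two routes to ω_j can be cross-checked numerically: iterate Ω(z) = μˆ(Ω(z), z) on a truncated coefficient list, and compare with the closed form of Theorem 3.6. This sketch is our own; the truncation order, iteration count, and parameter values are arbitrary choices.

```python
from math import comb

def omega_closed(j, beta, gamma):
    """Theorem 3.6: probability the outbreak infects exactly j individuals."""
    return comb(2*j - 2, j - 1) * beta**(j - 1) * gamma**j \
        / (j * (beta + gamma)**(2*j - 1))

def omega_iterated(jmax, beta, gamma, iterations=100):
    """Iterate Omega <- (beta*Omega^2 + gamma*z)/(beta+gamma) starting from
    Omega(z) = z, keeping polynomial coefficients up to z^jmax."""
    b, g = beta / (beta + gamma), gamma / (beta + gamma)
    coeffs = [0.0, 1.0] + [0.0] * (jmax - 1)        # Omega_0(z) = z
    for _ in range(iterations):
        square = [0.0] * (jmax + 1)                 # truncated [Omega(z)]^2
        for m, cm in enumerate(coeffs):
            for n in range(jmax + 1 - m):
                square[m + n] += cm * coeffs[n]
        coeffs = [b * c for c in square]
        coeffs[1] += g                              # + gamma*z/(beta+gamma)
    return coeffs

beta, gamma = 1.0, 2.0
co = omega_iterated(6, beta, gamma)
for j in range(1, 7):
    print(j, co[j], omega_closed(j, beta, gamma))   # the two columns agree
```

As the text notes, the coefficient of z^j stabilizes after roughly j iterations, so a modest iteration count suffices.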

3.5. Full dynamics in finite populations

We now derive the PGFs for continuous time SIS and SIR outbreaks in a finite population.

PGF-based techniques are easiest when we can treat events as independent. In the continuous-time model, when we look at the system in a given state, each event is independent of the others. Once the next event happens the possible events change, but conditional on the new state, they are still independent. Thus we can use the forward Kolmogorov approach.

The backward Kolmogorov approach will not work because in a finite population descendants of any individual are not independent. We do not look at the discrete-time version because in a single time step, multiple events can occur, some of which affect one another. So we would lose independence as we go from one time step to another. For these reasons we focus on the forward Kolmogorov formulations for the continuous-time models. Much of our approach here was derived previously in (Bailey, 1953; Bartlett, 1949). See also (Allen, 2008).

For a given population size N, we let s, i, and r be the number of susceptible, infected and immune (removed) individuals. For the SIS model r=0 and we have s+i=N while for the SIR model we have s+i+r=N.

3.5.1. SIS

We start with the SIS model. We set ξs,i(t) to be the probability of s susceptible and i actively infected individuals at time t. We define the PGF for the joint distribution of susceptible and infected individuals

Ξ(x,y,t) = ∑_{s,i} ξ_{s,i}(t) x^s y^i

At rate (β/N)si, successful transmissions occur, moving the system from the state (s,i) to (s−1,i+1), which is equivalent to removing one susceptible individual and one infected individual, and replacing them with two infected individuals. Following Property A.11, this is represented by

(β/N)(y² − xy) ∂²Ξ/∂x∂y.

At rate γi, recoveries occur, moving the system from the state (s,i) to (s+1,i−1), which is equivalent to removing one infected individual and replacing it with a susceptible individual. This is represented by

γ(x − y) ∂Ξ/∂y.

So the PGF solves

Ξ̇ = (β/N)(y² − xy) ∂²Ξ/∂x∂y + γ(x − y) ∂Ξ/∂y

It is sometimes useful to rewrite this as

Ξ̇ = (y − x)[(β/N) y ∂/∂x − γ] ∂Ξ/∂y

We have

Theorem 3.7

For SIS dynamics in a finite population we have

∂Ξ/∂t = (β/N)(y² − xy) ∂²Ξ/∂x∂y + γ(x − y) ∂Ξ/∂y (25)

We can use this to derive equations for the expected number of susceptible and infected individuals.

Example 3.3

We use [S] and [I] to denote the expected number of susceptible and infected individuals at time t. We have

[S] = ∑_{s,i} s ξ_{s,i}(t) = ∑_{s,i} s ξ_{s,i} 1^{s−1} 1^i = ∂Ξ(1,1,t)/∂x
[I] = ∑_{s,i} i ξ_{s,i}(t) = ∑_{s,i} i ξ_{s,i} 1^s 1^{i−1} = ∂Ξ(1,1,t)/∂y

We also define the expected value of the product si,

[SI] = ∑_{s,i} s i ξ_{s,i}(t) = ∂²Ξ(1,1,t)/∂x∂y.

Then we have

[Ṡ] = (d/dt) ∂Ξ(1,1,t)/∂x = ∂/∂x ∂Ξ(x,y,t)/∂t |_{x=y=1} = ∂/∂x ((y−x)[(β/N) y ∂/∂x − γ] ∂Ξ/∂y) |_{x=y=1} = {(y−x) ∂/∂x[((β/N) y ∂/∂x − γ) ∂Ξ/∂y] − [(β/N) y ∂/∂x − γ] ∂Ξ/∂y} |_{x=y=1} = −(β/N)[SI] + γ[I]

In the final line, we eliminated the first term because y − x is zero at x = y = 1. Similar steps show that

[İ] = (β/N)[SI] − γ[I]

but the derivation is faster if we simply note [S]+[I]=N is constant. This proves

Corollary 3.3

For SIS disease, the expected number infected and susceptible solves

d[S]/dt = −(β/N)[SI] + γ[I] (26)
d[I]/dt = (β/N)[SI] − γ[I] (27)

where [SI] is the expected value of the product si.
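Note that these equations are not closed: [SI] is not determined by [S] and [I] alone. For a small population we can nevertheless verify the corollary exactly by integrating the forward Kolmogorov (master) equations for the state probabilities directly; the identity d[I]/dt = (β/N)[SI] − γ[I] then holds at every time. A sketch (our own code; N = 30 and the rates are illustrative):

```python
def sis_master(beta, gamma, N, i0, t_max, dt=1e-3):
    """Evolve the SIS master equation on states i = 0..N (with s = N - i)
    using fixed-step RK4; return [I], [SI], and d[I]/dt at time t_max."""
    p = [0.0] * (N + 1)
    p[i0] = 1.0
    up = [beta * (N - i) * i / N for i in range(N + 1)]   # rate of i -> i+1
    down = [gamma * i for i in range(N + 1)]              # rate of i -> i-1

    def deriv(p):
        dp = [-(up[i] + down[i]) * p[i] for i in range(N + 1)]
        for i in range(1, N + 1):
            dp[i] += up[i - 1] * p[i - 1]
        for i in range(N):
            dp[i] += down[i + 1] * p[i + 1]
        return dp

    for _ in range(int(t_max / dt)):
        k1 = deriv(p)
        k2 = deriv([pi + dt / 2 * k for pi, k in zip(p, k1)])
        k3 = deriv([pi + dt / 2 * k for pi, k in zip(p, k2)])
        k4 = deriv([pi + dt * k for pi, k in zip(p, k3)])
        p = [pi + dt / 6 * (a + 2 * b + 2 * c + d)
             for pi, a, b, c, d in zip(p, k1, k2, k3, k4)]

    I = sum(i * pi for i, pi in enumerate(p))
    SI = sum((N - i) * i * pi for i, pi in enumerate(p))
    dIdt = sum(i * d for i, d in enumerate(deriv(p)))
    return I, SI, dIdt

I, SI, dIdt = sis_master(beta=2.0, gamma=1.0, N=30, i0=1, t_max=2.0)
print(dIdt, 2.0 * SI / 30 - 1.0 * I)   # the two agree
```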

3.5.2. SIR

Now we consider the SIR model. A review of various techniques (including PGF-based methods) to find the final size distribution of outbreaks in finite-size populations can be found in (House, Ross, & Sirl, 2013). Here we focus on the application of PGFs to find the full dynamics. To reduce the number of variables we track, we focus just on s and i and use r = N − s − i to find the number recovered. So Ξ(x,y,t) does not have any dependence on z. For a given s and i, infection occurs at rate βsi/N. It appears as a departure from the state (s,i) and entry into (s−1,i+1). Following Property A.11, this is captured by

(β/N)(y² − xy) ∂²Ξ/∂x∂y.

Recovery is captured by

γ(1 − y) ∂Ξ/∂y

[note the difference from the SIS case in the recovery term]. So we have

Theorem 3.8

For SIR dynamics in a finite population we have

∂Ξ/∂t = (β/N)(y² − xy) ∂²Ξ/∂x∂y + γ(1 − y) ∂Ξ/∂y (28)

We follow similar steps to Example 3.3 to derive equations for [S] and [I] in Exercise 3.16. The result of this exercise should show

Corollary 3.4

For SIR disease, the expected number of susceptible, infected, and recovered individuals solves

d[S]/dt = −(β/N)[SI] (29)
d[I]/dt = (β/N)[SI] − γ[I] (30)
d[R]/dt = γ[I] (31)

where [SI] is the expected value of the product si.

3.6. Exercises

Exercise 3.1

Extinction Probability. Let β and γ be given with μˆ(y) = (βy² + γ)/(β+γ).

  • a.

    Analytically find solutions to y=μˆ(y).

  • b.

    Assume β<γ. Find all solutions in [0,1].

  • c.

    Assume β>γ. Find all solutions in [0,1].

Exercise 3.2

Consistency with discrete-time formulation. Although we have argued that a transmission in the continuous-time disease transmission case can be treated as if a single infected individual has two infected offspring and then disappears, this is not what actually happens. In this exercise we look at the true offspring distribution of an infected individual before recovery, and we show that the ultimate predictions of the two versions are equivalent. Consider a disease in which individuals transmit at rate β and recover at rate γ. Let pi be the probability an infected individual will cause exactly i new infections before recovering.

  • a.

    Explain why p0=γ/(β+γ).

  • b.

Explain why p_i = β^i γ/(β+γ)^{i+1}. So the p_i form a geometric distribution.

  • c.

Show that μ(y) = ∑_i p_i y^i can be expressed as μ(y) = γ/(β+γ−βy). [This definition of μ without the hat corresponds to the discrete-time definition.]

  • d.

Show that the solutions to y = μ(y) are the same as the solutions to y = μˆ(y) = (βy² + γ)/(β+γ). So the extinction probability can be calculated either way. (You do not have to find the solutions to do this, you can simply show that the two equations are equivalent.)

Exercise 3.3

Relation with ℛ0. Take μ(y) = γ/(β+γ−βy) as given in Exercise 3.2 and μˆ(y) = (βy² + γ)/(β+γ).

  • a.

Show that μ′(1) ≠ μˆ′(1) in general.

  • b.

Show that when ℛ0 = μ′(1) = 1, then μ′(1) = μˆ′(1) = 1. So both are still threshold parameters.

Exercise 3.4

Revisiting eventual extinction probability. We revisit the results of Exercise 3.1 using Eq. (17) (without solving it).

  • a.

By substituting for μˆ(α), show that α̇ = (1 − α)(γ − βα).

We have α(0)=0. Taking this initial condition and expression for α˙, show that

  • b.

α → 1 as t → ∞ if β < γ (i.e., ℛ0 < 1) and

  • c.

α → γ/β as t → ∞ if β > γ (i.e., ℛ0 > 1).

  • d.

    Set up (but do not solve) a partial fraction integration that would give α(t) analytically.

Exercise 3.5

Understanding the backward Kolmogorov equations. Let φi(t) denote the probability of having i active infections at time t given that at time 0 there was a single infection [φ1(0)=1]. We have φ0(t)=α(t). We extend the derivation of Eq. (16) to φ1. Assume φ0(t0) and φ1(t0) are known.

  • a.

    Following the derivation of Eq. (16), approximate φ0(Δt), φ1(Δt), and φ2(Δt) for small Δt.

  • b.

    From biological grounds explain why if there are 0 infections at time Δt then there are also 0 infections at time t0+Δt.

  • c.

    If there is 1 infection at time Δt, what is the probability of 1 infection at time t0+Δt?

  • d.

    If there are 2 infections at time Δt, what is the probability of 1 infection at time t0+Δt?

  • e.

    Write φ1(t0+Δt) in terms of φ0(t0), φ1(t0), φ1(Δt), and φ2(Δt).

  • f.

    Using the definition of the derivative, find an expression for φ˙1 in terms of φ1(t) and φ2(t).

Exercise 3.6

Derivation of the forward Kolmogorov equations. In this exercise we derive the PGF version of the forward Kolmogorov equations by directly calculating the rate of change of the probabilities of the states. Define φj(t) to be the probability that there are j active infections at time t.

We have the forward Kolmogorov equations:

φ̇_j = β(j−1)φ_{j−1} + γ(j+1)φ_{j+1} − (β+γ) j φ_j.
  • a.

    Explain each term on the right hand side of the equation for φ˙j.

  • b.

By expanding Φ̇(y,t) = ∂/∂t ∑_j φ_j y^j, arrive at Equation (18).

Exercise 3.7

Derivation of the backward Kolmogorov equations. In this exercise we follow (Allen, 2017; Bailey, 1964) and derive the PGF version of the backward Kolmogorov equations by directly calculating the rate of change of the probabilities of the states. Define φki(t) to be the probability of i infections at time t given that there were k infections at time 0. Although we assume that at time 0 there is a single infection, we will need to derive the equations for arbitrary k.

  • a.

    Explain why

φ_{ki}(t+Δt) = φ_{ki}(t) − k(β+γ)φ_{ki}(t)Δt + k(βφ_{(k+1)i}(t) + γφ_{(k−1)i}(t))Δt + O(Δt)

for small Δt.

  • b.

By using the definition of the derivative φ̇_{ki} = lim_{Δt→0} [φ_{ki}(t+Δt) − φ_{ki}(t)]/Δt, find φ̇_{ki}.

Define Φ(y,t|k) = ∑_i φ_{ki} y^i to be the PGF for the number of active infections assuming that there are k initial infections.

  • c.

    Show that

Φ̇(y,t|1) = −(β+γ)Φ(y,t|1) + βΦ(y,t|2) + γΦ(y,t|0)
  • d.

Explain why Φ(y,t|k) = [Φ(y,t|1)]^k.

  • e.

    Complete the derivation of Equation (19).

Exercise 3.8

Define Φ(y,t|k) to be the PGF for the probability of having i infections at time t given k infections at time 0.

  • a.

Explain why Φ(y,t|k) = [Φ(y,t)]^k.

  • b.

Show that if we substitute Φ(y,t|k) = [Φ(y,t)]^k in place of Φ(y,t) in Eq. (18), the equation remains true with the initial condition y^k.

  • c.

Show that if we substitute Φ(y,t|k) = [Φ(y,t)]^k in place of Φ(y,t) in equation (19), we do not get a true equation.

So Eq. (18) applies regardless of the initial condition, but Eq. (19) is only true for the specific initial condition of one infection.

Exercise 3.9

Let Φ(y,t|k) be the PGF for the number of infections assuming there are initially k infections. Derive the backward Kolmogorov equation for Φ(y,t|k). Note that some of the Φs in the derivation above would correspond to Φ(y,t|1) and some of them to Φ(y,t|k).

Exercise 3.10

Comparison of the formulations.

  • a.

    Using Eq. (18) derive an equation for α˙ where α(t)=Φ(0,t). What, if any, additional information would you need to solve this numerically?

  • b.

    Using Eq. (19), derive Equation (17) for α˙ where α(t)=Φ(0,t). What, if any, additional information would you need to solve this numerically?

Exercise 3.11

Full solution.

  • a.

    Show that Eq. (19) can be written

∂Φ(y,t)/∂t = (γ − βΦ(y,t))(1 − Φ(y,t))
  • b.

    Using partial fractions, set up an integral which you could use to solve for Φ(y,t) analytically (you do not need to do all the algebra to solve it).

Exercise 3.12

Argue from their definitions that Φ(y,t)=Π(y,z,t)|z=1.

Exercise 3.13

Derive Theorem 3.3 from Theorem 3.4.

Exercise 3.14

Derive Theorem 3.5 from Theorem 3.4.

Exercise 3.15

Equivalence of continuous and discrete final size distributions.

Show by direct substitution that if Ω(z)=μˆ(Ω(z),z) then Ω(z)=zμ(Ω(z)) where μ(y)=γ/(β+γβy) is the PGF for the offspring distribution found in Exercise 3.2.

Exercise 3.16

We revisit the derivations of the usual mass action SIR ODEs. Following Example 3.3,

  • a.

    Derive [S˙] in terms of [SI].

  • b.

    Derive [I˙] in terms of [SI] and [I].

  • c.

    Using [S]+[I]+[R]=N, derive [R˙].

4. Large-time dynamics

We now look at how PGFs can be used to develop simple models of SIR disease spread in the large population limit when the disease infects a nonzero fraction of the population. In this limit, the early-time approaches derived before break down because depletion of the susceptible population is important. The later-time models of Section 3.5 are impractical in the N → ∞ limit and are more restricted due to the continuous-time assumption.

4.1. SIR disease and directed graphs

In Section 2.5 we argued that for early times the continuous-time predictions can be framed in terms of the discrete-time predictions because we can classify infections by the length of the transmission chain to them from the index case. For SIR disease this argument extends beyond early times.

To see this, we assume that prior to the disease introduction, we know for each individual what would happen if he ever becomes infected as in Fig. 9. In particular, we know how long his infection would last, to whom he would transmit, and how long the delays from his infection to onwards transmission would be. The process of choosing these in advance, selecting the initial infection(s), and tracing infection from there is equivalent to choosing the initial infection(s) and then choosing the transmissions while the infection process is traced out.

Fig. 9.

Fig. 9

(Left) A twelve-individual population, after the a priori assignment of who would transmit to whom if ever infected by the SIR disease (the delay until transmission is not shown). Half of the nodes have zero potential infectors and half have 3. Half of the nodes have 1 potential offspring and half have 2. So the offspring distribution has PGF (x+x²)/2 while the ancestor distribution has PGF χ(x) = (1+x³)/2. (Middle) If node 6 is initially infected, the infection will reach node 4 who will transmit to 5 and 7, and eventually infection will also reach 8 and 2 before further transmissions fail because nodes are already infected. If, however, it were to start at 9, then it would reach 2, from which it would spread only to 10. (Right) By tracing backwards from an individual, we can determine which initial infections would lead to infection of that individual. For example individual 4 will become infected if and only if it is initially infected or any of 0, 3, 5, 6, 7, 8, or 11 is an initial infection.

By assigning who transmits to whom (and how long the delays are), we have defined a weighted directed graph whose edges represent the potential transmissions and whose weights represent the delays (Kenah & Miller, 2011; Kiss, Miller, & Simon). A node v will become infected if and only if there is at least one directed path from an initially infected node u to v. The time of v's infection is the minimum, over all directed paths from initially infected nodes to v, of the sum of the edge weights along the path. We note that the transmission process could be quite complex: the duration of a node's infection and the delays from time of infection to time of onwards transmissions can have effectively arbitrary distributions, and we could still build a similar directed graph.

This directed graph is a useful structure to study because it encodes the outbreak in a single static object, as opposed to a dynamic process. There is significant study of the structure of such directed graphs (Broder et al., 2000; Dorogovtsev, Mendes, & Samukhin, 2001). Much of it focuses on the size of out-components of a node (that is, for a given node, what fraction of the population can be reached following the edges forwards) or the in-components (that is, from what fraction of the population is it possible to reach a given node by following edges forwards).

4.2. Final size relations for SIR epidemics

We now derive final size relations for SIR epidemics in the large population limit. We begin with the assumption that a single node is initially infected and that an epidemic happens.

We use the mapping of the SIR epidemic to a directed graph G. Assume that a single node u is chosen to be infected. Consider a node v. The probability v is infected is the probability that u is in her in-component, and so it equals the proportion of G that is in the in-component of v. In the limit as G becomes infinite, there are a few possibilities. We are interested in what happens when an epidemic occurs, so we can assume that u has a large out-component (in the sense that the out-component takes up a non-zero fraction of G in the N → ∞ limit) (Broder et al., 2000):

  • If v has a small in-component, then almost surely u is not in the in-component and so almost-surely v is not infected.

  • If v has a large in-component, then almost surely it contains a node w that lies in the out-component of u. The existence of w then implies the existence of a path from u to w to v, so v is in u's out-component and v becomes infected.

Thus, if u causes an epidemic in the large N limit, then the probability that v becomes infected equals the probability that v has a large in-component. So the size of an epidemic (if it happens) is simply the probability a random individual has a large in-component.

We approach the question of whether v has a large in-component in the same way we approached the question of whether u causes a large chain of infections (i.e., whether u has a large out-component). We define the PGF of the ancestor distribution to be the function χ(x) defined by

χ(x) = ∑_i p_i x^i

where p_i is the probability that a random node in the directed graph has in-degree i. That is, there are exactly i nodes that would directly transmit to the randomly chosen node if they were ever infected. So the probability an individual is not infected, S(∞)/N, solves x = χ(x), choosing the smaller solution when two solutions exist. Since the proportion infected is r(∞) = R(∞)/N = 1 − S(∞)/N, we can conclude

Theorem 4.1

Assume that an outbreak begins with a single infected individual and an epidemic results. In the large N limit, the expected cumulative proportion infected r(∞) = R(∞)/N solves

r(∞) = 1 − χ(1 − r(∞))

where χ(x) is the PGF of the ancestor distribution. If there are multiple solutions we choose the larger solution for r(∞) in [0,1].

Under common assumptions, the population is large, the average number of transmissions an individual causes is ℛ0, and the recipient is selected uniformly at random. Under these assumptions the ancestor distribution is Poisson with mean ℛ0. So χ(x) = e^{−ℛ0(1−x)}. Then

r(∞) = 1 − e^{−ℛ0 r(∞)}. (32)

Deriving this result does not depend on the duration of infections, or even on the distribution of factors affecting infectiousness. The assumptions required are that an epidemic starts from a single infected individual, that each transmission reaches a randomly chosen member of the population, that all individuals have equal susceptibility, and that the average individual will transmit to ℛ0 others. This result is general across a wide range of assumptions about the infectious process.

Restating this we have:

Corollary 4.1

Assume that an SIR disease is spreading in a well-mixed population with homogeneous susceptibility. Assuming that the initial fraction infected is infinitesimal and an epidemic occurs, the final size satisfies

r(∞) = 1 − e^{−ℛ0 r(∞)} (33)

where ℛ0 is the reproductive number of the disease.

This explains many of the results of (Ma & Earn, 2006; Miller, 2012), and our observation in Example 2.3 that the epidemic size depended on ℛ0 and not on any other property of the offspring distribution. A closely-related derivation is provided by (Diekmann & Heesterbeek, 2000, Section 1.3).
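The final size relation is transcendental but solves easily by fixed-point iteration starting from r = 1, which avoids converging to the trivial root r = 0 when ℛ0 > 1. A minimal sketch (our own code):

```python
from math import exp

def final_size(R0, tol=1e-12):
    """Solve r = 1 - exp(-R0*r) by fixed-point iteration starting at r = 1."""
    r = 1.0
    while True:
        new = 1.0 - exp(-R0 * r)
        if abs(new - r) < tol:
            return new
        r = new

print(final_size(2.0))   # about 0.797: an R0 = 2 epidemic infects ~80%
print(final_size(0.9))   # about 0: no epidemic when R0 <= 1
```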

4.3. Discrete-time SIR dynamics

We now take a discrete-time approach, similar to (Miller et al., 2012; Valdez, Macri, & Braunstein, 2012) and (Kiss, Miller, & Simon, chapter 6). We will assume that at generation g=0 the disease is introduced by infecting a proportion ρ uniformly at random leaving the remainder susceptible. We assume that the population is very large and that the number of infections is large enough that the dynamics can be treated as deterministic. Our results can be adapted to other initial conditions (for example, to account for nonzero R in the initial condition).

We assume that χ(x) is known and that there is no correlation between how susceptible an individual is and how infectious that individual is. Thus at generation g, the expected number of transmissions occurring is ℛ0 I(g), and how the recipients are chosen depends on χ.

Let v be a randomly chosen member of the population. The probability that v's randomly chosen ancestor has not yet been infected by generation g−1 is S(g−1)/N. The probability v is susceptible at generation g is the probability v was initially susceptible, 1−ρ, times the probability v has not received any transmissions, χ(S(g−1)/N) (see Exercise 4.2).

So for g>0 we arrive at

S(g) = (1−ρ) N χ(S(g−1)/N)
I(g) = N − R(g) − S(g)
R(g) = R(g−1) + I(g−1)

with

S(0) = (1−ρ)N, I(0) = ρN, R(0) = 0.

So we have

Theorem 4.2

Assume that χ(x) is the PGF of the ancestor distribution and assume there is no correlation between infectiousness and susceptibility of a given individual. Further assume that at generation 0 a fraction ρ is randomly infected in the generation-based discrete-time model. Then in the large population limit

S(g) = (1−ρ)Nχ(S(g−1)/N) (34a)
I(g) = N − R(g) − S(g) (34b)
R(g) = R(g−1) + I(g−1). (34c)

With initial conditions

S(0) = (1−ρ)N, I(0) = ρN, R(0) = 0. (34d)

We can interpret this in the context of survival functions. The function (1−ρ)χ(S(g−1)/N) gives the probability that an individual has lasted g generations without being infected.
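The recursion in Theorem 4.2 is straightforward to iterate. The Python sketch below assumes a Poisson ancestor distribution, χ(x) = e^{−R₀(1−x)} (cf. Exercise 4.1); the function name and parameter values are our own illustrative choices.

```python
import math

def discrete_sir(R0, rho, N, generations):
    """Iterate System (34), assuming a Poisson ancestor distribution."""
    chi = lambda x: math.exp(-R0 * (1.0 - x))
    S, I, R = (1.0 - rho) * N, rho * N, 0.0
    history = [(S, I, R)]
    for g in range(1, generations + 1):
        R_new = R + I                          # R(g) = R(g-1) + I(g-1)
        S_new = (1.0 - rho) * N * chi(S / N)   # S(g) depends on S(g-1)
        I_new = N - R_new - S_new              # I(g) = N - R(g) - S(g)
        S, I, R = S_new, I_new, R_new
        history.append((S, I, R))
    return history

# with N = 1 the compartments are fractions of the population
trajectory = discrete_sir(R0=2.0, rho=0.01, N=1.0, generations=50)
```

The last entries of the trajectory approach the final size predicted by the discrete-time analogue of Equation (33).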

4.4. Continuous-time SIR epidemic dynamics

We now move to continuous-time SIR epidemics. We allow for heterogeneity, assuming that each susceptible individual u receives transmissions at some rate κ_u βI(t)/(N⟨K⟩), where ⟨K⟩ denotes the mean of κ, and that the PGF of κ is ψ(x) = Σ_κ P(κ)x^κ. We assume κ takes only non-negative integer values.

For an initially susceptible individual u with a given κ_u, the probability of not yet having received a transmission by time t solves ṡ_u = −κ_u βI(t)s_u/(N⟨K⟩), which has solution

s_u = e^{−κ_u β∫₀ᵗ I(τ)dτ/(N⟨K⟩)}.

So we can write

s_u = θ^{κ_u}

where θ = e^{−β∫₀ᵗ I(τ)dτ/(N⟨K⟩)} and

θ̇ = −βθI/(N⟨K⟩).

Considering a random individual of unknown κ, the probability she was initially susceptible is 1−ρ and the probability she has not received any transmissions is ψ(θ). So

S(t) = (1−ρ)Nψ(θ).

Taking Ṙ = γI, we have

Ṙ = γI = −(γN⟨K⟩/β)(θ̇/θ).

Integrating both sides, taking θ(0)=1 and R(0)=0, we have

R = −(γN⟨K⟩/β) ln θ

Taking I = N − S − R we get

I = N(1 − (1−ρ)ψ(θ) + (γ⟨K⟩/β) ln θ)

and so θ˙ becomes

θ̇ = −βθ(1 − (1−ρ)ψ(θ) + (γ⟨K⟩/β) ln θ)/⟨K⟩

Theorem 4.3

Assuming that at time t=0 a fraction ρ of the population is randomly infected and that the susceptible individuals each have a κ such that they become infected as a Poisson process with rate κβI/(N⟨K⟩), in the large population limit we have

S = N(1−ρ)ψ(θ) (35a)
I = N(1 − (1−ρ)ψ(θ) + (γ⟨K⟩/β) ln θ) (35b)
R = −(γN⟨K⟩/β) ln θ (35c)

where ψ(x) = Σ_k P(k)x^k and the system is governed by a single ODE

θ̇ = −βθ(1 − (1−ρ)ψ(θ) + (γ⟨K⟩/β) ln θ)/⟨K⟩ (35d)

with initial condition

θ(0)=1. (35e)

As in the discrete-time case, this can be interpreted as a survival function formulation of the SIR model. Most, if not all, mass-action formulations of the SIR model can be re-expressed in a survival function formulation. Some examples are shown in the Exercises.
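To make Theorem 4.3 concrete, System (35) reduces to the single scalar ODE (35d), which can be integrated with any standard method. The Python sketch below uses a classical RK4 step for the homogeneous case κ = 1 (so ψ(x) = x and ⟨K⟩ = 1); the function names, parameter values, and step size are all illustrative choices.

```python
import math

def theta_rhs(theta, beta, gamma, rho):
    # Equation (35d) with psi(x) = x and <K> = 1
    return -beta * theta * (1.0 - (1.0 - rho) * theta
                            + (gamma / beta) * math.log(theta))

def run_sir(beta=2.0, gamma=1.0, rho=1e-3, t_max=40.0, dt=1e-3):
    theta = 1.0                              # initial condition (35e)
    for _ in range(int(t_max / dt)):
        # classical fourth-order Runge-Kutta step
        k1 = theta_rhs(theta, beta, gamma, rho)
        k2 = theta_rhs(theta + 0.5 * dt * k1, beta, gamma, rho)
        k3 = theta_rhs(theta + 0.5 * dt * k2, beta, gamma, rho)
        k4 = theta_rhs(theta + dt * k3, beta, gamma, rho)
        theta += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    S = (1.0 - rho) * theta                  # per capita, psi(theta) = theta
    R = -(gamma / beta) * math.log(theta)
    return S, 1.0 - S - R, R                 # (S, I, R) as fractions
```

At large times the recovered fraction approaches the final size predicted by the relation in Exercise 4.4.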

Some very similar systems of equations are developed in (Kiss, Miller, & Simon, chapter 6) and (Miller, 2011; Miller et al., 2012; Valdez et al., 2012; Volz, 2008), where the focus is on networks for which the value of κ affects not only the probability of becoming infected but also the probability of transmitting further. These references focus on the assumption that an individual's infector remains a contact after transmission, but they contain techniques for studying partnerships of varying duration.

4.5. Exercises

Exercise 4.1

Ancestor distribution for homogeneous well-mixed population.

Consider an SIR disease in a well-mixed population having N individuals and a given R₀. Let v be a randomly chosen individual from the directed graph created by placing edges from each node to all those nodes they would transmit to if infected.

  • a

    Show that if the average number of offspring is R₀, then so is the average number of infectors.

  • b

    If there are exactly R₀N edges in the directed graph and each recipient is chosen uniformly at random from the population (independent of any previous choice), argue that the number of transmissions v receives has a binomial distribution with R₀N trials and probability 1/N. (Technically we must allow edges from v to v.)

  • c

    Argue that if R₀ remains fixed as N→∞, then the number of transmissions v receives is Poisson distributed with mean R₀.

Exercise 4.2

Explain why, for large N, the probability that v is still susceptible at generation g given she was initially susceptible is χ(S(g−1)/N).

Exercise 4.3

Use Theorem 4.2 to derive a result like Theorem 4.1, but with nonzero ρ.

Exercise 4.4

Final size relations

Consider the continuous time SIR dynamics as given in System (35)

  • a

    Assume κ=1 for all individuals, and write down the corresponding equations for S, I, R, and θ.

  • b

    At large time I→0, so S(∞) = N − R(∞). But also S(∞) = S(0)ψ(θ(∞)). By writing θ(∞) in terms of R(∞), derive a recurrence relation for r(∞) = R(∞)/N in terms of r(∞) and R₀ = β/γ.

  • c

    Comment on the relation between your result and Theorem 4.1

Exercise 4.5

Other relations

  • a

    Using the equations from Exercise 4.4, derive the peak prevalence relation, an expression for the maximum value of I. [At the maximum İ = 0, so we start by finding θ such that Ṡ + Ṙ = 0.]

  • b

    Similarly, find the peak incidence relation, an expression for the maximum rate at which new infections occur, −Ṡ.

Exercise 4.6

Alternate derivation of su.

If the rate of transmissions to u is βIκ_u/(N⟨K⟩), then the number of transmissions u has received by time t is Poisson distributed with mean βκ_u∫₀ᵗ I(τ)dτ/(N⟨K⟩).

  • a

    Let f_u(x) be the PGF for the number of transmissions u has received. Find an expression for f_u(x) in terms of the integral ∫₀ᵗ I(τ)dτ.

  • b

    Explain why fu(0) is the probability u is still susceptible.

  • c

    Find fu(0).

Exercise 4.7

Alternate derivation of Theorem 4.3 in the homogeneous case.

The usual homogeneous SIR equations are

Ṡ = −βIS/N
İ = βIS/N − γI
Ṙ = γI

We will derive System (35) for fixed κ=1 from this system through the use of an integrating factor. Set θ = e^{−β∫₀ᵗ I(τ)dτ/N}.

  • a

    Show that θ̇ = −βIθ/N and so θ̇/θ = −βṘ/(Nγ).

  • b

    Using the equation for Ṡ, add βIS/N to both sides and then divide by θ (the factor 1/θ is an integrating factor). Show that the expression on the left hand side is (d/dt)(S/θ), and so

(d/dt)(S/θ) = 0.
  • c

    Solve for R in terms of θ.

  • d

    Solve for S in terms of θ.

  • e

    Solve for I in terms of θ using S+I+R=N.

This equivalence was found in (Miller, 2012) and (Harko, Lobo, & Mak, 2014).

Exercise 4.8

Alternate derivation of Theorem 4.3.

Consider now a population having many subgroups of susceptibles indexed by κ, with group κ receiving transmissions at rate βκI/(N⟨K⟩) per individual. Once infected, each individual transmits with rate β⟨K⟩ and recovers with rate γ. These assumptions lead to

Ṡ_κ = −(βκI/(N⟨K⟩))S_κ
İ = −γI + (βI/(N⟨K⟩))Σ_κ κS_κ
Ṙ = γI

Following Exercise 4.7, set θ = e^{−β∫₀ᵗ I(τ)dτ/(N⟨K⟩)} and derive System (35) from these equations by use of an integrating factor.

5. Multitype populations

We now briefly discuss how PGFs can be applied to multitype populations. This section is intended primarily as a pointer to the reader to show that it is possible to apply these methods to such populations. We do not perform a detailed analysis.

Many populations can be divided into subgroups. These may be patches in a metapopulation model, genders in a heterosexual sexually transmitted infection model, age groups in an age-structured population, or any of a number of other groupings. Applications of PGFs to such models have been studied in multiple contexts (Kucharski & Edmunds, 2015; Reluga, Meza, Walton, & Galvani, 2007).

5.1. Discrete-time epidemic probability

We begin by considering the probability of an epidemic in a discrete-time model. To set the stage, assume there are M groups and let p_{i₁,i₂,…,i_M|k} be the probability that an individual of group k will cause i_ℓ infections in group ℓ. Define α_{g|k} to be the probability that a chain of infections starting from an individual of group k becomes extinct within g generations.

It is straightforward to show that if we define

ψ_k(x₁, x₂, …, x_M) = Σ_{i₁,i₂,…,i_M} p_{i₁,i₂,…,i_M|k} x₁^{i₁} x₂^{i₂} ⋯ x_M^{i_M}

then

α_{g|k} = Σ_{i₁,i₂,…,i_M} p_{i₁,i₂,…,i_M|k} α_{g−1|1}^{i₁} α_{g−1|2}^{i₂} ⋯ α_{g−1|M}^{i_M} = ψ_k(α_{g−1|1}, α_{g−1|2}, …, α_{g−1|M})

After converting this into vectors we get α1=ψ(0). Iterating g times we have

αg=ψ[g](0) (36)

Setting α to be the limit as g goes to infinity, we find the extinction probabilities. Specifically, the k-th component of α is the probability of extinction given that the first individual is of type k. Thus we have:

Theorem 5.1

Let

  • α_g = (α_{g|1}, α_{g|2}, …, α_{g|M}) where α_{g|k} is the probability a chain of infections starting with a type k individual will end within g generations

  • and ψ = (ψ₁, ψ₂, …, ψ_M) where ψ_k(x) = Σ_{i₁,i₂,…,i_M} p_{i₁,i₂,…,i_M|k} x₁^{i₁} x₂^{i₂} ⋯ x_M^{i_M}.

Then αg=ψ[g](0).

The vector of eventual extinction probabilities in the infinite population limit is given by α = lim_{g→∞} α_g and is a solution to α = ψ(α).

We could have derived this directly by showing that the extinction probabilities solve α=ψ(α). In this case it might not be obvious how to solve this multidimensional system of nonlinear equations or how to be certain that the solution found is the appropriate one. However, by interpreting the iteration in Eqn. (36) in terms of the extinction probability after g generations, it is clear that simply iterating starting from α0=0 will converge to the appropriate values. Additionally the values calculated in each iteration have a meaningful interpretation.
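To illustrate this iteration, consider a hypothetical two-type branching process in which a type-k case produces independent Poisson numbers of type-ℓ offspring with means lam[k][ℓ], so that ψ_k(x) = exp(Σ_ℓ lam[k][ℓ](x_ℓ − 1)). The Python sketch below iterates α_g = ψ(α_{g−1}) starting from α₀ = 0; the mean matrix is an arbitrary illustrative choice.

```python
import math

# means: lam[k][l] = expected number of type-l offspring of a type-k case
lam = [[1.5, 0.5],
       [0.5, 1.5]]

def psi(alpha):
    # independent Poisson offspring => psi_k(x) = exp(sum_l lam[k][l]*(x_l - 1))
    return tuple(math.exp(sum(lam[k][l] * (alpha[l] - 1.0) for l in range(2)))
                 for k in range(2))

alpha = (0.0, 0.0)            # alpha_0: no chain is extinct within 0 generations
for g in range(500):
    alpha = psi(alpha)        # alpha_g, non-decreasing in g
print(alpha)  # eventual extinction probability starting from each type
```

In this symmetric example each type has mean total offspring 2, so both entries converge to the root of α = e^{2(α−1)}, the same extinction probability as a single-type process with R₀ = 2.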

Example 5.1

Consider a population made up of many large communities. We assume an unfamiliar disease is spreading through the population. When the disease begins to spread in a community, the community learns to recognize the disease symptoms and infectiousness declines. We assume that we can divide the population into 3 types: primary cases T0, secondary cases T1, and tertiary cases T2. The infectiousness of primary cases is higher than that of secondary cases which is higher than that of tertiary cases. Within a community a primary case can cause secondary cases, while secondary and tertiary cases can cause tertiary cases. All cases can cause new primary cases in other communities. We ignore multiple introductions to the same community.

We define n_{ij} to be the number of infections of type T_i caused by a type T_j individual, and we assume that we know the joint distributions p_{n₀₀,n₁₀}, p_{n₀₁,n₂₁}, and p_{n₀₂,n₂₂}. We define

ψ₁(x, y, z) = Σ_{n₀₀,n₁₀} p_{n₀₀,n₁₀} x^{n₀₀} y^{n₁₀}
ψ₂(x, y, z) = Σ_{n₀₁,n₂₁} p_{n₀₁,n₂₁} x^{n₀₁} z^{n₂₁}
ψ₃(x, y, z) = Σ_{n₀₂,n₂₂} p_{n₀₂,n₂₂} x^{n₀₂} z^{n₂₂}

Note that ψ1 does not depend on z while ψ2 and ψ3 do not depend on y.

We define α₀ = (0, 0, 0) and set α_g = (ψ₁(α_{g−1}), ψ₂(α_{g−1}), ψ₃(α_{g−1})). Then taking α to be the limit as g→∞, the first entry of α is the probability that the disease goes extinct starting from a single primary case.

5.2. Continuous-time SIR dynamics

Now we consider a continuous-time version of SIR dynamics in a heterogeneous population.

Assume again that there are M groups and let β_{ij} be the rate at which an individual in group j causes transmissions that go to group i. Let ξ_i be the expected number of transmissions that an individual in group i has received since time 0. Finally assume that individuals in group i recover at rate γ_i. Then the number of transmissions an individual in group i has received by time t is Poisson distributed with mean ξ_i. The PGF for the number of transmissions received is thus e^{−ξ_i(1−x)}. Setting x=0, the probability of having received zero transmissions is e^{−ξ_i(t)}. Thus S_i = S_i(0)e^{−ξ_i(t)}. We have I_i = N_i − S_i − R_i and Ṙ_i = γ_iI_i. To find ξ_i, we simply note that the total rate at which group i receives transmissions is Σ_j I_jβ_{ij}, and so

ξ̇_i = Σ_j I_jβ_{ij}/N_i.

Thus:

Theorem 5.2

If the rate of transmission from an infected individual in group j to group i is βij, then

S_i = S_i(0)e^{−ξ_i(t)} (37a)
I_i = N_i − S_i − R_i (37b)
Ṙ_i = γ_iI_i (37c)
ξ̇_i = Σ_j I_jβ_{ij}/N_i (37d)

with ξ(0)=0.
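As an illustration, System (37) can be integrated with simple Euler steps. The Python sketch below uses a hypothetical two-group population; the transmission matrix, recovery rates, initial conditions, and step size are all illustrative choices.

```python
import math

# beta[i][j]: rate at which one infected in group j causes transmissions to group i
beta = [[0.3, 0.1],
        [0.1, 0.2]]
gamma = [0.1, 0.1]
N = [1000.0, 1000.0]
S0 = [999.0, 1000.0]          # one initial infection, placed in group 0

xi = [0.0, 0.0]               # expected transmissions received per individual
R = [0.0, 0.0]
dt = 0.01
for _ in range(int(200.0 / dt)):
    S = [S0[i] * math.exp(-xi[i]) for i in range(2)]   # (37a)
    I = [N[i] - S[i] - R[i] for i in range(2)]         # (37b)
    for i in range(2):
        xi[i] += dt * sum(beta[i][j] * I[j] for j in range(2)) / N[i]  # (37d)
        R[i] += dt * gamma[i] * I[i]                                   # (37c)

S = [S0[i] * math.exp(-xi[i]) for i in range(2)]
I = [N[i] - S[i] - R[i] for i in range(2)]
print(R)  # final numbers recovered in each group
```

With these parameters both groups experience a large epidemic; a production implementation would use an adaptive ODE solver rather than fixed Euler steps.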

5.3. Exercises

Exercise 5.1

Consider a vector-borne disease for which each infected individual infects a Poisson-distributed number of vectors, with mean λ. Each infected vector causes i infections with probability p_i = π^i(1−π) for some π ∈ [0,1]. This scenario corresponds to human infection lasting for a fixed time with some constant transmission rate to vectors, and each vector having probability π of living to bite again after each bite and transmitting with probability 1 if biting.

  • a.

    Let α_{g|1} and α_{g|2} be the probabilities that an outbreak would go extinct within g generations starting with an infected human or vector respectively. Find the vector-valued function ψ(x) = (ψ₁(x), ψ₂(x)). That is, what are the PGFs ψ₁(x₁, x₂) and ψ₂(x₁, x₂)?

  • b.

    Set λ=3 and π=0.5. Find the probability of an epidemic if one infected human is introduced.

  • c.

    For the same values, find the probability of an epidemic if one infected vector is introduced.

  • d.

    Find ψ2(ψ1(0,x),0). How should we interpret the terms of its Taylor Series expansion?

Exercise 5.2

Starting from the equations

Ṡ_i = −(S_i/N_i)Σ_j β_{ij}I_j
İ_i = −γ_iI_i + (S_i/N_i)Σ_j β_{ij}I_j
Ṙ_i = γ_iI_i

use integrating factors to derive System (37).

Exercise 5.3

Assume the population is grouped into subgroups of size N_i with N = Σ_i N_i, and the i-th subgroup has a parameter κ_i representing its rate of contact with others. Take

β_{ji} = κ_jκ_iβ/(N⟨κ⟩)

to be the transmission rate from a single type i infected individual to a given type j individual, where ⟨κ⟩ = Σ_ℓ κ_ℓN_ℓ/N, and assume all infected individuals recover with the same rate γ.

Define θ = e^{−β(Σ_j κ_j∫₀ᵗ I_j(τ)dτ)/Σ_j κ_jN_j} and define the PGF ψ(x) = Σ_i (N_i/N)x^{κ_i}. Let S = Σ_i S_i, I = Σ_i I_i, and R = Σ_i R_i.

  • a.

    Explain what assumptions this model makes about interactions between individuals in groups i and j.

  • b.

    Show that

S = Nψ(θ)
I = N − S − R
Ṙ = γI
θ̇ = −βθ Σ_j κ_jI_j / Σ_j κ_jN_j

with θ(0) = 1.

  • c.

    Explain why Σ_j κ_jI_j/Σ_j κ_jN_j = 1 − Σ_j κ_jS_j/Σ_j κ_jN_j − Σ_j κ_jR_j/Σ_j κ_jN_j.

  • d.

    Show that Σ_j κ_jS_j/Σ_j κ_jN_j = θψ′(θ)/ψ′(1).

  • e.

    Show that (d/dt)(Σ_j κ_jR_j/Σ_j κ_jN_j) = −(γ/β)θ̇/θ, and solve for Σ_j κ_jR_j/Σ_j κ_jN_j in terms of θ assuming R_j(0) = 0 for all j.

  • f.

    Thus conclude that

θ̇ = −βθ + βθ²ψ′(θ)/ψ′(1) + γθ ln θ.

6. Discussion

There are many contexts where we are interested in how a newly introduced infectious disease would spread. We encounter situations like this in the spread of zoonotic infections such as monkeypox or Ebola, in the importation of novel diseases such as Zika into the Americas, and in the reintroduction of locally eliminated diseases such as malaria.

PGFs are an important tool for the analysis of epidemics, particularly at early stages. They allow us to relate the individual-level transmission process to the distribution of outcomes. This allows us to take data about the transmission process and make predictions about the possible outcomes, but it also allows us to take observed outbreaks and use them to infer the individual-level transmission properties.

For SIR disease PGFs also provide a useful alternative formulation to the usual mass-action equations. This formulation leads to a simple derivation of final-size relations and helps explain why previous studies have shown that a wide range of disease assumptions give the same final size relation.

Our goal with this primer has been to introduce researchers to the many applications of PGFs to disease spread. We have used the appendices to derive some of the more technical properties of PGFs. Additionally we have developed a Python package Invasion_PGF which allows for quick calculation of the results in the first three sections of this primer. A detailed description of the package is in Appendix C. The software can be downloaded at https://github.com/joelmiller/Invasion_PGF. Documentation is available within the repository, starting with the file docs/_build/html/index.html. The supplementary information includes code that uses Invasion_PGF to generate the figures of Section 2.

Acknowledgments

This work was funded by Global Good.

I thank Linda Allen for useful discussion about the Kolmogorov equations. Hao Hu played an important role in inspiring this work and testing the methods. Hil Lyons and Monique Ambrose provided valuable feedback on the discussion of inference. Amelia Bertozzi-Villa and Monique Ambrose read over drafts and recommended a number of changes that have significantly improved the presentation.

The python code and output in Appendix C was incorporated using Pythontex (Poore, 2015). I relied heavily on https://tex.stackexchange.com/a/355343/70067 by “touhami” in setting up the solutions to the exercises.

Handling Editor: J. Wu

Footnotes

Peer review under responsibility of KeAi Communications Co., Ltd.

1

Although this converges for any given z in [0,1], it does not do so “uniformly” if R₀ > 1. That is, for R₀ > 1, no matter how large g is, there are always some values of z < 1, but sufficiently close to 1, which are far from converged.

2

If the index case causes 0 infections and its first offspring causes 1 infection, we have a sequence of two numbers that sum to 1, but it is biologically meaningless because it does not make sense to talk about the first offspring of an individual who causes no infections.

3

Throughout this section, we assume that log is taken with base e.

4

The Gamma function is an analytic function that satisfies Γ(n+1) = n! for positive integer values, so to calculate log(n!) we use lgamma(n+1).

5

If the sequence is not a Łukasiewicz word, then either it is the start of a sequence corresponding to a larger (possibly infinite) tree, or some initial subsequence corresponds to a completed tree.

Appendix D

Supplementary data to this article can be found online at https://doi.org/10.1016/j.idm.2018.08.001.

Appendix A. Important properties of PGFs

In this appendix, we give some theoretical background behind the important properties of PGFs which we use in the main part of the primer. We attempt to make each subsection self-contained so that the reader has a choice of reading through the appendix in its entirety, or waiting until a property is used before reading that section. Because we expect the appendix is more likely to be read piecemeal, the exercises are interspersed through the text where the relevant material appears.

A PGF has been described as “a clothesline on which we hang up a sequence of numbers for display” (Wilf, 2005). Similarly (Pólya, 1990) says “A generating function is a device somewhat similar to a bag. Instead of carrying many little objects detachedly, which could be embarrassing, we put them all in a bag, and then we have only one object to carry, the bag.” Indeed for many purposes mathematicians use PGFs primarily because once we have the distribution put into this “bag”, many more mathematical tools are available, allowing us to derive interesting and sometimes surprising identities (Wilf, 2005).

However, for our purposes there is a meaningful direct interpretation of a PGF. Assume that we are interested in the probability that an event does not happen given some unknown number i of independent identical Bernoulli trials, with probability α that the event does not happen in any one trial. Let r_i represent the probability that there are i trials. Then the probability that the event does not occur in any trial is

Σ_i r_i α^i = f(α),

and so PGFs emerge naturally in this context.

In infectious disease, this context occurs frequently and many results in this primer can be expressed in this framework. For reference, we make this property more formal:

Property A.1. Assume we have a process consisting of a random number i of independent identical Bernoulli trials. Let r_i denote the probability of a given i and f(x) = Σ_i r_i x^i be its PGF. If α is the probability that a random trial fails, then f(α) is the probability that all trials fail.

Appendix A.1. Properties related to individual coefficients

We start by investigating how to find the coefficients of a PGF if we can calculate the numeric value of the PGF at any point.

This section makes use of the imaginary number i = √−1, and so in this section we avoid using i as an index in the sum of f(x).

Property A.2. Given a PGF f(x) = Σ_n r_n x^n, the coefficient of x^n in its expansion for a particular n can be calculated by taking n derivatives, evaluating the result at x=0, and dividing by n!. That is

r_n = (1/n!)(d/dx)^n f(x)|_{x=0}

This result holds for any function with a Taylor Series (it does not use any special properties of PGFs).

Exercise A.1. Prove Property A.2 [write out the sum and show that the derivatives eliminate any r_m for m < n, the leading coefficient of the result is n!r_n, and the later terms all vanish at x=0].

There are many contexts in which we can only calculate a function numerically. In this case the calculation of these derivatives is likely to be difficult and inaccurate. An improved way to calculate it is given by a Cauchy integral (Moore & Newman, 2000). This is a standard result of Complex Analysis, and initially we simply take it as given.

r_n = (1/(2πi)) ∮ f(z)/z^{n+1} dz

This integral can be done on a closed circle around the origin, z = Re^{iθ}, in which case dz = iz dθ. Then r_n can be rewritten as

r_n = (1/2π) ∫₀^{2π} f(Re^{iθ}) R^{−n} e^{−inθ} dθ

Using another substitution, θ=2πu, we find dθ=2πdu with u varying from 0 to 1. This integral becomes

r_n = ∫₀¹ f(Re^{2πiu}) R^{−n} e^{−2nπiu} du

The integral on the right hand side can be approximated by a simple summation and we find

r_n ≈ (1/M) Σ_{m=1}^{M} f(Re^{2πim/M}) R^{−n} e^{−2nπim/M}

for large M.

A few technical steps show that the PGF f(z) converges for any z with |z| ≤ 1 (any PGF is analytic inside the unit circle, and the PGF converges everywhere on the unit circle [the coefficients are all non-negative and the sum converges at z=1, so it converges absolutely on the unit circle]). Thus this integral can be performed for any positive R ≤ 1 (and in many cases it can be done for larger R). We have found that the unit circle (R=1) yields remarkably good accuracy, so we recommend using it unless there is a good reason not to. Some discussion of identifying the optimal radius appears in (Bornemann, 2011).

Thus we have

Property A.3. Given a PGF f(x), the coefficient of xn in its expansion can be calculated by the integral

r_n = ∫₀¹ f(Re^{2πiu}) R^{−n} e^{−2nπiu} du (A.1)

This is well-approximated by the summation

r_n ≈ (1/M) Σ_{m=1}^{M} f(Re^{2πim/M}) R^{−n} e^{−2nπim/M} (A.2)

with R = 1 and M ≫ 1.

It turns out that this approach is closely related to the approach to get a particular coefficient of a Fourier Series. Once the variable is changed from z to θ, our function is effectively a (complex) Fourier Series in θ, and the integral corresponds to the standard approach to finding the nth coefficient of a Fourier Series.
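For concreteness, Equation (A.2) takes only a few lines of Python. The helper function below is our own illustrative implementation (not from any package), checked against the Poisson PGF e^{−λ(1−x)}, whose coefficients λⁿe^{−λ}/n! are known exactly:

```python
import cmath, math

def coefficient(f, n, M=64, R=1.0):
    """Approximate the coefficient r_n of a PGF f via Equation (A.2)."""
    total = sum(f(R * cmath.exp(2j * math.pi * m / M))
                * R ** (-n) * cmath.exp(-2j * math.pi * n * m / M)
                for m in range(1, M + 1))
    return (total / M).real

lam = 2.0
f = lambda z: cmath.exp(-lam * (1.0 - z))    # Poisson PGF with mean 2
approx = coefficient(f, 3)
exact = lam ** 3 * math.exp(-lam) / math.factorial(3)
print(approx, exact)  # the two agree to many digits
```

Because the summand is periodic in m, the error comes only from aliased coefficients r_{n+M}, r_{n+2M}, …, which for well-behaved distributions are negligible even at modest M.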

Exercise A.2. Verification of Equation (A.1):

In this exercise we show that the formula in Equation (A.1) yields r_n. Assume that the integral is performed on a circle of radius R ≤ 1 about the origin.

  • a.

    Write f(z) = Σ_m r_m z^m and rewrite ∫₀¹ f(Re^{2πiu}) R^{−n} e^{−2nπiu} du as a sum

∫₀¹ f(Re^{2πiu}) R^{−n} e^{−2nπiu} du = Σ_m r_m ∫₀¹ R^{m−n} e^{2(m−n)πiu} du
  • b.

    Show that for m=n the integral in the summation on the right hand side is 1.

  • c.

    Show that for m ≠ n, the integral in the summation on the right hand side is 0.

  • d.

    Thus conclude that the integral on the left hand side must yield rn.

Exercise A.3. Let f(z) = e^z = 1 + z + z²/2 + z³/6 + z⁴/24 + z⁵/120 + ⋯. Write a program that estimates r₀, r₁, …, r₅ using Equation (A.2) with R = 1. Report the values to four significant figures for

  • a.

    M=2

  • b.

    M=4

  • c.

    M=5

  • d.

    M=10

  • e.

    M=20.

  • f.

    How fast is convergence for different rn?

Appendix A.2. Properties related to distribution moments

We next look at two straightforward properties of the moments of the distribution r_i having PGF f(x). We return to using i = 0, 1, … as an indexing variable, so i is no longer √−1. We have

f(1) = Σ_i r_i 1^i = Σ_i r_i = 1

where the final equality is because the ri determine a probability distribution.

With mildly more effort, we have

f′(1) = Σ_i r_i i 1^{i−1} = Σ_i i r_i = E(i)

where E(i) denotes the expected value of i. These arguments show

Property A.4. Any PGF f(x) must satisfy f(1)=1.

Property A.5. The expected value of a random variable i whose distribution has PGF f(x) is given by E(i)=f'(1).

It is straightforward to derive relationships for E(i²) and higher order moments by repeated differentiation of f and evaluating the result at 1.
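These properties are easy to check numerically. The Python sketch below uses central differences at x = 1 for the Poisson PGF f(x) = e^{−λ(1−x)} (which converges for all real x, so evaluating slightly above 1 is safe); the step size is an arbitrary choice. Since f″(1) = E(i(i−1)), the variance follows as f″(1) + f′(1) − f′(1)²:

```python
import math

lam = 3.0
f = lambda x: math.exp(-lam * (1.0 - x))   # Poisson PGF; E(i) = Var(i) = lam
h = 1e-5

fp = (f(1.0 + h) - f(1.0 - h)) / (2.0 * h)              # ~ f'(1) = E(i)
fpp = (f(1.0 + h) - 2.0 * f(1.0) + f(1.0 - h)) / h**2   # ~ f''(1) = E(i(i-1))
variance = fpp + fp - fp ** 2

print(fp, variance)  # both close to lam = 3
```

For PGFs that fail to converge for x > 1, one-sided differences at x = 1 (or exact differentiation) would be needed instead.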

Appendix A.3. Properties related to function composition

To motivate function composition, we start with an example.

Example A.1. Consider a weighted coin which comes up ‘Success’ with probability p and ‘Failure’ with probability 1−p. We play a game in which we stop at the first failure, and otherwise flip it again. Define f(x) = px + 1 − p.

Let α_g be the probability of failure within the first g flips. Then α₀ = 0 and α₁ = 1 − p = f(0) are easily calculated.

More generally, the probability of starting the game and failing immediately is 1 − p = f(0), while the probability of having a success and flipping again is p, at which point the probability of failure within the remaining g−1 flips is α_{g−1}. So we have α_g = (1−p) + pα_{g−1} = f(α_{g−1}). Using induction we can show that the probability of failure within g flips is f^{[g]}(0).

Exercise A.4. The derivation in Example A.1 was based on looking at what happened after a single flip and then looking g1 flips into the future in the inductive step. Derive αg=f(αg1) by instead looking g1 flips into the future and then considering one additional step. [the distinction between this argument and the previous one becomes useful in the continuous-time case where we use the ‘backward’ or ‘forward’ Kolmogorov equations.]

Exercise A.5. Consider a fair six-sided die with numbers 0, 1, …, 5, rather than the usual 1, …, 6. We roll the die once. Then we look at the result, and roll that many copies (if zero, we stop), then we look at the sum of the result and repeat.

f(x) = (1 + x + ⋯ + x⁵)/6 = (x⁶ − 1)/(6(x − 1)) for x ≠ 1, with f(1) = 1.

Define αg to be the probability the process stops after g iterations (with α0=0 and α1=1/6).

  • a.

    Find an expression for α_g, the probability that by the g-th iteration the process has stopped, in terms of f(x).

  • b.

    Rephrase this question in terms of the extinction probability for an infectious disease.

Processes like that in Exercise A.5 can be thought of as “birth-death” processes where each event generates a discrete number of new events. Our examples above show that function composition arises naturally in calculating the probability of extinction in a birth-death process. We show below that it also arises naturally when we want to know the distribution of population sizes after some number of generations rather than just the probability of extinction.

Specifically, we often assume an initially infected individual causes some random number of new infections i from some distribution. Then we assume that each of those new infections independently causes an additional random number of infections from the same distribution. We will be interested in how to get from the one-generation PGF to the PGF for the distribution after g generations.

We derive this in a few stages.

  • We first show that if we take two numbers from different distributions with PGFs f(x) and h(x), then their sum has PGF f(x)h(x) [Property A.6]. Then, applying this inductively, we conclude that the sum of n numbers from a distribution with PGF f(x) has PGF [f(x)]^n.

  • We also show that if the probability we take a number from the distribution with PGF f(x) is π1 and the probability we take it from the distribution with PGF h(x) is π2, then the PGF of the resulting distribution is π1f(x)+π2h(x) [Property A.7].

  • Putting these two properties together, we can show that if we choose i from a distribution with PGF f(x) and then choose i different values from a distribution with PGF h(x), then the sum of the i values has PGF f(h(x)) [Property A.8].

Our main use of Properties A.6 and A.7 is as stepping stones towards Property A.8.

Consider two probability distributions: let r_i be the probability of i for the first distribution and q_j be the probability of j for the second. Assume they have PGFs f(x) = Σ_i r_i x^i and h(x) = Σ_j q_j x^j respectively.

We are first interested in the process of choosing i from the first distribution, j from the second, and adding them. In the disease context this arises where the two distributions give the probability that one individual infects i and another infects j and we want to know the probability of a particular sum.

The probability of obtaining a particular sum k is

Σ_{i=0}^{k} r_i q_{k−i}

So the PGF of the sum is Σ_k (Σ_{i=0}^{k} r_i q_{k−i}) x^k. By inspection, this is equal to the product f(x)h(x). This means that the PGF of the process where we choose i from the first distribution and j from the second and look at the sum is the product f(x)h(x).

We have shown

Property A.6. Consider two probability distributions, r₀, r₁, … and q₀, q₁, …, with PGFs f(x) = Σ_i r_i x^i and h(x) = Σ_j q_j x^j. Then if we choose i from the distribution r_i and j from the distribution q_j, the PGF of their sum is f(x)h(x).

Usually we want the special case where we choose two numbers from the same distribution having PGF f(x). The PGF for the sum is [f(x)]². The PGF for the sum of three numbers from the same distribution can be thought of as the result of combining [f(x)]² and f(x), yielding [f(x)]³. By induction, it follows that the PGF for the sum of i numbers is [f(x)]^i.

Now we want to know what happens if we are not sure what the current system state is. For example, we might not know if we have 1 or 2 infected individuals, and the outcome at the next generation is different based on which it is.

We use the distributions r_i and q_j. We assume that with probability π₁ we choose a random number k from the r_i distribution, while with probability π₂ = 1 − π₁ it is chosen from the q_j distribution. Then the probability of a particular value k occurring is π₁r_k + π₂q_k, and the resulting PGF is Σ_k (π₁r_k + π₂q_k)x^k = π₁f(x) + π₂h(x). This becomes:

Property A.7. Consider two probability distributions, r₀, r₁, … and q₀, q₁, …, with PGFs f(x) = Σ_i r_i x^i and h(x) = Σ_j q_j x^j. We consider a new process where with probability π₁ we choose k from the r_i distribution and with probability π₂ = 1 − π₁ we choose k from the q_j distribution. Then the PGF of the resulting distribution is π₁f(x) + π₂h(x).

We finally consider a process in which we have two distributions with PGFs f(x) = Σ_i r_i x^i and h(x) = Σ_j q_j x^j. We choose the number i from the distribution r_i and then take the sum of i values chosen from the q_j distribution, Σ_{ℓ=1}^{i} j_ℓ. Both the number of terms in the sum and their values are random variables. Using the results above, the PGF of the resulting sum is Σ_i r_i [h(x)]^i = f(h(x)). Thus we have

Property A.8. Consider two probability distributions, r₀, r₁, … and q₀, q₁, …, with PGFs f(x) = Σ_i r_i x^i and h(x) = Σ_j q_j x^j. Then if we choose i from the distribution r_i and then take the sum of i values chosen from the distribution q_j, the PGF of the sum of those i values is f(h(x)).

This property is closely related to the spread of infectious disease. An individual may infect i others, and then each of them causes additional infections. The number of these second generation cases is the sum of i random numbers Σ_{ℓ=1}^{i} j_ℓ, where j_ℓ is the number of additional infections caused by the ℓ-th infection caused by the initial individual. So if f(x) is the PGF for the distribution of the number of infections caused by the first infection and h(x) is the PGF for the distribution of the number of infections caused by the offspring, then f(h(x)) is the PGF for the number infected in the second generation [and if the two distributions are the same this is f^{[2]}(x)]. Repeated iteration gives us the distribution after g generations.
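Property A.8 can be verified by brute-force enumeration on small examples. In the Python sketch below the two distributions are arbitrary illustrative choices; we enumerate every way of drawing i from r and then i values from q, and compare the resulting distribution of the sum with the composition f(h(x)):

```python
from itertools import product

r = [0.2, 0.5, 0.3]     # P(i = 0, 1, 2); PGF f
q = [0.4, 0.4, 0.2]     # P(j = 0, 1, 2); PGF h

def pgf(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# distribution of the sum, by direct enumeration
direct = [0.0] * 5      # the sum is at most 2 * 2 = 4
for i, ri in enumerate(r):
    for draws in product(range(len(q)), repeat=i):
        prob = ri
        for j in draws:
            prob *= q[j]
        direct[sum(draws)] += prob

# the PGF of the sum should equal f(h(x))
for x in (0.0, 0.3, 0.7, 1.0):
    assert abs(pgf(r, pgf(q, x)) - pgf(direct, x)) < 1e-12
print("f(h(x)) matches the enumerated distribution")
```

The enumeration grows exponentially with the number of generations, which is precisely why the composition of PGFs is so useful in practice.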

Exercise A.6. Note that if we interchange p and q in the PGF of the negative binomial distribution in Table 1, it is simply the PGF of the geometric distribution raised to the power r̂. A number chosen from the negative binomial distribution can be defined as the number of successful trials (each with success probability p) before the r̂-th failure.

Using this and Property A.8, derive the PGF of the negative binomial.

Exercise A.7. Sicherman dice (Gallian & Rusin, 1979; Gardner, 1978).

To motivate this exercise, consider two tetrahedral dice, each numbered 1, 2, 3, 4. When we roll them, we get sums from 2 to 8, each with its own probability, which we can infer from this table:

 +   1  2  3  4
 1   2  3  4  5
 2   3  4  5  6
 3   4  5  6  7
 4   5  6  7  8

However another pair of tetrahedral dice, labelled 1,2,2,3 and 1,3,3,5 yields the same sums with the same probabilities:

 +   1  3  3  5
 1   2  4  4  6
 2   3  5  5  7
 2   3  5  5  7
 3   4  6  6  8

We now try to find a similar pair of 6-sided dice. First consider a pair of standard 6-sided dice.

  • a.

    Show that the PGF of each die is f(x) = (x + x² + x³ + x⁴ + x⁵ + x⁶)/6.

  • b.

    Fill in the tables showing the possible sums from rolling two dice (fill in each square with the sum of the two entries) and multiplication for two polynomials (fill in each square with the product of the two entries):

Image 3

  • c.

    Explain the similarity.

  • d.

    Show that each step of the following factorization is correct:

f(x) = x(1 + x + x² + x³ + x⁴ + x⁵)/6 = x(1 + x + x²)(1 + x³)/6 = x(1 + x + x²)(1 + x)(1 − x + x²)/6.

This cannot be factored further with real coefficients, and indeed a property similar to prime factorization holds: any factorization of f(x)f(x) as h₁(x)h₂(x) with real coefficients has the property that each of h₁ and h₂ is a constant times a product of powers of these “prime” polynomials.

We seek two new six-sided dice (each different) such that the sum of a roll of the two dice has the same probabilities as the normal dice. The two dice have positive integer values on them (so no fair adding a constant c to everything on one die and subtracting c on the other). Let h1(x) and h2(x) be their PGFs.

  • e.

Explain why we must have h1(x)h2(x) = [f(x)]^2.

  • f.

If the dice have numbers a_1, …, a_6 and b_1, …, b_6, show that their PGFs are of the form h1(x) = Σ_i x^{a_i}/6 and h2(x) = Σ_i x^{b_i}/6 where all a_i and b_i are positive integers.

  • g.

    Given the properties we want for the dice, find h1(0) and h2(0).

  • h.

    Given the properties we want for the dice, find h1(1) and h2(1).

  • i.

Using the values at x = 0 and x = 1, explain why h1(x) = x(1 + x + x^2)(1 + x)(1 − x + x^2)^b/6 and h2(x) = x(1 + x + x^2)(1 + x)(1 − x + x^2)^{2−b}/6 where b is 0, 1, or 2.

  • j.

The case b = 1 gives the normal dice. Consider b = 0 (b = 2 gives the same final result). Find h1(x). For reference, h2(x) = (x + x^3 + x^4 + x^5 + x^6 + x^8)/6.

  • k.

    Create the table for the two dice corresponding to h1(x) and h2(x) and verify that the sums occur with the same frequency as a normal pair:

Image 4
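Part k can also be checked numerically: multiplying the PGFs of two dice is the same as convolving their coefficient arrays. A quick standalone sketch (not part of the paper's software), with the common factor 1/6 omitted so that the arrays hold integer counts of faces:

```python
import numpy as np

# PGF coefficient arrays (index = face value); 1/6 omitted throughout.
standard = np.array([0, 1, 1, 1, 1, 1, 1])       # faces 1,2,3,4,5,6
h1 = np.array([0, 1, 2, 2, 1])                   # Sicherman faces 1,2,2,3,3,4
h2 = np.array([0, 1, 0, 1, 1, 1, 1, 0, 1])       # Sicherman faces 1,3,4,5,6,8

# Multiplying PGFs = convolving coefficients, so the two pairs of dice
# give identical sum distributions iff these arrays match:
assert np.array_equal(np.convolve(h1, h2), np.convolve(standard, standard))
```

The assertion passes, confirming that the Sicherman pair and the standard pair give the same distribution of sums.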

Fig. A.10. Cobweb diagrams: We take the function f(x) = (1 + x^3)/2. A cobweb diagram is built by alternately drawing vertical lines from the diagonal to f(x) and then horizontal lines from f(x) to the diagonal. The dashed lines show α_g = f(α_{g−1}) starting with α_0 = 0 and highlight the relation to the iterative process.

Exercise A.8. Early-time outbreak dynamics

  • a

Consider normal dice. The PGF is f(x) = (x + x^2 + x^3 + x^4 + x^5 + x^6)/6. Consider the process where we roll a die, take the result i, and then roll i other dice and look at their sum. What is the PGF of the resulting sum in terms of f?

  • b

    If an infected individual causes anywhere from 1 to 6 infections, all with equal probability, find the PGF for the number of infections in generation 2 if there is one infection in generation 0. [you can express the result in terms of f]

  • c

    And in generation g (assuming depletion of susceptibles is unimportant)?

Appendix A.4. Properties related to iteration of PGFs

There are various contexts in which we might iterate to calculate f^{[n]}(x) (the result of applying f to x n times).

In the disease context, this occurs most frequently in calculating the probability of outbreak extinction. If we think of α as the probability that the outbreak goes extinct starting from a single individual, then from Property A.1 we would expect α = f(α̂), where α̂ is the probability that an offspring of the individual fails to produce an epidemic. However, under common assumptions the number of infections caused by an offspring follows the same distribution as for the parent. In this case we conclude α = α̂ and so α = f(α).

It turns out that a good way to solve for α is iteration, starting with the guess α_0 = 0. We will show that this converges to the correct value [x = f(x) can have multiple solutions, only one of which is the correct α].
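As a concrete sketch of this iteration (standalone code, not from the Invasion_PGF package), the following finds the fixed point of the function f(x) = (1 + x^3)/2 shown in Fig. A.10:

```python
def extinction_prob(f, tol=1e-12, max_iter=10**6):
    """Iterate alpha_g = f(alpha_{g-1}) from alpha_0 = 0 to a fixed point."""
    alpha = 0.0
    for _ in range(max_iter):
        new = f(alpha)
        if abs(new - alpha) < tol:
            return new
        alpha = new
    return alpha

# The function from Fig. A.10: f(x) = (1 + x^3)/2
alpha = extinction_prob(lambda x: (1 + x**3) / 2)
# alpha converges to the smaller root of x = f(x), (sqrt(5) - 1)/2,
# not to the other solution x = 1.
```

Starting from α_0 = 0 the iterates increase monotonically, so they are trapped below the smallest fixed point in [0, 1], here (√5 − 1)/2 ≈ 0.618.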

Fig. A.10 demonstrates how the iterative process can be represented by a "cobweb diagram" (May, 1976; Peitgen, Jürgens, & Saupe, 2006). To use a cobweb diagram to study the behavior of f^{[g]}(x0), we draw the line y = x and the curve y = f(x). Then at x0 we draw a vertical line to the curve y = f(x). From there we draw a horizontal line to the line y = x [arriving at the point (x1, x1)]. We then repeat these steps, drawing a vertical line to y = f(x) and a horizontal line to y = x. Cobweb diagrams are particularly useful in studying behavior near fixed or periodic points.

Exercise A.9. Understanding cobweb diagrams

From Fig. A.10 the origin of the term “cobweb” may be unclear. Because of properties of PGFs, the more interesting behavior does not occur for our applications. Here we investigate cobweb diagrams in more detail for non-PGF functions. Since we use f(x) to denote a PGF, in this exercise we use z(x) for an arbitrary function.

  • a.

Consider the line z(x) = 2(1 − x)/3. Starting with x0 = 0, show how the first few iterations of x_i = z(x_{i−1}) can be found using a cobweb diagram (do not explicitly calculate the values).

  • b.

Now consider the line z(x) = 2(1 − x). The solution to z(x) = x is x = 2/3. Starting from an initial x0 close to (but not quite equal to) 2/3, do several iterations of the cobweb diagram graphically.

  • c.

Repeat this with the lines z(x) = 1/4 + x/2 starting at x0 = 0 and z(x) = −1 + 3x starting close to where x = z(x).

  • d.

    What is different when the slope is positive or negative?

  • e.

    Can you predict what condition on the slope's magnitude leads to convergence to or divergence from the solution to x=z(x) when z is a line?

So far we have considered lines z(x). Now assume z(x) is nonlinear and consider the behavior of cobweb diagrams close to a point where x=z(x).

  • f.

Use Taylor Series to argue that (except for degenerate cases where |z′| = 1 at the intercept) it is only the slope at the intercept that determines the behavior sufficiently close to the intercept.

Exercise A.10. Structure of fixed points of f(x).

Consider a PGF f(x) = Σ_i r_i x^i, and assume r_0 > 0.

  • a.

    Show that f(1)=1 and f(0)>0.

  • b.

Show that f(x) is convex (that is, f″(x) ≥ 0) for x > 0. [hint: r_i ≥ 0 for all i]

  • c.

Thus argue that if f′(1) ≤ 1, then x = f(x) has only one solution in [0,1], namely x = 1 (since f(1) = 1). It may help to draw pictures of f(x) and the function y = x for x in [0,1].

  • d.

Explain why if there is a point x0 ≠ 1 where f(x0) = x0 and f(x) > x for x in some interval (x1, x0) to the left of x0, then 0 < f′(x0) < 1.

  • e.

Thus show that if f′(1) > 1 then there are exactly two solutions to x = f(x) in [0,1], one of which is x = 1.

These results suggest:

Property A.9. Assume f(x) = Σ_i r_i x^i is a PGF, and f(0) > 0.

  • If f′(1) ≤ 1 then the only intercept of x = f(x) in [0,1] is at x = 1.

  • Otherwise, there is another intercept x* with 0 < x* < 1; if x < x* then x < f(x) < x*, while if x > x* then x > f(x) > x*, and for 0 ≤ x0 < 1, f^{[g]}(x0) converges monotonically to x*.

The assumption r_0 > 0 was used to rule out f(x) = x. Excluding this degenerate case, these results hold even if r_0 = 0, in which case we can show f′(1) > 1 and x* = 0.

To sketch the proof of this property, we note that clearly f(1)=1, so if f(0)>0 then either f(x) crosses y=x at some intermediate 0<x*<1 or it does not cross until x=1. Then using the fact that for x>0 the slope of f is positive and increasing, we can inspect the cobweb diagram to see these results.
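We can also find x* directly by root finding. The sketch below is standalone code; the Poisson offspring distribution with mean ℛ0 = 2, for which f(x) = e^{ℛ0(x−1)} and f′(1) = ℛ0 > 1, is an illustrative choice:

```python
import math
from scipy.optimize import brentq

# Illustrative example: Poisson offspring distribution with mean R0,
# whose PGF is f(x) = exp(R0 (x - 1)), so f'(1) = R0.
R0 = 2.0
def f(x):
    return math.exp(R0 * (x - 1))

# f(0) - 0 > 0 while f(x) - x < 0 just below x = 1, so the intermediate
# fixed point x* guaranteed by Property A.9 is bracketed in (0, 1).
xstar = brentq(lambda x: f(x) - x, 0, 1 - 1e-6)
```

For ℛ0 = 2 the root is x* ≈ 0.203, the familiar extinction probability for a Poisson offspring distribution with that mean.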

Appendix A.5. Finding the Kolmogorov Equations

To study continuous-time dynamics, we will want to have partial differential equations (PDEs) where we write the time derivative of a PGF f(x,t) or f(x,y,t) in terms of f and its spatial derivatives.

We will use two approaches to find the derivative. Both start with the assumption that we know f(x,t) and calculate the derivative by finding f(x, t+Δt) and using the definition of the derivative:

∂f(x,t)/∂t = lim_{Δt→0} [f(x, t+Δt) − f(x,t)]/Δt

The methods differ in how they find f(x, t+Δt). The distinction is closely related to the observation in Exercise 2.7 that μ^{[g]}(x) can be written as either μ^{[g−1]}(μ(x)) or μ(μ^{[g−1]}(x)).

  • The first involves assuming we know f(x,t) and then looking through all of the possible transitions to find how the system changes going from t to t+Δt. This will yield the forward Kolmogorov equations.

  • The second involves starting from the initial condition f(x,0) and finding f(x,Δt) by investigating all of the possible transitions. Then taking f(x,Δt) and f(x,t) we are able to find f(x,t+Δt). This will yield the backward Kolmogorov equations.

Appendix A.5.1. Forward Kolmogorov Equations

We start with the forward Kolmogorov equations. We let ri(t) denote the probability that at time t there are i individuals, and define the PGF

f(x,t) = Σ_i r_i(t) x^i

We begin by looking at events that can be treated as if they remove one individual and replace it with m individuals. Thus i is replaced by i + m − 1:

i → i + m − 1.

For example early in an epidemic, we may assume that an infected individual causes new infections at rate β. The outcome of an infection event is equivalent to the removal of the infected individual and replacement by two infected individuals. Similarly, a recovery event occurs with rate γ and is equivalent to removal with no replacement. So λ2=β, λ0=γ, and all other λm are 0.

Our events happen at a per-individual rate λ_m, so the total rate at which such an event occurs across a population of i individuals is λ_m i. Events that can be modeled like this include decay of a radioactive particle, recovery of an infected individual, or division of a cell. We assume that different events may be possible, each having a different m. If multiple events produce the same m (for example, emigration or death), we can combine their rates into a single λ_m.

It will be useful to define

Λ = Σ_m λ_m

to be the combined per-capita rate of all possible events and

h(x) = Σ_m λ_m x^m / Λ

We can think of h(x) as the PGF for the number of new individuals given that a random event happens (since λm/Λ is the probability that the random event introduces m individuals).

We start with one derivation of the equation for ḟ(x,t) based on directly calculating f(x, t+Δt) and using the definition of the derivative. An alternate derivation is shown in Exercise A.11. For small Δt the probability that multiple events occur in the same time interval is O(Δt²), and we will see that this is negligible. Assume the system has i individuals at time t, which occurs with probability r_i(t). For a given m, the probability that the corresponding event occurs in the time interval is λ_m i Δt + O(Δt²), while 1 − Σ_m λ_m i Δt + O(Δt²) is the probability that no event occurs and the system remains in state i. If the event occurs, the system leaves the state corresponding to x^i and enters the state corresponding to x^{i+m−1}. Summing over m and i, we have

f(x, t+Δt) = Σ_i r_i(t) [Σ_m (λ_m i Δt) x^{i+m−1} + (1 − Σ_m λ_m i Δt) x^i] + O(Δt²)

The O(Δt²) term corrects for the possibility of multiple events happening in the time interval.

A bit of algebra and separating the i and m summations shows that

f(x, t+Δt) = Σ_i r_i(t) x^i + Σ_m λ_m Δt (x^m − x) Σ_i r_i(t) i x^{i−1} + O(Δt²)
= f(x,t) + Σ_m λ_m (x^m − x) Δt ∂/∂x Σ_i r_i(t) x^i + O(Δt²)
= f(x,t) + Δt (Σ_m λ_m x^m − x Σ_m λ_m) ∂/∂x Σ_i r_i(t) x^i + O(Δt²)
= f(x,t) + Λ Δt [h(x) − x] ∂f(x,t)/∂x + O(Δt²)

So we now have

∂f(x,t)/∂t = lim_{Δt→0} [f(x, t+Δt) − f(x,t)]/Δt = lim_{Δt→0} [Λ Δt [h(x) − x] ∂f(x,t)/∂x + O(Δt²)]/Δt = Λ [h(x) − x] ∂f(x,t)/∂x

We finally have

Property A.10. Let f(x,t) = Σ_i r_i(t) x^i be the PGF for the probability of having i individuals at time t. Assume several events indexed by m can occur, each with rate λ_m i, that remove one individual and replace it with m. Let Λ = Σ_m λ_m be the total per-capita rate and h(x) = Σ_m λ_m x^m / Λ be the PGF of the outcome of a random event. Then

∂f(x,t)/∂t = Λ [h(x) − x] ∂f(x,t)/∂x (A.3)

We look at a heuristic way to interpret this. We can rewrite Equation (A.3) as

ḟ(x,t) = [Σ_m (λ_m x^m − λ_m x)] ∂f(x,t)/∂x

Then if we expand f on the right hand side, we have

Σ_m Σ_i λ_m (x^m − x) i r_i x^{i−1}

The derivative serves the purpose of introducing the factor i into the coefficient of each term, which accounts for the fact that the rate at which events happen is proportional to the total count. The derivative has the additional effect of reducing the exponent by 1, corresponding to the removal of one individual. The λ_m in the remaining factor gives the per-capita rate of changing state. The factor x^m − x captures the fact that m individuals are added when moving to the new state (the x^m term), while the system leaves the current state at the same rate (the −x term).
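A useful check on Equation (A.3) comes from its moments: differentiating with respect to x and setting x = 1 (where h(1) = 1) shows the mean M(t) = ∂f/∂x(1,t) satisfies Ṁ = Λ[h′(1) − 1]M. For the early-epidemic example with λ_2 = β and λ_0 = γ this gives Ṁ = (β − γ)M, so M(t) = e^{(β−γ)t} from one initial infection. The standalone sketch below (the rates and truncation level N are illustrative choices, not from the paper) integrates the underlying equations for the probabilities r_i(t) and confirms this:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Early-epidemic example: lambda_2 = beta (transmission), lambda_0 = gamma (recovery).
beta, gamma = 1.5, 1.0
N = 200  # truncate the state space; valid while P(i >= N) is negligible

def rhs(t, r):
    # r_i' = -(beta+gamma) i r_i + beta (i-1) r_{i-1} + gamma (i+1) r_{i+1}
    i = np.arange(N)
    dr = -(beta + gamma) * i * r
    dr[1:] += beta * i[:-1] * r[:-1]    # transmissions arriving from state i-1
    dr[:-1] += gamma * i[1:] * r[1:]    # recoveries arriving from state i+1
    return dr

r0 = np.zeros(N)
r0[1] = 1.0                             # one initial infection
sol = solve_ivp(rhs, (0, 2), r0, rtol=1e-8, atol=1e-10)
mean = np.arange(N) @ sol.y[:, -1]      # E[I(2)], compare with e^{(beta-gamma) 2}
```

The computed mean agrees with e^{(β−γ)t} to within the truncation and integration error.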

Exercise A.11. Alternate derivation of Equation (A.3)

An alternate way to derive Equation (A.3) is through directly calculating r˙i.

  • a.

Explain why ṙ_i = −Σ_m λ_m i r_i + Σ_m λ_m (i − m + 1) r_{i−m+1}.

  • b.

Taking ḟ(x,t) = Σ_i ṙ_i x^i, derive Equation (A.3).

We can generalize this to the case where there are multiple types of individuals. For the forward Kolmogorov equations, it is relatively straightforward to allow for interactions between individuals. We may be interested in this generalization when considering predator-prey interactions or interactions between infected and susceptible individuals if we are interested in depletion of susceptibles. We assume that there are two types of individuals A and B with counts i and j respectively, and we let rij(t) denote the probability of a given pair i and j. We define the PGF

f(x,y,t) = Σ_{i,j} r_{i,j}(t) x^i y^j

We assume that interactions between an A individual and a B individual occur at a rate proportional to the product ij. We assume that the interaction removes both individuals and replaces them with m of type A and n of type B. We denote the rate by μ_{m,n} i j, and define the sum

M = Σ_{m,n} μ_{m,n}.

We also assume that individuals of type A spontaneously undergo changes as they did above, but they can now be replaced by type A and/or type B individuals. So one individual of type A is removed and replaced by m individuals of type A and n of type B with per-capita rate λ_{m,n}, and the combined rate for one specific transition over the entire set of individuals is λ_{m,n} i. We define

Λ = Σ_{m,n} λ_{m,n}.

We will ignore spontaneous changes by individuals of type B, but the generalization to include these can be found by following the same method.

Finally, let

h(x,y) = Σ_{m,n} λ_{m,n} x^m y^n / Λ

and

g(x,y) = Σ_{m,n} μ_{m,n} x^m y^n / M

be the PGFs for the outcomes of the two types of events.

Then

f(x,y,t+Δt) = Σ_{i,j} r_{i,j}(t) [Σ_{m,n} [(λ_{m,n} i Δt) x^{i+m−1} y^{j+n} + (μ_{m,n} i j Δt) x^{i+m−1} y^{j+n−1}] + (1 − Σ_{m,n} [λ_{m,n} i Δt + μ_{m,n} i j Δt]) x^i y^j] + O(Δt²)
= Σ_{i,j} r_{i,j}(t) x^i y^j + Σ_{m,n} λ_{m,n} (x^m y^n − x) Δt Σ_{i,j} r_{i,j}(t) i x^{i−1} y^j + Σ_{m,n} μ_{m,n} (x^m y^n − xy) Δt Σ_{i,j} r_{i,j}(t) i j x^{i−1} y^{j−1} + O(Δt²)
= f(x,y,t) + Σ_{m,n} λ_{m,n} (x^m y^n − x) Δt ∂f(x,y,t)/∂x + Σ_{m,n} μ_{m,n} (x^m y^n − xy) Δt ∂²f(x,y,t)/∂x∂y + O(Δt²)
= f(x,y,t) + Δt (Λ [h(x,y) − x] ∂f(x,y,t)/∂x + M [g(x,y) − xy] ∂²f(x,y,t)/∂x∂y) + O(Δt²)

So

∂f(x,y,t)/∂t = lim_{Δt→0} [f(x,y,t+Δt) − f(x,y,t)]/Δt = lim_{Δt→0} [Δt Λ [h(x,y) − x] ∂f(x,y,t)/∂x + Δt M [g(x,y) − xy] ∂²f(x,y,t)/∂x∂y + O(Δt²)]/Δt = Λ [h(x,y) − x] ∂f(x,y,t)/∂x + M [g(x,y) − xy] ∂²f(x,y,t)/∂x∂y

We have shown:

Property A.11. Let f(x,y,t) = Σ_{i,j} r_{i,j}(t) x^i y^j be the PGF for the probability of having i type A and j type B individuals. Assume that events occur with rate λ_{m,n} i or μ_{m,n} i j to replace a single type A individual, or one individual of each type, with m type A and n type B individuals. Let Λ = Σ_{m,n} λ_{m,n} and M = Σ_{m,n} μ_{m,n}. Then

∂f(x,y,t)/∂t = Λ [h(x,y) − x] ∂f(x,y,t)/∂x + M [g(x,y) − xy] ∂²f(x,y,t)/∂x∂y (A.4)

where h(x,y) = Σ_{m,n} λ_{m,n} x^m y^n / Λ is the PGF for the outcome of a random event whose rate is proportional to i and g(x,y) = Σ_{m,n} μ_{m,n} x^m y^n / M is the PGF for the outcome of a random event whose rate is proportional to ij.

This can be generalized further if there are events whose rates are proportional only to j or if there are more than two types. The exercise below shows how to generalize this if the rate of events depends on i in a more complicated manner.

Exercise A.12. In many cases interactions between two individuals of the same type are important. These may occur with rate proportional to i(i−1) or i², depending on the specific details. Assume we have only a single type of individual with PGF f(x,t) = Σ_i r_i(t) x^i.

  • a.

If a collection of events that replace two individuals with m individuals occurs with rate β_m i(i−1), show how to write a PDE for f. Your final result should contain ∂²f(x,t)/∂x². Use B = Σ_m β_m and g(x) = Σ_m β_m x^m / B. Follow the derivation of Equation (A.3).

  • b.

If instead the events replace two individuals with m individuals and occur with rate β_m i², find how to incorporate them into a PDE for f. Your final result should contain ∂/∂x (x ∂f(x,t)/∂x), or equivalently ∂f(x,t)/∂x + x ∂²f(x,t)/∂x².

Exercise A.13. Consider a chemical system that begins with some initial amount of chemical A. Let i denote the number of molecules of species A. A molecule of A spontaneously degrades into a molecule of B, with rate ξ per molecule. Let j denote the number of molecules of species B. Species B reacts with A at rate ηij to produce new molecules of species B. The reactions are denoted

A → B
A + B → 2B

Let r_{i,j}(t) denote the probability of i molecules of A and j molecules of B at time t. Let f(x,y,t) = Σ_{i,j} r_{i,j}(t) x^i y^j be the PGF. Find the forward Kolmogorov equation for f(x,y,t).

Appendix A.5.2. Backward Kolmogorov Equations

We now look for another derivation of ∂f(x,t)/∂t; as before we find it by first finding f(x, t+Δt) for small Δt and then using the definition of the derivative. We will assume that each individual acts independently, and at rate λ_m an individual may be removed and replaced by m new individuals. So if there are i total individuals, at rate λ_m i the count i is replaced by i − 1 + m.

Property A.8 plays an important role in our derivation. We define f1(x,t) = Σ_i r_i(t) x^i where we assume that r_1(0) = 1, that is, we start with exactly one individual at time 0. Then Property A.8 shows that f1(x, t1+t2) = f1(f1(x, t2), t1). From our initial condition, f1(x, 0) = x, and

f1(x, Δt + t) = f1(f1(x, t), Δt) (A.5)

We need to find f1(x,Δt). We have

f1(x, Δt) = Σ_i r_i(0) x^i (1 − Σ_m i λ_m Δt + Σ_m i λ_m Δt x^{m−1}) + O(Δt²)
= x (1 − Σ_m λ_m Δt + Σ_m λ_m Δt x^{m−1}) + O(Δt²)
= x − x Δt Σ_m λ_m + Δt Σ_m λ_m x^m + O(Δt²)
= x + Δt Λ [h(x) − x] + O(Δt²)

where, as in the forward Kolmogorov case, Λ = Σ_m λ_m and h(x) = Σ_m λ_m x^m / Λ is the PGF of the number of new individuals created given that an event occurs. In the first step we used the fact that for f1(x,t), r_i(0) = 1 if i = 1 and r_i(0) = 0 otherwise. Thus Equation (A.5) implies

f1(x, t+Δt) = f1(x,t) + Δt Λ [h(f1(x,t)) − f1(x,t)] + O(Δt²).

Now taking the definition of the derivative, we have

∂f1(x,t)/∂t = lim_{Δt→0} [f1(x, t+Δt) − f1(x,t)]/Δt = lim_{Δt→0} [Δt Λ [h(f1(x,t)) − f1(x,t)] + O(Δt²)]/Δt = Λ [h(f1(x,t)) − f1(x,t)]

Thus we have an ODE for f1(x,t).

In general, our initial condition may not be a single individual, but some other number (or perhaps a value chosen from a distribution). Let the initial condition have PGF f(x,0). Then it follows from Property A.8 that

f(x,t)=f(f1(x,t),0)

So we have

Property A.12. Consider a process in which the number of individuals changes in time such that when an event occurs, one individual is destroyed and replaced with m new individuals. The rate associated with an event that changes the population size by m is λ_m i, where i is the number of individuals. Let f1(x,t) be the PGF for this process beginning from a single individual, and let Λ = Σ_m λ_m. Then

ḟ1(x,t) = Λ [h(f1(x,t)) − f1(x,t)] (A.6)

where h(x) is the PGF for the number of new individuals created in a random event. If the initial number of individuals is not 1, let f(x,0) denote the PGF for the initial condition. Then

f(x,t)=f(f1(x,t),0) (A.7)

is the PGF at arbitrary positive time.
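Because Equation (A.6) is an ODE in t for each fixed x, it is straightforward to integrate numerically. For the early-epidemic example with λ_2 = β and λ_0 = γ we have Λ = β + γ and h(u) = (βu² + γ)/(β + γ), and evaluating at x = 0 gives α̇ = βα² − (β + γ)α + γ for the probability α(t) = f1(0, t) of extinction by time t. A standalone sketch (the rates are illustrative choices):

```python
from scipy.integrate import solve_ivp

beta, gamma = 1.5, 1.0   # illustrative transmission and recovery rates

# alpha(t) = f_1(0, t): probability the chain of infections has died out by t.
# Equation (A.6) with Lambda = beta + gamma and h(u) = (beta u^2 + gamma)/(beta+gamma)
# reduces at x = 0 to alpha' = beta alpha^2 - (beta + gamma) alpha + gamma.
sol = solve_ivp(lambda t, a: beta * a**2 - (beta + gamma) * a + gamma,
                (0, 50), [0.0], rtol=1e-10, atol=1e-12)
alpha_T = sol.y[0, -1]
```

As t grows, α(t) approaches the smaller root γ/β of the quadratic, the eventual extinction probability when β > γ.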

This is fairly straightforward to generalize to multiple types as long as none of the events involve interactions.

Exercise A.14. In this exercise we generalize Property A.12 for the case where there are two types of individuals A and B with counts i and j.

Assume events occur spontaneously with rate λ_{m,n} i to remove an individual of type A and replace it with m of type A and n of type B, or with rate ζ_{m,n} j to remove an individual of type B and replace it with m of type A and n of type B.

Set Λ = Σ_{m,n} λ_{m,n} and Z = Σ_{m,n} ζ_{m,n}. Let f1,0(x,y,t) denote the PGF for the outcome beginning with one individual of type A and f0,1(x,y,t) denote the PGF for the outcome beginning with one individual of type B.

  • a.

Write f1,0(x,y,Δt) and f0,1(x,y,Δt) in terms of h(x,y) = Σ_{m,n} λ_{m,n} x^m y^n / Λ and g(x,y) = Σ_{m,n} ζ_{m,n} x^m y^n / Z, where Z = Σ_{m,n} ζ_{m,n}.

  • b.

Using Property A.8, write f1,0(x,y,Δt+t) and f0,1(x,y,Δt+t) in terms of f1,0 and f0,1 evaluated at t and Δt. The answer should resemble Equation (A.5).

  • c.

Derive expressions for ∂f1,0(x,y,t)/∂t and ∂f0,1(x,y,t)/∂t.

  • d.

    Use this to derive Equation (22).

Appendix B. Proof of Theorems 2.7 and 3.6

We now prove Theorems 2.7 and 3.6.

Appendix B.1. Theorem 2.7

We take as given a probability distribution so that pi is the probability of i offspring.

We will first show a way to represent a (finite) transmission tree as a sequence of integers representing the number of offspring of each node. Additionally we show that the possible sequences coming from a tree can be characterized by a few specific properties. Then the probability of such a sequence corresponds to the probability of the corresponding tree.

Given a finite transmission tree T, we first order the offspring of any individual (randomly) from “left” to “right”. We then construct a sequence S by performing a depth-first traversal of the tree and recording the number of offspring as we visit the nodes of the tree, as shown in Fig. B.11. A sequence constructed in this way is called a Łukasiewicz word (Stanley, 2001).

Fig. B.11. Demonstration of the steps mapping the tree T to the sequence S. The nodes are traced in a depth-first traversal and their numbers of offspring are recorded. For the labeling given, a depth-first traversal traces the nodes in alphabetical order. At an intermediate stage (left) the traversal has not finished the sequence. The final sequence (right) is uniquely determined once the order of each node's offspring is (randomly) chosen.

It is straightforward to see that if we are given a Łukasiewicz word S, we can uniquely reconstruct the (ordered) tree T from which it came. We now demonstrate the relation between the probability of a given tree T and the probability of observing a given sequence S.

Fig. B.12. The steps of the construction of a tree with S = (2,0,0,0,1,0,3,0,2) [a cyclic permutation of the previous S]. Each frame shows the next step in building a tree on a ring. The resulting tree is not rooted at the top. The names of the nodes in the tree are a cyclic permutation of the original.

We first note that the probability of observing a given length-j sequence S by choosing j numbers from the offspring distribution is simply π_S = Π_{s_i ∈ S} p_{s_i}.

Similarly, as infection spreads, each infected individual infects some number s_i of others with probability p_{s_i}. The probability of a given tree is thus Π_{i=1}^{j} p_{s_i}. Note that different trees may have different sizes j. However, for a given tree of size j with corresponding Łukasiewicz word S, the probability of observing this sequence by choosing j numbers from the offspring distribution is Π_{i=1}^{j} p_{s_i}, equal to the a priori probability of observing the tree.

Now we look for the probability that a random length-j sequence created by choosing numbers from the offspring distribution actually corresponds to a tree, that is, it is actually a Łukasiewicz word. To be a Łukasiewicz word, the sequence clearly must satisfy Σ_{s_i ∈ S} s_i = j − 1, because the sum is the total number of transmissions, which is one less than the total number of infections. Momentarily we will show that given a length-j sequence which sums to j − 1, exactly one of its j cyclic permutations is a Łukasiewicz word. Since each cyclic permutation has the same probability of being observed as a sequence, we will conclude that the probability of observing a tree of size j is exactly 1/j times the probability that the entries in a length-j sequence of values chosen from the offspring distribution sum to j − 1.

We now prove the final detail. Given a length-j sequence S of non-negative integers that sum to j − 1, we place j nodes on a ring, starting at the top and ordered clockwise. We label the i-th node with s_i. If a node v is labelled 0 and the adjacent position in the counter-clockwise direction has a node u with a positive label, we place an edge from u to v (with v to the right of any previous edge from u to another node) and remove v from the ring. We then decrease u's label by one.

This process reduces both the number of nodes and the sum of the labels by one, so the sum remains one less than the number of nodes. This guarantees at least one zero label and at least one nonzero label until only one node remains. Thus we can always find an appropriate pair u and v until a single node remains. The process constructs a tree (the final outcome has j nodes and j − 1 edges and is connected). Fig. B.12 demonstrates the steps.

If the tree is rooted at the node that began at the top of the ring, then S corresponds to a depth-first traversal of that tree. If the tree is not rooted at the top, then rotating the sequence so that the root is at the top results in a sequence which corresponds to a tree. All cyclic rotations of any length-j sequence summing to j − 1 are equally probable as sequences, but only one corresponds to a tree. Thus the probability that a random length-j sequence summing to j − 1 corresponds to a tree is 1/j.

So we finally conclude that the probability of a tree of j nodes is equal to 1/j times the probability that j randomly-chosen values from the offspring distribution sum to j − 1. By repeated application of Property A.6, this is 1/j times the coefficient of y^{j−1} in [μ(y)]^j, as Theorem 2.7 claims.
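These arguments are easy to experiment with numerically, using the standard equivalent test that a length-j sequence summing to j − 1 is a Łukasiewicz word exactly when every proper partial sum of s_i − 1 stays non-negative. The standalone sketch below applies this test to all rotations of the sequence from Fig. B.12 and finds exactly one word:

```python
from itertools import accumulate

def is_lukasiewicz(seq):
    """True if seq is the depth-first offspring sequence of a tree: it sums
    to len(seq) - 1, and no proper prefix closes the tree early (every
    proper partial sum of s_i - 1 stays non-negative)."""
    if sum(seq) != len(seq) - 1:
        return False
    walk = list(accumulate(s - 1 for s in seq))
    return all(w >= 0 for w in walk[:-1])

# The rotated sequence from Fig. B.12: it sums to 8 = 9 - 1,
# but is not itself a Lukasiewicz word.
seq = (2, 0, 0, 0, 1, 0, 3, 0, 2)
rotations = [seq[k:] + seq[:k] for k in range(len(seq))]
valid = [r for r in rotations if is_lukasiewicz(r)]
# Exactly one rotation encodes a tree, as the cycle lemma guarantees.
```

The unique valid rotation begins just after the minimum of the partial-sum walk, which is how the cycle lemma is usually proved.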

Appendix B.2. Theorem 3.6

We can prove Theorem 3.6 as a special case of Theorem 2.7 by calculating the offspring distribution (Exercise B.1). However, a more illuminating proof is by noting that if we treat a transmission event as a node disappearing and being replaced by two infected nodes and a recovery event as a node disappearing with no offspring, then we have a tree where each node has 2 or 0 offspring. The total number of actual individuals infected in the outbreak is equal to the number of nodes with 0 offspring in the tree.

Following the arguments above, we are looking for sequences of length 2j − 1 in which 2 appears j − 1 times and 0 appears j times. There are C(2j−1, j−1) = (2j−1)!/[(j−1)! j!] such sequences. The probability of each is β^{j−1} γ^j / (β+γ)^{2j−1}, and a fraction 1/(2j−1) of these correspond to trees. Thus, the probability a length-(2j−1) sequence is a Łukasiewicz word is

[1/(2j−1)] C(2j−1, j−1) β^{j−1} γ^j / (β+γ)^{2j−1} = (1/j) C(2j−2, j−1) β^{j−1} γ^j / (β+γ)^{2j−1}

Using the same approach as before, we conclude that this is the probability of exactly j infections.
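Theorem 3.6 is also easy to check numerically. The binomial coefficients and powers overflow floating point for large j, so the standalone sketch below (β and γ are illustrative choices) evaluates the probabilities in log space with lgamma; summing over j recovers the probability that the outbreak stays finite, which for β > γ is the extinction probability γ/β:

```python
from math import lgamma, log, exp

def final_size_prob(j, beta, gamma):
    """P(exactly j ever infected) from Theorem 3.6, evaluated in log space
    to avoid overflow: (1/j) C(2j-2, j-1) beta^(j-1) gamma^j / (beta+gamma)^(2j-1)."""
    lt = (lgamma(2*j - 1) - 2*lgamma(j) - log(j)        # log C(2j-2, j-1) - log j
          + (j - 1)*log(beta) + j*log(gamma) - (2*j - 1)*log(beta + gamma))
    return exp(lt)

beta, gamma = 1.5, 1.0
total = sum(final_size_prob(j, beta, gamma) for j in range(1, 2000))
# total approaches gamma/beta, the extinction probability, since beta > gamma
```

As a quick sanity check, j = 1 gives γ/(β+γ), the probability that the first event is a recovery.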

Exercise B.1. If we do not think of an infected individual as disappearing and being replaced by two infected individuals when a transmission happens, but rather, we count up all of the transmissions the individual causes, we get a geometric distribution with q=β/(β+γ). The details are in Exercise 3.2. Use this along with Theorem 2.7 and Table 6 (which was derived in Exercise 2.13) to give a different proof of Theorem 3.6.

Appendix C. Software

We have produced a python package, Invasion_PGF, which can be used to solve the equations of Section 2 or Section 3 once the PGF of the offspring distribution or β and γ are determined. Because the numerical method involves solving differential equations in the complex plane, it requires an integration routine that can handle complex values. For this we use odeintw (Weckesser).

Table C.7 briefly summarizes the commands available in Invasion_PGF.

We now demonstrate a sample session with these commands.

Image 5

Table C.7.

Commands of Invasion_PGF. Many of these have an optional boolean argument intermediate_values which, if True, will result in returning values from generation 0 to generation gen in the discrete-time case or at some intermediate times in the continuous-time case. For the discrete-time results, the input μ is the offspring distribution PGF. For the continuous-time version, β and γ are the transmission and recovery rates respectively.

Command Output
R0(μ) Approximation of ℛ0.
extinction_prob(μ, gen) Probability α_gen of extinction by generation gen given offspring PGF μ.
cts_time_extinction_prob(β, γ, T) Probability α(T) of extinction by time T given transmission and recovery rates β and γ.
active_infections(μ, gen, M) Array containing the probabilities φ_0, …, φ_j, …, φ_{M−1} of having j active infections in generation gen given offspring PGF μ.
cts_time_active_infections(β, γ, T) Array containing the probabilities φ_0, …, φ_j, …, φ_{M−1} of having j active infections at time T given transmission and recovery rates β and γ.
completed_infections(μ, gen, M) Array containing the probabilities ω_0, …, ω_j, …, ω_{M−1} of having j completed infections in generation gen given offspring PGF μ.
cts_time_completed_infections(β, γ, T) Array containing the probabilities ω_0, …, ω_j, …, ω_{M−1} of having j completed infections at time T given transmission and recovery rates β and γ.
active_and_completed(μ, gen, M1, M2) M1×M2 array containing the probabilities π_{i,r} of i active infections and r completed infections in generation gen given offspring PGF μ.
cts_time_active_and_completed(β, γ, T) M1×M2 array containing the probabilities π_{i,r} of i active infections and r completed infections at time T given transmission and recovery rates β and γ.
final_sizes(μ, M) Array containing the probabilities ω_0, …, ω_j, …, ω_{M−1} of having j total infections in an outbreak given offspring PGF μ.
cts_time_final_sizes(β, γ, T) Array containing the probabilities ω_0, …, ω_j, …, ω_{M−1} of having j total infections in an outbreak given transmission and recovery rates β and γ.

Image 6

Appendix D. Supplementary data

The following is the Supplementary data to this article:

Multimedia component 1
mmc1.pdf (448.5KB, pdf)
Multimedia component 2
mmc2.txt (491B, txt)
Multimedia component 3
mmc3.txt (15.3KB, txt)

References

  1. Allen L.J.S. An introduction to stochastic epidemic models. In: Mathematical epidemiology. Springer; 2008. pp. 81–130.
  2. Allen L.J.S. An introduction to stochastic processes with applications to biology. CRC Press; 2010.
  3. Allen L.J.S. A primer on stochastic epidemic models: Formulation, numerical simulation, and analysis. Infectious Disease Modelling. 2017;2(2):128–142. doi: 10.1016/j.idm.2017.03.001.
  4. Antal T., Krapivsky P.L. Exact solution of a two-type branching process: Models of tumor progression. Journal of Statistical Mechanics: Theory and Experiment. 2011;2011(08).
  5. Bailey N.T.J. The total size of a general stochastic epidemic. Biometrika. 1953:177–185.
  6. Bailey N.T.J. The elements of stochastic processes with applications to the natural sciences. John Wiley & Sons; 1964.
  7. Bartlett M.S. Some evolutionary stochastic processes. Journal of the Royal Statistical Society. Series B (Methodological). 1949;11(2):211–229.
  8. Blumberg S., Lloyd-Smith J.O. Inference of R0 and transmission heterogeneity from the size distribution of stuttering chains. PLoS Computational Biology. 2013;9(5). doi: 10.1371/journal.pcbi.1002993.
  9. Bornemann F. Accuracy and stability of computing high-order derivatives of analytic functions by Cauchy integrals. Foundations of Computational Mathematics. 2011;11(1):1–63.
  10. Broder A., Kumar R., Maghoul F., Raghavan P., Rajagopalan S., Stata R. Graph structure in the web. Computer Networks. 2000;33:309–320.
  11. Conway J.M., Coombs D. A stochastic model of latently infected cell reactivation and viral blip generation in treated HIV patients. PLoS Computational Biology. 2011;7(4). doi: 10.1371/journal.pcbi.1002033.
  12. Diekmann O., Heesterbeek J.A.P. Mathematical epidemiology of infectious diseases. Wiley; 2000.
  13. Dorogovtsev S.N., Mendes J.F.F., Samukhin A.N. Giant strongly connected component of directed networks. Physical Review E. 2001;64(2). doi: 10.1103/PhysRevE.64.025101.
  14. Durrett R. Branching process models of cancer. Springer; 2015. pp. 1–63.
  15. Dwass M. The total progeny in a branching process and a related random walk. Journal of Applied Probability. 1969;6(3):682–686.
  16. Easley D., Kleinberg J. Networks, crowds, and markets: Reasoning about a highly connected world. Cambridge University Press; 2010.
  17. Gallian J.A., Rusin D.J. Cyclotomic polynomials and nonstandard dice. Discrete Mathematics. 1979;27(3):245–259.
  18. Gardner M. Mathematical games. Scientific American. 1978;238:19–32.
  19. Getz W.M., Lloyd-Smith J.O. Basic methods for modeling the invasion and spread of contagious diseases. In: Disease evolution: Models, concepts, and data analyses. 2006. pp. 87–112.
  20. Harko T., Lobo F.S.N., Mak M.K. Exact analytical solutions of the Susceptible-Infected-Recovered (SIR) epidemic model and of the SIR model with equal death and birth rates. Applied Mathematics and Computation. 2014;236:184–194.
  21. Hoff P.D. A first course in Bayesian statistical methods. Springer Science & Business Media; 2009.
  22. van der Hofstad R., Keane M. An elementary proof of the hitting time theorem. The American Mathematical Monthly. 2008;115(8):753–756.
  23. House T., Ross J.V., Sirl D. How big is an outbreak likely to be? Methods for epidemic final-size calculation. Proceedings of the Royal Society A. 2013;469(2150).
  24. Kenah E., Miller J.C. Epidemic percolation networks, epidemic outcomes, and interventions. Interdisciplinary Perspectives on Infectious Diseases. 2011;2011. doi: 10.1155/2011/543520.
  25. Kendall D.G. Stochastic processes and population growth. Journal of the Royal Statistical Society. Series B (Methodological). 1949;11(2):230–282.
  26. Kimmel M., Axelrod D.E. Branching processes in biology. Interdisciplinary Applied Mathematics. 2002;19.
  27. Kiss I.Z., Miller J.C., Simon P.L. Mathematics of epidemics on networks: From exact to approximate models. Springer; 2017.
  28. Kucharski A.J., Edmunds W.J. Characterizing the transmission potential of zoonotic infections from minor outbreaks. PLoS Computational Biology. 2015;11(4). doi: 10.1371/journal.pcbi.1004154.
  29. Lewis M.A., Petrovskii S.V., Potts J.R. The mathematics behind biological invasions. Vol. 44. Springer; 2016.
  30. Lloyd-Smith J.O., Schreiber S.J., Kopp P.E., Getz W.M. Superspreading and the effect of individual variation on disease emergence. Nature. 2005;438(7066):355. doi: 10.1038/nature04153.
  31. Ludwig D. Final size distributions for epidemics. Mathematical Biosciences. 1975;23:33–46.
  32. Ma J.J., Earn D.J.D. Generality of the final size formula for an epidemic of a newly invading infectious disease. Bulletin of Mathematical Biology. 2006;68(3):679–702. doi: 10.1007/s11538-005-9047-7.
  33. May R.M. Simple mathematical models with very complicated dynamics. Nature. 1976;261(5560):459–467. doi: 10.1038/261459a0. [DOI] [PubMed] [Google Scholar]
  34. Miller J.C. A note on a paper by Erik Volz: SIR dynamics in random networks. Journal of Mathematical Biology. 2011;62(3):349–358. doi: 10.1007/s00285-010-0337-9. [DOI] [PubMed] [Google Scholar]
  35. Miller J.C. A note on the derivation of epidemic final sizes. Bulletin of Mathematical Biology. 2012;74(9):2125–2141. doi: 10.1007/s11538-012-9749-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Miller J.C., Davoudi B., Meza R., Slim A.C., Pourbohloul B. Epidemics with general generation interval distributions. Journal of Theoretical Biology. 2010;262(1):107–115. doi: 10.1016/j.jtbi.2009.08.007. [DOI] [PubMed] [Google Scholar]
  37. Miller J.C., Slim A.C., Volz E.M. Edge-based compartmental modelling for infectious disease spread. Journal of The Royal Society Interface. 2012;9(70):890–906. doi: 10.1098/rsif.2011.0403. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Moore C., Newman M.E.J. Exact solution of site and bond percolation on small-world networks. Physical Review E. 2000;62(5):7059. doi: 10.1103/physreve.62.7059. [DOI] [PubMed] [Google Scholar]
  39. Nee S., Holmes E.C., May R.M., Harvey P.H. Extinction rates can be estimated from molecular phylogenies. Philosophical Transactions of the Royal Society London B. 1994;344(1307):77–82. doi: 10.1098/rstb.1994.0054. [DOI] [PubMed] [Google Scholar]
  40. Nishiura H., Yan P., Sleeman C.K., Mode C.J. Estimating the transmission potential of supercritical processes based on the final size distribution of minor outbreaks. Journal of Theoretical Biology. 2012;294:48–55. doi: 10.1016/j.jtbi.2011.10.039. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Peitgen H.-O., Jürgens H., Saupe D. Springer Science & Business Media; 2006. Chaos and fractals: New frontiers of science. [Google Scholar]
  42. Pólya G. Vol. 1. Princeton University Press; 1990. (Mathematics and plausible reasoning: Induction and analogy in mathematics). [Google Scholar]
  43. Poore G.M. Pythontex: Reproducible documents with LaTeX, Python, and more. Computational Science & Discovery. 2015;8(1) [Google Scholar]
  44. Reluga T., Meza R., Walton D.B., Galvani A.P. Reservoir interactions and disease emergence. Theoretical Population Biology. 2007;72(3):400–408. doi: 10.1016/j.tpb.2007.07.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Stanley R.P. Vol. II. Cambridge University Press; 2001. (Enumerative combinatorics). [Google Scholar]
  46. Valdez L.D., Macri P.A., Braunstein L.A. Temporal percolation of the susceptible network in an epidemic spreading. PLoS One. 2012;7(9) doi: 10.1371/journal.pone.0044188. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Volz E.M. SIR dynamics in random networks with heterogeneous connectivity. Journal of Mathematical Biology. 2008;56(3):293–310. doi: 10.1007/s00285-007-0116-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Volz E.M., Romero-Severson E., Leitner T. Phylodynamic inference across epidemic scales. Molecular Biology and Evolution. 2017;34(5):1276–1288. doi: 10.1093/molbev/msx077. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Watson H.W., Galton F. On the probability of the extinction of families. The Journal of the Anthropological Institute of Great Britain and Ireland. 1875;4:138–144. [Google Scholar]
  50. W. Weckesser odeintw. "https://github.com/WarrenWeckesser/odeintw".
  51. Wendel J.G. Left-continuous random walk and the Lagrange expansion. American Mathematical Monthly. 1975:494–499. [Google Scholar]
  52. Wilf H.S. 3rd ed. A K Peters, Ltd; 2005. generatingfunctionology. [Google Scholar]
  53. Yan P. Distribution theory, stochastic processes and infectious disease modelling. Mathematical Epidemiology. 2008:229–293. [Google Scholar]

Associated Data

This section collects the data availability statements and supplementary materials included with this article.

Supplementary Materials

Multimedia component 1
mmc1.pdf (448.5KB, pdf)
Multimedia component 2
mmc2.txt (491B, txt)
Multimedia component 3
mmc3.txt (15.3KB, txt)

Articles from Infectious Disease Modelling are provided here courtesy of KeAi Publishing
