2026 Feb 26;92(3):40. doi: 10.1007/s00285-026-02349-7

Bounds for survival probabilities in supercritical Galton-Watson processes and applications to population genetics

Reinhard Bürger
PMCID: PMC12945936  PMID: 41746375

Abstract

Population genetic processes, such as the adaptation of a quantitative trait to directional selection, may occur on longer time scales than the sweep of a single advantageous mutation. To study such processes in finite populations, approximations for the time course of the distribution of a beneficial mutation were derived previously by branching process methods. The application to the evolution of a quantitative trait requires bounds for the probability of survival S(n) up to generation n of a single beneficial mutation. Here, we present a method to obtain a simple, analytically explicit, either upper or lower, bound for S(n) in a supercritical Galton-Watson process. We prove the existence of an upper bound for offspring distributions including Poisson, binomial, and negative binomial. They are constructed by bounding the given generating function, φ, by a fractional linear one that has the same survival probability S and yields the same rate of convergence of S(n) to S as φ. For distributions with at most three offspring, we characterize when this method yields an upper bound, a lower bound, or only an approximation. Because for many distributions it is difficult to get a handle on S, we derive an approximation by series expansion in s, where s is the selective advantage of the mutant. We briefly review well-known asymptotic results that generalize Haldane’s approximation 2s for S, as well as less well-known results on sharp bounds for S. We apply them to explore when bounds for S(n) exist for a family of generalized Poisson distributions. Numerical results demonstrate the accuracy of our and of previously derived bounds for S and S(n). Finally, we treat an application of these results to determine the response of a quantitative trait to prolonged directional selection.

Supplementary Information

The online version contains supplementary material available at 10.1007/s00285-026-02349-7.

Keywords: Extinction probability, Fractional linear generating function, Advantageous mutation, Directional selection, Allele-frequency distribution, Haldane’s approximation

Introduction

Galton-Watson branching processes were used early in the history of population genetics to approximate the fixation probability of a single advantageous mutation in a finite population (Fisher 1922; Haldane 1927). In particular, Haldane showed that if the offspring distribution of the mutant is approximately Poisson with mean m=1+s, the fixation probability can be approximated by 2s provided the selective advantage s is sufficiently small. More recent work led to considerable generalizations of Haldane’s approximation and is discussed in Sect. 5.3.

Essentially parallel to the first applications of Galton-Watson processes, the method of diffusion approximation was introduced and developed (Fisher 1922; Wright 1931; Kolmogorov 1931). Whereas this method is a powerful tool to quantify fixation probabilities, stationary distributions, or the distribution of the time to fixation of an allele (Kimura 1964; Ewens 2004), it is less well suited to obtain analytically explicit expressions for the time-dependence of allele frequencies under selection. Approximations have been derived, essentially for statistical purposes, but they are semi-explicit and complex (e.g. Steinrücken et al. 2013).

The time course of the frequency distribution of a new favorable mutant has been approximated by explicit formulas derived with the help of branching-process theory (Desai and Fisher 2007; Uecker and Hermisson 2011; Martin and Lambert 2015; Götsch and Bürger 2024). Application of these results to population genetic processes that occur on longer time scales than the sweep of a single mutation, such as the evolutionary response of a quantitative trait to directional selection, requires bounds for the probability of survival S(n) of a single new mutant up to generation n (Götsch and Bürger 2024). The response of the mean of a trait is determined by the variance contributed by every mutation that is favored by selection and spreads. The total variance contributed by a single mutation while present in the population is given by an integral, whose integrand depends, among other quantities, on S(n). With accurate and analytically simple bounds on S(n), this integral can be approximated and the error estimated. This procedure also requires estimates of the time T(ϵ) needed for S(n) to fall below (1+ϵ)S, where S is the (ultimate) survival probability. Under appropriate scaling assumptions on the strength s of selection and the population size N, the time T(ϵ) is short compared with the time during which the mutant is sweeping to fixation. Essentially, it can be shown that the variance contributed by mutations that are lost is negligible. Above T(ϵ), S(n) can be approximated by the constant term S, which greatly simplifies the integral (for a detailed description, see Sect. 6.3).

Götsch and Bürger (2024) imposed the assumption that the offspring distribution is such that S(n) can be bounded above by the explicitly available expression for an appropriately chosen modified geometric distribution (which has a fractional linear generating function; see Sect. 2.3). As explained below, here we present a general method to derive such bounds and prove its applicability for some well-known families of offspring distributions.

Motivated by these considerations, the main goal of this paper is the derivation of sharp, explicit, and analytically tractable upper bounds for the probability of survival S(n) up to generation n in supercritical Galton-Watson processes. We adopt the principal method pioneered by Seneta (1967) (and attributed by him to P.A.P. Moran) of using probability generating functions (pgfs) of fractional linear type to bound a given pgf φ. Seneta (1967) and Agresti (1974) used it to derive simple bounds for the extinction time distribution of subcritical or critical branching processes, originating from specific offspring distributions. For the Poisson distribution, Agresti derived best possible bounds of fractional linear type and indicated how to derive bounds for the supercritical case by exploiting a duality relation between subcritical and supercritical processes. For the supercritical case, these bounds are no longer pgfs (see Sect. 3.2). A different method to obtain bounds for the extinction probabilities P(n)=1-S(n) for a given pgf was developed by Pollak (1971). It is based on series expansion of the pgf and is applicable to sub- and supercritical processes (see Sect. 3.2).

We use a direct method for the supercritical case that is based on proper generating functions and requires that the prospective bounding fractional linear pgf has the same extinction probability Pφ and the same slope γφ=φ′(Pφ) as the given pgf φ (see Sect. 3.1). This method can be applied to pgfs other than the Poisson distribution, even if no analytical expression for Pφ is available, as in the case of a binomial distribution when a re-parameterization in terms of PBin and the number of trials is possible.

Using this method, we prove that simple, explicit upper bounds obtained from fractional linear distributions, denoted SFL(n), do exist for Poisson, binomial, and negative binomial distributions (Sects. 4.1–4.3). For distributions with at most three offspring, SFL(n) can yield an upper bound (in most of the parameter space), a lower bound, or SFL(n) may switch from a lower to an upper bound at some generation n; a full characterization is obtained in Sect. 4.4. Except for the Poisson distribution, where the proof is simple enough to provide insight, the proofs are relegated to the Appendix.

For most distributions our method is difficult to apply because their pgfs are too complicated to be handled analytically and already P is difficult to access. Interestingly, there exists a branch of research that seems to be completely disconnected from the literature related to Haldane’s approximation. Quine (1976), Daley and Narayan (1980), From (2007), and others derived explicit and very accurate upper and lower bounds for the eventual extinction probability in Galton-Watson processes with offspring distributions having finite second or third moment. They are outlined in Sects. 5.1 and 5.2; for an extensive review consult From’s paper. In Sect. 5.3, we devise a method for general pgfs φ to deduce series expansions of Pφ and of γφ in terms of s, where m=1+s>1 and s is small. The bounds of Quine (1976) and Daley and Narayan (1980) are shown to have an error of order O(s³). In this context we also briefly review recent, far-reaching generalizations of Haldane’s approximation for the fixation probability in finite populations. In Sect. 5.4, we review bounds and approximations obtained previously by diffusion-approximation methods for the Wright-Fisher model.

Among others, we apply our series expansions to derive analytically explicit, at least approximate, bounds for Sφ(n) for a family of generalized Poisson distributions which otherwise is prohibitively difficult to tackle (Sect. 5.8). Even for cases that can be treated fully analytically, such as the Poisson or binomial distribution, these expansions yield valuable additional insights (Sects. 5.5, 5.6, 5.7). For the generalized Poisson distribution, SFL(n) may yield an upper bound for the true SGP(n) (if the variance is not much higher than the mean), a lower bound (if the variance is much higher than the mean), or switch from an upper to a lower bound at some n (Sect. 5.8).

For every pgf φ with finite variance, the sequence SFL(n) that we construct, whether it is an exact bound or an approximation based on series expansion, has the property that it converges to the given survival probability Sφ at the correct asymptotic rate γφ. The accuracy of the resulting convergence times Tφ(ϵ) and relative errors of the bounds for Sφ(n) are explored in Sects. 6.1 and 6.2, respectively. Our central population genetics application is treated in Sect. 6.3.

Definitions and preliminaries

Basic notation and assumptions

We consider a Galton-Watson process {Zn}, where $Z_n=\sum_{j=1}^{Z_{n-1}}\xi_j$, Z0=1, and ξj denotes the (random) number of offspring of individual j in generation n−1. Thus, Zn counts the number of descendants of the mutant that emerged in generation 0. We assume that the ξj are mutually independent, identically distributed random variables, independent of n, and have at least three finite moments, where

$E[\xi_j]=m>1 \quad\text{and}\quad \operatorname{Var}[\xi_j]=\sigma^2>0.$ (2.1)

Therefore, the process {Zn} is supercritical. We denote the probability generating function (pgf) of ξj (hence of Z1) by φ, and the probability of having k offspring by pk=P(ξj=k). Then $\varphi(x)=\sum_{k=0}^{\infty}p_kx^k$. To avoid trivialities, we always assume p0>0 and p0+p1<1 (whence σ²>0).

As is well known (e.g. Athreya and Ney 1972), the extinction probability Pφ of this process is the unique value x∈(0,1) satisfying φ(x)=x. We denote by

$P_\varphi(n)=\operatorname{Prob}[Z_n=0\,|\,Z_0=1]$ (2.2)

the probability that the mutant is extinct by generation n. It satisfies $P_\varphi(n)=\varphi^{(n)}(0)$, where $\varphi^{(n)}$ is the nth iterate of φ. Our assumptions imply 0<Pφ(n)<Pφ<1 and lim_{n→∞}Pφ(n)=Pφ. Often it will be convenient to formulate results in terms of the corresponding survival probabilities:

$S_\varphi=1-P_\varphi \quad\text{and}\quad S_\varphi(n)=1-P_\varphi(n).$ (2.3)

We will use subscripts, such as φPoi and PPoi, to refer to specific offspring distributions.
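Concretely, Pφ(n)=φ⁽ⁿ⁾(0) can be computed by iterating the pgf. The minimal Python sketch below is not part of the paper; the Poisson pgf with the illustrative value m=1.5 is used purely as an example of (2.2)–(2.3).

```python
import math

def extinction_by_generation(pgf, n):
    """P(n) = phi^(n)(0): probability of extinction by generation n, eq. (2.2)."""
    x = 0.0
    for _ in range(n):
        x = pgf(x)
    return x

m = 1.5  # illustrative supercritical mean, m > 1
poisson_pgf = lambda x: math.exp(-m * (1.0 - x))

P10 = extinction_by_generation(poisson_pgf, 10)
S10 = 1.0 - P10  # survival probability up to generation 10, eq. (2.3)
```

The sequence P(n) is increasing and converges to the extinction probability Pφ, the fixed point of φ in (0,1).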

Seneta’s method of bounding the extinction probabilities Pφ(n)

Seneta (1967) showed the following result (which does not require m>1): Let φL, φU, and φ be pgfs such that

$\varphi_L(x)\le\varphi(x)\le\varphi_U(x) \quad\text{for every } x\in[0,1].$ (2.4)

Then

$\varphi_L^{(n)}(x)\le\varphi^{(n)}(x)\le\varphi_U^{(n)}(x) \quad\text{for every } x\in[0,1] \text{ and every } n\ge 1.$ (2.5)

In particular, the probability of extinction by generation n, Pφ(n), satisfies

$\varphi_L^{(n)}(0)\le P_\varphi(n)\le\varphi_U^{(n)}(0) \quad\text{for every } n\ge 1.$ (2.6)

If the Galton-Watson process generated by φ is supercritical, it is natural to bound it by supercritical processes. Because Pφ(n) is monotone increasing and converges to Pφ<1, the following variant of this result is valid: If

$\varphi_L(x)\le\varphi(x)\le\varphi_U(x) \quad\text{for every } x\in[0,P_\varphi],$ (2.7)

then (2.6) holds. As in some previous work (e.g. Seneta 1967; Agresti 1974), we will use fractional linear pgfs as bounds because they have the property that the nth iterates φ(n) can be calculated explicitly.

Fractional linear generating functions

The modified geometric, or fractional linear, distribution is defined by

$p_0^{(FL)}=\rho \quad\text{and}\quad p_k^{(FL)}=(1-\rho)(1-\pi)\pi^{k-1} \ \text{ if } k\ge 1,$ (2.8)

where 0<ρ<1 and 0<π<1 (e.g. Athreya and Ney 1972, pp. 6-7; Haccou et al. 2005, p. 16). The name fractional linear derives from the fact that its pgf is

$\varphi_{FL}(x;\pi,\rho)=\frac{\rho+x(1-\pi-\rho)}{1-\pi x},$ (2.9)

hence fractional linear. With ρ=1-π, the geometric distribution is recovered. It is straightforward to show that every fractional linear pgf generates a modified geometric distribution. We omit the dependence of φFL on π and ρ if no confusion can occur.

Mean and variance of {pk(FL)} are

$m_{FL}=\frac{1-\rho}{1-\pi} \quad\text{and}\quad \sigma_{FL}^2=\frac{(1-\rho)(\pi+\rho)}{(1-\pi)^2}.$ (2.10)

Therefore mFL>1 if and only if 0<ρ<π<1, and mFL>σFL² if and only if 2π+ρ<1, which implies π<1/2.
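The moments (2.10) can be confirmed by summing the distribution (2.8) directly. A quick sketch (not from the paper) with arbitrary illustrative parameters π=0.6, ρ=0.1; the geometric tail is truncated at a large k where it is numerically negligible:

```python
pi_, rho = 0.6, 0.1  # illustrative; 0 < rho < pi < 1, so m_FL > 1

# probabilities from eq. (2.8), truncated at k = 399 (tail ~ pi^400 is negligible)
p = [rho] + [(1.0 - rho) * (1.0 - pi_) * pi_ ** (k - 1) for k in range(1, 400)]

mean = sum(k * pk for k, pk in enumerate(p))
var = sum(k * k * pk for k, pk in enumerate(p)) - mean ** 2

m_FL = (1.0 - rho) / (1.0 - pi_)                       # eq. (2.10)
var_FL = (1.0 - rho) * (pi_ + rho) / (1.0 - pi_) ** 2  # eq. (2.10)
```

Both closed forms agree with the truncated sums to within floating-point accuracy.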

If mFL≠1 then, after rearrangement of the parameterization in Athreya and Ney (1972, p. 7), the n-times iterated pgf is again fractional linear and has parameters

$\pi_n=\frac{\pi(1-m_{FL}^{-n})}{\pi-\rho m_{FL}^{-n}} \quad\text{and}\quad \rho_n=\frac{\rho(1-m_{FL}^{-n})}{\pi-\rho m_{FL}^{-n}}.$ (2.11)

Now assume mFL>1, i.e., ρ<π. Then the probability of extinction by generation n is PFL(n)=φFL(n)(0)=ρn and the (ultimate) extinction probability is

$P_{FL}=\frac{\rho}{\pi}.$ (2.12)

By simple algebra we arrive at

$P_{FL}(n)=\frac{P_{FL}(1-m_{FL}^{-n})}{1-m_{FL}^{-n}P_{FL}}$ (2.13)

and

$S_{FL}(n)=1-P_{FL}(n)=\frac{S_{FL}}{1-m_{FL}^{-n}(1-S_{FL})}.$ (2.14)

Because it will be important in subsequent sections, we note that

$\gamma_{FL}:=\varphi_{FL}'(P_{FL})=m_{FL}^{-1}.$ (2.15)

Equation (2.14) allows us to compute the time needed for the probability of survival up to generation n, SFL(n), to decline to (1+ϵ)SFL. For ϵ>0 (not necessarily small) we define TFL(ϵ) as the (positive) solution T of

$S_{FL}(T)=(1+\epsilon)S_{FL}.$ (2.16)

With the help of (2.14), this time is

$T_{FL}(\epsilon)=\frac{\ln\bigl((1+\frac{1}{\epsilon})P_{FL}\bigr)}{\ln m_{FL}}.$ (2.17)

Of course, the first generation n in the associated GW-process that satisfies SFL(n)≤(1+ϵ)SFL is the least integer greater than or equal to TFL(ϵ).
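The closed forms (2.13)–(2.17) are easily checked against direct iteration of the pgf (2.9). The sketch below (not from the paper) uses the arbitrary illustrative parameters (π, ρ)=(0.5, 0.2):

```python
import math

pi_, rho = 0.5, 0.2  # illustrative; 0 < rho < pi < 1, hence m_FL > 1
m_FL = (1.0 - rho) / (1.0 - pi_)  # eq. (2.10)
S_FL = 1.0 - rho / pi_            # survival probability, from eq. (2.12)

def phi_FL(x):
    """Fractional linear pgf, eq. (2.9)."""
    return (rho + x * (1.0 - pi_ - rho)) / (1.0 - x * pi_)

def S_FL_n(n):
    """Closed-form survival probability up to generation n, eq. (2.14)."""
    return S_FL / (1.0 - m_FL ** (-n) * (1.0 - S_FL))

def T_FL(eps):
    """Time at which S_FL(n) has declined to (1+eps)*S_FL, eq. (2.17)."""
    return math.log((1.0 + 1.0 / eps) * (rho / pi_)) / math.log(m_FL)

# direct iteration: P_FL(n) = phi_FL^(n)(0); compare with the closed form
x = 0.0
for _ in range(12):
    x = phi_FL(x)
S12_iter = 1.0 - x  # agrees with S_FL_n(12) up to floating-point error
```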

The basic result and alternative methods for deriving bounds for Sφ(n)

First we derive our basic result and simple consequences. Then we discuss alternative approaches.

Basic result

For a given pgf φ, we are primarily interested in lower bounds for Pφ(n), and upper bounds for Sφ(n). It is well known that Pφ(n) converges to Pφ at the geometric rate

$\gamma_\varphi:=\varphi'(P_\varphi)$ (3.1)

(Athreya and Ney 1972, Sect. 1.11). By Seneta’s inequalities (2.6), we can obtain a lower bound for Pφ(n) that converges to Pφ at the correct rate γφ, if we can choose a fractional linear pgf φFL such that

$\varphi_{FL}(P_\varphi)=P_\varphi \quad\text{and}\quad \gamma_{FL}=\varphi_{FL}'(P_\varphi)=\gamma_\varphi$ (3.2)

and φFL(x;π,ρ)≤φ(x) for every x∈[0,Pφ]. Indeed, a straightforward calculation shows that for given 0<a1<1 and 0<a2<1, there is always a unique solution (π,ρ) of the system

$\varphi_{FL}(a_1;\pi,\rho)=a_1 \quad\text{and}\quad \varphi_{FL}'(a_1;\pi,\rho)=a_2.$ (3.3)

It is given by

$\pi=\frac{1-a_2}{1-a_1a_2} \quad\text{and}\quad \rho=a_1\pi$ (3.4)

and satisfies 0<ρ<π<1.
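The solution (3.4) of the system (3.3) is readily verified numerically. In the sketch below (not from the paper), the values a1=0.4, a2=0.6 and the helper phi_FL_prime are purely illustrative; the derivative of (2.9) is (1−π)(1−ρ)/(1−πx)².

```python
a1, a2 = 0.4, 0.6  # illustrative values in (0,1)
pi_ = (1.0 - a2) / (1.0 - a1 * a2)  # eq. (3.4)
rho = a1 * pi_                      # eq. (3.4)

def phi_FL(x):
    """Fractional linear pgf, eq. (2.9)."""
    return (rho + x * (1.0 - pi_ - rho)) / (1.0 - x * pi_)

def phi_FL_prime(x):
    """Derivative of (2.9): (1-pi)(1-rho)/(1-pi*x)^2."""
    return (1.0 - pi_) * (1.0 - rho) / (1.0 - x * pi_) ** 2
```

By construction, a1 is a fixed point of phi_FL and the slope there equals a2, i.e., the system (3.3) holds.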

With a1=Pφ, a2=γφ, and the resulting values π and ρ, eq. (2.13) informs us that for the resulting fractional linear offspring distribution the probability of extinction by generation n is

$P_{FL}(n)=\varphi_{FL}^{(n)}(0)=\frac{P_\varphi(1-\gamma_\varphi^n)}{1-\gamma_\varphi^n P_\varphi},$ (3.5)

where we used (2.15). Together with the left inequality in (2.6), these considerations yield the following basic result.

Proposition 3.1

Let φ(x) be a pgf satisfying our general assumptions stated in Section 2, so that m>1 and 0<Pφ<1. Let φFL(x;πφ,ρφ) denote the uniquely determined fractional linear pgf that satisfies (3.2). If

$\varphi_{FL}(x;\pi_\varphi,\rho_\varphi)\le\varphi(x) \quad\text{for every } x\in[0,P_\varphi],$ (3.6)

then the probability of extinction by generation n satisfies

$\frac{P_\varphi(1-\gamma_\varphi^n)}{1-\gamma_\varphi^n P_\varphi}\le P_\varphi(n)\le P_\varphi.$ (3.7)

Equivalently, the probability Sφ(n) of survival up to generation n satisfies

$S_\varphi\le S_\varphi(n)\le \frac{S_\varphi}{1-\gamma_\varphi^n(1-S_\varphi)}.$ (3.8)

The key to applying this result to a given offspring distribution φ is of course the establishment of (3.6). These bounds yield the correct rate of approach to Pφ and Sφ. However, in general, they yield little detailed information because typically Pφ and γφ cannot be evaluated analytically (even for simple distributions, such as binomial or negative binomial). One remedy is to use accurate approximations for Pφ and γφ, which is possible for many families of distributions (see Section 5).
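When no closed form for Pφ is at hand, Pφ and γφ can be computed numerically and plugged into (3.8). The sketch below (not from the paper) does this for a Poisson offspring distribution with the illustrative value m=1.5, for which Theorem 4.1 establishes (3.6), by bisection on φ(x)=x:

```python
import math

m = 1.5  # illustrative supercritical mean
phi = lambda x: math.exp(-m * (1.0 - x))  # Poisson pgf, eq. (4.1)

# extinction probability: unique root of phi(x) = x in (0,1), by bisection;
# for supercritical pgfs, phi(x) > x to the left of the fixed point
lo, hi = 0.0, 1.0 - 1e-12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if phi(mid) > mid:
        lo = mid
    else:
        hi = mid
P = 0.5 * (lo + hi)
gamma = m * P  # phi'(P) = m * phi(P) = m * P for the Poisson pgf
S = 1.0 - P

def S_upper(n):
    """Upper bound for S(n), eq. (3.8)."""
    return S / (1.0 - gamma ** n * (1.0 - S))
```

Iterating φ and comparing with S_upper(n) confirms the two-sided bound (3.8) in this example.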

Remark 3.2

If instead of (3.6),

$\varphi_{FL}(x;\pi_\varphi,\rho_\varphi)\ge\varphi(x) \quad\text{for every } x\in[0,P_\varphi]$ (3.9)

is satisfied, then

$P_\varphi(n)\le\frac{P_\varphi(1-\gamma_\varphi^n)}{1-\gamma_\varphi^n P_\varphi}$ (3.10)

and

$S_\varphi(n)\ge\frac{S_\varphi}{1-\gamma_\varphi^n(1-S_\varphi)}$ (3.11)

hold. Again, this follows from (2.7) and (2.6).

By construction, these bounds provide excellent approximations for Pφ(n) and Sφ(n) if n is large, but not necessarily if n is small because φFL(0) may differ considerably from φ(0). Relative errors are displayed in Fig. 5 for a versatile class of generalized Poisson distributions.

Fig. 5.


Relative errors of survival probabilities by generation n, (Sapp(n)−SGP(n))/SGP(n), for the generalized Poisson distribution. In all cases, s=0.1. The values of λ are given in the legend; λ=0 yields the Poisson distribution. If λ=0.276, then Sapp(n)−SGP(n) changes sign between n=3 and n=4; cf. Fig. 4. We note that for given m=1+s, φGP and φNB have the same variance if $\lambda=1-\sqrt{r/(r+1+s)}$. With r=5 this yields λ≈0.095, SGP≈0.14841, SNB≈0.14834. Thus, on this scale of resolution, the blue curve would be almost indistinguishable from the corresponding curve for the negative binomial with r=5.

From (3.8), we can derive a simple bound for the minimum time Tφ(ϵ) such that

$S_\varphi(n)\le(1+\epsilon)S_\varphi \quad\text{for every } n\ge T_\varphi(\epsilon).$ (3.12)

Indeed, from (2.17), (2.15), and (3.8), we obtain Tφ(ϵ)≤TFL(ϵ). By the construction of φFL in Proposition 3.1 this yields

$T_\varphi(\epsilon)\le\frac{\ln\bigl((1+\frac{1}{\epsilon})P_\varphi\bigr)}{-\ln\gamma_\varphi}.$ (3.13)

For sufficiently small ϵ, Tφ(ϵ) is the time after which extinction of the mutant can be ignored. We study simple approximations as well as their accuracy in Sect. 6.1. This estimate of T(ϵ) will play a key role in Sect. 6.3.

In Section 4 we investigate the validity of (3.6) for well-known families of offspring distributions. In some cases, (3.6) is valid for every x∈[0,1]. A necessary condition for this is

$m_\varphi\gamma_\varphi<1.$ (3.14)

Indeed, by our construction of φFL and by (2.15), i.e., because mFL⁻¹=γFL=γφ, we obtain that (3.14) holds if and only if φFL′(1)>φ′(1). The latter implies φFL(x)<φ(x) for x slightly smaller than 1.

Alternative approaches

As noted by a reviewer, a simple general upper bound for Pφ−Pφ(n) is obtained by using convexity of the pgf φ and starting with the observation $P_\varphi-P_\varphi(n)=\varphi(P_\varphi)-\varphi(P_\varphi(n-1))\le\gamma_\varphi\bigl(P_\varphi-P_\varphi(n-1)\bigr)$. Then iteration yields

$P_\varphi-P_\varphi(n)\le\gamma_\varphi^n P_\varphi, \quad n\ge 1.$ (3.15)

Our bound (3.7) yields

$P_\varphi-P_\varphi(n)\le\frac{\gamma_\varphi^n P_\varphi(1-P_\varphi)}{1-\gamma_\varphi^n P_\varphi},$ (3.16)

where the right-hand side, divided by γφⁿ, converges to Pφ(1−Pφ) as n→∞. This yields a much tighter upper bound than (3.15), especially in the slightly supercritical case, when 1−Pφ=O(s) if m=1+s. Also, (3.15) entails a much higher estimate for Tφ(ϵ) than (3.13), which is important for the applications in Sect. 6.3. For the Poisson distribution, relative errors of these bounds and of those discussed below are given in Table 1.

Table 1.

The table shows the relative errors (Sapp(n)-SPoi(n))/SPoi(n) of the upper bound Sapp(n) for SPoi(n) obtained from the simple method in (3.15), from (3.8), and from Pollak’s (4.13) (and Agresti’s equivalent bound). The relative error of the simple bound tends to 0 very slowly. If m=1.02, it takes 506 generations to decrease below 0.001.

n=1 n=5 n=10 n=20 n=50 n=100
m=1.5
(3.15) 0.0863 0.0295 0.00307 2.8×10⁻⁵ 2.2×10⁻¹¹ 0
(3.8) 0.0153 0.0035 0.00034 3.1×10⁻⁶ 2.5×10⁻¹² 0
(4.13) 0.0062 0.0010 0.00009 8.3×10⁻⁷ 6.4×10⁻¹³ 0
m=1.1
(3.15) 0.3832 0.9869 0.94154 0.47327 0.02823 2.1×10⁻⁴
(3.8) 0.0420 0.0372 0.02084 0.00705 0.00035 2.5×10⁻⁶
(4.13) 0.0342 0.0262 0.01321 0.00404 0.00019 1.4×10⁻⁶
m=1.02
(3.15) 0.5343 2.2140 3.7106 5.405 5.589 2.802
(3.8) 0.0518 0.0587 0.04439 0.02777 0.01050 0.00316
(4.13) 0.0497 0.0546 0.03994 0.02386 0.00827 0.00233

Pollak’s (1971) bounds

For generating functions φ with m>1 and Pφ>0, Harris (1963, pp. 16,17) proved that a constant d>0 exists such that, in our notation,

$P_\varphi(n)=P_\varphi-d\,\gamma_\varphi^n+O(\gamma_\varphi^{2n}).$ (3.17)

Pollak (1971) derived a method to obtain upper and lower bounds for d. His method is based on a recursive formula for (Pφ−Pφ(n))⁻¹ that invokes series expansion of φ about Pφ up to second and third order for the upper and lower bound, respectively. Application of his method requires the verification of two complicated inequalities (one for each bound) on the generating function (his two-sided inequality (2.2)). He verified both inequalities for Poisson distributions satisfying mPPoi<2 (in fact, mPPoi=γPoi<1 always holds by Lemma 4.3), and for negative binomial distributions satisfying two conditions. (With the help of the expansions in Sect. 5.7, it is readily shown that they are fulfilled if m is close to 1.) Pollak (1971, eq. (5.2) with r=0) proved that

$P_\varphi-P_\varphi(n)\le\gamma_\varphi^n\,\bar d(n),$ (3.18a)

where

$\bar d(n)=\frac{2(1-\gamma_\varphi)P_\varphi}{2(1-\gamma_\varphi)+\varphi''(P_\varphi)P_\varphi(1-\gamma_\varphi^n)/\gamma_\varphi}.$ (3.18b)

At least for the Poisson distribution, Pollak’s d̄(n) in (3.18b) is slightly smaller than our bound in (3.16). In Sect. 4.1.1 we compare these bounds in more detail, especially numerically. The corresponding lower bound for (Pφ−Pφ(n))/γφⁿ is more complicated and invokes φ‴(Pφ). We note that Pollak also derived bounds for the subcritical case.
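For the Poisson pgf, φ″(x)=m²e^{−m(1−x)}, so φ″(Pφ)=m²Pφ, and (3.18) is easy to evaluate against iterated extinction probabilities. A sketch (not from the paper) with the illustrative value m=1.1:

```python
import math

m = 1.1  # illustrative, slightly supercritical
phi = lambda x: math.exp(-m * (1.0 - x))  # Poisson pgf, eq. (4.1)

# fixed point P of phi by functional iteration from 0 (converges at rate gamma)
P = 0.0
for _ in range(5000):
    P = phi(P)
gamma = m * P          # phi'(P) = m * phi(P) = m * P
phi2 = m * m * P       # phi''(P) = m^2 * phi(P) = m^2 * P

def d_bar(n):
    """Pollak's bound d_bar(n), eq. (3.18b), for the Poisson pgf."""
    return 2.0 * (1.0 - gamma) * P / (
        2.0 * (1.0 - gamma) + phi2 * P * (1.0 - gamma ** n) / gamma)
```

Checking P−P(n) against γⁿ·d̄(n) for a range of n confirms the inequality (3.18a) in this example.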

Agresti’s (1974) bounds

Seneta (1967) applied his method (Sect. 2.2) to obtain bounds for the generating function of the Poisson distribution in the subcritical case. Agresti (1974) refined this approach considerably. For the subcritical case, he derived best possible fractional linear lower and upper bounds for generating functions of the form $p_0+p_1x+p_2x^2$ and $p_0+p_kx^k$ for some k≥1. Agresti used these to obtain fractional linear bounds for rather general generating functions. In general, those are not best possible. By a different procedure, he derived best possible fractional linear bounds for Poisson generating functions with mean m<1.

Agresti noted that the dual relation φsub(x)=φ(Pφx)/Pφ can be used to derive bounds for a supercritical generating function φ from bounds for the subcritical case. Application of Agresti’s results to a supercritical φ (with 0<Pφ<1) requires first determining the bounds φsub,L and φsub,U for the dual subcritical φsub(x), i.e.,

$\varphi_{sub,L}(x)\le\varphi_{sub}(x)\le\varphi_{sub,U}(x).$ (3.19)

This yields the following bounds for the given φ(x):

$P_\varphi\,\varphi_{sub,L}(x/P_\varphi)\le\varphi(x)\le P_\varphi\,\varphi_{sub,U}(x/P_\varphi) \quad\text{if } 0\le x\le P_\varphi.$ (3.20)

Clearly, equality holds in (3.20) if x=Pφ. At x=Pφ, also the first derivatives of the three functions coincide, as do the second derivatives of the first two functions. The reason is that his method for the subcritical case requires that the bounding fractional linear pgfs have the same first derivatives at x=1 as the given pgf φ. For the lower bound, on which we concentrate, also the second derivatives at x=1 must coincide. His upper bound instead satisfies the requirement that its value at 0 equals that of the given (subcritical) pgf. Therefore, his lower bound shares the property of yielding the correct rate of approach of Pφ(n)→Pφ and of Sφ(n)→Sφ with our lower bound and with that of Pollak. We note that the bounds in (3.20) are not generating functions because at x=1 they exceed 1.

Agresti’s method has a disadvantage that may be prohibitive for applications to other generating functions. In addition to fitting the derivative at P (as in our approach), it requires the determination of the supremum and infimum (with respect to x) of

$v(x,m)=\frac{\varphi(P_\varphi x)/P_\varphi-1+\gamma_\varphi(1-x)}{x\,\varphi(P_\varphi x)/P_\varphi-x+\gamma_\varphi(1-x)},$ (3.21)

where this is already transformed from his version using duality, so that here φ has mean m>1 and his λ=φsub′(1)=γφ. The parameter π of the bounding fractional linear function (not a pgf!) is $\pi=\sup_{0\le x<1}v(x,m)$ for the upper bound and $\pi=\inf_{0\le x<1}v(x,m)$ for the lower bound. These are readily determined for φPoi because Agresti proved that in this case v(x,m) is strictly monotone decreasing in x on [0,1). For other generating functions, this may be much more difficult or impossible to establish. For instance, for the generalized Poisson distribution treated in Sect. 5.8, the resulting function v(x;m,λ) can be decreasing in x (for sufficiently small λ), increasing (for sufficiently large λ), or have a local minimum at some x∈(0,1) for a small range of intermediate values of λ.

For the Poisson distribution with m<1, Agresti derived the bounds φsub,L and φsub,U explicitly. We treat the resulting lower bounds for PPoi(n) in the supercritical case in Sect. 4.1.1, where we also compare the accuracy of the bounds discussed here.

Finally we note that Sagitov and Lindo (2016) introduced a class of so-called power-fractional generating functions. Similar to fractional linear generating functions, this class has the property that it is invariant under iteration. It is parameterized by four parameters and is thus much more flexible than the fractional linear class. Branching processes with power-fractional offspring distributions were recently studied by Alsmeyer and Hoang (2025). It would be of interest to investigate whether this class can be used to derive either more accurate bounds than the fractional linear class, or accurate bounds for distributions for which the fractional linear class does not provide bounds (such as in a parameter region for distributions with at most three offspring; cf. Sect. 4.4).

Bounds for the survival probabilities Sφ(n) for common families of offspring distributions

In Sections 4.1, 4.2, and 4.3, we prove validity of (3.6) for the families of Poisson, binomial, and negative binomial distributions, respectively. Consequently, the lower bound (3.7) for Pφ(n) and the upper bound (3.8) for Sφ(n) are established for these distributions. In Section 4.4, we study distributions with at most three offspring and characterize when (3.6), its converse, or neither of the two holds. Proofs are relegated to the appendix, except for the Poisson distribution, for which the proof is simple enough that the basic ideas are not hidden behind technical details.

The Poisson distribution

The main goal here is to prove that (3.6) holds for the Poisson distribution. Indeed, we will show that the inequality holds for every x[0,1]. We start by recalling some facts about the Poisson distribution with mean m>1. Its pgf is

$\varphi_{Poi}(x;m)=e^{-m(1-x)}.$ (4.1)

We will need the Lambert function, or the product logarithm, W(z)=ProductLog(z), which is defined as the principal branch of the solution w of $z=we^w$, z≥−e⁻¹. (Lambert’s W function is treated in considerable detail in Corless et al. (1996) and Wikipedia contributors (2025).) We will need W(z) for values z∈[−e⁻¹,0], on which it is monotone increasing from −1 to 0 and concave. Then the extinction probability is

$P_{Poi}(m)=-\frac{1}{m}W(-me^{-m}),$ (4.2)

and

$\gamma_{Poi}(m)=\varphi_{Poi}'(P_{Poi}(m))=mP_{Poi}(m)=-W(-me^{-m}).$ (4.3)

To establish (3.6), we proceed as in the derivation of Proposition 3.1 and choose the parameters π and ρ of our candidate for a bounding fractional linear pgf according to (3.4) with a1=PPoi(m) and a2=γPoi(m). We denote them by πm and ρm to indicate their dependence on m. Straightforward algebra yields

$\rho_m=-\frac{W(-me^{-m})^2+W(-me^{-m})}{m-W(-me^{-m})^2} \quad\text{and}\quad \pi_m=-\frac{m\rho_m}{W(-me^{-m})},$ (4.4)

where the latter follows from (2.12) and (4.2).
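Numerically, w=W(−me⁻ᵐ) is available from standard libraries (e.g. scipy.special.lambertw); alternatively, since w is characterized by we^w=−me⁻ᵐ with w∈(−1,0) and we^w is increasing there, it can be found by bisection. The stdlib-only sketch below (not from the paper; m=1.5 is an illustrative choice) evaluates (4.2)–(4.4):

```python
import math

m = 1.5  # illustrative supercritical mean
z = -m * math.exp(-m)  # argument of W in (4.2)-(4.4); lies in (-1/e, 0)

# principal branch W(z): solve w*exp(w) = z for w in (-1, 0) by bisection
# (w*exp(w) is strictly increasing on (-1, 0))
lo, hi = -1.0, 0.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid * math.exp(mid) > z:
        hi = mid
    else:
        lo = mid
W = 0.5 * (lo + hi)

P = -W / m                          # eq. (4.2)
gamma = -W                          # eq. (4.3): gamma = m*P = -W
rho_m = -(W * W + W) / (m - W * W)  # eq. (4.4)
pi_m = -m * rho_m / W               # eq. (4.4)
```

The resulting fractional linear pgf then matches φPoi and its first derivative at P, as required by (3.2).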

Theorem 4.1

For every m>1, the pgfs φPoi(x;m) and φFL(x;πm,ρm) satisfy

$\varphi_{FL}(x;\pi_m,\rho_m)\le\varphi_{Poi}(x;m) \quad\text{for every } x\in[0,1].$ (4.5)

Equality holds only at x=PPoi and x=1.

Proposition 3.1 immediately yields

Corollary 4.2

Given a Poisson offspring distribution with mean m>1, the probability of extinction by generation n, PPoi(n), satisfies the inequality (3.7), and the probability of survival up to generation n, SPoi(n), satisfies the inequality (3.8), each with φ=φPoi.

In the proof of Theorem 4.1 we will need some inequalities.

Lemma 4.3

The following inequalities hold:

$2-m<\gamma_{Poi}(m)<\frac{1}{m}$ (4.6)

and

$\frac{2}{m}-1<P_{Poi}(m)<\frac{1}{m^2},$ (4.7)

where both lower bounds are ≤0 if m≥2.

Proof

Because of (4.3) it is sufficient to prove (4.6). We start with the right hand side. By (4.3) and the definition of W(z), γPoi satisfies

$-me^{-m}=-\gamma_{Poi}(m)\,e^{-\gamma_{Poi}(m)}.$ (4.8)

By the properties of W we have γPoi(1)=1, 0<γPoi(m)<1 if m>1, and γPoi(m) decreases monotonically to 0 as m→∞. The function g(x)=−xe⁻ˣ is monotone decreasing from 0 at x=0 to −e⁻¹ at x=1. Therefore, γPoi<1/m if and only if g(γPoi)>g(1/m), which is equivalent to $-me^{-m}>-\frac{1}{m}e^{-1/m}$ by using (4.8). The latter inequality can be rearranged to $m^2e^{1/m-m}<1$, which is easily verified because the left-hand side equals 1 if m=1 and is monotone decreasing in m.

To prove the left-hand side of (4.6), we show W(−me⁻ᵐ)<m−2. If m≥2, this is trivially satisfied because W(−me⁻ᵐ)<0 whenever m>1. If m<2 we use that 0<x<P is equivalent to φ(x)>x for any generating function. With x=2/m−1, this shows that $P_{Poi}=-m^{-1}W(-me^{-m})>\frac{2}{m}-1$ if and only if $\varphi_{Poi}(\tfrac{2}{m}-1)=e^{2-2m}>\frac{2}{m}-1$. The latter inequality is readily confirmed, for instance by showing that $\frac{d}{dm}\bigl(e^{2-2m}/(\tfrac{2}{m}-1)\bigr)=2e^{2-2m}(m-1)^2/(m-2)^2>0$.

Proof of Theorem 4.1

Proving (4.5) is equivalent to showing

$f_{Poi}(x)=\ln\varphi_{Poi}(x;m)-\ln\varphi_{FL}(x;\pi_m,\rho_m)\ge 0 \quad\text{for every } x\in[0,1],$ (4.9)

where we omit the dependence of fPoi on m. Proving (4.9) is simplified by the fact that ln φPoi(x;m)=m(x−1). We easily infer from the properties of φPoi stated above and the definition of φFL(x;πm,ρm) that fPoi(PPoi)=0, fPoi(1)=0, fPoi′(PPoi)=0, and fPoi′(1)=m−γPoi⁻¹<0, where we used (2.15) and the right-hand side of (4.6). A typical graph of fPoi is shown in Fig. 1.

Fig. 1.


The graph of the function fPoi(x) with m=1.5. Then πm≈0.506, ρm≈0.211, PPoi≈0.4172. As m decreases to 1, PPoi increases to 1, and (πm,ρm) approaches (1/3,1/3).

Now we show fPoi″(PPoi)=−(ln φFL)″(PPoi)>0. For a general fractional linear pgf we get $-(\ln\varphi_{FL})''(P_{FL})=\frac{(1-\pi)\pi^2(1-\pi-2\rho)}{(1-\rho)^2\rho^2}$ because PFL=ρ/π. This is positive if and only if π+2ρ<1. With π=πm and ρ=ρm from (4.4), we obtain after some calculation that πm+2ρm<1 if and only if

$\frac{(m-2W(-me^{-m}))(1+W(-me^{-m}))}{m-W(-me^{-m})^2}<1.$ (4.10)

Each of the three factors on the left hand side is positive if m>1 because -1<W(-me-m)<0. Therefore rearrangement shows that the inequality (4.10) is equivalent to W(-me-m)<m-2, which we proved in Lemma 4.3.

If we can show that fPoi‴(x)=−(ln φFL)‴(x)<0 for every 0<x<1, then fPoi′(x) is strictly concave and has exactly one zero between PPoi and 1, because fPoi′(PPoi)=0, fPoi″(PPoi)>0, and fPoi′(1)<0. Because fPoi(PPoi)=fPoi(1)=0 and fPoi′(1)<0, fPoi must have a local maximum at this zero. It also follows that fPoi′(x)<0 if 0<x<PPoi. Hence, fPoi(x)≥0 for every 0≤x≤1, with equality only at x=PPoi and x=1.

It remains to determine the sign of fPoi‴(x). It is straightforward to check (Sect. 2.2 in the supplementary Mathematica notebook) that

$(\ln\varphi_{FL})'''(x;\pi,\rho)=\frac{2(1-\pi)(1-\rho)\,d(x)}{(\rho(1-x)+(1-\pi)x)^3(1-\pi x)^3},$ (4.11)

where

$d(x)=\tfrac{3}{4}\bigl(2x\pi(1-\pi-\rho)-(1-\pi-\rho-\rho\pi)\bigr)^2+\tfrac{1}{4}(1-\pi)^2(1-\rho)^2.$ (4.12)

Therefore, (ln φFL)‴(x;π,ρ)>0 for every x∈(0,1) and every admissible pair (π,ρ). This finishes the proof.

We note that the proof implies that e⁻ᵐ=φPoi(0;m)>φFL(0;πm,ρm)=ρm, which is not easy to establish directly. It also implies that φPoi′(0)<φFL′(0).

In the proof above we showed that fPoi′(x)<0 if x∈[0,PPoi). By a simple calculation we infer that the maximum of the relative error (φPoi(x)−φFL(x))/φPoi(x) on the interval [0,PPoi] is achieved at x=0. At x=0, we find $\lim_{m\to 1}\varphi_{FL}(0;\pi_m,\rho_m)=\frac{1}{3}<\frac{1}{e}=\lim_{m\to 1}\varphi_{Poi}(0;m)$, which yields a relative error of ≈0.094 if m=1. If m=1+s and s is small, such as s<0.3, an accurate approximation of the maximum relative error is $1-\frac{e}{3}-\frac{e}{27}s\approx 0.094-0.101s$. The relative error decreases to 0 as m→∞ (unsurprisingly, because PPoi→0). Therefore, the upper bound in (3.8) is not an accurate approximation if m is close to 1 and n is small (see the case λ=0 in Fig. 5).

Series expansions in s of SPoi and γPoi when m=1+s, as well as upper and lower bounds for SPoi, are presented in Sect. 5.5.

Comparison of Pollak’s and Agresti’s bounds with our and the simple bound

For the Poisson distribution, the term φ″(Pφ)Pφ(1−γφⁿ)/γφ in Pollak’s bound d̄(n) in (3.18b) for (Pφ−Pφ(n))/γφⁿ simplifies to γPoi(1−γPoiⁿ). Therefore,

d¯(n)=PPoi1+γPoi(1-γPoin)2(1-γPoi), 4.13

whence

d¯=limnd¯(n)=PPoi1+12γPoi(1-γPoi)-1 4.14

is an upper bound for Harris’ constant d in (3.17). If m=1+s, then γPoi=1-s+23s2+O(s3) (Sect. 5.5), so that 1+12γPoi(1-γPoi)-1-1=2s-103s2+O(s3). We recall from (3.16) that our corresponding upper bound for d is PPoi(1-PPoi), where 1-PPoi=2s-83s2+O(s3). Thus, Pollak’s bound is slightly more accurate.

As already noted in Sect. 3.2, Agresti (1974) derived upper and lower bounds for the Poisson distribution with m<1. From the parameters of his bounds for the subcritical case, the parameters π and ρ for the fractional linear bounds in the supercritical case can be computed and are given in Sect. 2.3 of the supplementary Mathematica notebook. It turns out that the lower bounds for PPoi(n) of Pollak and Agresti coincide (as already noted by Agresti). Agresti states that his upper bound, which is also given in the supplementary notebook, performs favorably compared with Pollak’s.

Table 1 shows the relative errors $(S_{\mathrm{app}}(n)-S_{\mathrm{Poi}}(n))/S_{\mathrm{Poi}}(n)$ produced by the bounds discussed above, where $S_{\mathrm{Poi}}(n)$ is the exact value obtained by iteration of the pgf $\varphi_{\mathrm{Poi}}$. The data confirm that Pollak’s and Agresti’s bounds perform slightly better than ours. The reason is that both are based on fitting also the second derivative at $P_{\mathrm{Poi}}$: Agresti’s bounding fractional linear function not only coincides with $\varphi_{\mathrm{Poi}}$ at $P_{\mathrm{Poi}}$, but so do its first and second derivatives. Therefore, it cannot be a pgf in the supercritical case (indeed it exceeds 1 at $x=1$). Our method posits a bounding fractional linear pgf, whence only its value and its first derivative can be fitted at $P_{\mathrm{Poi}}$. Pollak does not construct bounding functions.

The binomial distribution

The binomial distribution has the pgf

$\varphi_{\mathrm{Bin}}(x;n,p)=(1-p+px)^n.$ (4.15)

We assume $n\ge2$ and $m_{\mathrm{Bin}}=np>1$. Let $P_{\mathrm{Bin}}$ denote the extinction probability, i.e., the unique solution of $\varphi_{\mathrm{Bin}}(x)=x$ in $(0,1)$. We set

$\xi=(P_{\mathrm{Bin}})^{1/n}$ (4.16)

and $p=\dfrac{1-\xi}{1-\xi^n}$, and parameterize $\varphi_{\mathrm{Bin}}$ by $n$ and $\xi$. Then

$\varphi_{\mathrm{Bin}}\Bigl(x;n,\dfrac{1-\xi}{1-\xi^n}\Bigr)=\Bigl(\dfrac{x(1-\xi)+(\xi-\xi^n)}{1-\xi^n}\Bigr)^n.$ (4.17)

From eqs. (3.3) and (3.4), we infer that the fractional linear pgf $\varphi_{\mathrm{FL}}(x;\pi_{\mathrm{Bin}},\rho_{\mathrm{Bin}})$ with the parameters

$\pi_{\mathrm{Bin}}=\dfrac{\xi(1-\xi^n+n\xi^n)-n\xi^n}{\xi(1-\xi^n+n\xi^{2n})-n\xi^{2n}}\quad\text{and}\quad\rho_{\mathrm{Bin}}=\xi^n\pi_{\mathrm{Bin}}$ (4.18)

has the same extinction probability, $P_{\mathrm{Bin}}$, and the same rate of convergence,

$\gamma_{\mathrm{Bin}}=\dfrac{n\xi^n(1-\xi)}{\xi(1-\xi^n)},$ (4.19)

as the binomial.
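The parameterization by $n$ and $\xi$, and the expression (4.19) for $\gamma_{\mathrm{Bin}}$, can be checked numerically; a minimal Python sketch with the illustrative values $n=5$, $m_{\mathrm{Bin}}=1.2$ (the paper’s own computations are in the supplementary Mathematica notebook):

```python
n, m = 5, 1.2
p = m/n                                   # so that m_Bin = np > 1
phi = lambda x: (1.0 - p + p*x)**n
P = 0.0                                   # extinction probability by fixed-point iteration
for _ in range(500):
    P = phi(P)
xi = P**(1.0/n)
# consistency of the parameterization: p = (1 - xi)/(1 - xi^n)
assert abs(p - (1.0 - xi)/(1.0 - xi**n)) < 1e-10
# rate of convergence: phi'(P) agrees with eq. (4.19)
dphi = n*p*(1.0 - p + p*P)**(n - 1)
gamma = n*xi**n*(1.0 - xi)/(xi*(1.0 - xi**n))
assert abs(dphi - gamma) < 1e-10
```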

In Appendix A, we prove

Theorem 4.4

For every $n\ge2$ and every $\xi\in(0,1)$, the pgfs $\varphi_{\mathrm{Bin}}$ and $\varphi_{\mathrm{FL}}$ satisfy

$\varphi_{\mathrm{FL}}(x;\pi_{\mathrm{Bin}},\rho_{\mathrm{Bin}})\le\varphi_{\mathrm{Bin}}\Bigl(x;n,\dfrac{1-\xi}{1-\xi^n}\Bigr)\quad\text{for every }x\in[0,1].$ (4.20)

Equality holds if and only if $x=P_{\mathrm{Bin}}$ or $x=1$.

Proposition 3.1 immediately yields

Corollary 4.5

Given a binomial offspring distribution with mean mBin>1, the probability of extinction by generation τ, PBin(τ), satisfies the inequality (3.7), and the probability of survival up to generation τ, SBin(τ), satisfies the inequality (3.8), each with φ=φBin.

Series expansions in s of SBin and γBin when mBin=1+s, as well as upper and lower bounds of SBin, are presented in Sect. 5.6.

The negative binomial distribution

The negative binomial distribution has the pgf

$\varphi_{\mathrm{NB}}(x;r,p)=\dfrac{p^r}{(1-(1-p)x)^r}.$ (4.21)

The mean and the variance are $m_{\mathrm{NB}}=r\frac{1-p}{p}$ and $\sigma_{\mathrm{NB}}^2=r\frac{1-p}{p^2}$, respectively. Because for $r=1$ a geometric distribution is obtained, and nothing remains to be proved, we assume $r\ge2$ and $m_{\mathrm{NB}}>1$. Let $P_{\mathrm{NB}}$ denote the extinction probability, i.e., the unique solution of $\varphi_{\mathrm{NB}}(x)=x$ in $(0,1)$. We set

$\zeta=(P_{\mathrm{NB}})^{1/r}$ (4.22)

and $p=\dfrac{\zeta(1-\zeta^r)}{1-\zeta^{r+1}}$, and parameterize the negative binomial distribution by $r$ and $\zeta$. By our general assumptions we have $0<\zeta<1$. Then

$\varphi_{\mathrm{NB}}\Bigl(x;r,\dfrac{\zeta(1-\zeta^r)}{1-\zeta^{r+1}}\Bigr)=\Bigl(\dfrac{\zeta(1-\zeta^r)}{1-\zeta^{r+1}-x(1-\zeta)}\Bigr)^r$ (4.23)

and $m_{\mathrm{NB}}=\dfrac{r(1-\zeta)}{\zeta(1-\zeta^r)}$. Straightforward algebra shows that

$\gamma_{\mathrm{NB}}=\dfrac{r(1-p)P_{\mathrm{NB}}}{1-(1-p)P_{\mathrm{NB}}}=\dfrac{r(1-\zeta)\zeta^r}{1-\zeta^r}.$ (4.24)

From eqs. (3.3) and (3.4), we infer that the fractional linear pgf $\varphi_{\mathrm{FL}}(x;\pi_{\mathrm{NB}},\rho_{\mathrm{NB}})$ that has the same extinction probability, $P_{\mathrm{NB}}$, and the same rate of convergence, $\gamma_{\mathrm{NB}}$, as the negative binomial has the parameters

$\pi_{\mathrm{NB}}=\dfrac{1-[1+r(1-\zeta)]\zeta^r}{1-[1+r(1-\zeta)\zeta^r]\zeta^r}\quad\text{and}\quad\rho_{\mathrm{NB}}=\zeta^r\pi_{\mathrm{NB}}.$ (4.25)
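Analogously, the identities (4.22)–(4.24) can be verified numerically (illustrative values $r=5$, $m_{\mathrm{NB}}=1.2$; a Python sketch in place of the Mathematica notebook):

```python
r, m = 5, 1.2
p = r/(r + m)                             # then m_NB = r(1-p)/p = m
phi = lambda x: (p/(1.0 - (1.0 - p)*x))**r
P = 0.0                                   # extinction probability by iteration
for _ in range(1000):
    P = phi(P)
zeta = P**(1.0/r)
# parameterization p = zeta(1 - zeta^r)/(1 - zeta^{r+1})
assert abs(p - zeta*(1.0 - zeta**r)/(1.0 - zeta**(r + 1))) < 1e-10
# mean in terms of zeta, and the two forms of gamma_NB in eq. (4.24)
assert abs(r*(1.0 - zeta)/(zeta*(1.0 - zeta**r)) - m) < 1e-9
g1 = r*(1.0 - p)*P/(1.0 - (1.0 - p)*P)
g2 = r*(1.0 - zeta)*zeta**r/(1.0 - zeta**r)
assert abs(g1 - g2) < 1e-10
```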

In Appendix B, we prove

Theorem 4.6

For every $r\ge2$ and every $\zeta\in(0,1)$, the pgfs $\varphi_{\mathrm{NB}}$ and $\varphi_{\mathrm{FL}}$ satisfy

$\varphi_{\mathrm{FL}}(x;\pi_{\mathrm{NB}},\rho_{\mathrm{NB}})\le\varphi_{\mathrm{NB}}\Bigl(x;r,\dfrac{\zeta(1-\zeta^r)}{1-\zeta^{r+1}}\Bigr)\quad\text{for every }x\in[0,P_{\mathrm{NB}}].$ (4.26)

Equality holds if and only if $x=P_{\mathrm{NB}}$.

Proposition 3.1 immediately yields

Corollary 4.7

Given a negative binomial offspring distribution with mean mNB>1, the probability of extinction by generation n, PNB(n), satisfies the inequality (3.7), and the probability of survival up to generation n, SNB(n), satisfies the inequality (3.8), each with φ=φNB.

We conjecture that the inequality in (4.26) holds for every $x\in[0,1]$, although our proof yields it only for a smaller interval that contains $[0,P_{\mathrm{NB}}]$. However, we show in Appendix 1 that (4.26) is valid for every $x\in[0,1]$ if $r=2,\ldots,6$. In addition, we prove that $m_{\mathrm{NB}}\gamma_{\mathrm{NB}}<1$, which implies $m_{\mathrm{NB}}P_{\mathrm{NB}}<1$ and that (4.26) holds for $x$ sufficiently close to 1; see (3.14). Also the convergence of the negative binomial to the Poisson distribution as $r\to\infty$ (with $m_{\mathrm{NB}}$ fixed) supports our conjecture.

Series expansions in s of SNB and γNB when mNB=1+s, as well as upper and lower bounds for SNB, are presented in Sect. 5.7.

Offspring distributions with at most three offspring

Here we investigate offspring distributions $\{p_k\}$ satisfying

$p_0>0,\quad p_3>0,\quad\text{and}\quad p_k=0\ \text{if}\ k\ge4.$ (4.27)

We exclude the trivial case $p_0=0$ and the simple case $p_3=0$, which is treated separately in Remark 4.13. As a consequence, we have $0<p_2+p_3<1$. The main results are Theorem 4.11 and Corollary 4.12, which provide a complete characterization of when the sequence of probabilities $P(n)$ can be bounded from below as in Proposition 3.1, from above as in Remark 3.2, or neither. The formulation of the main results requires considerable preparation. Illustrations of the main results are shown in Figures 2 and 3.

Fig. 2.


Possible shapes of graphs of $f_{F3}(x)=\varphi_{F3}(x)-\varphi_{\mathrm{FL}}(x)$. All possible cases are obtained by choosing $p_0=p_2$, $p_3=\tfrac12p_2$, and varying $p_2$. Then the relations between $p_0$, $p_2$, and $p_3$ are retained as $p_2$ or $p_1=1-\tfrac52p_2$ varies (e.g., $p_1=0.625$ in panel A). In the degenerate case of panel D, we have $p_0=p_0^{(+)}<p_0^{(r)}$, so that $f_{F3}''(P_{F3})=0$ and $f_{F3}'''(P_{F3})>0$. In the degenerate case of panel F, we have $f_{F3}'(1)=0$ and $f_{F3}''(1)<0$. In addition to the indicated relations, $p_0^{(r)}>p_0^{(+)}>0$ holds in A and B, $p_0^{(+)}>p_0^{(r)}$ in F and G, and $p_0>p_0^{(\gamma)}$ in A – E. In all cases, $P_{F3}=\tfrac12(\sqrt{17}-3)\approx0.56155$, and $m_{F3}=1+p_2$. Panel A applies if $0.4\ge p_2>\tfrac12(1-3/\sqrt{17})$, and the lower bound of this interval yields the critical case B. The critical case D occurs if $p_2=\tfrac{2}{17}$, F applies if $p_2=-\tfrac12+\tfrac{5}{34}\sqrt{17}$, and G applies for all smaller values of $p_2$. The values of $f_{F3}(0)$ are $0.00081$, $-0.00222$, $-0.002959$, $-0.003273$, and $-0.00376$ in panels C, D, E, F, and G, respectively. Note that the vertical scales in A and B differ from those in the other panels

Fig. 3.


The three regions defined in Corollary 4.12, shown from two angles in panels A and B. The region defined by (4.53) is shown in shades of yellow and brown. Here, the extinction probability $P_{F3}(n)$ can be bounded from below by the fractional linear extinction probability $P_{\mathrm{FL}}(n)$ obtained from (4.32). The yellow plane in A is the boundary $p_0+p_2+p_3=1$ ($p_1=0$). The region defined by (4.54) is shown in shades of red. Here, $P_{F3}(n)$ cannot be bounded by $P_{\mathrm{FL}}(n)$ from one side for all $n$. The region defined by (4.55) is shown in shades of green. Here, $P_{F3}(n)$ is bounded from above by $P_{\mathrm{FL}}(n)$. The boundary plane $p_0=p_2+2p_3$ ($m_{F3}=1$) is visible in A, close to the bottom of the cube

We express all relevant functions in terms of $p_0$, $p_2$, and $p_3$ by setting $p_1=1-p_0-p_2-p_3$. Then the pgf is

$\varphi_{F3}(x)=\varphi_{F3}(x;p_0,p_2,p_3)=p_0+(1-p_0-p_2-p_3)x+p_2x^2+p_3x^3,$ (4.28)

the expected number of offspring is

$m_{F3}=\varphi_{F3}'(1)=1-p_0+p_2+2p_3,$ (4.29)

where $\varphi_{F3}'$ always refers to the derivative with respect to $x$. Throughout, we assume $m_{F3}>1$, i.e., $p_0<p_2+2p_3$. The probability of (ultimate) extinction is

$P_{F3}=\dfrac{\sqrt{4p_0p_3+(p_2+p_3)^2}-(p_2+p_3)}{2p_3}.$ (4.30)

Our assumptions imply $0<P_{F3}<1$.

Following (3.1), we define $\gamma_{F3}=\varphi_{F3}'(P_{F3})$, which is the rate of convergence of $P_{F3}(n)$ to $P_{F3}$. A straightforward calculation yields

$\gamma_{F3}=1-\dfrac{(p_2+3p_3)\sqrt{4p_0p_3+(p_2+p_3)^2}-4p_0p_3-(p_2+p_3)^2}{2p_3}.$ (4.31)

In the limit $p_3\to0$, we obtain $\gamma_{F3}\to1+p_0-p_2$. With the help of Section 5 of the supplementary Mathematica notebook all formulas can be expeditiously verified.
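In place of the notebook, the closed-form expressions (4.30)–(4.32) can also be spot-checked in a few lines of Python (the point $(p_0,p_2,p_3)=(0.2,0.2,0.1)\in R$ is arbitrary):

```python
from math import sqrt

p0, p2, p3 = 0.2, 0.2, 0.1                # a point in R; m_F3 = 1.2
p1 = 1.0 - p0 - p2 - p3
phi  = lambda x: p0 + p1*x + p2*x**2 + p3*x**3
dphi = lambda x: p1 + 2*p2*x + 3*p3*x**2

D = 4*p0*p3 + (p2 + p3)**2
P = (sqrt(D) - (p2 + p3))/(2*p3)          # eq. (4.30)
assert abs(phi(P) - P) < 1e-12

gamma = 1.0 - ((p2 + 3*p3)*sqrt(D) - D)/(2*p3)       # eq. (4.31)
assert abs(dphi(P) - gamma) < 1e-12

rho = 2*p0*sqrt(D)/((p2 + p3) + (1 + 2*p0)*sqrt(D))  # eq. (4.32)
pi = rho/P
phi_fl = lambda x: (rho + (1.0 - pi - rho)*x)/(1.0 - pi*x)
assert abs(phi_fl(P) - P) < 1e-12                    # same extinction probability
assert abs((1.0 - pi)*(1.0 - rho)/(1.0 - pi*P)**2 - gamma) < 1e-12  # same gamma
```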

We begin by defining the prospective bounding fractional linear pgf $\varphi_{\mathrm{FL}}(x;\pi_{F3},\rho_{F3})$. Following Proposition 3.1, we require the conditions in (3.2), i.e., $P_{\mathrm{FL}}=P_{F3}$ and $\gamma_{\mathrm{FL}}=\gamma_{F3}$. These hold if and only if the parameters $\pi=\pi_{F3}$ and $\rho=\rho_{F3}$ of $\varphi_{\mathrm{FL}}$ are

$\rho_{F3}=\dfrac{2p_0\sqrt{4p_0p_3+(p_2+p_3)^2}}{(p_2+p_3)+(1+2p_0)\sqrt{4p_0p_3+(p_2+p_3)^2}}\quad\text{and}\quad\pi_{F3}=\dfrac{\rho_{F3}}{P_{F3}}.$ (4.32)

Throughout, we consider the following region of admissible parameters:

$R=\{(p_0,p_2,p_3):\ p_0>0,\ p_2\ge0,\ p_3>0,\ p_0+p_2+p_3\le1,\ p_0<p_2+2p_3\}.$ (4.33)

A triple $(p_0,p_2,p_3)\in R$ defines a probability distribution with $m_{F3}>1$ and $0<P_{F3}<1$.

Our main goal will be to determine when

$f_{F3}(x)=\varphi_{F3}(x;p_0,p_2,p_3)-\varphi_{\mathrm{FL}}(x;\pi_{F3},\rho_{F3})$ (4.34)

is positive or negative. (We use properties such as positive, increasing, or convex in the strict sense.) We recall that our construction of $\varphi_{\mathrm{FL}}$ implies

$f_{F3}(P_{F3})=0,\quad f_{F3}'(P_{F3})=0,\quad\text{and}\quad f_{F3}(1)=0.$ (4.35)

We define the following quantities:

$p_0^{(+)}=\dfrac{p_3-(p_2+p_3)^2}{4p_3},$ (4.36)
$p_0^{(r)}=\dfrac12-\dfrac{p_2+p_3}{8p_3}\Bigl(p_2+p_3+\sqrt{8p_3+(p_2+p_3)^2}\Bigr),$ (4.37)

and

$p_0^{(\gamma)}=\dfrac12-\dfrac{1}{8p_3}\Bigl(2(p_2+p_3)^2+(p_2+3p_3)\sqrt{8p_3+(p_2+3p_3)^2}-(p_2+3p_3)^2\Bigr).$ (4.38)

In the following remark, the meaning of these quantities is explained.

Remark 4.8

(a) $p_0^{(+)}$ is the only potentially admissible solution of $f_{F3}''(P_{F3};p_0,p_2,p_3)=0$, i.e., such that $(p_0^{(+)},p_2,p_3)\in R$ under suitable conditions. This follows from $f_{F3}''(P_{F3})=\dfrac{1}{p_3}\bigl(4p_0p_3+(p_2+p_3)^2-p_3\bigr)\Bigl(p_2+3p_3-\sqrt{4p_0p_3+(p_2+p_3)^2}\Bigr)$ because the second factor yields $p_0^{(+)}$, and the last factor is positive if $p_0<p_2+2p_3$. We note that

$p_0>p_0^{(+)}\quad\text{if and only if}\quad f_{F3}''(P_{F3};p_0,p_2,p_3)>0.$ (4.39)

Therefore, $f_{F3}(x)>0$ near the critical point $x=P_{F3}$ if and only if $p_0>p_0^{(+)}$.

(b) $p_0^{(r)}$ is the only potentially admissible solution of $p_0=\rho_{F3}$. We note that $f_{F3}(0)=p_0-\rho_{F3}$. Therefore,

$p_0>p_0^{(r)}\quad\text{if and only if}\quad f_{F3}(0)>0.$ (4.40)

(c) $p_0^{(\gamma)}$ is the only potentially admissible solution $p_0$ of $\gamma_{F3}m_{F3}=1$ (the proof is outlined in eq. (C.2) of the appendix; a further solution is $p_0=p_2+2p_3$, which has multiplicity two and is not admissible). We recall from (3.14) that $\gamma_{F3}m_{F3}\le1$ if and only if $f_{F3}'(1)\le0$. This is a necessary condition for $f_{F3}(x)$ to be positive for $x\in(P_{F3},1)$.
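The defining properties of $p_0^{(r)}$ and $p_0^{(\gamma)}$ are easily confirmed numerically; a Python sketch with the arbitrary choice $p_2=0.1$, $p_3=0.05$:

```python
from math import sqrt

p2, p3 = 0.1, 0.05

# p0^(gamma) from eq. (4.38): there, gamma_F3 * m_F3 = 1
p0g = 0.5 - (2*(p2 + p3)**2 + (p2 + 3*p3)*sqrt(8*p3 + (p2 + 3*p3)**2)
             - (p2 + 3*p3)**2)/(8*p3)
D = 4*p0g*p3 + (p2 + p3)**2
gamma = 1.0 - ((p2 + 3*p3)*sqrt(D) - D)/(2*p3)
m = 1.0 - p0g + p2 + 2*p3
assert abs(gamma*m - 1.0) < 1e-10

# p0^(r) from eq. (4.37): there, p0 = rho_F3, i.e., f_F3(0) = 0
p0r = 0.5 - (p2 + p3)/(8*p3)*((p2 + p3) + sqrt(8*p3 + (p2 + p3)**2))
Dr = 4*p0r*p3 + (p2 + p3)**2
rho = 2*p0r*sqrt(Dr)/((p2 + p3) + (1 + 2*p0r)*sqrt(Dr))
assert abs(p0r - rho) < 1e-10

assert p0g < p0r          # inequality (4.44)
```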

In the following remark we summarize the admissibility conditions of and the relations between p0(+), p0(r), and p0(γ).

Remark 4.9

(a) We have $p_0^{(+)}>0$ if and only if

$p_2<\sqrt{p_3}-p_3.$ (4.41)

Because we assume $p_3>0$, we have $p_0^{(+)}<\tfrac14$. Note that if $p_0^{(+)}>0$ then $p_2<\tfrac14$.

(b) We have $p_0^{(r)}>0$ if and only if (4.41) holds. Indeed, a simple calculation shows that $p_0^{(r)}>p_0^{(+)}$ if and only if $p_0^{(+)}>0$, and $p_0^{(r)}=p_0^{(+)}$ if and only if $p_0^{(+)}=0$. Straightforward algebra also shows that $p_0^{(r)}+p_2+p_3<1$ always holds (because $0<p_2+p_3<1$) and that $p_0^{(r)}<p_2+2p_3$ if and only if

$p_3>\tfrac16\quad\text{or}\quad p_2>\tfrac12\bigl(\sqrt{p_3(4+p_3)}-5p_3\bigr),$ (4.42)

where the maximum value $p_2+p_3=3-2\sqrt{2}\approx0.172$ on this boundary is attained at $p_3=3/\sqrt{2}-2\approx0.121$.

(c) We have $p_0^{(\gamma)}>0$ if and only if

$p_3<\tfrac12\quad\text{and}\quad p_2<\tfrac12\bigl(\sqrt{p_3(4+p_3)}-3p_3\bigr).$ (4.43)

Furthermore,

$p_0^{(\gamma)}<p_0^{(r)}$ (4.44)

always holds (because $p_3>0$).

(d) The inequalities $p_0^{(\gamma)}<p_0^{(+)}$, $p_0^{(+)}<p_2+2p_3$, and $p_0^{(\gamma)}<p_2+2p_3$ are equivalent and hold if and only if

$p_3>\tfrac19\quad\text{or}\quad p_2>\sqrt{p_3}-3p_3.$ (4.45)

Hence, $p_0^{(+)}<p_2+2p_3$ is incompatible with $p_0^{(+)}\le p_0^{(\gamma)}$. If $p_3>\tfrac19$ or if $p_2>\sqrt{p_3}-3p_3$ (which is a restriction only if $p_3<\tfrac19$), then $p_0^{(\gamma)}<p_2+2p_3$ holds.

(e) The following relations hold between the bounds presented above:

$\sqrt{p_3}-3p_3<\tfrac12\bigl(\sqrt{p_3(4+p_3)}-5p_3\bigr)<\tfrac12\bigl(\sqrt{p_3(4+p_3)}-3p_3\bigr)<\sqrt{p_3}-p_3.$ (4.46)

These are valid for all $0<p_3<1$; equality holds in all cases if $p_3=0$.

Here is a key lemma:

Lemma 4.10

In the region R the following cases can be distinguished:

(1) $p_0>\rho_{F3}$ and $\gamma_{F3}m_{F3}<1$ in $R$ if and only if

$\max\{p_0^{(r)},0\}<p_0<\min\{1-p_2-p_3,\,p_2+2p_3\},$ (4.47)

where $p_0^{(r)}<p_2+2p_3$ requires (4.42).

(2) $p_0=\rho_{F3}$ and $\gamma_{F3}m_{F3}<1$ in $R$ if and only if

$0<p_0^{(r)}=p_0<p_2+2p_3,$ (4.48)

where $0<p_0^{(r)}<p_2+2p_3$ requires (4.41) and (4.42).

(3) $p_0<\rho_{F3}$ and $\gamma_{F3}m_{F3}<1$ in $R$ if and only if

$\max\{p_0^{(\gamma)},0\}<p_0<\min\{p_0^{(r)},\,p_2+2p_3\},$ (4.49)

where $p_0^{(\gamma)}>0$ if and only if (4.43) holds, and $p_0^{(\gamma)}<p_2+2p_3$ requires (4.45). The following three subcases occur:

$\max\{p_0^{(\gamma)},0\}<p_0^{(+)}<p_0<\min\{p_0^{(r)},\,p_2+2p_3\},$ (4.50a)
$\max\{p_0^{(\gamma)},0\}<p_0^{(+)}=p_0<\min\{p_0^{(r)},\,p_2+2p_3\},$ (4.50b)
$\max\{p_0^{(\gamma)},0\}<p_0<p_0^{(+)}<\min\{p_0^{(r)},\,p_2+2p_3\}.$ (4.50c)

(4) $p_0<\rho_{F3}$ and $\gamma_{F3}m_{F3}=1$ in $R$ if and only if

$0<p_0=p_0^{(\gamma)}<p_2+2p_3,$ (4.51)

where $0<p_0^{(\gamma)}<p_2+2p_3$ holds if and only if (4.43) and (4.45) are satisfied.

(5) $p_0<\rho_{F3}$ and $\gamma_{F3}m_{F3}>1$ in $R$ if and only if

$0<p_0<\min\{p_0^{(\gamma)},\,p_2+2p_3\}.$ (4.52)

(6) $p_0\ge\rho_{F3}$ and $\gamma_{F3}m_{F3}\ge1$ cannot occur in $R$.

In cases (2) – (5), $p_0<1-p_2-p_3$ is satisfied if the respective display equation is fulfilled.

The elementary but tedious proof is given in Appendix C. The following theorem characterizes the sign structure of the function fF3(x) defined in (4.34). Figure 2 illustrates all cases.

Theorem 4.11

We assume that $(p_0,p_2,p_3)\in R$.

(1) $f_{F3}(x)\ge0$ on $[0,1]$ occurs in cases (1) and (2) of Lemma 4.10; see Fig. 2A,B.

(2) $f_{F3}(x)$ changes sign once on $[0,1]$ in case (3) of Lemma 4.10. The following subcases occur:

   (i) The sign change occurs below $P_{F3}$ if $p_0>p_0^{(+)}$; see Fig. 2C.

   (ii) The sign change occurs at $P_{F3}$ if $p_0=p_0^{(+)}$; see Fig. 2D.

   (iii) The sign change occurs above $P_{F3}$ if $p_0<p_0^{(+)}$; see Fig. 2E.

(3) $f_{F3}(x)\le0$ on $[0,1]$ occurs in cases (4) and (5) of Lemma 4.10; see Fig. 2F,G.

The proof of this theorem is given in Appendix C. In combination with Proposition 3.1 and Lemma 4.10, Theorem 4.11 immediately yields the desired characterization concerning lower and upper bounds for the extinction probabilities $P_{F3}(n)$.

Corollary 4.12

Assume the probability distribution defined in (4.27) with the additional constraint p0<p2+2p3. Then the extinction probability by generation n, PF3(n), has the following properties:

(1) $P_{F3}(n)$ satisfies (3.7) for every $n\ge0$ if and only if

$p_0^{(r)}\le p_0<1-p_2-p_3\quad\text{and}\quad 0<p_0<p_2+2p_3.$ (4.53)

(2) $P_{F3}(n)$ satisfies (3.7) for large $n$, and (3.10) for small $n$, if and only if

$p_0^{(\gamma)}<p_0<p_0^{(r)}\quad\text{and}\quad 0<p_0<p_2+2p_3.$ (4.54)

(3) $P_{F3}(n)$ satisfies (3.10) for every $n\ge0$ if and only if

$p_0\le p_0^{(\gamma)}\quad\text{and}\quad 0<p_0<p_2+2p_3.$ (4.55)

Analogous statements hold for $S_{F3}(n)$, the survival probability until generation $n$ (cf. Proposition 3.1). To relate the cases in Corollary 4.12 to each other, it is useful to recall from Remark 4.9(b) that $p_0^{(+)}<p_0^{(r)}$ if and only if $0<p_0^{(+)}$.

Remark 4.9 informs us that (4.42) is a necessary condition for (4.53) to hold, and $p_2<\sqrt{p_3}-p_3$ is necessary for (4.54) and (4.55). Furthermore, (4.53) implies $p_2\le\tfrac12$ and $p_0\le\tfrac23$, where $p_0=\tfrac23$ is attained if $p_2=0$ and $p_3=\tfrac13$. Next, (4.54) implies $p_2<\tfrac14$ and $p_0<\tfrac13$, where the supremum $\tfrac13$ of $p_0$ is attained at $p_2=0$ and $p_3=\tfrac16$. Finally, (4.55) implies $p_2<\tfrac14$ and $p_0<\tfrac29$, where the supremum $\tfrac29$ of $p_0$ is attained at $p_2=0$ and $p_3=\tfrac19$.

Figure 3 displays the regions defined in statements (1), (2), and (3) of Corollary 4.12, which are the same as those in (1), (2), and (3) of Theorem 4.11. The volume of the region defined in (1) is approximately 86.6% of the total volume of R; that of the region in (2) is approximately 10.2%, and the volume of the region in (3) is approximately 3.2% of the total volume of R.

Remark 4.13

The case $p_3=0$ and $p_2>p_0>0$ is treated readily. Retaining the notation from above, we obtain $m_{F3}=1-p_0+p_2>1$, $P_{F3}=p_0/p_2$, and $\gamma_{F3}=1+p_0-p_2<1/m_{F3}$. Defining $f(x)$ in analogy to (4.34), where now $\rho_{F3}=\dfrac{p_0}{1+p_0}$ and $\pi_{F3}=\dfrac{p_2}{1+p_0}$, we obtain $f(x)=\dfrac{(1-x)(p_0-p_2x)^2}{1+p_0-p_2x}$. Obviously, we have $f(P_{F3})=f(1)=0$, $f'(P_{F3})=0$, $f''(P_{F3})>0$, and it follows immediately that $f(x)\ge0$ on $[0,1]$. Graphs look similar to that in Fig. 2A. In particular, $P_{F3}(n)$ satisfies (3.7) for every $n\ge0$.
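A quick numerical check of the closed form of $f(x)$ in this remark (Python sketch; the values $p_0=0.1$, $p_2=0.2$ are arbitrary):

```python
p0, p2 = 0.1, 0.2                         # p2 > p0 > 0, p3 = 0
phi    = lambda x: p0 + (1.0 - p0 - p2)*x + p2*x**2
rho, pi = p0/(1.0 + p0), p2/(1.0 + p0)
phi_fl = lambda x: (rho + (1.0 - pi - rho)*x)/(1.0 - pi*x)
f      = lambda x: (1.0 - x)*(p0 - p2*x)**2/(1.0 + p0 - p2*x)
for k in range(101):
    x = k/100
    # the closed form agrees with phi - phi_FL, and f >= 0 on [0,1]
    assert abs((phi(x) - phi_fl(x)) - f(x)) < 1e-12
    assert f(x) >= -1e-15
```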

Bounds and approximations for Sφ

Analytical expressions for the extinction probability $P_\varphi$, hence for the survival probability $S_\varphi$, are rarely available. Therefore, the bounds for the extinction and the survival probabilities up to generation $n$ derived on the basis of Proposition 3.1 yield little detailed insight. Of course, numerical evaluation is simple and straightforward. Also the bound for the minimum time $T_\varphi(\epsilon)$ in (3.13), after which survival is ‘almost’ certain in the sense of (3.12), depends on $P_\varphi$ and $\gamma_\varphi$. Several authors have derived bounds and approximations for the extinction probability. A classical result is Haldane’s (1927) approximation: he argued that for a Poisson offspring distribution, the probability of survival of a single mutant with a small selective advantage $s$ is approximately $2s$.

We start by presenting results of Quine (1976) and Daley and Narayan (1980), which yield very accurate bounds and approximations for rather general offspring distributions, especially in the slightly supercritical case. In Sect. 5.3, we assume $\varphi'(1-)=m=1+s$ and that $\varphi$ can be parameterized by $s$ (and other parameters). We derive series expansions of $S_\varphi$ and $\gamma_\varphi$ in terms of $s$ by assuming that $s$ is sufficiently small. Then we highlight the relation to old and recent results on Haldane’s approximation for $S_\varphi$. In Sect. 5.4 we briefly discuss the relation of survival probabilities in the Galton-Watson process to the diffusion approximation for fixation probabilities in a finite Wright-Fisher population. In Sects. 5.5, 5.6, 5.7, and 5.8, we specify the upper and lower bounds of Quine and of Daley and Narayan, as well as the series expansions of $S_\varphi$ and $\gamma_\varphi$, for the Poisson, the binomial, the negative binomial, and the generalized Poisson distribution, respectively. The accuracy of these bounds and approximations for $S_\varphi$ is investigated numerically (Table 2). In Sect. 5.8 we apply the series expansion method to the generalized Poisson distribution and obtain analytical results on the validity of the upper or lower bound, (3.8) or (3.11), for the time-dependent survival probabilities.

Table 2.

The table shows values of $S_\varphi$ and its bounds and approximations for $s=0.2$. The data confirm the analytical results of Quine (1976) and Daley and Narayan (1980) that $\beta\le L_\varphi^Q\le S_\varphi\le U_\varphi^{DN}$. $S_{\mathrm{ser}}$ denotes the series expansion up to order $s^3$ given in (5.10). Its relative error to $S_\varphi$ is always smaller than that of $L_\varphi^Q$ and also than that of $U_\varphi^{DN}$, except for $\varphi_{\mathrm{Bin}}$. Note that $S_{\mathrm{ser}}<S_{\mathrm{GP}}$ if $\lambda=0.5$ and $\lambda=0.9$ because then the coefficient $\delta_2$ is negative; see the text below (5.32). For $\varphi_{\mathrm{GP}}$ with $\lambda=0.9$, $U_{\mathrm{GP}}^{DN}$ yields a complex value because condition (5.6) is violated. The last column contains an example in which the variance of the offspring distribution is very small, so that $\theta$ is large ($\theta=4$). For simplicity, we chose a fractional linear distribution, for which $S_{\mathrm{FL}}=\frac{1-\pi}{\pi}s$. In order to achieve $m=1+s$, we chose $\rho=\pi(1+s)-s$; see Sect. 2.3. Also in this case, the approximations are quite accurate, even if not needed for this distribution. The last line shows the generalized version of Haldane’s approximation.

s=0.2        φBin (n=5)   φNB (r=5)   φGP (λ=0)   φGP (λ=0.2)   φGP (λ=0.5)   φGP (λ=0.9)   φFL (π=0.2)
β            0.3472       0.2315      0.2778      0.1891        0.0794        0.00333       0.6667
L^Q_φ        0.3673       0.2444      0.2936      0.1993        0.0832        0.00346       0.7018
S_φ          0.3804       0.2668      0.3137      0.2228        0.1003        0.00466       0.8000
S_ser        0.3875       0.2670      0.3182      0.2228        0.1001        0.00466       0.8000
U^DN_φ       0.3823       0.2733      0.3183      0.2317        0.1158        –             0.8453
θs           0.5000       0.3333      0.4000      0.2560        0.1000        0.00400       0.8000

Quine’s bounds

Throughout this and the subsequent sections let $m=\varphi'(1-)$ denote the mean of $\varphi$, $b=\varphi''(1-)$, $c=\varphi'''(1-)$, and $\sigma^2=b+m-m^2$ the variance. In addition to our general assumptions $m>1$ and $0<\sigma^2<\infty$, we assume $0<c<\infty$. We define the quantity

$\beta:=\dfrac{2(m-1)}{b}.$ (5.1)

Under the assumption

$2\beta<\min\Bigl\{1,\dfrac{3b}{2c}\Bigr\},$ (5.2)

Quine (1976, Theorem 2) derived the following lower and upper bounds for the survival probability $S_\varphi$:

$L_\varphi^Q<S_\varphi<U_\varphi^Q,$ (5.3)

where

$L_\varphi^Q=\beta+\dfrac{\beta^2\varphi'''(1-2\beta)}{3b},$ (5.4)
$U_\varphi^Q=\beta+\dfrac{\beta^2c}{3b}\Bigl(1-\dfrac{4c}{3b}\beta\Bigr)^{-3/2}.$ (5.5)

(Quine formulated his result for P=1-S, and he used ϕ instead of β.)

Daley and Narayan’s upper bound for Sφ

Daley and Narayan (1980, Lemma 3) proved that if

$8c(m-1)<3b^2,$ (5.6)

which is equivalent to $2\beta<\dfrac{3b}{2c}$, then

$S_\varphi<U_\varphi^{DN},$ (5.7)

where

$U_\varphi^{DN}=\dfrac{3b-3\sqrt{b^2-\tfrac83c(m-1)}}{2c}.$ (5.8)

They also showed that condition (5.6) cannot be satisfied if $m\ge3.2$. In addition, they derived a lower bound. It is easy to show that $U_\varphi^{DN}<U_\varphi^Q$ whenever (5.6) holds (and $m>1$).
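For the Poisson offspring distribution with $m=1.2$ (the case $s=0.2$ of Table 2), the bounds of this and the preceding subsection are reproduced by the following Python sketch:

```python
import math

m = 1.2                       # s = m - 1 = 0.2
b, c = m**2, m**3             # phi''(1), phi'''(1) for the Poisson pgf
beta = 2*(m - 1)/b            # eq. (5.1)
assert 2*beta < min(1.0, 3*b/(2*c))          # Quine's condition (5.2)
assert 8*c*(m - 1) < 3*b**2                  # Daley-Narayan's condition (5.6)

phi3 = lambda x: m**3*math.exp(m*(x - 1.0))  # third derivative of the pgf
LQ  = beta + beta**2*phi3(1 - 2*beta)/(3*b)            # eq. (5.4)
UDN = (3*b - 3*math.sqrt(b*b - 8*c*(m - 1)/3))/(2*c)   # eq. (5.8)

P = 0.0                       # exact S from the fixed point of the pgf
for _ in range(1000):
    P = math.exp(m*(P - 1.0))
S = 1.0 - P

assert beta < LQ < S < UDN    # the ordering confirmed in Table 2
assert abs(S - 0.3137) < 5e-4 and abs(UDN - 0.3183) < 5e-4
```

The computed values match the column $\lambda=0$ of Table 2.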

For a wide variety of families of probability distributions, From (2007) compared a large number of different upper and lower bounds for $S_\varphi$ derived by various authors. For $m>1$ and close to 1, he concluded that $U^{DN}$ is the best upper bound for $S_\varphi$ among the bounds investigated, and Quine’s lower bound is the best lower bound (slightly better than a bound given by Narayan 1981). In addition, From derived new, simple, general upper and lower bounds in terms of $p_0$, $p_1$, and $p_2$, which are most useful for large $m$, such as $m>1.5$.

Series expansions of Sφ and γφ

In the slightly supercritical case, there is a relatively simple, easily automatized procedure to derive series expansions of these quantities. To put this on a firm mathematical basis, we consider families of offspring generating functions $\varphi(x;s)$ depending smoothly on $s\ge0$, typically through various parameters that depend on $s$. We denote partial derivatives of $\varphi(x;s)$ of order $(k,l)$ evaluated at $(x_0,s_0)$ by $\varphi^{(k,l)}(x_0;s_0)$, and we write $\mu_{kl}=\varphi^{(k,l)}(1;0)$ (we assume that at least the one-sided limits and derivatives at $(1-,0+)$ exist and are finite to the order considered). In particular, we assume that $m(s)=\varphi^{(1,0)}(1-;s)=1+s$. We define $b=b(s)=\varphi^{(2,0)}(1-;s)$ and $c=c(s)=\varphi^{(3,0)}(1-;s)$. We assume that $\mu_{20}=\lim_{s\to0+}b(s)>0$ and $\mu_{30}=\lim_{s\to0+}c(s)\ge0$.

To derive a series expansion of $S_\varphi(s)$ under the assumption that $s$ is small, we set $S_\varphi(s)=\sum_{i=1}^k\delta_is^i+O(s^{k+1})$. Then we expand $\varphi\bigl(1-\sum_{i=1}^k\delta_is^i;s\bigr)-\bigl(1-\sum_{i=1}^k\delta_is^i\bigr)$ up to order $s^{k+1}$ (see Sect. 6.2 in the supplementary Mathematica notebook). The resulting coefficient of $s$ vanishes. Equating the coefficients of $s^2,\ldots,s^{k+1}$ to $0$ yields $\delta_1,\ldots,\delta_k$.

With $m=1+s$, the variance is $\sigma^2=b-s-s^2$ and we define $\sigma_0^2=\lim_{s\to0+}\sigma^2(s)$. Then $\sigma_0^2=\lim_{s\to0+}b(s)=\mu_{20}>0$ and we introduce

$\theta:=\dfrac{2}{\sigma_0^2}=\dfrac{2}{\mu_{20}}.$ (5.9)

The approach outlined above yields

$S_\varphi=\theta s-\delta_2s^2+\delta_3s^3+O(s^4),$ (5.10)

where

$\delta_2=\dfrac{6\mu_{20}\mu_{21}-4\mu_{30}}{3\mu_{20}^3},$ (5.11a)
$\delta_3=\dfrac{1}{9\mu_{20}^5}\bigl(18\mu_{20}^2\mu_{21}^2-9\mu_{20}^3\mu_{22}+16\mu_{30}^2-36\mu_{20}\mu_{21}\mu_{30}+12\mu_{20}^2\mu_{31}-6\mu_{20}\mu_{40}\bigr).$ (5.11b)

Higher-order terms are readily derived by this method but are increasingly complicated because the coefficient of $s^j$ depends on the mixed partial derivatives of $\varphi(x;s)$ up to order $j+1$ evaluated at $(1;0)$ (see Sect. 6.2 in the supplementary Mathematica notebook).
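For the Poisson family $\varphi(x;s)=e^{(1+s)(x-1)}$, all mixed derivatives $\mu_{kl}$ are elementary, so (5.10)–(5.11) can be compared with the exact survival probability; a Python sketch:

```python
import math

# Poisson family: mu_20 = 1, mu_21 = 2, mu_22 = 2, mu_30 = 1, mu_31 = 3, mu_40 = 1
mu20, mu21, mu22, mu30, mu31, mu40 = 1.0, 2.0, 2.0, 1.0, 3.0, 1.0
theta = 2.0/mu20
d2 = (6*mu20*mu21 - 4*mu30)/(3*mu20**3)                       # eq. (5.11a)
d3 = (18*mu20**2*mu21**2 - 9*mu20**3*mu22 + 16*mu30**2
      - 36*mu20*mu21*mu30 + 12*mu20**2*mu31 - 6*mu20*mu40)/(9*mu20**5)  # eq. (5.11b)
assert abs(d2 - 8/3) < 1e-12 and abs(d3 - 28/9) < 1e-12       # coefficients of (5.18)

s = 0.05
P = 0.0                                                       # exact extinction probability
for _ in range(5000):
    P = math.exp((1 + s)*(P - 1.0))
S_exact = 1.0 - P
S_series = theta*s - d2*s**2 + d3*s**3
assert abs(S_series - S_exact) < 1e-4                         # error is O(s^4)
```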

Given (5.10), straightforward calculations yield the following expansion of $\gamma_\varphi(s)=\varphi^{(1,0)}(P_\varphi(s);s)$:

$\gamma_\varphi(s)=1-s+\gamma_2s^2-\gamma_3s^3+O(s^4),$ (5.12)

where

$\gamma_2=\dfrac{2\mu_{30}}{3\mu_{20}^2},$ (5.13a)
$\gamma_3=\dfrac{2}{9\mu_{20}^4}\bigl(6\mu_{20}\mu_{21}\mu_{30}-4\mu_{30}^2-3\mu_{20}^2\mu_{31}+3\mu_{20}\mu_{40}\bigr).$ (5.13b)

Remarkably, the universal coefficient -1 of s arises. This approximation is useful and provides insight because explicit analytical expressions for γφ rarely exist (for a few exceptions, see below).

We recall from (3.14) that $m_\varphi\gamma_\varphi<1$ is equivalent to $\varphi^{(1,0)}(1;s)<\varphi_{\mathrm{FL}}^{(1,0)}(1;s)$, which is a necessary condition for $\varphi(x)>\varphi_{\mathrm{FL}}(x)$ to hold for $x\in[0,1]$. From (5.12) we conclude that $m_\varphi\gamma_\varphi<1$ holds if $\gamma_2<1$ and $s$ is sufficiently small.

The bounds of Quine (Sect. 5.1) and of Daley and Narayan (Sect. 5.2) can be applied to families $\varphi(x;s)$ of pgfs. Interestingly, series expansions of the lower and upper bounds $L_\varphi^Q$ and $U_\varphi^Q$ in (5.3) and of the upper bound $U_\varphi^{DN}$ in (5.8) all yield the correct second-order term $\delta_2$ in (5.11). The coefficients of $s^3$ differ, and that of $U_\varphi^{DN}$ is closer to the true value $\delta_3$ than that of $U_\varphi^Q$ (see Sect. 6.3 in the Mathematica notebook). From the leading-order term $\beta$ of $L_\varphi^Q$ and from the expansion (5.10) (or that of $U_\varphi^Q$ or $U_\varphi^{DN}$) we obtain for sufficiently small $s$ the simple bounds

$\beta(s):=\dfrac{2s}{b(s)}<S_\varphi<\theta s,$ (5.14)

provided $\delta_2>0$. We have $\delta_2>0$ for the Poisson, binomial, and negative binomial distributions, whereas for the generalized Poisson distribution treated below, this holds only for sufficiently small $\lambda$. A simple example where $\delta_2<0$ is the following: let $p_k=0$ for $k\ge3$ and $p_0=\tfrac12-2s$, $p_1=3s$, $p_2=\tfrac12-s$. Then $m=1+s$, $b(s)=1-2s$, and $S_\varphi=\dfrac{2s}{1-2s}=2s+4s^2+O(s^3)>2s=\theta s$.

These bounds and expansions are closely related to the generalized version of Haldane’s (1927) approximation, which in our notation reads

$S_\varphi(s)=\theta s+O(s^2)\quad\text{as }s\to0+.$ (5.15)

This was derived in a branching-process context, and in various degrees of generality, by Ewens (1969), Eshel (1981), Hoppe (1992), and Athreya (1992); see also Haccou et al. (2005, p. 126). By contrast, Lessard and Ladret (2007) and Boenkost et al. (2021a, 2021b) proved (5.15) for certain Markov chain models of Cannings type (a generalization of the Wright-Fisher model), where then the left-hand side is the fixation probability. Indeed, Lessard and Ladret (2007) proved (a generalized version of) $S_\varphi(s)=\frac1N+\theta s+o(s)$, where $N$ is fixed as $s\to0$, whence selection is weak relative to random genetic drift. Quite differently, Boenkost et al. (2021a, 2021b) assumed that $s$ is asymptotically equivalent to $N^{-b}$ (where $0<b<\frac12$ or $\frac12<b<1$) as $N\to\infty$. If $0<b<1$, selection is stronger than in the diffusion approximation, where $s$ is asymptotically equivalent to $N^{-1}$.

We note that (5.14) as well as Quine’s bounds in (5.3) imply the generalized version (5.15) of Haldane’s approximation for Galton-Watson processes because $\lim_{s\to0}b(s)=\mu_{20}=2/\theta$. Interestingly, the lines of research on Haldane’s approximation (cited above) and on bounds for the extinction probability and extinction times (e.g., Seneta 1967; Agresti 1974; Quine 1976; Daley and Narayan 1980; Narayan 1981; From 2007) apparently developed independently, as no cross references occur; in the latter case, not even to Haldane.

In Table 2 we present numerical examples that demonstrate the accuracy of the bounds and approximations presented above. We chose the value $s=0.2$, although it is high for an advantageous mutant, because for the distributions shown the relative errors vanish rapidly as $s$ decreases below $0.1$.

Relation to fixation probabilities in the Wright-Fisher model

Following Ewens (2004, p. 120), we define the variance-effective population size by $N_e=N/(\sigma^2/m)$, where $m$ and $\sigma^2$ are the mean and variance of the offspring distribution. Then the diffusion approximation for the fixation probability of a single mutant with selective advantage $s$ in the (haploid) Wright-Fisher model is

$P_D^{\mathrm{fix}}(s,N,N_e)=\dfrac{1-e^{-2sN_e/N}}{1-e^{-2sN_e}}.$ (5.16)

If we set $N=1000$, $s=0.1$, $m=1+s$, and $\sigma^2=\tfrac12(1+s)$, $1+s$, and $5(1+s)$ (so that $N_e=2N$, $N$, $\tfrac15N$), then $P_D^{\mathrm{fix}}\approx0.3297$, $0.1813$, and $0.0392$, respectively. Interestingly, the survival probabilities in the Galton-Watson process with corresponding fractional linear offspring distributions are $S_{\mathrm{FL}}\approx0.3333$, $0.1818$, and $0.0392$, thus almost identical. With $s=0.1$ we obtain for the Poisson distribution $S_{\mathrm{Poi}}\approx0.1761$, and for the binomial distribution $S_{\mathrm{Bin}}\approx0.1763$ (where we set $n=N$). The latter two values are nearly identical to the exact fixation probability $P^{\mathrm{fix}}\approx0.1761$ in the standard Wright-Fisher model with $N_e=N$ (computed from the linear system defining the fixed point of the transition matrix; e.g. Ewens 2004, p. 87). If $N=N_e=100$ and $s=0.1$, then $P^{\mathrm{fix}}\approx0.1758$, $P_D^{\mathrm{fix}}\approx0.1813$, $S_{\mathrm{Bin}}\approx0.1778$, and $S_{\mathrm{Poi}}$ remains unchanged.
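The three diffusion values quoted above follow directly from (5.16); in Python:

```python
import math

def P_fix_D(s, N, Ne):
    # diffusion approximation (5.16) for the haploid Wright-Fisher model
    return (1.0 - math.exp(-2*s*Ne/N))/(1.0 - math.exp(-2*s*Ne))

N, s = 1000, 0.1
# sigma^2/m = 1/2, 1, 5 correspond to Ne = 2N, N, N/5
vals = [P_fix_D(s, N, Ne) for Ne in (2*N, N, N/5)]
assert [round(v, 4) for v in vals] == [0.3297, 0.1813, 0.0392]
```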

Now we assume $N_e=N$ and the standard Wright-Fisher model. Bürger and Ewens (1995) proved that the diffusion approximation is always an upper bound for the exact fixation probability $P^{\mathrm{fix}}$, and its error is of order $s^2$. In addition, they derived a bound for the relative error of approximations of the form

$P_A^{\mathrm{fix}}(s,N)=\dfrac{1-e^{-A(s)}}{1-e^{-A(s)N}},$ (5.17)

where $A(s)=a_1s+a_2s^2$ (in fact, they admitted convergent series). The relative error is of order $s^2$ if $a_1=2$ and $a_2=0$ (yielding the diffusion approximation). They also showed that $a_2$ can be chosen such that the relative error is of order $s^3$. In the haploid case, their equation (4.11) applies and yields $a_2=-\tfrac23-\tfrac{1}{3\nu}+O(e^{-2\nu})$, where $\nu=Ns$ is large, but constant (indeed the coefficient of $e^{-2\nu}$ can be computed explicitly, but is irrelevant in our context). However, this improved, diffusion-like approximation is no longer a global bound for the true $P^{\mathrm{fix}}$. Its series expansion in $s$ (with $\nu$ constant) is $2s-\bigl(\tfrac83+\tfrac{1}{3\nu}\bigr)s^2+O(se^{-2\nu})+O(s^3)$, thus nearly identical to the approximation $S_{\mathrm{Poi}}\approx2s-\tfrac83s^2$ in (5.18) below if $\nu$ is sufficiently large. The diffusion approximation $P_D^{\mathrm{fix}}$ has the expansion $2s-2s^2+O(se^{-2\nu})+O(s^3)$. If $s=0.1$, then $P_A^{\mathrm{fix}}\approx0.1758$ if $N=1000$, and $P_A^{\mathrm{fix}}\approx0.1755$ if $N=100$, which are nearly identical to the true values of $0.1761$ and $0.1758$, respectively, in the Wright-Fisher model. Bürger and Ewens also derived a simple diffusion-like lower bound; it is obtained by setting $A(s)=2s/(1+s)$. Its series expansion is $2s-4s^2+O(s^3)$. It is informative to compare the series expansions of these bounds with those in the following section.

It would be of interest to explore when the survival probability in a Galton-Watson process yields a better approximation for the fixation probability in the Wright-Fisher model with appropriately chosen Ne than the standard diffusion approximation. The work of Lessard and Ladret (2007) and Boenkost et al. (2021a, 2021b) (discussed above) could provide a valuable starting point.

Poisson distribution

For the Poisson distribution, we obtain the following series expansions directly from (4.2) and (4.3) by using Mathematica (Sect. 6.4 in the notebook):

$S_{\mathrm{Poi}}=2s-\dfrac{8s^2}{3}+\dfrac{28s^3}{9}+O(s^4),$ (5.18)
$\gamma_{\mathrm{Poi}}=1-s+\dfrac{2s^2}{3}-\dfrac{4s^3}{9}+O(s^4).$ (5.19)

These expansions are based on the Taylor series of the Lambert function, $W(x)=\sum_{k=1}^\infty\dfrac{(-k)^{k-1}}{k!}x^k$, which converges if $|x|<1/e$. The series for $S_{\mathrm{Poi}}$ converges if $0\le s<1$.

The upper bound of Daley and Narayan (1980) simplifies to

$U_{\mathrm{Poi}}^{DN}=\dfrac{3-\sqrt{24/m-15}}{2m}=2s-\dfrac{8s^2}{3}+\dfrac{34s^3}{9}+O(s^4).$ (5.20)

The lower bound of Quine (1976) becomes

$L_{\mathrm{Poi}}^Q=2s-\dfrac{8s^2}{3}-\dfrac{10s^3}{3}+O(s^4),$ (5.21)

and the series expansion of the simple lower bound $\beta=2(m-1)/b$ is

$\beta(s)=\dfrac{2s}{(1+s)^2}=2s-4s^2+6s^3+O(s^4).$ (5.22)

The bounds $U_{\mathrm{Poi}}^{DN}$ and $L_{\mathrm{Poi}}^Q$ apply if $s<\tfrac35$; $\beta$ applies always but becomes very inaccurate if $s\gtrsim0.5$.

Binomial distribution

For the binomial distribution we use the method outlined in Sect. 5.3 to derive a series expansion of $S_{\mathrm{Bin}}$. With $m=1+s$ and $p=\dfrac{1+s}{n}$, we obtain

$S_{\mathrm{Bin}}=\dfrac{2n}{n-1}s-\dfrac{4n(2n-1)}{3(n-1)^2}s^2+\dfrac{2n(14n^2-17n+5)}{9(n-1)^3}s^3+O(s^4).$ (5.23)

By differentiation of the generating function, we obtain $\gamma_{\mathrm{Bin}}=\dfrac{npP_{\mathrm{Bin}}}{1-p+pP_{\mathrm{Bin}}}$, which yields after substitution of (5.23):

$\gamma_{\mathrm{Bin}}=1-s+\dfrac{2(n-2)}{3(n-1)}s^2-\dfrac{4(n-2)^2}{9(n-1)^2}s^3+O(s^4).$ (5.24)

The upper bound of Daley and Narayan (1980) becomes

$U_{\mathrm{Bin}}^{DN}=\dfrac{3n}{2(n-2)(1+s)}\Bigl(1-\sqrt{1-\dfrac{8(n-2)s}{3(n-1)(1+s)}}\Bigr)$ (5.25a)
$=\dfrac{2n}{n-1}s-\dfrac{4n(2n-1)}{3(n-1)^2}s^2+\dfrac{2n(17n^2-32n+23)}{9(n-1)^3}s^3+O(s^4),$ (5.25b)

which is a valid bound if $n\ge2$ and $m\le\tfrac85$. As already noted, the lower bound $L_{\mathrm{Bin}}^Q$ of Quine (1976) has the same coefficients of $s$ and $s^2$.

Negative binomial distribution

For the negative binomial distribution with $m=1+s$ and $p=\dfrac{r}{r+1+s}$ we obtain by the method outlined in Sect. 5.3,

$S_{\mathrm{NB}}=\dfrac{2r}{r+1}s-\dfrac{4r(2r+1)}{3(r+1)^2}s^2+\dfrac{2r(14r^2+17r+5)}{9(r+1)^3}s^3+O(s^4)$ (5.26)

and

$\gamma_{\mathrm{NB}}=1-s+\dfrac{2(r+2)s^2}{3(r+1)}-\dfrac{4(r+2)^2s^3}{9(r+1)^2}+O(s^4).$ (5.27)

The bound of Daley and Narayan (1980) becomes

$U_{\mathrm{NB}}^{DN}=\dfrac{3r}{2(r+2)(1+s)}\Bigl(1-\sqrt{1-\dfrac{8(r+2)s}{3(r+1)(1+s)}}\Bigr)$ (5.28a)
$=\dfrac{2r}{r+1}s-\dfrac{4r(2r+1)}{3(r+1)^2}s^2+\dfrac{2r(17r^2+32r+23)}{9(r+1)^3}s^3+O(s^4),$ (5.28b)

which is a valid bound if $m\le\dfrac{8(r+2)}{5r+13}$. The simple lower bound $\beta$ has the expansion

$\beta=\dfrac{2rs}{(r+1)(1+s)^2}=\dfrac{2r}{r+1}s-\dfrac{4r}{r+1}s^2+\dfrac{6r}{r+1}s^3+O(s^4).$ (5.29)

Generalized Poisson distribution

The following generalization of the Poisson distribution was introduced by Consul and Jain (1973):

$p_{\mathrm{GP}}(k)=\dfrac{\mu(\mu+k\lambda)^{k-1}}{k!}e^{-\mu-k\lambda},\quad k=0,1,2,\ldots,$ (5.30)

where $\mu>0$ and $0\le\lambda<1$. If $\lambda=0$, this reduces to the Poisson distribution with $\mu=m$. Johnson et al. (2005, Chap. 7.2.6) call it the Lagrangian Poisson distribution and summarize relevant properties and relations to other distributions. For a detailed treatment and review of applications consult Chap. 9 of Consul and Famoye (2006). For a relatively simple proof that $\sum_{k=0}^\infty p_{\mathrm{GP}}(k)=1$, see Tuenter (2000).

The mean and variance of this unimodal distribution are $m=\dfrac{\mu}{1-\lambda}$ and $\sigma^2=\dfrac{\mu}{(1-\lambda)^3}$, respectively. In addition to the coefficient of variation $\sigma/m$, its skewness and kurtosis also increase to infinity if the mean is held constant and the parameter $\lambda$ is increased from 0 to 1 (e.g. Johnson et al. 2005, Chap. 7.2.6). If $\lambda>0$, the generating function is given by

$\varphi_{\mathrm{GP}}(x;\mu,\lambda)=\exp\Bigl[-\mu\Bigl(1+\dfrac{1}{\lambda}W(-x\lambda e^{-\lambda})\Bigr)\Bigr].$ (5.31)

We demonstrate the utility of our series expansion by applying it to this distribution. Otherwise, it is difficult to analyze in our context because, apparently, the survival probability SGP cannot be expressed in terms of known functions. All calculations, algebraic and numeric, can be found in detail in Sect. 7 of the supplementary Mathematica notebook.
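As a numerical cross-check (Python; the supplementary notebook contains the full calculations), the closed form (5.31) can be compared with the series $\sum_k p_{\mathrm{GP}}(k)x^k$ obtained from (5.30); the values $\lambda=0.2$, $s=0.2$ are illustrative:

```python
import math

def lambert_w(z):
    # principal branch via Newton's method; adequate for z in (-1/e, 0]
    w = 0.0
    for _ in range(200):
        ew = math.exp(w)
        w -= (w*ew - z)/(ew*(1.0 + w))
    return w

lam, s = 0.2, 0.2
mu = (1.0 + s)*(1.0 - lam)            # mean m = mu/(1-lam) = 1 + s

def p_gp(k):
    # eq. (5.30); the k = 0 term equals exp(-mu)
    return mu*(mu + k*lam)**(k - 1)/math.factorial(k)*math.exp(-mu - k*lam)

def phi_gp(x):
    # eq. (5.31)
    return math.exp(-mu*(1.0 + lambert_w(-x*lam*math.exp(-lam))/lam))

x = 0.5
assert abs(sum(p_gp(k)*x**k for k in range(100)) - phi_gp(x)) < 1e-10
assert abs(sum(k*p_gp(k) for k in range(100)) - mu/(1.0 - lam)) < 1e-10
assert abs(phi_gp(1.0) - 1.0) < 1e-9   # phi(1) = 1, since W(-lam e^{-lam}) = -lam
```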

If m=1+s, then μ=(1+s)(1-λ), b0=b(0)=1(1-λ)2, c0=c(0)=1+2λ(1-λ)4, μ21=1+b0, μ31=6(1-λ)2, μ40=1+λ(6+9λ-λ3)(1-λ)6, and θ=2/b0=2(1-λ)2. Therefore, (5.10) and (5.11) yield

S_{GP}=2(1-\lambda)^2 s-\tfrac{2}{3}(1-\lambda)^2\left(4-10\lambda+3\lambda^2\right)s^2+\tfrac{4}{9}(1-\lambda)^3\left(7-31\lambda+21\lambda^2-3\lambda^3\right)s^3+O(s^4).   (5.32)

The coefficient of s^2 is negative if and only if \lambda<\frac{5-\sqrt{13}}{3}\approx 0.4648. Therefore, S_GP>\theta s for small s if \lambda>\frac{5-\sqrt{13}}{3}. The coefficient of s^3 is positive if \lambda\lessapprox 0.2750. From (5.12) and (5.13) we obtain

\gamma_{GP}=1-s+\tfrac{2}{3}(1+2\lambda)s^2-\tfrac{4}{9}\left(1+7\lambda+\lambda^2\right)s^3+O(s^4).   (5.33)

We do not present the bounds of Quine and of Daley and Narayan because the expressions are quite complicated. However, in Table 2 numerical values are shown.
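Because S_GP has no closed form, a direct check of the expansion (5.32) is instructive. The following sketch (an illustration, not code from the supplementary notebook) truncates the generalized Poisson probabilities (5.30), iterates the resulting generating function numerically, and compares the fixed-point survival probability with the series.

```python
import math

def gp_pmf(k, mu, lam):
    # generalized Poisson probabilities (5.30)
    return mu * (mu + k*lam)**(k-1) / math.factorial(k) * math.exp(-mu - k*lam)

s, lam = 0.1, 0.2
mu = (1 + s) * (1 - lam)               # mean m = mu/(1-lam) = 1 + s
probs = [gp_pmf(k, mu, lam) for k in range(150)]   # truncated pmf

def phi(q):
    # truncated generating function of the offspring distribution
    return sum(pk * q**k for k, pk in enumerate(probs))

q = 0.0
for _ in range(5000):                  # extinction probability = lim phi^(n)(0)
    q = phi(q)
S_exact = 1.0 - q

# series expansion (5.32), truncated after the s^3 term
S_series = 2*(1-lam)**2*s \
    - (2/3)*(1-lam)**2*(4 - 10*lam + 3*lam**2)*s**2 \
    + (4/9)*(1-lam)**3*(7 - 31*lam + 21*lam**2 - 3*lam**3)*s**3
```

The truncation at 150 terms is far beyond the effective support of the distribution for these parameters, so the pmf sums to 1 to machine precision.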

Using the series expansions of SGP and γGP, approximations for the parameters πGP and ρGP of the bounding fractional linear pgf, φGP(x), can be computed. As in previous sections, we define

f_{GP}(x;s,\lambda)=\varphi_{GP}\left(x;(1+s)(1-\lambda),\lambda\right)-\varphi_{FL}\left(x;\pi_{GP},\rho_{GP}\right),   (5.34)

where usually we omit the dependence on s and λ. The exact version requires numerical evaluation of SGP, γGP, πGP, and ρGP. The qualitatively different shapes are shown in Fig. 4.

Fig. 4.


Possible shapes of graphs of f_GP(x;s,λ). We chose s=0.3 for good visibility. Then P_GP ≈ 0.7435 if λ=0.30, and P_GP ≈ 0.7515 if λ=0.3145. At the critical value λ_c1 ≈ 0.30160, f'_GP(1) changes sign; at λ_c2 ≈ 0.30596, f''_GP(P_GP) changes sign; at λ_c0 ≈ 0.31433, f_GP(0) changes sign

The analytical results below are based on calculating fGP(x) by employing the series expansions in (5.32) and (5.33). We obtain

f_{GP}(0)=e^{-1+\lambda}-\frac{1}{3-4\lambda+2\lambda^2}-\left(e^{-1+\lambda}(1-\lambda)-\frac{2(1-\lambda)^2\left(4+2\lambda+3\lambda^2\right)}{3\left(3-4\lambda+2\lambda^2\right)^2}\right)s+O(s^2).   (5.35)

By series expansion of λ around the value at which the term of order 1 vanishes, we find that fGP(0)>0 if and only if λ<λc0, where

\lambda_{c0}\approx 0.25915+0.1997\,s   (5.36)

provides an accurate approximation if s ⪅ 0.3. For instance, if s=0.1, the approximation yields 0.27912, whereas the numerically determined exact value is λ_c0 ≈ 0.27857.

We recall that by our construction of \varphi_{FL}(x;\pi_{GP},\rho_{GP}), we have f'_{GP}(1)=(1+s)\gamma_{GP}-1; cf. (3.14). Because (1+s)\gamma_{GP}=1-\tfrac{1}{3}(1-4\lambda)s^2+\tfrac{2}{9}\left(1-8\lambda-2\lambda^2\right)s^3+O(s^4), we find that f'_{GP}(1)<0 if and only if \lambda<\lambda_{c1}, where

\lambda_{c1}\approx\frac{1}{4}\left(1+\frac{3}{4}\,s\right).   (5.37)

If s=0.1, then the numerically precise value is λ_c1 ≈ 0.26820, and the simple approximation yields 0.26875.

For the second derivative of fGP at PGP we obtain

f''_{GP}(P_{GP})=\frac{1-4\lambda}{3(1-\lambda)^2}\,s-\frac{1+4\lambda-74\lambda^2+12\lambda^3+3\lambda^4}{9(1-\lambda)^2}\,s^2+O(s^3).   (5.38)

The first term is positive if \lambda<\tfrac{1}{4}, and 1+4\lambda-74\lambda^2+12\lambda^3+3\lambda^4>0 if \lambda<0.14867. A rough approximation for the critical value \lambda_{c2} of \lambda, below which f''_{GP}(P_{GP})>0 holds, is

\lambda_{c2}\approx\frac{1}{4}+0.202\,s.   (5.39)

If s=0.1, then the numerically precise value is λ_c2 ≈ 0.26967.

Clearly, f_GP(x) ≥ 0 can hold for x ∈ [0,P_GP] only if f_GP(0) ≥ 0 and f''_GP(P_GP) ≥ 0. Both inequalities are satisfied if 0 ≤ λ ≤ λ_c2 because our approximations satisfy λ_c2 < λ_c0 (if s<4).

If λ_c2 < λ < λ_c0, then f''_GP(P_GP) < 0 (hence f_GP(x) < 0 close to P_GP) and f_GP(0) > 0. Therefore, f_GP(x) changes sign between x=0 and x=P_GP. Finally, if λ > λ_c0, then f_GP(x) < 0 near x=0 and near x=P_GP. Numerical results suggest that in this case f_GP(x) < 0 between 0 and P_GP, whereas f_GP(x) > 0 if 0 ≤ λ < λ_c2. Thus, by Proposition 3.1, the inequalities (3.7) and (3.8) hold if 0 ≤ λ < λ_c2, and the opposite inequalities hold if λ > λ_c0. The qualitatively different cases are shown in Fig. 4 for s=0.3. Then the range of λ in which f_GP(x) changes sign between 0 and P_GP is approximately (0.30596, 0.31433). If s → 0, this interval is approximately (0.25, 0.25915).

Based on these and additional numerical results (not shown), we conjecture that the inequalities (3.7) and (3.8) hold if 0 ≤ λ < λ_c2, and that the opposite inequalities hold if λ > λ_c0. In the region (λ_c2, λ_c0), P_GP(n) satisfies (3.7) for small n and the opposite inequality for large n. This differs from the finitely supported distributions studied in Sect. 4.4; cf. Corollary 4.12(2) and Fig. 2.

Applications

We begin by investigating the accuracy of the approximations for Tφ(ϵ) and Sφ(n) which play a key role in our major application in the final Section 6.3.

Convergence time Tφ(ϵ) of survival probabilities S(n)

First, we apply our results to Tφ(ϵ), the number of generations until the survival probability S(n) differs from the eventual survival probability S by a factor of at most 1+ϵ; see (3.12). We define

T_{app}(\epsilon)=\left\lceil\frac{\ln\left(\left(1+\frac{1}{\epsilon}\right)P_\varphi\right)}{-\ln\gamma_\varphi}\right\rceil,   (6.1)

where \lceil z\rceil denotes the least integer greater than or equal to z. According to (3.13), this is an upper bound for the true T_\varphi(\epsilon) if (3.6) holds. It is a lower bound if the reversed inequality (3.9) holds, and it serves as an approximation if neither holds.

Throughout this section we set m=1+s. By using P_\varphi\approx 1-\theta s+\delta_2 s^2 and \gamma_\varphi\approx 1-s+\gamma_2 s^2-\gamma_3 s^3, where \theta, \delta_2, \gamma_2, and \gamma_3 are given in (5.9), (5.11), and (5.13), respectively, we obtain from (6.1) by series expansion in s

T_{app}(\epsilon)=\left(\frac{1}{s}-\frac{1}{2}+\gamma_2\right)\ln\left(1+\frac{1}{\epsilon}\right)-\theta+\left[\delta_2+\tfrac{1}{2}\theta\left(1-2\gamma_2-\theta\right)+\left(\gamma_2^2-\gamma_3-\tfrac{1}{12}\right)\ln\left(1+\frac{1}{\epsilon}\right)\right]s+O(s^2).   (6.2)

Here, \ln(1+\frac{1}{\epsilon})\approx 2.4, 4.6, 6.9 if \epsilon=0.1, 0.01, 0.001, respectively. We note that the terms in (6.2) remain unchanged under expansions of P_\varphi and \gamma_\varphi to arbitrary order. Now we define

T_{ser}(\epsilon)=\left(\frac{1}{s}-\frac{1}{2}+\gamma_2\right)\ln\left(1+\frac{1}{\epsilon}\right)-\theta.   (6.3)

Then T_{ser}(\epsilon)\ge T_{app}(\epsilon) for sufficiently small s if the coefficient of s in (6.2) is negative.

We recall from (5.23) and (5.24) that for the binomial distribution \theta=\frac{2n}{n-1} and \gamma_2=\frac{2(n-2)}{3(n-1)}; from (5.26) and (5.27) that for the negative binomial distribution \theta=\frac{2r}{r+1} and \gamma_2=\frac{2(r+2)}{3(r+1)}; and from (5.32) and (5.33) that for the generalized Poisson distribution \theta=2(1-\lambda)^2 and \gamma_2=\frac{2}{3}(1+2\lambda).

In Table 3 we compare exact values of T_\varphi(\epsilon), obtained by iteration of the generating function, with the approximation T_app(\epsilon) in (6.1) and its simple series approximation T_ser(\epsilon). We show results for the binomial, negative binomial, and generalized Poisson distributions, and for the simple approximation \frac{1}{s}\ln(1+\frac{1}{\epsilon}). We note that if n ≥ 10 and r ≥ 10, the values T_\varphi(\epsilon) for the binomial and the negative binomial are (nearly) identical to those for the Poisson distribution (λ=0). For the generalized Poisson distribution, for each s the middle value of λ is chosen from the small range in which f_GP(x) changes sign between 0 and P_GP (see Sect. 5.8). For the two smaller values of λ, f_GP(x) ≥ 0 for every x (and (3.8) holds), and for the two larger values f_GP(x) ≤ 0 (and the opposite of (3.8) holds). For the middle value of λ, the maximum of |f_GP(x)| is extremely small (e.g. Fig. 5), thus the fractional linear approximation is extremely accurate.

Table 3.

The table shows values of T_\varphi(\epsilon) for \varphi=\varphi_{Bin}, \varphi_{NB}, and \varphi_{GP} with m=1+s and s, \epsilon, and the other parameters as indicated. Here, T_\varphi(\epsilon) is the exact time defined in (2.16) and computed by iterating the generating function \varphi. T_app(\epsilon) is the approximation defined in (6.1) and computed from the numerically exact values of P_\varphi and \gamma_\varphi. The series approximation T_ser(\epsilon) is defined in (6.3). The final column contains the values of the simple approximation shown at its top.


The data in the table show that the deviations from the very simple approximation \frac{1}{s}\ln(1+\frac{1}{\epsilon}) are mostly small. Larger deviations than shown here occur, for instance, if the variance of the offspring distribution is very small, because then \theta becomes large and T_\varphi(\epsilon) is reduced. Also a high skew contributes to larger deviations, as is visible for the generalized Poisson distribution with large \lambda, because then T_\varphi(\epsilon) is increased.
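For a concrete illustration (a sketch for the Poisson offspring distribution with m = 1+s, which satisfies (3.8); the values θ = 2, γ₂ = 2/3 are the λ = 0 case of (5.32)–(5.33)), the exact T_φ(ϵ), the bound T_app(ϵ) of (6.1), and the series value T_ser(ϵ) of (6.3) can be compared as follows.

```python
import math

s, eps = 0.1, 0.1
m = 1.0 + s
phi = lambda q: math.exp(m * (q - 1.0))   # Poisson pgf

# eventual extinction probability P; for the Poisson pgf, gamma = phi'(P) = m*P
P = 0.0
for _ in range(100000):
    P = phi(P)
gamma = m * P
S = 1.0 - P

# exact T(eps): first generation n with S(n)/S <= 1 + eps, S(n) = 1 - phi^(n)(0)
q, n = 0.0, 0
while (1.0 - q) / S > 1.0 + eps:
    q = phi(q)
    n += 1
T_exact = n

T_app = math.ceil(math.log((1 + 1/eps) * P) / (-math.log(gamma)))  # (6.1)
theta, g2 = 2.0, 2.0/3.0                                           # Poisson values
T_ser = (1/s - 0.5 + g2) * math.log(1 + 1/eps) - theta             # (6.3)
```

Since the Poisson distribution satisfies (3.6)/(3.8), T_app is a genuine upper bound for T_exact here, and the series value lies close to both.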

Survival probability up to generation n

Here, we study the accuracy of the bounds and approximations for the survival probability Sφ(n) up to generation n and, equivalently, for the extinction probability Pφ(n) by generation n. From Proposition 3.1 and eq. (3.7) we obtain

1-\gamma_\varphi^n \;\le\; \frac{1-\gamma_\varphi^n}{1-\gamma_\varphi^n P_\varphi}\,P_\varphi \;\le\; P_\varphi(n) \;\le\; P_\varphi \;\le\; 1   (6.4)

and, by expansion of the left-hand side, the approximation

\frac{P_\varphi(n)}{P_\varphi}\approx 1-\frac{\theta}{n+\theta}+\frac{n\left(\theta(n+1)+2\delta_2-2\theta\gamma_2\right)}{2(n+\theta)^2}\,s+O(s^2).   (6.5)

This is accurate if, approximately, sn<1.
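A sketch of this check for the Poisson offspring distribution (with θ = 2, γ₂ = 2/3, δ₂ = 8/3, the λ = 0 values from Sect. 5.8), taking s = 0.05 and n = 5 so that sn = 0.25:

```python
import math

s, n = 0.05, 5
m = 1.0 + s
phi = lambda q: math.exp(m * (q - 1.0))   # Poisson pgf

# exact extinction probabilities P(n) = phi^(n)(0) and P = lim P(n)
q = 0.0
for _ in range(n):
    q = phi(q)
Pn = q
P = 0.0
for _ in range(100000):
    P = phi(P)
ratio = Pn / P

theta, g2, d2 = 2.0, 2.0/3.0, 8.0/3.0     # Poisson values
approx = 1.0 - theta/(n + theta) \
    + n*(theta*(n+1) + 2*d2 - 2*theta*g2) / (2*(n + theta)**2) * s   # (6.5)
```

For these parameters the first-order approximation reproduces the exact ratio to within about two percent.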

For the generalized Poisson distribution, (6.5) becomes

\frac{P_{GP}(n)}{P_{GP}}\approx 1-\frac{2(1-\lambda)^2}{n+2(1-\lambda)^2}+\frac{(1-\lambda)^2\left(1+\frac{1}{3n}\left(7-28\lambda+6\lambda^2\right)\right)}{\left(1+\frac{2}{n}(1-\lambda)^2\right)^2}\,s+O(s^2).   (6.6)

Series expansions of S_\varphi(n)/S_\varphi about s=0 are less informative because both terms converge to 0 as s\to 0.

We define

S_{app}(n)=\frac{S_\varphi}{1-\gamma_\varphi^n\left(1-S_\varphi\right)},   (6.7)

which is the right-hand side of (3.8). Because of its versatility, we use the generalized Poisson distribution to illustrate the accuracy of Sapp(n) if taken as approximation. From the results in Sect. 5.8 and Proposition 3.1, we expect that Sapp(n) is an upper bound for SGP(n) if λ<λc2, and a lower bound if λ>λc0, where λc0>λc2. Figure 5, which displays the relative error (Sapp(n)-SGP(n))/SGP(n) for several values of λ, confirms this. Table 3 informs us that for the parameters shown in Fig. 5, it takes between 22 and 31 generations for SGP(n) to decay below 1.1SGP.
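For an offspring distribution for which (3.8) is proven, S_app(n) is a genuine upper bound for S(n). A minimal numerical sketch for the Poisson case (m = 1.1):

```python
import math

m = 1.1
phi = lambda q: math.exp(m * (q - 1.0))   # Poisson pgf

P = 0.0
for _ in range(100000):                   # eventual extinction probability
    P = phi(P)
S, gamma = 1.0 - P, m * P                 # gamma = phi'(P) for the Poisson pgf

# compare S(n) = 1 - phi^(n)(0) with S_app(n) from (6.7) for n = 1..100
q = 0.0
violations = 0
for n in range(1, 101):
    q = phi(q)
    Sn = 1.0 - q
    S_app = S / (1.0 - gamma**n * (1.0 - S))
    if Sn > S_app + 1e-12:
        violations += 1
```

No violations occur, consistent with the upper bound (3.8) established for the Poisson distribution in Sect. 4.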

The spread of a favorable mutant in a finite population

Branching process methods have been applied since the early times of population genetics to study the survival of new mutants (Fisher 1922; Haldane 1927). Generalizations of Haldane’s approximation for the fixation probability of an advantageous mutant are discussed in Sect. 5.3 along with the relevant references. Another, more recent branch of research has been concerned with the evolution of the distribution of a favorable mutant in a finite population of constant size N. Despite the great utility of diffusion-approximation methods to study the probability of and the expected time to fixation of a mutant (e.g. Charlesworth 2020), they are apparently not conducive to study the time course of the distribution of allele frequencies.

Desai and Fisher (2007) and Uecker and Hermisson (2011) conditioned on fixation of the favorable mutant and employed branching process methods to approximate its evolution by a deterministic increase starting with the random variable W=\lim_{n\to\infty}Z_n/m^n (e.g. Haccou et al. 2005, p. 154) as initial condition, whose absolutely continuous part (W^+) describes the stochasticity that accumulates during the spread of the mutant. Martin and Lambert (2015) employed a variant of this approach and approximated the initial and the final phase by a Feller process conditioned on fixation. They derived a semi-deterministic approximation for the distribution of the (beneficial) allele frequency at any given time.

Götsch and Bürger (2024), in the following abbreviated GB2024, developed a related approach that conditions on survival up to the current generation. This has the advantage that the mean number of mutants is described correctly for the initial generations, and also the variance is approximated very closely. They described the initial phase by a supercritical Galton-Watson process and combined it with the deterministic diallelic selection equation in such a way that the relative frequency of mutants in generation n, X_n, is given by X_n=Y_n/(Y_n+N), where Y_n is the random variable with the exponential distribution \Psi_n(y)=1-e^{-\lambda_n y} and \lambda_n=S(n)/m^n (the subscript \varphi is suppressed in this section). The underlying rationale was to approximate the distribution of the absolutely continuous part W_n^+ of Z_n/m^n by an exponential distribution because the limiting distribution W^+ is often nearly exponential. It is exponential with parameter \lambda=S for fractional linear offspring distributions, and also W_n^+ can be imbedded into the exponential distribution with parameter S(n) (Appendix A in GB2024). Thus, \Psi_n approximates the distribution of Z_n conditioned on Z_n>0.

The discrete distribution of the mutant in generation n (which started as a single copy in generation 0) can be approximated by the density

g_{a_n}(x)=\frac{a_n}{(1-x)^2}\,\exp\left(-\frac{a_n x}{1-x}\right),   (6.8a)

where

a_n=a_n(m,N,S(n))=N\,S(n)/m^n.   (6.8b)

The structure of g_{a_n}(x) is the same as that of the density \beta_t in Martin and Lambert (2015), except that their \beta_t (corresponding to our a_n) decays exponentially with t and a constant parameter (in our notation 2s), whereas our a_n has the additional dependence on S(n). For large N, such as N ≥ 1000, the density in (6.8) provides a very accurate approximation for the allele frequency distribution in the corresponding Wright-Fisher model. For the initial generations, it is much more accurate than previous approximations that conditioned on eventual fixation of the mutant (see Sect. 3.3 in GB2024).

One of the main applications in GB2024 of this result was the derivation of explicit formulas for the time dependence of the mean and the genetic variance of a quantitative trait under exponential directional selection (their Propositions 4.3 and 4.11). They assumed that the trait is determined by an underlying infinite-sites model, i.e., every new mutation that contributes to the trait occurs at a new locus (= site), so that many mutants can segregate simultaneously. A key assumption was that the offspring distribution is such that the survival probabilities S(n) can be bounded above as in (3.8). In Sects. 4.1–4.4, we proved that Poisson, binomial and some other distributions indeed satisfy this condition. Below we outline for an important special case why this bound for S(n) is essential for the proofs of the results in GB2024 on the evolution of a quantitative trait. In GB2024 this was buried beneath technical complications (see their Appendix D.4).

Given (6.8), the within-population variance of the distribution of mutants in generation n at a single locus is

\gamma(n)=\int_0^1 x(1-x)\,g_{a_n}(x)\,dx=a_n(1+a_n)\,e^{a_n}E_1(a_n)-a_n,   (6.9)

where E_1(a)=\int_a^\infty x^{-1}e^{-x}\,dx denotes the exponential integral. The model in GB2024 assumes that new mutations occur according to a Poisson process, each mutation at a new site. In the simplest case, which we assume here to avoid plenty of technical detail, each mutation contributes the same effect \alpha>0 to the trait, and its fitness (expected number of offspring) is m=e^{s\alpha}, s>0. It was shown that the variance of the trait at time \tau is

V_G(\tau)=\Theta\alpha^2\int_0^\tau S([t])\,\gamma([t])\,dt,   (6.10)

where Θ is the expected number of mutations occurring per time unit (generation) in the total population, and [t] denotes the nearest integer to t. Because in GB2024 a distribution of mutation effects α was admitted, an additional integration with respect to α occurs in eq. (4.15) of GB2024, which yields (6.10) for equal effects.
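The identity in (6.9) for γ(n), which enters the integrand of (6.10), can be verified by straightforward numerical quadrature (a self-contained sketch; a composite Simpson rule replaces any library quadrature, and the left-hand integral is rewritten with the substitution w = x/(1−x)):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i*h) * (4 if i % 2 else 2)
    return total * h / 3.0

def E1(a):
    # exponential integral E1(a) = int_a^inf exp(-x)/x dx;
    # the tail beyond a + 60 is negligible
    return simpson(lambda x: math.exp(-x) / x, a, a + 60.0)

a = 2.0
# left-hand side of (6.9) after substituting w = x/(1-x):
# int_0^1 x(1-x) g_a(x) dx = int_0^inf a*w*exp(-a*w)/(1+w)^2 dw
lhs = simpson(lambda w: a*w*math.exp(-a*w)/(1.0 + w)**2, 0.0, 200.0/a)
rhs = a*(1.0 + a)*math.exp(a)*E1(a) - a
```

Both sides agree to quadrature accuracy, which also corroborates the closed form (6.11) for V_1, since that formula follows from (6.9) by integrating over t.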

In the absence of an explicit formula for S(n) the integrand in (6.10) needs to be computed recursively. An explicit formula is available only for fractional linear distributions. Using the present results on bounding the survival probabilities as in (3.8), we can quantify when S(n) is sufficiently close to S so that the integration in (6.10) can be simplified by approximating S([t]) by S for sufficiently large t. Indeed, in this case the integral can be calculated explicitly and the error terms can be derived. We focus on the important limiting case τ, i.e., when the per-generation response of the trait mean and the expected variance become constant due to the balance of loss and fixation of new recurrent mutations. For this case, the basic ideas can be presented without excessive technical detail.

Following Proposition 4.5 of GB2024, we consider V_G=\lim_{\tau\to\infty}V_G(\tau) and define \gamma(t) analogously to \gamma(n) but with a(t)=NS/m^t=NSe^{-s\alpha t} instead of a_n. We define

V_1=\int_0^\infty\gamma(t)\,dt=\frac{1}{s\alpha}\,NS\,e^{NS}E_1(NS).   (6.11)

Then \alpha^2 V_1 is the total variance contributed to the trait by a single mutant during its sweep to fixation (conditioned on its fixation).

GB2024 imposed the assumptions that (i)

N s^K=C^K\quad\text{as } N\to\infty,   (6.12)

where C>0 is an arbitrary constant and the constant K satisfies K>2, and (ii) the offspring distribution satisfies (3.8). For equal effects α, GB2024 obtained from their Proposition 4.5,

V_G=\Theta S\alpha^2 V_1+O\!\left(N^{-K_1/K}\right),   (6.13)

where K>K1+1 and K1>1 is an arbitrary constant (their Remark 4.6). Equation (6.13) shows that to leading order in N, the asymptotic variance of the trait depends only on the contribution of mutations that become fixed. The contribution of mutations that are lost is of smaller order.

By recalling (5.10) and that s there corresponds to e^{s\alpha}-1\approx s\alpha here, we obtain S\approx\theta s\alpha-\delta_2(s\alpha)^2. A well-known asymptotic expansion of e^x E_1(x) yields V_1\approx\frac{1}{s\alpha}\left(1-\frac{1}{NS}\right) if NS is sufficiently large. We note that (6.12) implies NS=O(Ns)=O\!\left(N^{1-1/K}\right). Therefore, we obtain the simple approximation

S\,V_1\approx\theta\left(1-\frac{\delta_2}{\theta}\,s\alpha\right),   (6.14)

because the term \frac{1}{\theta Ns\alpha} is swallowed by O(N^{-K_1/K}). The asymptotic per-generation response of the mean phenotype is then \Delta\bar G\approx\Theta\theta s\alpha^2\left(1-\frac{\delta_2}{\theta}s\alpha\right); see Corollary 4.8 and Remark 4.9 in GB2024.

It is of interest to note that the scaling assumption (6.12) is equivalent to the assumption of ‘moderately strong selection’ used by Boenkost et al. (2021b) in their proof of Haldane’s approximation for Cannings models, which include the classical Wright-Fisher model. For an illustration of the scaling assumption (6.12), we choose K_1=3/2 and K=3. Then s=O(N^{-1/3}), Ns=O(N^{2/3}) (which is in contrast to the diffusion approximation), and the error term in (6.13) is O(N^{-1/2}).

The above assumptions are needed to derive the error term in (6.13). From (6.10) we obtain

V_G/(\Theta\alpha^2)=\int_0^\infty S([t])\,\gamma([t])\,dt=S\int_0^\infty\gamma(t)\,dt+D_V,   (6.15)

where D_V=\int_0^\infty I_V(t)\,dt and I_V(t)=S([t])\gamma([t])-S\gamma(t). We use the decomposition

\int_0^\infty I_V(t)\,dt=\int_0^{T(\epsilon)}I_V(t)\,dt+\int_{T(\epsilon)}^\infty I_V(t)\,dt,   (6.16)

where T(\epsilon) is defined in (3.12) and studied in Sect. 6.1. The key points are that (i) if t\ge T(\epsilon), then S([t])/S\le 1+\epsilon and \int_{T(\epsilon)}^\infty I_V(t)\,dt can be shown to be of order \epsilon (see eq. (D.33) in GB2024), and (ii) T(\epsilon) is a relatively short time, such that the variance contributed by the mutant up to T(\epsilon) is small, i.e., \int_0^{T(\epsilon)}I_V(t)\,dt is even smaller (see the derivation of inequality (D.36) in GB2024, which uses the explicit form of the bound (3.8) to calculate the integral). For the estimates of both integrals in (6.16), the scaling assumption (6.12) is crucial. Indeed, the proofs show that the choice \epsilon=s^{K_1}, whence \epsilon=O(N^{-K_1/K}), yields the error term in (6.13).

The proof of the explicit approximation for the time course of VG(τ) in Proposition 4.11 of GB2024 is also based on an analogous time-scale separation and the corresponding inequalities resulting from (3.8).

Supplementary Information

Below is the link to the electronic supplementary material.

Acknowledgements

Thoughtful comments by two anonymous reviewers are gratefully acknowledged.

Appendices

We recommend consulting the supplementary Mathematica notebook to check the complicated algebraic computations in the proofs below. For Appendices A, B, and C, these are notebook sections 3, 4, and 5, respectively.

A Proof of Theorem 4.4 for the binomial distribution

The proof requires some preparation. Recall that we assume 0\le x\le 1, 0<\xi<1, and \xi^n=P_{Bin}. We define

\tilde\xi=\sum_{k=0}^{n-1}\xi^k=\frac{1-\xi^n}{1-\xi}\qquad\text{and}\qquad v(x)=\frac{x-\xi^n}{\xi\,\tilde\xi}.   (A.1)

We note that

\tilde\xi>n\,\xi^{(n-1)/2}   (A.2)

(e.g., by the inequality of the arithmetic and geometric means). Then we obtain

0<v(x)<\frac{1-\xi^n}{\xi\,\tilde\xi}\qquad\text{if }\xi^n<x<1   (A.3a)

and

0>v(x)>v(0)=-\frac{\xi^{n-1}}{\tilde\xi}>-\frac{1}{n}\,\xi^{(n-1)/2}>-\frac{1}{n}\qquad\text{if }0<x<\xi^n.   (A.3b)

Using v(x), we can express the pgfs φBin and φFL as follows:

\varphi_{Bin}\!\left(x;n,\frac{1-\xi}{1-\xi^n}\right)=\xi^n\left(1+v(x)\right)^n,   (A.4)

and

\varphi_{FL}(x;\pi_{Bin},\rho_{Bin})=\xi^n\,\frac{(1-\xi)(1-\xi^n)+v(x)\left[n(1-\xi)-\xi(1-\xi^n)\right]}{(1-\xi)(1-\xi^n)+v(x)\left[n\xi^n(1-\xi)-\xi(1-\xi^n)\right]}.   (A.5)

Proof of Theorem 4.4

From (A.4) and (A.5) we infer that \varphi_{FL}(x)\le\varphi_{Bin}(x) for every x\in[0,1] if and only if

f_{Bin}(x):=(1+v(x))^n\Big((1-\xi)(1-\xi^n)+v(x)\left[n\xi^n(1-\xi)-\xi(1-\xi^n)\right]\Big)-\Big((1-\xi)(1-\xi^n)+v(x)\left[n(1-\xi)-\xi(1-\xi^n)\right]\Big)\ge 0   (A.6)

for every x[0,1]. In the following, we omit the dependence on x and write fBin(v). By simple rearrangement of (A.6), we obtain

f_{Bin}(v)=\left((1+v)^n-1\right)\left(1-(1+v)\xi\right)(1-\xi^n)-nv(1-\xi)\left(1-(1+v)^n\xi^n\right)=(1-\xi)^2\left(1-(1+v)\xi\right)\hat f_{Bin}(v),   (A.7)

where, by using \frac{1-u^n}{1-u}=\sum_{k=0}^{n-1}u^k twice,

\hat f_{Bin}(v):=\left((1+v)^n-1\right)\frac{1}{1-\xi}\sum_{k=0}^{n-1}\xi^k-nv\sum_{k=0}^{n-1}(1+v)^k\,\frac{\xi^k}{1-\xi}.   (A.8)

By expansion of (1-\xi)^{-1} and subsequent reordering of the sums we arrive at

\hat f_{Bin}(v)=v\sum_{j=0}^{n-1}\binom{n}{j+1}v^j\,\frac{1}{1-\xi}\sum_{k=0}^{n-1}\xi^k-nv\sum_{j=0}^{n-1}v^j\left(\sum_{k=j}^{n-1}\binom{k}{j}\frac{\xi^k}{1-\xi}\right)   (A.9a)
=v\sum_{j=0}^{n-1}v^j\binom{n}{j+1}\left[\sum_{k=0}^{n-1}(k+1)\xi^k+\frac{n\xi^n}{1-\xi}\right]   (A.9b)
\quad-v\sum_{j=0}^{n-1}v^j\,n\left[\sum_{k=0}^{n-j-1}\binom{j+k+1}{k}\xi^{j+k}+\binom{n}{j+1}\frac{\xi^n}{1-\xi}\right].   (A.9c)

We note that the last terms in each of the brackets in (A.9b) and (A.9c) cancel. By setting k=l-j in (A.9c), using \binom{l+1}{l-j}=\binom{l+1}{j+1}, and then returning from l to k, we obtain

\hat f_{Bin}(v)=v\sum_{j=0}^{n-1}v^j\sum_{k=0}^{n-1}\xi^k\left[\binom{n}{j+1}(k+1)-n\binom{k+1}{j+1}\right],   (A.10)

which further simplifies to

\hat f_{Bin}(v)=v^2\sum_{j=0}^{n-2}\sum_{k=0}^{n-2}v^j\xi^k\,c_f(n,j,k),   (A.11a)

where

c_f(n,j,k)=\binom{n}{j+2}(k+1)-n\binom{k+1}{j+2}.   (A.11b)

The coefficients c_f(n,j,k) of v^j\xi^k are always nonnegative. It is sufficient to consider j+1\le k\le n-2, because \binom{k+1}{j+2}=0 if k\le j. Then

\frac{\binom{n}{j+2}(k+1)}{n\binom{k+1}{j+2}}=\frac{\prod_{i=0}^{j}(n-1-i)}{\prod_{i=0}^{j}(k-i)}\ge 1.   (A.12)

This proves that \hat f_{Bin}(v)\ge 0 if v\ge 0, whence f_{Bin}(x)\ge 0 follows if \xi^n\le x\le 1.

Now we assume 0<x<\xi^n, i.e., v(x)<0. In the representation (A.11) of \hat f_{Bin}(v) we consider two consecutive terms, starting with j=0, i.e.,

v^j\left(c_f(n,j,k)+v\,c_f(n,j+1,k)\right),   (A.13)

where j=0,2,4,\ldots. We will prove that this is always positive. Simple calculations show that \binom{n}{j+3}(k+1)=\binom{n}{j+2}(k+1)\,\frac{n-j-2}{j+3} and n\binom{k+1}{j+3}=n\binom{k+1}{j+2}\,\frac{k-j-1}{j+3}. Because v>-\frac{1}{n} by (A.3b), we obtain

c_f(n,j,k)+v\,c_f(n,j+1,k)\ge c_f(n,j,k)-\frac{1}{n}\,c_f(n,j+1,k)=\binom{n}{j+2}(k+1)\left(1-\frac{1}{n}\cdot\frac{n-j-2}{j+3}\right)-n\binom{k+1}{j+2}\left(1-\frac{1}{n}\cdot\frac{k-j-1}{j+3}\right).   (A.14)

By (A.12), the ratio of these two terms simplifies to

\frac{\prod_{i=0}^{j}(n-1-i)}{\prod_{i=0}^{j}(k-i)}\cdot\frac{n(j+3)-n+j+2}{n(j+3)-k+j+1},   (A.15)

which is greater than or equal to

\frac{n-1}{k}\cdot\frac{n(j+3)-n+j+2}{n(j+3)-k+j+1}>1,   (A.16)

where the last inequality follows from (n-1)\left(n(j+3)-n+j+2\right)-k\left(n(j+3)-k+j+1\right)=(n-k-1)\left((n+1)(j+2)-k\right)>0 because k\le n-2. If the number of such pairs is odd, then n is even, and the remaining term (j=n-2) is positive because v^{n-2}>0. This finishes the proof of Theorem 4.4.

Remark A.1

By (3.14), it follows from Theorem 4.4 that m_{Bin}\gamma_{Bin}<1. Here is a simple direct proof. From (4.19) we obtain m_{Bin}\gamma_{Bin}=m_{Bin}^2\xi^{n-1} and, by (A.2),

m_{Bin}\gamma_{Bin}=m_{Bin}^2\xi^{n-1}=\left(\frac{n}{\tilde\xi}\right)^2\xi^{n-1}<\left(\xi^{-(n-1)/2}\right)^2\xi^{n-1}=1.   (A.17)
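The inequality of Theorem 4.4 can be spot-checked numerically from the representations (A.4) and (A.5) (a grid check, not a proof; the parameter values are arbitrary illustrative choices):

```python
n_vals = [2, 3, 5, 8]
xi_vals = [0.2, 0.5, 0.9]
violations = 0
for n in n_vals:
    for xi in xi_vals:
        xit = sum(xi**k for k in range(n))          # xi-tilde of (A.1)
        for i in range(101):
            x = i / 100.0
            v = (x - xi**n) / (xi * xit)            # v(x) of (A.1)
            phi_bin = (xi * (1.0 + v))**n           # (A.4)
            num = (1-xi)*(1-xi**n) + v*(n*(1-xi) - xi*(1-xi**n))
            den = (1-xi)*(1-xi**n) + v*(n*xi**n*(1-xi) - xi*(1-xi**n))
            phi_fl = xi**n * num / den              # (A.5)
            if phi_bin - phi_fl < -1e-12:
                violations += 1
```

On the whole grid φ_Bin − φ_FL stays nonnegative, as the theorem asserts.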

B Proof of Theorem 4.6 for the negative binomial distribution

We start by recalling that \zeta^r=P_{NB}\in(0,1) and defining

\tilde\zeta=\sum_{k=0}^{r-1}\zeta^k\qquad\text{and}\qquad y(x)=\frac{\zeta^r-x}{\tilde\zeta}.   (B.1)

Using (A.2), we observe that

y(0)=\frac{\zeta^r}{\tilde\zeta}<\frac{1}{r}\,\zeta^{(r+1)/2}<\frac{1}{r},\qquad y(\zeta^r)=0,\qquad y(1)=\frac{\zeta^r-1}{\tilde\zeta}=-(1-\zeta).   (B.2)

With these abbreviations, we can express φNB in (4.23) and φFL in (2.9), where πNB and ρNB are given in (4.25), as follows:

\varphi_{NB}\!\left(x;r,\frac{\zeta(1-\zeta^r)}{1-\zeta^{r+1}}\right)=\zeta^r\left(\frac{1}{1+y(x)}\right)^r   (B.3)

and

\varphi_{FL}(x;\pi_{NB},\rho_{NB})=\zeta^r\,\frac{1-x-r\,y(x)}{1-x-r\zeta^r y(x)}.   (B.4)

We note that numerator and denominator are always positive. These, as all other nontrivial formulas here, are easily verified using the code in Sect. 4 of the supplementary Mathematica notebook.

We define

f_{NB}(x;r,\zeta):=\zeta^r\left[\varphi_{FL}(x;\pi_{NB},\rho_{NB})^{-1}-\varphi_{NB}\!\left(x;r,\frac{\zeta(1-\zeta^r)}{1-\zeta^{r+1}}\right)^{-1}\right]   (B.5)

and

g_{NB}(y(x);r,\zeta):=f_{NB}(x;r,\zeta)\,\frac{1-x-r\,y(x)}{(1-\zeta)^2},   (B.6)

where 1-x-r\,y(x)>0. Proving the inequality in (4.26) is equivalent to showing that g_{NB}(y(x);r,\zeta)>0 for every x\in[0,\zeta^r), r\ge 2, and 0<\zeta<1. In the following we write y=y(x). Using the transformation x=\zeta^r-y\,\frac{1-\zeta^r}{1-\zeta}, we obtain

g_{NB}(y;r,\zeta)=f_{NB}\!\left(\zeta^r-y\,\frac{1-\zeta^r}{1-\zeta};r,\zeta\right)\frac{(1-\zeta)(1-\zeta^r)+y\left(1-\zeta^r-r(1-\zeta)\right)}{(1-\zeta)^3},   (B.7)

which we can rewrite as

g_{NB}(y)=\frac{y\left((1+y)^r-1\right)\left(r(1-\zeta)-(1-\zeta^r)\right)-(1-\zeta)(1-\zeta^r)\left((1+y)^r-1-ry\right)}{(1-\zeta)^3}.   (B.8)

By the binomial expansion (1+y)^r-1=y\sum_{j=0}^{r-1}\binom{r}{j+1}y^j and after collection of coefficients of y^j, we obtain

g_{NB}(y)=\frac{y^2}{(1-\zeta)^2}\sum_{j=0}^{r-1}y^j\left[\binom{r}{j+1}\left(r-\frac{1-\zeta^r}{1-\zeta}\right)-\binom{r}{j+2}(1-\zeta^r)\right]   (B.9a)
=y^2\sum_{j=0}^{r-1}y^j\binom{r}{j+1}\frac{1}{(1-\zeta)^2}\left[\left(r-\frac{1-\zeta^r}{1-\zeta}\right)-\frac{r-j-1}{j+2}\,(1-\zeta^r)\right].   (B.9b)

Finally, expansion in terms of ζ yields after appropriate rearrangement

g_{NB}(y;r,\zeta)=y^2\sum_{j=0}^{r-1}y^j\binom{r}{j+1}\,c_g(r,j,\zeta),   (B.10)

where

c_g(r,j,\zeta)=\frac{1}{2(j+2)}\left\{\sum_{k=0}^{r-2}\zeta^k(k+1)\left[2r(1+j)-(2+j)k-2\right]+\frac{\zeta^{r-1}}{1-\zeta}\,r(r+1)j\right\}.   (B.11)

Because 2r(1+j)-(2+j)k-2\ge 2+j(2r-k)\ge 2, we obtain c_g(r,j,\zeta)>0 for every 0\le j\le r-1 and every \zeta\in(0,1). Therefore, g_{NB}(y;r,\zeta)>0 if y>0, and f_{NB}(x;r,\zeta)>0 if 0\le x<\zeta^r.

We note that the structure of g_{NB} in (B.10) and the positivity of the coefficients imply that g_{NB} is strictly convex at y=0. Therefore, f_{NB}(x) is also positive if x is slightly larger than \zeta^r=P_{NB}.

In the following we show that f_{NB}\ge 0 for every x\in[0,1] if r=2,\ldots,5. We use the transformation y=u-(1-\zeta). Then, by (B.2), 0\le u\le 1-\zeta if P_{NB}=\zeta^r\le x\le 1. We use series expansion of \tilde g_{NB}(u;r,\zeta):=\frac{1-\zeta}{(u-(1-\zeta))^2}\,g_{NB}(u-(1-\zeta);r,\zeta). Analytically, this is quite cumbersome due to the complicated structure of g_{NB}. However, Mathematica performs this task expeditiously (also for r>5):

\tilde g_{NB}(u;2,\zeta)=u,   (B.12a)
\tilde g_{NB}(u;3,\zeta)=u(1+4\zeta+\zeta^2)+u^2(2+\zeta),   (B.12b)
\tilde g_{NB}(u;4,\zeta)=u(1+4\zeta+10\zeta^2+4\zeta^3+\zeta^4)+u^2(2+10\zeta+6\zeta^2+2\zeta^3)+u^3(3+2\zeta+\zeta^2),   (B.12c)
\tilde g_{NB}(u;5,\zeta)=u(1+4\zeta+10\zeta^2+20\zeta^3+10\zeta^4+4\zeta^5+\zeta^6)+u^2(2+10\zeta+30\zeta^2+20\zeta^3+10\zeta^4+3\zeta^5)+u^3(3+18\zeta+13\zeta^2+8\zeta^3+3\zeta^4)+u^4(4+3\zeta+2\zeta^2+\zeta^3).   (B.12d)

In general, the coefficient of u is \frac{1}{(1-\zeta)^2}\left(\tilde\zeta^2-r^2\zeta^{r-1}\right)>0, where the inequality follows from (A.2). The coefficient of u^{r-1} is \frac{1}{1-\zeta}\left(r-\tilde\zeta\right)>0 if 0<\zeta<1.
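The polynomial (B.12b) can be checked against a direct evaluation of g̃_NB from (B.4)–(B.6) (a numerical sketch; u = 1−ζ, i.e. y = 0, is avoided because of the removable singularity there):

```python
def gtilde_direct(u, r, z):
    # tilde-g from its definition, using (B.4)-(B.6) in the equivalent form
    # g(y) = [(1 - x - r*z^r*y) - (1+y)^r (1 - x - r*y)] / (1-z)^2
    y = u - (1.0 - z)
    zt = sum(z**k for k in range(r))        # zeta-tilde of (B.1)
    x = z**r - y * zt                       # inverse of the transformation y(x)
    g = ((1.0 - x - r*z**r*y) - (1.0 + y)**r * (1.0 - x - r*y)) / (1.0 - z)**2
    return (1.0 - z) * g / y**2

max_err = 0.0
for z in (0.3, 0.5, 0.7):
    for u in (0.1, 0.4, 0.9):
        if abs(u - (1.0 - z)) < 1e-9:
            continue                        # skip the removable singularity y = 0
        poly = u*(1 + 4*z + z**2) + u**2*(2 + z)   # (B.12b), r = 3
        max_err = max(max_err, abs(gtilde_direct(u, 3, z) - poly))
```

Since (B.12b) is an exact polynomial identity in u and ζ, the deviation is at the level of floating-point rounding.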

Remark B.1

From (4.24) we obtain \gamma_{NB}=\zeta^{r+1}m_{NB}, where m_{NB}=r/(\zeta\tilde\zeta). Therefore, using again (A.2), we arrive at

m_{NB}\gamma_{NB}=m_{NB}^2\zeta^{r+1}=\frac{r^2\zeta^{r-1}}{\tilde\zeta^2}<\frac{r^2\zeta^{r-1}}{r^2\zeta^{r-1}}=1.   (B.13)

C Proofs for distributions with pk=0 for k4

Proof of Lemma 4.10

We recall from Remark 4.8(b) that p_0^{(r)} is the only (potentially admissible) solution of p_0=\rho_{F3}. By Remark 4.9(b), this is the case if and only if (4.41) and (4.42) are satisfied. Furthermore, by straightforward algebra,

p_0<\rho_{F3}\quad\text{if and only if}\quad p_0<p_0^{(r)}.   (C.1)

It is not difficult to show that we can write

\gamma_{F3}m_{F3}-1=\frac{1}{2p_3}\left(p_2+3p_3-\sqrt{4p_0p_3+(p_2+p_3)^2}\right)^2\times\left(\frac{1}{2}-\frac{4p_0p_3+(p_2+p_3)^2+(p_2+3p_3)\sqrt{4p_0p_3+(p_2+p_3)^2}}{4p_3}\right).   (C.2)

(One way to derive this is to solve z=\sqrt{4p_0p_3+(p_2+p_3)^2} for p_0, substitute this expression for p_0 in \gamma_{F3}m_{F3}-1, and then factorize the resulting quartic polynomial in z; see Sect. 5.2 in the Mathematica notebook.) We consider \gamma_{F3}m_{F3}-1 as a function of p_0. It is well defined if p_0\ge -(p_2+p_3)^2/(4p_3). At this value, it is positive. For sufficiently large p_0, it becomes negative and tends to -\infty as p_0\to\infty. The equation \gamma_{F3}m_{F3}-1=0 has the solutions p_0=p_2+2p_3 (which has multiplicity two) and p_0=p_0^{(\gamma)}. The only potentially admissible solution is p_0=p_0^{(\gamma)} because p_0=p_2+2p_3 yields m_{F3}=\gamma_{F3}=1. By Remark 4.9(c) and (d), p_0=p_0^{(\gamma)} is admissible if and only if (4.43) and (4.45) hold. By (C.1) and (4.44), this establishes Case (4).

The solution p_0=p_2+2p_3 is a critical point of \gamma_{F3}m_{F3}-1, and it is a local maximum if and only if (4.45) is satisfied, i.e., if p_0^{(\gamma)}<p_2+2p_3. In this case, \gamma_{F3}m_{F3}>1 if 0<p_0<p_0^{(\gamma)}, and \gamma_{F3}m_{F3}<1 if p_0^{(\gamma)}<p_0<p_2+2p_3. This holds independently of the sign of p_0^{(\gamma)}. Therefore, if p_0^{(\gamma)}\le 0, then \gamma_{F3}m_{F3}<1 for every admissible p_0. If p_0^{(\gamma)}>p_2+2p_3, then p_0=p_2+2p_3 is a local minimum of \gamma_{F3}m_{F3}, and \gamma_{F3}m_{F3}>1 holds for every admissible p_0. We conclude that

\gamma_{F3}m_{F3}>1\quad\text{if }0<p_0<\min\{p_0^{(\gamma)},\,p_2+2p_3\},   (C.3a)
\gamma_{F3}m_{F3}<1\quad\text{if }\max\{p_0^{(\gamma)},0\}<p_0<p_2+2p_3.   (C.3b)

Therefore, Case (1) holds by (C.1), (4.44), and (C.3b). Case (2) follows from (4.44) and (C.3a) because p_0=p_0^{(r)} is the only (potentially admissible) solution of p_0=\rho_{F3}. Case (3) follows again from (C.1), (4.44), and (C.3a). We deal with the subcases below. Case (4) was already settled above, and Case (5) follows from (C.1), (4.44), and (C.3a). Finally, by (C.1) and (4.44), p_0\ge\rho_{F3} implies p_0\ge p_0^{(r)}>p_0^{(\gamma)}, which is incompatible with \gamma_{F3}m_{F3}>1 by (C.3a).

Finally, we settle the subcases in Case (3). Indeed, from Remark 4.9(b), (c), and (d) we already know that p_0^{(+)}<p_0^{(r)} holds if and only if 0<p_0^{(+)}, that p_0^{(\gamma)}<p_0^{(r)} always holds, and that p_0^{(\gamma)}<p_2+2p_3 is equivalent to p_0^{(\gamma)}<p_0^{(+)}<p_2+2p_3.

We start with the proof of Theorem 4.11 after some additional preparation. We note that the fourth (and all higher) derivatives of f_{F3}(x) are negative on [0, 1]. Therefore, f'''_{F3}(x) is decreasing on [0, 1]. Using the substitution p_0=P_{F3}(p_2+p_3+p_3P_{F3}) (obtained from (4.30)), we can write

f_{F3}(x)=\frac{(1-x)(P_{F3}-x)^2\left(-p_3+(p_2+p_3+2p_3P_{F3})(p_2+p_3+p_3P_{F3}+p_3x)\right)}{1+(p_2+p_3+2p_3P_{F3})(P_{F3}-x)}.   (C.4)

The denominator is positive on [0, 1] because

1+(p_2+p_3+2p_3P_{F3})(P_{F3}-x)\ \ge\ 1+(p_2+p_3+2p_3P_{F3})(P_{F3}-1)\ \ge\ P_{F3}(p_2+p_3+p_3P_{F3})+p_2+p_3+(p_2+p_3+2p_3P_{F3})(P_{F3}-1)=P_{F3}(2p_2+3p_3P_{F3})>0,

where in the second inequality we used 1\ge p_0+p_2+p_3. The numerator is a polynomial of degree four in x with negative leading coefficient. This informs us that, in addition to the zeroes P_{F3} and 1, which occur by definition, f_{F3}(x) has at most two additional zeros in [0, 1]. Recalling from (4.35) that P_{F3} is a critical point of f_{F3}, it is a zero of (at least) multiplicity two. Therefore, f_{F3} can have at most one additional zero in [0, 1].

Remark C.1

(a) We recall from (4.39) that the critical point P_{F3} is a local minimum if and only if p_0>p_0^{(+)}. Therefore, P_{F3} is a local maximum if and only if p_0<p_0^{(+)}, which is possible only if (4.41) holds.

(b) We recall from Remark 4.8(a) that f''_{F3}(P_{F3})=0 if and only if p_0=p_0^{(+)}. Assume f''_{F3}(P_{F3})=0. Then P_{F3} and 1 are the only zeroes of f_{F3} because the zero P_{F3} has multiplicity three. In addition, f'''_{F3}(P_{F3})=3p_3(p_2-p_3+3p_3P_{F3}), which is positive if 0<p_0^{(+)}<p_2+2p_3 by Remark 4.9. Hence, f_{F3}(x) changes sign from negative to positive as x increases through P_{F3}. By the considerations above, f_{F3}(x) cannot have additional zeroes, whence f_{F3}(x)<0 on [0,P_{F3}) and f_{F3}(x)>0 on (P_{F3},1).

Proof of Theorem 4.11

We distinguish the following cases and recall that f_{F3}(x) can have three different zeroes only if f''_{F3}(P_{F3})\ne 0.

Case f'_{F3}(1)<0. This is satisfied if and only if \gamma_{F3}m_{F3}<1. There are the following subcases:

(a) f_{F3}(x) has only the zeroes P_{F3} and 1 in [0, 1]. Then f_{F3}(x)>0 on (P_{F3},1), with a local maximum in this interval. Clearly, f_{F3}(x) cannot have a local maximum at P_{F3}. If f''_{F3}(P_{F3})>0, then f_{F3} has a local minimum (of 0) at P_{F3}. It follows that f_{F3}(x)>0 on [0,P_{F3}) and, in particular, f_{F3}(0)>0, which is equivalent to p_0>\rho_{F3}. Hence, this occurs precisely in case (1) of Lemma 4.10 and is displayed in Fig. 2A.

If f''_{F3}(P_{F3})=0, i.e., p_0=p_0^{(+)}, Remark C.1 informs us that f_{F3}(x) changes sign at P_{F3} and is negative below P_{F3} and positive above. By Remarks 4.9(b) and (d), p_0=p_0^{(+)} can occur only if p_0<p_0^{(r)} and p_0^{(\gamma)}<p_0<p_2+2p_3. Therefore, f''_{F3}(P_{F3})=0 can occur only in case (3) of Lemma 4.10.

(b) f_{F3}(x) has a third zero, x_1\in[0,P_{F3}). Then f_{F3}(x)>0 on (x_1,P_{F3}) and on (P_{F3},1), with local maxima in each of these intervals and a local minimum (of 0) at P_{F3}, and f_{F3}(x)<0 on [0,x_1). If x_1=0, then f_{F3}(0)=0 and p_0=\rho_{F3}; see (4.40). This is precisely case (2) of Lemma 4.10 and is displayed in Fig. 2B.

If x_1>0, then f_{F3}(0)<0, i.e., p_0<\rho_{F3} (Fig. 2C). By Lemma 4.10, this case applies if (4.49) holds and, in addition, p_0^{(\gamma)}<p_0^{(+)}<p_0. These additional inequalities result from the fact that f_{F3}(x) has a local minimum at P_{F3} (whence p_0^{(+)}<p_0) and from Remark 4.9(d).

(c) f_{F3}(x) has a third zero, x_2\in(P_{F3},1). Then f_{F3}(x)>0 on (x_2,1) and f_{F3}(x)<0 on (P_{F3},x_2). Moreover, f_{F3}(x) must have a local maximum (of 0) at P_{F3}, whence f_{F3}(x) is negative on [0,P_{F3}); see Fig. 2E. In particular, f_{F3}(0)<0, which is equivalent to p_0<\rho_{F3}. By Lemma 4.10, this case applies if (4.49) holds and, in addition, p_0<p_0^{(+)}<p_0^{(r)}. These additional inequalities result from the fact that f_{F3} has a maximum at P_{F3} and from Remark 4.9(b).

Case f'_{F3}(1)>0. This is satisfied if and only if \gamma_{F3}m_{F3}>1. There are the following subcases:

(a) f_{F3}(x) has only the zeroes P_{F3} and 1. Because we already know that f_{F3}(x) cannot change from positive to negative at P_{F3}, we conclude that f_{F3}(x) is negative everywhere else, has a local maximum (of 0) at P_{F3}, and has a local minimum in (P_{F3},1); see Fig. 2G. In particular, f_{F3}(0)<0, which is equivalent to p_0<\rho_{F3}. This is precisely case (5) of Lemma 4.10.

(b) f_{F3}(x) has a third zero, either in [0,P_{F3}) or in (P_{F3},1). This is impossible because then f_{F3}(0)\ge 0, i.e., p_0\ge\rho_{F3}, which is incompatible with \gamma_{F3}m_{F3}>1 by case (6) of Lemma 4.10.

Case f'_{F3}(1)=0. This can be satisfied if and only if \gamma_{F3}m_{F3}=1, which implies p_0=p_0^{(\gamma)}. Because p_0<p_2+2p_3 needs to hold, Remark 4.9 yields p_0<p_0^{(+)}. Hence f_{F3} has a local maximum (of 0) at P_{F3}. This is precisely case (4) of Lemma 4.10. The shape and positivity properties of f_{F3} are analogous to those in the case f'_{F3}(1)>0, subcase (a) (compare Figs. 2F and 2G).

The statements about the validity of (3.7) and (3.10) follow because it is sufficient to have f_{F3}(x)>0 (respectively f_{F3}(x)<0) for 0<x<P_{F3}. The reason is the monotonicity of the iterates \varphi^{(n)}(0).

Funding

Open access funding provided by University of Vienna.

Data Availability

A Mathematica notebook is provided as Supplementary Material.

Declarations

Competing interests

I have no competing interests to declare that are relevant to the content of this article.

Compliance with ethical standards

The research did not involve human participants or animals.

Footnotes

1

After acceptance of this paper, the validity of the inequality in (4.26) could be established for every x[0,1]. The proof will be made available in due course.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Change history

4/4/2026

Supplementary file was missing from this article and has now been uploaded

References

  1. Agresti A (1974) Bounds on the extinction time distribution of a branching process. Adv Appl Probab 6:322–335
  2. Alsmeyer G, Hoang VH (2025) Power-fractional distributions and branching processes. arXiv preprint arXiv:2503.18563
  3. Athreya KB (1992) Rates of decay for the survival probability of a mutant gene. J Math Biol 30:577–581
  4. Athreya KB, Ney PE (1972) Branching processes. Springer, Berlin-Heidelberg
  5. Boenkost F, González Casanova A, Pokalyuk C, Wakolbinger A (2021) Haldane's formula in Cannings models: the case of moderately weak selection. Electron J Probab 26:1–36. 10.1214/20-EJP572
  6. Boenkost F, González Casanova A, Pokalyuk C, Wakolbinger A (2021) Haldane's formula in Cannings models: the case of moderately strong selection. J Math Biol 83:70
  7. Bürger R, Ewens WJ (1995) Fixation probabilities of additive alleles in diploid populations. J Math Biol 33:557–575
  8. Charlesworth B (2020) How long does it take to fix a favorable mutation, and why should we care? Am Nat 195:753–771
  9. Consul PC, Famoye F (2006) Lagrangian probability distributions. Birkhäuser, Boston-Basel
  10. Consul PC, Jain GC (1973) A generalization of the Poisson distribution. Technometrics 15:791–799
  11. Corless RM, Gonnet GH, Hare DE, Jeffrey DJ, Knuth DE (1996) On the Lambert W function. Adv Comput Math 5:329–359
  12. Daley D, Narayan P (1980) Series expansions of probability generating functions and bounds for the extinction probability of a branching process. J Appl Probab 17:939–947
  13. Desai MM, Fisher DS (2007) Beneficial mutation selection balance and the effect of linkage on positive selection. Genetics 176:1759–1798
  14. Eshel I (1981) On the survival probability of a slightly advantageous mutant gene with a general distribution of progeny size - a branching process model. J Math Biol 12:355–362
  15. Ewens WJ (1969) Population genetics. Methuen, London
  16. Ewens WJ (2004) Mathematical population genetics: theoretical introduction. Springer, New York
  17. Fisher RA (1922) On the dominance ratio. Proc R Soc Edinb 42:321–341
  18. From SG (2007) Some new bounds on the probability of extinction of a Galton-Watson process with numerical comparisons. Commun Stat Theory Methods 36:1993–2009. 10.1080/03610920601126597
  19. Götsch H, Bürger R (2024) Polygenic dynamics underlying the response of quantitative traits to directional selection. Theor Popul Biol 158:21–59
  20. Haccou P, Jagers P, Vatutin VA (2005) Branching processes: variation, growth, and extinction of populations. Cambridge University Press
  21. Haldane JBS (1927) A mathematical theory of natural and artificial selection, part V: selection and mutation. Math Proc Cambridge Philos Soc 23:838–844
  22. Harris TE (1963) The theory of branching processes. Springer, Berlin
  23. Hoppe FM (1992) Asymptotic rates of growth of the extinction probability of a mutant gene. J Math Biol 30:547–566
  24. Johnson NL, Kemp AW, Kotz S (2005) Univariate discrete distributions, 3rd edn. John Wiley & Sons
  25. Kimura M (1964) Diffusion models in population genetics. J Appl Probab 1:177–232
  26. Kolmogorov A (1931) Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math Ann 104:415–458
  27. Lessard S, Ladret V (2007) The probability of fixation of a single mutant in an exchangeable selection model. J Math Biol 54:721–744
  28. Martin G, Lambert A (2015) A simple, semi-deterministic approximation to the distribution of selective sweeps in large populations. Theor Popul Biol 101:40–46
  29. Narayan P (1981) On bounds for probability generating functions. Aust J Stat 23:80–90
  30. Pollak E (1971) On survival probabilities and extinction times for some branching processes. J Appl Probab 8:633–654
  31. Quine M (1976) Bounds for the extinction probability of a simple branching process. J Appl Probab 13:9–16
  32. Sagitov S, Lindo A (2016) A special family of Galton-Watson processes with explosions. In: Branching processes and their applications. Springer, pp 237–254
  33. Seneta E (1967) On the transient behaviour of a Poisson branching process. J Aust Math Soc 7:465–480
  34. Steinrücken M, Wang YR, Song YS (2013) An explicit transition density expansion for a multi-allelic Wright-Fisher diffusion with general diploid selection. Theor Popul Biol 83:1–14
  35. Tuenter HJ (2000) On the generalized Poisson distribution. Stat Neerl 54:374–376
  36. Uecker H, Hermisson J (2011) On the fixation process of a beneficial mutation in a variable environment. Genetics 188:915–930
  37. Wikipedia contributors (2025) Lambert W function — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Lambert_W_function&oldid=1272629041. Accessed 6 February 2025
  38. Wright S (1931) Evolution in Mendelian populations. Genetics 16:97–159



Articles from Journal of Mathematical Biology are provided here courtesy of Springer
