Author manuscript; available in PMC: 2017 Sep 1.
Published in final edited form as: Nonlinearity. 2017 Mar 15;30(4):1682–1707. doi: 10.1088/1361-6544/aa5499

SCALING LIMITS OF A MODEL FOR SELECTION AT TWO SCALES

SHISHI LUO 1,3, JONATHAN C MATTINGLY 2
PMCID: PMC5580332  NIHMSID: NIHMS897715  PMID: 28867875

Abstract

The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0, 1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming-Viot process, an infinite dimensional stochastic process that is frequently associated with population genetics.

Keywords: Markov chains, limiting behavior, evolutionary dynamics, Fleming–Viot process, scaling limits

1. Introduction

We study the model, introduced in [15], of a trait that is advantageous at a local or individual level but disadvantageous at a larger scale or group level. For example, an infectious virus strain that replicates rapidly within its host will outcompete other virus strains in the host. However, if infection with a heavy viral load is incapacitating and prevents the host from transmitting the virus, the rapidly replicating strain may not be as prevalent in the overall host population as a slow-replicating strain.

A simple mathematical formulation of this phenomenon is as follows. Consider a population of m groups. Each group contains n individuals. There are two types of individuals: type I individuals are selectively advantageous at the individual (I) level and type G individuals are selectively advantageous at the group (G) level. Replication and selection occur concurrently at the individual and group level according to the Moran process [8] and are illustrated in Fig 1. Type I individuals replicate at rate 1 + s, s ≥ 0, and type G individuals at rate 1. When an individual gives birth, another individual in the same group is selected uniformly at random to die. To reflect the antagonism at the higher level of selection, groups replicate at a rate which increases with the number of type G individuals they contain. As a simple case, we take this rate to be w(1 + r·k/n), where k/n is the fraction of individuals in the group that are type G, r ≥ 0 is the selection coefficient at the group level, and w > 0 is the ratio of the rate of group-level events to the rate of individual-level events. As with the individual level, the population of groups is maintained at m by selecting a group uniformly at random to die whenever a group replicates. The offspring of groups are assumed to be identical to their parent.
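The two-level dynamics just described can be simulated directly. The following Gillespie-style sketch is our own illustration of the model, not code from the paper; the function name, the random initial state, and all parameter values in the usage below are our choices.

```python
import random

def simulate(m, n, s, r, w, t_max, seed=0):
    """Gillespie simulation of the two-level Moran model (a sketch).

    State: k[i] = number of type G individuals in group i.
    Individual level: type I replicates at rate 1+s, type G at rate 1;
    the offspring replaces a uniformly chosen member of the same group.
    Group level: group i replicates at rate w*(1 + r*k[i]/n) and its
    offspring replaces a uniformly chosen group.
    """
    rng = random.Random(seed)
    k = [rng.randint(0, n) for _ in range(m)]  # arbitrary initial state
    t = 0.0
    while t < t_max:
        # group i loses a type G: a type I gives birth, a type G dies
        down = [(1 + s) * (n - ki) * ki / n for ki in k]
        # group i gains a type G: a type G gives birth, a type I dies
        up = [ki * (n - ki) / n for ki in k]
        # group-level replication rates
        grp = [w * (1 + r * ki / n) for ki in k]
        total = sum(down) + sum(up) + sum(grp)
        t += rng.expovariate(total)
        u = rng.uniform(0, total)
        for i in range(m):
            if u < down[i]:
                k[i] -= 1
                break
            u -= down[i]
            if u < up[i]:
                k[i] += 1
                break
            u -= up[i]
            if u < grp[i]:
                j = rng.randrange(m)   # uniformly chosen group dies
                k[j] = k[i]            # offspring copies parent exactly
                break
            u -= grp[i]
    return k
```

The individual-level rates mirror the transition rates R1 below: a group with k type G individuals loses one at rate (1+s)(n−k)(k/n) and gains one at rate k(n−k)/n, so homogeneous groups (k = 0 or k = n) change only through group-level events.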

Figure 1.

Figure 1

Schematic of the particle process. (a) Left: A population of m = 3 groups, each with n = 3 individuals of either type G (filled small circles) or type I (open small circles). Middle: A type I individual replicates in group 3 and a type G individual is chosen uniformly at random from group 3 to die. Right: Group 1 replicates and produces group 2′. Group 2 is chosen uniformly at random to die. (b) The states in (a) mapped to a particle process. Left: Group 2 has no type G individuals, represented by ball 2 in urn 0. Similarly, group 3 is represented by ball 3 in urn 2 and group 1 by ball 1 in urn 3. Middle: The number of type G individuals in group 3 decreases from two to one, therefore ball 3 moves to urn 1. Right: A group with zero type G individuals dies, while a group with three type G individuals is born. Therefore ball 2 leaves urn 0 and appears in urn 3 as ball 2′.

As illustrated in Fig 1, this two-level process is equivalent to a ball-and-urn or particle process, where each particle represents a group and its position corresponds to the number of type G individuals that are in it. We note that similar, though more general, particle models of evolutionary and ecological dynamics at multiple scales have been studied and we mention several particularly relevant works here. Dawson and Hochberg [6] also consider a population at multiple levels, albeit with one type per level, not two. In [4], Dawson and Greven consider a more general model, allowing for infinitely many hierarchical levels and for migration, selection, and mutation. Méléard and Roelly [16], in a study also inspired by host-pathogen interactions, investigate a model that allows for non-constant host and pathogen populations as well as mutation. In these more general contexts, determining long-term behavior is less straightforward than in our specific setting.

We now define the stochastic process that is the focus of this work. Let $X_t^i$ be the number of type G individuals in group $i$ at time $t$. Then

$$\mu_t^{m,n} := \frac{1}{m}\sum_{i=1}^m \delta_{X_t^i/n}$$

is the empirical measure at time $t$ for a given number of groups $m$ and individuals per group $n$. Here $\delta_x(y) = 1$ if $x = y$ and zero otherwise. The $X_t^i$ are divided by $n$ so that $\mu_t^{m,n}$ is a probability measure on $E_n := \{0, \tfrac{1}{n}, \ldots, 1\}$.

For fixed $T > 0$, $\mu_t^{m,n} \in D([0,T], \mathcal{P}(E_n))$, the set of càdlàg processes on $[0, T]$ taking values in $\mathcal{P}(E_n)$, where $\mathcal{P}(S)$ denotes the set of probability measures on a set $S$. With the particle process described above, $\mu_t^{m,n}$ has generator

$$(\mathcal{L}^{m,n}\psi)(v) = \sum_{i,j} (R_1 + wR_2)(v, v^{i,j})\,\big[\psi(v^{i,j}) - \psi(v)\big] \quad (1)$$

where $v^{i,j} := v + \frac{1}{m}\big(\delta_{j/n} - \delta_{i/n}\big)$, $\psi \in C_b(\mathcal{P}([0,1]))$ is a bounded continuous function, and $v \in \mathcal{P}(E_n) \subset \mathcal{P}([0,1])$. The transition rates $(R_1 + wR_2)$ are given by

$$R_1(v, v^{i,j}) = \begin{cases} m\,v(\tfrac{i}{n})\,i\,(1-\tfrac{i}{n})(1+s) & \text{if } j = i-1,\ i > 0 \\ m\,v(\tfrac{i}{n})\,i\,(1-\tfrac{i}{n}) & \text{if } j = i+1,\ i < n \\ 0 & \text{otherwise} \end{cases}$$

and

$$R_2(v, v^{i,j}) = m\,v(\tfrac{i}{n})\,v(\tfrac{j}{n})\big(1 + r\tfrac{j}{n}\big).$$

R1 represents individual-level events while R2 represents group-level events.

2. Main results

We prove the weak convergence of this measure-valued process as m, n → ∞ under two natural scalings. The first scaling leads to a deterministic partial differential equation. We derive a closed-form expression for the solution of this equation and study its steady-state behavior. The second scaling leads to an infinite dimensional stochastic process, namely a Fleming-Viot process.

Let us briefly introduce some notation. By $m, n \to \infty$ we mean a sequence $\{(m_k, n_k)\}_k$ such that for any $N$ there is a $k_0$ such that $m_k, n_k \geq N$ whenever $k \geq k_0$. We define $\langle f, v\rangle = \int_0^1 f(x)\,v(dx)$, where $f$ is a test function and $v$ a measure. Lastly, $\delta_x$ will denote the delta measure for both continuous and discrete state spaces.

To provide intuition for the two scalings and the corresponding limits, take ψ to be of the form ψ(v) = F (〈g, v〉), where g is some suitable function on [0, 1], and apply the generator in (1) to it:

$$\mathcal{L}^{m,n}\psi(\mu) = F'(\langle g,\mu\rangle)\Big\{\sum_i \Big[\tfrac{1}{n}\,g''(\tfrac{i}{n}) - s\,g'(\tfrac{i}{n})\Big]\,\tfrac{i}{n}\big(1-\tfrac{i}{n}\big)\,\mu(\tfrac{i}{n}) + wr\Big[\sum_i \tfrac{i}{n}\,g(\tfrac{i}{n})\,\mu(\tfrac{i}{n}) - \sum_i g(\tfrac{i}{n})\,\mu(\tfrac{i}{n})\sum_j \tfrac{j}{n}\,\mu(\tfrac{j}{n})\Big]\Big\} \\ + \frac{w}{m}\,F''(\langle g,\mu\rangle)\Big\{\sum_i g(\tfrac{i}{n})^2\,\mu(\tfrac{i}{n}) - \Big(\sum_i g(\tfrac{i}{n})\,\mu(\tfrac{i}{n})\Big)^2 + \tfrac{r}{2}\sum_{i,j}\big(g(\tfrac{j}{n}) - g(\tfrac{i}{n})\big)^2\,\tfrac{j}{n}\,\mu(\tfrac{j}{n})\,\mu(\tfrac{i}{n})\Big\} + o\big(\tfrac{1}{m}\big) + o\big(\tfrac{1}{n}\big) \quad (2)$$

This suggests two natural scalings. The first is to take m, n → ∞ without rescaling any parameters. The g″ and F″ terms vanish and we have a deterministic process. The second is to let $s = \sigma/n$, $r = \rho/m$, and $n/m \to \theta$. The terms F″ and g″ no longer vanish and the process converges to a limit that is stochastic. The precise statement of the weak convergence of the finite state space system to the deterministic limit is in terms of a weak measure-valued solution to a partial differential equation:

Theorem 1

Suppose the particles in the system described by $\mu_t^{m,n}$ are initially independently and identically distributed according to the measure $\mu_0^{m,n}$, where $\mu_0^{m,n} \Rightarrow \mu_0 \in \mathcal{P}([0,1])$ as m, n → ∞. Then, as m, n → ∞, $\mu_t^{m,n} \Rightarrow \mu_t \in D([0,T], \mathcal{P}([0,1]))$ weakly, where $\mu_t$ solves the differential equation

$$\frac{d}{dt}\langle f, \mu_t\rangle = -\langle x(1-x)f', \mu_t\rangle + \lambda\big[\langle xf, \mu_t\rangle - \langle f, \mu_t\rangle\langle x, \mu_t\rangle\big] \quad (3)$$

for any positive-valued test function $f \in C^1([0,1])$ and with initial condition $\langle f, \mu_0\rangle$. Here, $\lambda := wr/s$ and time has been sped up by a factor of $s$.

Throughout we will denote the measure-valued solutions to (3) by μt(dx). We note that strong, density-valued solutions, denoted by ηt(x), solve:

$$\partial_t \eta_t = \partial_x\big[x(1-x)\eta_t\big] + \lambda\,\eta_t\Big(x - \int_0^1 y\,\eta_t(y)\,dy\Big) \quad (4)$$

with initial density η0(x). In this more transparent form one can see that the first term on the right is a flux term that transports density towards x = 0 whereas the second term is a forcing term that increases the density at values of x above the mean of the density. The flux corresponds to the individual-level moves: nearest neighbor moves in the particle system. The forcing term corresponds to group-level moves: moves to occupied sites in the particle system.

We will see that if we start with an initial measure μ0 which is the sum of delta measures, then the solution μt retains the same form. More explicitly, if

$$\mu_0(dx) = \sum_i a_i(0)\,\delta_{x_i(0)}(dx)$$

where $x_i(0) \in [0, 1]$, $a_i(0) > 0$, and $\sum_i a_i(0) = 1$, then we will see (from Lemma 5) that the solution $\mu_t$ to (3) has the form

$$\mu_t(dx) = \sum_i a_i(t)\,\delta_{x_i(t)}(dx).$$

Moreover, the parameters (ai(t), xi(t)) satisfy the following set of coupled equations

$$\begin{cases} \dfrac{dx_i}{dt} = -x_i(1-x_i) \\[4pt] \dfrac{da_i}{dt} = \lambda a_i\big(x_i - \langle y, \mu_t\rangle\big) = \lambda a_i\Big(x_i - \sum_j a_j x_j\Big). \end{cases} \quad (5)$$

Notice that the positions of the delta masses change according to a negative logistic equation, independently of the other masses and of the weights. The weight $a_i$ increases at time $t$ if the position $x_i$ is above the mean, $\sum_j a_j x_j$, and decreases if it is below the mean. To build intuition, it is instructive to consider some simple examples of this form.
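The finite system (5) can also be integrated numerically. The sketch below is our own illustration (an explicit Euler scheme; the renormalization step simply enforces the conserved total mass $\sum_i a_i = 1$, which (5) preserves exactly but a discretization only approximately):

```python
def evolve_deltas(x, a, lam, dt=1e-3, t_max=10.0):
    """Euler integration of the coupled system (5) for a finite
    collection of delta masses: positions x, weights a, parameter lam."""
    x, a = list(x), list(a)
    for _ in range(int(t_max / dt)):
        mean = sum(ai * xi for ai, xi in zip(a, x))
        # dx_i/dt = -x_i (1 - x_i): positions slide toward 0
        x = [xi - dt * xi * (1 - xi) for xi in x]
        # da_i/dt = lam * a_i * (x_i - mean): above-mean masses gain weight
        a = [ai + dt * lam * ai * (xi - mean) for ai, xi in zip(a, x)]
        total = sum(a)
        a = [ai / total for ai in a]  # enforce conservation of total mass
    return x, a
```

For example, starting with a mass at 1 and a mass at 1/2, the weight accumulates at 1 while the other position slides toward 0, consistent with Example 1 below.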

Example 1

According to (5), if μ0 = δ1, then μt = μ0. This can also be seen directly from (3). For an initial condition containing some delta mass at 1, all of the remaining mass migrates towards zero, while the mass at one does not move and its weight increases monotonically, since it always lies above the mean. Eventually all of the remaining mass lies below the mean. Once this happens, it is clear that the weight drains from every point other than one, and hence μt ⇒ δ1 as t → ∞. This reasoning holds in a more general setting and is included in Theorem 3.

Example 2

According to (5), if μ0 = δ0, then μt = μ0. This too can be seen directly from (3). For an initial condition containing no mass at one and only a finite number of masses in total, the mass eventually all moves towards zero, and hence μt → δ0 as t → ∞. If an infinite number of masses is allowed, the situation is not as simple. Theorem 3 hints at the possible complications by giving an example of a density which is invariant.

These simple examples correspond to similarly straightforward biological scenarios: once the population is entirely composed of type I individuals or of type G individuals, it stays in that state. Though δ0 is a fixed point of the system attracting many initial configurations, it is not Lyapunov stable. This means that even small perturbations of δ0 can lead to arbitrarily large excursions away from δ0, even though the system eventually returns to δ0. Rather than making a precise statement, which would require quantifying the size of a perturbation, consider the example $\mu_0 = (1-\varepsilon)\delta_0 + \varepsilon\,\delta_{1-\alpha}$. As ε → 0, the distance between μ0 and δ0 goes to zero in any reasonable metric. If we write $\mu_t = (1-a_t)\delta_0 + a_t\,\delta_{x_t}$, then as α → 0 one can ensure that the system spends an arbitrarily long time with $x_t > \tfrac{1}{2}$, and hence $a_t$ will grow as close to one as one wants in this time. Thus the system makes an arbitrarily big excursion away from δ0 even though μt → δ0 as t → ∞.
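This excursion can be seen numerically by integrating (5) for the two-mass initial condition above; the small Euler sketch below is our own illustration, and all parameter values are illustrative. The weight $a_t$ climbs toward one while $x_t > 1/2$, even though $x_t \to 0$ eventually drags the whole measure back to $\delta_0$:

```python
def excursion(eps, alpha, lam=3.0, dt=1e-3, t_max=30.0):
    """Track mu_t = (1 - a_t) delta_0 + a_t delta_{x_t} for the perturbed
    initial condition mu_0 = (1 - eps) delta_0 + eps delta_{1 - alpha},
    using the closed ODEs (5); returns the largest weight a_t attained."""
    x, a = 1.0 - alpha, eps
    a_max = a
    for _ in range(int(t_max / dt)):
        mean = a * x                    # the mass at 0 contributes nothing
        x += dt * (-x * (1 - x))        # dx/dt = -x(1-x)
        a += dt * lam * a * (x - mean)  # da/dt = lam * a * (x - mean)
        a_max = max(a_max, a)
    return a_max
```

Even for a tiny perturbation (eps = alpha = 1e-3) the weight away from zero grows close to one before the position collapses, which is the instability described above.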

It is natural to ask whether there are other fixed points beyond δ0 and δ1.

Lemma 2 (Fixed points)

The delta measures δ0 and δ1, and the densities in the Beta(λ − α, α) family of distributions:

$$\frac{1}{B(\lambda-\alpha,\alpha)}\,x^{\lambda-\alpha-1}(1-x)^{\alpha-1}$$

with α ∈ (0, λ), are fixed points of (3). B(λ − α, α) is the normalizing constant that makes the density integrate to 1 over the interval [0, 1].
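The stationarity of the Beta family can be checked numerically by evaluating the right-hand side of the strong form (4) at the Beta density and verifying that it vanishes. The finite-difference sketch below is our own check; the grid points and parameter values are illustrative choices:

```python
import math

def beta_density(x, lam, alpha):
    """Density of Beta(lam - alpha, alpha) on (0, 1)."""
    b = math.gamma(lam - alpha) * math.gamma(alpha) / math.gamma(lam)
    return x ** (lam - alpha - 1) * (1 - x) ** (alpha - 1) / b

def rhs(x, lam, alpha, h=1e-5):
    """Right-hand side of (4): d/dx [x(1-x) eta] + lam * eta * (x - mean),
    with the flux derivative approximated by a central difference."""
    mean = (lam - alpha) / lam  # mean of Beta(lam - alpha, alpha)
    flux = lambda y: y * (1 - y) * beta_density(y, lam, alpha)
    dflux = (flux(x + h) - flux(x - h)) / (2 * h)
    return dflux + lam * beta_density(x, lam, alpha) * (x - mean)
```

For any λ > α > 0 the two terms cancel pointwise in the interior of (0, 1), up to finite-difference error.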

For measure-valued initial data, we show that the basins of attraction for the fixed points are determined by whether they charge the point x = 1 and their Hölder exponent around x = 1.

Theorem 3 (Steady state behavior)

Consider the measure-valued solution $\mu_t(dx)$ to (3) with initial probability measure $\mu_0(dx)$. If $\mu_0(\{1\}) > 0$ then

$$\mu_t \Rightarrow \delta_1 \quad\text{as } t \to \infty,$$

and if μ0([1 − ε, 1]) = 0 for some ε > 0 then

$$\mu_t \Rightarrow \delta_0 \quad\text{as } t \to \infty.$$

Alternatively, suppose that for some α > 0 and C > 0

$$x^{-\alpha}\,\mu_0([1-x, 1]) \to C \quad\text{as } x \to 0.$$

If α < λ, then

$$\mu_t(dx) \Rightarrow \mathrm{Beta}(\lambda-\alpha, \alpha) \quad\text{as } t \to \infty.$$

Otherwise, if α ≥ λ,

$$\mu_t(dx) \Rightarrow \delta_0(dx) \quad\text{as } t \to \infty.$$

The α < λ case in the above theorem is particularly relevant to theoretical evolutionary biology because it implies that coexistence of the two types is possible in this infinite population limit.

The results of Theorem 3 should be contrasted with the original Markov chain before taking the limit m, n → ∞. In the Markov chain, the population eventually becomes entirely type G or entirely type I. Within a group, the two homogeneous states are absorbing for the individual-level dynamics, and a population made up entirely of groups homogeneous in the same type is absorbing for the group-level dynamics. Hence the state of the system eventually becomes composed entirely of homogeneous groups, all of type G or all of type I, and stays in that state for all future times. These two absorbing states of the Markov chain, with finite m and n, correspond to the states δ0 and δ1 in the scaling limit. Hence the natural discretization of the Beta distribution to the lattice $\{\tfrac{k}{n} : 0 < k < n\}$, given by

$$\frac{1}{Z(m,n,\lambda,\alpha)}\Big(\frac{k}{n}\Big)^{\lambda-\alpha-1}\Big(1 - \frac{k}{n}\Big)^{\alpha-1},$$

cannot be invariant. (Here Z is the normalization constant which ensures the probabilities sum to one.) However, for large m and n, it is reasonable to expect it to be nearly invariant, in the sense that if the initial states $\{X^i(0) : 1 \le i \le m\}$ are independent and distributed as the discrete Beta distribution, then the Markov chain dynamics will keep the distribution close to the product of discretized Beta distributions for a long time. The expectation of this time will grow to infinity as m, n → ∞. In the context of evolutionary biology, this suggests that although a large finite population ultimately becomes fixed in one of two homogeneous states, it may be trapped for a long time in a state where both types coexist. Furthermore, this nearly invariant state should be similar to the discretization of the Beta distribution above.

We will not pursue a rigorous proof of this near or quasi invariance here. Nonetheless, we now briefly sketch the argument as we understand it, giving the central points. If the distribution of the Markov chain is close to a product of discretized Beta distributions, then the empirical mean will be highly concentrated around the mean of the continuous Beta distribution when m and n are large. Hence the generator projected onto any $X^i$ is nearly decoupled from the other particles and close to being Markovian. More precisely, the dynamics of any fixed $X^i$ is well approximated in this setting by the one-dimensional Markov chain obtained by replacing the mean of the empirical measure in the full generator with the mean of the Beta distribution. It is straightforward to see that for m and n large, the discretized Beta distribution is an approximate left-eigenfunction of this one-dimensional generator, with an eigenvalue which goes to zero as m, n → ∞.

All of these observations can be combined to show that if the system starts in the product of discretized Beta distributions, then it will stay close to the product of discretized Beta distributions for a long time if m and n are large.

We now turn to the second scaling. Let $s = \sigma/n$, $r = \rho/m$, and $n/m \to \theta$, and let $\nu_t^{m,n}$ denote the empirical measure under this scaling. The terms F″ and g″ in the generator (1) no longer vanish, and the process converges to a limit that is stochastic. Our weak convergence result is stated and proved in terms of a martingale problem.

Theorem 4

Suppose $n/m \to \theta$, $w = O(1)$, $s = \sigma/n$, $r = \rho/m$, and we speed up time by a factor of $n$. Suppose the particles in the rescaled $\nu_t^{m,n}$ process are initially independently and identically distributed according to the measure $\nu_0^{m,n}$, where $\nu_0^{m,n} \Rightarrow \nu_0$ as m, n → ∞. Then the rescaled process converges weakly to $\nu_t$ as m, n → ∞, where $\nu_t$ satisfies the following martingale problem:

$$N_t(f) = \langle f, \nu_t\rangle - \langle f, \nu_0\rangle - \int_0^t \langle Af, \nu_z\rangle\,dz - w\theta\rho\int_0^t \Big\{\int_0^1\!\!\int_0^1 f(x)\,V(z, \nu_z, y)\,Q(\nu_z; dx, dy)\Big\}\,dz \quad (6)$$

is a martingale with conditional quadratic variation

$$\langle N(f)\rangle_t = 2w\theta\int_0^t \Big[\int_0^1\!\!\int_0^1 f(x)f(y)\,Q(\nu_\tau; dx, dy)\Big]\,d\tau \quad (7)$$

where

$$Af(x) = x(1-x)\Big[\frac{d^2}{dx^2}f(x) - \sigma\frac{d}{dx}f(x)\Big]$$
$$V(t, \nu, x) = x$$
$$Q(\nu; dx, dy) = \nu(dx)\big(\delta_x(dy) - \nu(dy)\big)$$

and $f \in C^2([0, 1])$.

The drift part of the martingale (6) comprises a second order partial differential operator A and the centering term from the global jump dynamics (the expression in curly brackets). Note that A is in fact the generator of the Wright-Fisher diffusion with selection [8]. The entire process is a Fleming-Viot process [11]. Fleming-Viot processes frequently arise in models of population genetics (for example [3, 12]; see [10] for a review). In these contexts, the variable x can represent the geographical location of an individual, or as in the original paper of Fleming and Viot [11], the genotype of an individual (where genotype is a continuous instead of a discrete variable).

As an aside, it may be helpful to mention an alternative characterization of this Fleming-Viot process as an infinite system of stochastic differential equations. Instead of a martingale problem, where both m and n have been taken to ∞, we consider the n → ∞ limit first. In this case, we have a finite collection of delta masses (each of mass 1/m) moving on the interval [0, 1]. The positions of these delta masses can be represented by a coupled system of stochastic differential equations (SDEs). From the generator equation (2), one can see that each SDE comprises a diffusion part (corresponding to the individual-level dynamics) and a jump process (corresponding to the group-level dynamics). Specifically, a delta mass jumps to the position of another delta mass according to a Poisson process with rates dependent on the positions of the delta masses. Donnelly & Kurtz [7] characterize a population process in terms of such a system of SDEs and show that the infinite population limit corresponds to a martingale problem for the Fleming-Viot process.
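As a rough illustration of this pre-limit system (not the construction used in [7]), one can simulate the m delta masses with an Euler–Maruyama step for the diffusion part and random resampling events for the jump part. The diffusion is the Wright-Fisher SDE with selection whose generator is A, namely dX = −σX(1−X) dt + √(2X(1−X)) dW; all parameter values below are illustrative assumptions:

```python
import math, random

def fv_particles(m=100, sigma=1.0, w=1.0, rho=1.0, dt=1e-3, t_max=0.5, seed=0):
    """Sketch of the pre-limit particle system behind Theorem 4: m delta
    masses on [0,1], each following the Wright-Fisher diffusion with
    selection, plus jump events in which a particle is replaced by a
    copy of a parent chosen with probability ~ 1 + rho*x/m (the weak
    group-level selection bias)."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(m)]
    for _ in range(int(t_max / dt)):
        for i in range(m):
            xi = x[i]
            drift = -sigma * xi * (1.0 - xi)
            diff = math.sqrt(max(2.0 * xi * (1.0 - xi), 0.0))
            xi += drift * dt + diff * rng.gauss(0.0, math.sqrt(dt))
            x[i] = min(1.0, max(0.0, xi))  # clamp rounding back into [0,1]
        for i in range(m):
            if rng.random() < w * dt:  # group-level event hits particle i
                weights = [1.0 + rho * xj / m for xj in x]
                j = rng.choices(range(m), weights=weights, k=1)[0]
                x[i] = x[j]            # i jumps to the position of parent j
    return x
```

The jump mechanism is exactly the "delta mass jumps to the position of another delta mass" dynamic described above, with the O(1/m) bias toward parents with larger x reflecting the rescaled group-level selection.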

We briefly discuss other scalings one might obtain from the particle system and their biological significance. The two scalings studied here correspond, respectively, to what is called 'strong' selection and 'weak' selection occurring at both levels. (In the field of theoretical evolutionary biology, strong selection is defined as the selection parameter being constant in the population size, whereas weak selection has the selection parameter scaling with the inverse of the population size.) One can also use the same techniques to characterize the limiting system when selection is strong at one level and weak at the other. The dynamics of these limiting systems are more straightforward. For example, if selection is weak at the individual level ($s = O(1/n)$) and strong at the group level (no rescaling of r), one can see from the generator equation (2) that the highest order term corresponds to selection at the group level. The limit is therefore deterministic, and the steady state is a population homogeneous in the group type with the largest proportion of type G individuals present in the initial state. Note that it is possible, by further rescaling w, to obtain a limit from a mixture of weak and strong selection. A biological interpretation of these observations is that for selection to manifest itself at two biological levels, the selective forces must be comparable in some sense: either both levels undergo the same type of selection (weak or strong), or, if weak selection acts on one level but not the other, this weak selection must be compensated by a faster timescale.

The dynamical properties of the deterministic partial differential equation (3) are the focus of the next section. The proofs of weak convergence (Theorems 1 and 4) are deferred to section 4.

3. Properties of the deterministic limit

We begin with a closed-form expression for solutions to the deterministic partial differential equation (3).

Lemma 5

The solution to the deterministic partial differential equation (3) with initial measure μ0 is given by

$$\mu_t(dx) = (G_t\mu_0)(dx) = (\mu_0 \circ \phi_t^{-1})(dx)\; w_t(x) \quad (8)$$

where

$$\phi_t^{-1}(x) = \frac{x}{e^{-t} + x(1-e^{-t})}$$
$$w_t(x) = \Big[\big(e^{-t} + x(1-e^{-t})\big)\,e^{\,t - \int_0^t h(z)\,dz}\Big]^{\lambda}$$

and $h(t)$ satisfies $h(t) = \langle x, \mu_t\rangle$.

Remark 1

$(\mu_0 \circ \phi_t^{-1})(dx) := \mu_0(\phi_t^{-1}(dx))$ captures the changes in the initial data that are solely due to the flux term. This expression is also known as the push-forward measure of $\mu_0$ under the map $\phi_t$. As we will see in the proof, $\phi_t(x)$ is precisely the characteristic curve for the spatial variable $x$. The multiplication by $w_t(x)$ captures the changes in the initial data that are due to the forcing term in (3) and includes a normalizing factor.

Remark 2

Density-valued solutions are given by

$$\eta_t(x) = \eta_0\big(\phi_t^{-1}(x)\big)\;\partial_x\phi_t^{-1}(x)\;w_t(x) = \eta_0\Big(\frac{x}{e^{-t}+x(1-e^{-t})}\Big)\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-2}\,e^{(\lambda-1)t - \lambda\int_0^t h(z)\,dz} \quad (9)$$

To see this, suppose μ0(dx) = η0(x)dx. Then for any test function f,

$$\int_0^1 f(x)\,(\mu_0\circ\phi_t^{-1})(dx) = \int_0^1 (f\circ\phi_t)(x)\,\mu_0(dx) = \int_0^1 f(y)\,\eta_0\big(\phi_t^{-1}(y)\big)\,\partial_y\phi_t^{-1}(y)\,dy$$

The first equality follows from the change-of-variable property of push-forward measures and the second from a standard change of variables. The limits of integration do not change because 0 and 1 are fixed points of both $\phi_t$ and $\phi_t^{-1}$.

Proof of Lemma 5

We apply the method of characteristics (see for example [17]) to obtain a formula for a density-valued solution. We then prove that the weak, measure-valued analog of this solution satisfies (3). Consider the following modification of (4):

$$\partial_t \xi(t,x) = \partial_x\big[x(1-x)\xi(t,x)\big] + \lambda\,\xi(t,x)\big[x - h(t)\big] \quad (10)$$

where $h(t)$ is a general function of time and $\xi_0 \in C^1([0,1])$. Note that when $h(t) = \int_0^1 y\,\xi(t,y)\,dy$, this differential equation is equivalent to (4). To be clear about which equation we are solving, we use $\xi(t, x)$ to denote solutions when $h(t)$ is unspecified.

Rewriting (10):

$$\big(\partial_t\xi,\;\partial_x\xi,\;1\big)\cdot\Big(1,\;-x(1-x),\;-\big[(1-2x) + \lambda(x - h(t))\big]\xi\Big) = 0$$

The second vector is therefore tangent to the solution surface and gives the rates of change for the t, x, and ξ coordinates. Let the initial condition be parameterized as (0, x, ξ0(x)) = (0, p, ξ0(p)). The t, x, and ξ coordinates change according to the characteristic equations

$$\frac{dt}{dq} = 1, \quad t(0,p) = 0; \qquad \frac{dx}{dq} = -x(1-x), \quad x(0,p) = p; \qquad \frac{d\xi}{dq} = \Big[\big(1-2x(q,p)\big) + \lambda\big(x(q,p) - h(t(q,p))\big)\Big]\xi, \quad \xi(0,p) = \xi_0(p)$$

where q is the parameter as we move through the solutions in time. The first two ordinary differential equations have solutions

$$t(q,p) = q, \qquad x(q,p) = \frac{p}{p - (p-1)e^{q}} =: \phi_q(p) \quad (11)$$

From this, the third differential equation can be solved exactly:

$$\frac{d\xi}{dq} = \Big[1 + (\lambda-2)\,\frac{p}{p-(p-1)e^{q}} - \lambda h(q)\Big]\,\xi$$
$$\xi(q,p) = \xi_0(p)\exp\Big\{q - \lambda\int_0^q h(z)\,dz + (\lambda-2)\int_0^q \frac{p}{p-(p-1)e^{z}}\,dz\Big\} = \xi_0(p)\,e^{(\lambda-1)q - \lambda\int_0^q h(z)\,dz}\,\big[p + (1-p)e^{q}\big]^{-(\lambda-2)}$$

Next, make the substitutions $q = t$ and $p = \phi_t^{-1}(x)$ from (11) to obtain ξ in terms of t and x:

$$\xi(t,x) = (\xi_0\circ\phi_t^{-1})(x)\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-2}\,e^{(\lambda-1)t - \lambda\int_0^t h(z)\,dz} = (\xi_0\circ\phi_t^{-1})(x)\;\partial_x\phi_t^{-1}(x)\;w_t(x) \quad (12)$$

If $h(t)$ satisfies $h(t) = \int_0^1 y\,\xi(t,y)\,dy$, then by definition ξ(t, x) solves the partial differential equation (4). Conversely, if ξ(t, x) solves the partial differential equation (4), it also solves the differential equation (10) with $h(t) = \int_0^1 y\,\xi(t,y)\,dy$. Therefore the above expression, together with the condition $h(t) = \int_0^1 y\,\xi(t,y)\,dy$, characterizes the solutions of (4).

To extend this result to measures, suppose we have a strong solution ηt(x) with initial condition η0:

$$\eta_t(x) = (\eta_0\circ\phi_t^{-1})(x)\;\partial_x\phi_t^{-1}(x)\;w_t(x)$$

Using a similar calculation as that in Remark 2, the measure μt(dx) corresponding to ηt(x) is given by

$$\mu_t(dx) = (\mu_0\circ\phi_t^{-1})(dx)\;w_t(x)$$

It remains to check that this satisfies the weak deterministic partial differential equation (3), with h(t) = 〈x, μt〉. The left hand side of the equation is

$$\frac{d}{dt}\langle f, \mu_t\rangle = \frac{d}{dt}\int_0^1 f(x)\,w_t(x)\,(\mu_0\circ\phi_t^{-1})(dx) = \frac{d}{dt}\int_0^1 f(\phi_t(x))\,w_t(\phi_t(x))\,\mu_0(dx)$$

Differentiating under the integral sign, expanding the expressions for $\partial_t\phi_t$ and $\partial_t\big(w_t(\phi_t(x))\big)$, and applying the change of variables for push-forward measures again, we obtain

$$\frac{d}{dt}\langle f, \mu_t\rangle = -\int_0^1 x(1-x)\,f'(x)\,w_t(x)\,(\mu_0\circ\phi_t^{-1})(dx) + \lambda\int_0^1 \big[x - h(t)\big]\,f(x)\,w_t(x)\,(\mu_0\circ\phi_t^{-1})(dx)$$

This matches the right hand side of the weak deterministic partial differential equation (3). □

In practice, the condition h(t) = 〈x, μt〉 is difficult to use. The following provides an equivalent and simpler condition.

Lemma 6

(Conservation of measure condition). Suppose ξ is a weak measure-valued solution to the deterministic partial differential equation (10) with initial condition satisfying $\int_0^1 \xi_0(dx) = 1$. Then

$$h(t) = \int_0^1 y\,\xi(t,dy) \quad\text{if and only if}\quad \int_0^1 \xi(t,dy) = 1 \;\;\forall\, t > 0.$$

Proof

(⇒ direction) Suppose $h(t) = \int_0^1 y\,\xi(t,dy)$. Then ξ is a weak measure-valued solution to (3). Taking the test function f ≡ 1, we obtain

$$\frac{d}{dt}\langle 1, \xi\rangle = 0 + \lambda\big[\langle x, \xi\rangle - \langle 1, \xi\rangle\langle x, \xi\rangle\big] = \lambda\langle x, \xi\rangle\big(1 - \langle 1, \xi\rangle\big)$$

Thus, if the initial data has total measure 1, 〈1, ξ〉 remains constant at 1 for all t ≥ 0. (⇐ direction) Suppose $\int_0^1 \xi(t,dx) = 1$ for all $t > 0$. Again take the test function f ≡ 1, but this time with unspecified h(t):

$$0 = \frac{d}{dt}\langle 1, \xi\rangle = 0 + \lambda\big[\langle x, \xi\rangle - \langle 1, \xi\rangle\,h(t)\big] = \lambda\big[\langle x, \xi\rangle - h(t)\big].$$

For this to hold, we must have $h(t) = \int_0^1 x\,\xi(t,dx)$. □

The above lemmas imply that solutions $\mu_t(dx)$ to (3) can be obtained by using formula (8) from Lemma 5 and imposing the conservation of measure condition $\langle 1, \mu_t\rangle \equiv 1$ from Lemma 6. We illustrate this with some explicitly solvable examples for special choices of initial data. We will see that the long time behavior of the examples is consistent with the results stated in Theorem 3.

Example 3

Initial measure concentrated at $x_0 \in [0, 1]$, i.e. $\mu_0 = \delta_{x_0}$. Using formula (8),

$$\int f(x)\,\mu_t(dx) = \int f(x)\,w_t(x)\,(\delta_{x_0}\circ\phi_t^{-1})(dx) = f(\phi_t(x_0))\,w_t(\phi_t(x_0)) = \int f(x)\,w_t(x)\,\delta_{\phi_t(x_0)}(dx)$$

Thus $\mu_t(dx) = w_t(x)\,\delta_{\phi_t(x_0)}(dx)$. Imposing the conservation of measure condition gives $\mu_t(dx) = \delta_{\phi_t(x_0)}(dx)$. In other words, an initial delta measure at $x_0$ moves as a delta measure along the x axis, with position $\phi_t(x_0)$, the solution of the negative logistic equation with initial position $x_0$.
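The closed-form characteristic map from Lemma 5 (as reconstructed there) can be checked against a direct numerical integration of the negative logistic equation; the comparison below is our own sanity check:

```python
import math

def phi(t, x0):
    """Closed-form flow of dx/dt = -x(1-x), x(0) = x0: the position at
    time t of a delta mass started at x0 (the map phi_t of Lemma 5)."""
    return x0 / (x0 + (1.0 - x0) * math.exp(t))

def phi_euler(t, x0, dt=1e-4):
    """Explicit Euler integration of the same ODE, for comparison."""
    x = x0
    for _ in range(int(round(t / dt))):
        x -= dt * x * (1.0 - x)
    return x
```

Note that 0 and 1 are fixed points of the map, matching the invariance of δ0 and δ1.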

Example 4

Initial uniform density: $\eta_0(x) = 1$, i.e. $\mu_0(dx) = dx$. Using formula (9),

$$\eta_t(x) = e^{(\lambda-1)t - \lambda\int_0^t h(z)\,dz}\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-2}$$

Imposing conservation of measure:

$$e^{(\lambda-1)t - \lambda\int_0^t h(z)\,dz} = \Big[\int_0^1 \big[e^{-t}+x(1-e^{-t})\big]^{\lambda-2}\,dx\Big]^{-1} = \begin{cases} \dfrac{(\lambda-1)(1-e^{-t})}{1 - e^{-(\lambda-1)t}} & \text{if } \lambda \neq 1 \\[6pt] \dfrac{1-e^{-t}}{t} & \text{if } \lambda = 1 \end{cases}$$

Thus,

$$\eta_t(x) = \begin{cases} \dfrac{(\lambda-1)(1-e^{-t})}{1 - e^{-(\lambda-1)t}}\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-2} & \text{if } \lambda \neq 1 \\[6pt] \dfrac{1-e^{-t}}{t}\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-2} & \text{if } \lambda = 1 \end{cases}$$

Note that η0 ≡ 1 corresponds to an initial condition satisfying the hypothesis of Theorem 3 with α = 1. As predicted, when λ > 1 we obtain $\eta_t(x) \to (\lambda-1)x^{\lambda-2} = \mathrm{Beta}(\lambda-1, 1)$ as t → ∞.
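The λ ≠ 1 branch of this closed form can be checked numerically for conservation of mass and for the t → ∞ limit; the quadrature below (midpoint rule, illustrative parameter values) is our own verification sketch:

```python
import math

def eta_t(x, t, lam):
    """Closed-form solution of Example 4 (uniform initial density),
    lam != 1 branch: eta_t(x) = Z_t * (e^{-t} + x(1-e^{-t}))^{lam-2}."""
    z = (lam - 1) * (1 - math.exp(-t)) / (1 - math.exp(-(lam - 1) * t))
    return z * (math.exp(-t) + x * (1 - math.exp(-t))) ** (lam - 2)

def total_mass(t, lam, n=10000):
    """Midpoint-rule integral of eta_t over [0, 1]."""
    return sum(eta_t((k + 0.5) / n, t, lam) for k in range(n)) / n
```

The total mass stays at 1 for all t > 0 (Lemma 6), and for λ = 3 the density approaches the Beta(2, 1) density 2x as t grows.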

The following is an example with α > 1.

Example 5

If $\eta_0(x) = 2(1-x)$, i.e. $\mu_0([1-x, 1]) = x^2$, then the corresponding α from Theorem 3 is α = 2.

Using formula (9)

$$\eta_t(x) = 2\,e^{(\lambda-2)t - \lambda\int_0^t h(z)\,dz}\,(1-x)\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-3}$$

Imposing the condition in Lemma 6 to solve for the h(z) term

$$e^{(\lambda-2)t - \lambda\int_0^t h(z)\,dz} = \Big[2\int_0^1 (1-x)\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-3}\,dx\Big]^{-1} = \begin{cases} \dfrac{(\lambda-1)(\lambda-2)(1-e^{-t})^2}{2\big[1 - (\lambda-1)e^{-(\lambda-2)t} + (\lambda-2)e^{-(\lambda-1)t}\big]} & \text{if } \lambda \neq 2 \\[6pt] \dfrac{(1-e^{-t})^2}{2\,(t - 1 + e^{-t})} & \text{if } \lambda = 2 \end{cases}$$

As predicted by Theorem 3 for λ > 2 = α,

$$\eta_t(x) \to \tfrac{1}{2}(\lambda-1)(\lambda-2)\,(1-x)\,x^{\lambda-3} = \mathrm{Beta}(\lambda-2, 2)$$

as t → ∞.

Example 6

$\eta_0(x) = \frac{1}{c}\,\mathbf{1}_{[0,c]}(x)$ with $c < 1$.

Using formula (9)

$$\eta_t(x) = \frac{1}{c}\,\mathbf{1}_{\{x \le \phi_t(c)\}}\;w_t(x)\;\partial_x\phi_t^{-1}(x)$$

Since $\phi_t(c) = \frac{c\,e^{-t}}{1-c+c\,e^{-t}} \to 0$ as t → ∞, $\eta_t(x) \to 0$ for any x > 0. Since $\eta_t$ must have total mass 1, it follows that, regardless of the value of λ, $\eta_t(x)\,dx \Rightarrow \delta_0(dx)$ for any c < 1. This can also be seen by applying Theorem 3 and noting that $\mu_0([1-\varepsilon, 1]) = \int_{1-\varepsilon}^1 \eta_0(x)\,dx = 0$ for $\varepsilon = 1-c$.

We end these examples with solutions for μ0 that are mixtures of delta measures and densities. First, note that it is straightforward to extend Example 3 to the case where $\mu_0(dx) = \sum_i a_i\,\delta_{x_i}(dx)$ is a linear combination of delta measures, with $a_i > 0$ for all i. Applying (8), we obtain

$$\mu_t(dx) = \sum_i a_i\,w_t(x)\,\delta_{\phi_t(x_i)}(dx) = \sum_i a_i(t)\,\delta_{x_i(t)}(dx)$$

where $x_i(t) = \phi_t(x_i)$ and $a_i(t) = a_i\,w_t(x)\big|_{x = x_i(t)}$. Our earlier system of equations (5) is obtained from this and the definitions of $\phi_t(x)$ and $w_t(x)$.

Second, we consider a combination of a delta measure and a density

$$\mu_0(dx) = a\,\delta_{x_0}(dx) + (1-a)\,v_0(x)\,dx$$

Notice that the formula for the solution (8) at first seems linear in the initial condition:

$$\int f(x)\,\mu_t(dx) = \int f(x)\,(G_t\mu_0)(dx) = \int f(x)\,w_t(x)\Big[a\,\delta_{\phi_t(x_0)}(dx) + (1-a)\,v_0\big(\phi_t^{-1}(x)\big)\,\partial_x\phi_t^{-1}(x)\,dx\Big] = \int f(x)\Big[a\,(G_t\delta_{x_0})(dx) + (1-a)\,(G_t v_0)(dx)\Big]$$

This gives $(G_t\mu_0)(dx) = a\,(G_t\delta_{x_0})(dx) + (1-a)\,(G_t v_0)(dx)$. However, this notation is misleading, because implicit in the $G_t$ operator is the function $h(t)$, the mean of the overall process over time. Here $h(t)$ involves both the delta measure and the density; the solution operator $G_t$ is therefore not linear.

Nevertheless, we can still use this formula to obtain expressions for solutions. We illustrate this with a concrete example.

Example 7

Take $x_0 = 0$ and $v_0(x)$ the density function of Beta(λ − α, α) with α ∈ (0, λ). Using the solution formula and a direct calculation, we obtain

$$\mu_t(dx) = a\,w_t(0)\,\delta_0(dx) + (1-a)\,w_t(x)\,(v_0\circ\phi_t^{-1})(x)\,\partial_x\phi_t^{-1}(x)\,dx = e^{-\lambda\int_0^t h(z)\,dz}\Big\{a\,\delta_0(dx) + (1-a)\,e^{(\lambda-\alpha)t}\,v_0(x)\,dx\Big\}$$

Note in particular that μt remains a linear combination of δ0 and the Beta distribution. The Beta distribution ultimately dominates because λ > α.

We now use Lemma 5 to show that Beta distributions, δ0, and δ1 are fixed points for the deterministic partial differential equation and thus provide a proof of Lemma 2 announced earlier in this note.

Proof of Lemma 2

Note that we could prove this lemma by substituting δ0, δ1, and the Beta distribution into the deterministic partial differential equation (3) and showing the right-hand side equals zero. Instead, we will show that these distributions are fixed points of the solution operator. Let v be the density of the Beta distribution,

$$v(x) = \frac{1}{B(\lambda-\alpha,\alpha)}\,x^{\lambda-\alpha-1}(1-x)^{\alpha-1}.$$

The mean of $v$ is $\frac{\lambda-\alpha}{\lambda}$, so that $\lambda\int_0^t h(z)\,dz = (\lambda-\alpha)t$. Using (9),

$$(G_t v)(x) = v\big(\phi_t^{-1}(x)\big)\,\big[e^{-t}+x(1-e^{-t})\big]^{\lambda-2}\,e^{(\lambda-1)t - (\lambda-\alpha)t} = v(x)$$

v is therefore a fixed point of the solution operator and hence is a fixed point of the deterministic partial differential equation.

For δ0 and δ1, we use Example 3 above to obtain $(G_t\delta_{x_0})(dx) = \delta_{\phi_t(x_0)}(dx)$. Since $x_0 = 0$ and $x_0 = 1$ are fixed points of $\phi_t$, it follows that δ0 and δ1 are fixed points of $G_t$. □

We now establish when the fixed points are stable. We begin with a lemma which gives more general conditions than those given in Theorem 3 for the delta measure at zero to attract a given initial condition.

Lemma 7

If for some α ≥ λ > 0,

$$\lim_{x\to 0}\; x^{-\alpha}\,\mu_0([1-x, 1]) < \infty,$$

then μt → δ0 as t → ∞. In particular, this condition holds if μ0([1 − ε, 1]) = 0 for some ε > 0.

To prove this and subsequent results, we will need the following technical lemma.

Lemma 8

Setting h(t) = 〈x, μt〉, the following two implications hold:

$$\int_0^\infty h(t)\,dt < \infty \;\Longrightarrow\; h(t) \to 0 \;\text{ as } t \to \infty,$$
$$\int_0^\infty \big[1 - h(t)\big]\,dt < \infty \;\Longrightarrow\; h(t) \to 1 \;\text{ as } t \to \infty.$$

Proof of Lemma 8

Since h(t) ≥ 0 and 1 − h(t) ≥ 0, the only obstruction to either implication is that h(t) (or 1 − h(t)) could return to an order-one value on ever shorter and shorter intervals before returning to a value close to zero, while keeping the integral finite. This would require h(t) to have unbounded derivative. However, this is not possible since

$$\frac{dh}{dt}(t) = -\big(h - \langle x^2, \mu_t\rangle\big) + \lambda\big(\langle x^2, \mu_t\rangle - h^2\big)$$

from which one easily sees that $-1 \le \frac{dh}{dt}(t) \le \lambda$, since $0 \le h - \langle x^2, \mu_t\rangle \le 1$ and $0 \le \langle x^2, \mu_t\rangle - h^2 \le 1$. □

Proof of Lemma 7

As usual let h(t) = 〈x, μt〉. We begin by observing that if

$$\int_0^\infty h(t)\,dt < \infty$$

then h(t) → 0 as t → ∞ by Lemma 8, and hence $\mu_t \Rightarrow \delta_0$, as we wish to prove. Thus, we henceforth assume that $\int_0^\infty h(t)\,dt = \infty$. Under this assumption, we will show that for any continuous function f,

$$\int_0^1 f(x)\,\mu_t(dx) \to f(0) \quad\text{as } t \to \infty.$$

Since f is continuous, given any ε > 0 there exists a δ > 0 so that |f(x) − f(0)| < ε whenever x ≤ δ. Hence

$$\Big|\int_0^1 f(x)\,\mu_t(dx) - f(0)\Big| \le \int_0^1 |f(x) - f(0)|\,\mu_t(dx) \le \varepsilon + \int_\delta^1 |f(x) - f(0)|\,\mu_t(dx) \quad (13)$$

Next,

$$\int_\delta^1 |f(x) - f(0)|\,\mu_t(dx) = \int_{\phi_t^{-1}(\delta)}^1 \big|(f\circ\phi_t)(x) - f(0)\big|\,(w_t\circ\phi_t)(x)\,\mu_0(dx) \le 2\|f\|_\infty \int_{\phi_t^{-1}(\delta)}^1 (w_t\circ\phi_t)(y)\,\mu_0(dy).$$

Since for all $y \in [\phi_t^{-1}(\delta), 1]$ and $t > 0$ we have

$$(w_t\circ\phi_t)(y) \le e^{\lambda t - \lambda\int_0^t h(s)\,ds},$$

we see that

$$\int_\delta^1 |f(x) - f(0)|\,\mu_t(dx) \le 2\|f\|_\infty\, e^{\lambda t - \lambda\int_0^t h(s)\,ds}\,\mu_0\big([\phi_t^{-1}(\delta), 1]\big).$$

Now using the assumption on $\mu_0$ and the fact that $1 - \phi_t^{-1}(\delta) \le D e^{-t}$ for some D > 0 and all t > 0, one has that

$$e^{\lambda t - \lambda\int_0^t h(s)\,ds}\,\mu_0\big([\phi_t^{-1}(\delta), 1]\big) \le \hat{D}\, e^{(\lambda-\alpha)t - \lambda\int_0^t h(s)\,ds}$$

for some constant $\hat{D}$ and all t > 0. Since α ≥ λ and $\int_0^\infty h(s)\,ds = \infty$, this bound converges to zero as t → ∞, and the proof is complete as the ε in (13) was arbitrary. □

Proof of Theorem 3

We start with the setting where μ0({1}) > 0 and begin by writing $\mu_t(dx) = a_t\,\delta_1(dx) + (1-a_t)\,\nu_t(dx)$ for some time-dependent process $a_t \in [0, 1]$ with $a_0 > 0$ and some probability-measure-valued process $\nu_t(dx)$. As usual we define h(t) = ⟨x, μt⟩; using the representation given in (8), one sees that $a_t$ solves

datdt=λat(1h(t))at=a0exp(λ0t[1h(s)]ds).

Since 1 − h(t) ≥ 0, we know that 0t[1h(s)]ds converges as t → ∞. If it converges to ∞ then at also converges to ∞ since a0 > 0. However this is impossible since at ∈ [0, 1] for all t ≥ 0. Thus, we conclude that 0t[1h(s)]ds<. Then Lemma 8 implies that h(t) → 1 which in turn implies that μtδ1 as t → ∞.

We now turn to the setting where x^α μ0([1 − x, 1]) → C > 0 as x → 0. The case λ ≤ α is already handled by Lemma 7, leaving only the case λ > α > 0 to be proven. For x ∈ [0, 1], define U(x) = μ0([0, x]). Since μ0 is a probability measure, we know that U has finite variation and is regular in the sense that both the right limit U(x+) and the left limit U(x−) exist, where U(x±) = lim_{y→x±} U(y). At the endpoints 0 and 1, only the one-sided limit obtained by staying in [0, 1] is defined.

Now for any smooth function f on [0, 1], we have from (8) that

∫₀¹ f(x) μt(dx) = Zt ∫₀¹ f(x) gt(x) (μ0∘ϕt⁻¹)(dx) = Zt ∫₀¹ [(f gt)∘ϕt](x) μ0(dx)

where wt(x) has been written as the product of gt(x) = (e^{−t} + x(1 − e^{−t}))^λ and a positive, time-dependent normalizing constant Zt. It is enough to show that, for some positive, time-dependent constant Kt,

Kt ∫₀¹ [(f gt)∘ϕt](x) μ0(dx) → ∫₀¹ f(x) x^{λ−α−1} (1 − x)^{α−1} dx  as t → ∞. (14)

Since x ↦ f(x)gt(x) is continuous on [0, 1], even if U(x) has discontinuities the integration by parts formula for Lebesgue–Stieltjes integrals produces

∫₀¹ [(f gt)∘ϕt](x) μ0(dx) = (f gt U)(1) − (f gt U)(0+) − ∫₀¹ ∂x[(f gt)∘ϕt](x) U(x) dx = [f gt (U − 1)](1) + [f gt (1 − U)](0+) + ∫₀¹ ∂x[(f gt)∘ϕt](x) [1 − U](x) dx.

Here we have used that ϕt is continuous with ϕt(1) = 1 and ϕt(0) = 0.

First observe that 1 − U(1) = 0, since μ0([1 − x, 1]) → 0 as x → 0 by assumption, and that gt(0+) = e^{−λt}. Hence

[f gt (U − 1)](1) + [f gt (1 − U)](0+) = [1 − U(0+)] f(0) e^{−λt}. (15)

Now turning to the integral term, applying the chain rule and changing variables to y = ϕt(x) produces

∫₀¹ ∂x[(f gt)∘ϕt](x)[1 − U](x) dx = ∫₀¹ [∂x(f gt)](ϕt(x)) (∂xϕt)(x) [1 − U](x) dx = ∫₀¹ [∂x(f gt)](y) [(1 − U)∘ϕt⁻¹](y) dy.

For any fixed x ∈ (0, 1), by direct calculation and use of the assumption on μ0, one sees that

∂x(f gt)(x) → ∂x(x^λ f)(x)  and  e^{αt}(1 − U(ϕt⁻¹(x))) = e^{αt} μ0([ϕt⁻¹(x), 1]) → C((1 − x)/x)^α  as t → ∞.

Combining these facts with (15) and the fact that e^{−(λ−α)t} → 0 as t → ∞ since λ > α produces

e^{αt} ∫₀¹ [(f gt)∘ϕt](x) μ0(dx) → C ∫₀¹ ∂x(x^λ f)(x) ((1 − x)/x)^α dx  as t → ∞

for some new positive constant C. Now since integration by parts implies that

(1/α) ∫₀¹ ∂x(x^λ f)(x) ((1 − x)/x)^α dx = ∫₀¹ f(x) x^{λ−α−1} (1 − x)^{α−1} dx

the last part of the proof is complete. □
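The final integration-by-parts identity is easy to confirm numerically. The sketch below (our illustration, not part of the proof) takes λ = 3, α = 1, and the arbitrary test function f(x) = 1 + x, chosen so that both integrands are bounded on [0, 1]; both sides then equal 5/6:

```python
import numpy as np

# Check: (1/alpha) int_0^1 d/dx[x^lam f(x)] ((1-x)/x)^alpha dx
#          = int_0^1 f(x) x^(lam-alpha-1) (1-x)^(alpha-1) dx
# with lam = 3, alpha = 1, f(x) = 1 + x (both sides equal 5/6).
def trapezoid(y, x):
    # simple trapezoid rule, avoiding version-specific numpy helpers
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

lam, alpha = 3.0, 1.0
x = np.linspace(1e-6, 1.0 - 1e-6, 400001)
f = 1.0 + x
d_xlam_f = 3.0 * x**2 * f + x**3          # d/dx [x^3 (1 + x)]
lhs = trapezoid(d_xlam_f * ((1.0 - x) / x) ** alpha, x) / alpha
rhs = trapezoid(f * x ** (lam - alpha - 1.0) * (1.0 - x) ** (alpha - 1.0), x)
assert abs(lhs - rhs) < 1e-4
assert abs(lhs - 5.0 / 6.0) < 1e-4
```

For λ − α < 1 or α < 1 the integrands acquire integrable endpoint singularities and a quadrature routine that handles them would be needed instead.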

4. Proofs of weak convergence

The proofs of Theorems 1 and 4 follow a standard procedure [13, 12, 3]. Both proofs require: (i) tightness of the sequence of stochastic processes, which implies the existence of a subsequential limit, and (ii) uniqueness of this limit. For the tightness of {μt^{m,n}}_{m,n} on D([0, T], P([0, 1])), it is sufficient, by Theorem 14.26 in Kallenberg [14], to show that {〈f, μt^{m,n}〉} is tight on D([0, T], ℝ) for every test function f from a countable dense subset of the continuous, positive functions on [0, 1]. For the uniqueness of solutions to the partial differential equation in Theorem 1, we apply Gronwall's inequality. For uniqueness of solutions to the martingale problem in Theorem 4, we apply a Girsanov-type theorem of Dawson [5].

4.1. Semimartingale property of multilevel selection process

It will be useful in what follows to treat 〈f, μt^{m,n}〉 as a semimartingale. Below, Dx⁺f is the first-order difference quotient of f taken from the right, Dx⁻f is the first-order difference quotient taken from the left, and Dxx f is the second-order difference quotient.

Lemma 9

For f ∈ C²([0, 1]) and μt^{m,n} with generator L^{m,n} defined in (1),

〈f, μt^{m,n}〉 − 〈f, μ0^{m,n}〉 = At^{m,n}(f) + Mt^{m,n}(f) (16)

where At^{m,n}(f) is a process of finite variation, At^{m,n}(f) := ∫₀ᵗ az^{m,n}(f) dz, with

at^{m,n}(f) = Σ_i μt^{m,n}(i/n)(i/n)(1 − i/n)[(1/n)Dxx f(i/n) − s Dx⁻f(i/n)] + wr{Σ_j μt^{m,n}(j/n)(j/n)f(j/n) − Σ_i μt^{m,n}(i/n)f(i/n) Σ_j μt^{m,n}(j/n)(j/n)} (17)

and Mt^{m,n}(f) is a càdlàg martingale with (conditional) quadratic variation

〈M^{m,n}(f)〉t = (1/m) ∫₀ᵗ {(1/n) Σ_i μz^{m,n}(i/n)(i/n)(1 − i/n)[(Dx⁺f(i/n))² + (1 + s)(Dx⁻f(i/n))²] + w Σ_{i,j} μz^{m,n}(i/n)μz^{m,n}(j/n)(1 + r j/n)(f(i/n) − f(j/n))²} dz (18)
Proof

By Dynkin’s formula (see, for example, Lemma 17.21 in [14]),

ψ(μt^{m,n}) − ψ(μ0^{m,n}) − ∫₀ᵗ (L^{m,n}ψ)(μs^{m,n}) ds

where ψdom(Lm,n), is a càdlàg martingale. In particular, this is true for

ψ(μt^{m,n}) = F(〈f, μt^{m,n}〉)

where f ∈ C²([0, 1]) and F: ℝ → ℝ. Setting F(x) = x and plugging this ψ into (1):

(L^{m,n}〈f, ·〉)(ν) = Σ_i ν(i/n)(i/n)(1 − i/n)[(1/n)Dxx f(i/n) − s Dx⁻f(i/n)] + wr{Σ_j ν(j/n)(j/n)f(j/n) − Σ_i ν(i/n)f(i/n) Σ_j ν(j/n)(j/n)}

Thus,

〈f, μt^{m,n}〉 − 〈f, μ0^{m,n}〉 − ∫₀ᵗ az^{m,n}(f) dz = Mt^{m,n}(f) (19)

where Mt^{m,n}(f) is some martingale and at^{m,n}(f) = (L^{m,n}〈f, ·〉)(μt^{m,n}). The process At^{m,n}(f) is of finite variation because, for a given f, at^{m,n}(f) is uniformly bounded in t.

Next, setting F(x) = x² and plugging this ψ into (1):

(L^{m,n}〈f, ·〉²)(ν) = 2〈f, ν〉(L^{m,n}〈f, ·〉)(ν) + (1/(mn)) Σ_i ν(i/n)(i/n)(1 − i/n)[(Dx⁺f(i/n))² + (1 + s)(Dx⁻f(i/n))²] + (w/m) Σ_{i,j} ν(i/n)ν(j/n)(1 + r j/n)(f(i/n) − f(j/n))²

Thus,

〈f, μt^{m,n}〉² − 〈f, μ0^{m,n}〉² − ∫₀ᵗ cz^{m,n}(f) dz = martingale (20)

where ct^{m,n}(f) = (L^{m,n}〈f, ·〉²)(μt^{m,n}).

Alternatively, take Yt = 〈f, μt^{m,n}〉 and apply Itô's formula (for example, p. 78 in [18]) to Yt² to obtain

〈f, μt^{m,n}〉² − 〈f, μ0^{m,n}〉² = 2 ∫₀ᵗ 〈f, μz^{m,n}〉 az^{m,n}(f) dz + [M^{m,n}(f)]t + martingale (21)

where [M^{m,n}(f)]t is the quadratic variation process of Mt^{m,n}(f). Since 〈M^{m,n}(f)〉t is the compensator of [M^{m,n}(f)]t,

[M^{m,n}(f)]t − 〈M^{m,n}(f)〉t

is a martingale. Thus,

〈f, μt^{m,n}〉² − 〈f, μ0^{m,n}〉² − 2 ∫₀ᵗ 〈f, μz^{m,n}〉 az^{m,n}(f) dz − 〈M^{m,n}(f)〉t = martingale (22)

The compensator 〈M^{m,n}(f)〉t is a predictable process of finite variation (see p. 118 in [18]). By the uniqueness of the Doob–Meyer decomposition (p. 103 in [18]), the martingale in (22) is the same as the martingale in (20). Equating the remaining drift parts, we obtain

2 ∫₀ᵗ 〈f, μz^{m,n}〉 az^{m,n}(f) dz + 〈M^{m,n}(f)〉t = ∫₀ᵗ cz^{m,n}(f) dz. (23)

Substituting in the expressions for az^{m,n} and cz^{m,n} then gives the explicit expression (18) for the conditional quadratic variation in the statement of the lemma. □
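The compensator relationship invoked here can be illustrated on the simplest jump martingale. The sketch below (our illustration, not the paper's model) uses a compensated Poisson process M_t = N_t − rate·t, whose quadratic variation is [M]_t = N_t (unit jumps) and whose predictable compensator is 〈M〉_t = rate·t, so that E[M_t²] = E〈M〉_t:

```python
import numpy as np

# Monte Carlo check that E[M_T^2] = <M>_T = rate*T and
# E[M]_T = E[N_T] = rate*T for M_t = N_t - rate*t.
rng = np.random.default_rng(1)
rate, T, samples = 3.0, 2.0, 200000
N_T = rng.poisson(rate * T, size=samples)      # N_T ~ Poisson(rate*T)
M_T = N_T - rate * T
assert abs((M_T**2).mean() - rate * T) < 0.1   # matches the compensator
assert abs(N_T.mean() - rate * T) < 0.1        # matches E[M]_T
```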

4.2. Proof of deterministic limit

To prove Theorem 1, we need the following two lemmas. The first uses criteria in Billingsley [2] to show tightness of the sequence of processes 〈f, μt^{m,n}〉. The second uses Gronwall's inequality to show uniqueness of solutions to the limiting system.

Lemma 10

The processes 〈f, μt^{m,n}〉, as a sequence in {(m, n)}, are tight for every positive test function f ∈ C¹([0, 1]).

Proof

By Theorem 13.2 in [2], a sequence of probability measures {Pn} on D([0, T], ℝ⁺) is tight if and only if (i) for all η > 0, there exists an a such that

Pn(x : sup_{t∈[0,T]} |x(t)| ≥ a) ≤ η  for n ≥ 1

and (ii) for all ε > 0 and η > 0, there exist δ ∈ (0, 1) and n0 such that

Pn(x : w′x(δ) ≥ ε) ≤ η  for all n > n0

where w′ is the modulus of continuity for càdlàg processes, defined by

w′x(δ) := inf_{{ti}} max_{1≤i≤v} sup_{s,t∈[ti−1,ti)} |x(s) − x(t)|

where {ti} is a partition 0 = t0 < t1 < … < tv = T of [0, T] with ti − ti−1 > δ for each i, and x ∈ D([0, T], ℝ⁺) is distributed according to Pn.

First, note that since μtm,n is a probability measure, we have

|〈f, μt^{m,n}〉| ≤ ‖f‖∞

for all t, m, and n. Thus, (i) holds.

For (ii), we have by Markov’s inequality:

P^{m,n}(w′(δ) ≥ ε) ≤ (1/ε) E^{m,n}[w′(δ)] (24)

where w′(δ) := w′_{〈f,μt^{m,n}〉}(δ). We will use the fact that 〈f, μt^{m,n}〉 is a pure jump process to bound the right-hand side. The process 〈f, μt^{m,n}〉 has two types of jumps: nearest-neighbor jumps and occupied-site jumps. Nearest-neighbor jumps occur at rate

Σ_i m μt^{m,n}(i/n) i(1 − i/n)(2 + s) ≤ (mn/4)(2 + s)

and have magnitude

|〈f, μt^{m,n} + (1/m)(δ_{(i±1)/n} − δ_{i/n})〉 − 〈f, μt^{m,n}〉| ≤ (1/(mn)) max_i |Dx f(i/n)|

Occupied-site jumps occur at rate

Σ_{i,j} m μt^{m,n}(i/n) μt^{m,n}(j/n)(1 + r j/n) ≤ m(1 + r)

and have magnitude

|〈f, μt^{m,n} + (1/m)(δ_{j/n} − δ_{i/n})〉 − 〈f, μt^{m,n}〉| ≤ (2/m)‖f‖∞

Putting this together,

E^{m,n}[w′(δ)] ≤ E^{m,n}[number of nearest-neighbor jumps in time δ] · (1/(mn)) max_i |Dx f(i/n)| + E^{m,n}[number of occupied-site jumps in time δ] · (2/m)‖f‖∞
≤ (mn/4)(2 + s) δ · (1/(mn)) max_i |Dx f(i/n)| + m(1 + r) δ · (2/m)‖f‖∞ = {((2 + s)/4) max_i |Dx f(i/n)| + 2(1 + r)‖f‖∞} δ

Because f ∈ C¹([0, 1]), the expression in curly brackets is uniformly bounded by a constant Cf that depends on f but on neither m nor n. Substituting the above into (24), we get that for δ < εη/Cf,

P^{m,n}(w′(δ) ≥ ε) ≤ η

for all m and n. Thus, both conditions for tightness are satisfied and 〈f, μt^{m,n}〉 is tight. □

Lemma 11

The integro-partial differential equation (3) in Theorem 1 has a unique solution.

Proof

Suppose μt satisfies (3). Fix t ≥ 0 and let ψt(x) be a function of time t and space x. By the chain rule and the differential equation (3),

d/dt 〈ψt, μt〉 = d/dz 〈ψz, μt〉|_{z=t} + d/dz 〈ψt, μz〉|_{z=t} = 〈∂tψt, μt〉 − s〈x(1 − x)∂xψt, μt〉 + wr[〈xψt, μt〉 − 〈ψt, μt〉〈x, μt〉]

so that

〈ψt, μt〉 = 〈ψ0, μ0〉 + ∫₀ᵗ 〈∂zψz + Gψz, μz〉 dz + wr ∫₀ᵗ [〈xψz, μz〉 − 〈ψz, μz〉〈x, μz〉] dz (25)

where Gf = −s x(1 − x)∂x f. Let Pt be the semigroup associated with G. In fact, using the method of characteristics (or Lemma 5 with λ = 0),

Pt f(x) = f(x e^{−st} / (1 − x + x e^{−st})). (26)

Now, set ψz(x) = P_{t−z} f(x) for 0 ≤ z ≤ t, where f ∈ C¹([0, 1]) is some test function. Substituting this into (25), we have

〈P0 f, μt〉 = 〈Pt f, μ0〉 + ∫₀ᵗ 〈∂z P_{t−z} f + G P_{t−z} f, μz〉 dz + ∫₀ᵗ wr[〈x P_{t−z} f, μz〉 − 〈P_{t−z} f, μz〉〈x, μz〉] dz

so that

〈f, μt〉 = 〈Pt f, μ0〉 + ∫₀ᵗ wr[〈x P_{t−z} f, μz〉 − 〈P_{t−z} f, μz〉〈x, μz〉] dz (27)

since ∂z P_{t−z} f = −G P_{t−z} f. Thus, any μt that satisfies (3) also satisfies (27). We show that (27) has a unique solution, which in turn implies that (3) has a unique solution.

Suppose μt and νt both satisfy (27), with μ0 = ν0. Let t ≥ 0.

‖μt − νt‖TV = sup_{‖f‖∞≤1} 〈f, μt − νt〉 = sup_{‖f‖∞≤1} {∫₀ᵗ wr〈x P_{t−z} f, μz − νz〉 − wr[〈x, μz〉〈P_{t−z} f, μz〉 − 〈x, νz〉〈P_{t−z} f, νz〉] dz} (28)

We can bound the first term in the integrand by

wr |〈x P_{t−z} f, μz − νz〉| ≤ wr ‖μz − νz‖TV

because ‖x P_{t−z} f‖∞ ≤ ‖P_{t−z} f‖∞ ≤ ‖f‖∞ ≤ 1, where the first inequality follows from x ∈ [0, 1] and the second from (26). For the second term in the integrand of (28), add and subtract 〈x, νz〉〈P_{t−z} f, μz〉:

wr |〈x, μz〉〈P_{t−z} f, μz〉 − 〈x, νz〉〈P_{t−z} f, νz〉| = wr |〈x, μz − νz〉〈P_{t−z} f, μz〉 + 〈x, νz〉〈P_{t−z} f, μz − νz〉| ≤ wr(‖P_{t−z} f‖∞ ‖μz − νz‖TV + ‖μz − νz‖TV) ≤ wr(‖f‖∞ + 1)‖μz − νz‖TV

where again the inequalities follow from x ∈ [0, 1], ‖Pt f‖∞ ≤ ‖f‖∞, and the fact that μz and νz are probability measures. Substituting this back into (28),

‖μt − νt‖TV ≤ ∫₀ᵗ 3wr ‖μz − νz‖TV dz

By Gronwall’s inequality, ║μtνtTV = 0, so we have uniqueness. □
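The semigroup property of Pt, which is what makes ψz = P_{t−z} f well defined in the argument above, can be verified directly on the flow in (26). This sketch (our illustration; s = 1.7 is an arbitrary choice) checks that the map ϕt(x) = x e^{−st}/(1 − x + x e^{−st}) satisfies ϕt∘ϕu = ϕ_{t+u}:

```python
import numpy as np

# The flow underlying P_t f = f(phi_t(.)) composes as a semigroup and
# fixes the endpoints 0 and 1.
def phi(t, x, s=1.7):
    e = np.exp(-s * t)
    return x * e / (1.0 - x + x * e)

x = np.linspace(0.0, 1.0, 201)
t, u = 0.6, 1.1
assert np.allclose(phi(t, phi(u, x)), phi(t + u, x))   # semigroup property
assert phi(5.0, 0.0) == 0.0 and phi(5.0, 1.0) == 1.0   # endpoints are fixed
```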

Proof of Theorem 1

The uniqueness of the limit is given by Lemma 11 and the tightness of the process by Lemma 10. It remains to show that {〈f, μt^{m,n}〉}_{m,n} converges to the solution of (3). Recall from Lemma 9 that

〈f, μt^{m,n}〉 − 〈f, μ0^{m,n}〉 = At^{m,n}(f) + Mt^{m,n}(f)

Since tightness implies relative compactness (Prohorov's theorem), there exists a subsequence of μt^{m,n} that converges to a limit, call it μt. Thus, 〈f, μt^{m,n}〉 → 〈f, μt〉. We also have 〈f, μ0^{m,n}〉 → 〈f, μ0〉 by assumption. In addition,

At^{m,n}(f) = ∫₀ᵗ {Σ_i μz^{m,n}(i/n)(i/n)(1 − i/n)[(1/n)Dxx f(i/n) − s Dx⁻f(i/n)] + wr[Σ_j μz^{m,n}(j/n)(j/n)f(j/n) − Σ_i μz^{m,n}(i/n)f(i/n) Σ_j μz^{m,n}(j/n)(j/n)]} dz → ∫₀ᵗ {−〈s x(1 − x) df/dx, μz〉 + wr[〈x f(x), μz〉 − 〈f(x), μz〉〈x, μz〉]} dz =: At(f)

The factor of 1/m in the quadratic variation (18) implies that Mt^{m,n}(f) → 0 as m, n → ∞. Therefore,

〈f, μt〉 − 〈f, μ0〉 = At(f)

or,

d/dt 〈f, μt〉 = −〈s x(1 − x) df/dx, μt〉 + wr[〈x f(x), μt〉 − 〈f(x), μt〉〈x, μt〉]. □
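The limiting dynamics admit a simple particle discretization (our illustration, not a scheme from the paper): carry μt on particles x_i with weights p_i, with dx_i/dt = −s x_i(1 − x_i) for the individual-level selection and dp_i/dt = wr(x_i − h)p_i, h = 〈x, μt〉, for the group-level replicator term; a direct computation shows 〈f, μt〉 then satisfies the equation above for smooth f. The initial measure below is uniform on [0, 0.95], so no mass sits near 1 and the mean is driven toward 0 (cf. the condition in Lemma 7):

```python
import numpy as np

# Euler integration of the particle system; the weight update conserves
# total mass exactly because sum_i (x_i - h) p_i = 0 when h = p @ x.
s, w, r, dt = 1.0, 1.0, 0.5, 1e-3
x = np.linspace(0.0, 0.95, 50)          # particle positions in [0, 0.95]
p = np.full_like(x, 1.0 / x.size)       # uniform initial weights
for _ in range(5000):                   # Euler steps up to t = 5
    h = p @ x
    p = p + dt * w * r * (x - h) * p    # group-level reweighting
    x = x + dt * (-s) * x * (1.0 - x)   # individual-level transport
assert abs(p.sum() - 1.0) < 1e-9        # total mass is conserved
assert p @ x < 0.15                     # the mean is driven toward 0
```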

4.3. Proof of Fleming-Viot limit

The elementary proof of tightness in Theorem 1 does not easily carry over to the case of Theorem 4. We thus use a criterion by Aldous [1] to prove tightness for the martingale part of the stochastic process.

First, consider the semimartingale formulation (16) of 〈f, μt^{m,n}〉 with the rescaled parameters s = σ/n and r = ρ/m. Let Et^{m,n}(f) := ∫₀ᵗ ez^{m,n}(f) dz and Nt^{m,n}(f) denote the drift and martingale parts of 〈f, νt^{m,n}〉, the rescaled process. Then

Et^{m,n}(f) = ∫₀ᵗ Σ_i νz^{m,n}(i/n)(i/n)(1 − i/n)[Dxx f(i/n) − σ Dx⁻f(i/n)] + wρ(n/m){Σ_j νz^{m,n}(j/n)(j/n)f(j/n) − Σ_i νz^{m,n}(i/n)f(i/n) Σ_j νz^{m,n}(j/n)(j/n)} dz (29)

and

〈N^{m,n}(f)〉t = ∫₀ᵗ {(1/m) Σ_i νz^{m,n}(i/n)(i/n)(1 − i/n)[(Dx⁺f(i/n))² + (1 + σ/n)(Dx⁻f(i/n))²] + w(n/m) Σ_{i,j} νz^{m,n}(i/n)νz^{m,n}(j/n)(1 + (ρ/m)(j/n))(f(i/n) − f(j/n))²} dz (30)

Lemma 12

The processes 〈f, νt^{m,n}〉, as a sequence in {(m, n)}, are tight for all f ∈ C²([0, 1]).

Proof

Since 〈f, νt^{m,n}〉 = Et^{m,n}(f) + Nt^{m,n}(f), it suffices, by the triangle inequality applied to Billingsley's tightness criterion (Theorem 13.2 in [2]), to show tightness of Et^{m,n}(f) and Nt^{m,n}(f) separately.

For the tightness of the finite variation term Etm,n(f):

|et^{m,n}(f)| ≤ (1/4) Σ_i νt^{m,n}(i/n)[|Dxx f(i/n)| + σ|Dx⁻f(i/n)|] + wρ(n/m){Σ_j νt^{m,n}(j/n)(j/n)|f(j/n)| + Σ_i νt^{m,n}(i/n)|f(i/n)| Σ_j νt^{m,n}(j/n)(j/n)}

For a given γ > 0, we can choose n and m sufficiently large such that n/m ∈ (θ − γ, θ + γ), |Dxx f(i/n)| ≤ ‖f″‖∞ + γ, and |Dx⁻f(i/n)| ≤ ‖f′‖∞ + γ. We thus obtain

|et^{m,n}(f)| ≤ (1/4)[‖f″‖∞ + γ + σ(‖f′‖∞ + γ)] + 2wρ(θ + γ)‖f‖∞

There are only finitely many pairs (m, n) for which these conditions fail. Taking the maximum of the right-hand side above with the values of |et^{m,n}(f)| for those finitely many pairs, we obtain that for all m and n,

|et^{m,n}(f)| ≤ Gf

and therefore

sup_{t∈[0,T]} |Et^{m,n}(f)| ≤ Gf T

where Gf is a constant that depends on f. Using the same tightness conditions as in the proof of Lemma 10, condition (i) is satisfied because Et^{m,n}(f) is bounded uniformly in t, m, and n. Condition (ii) is satisfied because |E_{t+δ}^{m,n}(f) − Et^{m,n}(f)| ≤ δGf for all t, m, and n, and therefore we can always choose δ sufficiently small so that |E_{t+δ}^{m,n}(f) − Et^{m,n}(f)| ≤ ε for any prescribed ε.

We now show tightness for the martingale part Nt^{m,n}(f) using Aldous' tightness condition (we use the result as stated in [9]). First, note that by equation (30),

〈N^{m,n}(f)〉t ≤ Jf t

for fC2([0, 1]), where Jf is a constant that depends on f. Thus for fixed t,

P^{m,n}(|Nt^{m,n}(f)| > a) ≤ (1/a) E^{m,n}|Nt^{m,n}(f)| ≤ (1/a)(E^{m,n}(Nt^{m,n}(f))²)^{1/2} = (1/a)(E^{m,n}〈N^{m,n}(f)〉t)^{1/2} ≤ √(Jf t)/a

Given ε > 0, choose a > √(Jf t)/ε and we have that Nt^{m,n}(f) is tight for each fixed t. Next, let τ be a stopping time bounded by T, and let ε > 0. For κ > 0,

P^{m,n}(|N_{τ+κ}^{m,n}(f) − N_τ^{m,n}(f)| ≥ ε) ≤ (1/ε) E^{m,n}|N_{τ+κ}^{m,n}(f) − N_τ^{m,n}(f)|

Now (suppressing subscripts on expected value for clarity),

E|N_{τ+κ}^{m,n}(f) − N_τ^{m,n}(f)| ≤ [E(N_{τ+κ}^{m,n}(f) − N_τ^{m,n}(f))²]^{1/2} = [E(N_{τ+κ}^{m,n}(f)² − N_τ^{m,n}(f)² + 2N_τ^{m,n}(f)(N_τ^{m,n}(f) − N_{τ+κ}^{m,n}(f)))]^{1/2} = [E(〈N^{m,n}(f)〉_{τ+κ} − 〈N^{m,n}(f)〉_τ)]^{1/2} ≤ √(Jf κ)

Hence,

P^{m,n}(|N_{τ+κ}^{m,n}(f) − N_τ^{m,n}(f)| ≥ ε) ≤ (1/ε)√(Jf κ)

By taking κ < ε⁴/Jf, we satisfy the conditions of Aldous' stopping-time criterion. □

Lemma 13

The martingale problem (6) and (7) has a unique solution.

Proof

The martingale problem with V(t, ν, x) = 0 corresponds to a neutral Fleming–Viot process with linear mutation operator. Its uniqueness has previously been established (see for example [5]). To show uniqueness for nontrivial V, we use a Girsanov-type transform due to Dawson [5]. It suffices to check that

sup_{t,μ,x} |V(t, μ, x)| ≤ V0  (a constant) (31)

In our case V(t, μ, x) = x, and since x ∈ [0, 1] the condition is satisfied, so the martingale problem has a unique solution. □

Proof of Theorem 4

The uniqueness of the limit is given by Lemma 13 and the tightness of the process by Lemma 12. To see that the limit solves the martingale problem stated in Theorem 4, note that for fixed t,

Et^{m,n}(f) → ∫₀ᵗ {∫₀¹ x(1 − x)[∂²f/∂x²(x) − σ ∂f/∂x(x)] νz(dx) + wρθ[∫₀¹ x f(x) νz(dx) − ∫₀¹ f(x) νz(dx) ∫₀¹ x νz(dx)]} dz

as n, m → ∞, and

〈N^{m,n}(f)〉t → ∫₀ᵗ wθ ∫₀¹ ∫₀¹ (f(x) − f(y))² νz(dx) νz(dy) dz

Finally, notice that

∫₀¹∫₀¹ (f(x) − f(y))² νz(dx) νz(dy) = 2∫₀¹∫₀¹ f(x)² νz(dx) νz(dy) − 2∫₀¹∫₀¹ f(x) f(y) νz(dx) νz(dy) = 2∫₀¹∫₀¹ f(x) f(y) νz(dx)[δx(dy) − νz(dy)]

and

∫₀¹ x f(x) νz(dx) − ∫₀¹ f(x) νz(dx) ∫₀¹ x νz(dx) = ∫₀¹∫₀¹ f(x) y νz(dx)[δx(dy) − νz(dy)]

and these identities put the limit in the form of the martingale problem stated in the theorem. □
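The two algebraic identities in this final step can be verified directly on a discrete measure. The sketch below (our illustration) checks both on a random probability measure ν supported on 20 points of [0, 1] with a random test vector f:

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random(20)                      # support points x in [0, 1]
nu = rng.random(20); nu /= nu.sum()       # weights of nu
f = rng.standard_normal(20)               # a test function on the support

# (f(x) - f(y))^2 against nu(dx)nu(dy) equals
# 2 f(x)f(y) against nu(dx)[delta_x(dy) - nu(dy)].
lhs1 = sum(nu[i] * nu[j] * (f[i] - f[j]) ** 2
           for i in range(20) for j in range(20))
rhs1 = 2.0 * (nu @ f**2 - (nu @ f) ** 2)  # delta_x(dy) sets y = x
assert abs(lhs1 - rhs1) < 1e-12

# <x f, nu> - <f, nu><x, nu> equals
# f(x) y against nu(dx)[delta_x(dy) - nu(dy)].
lhs2 = nu @ (pts * f) - (nu @ f) * (nu @ pts)
rhs2 = sum(nu[i] * f[i] * (pts[i] - nu @ pts) for i in range(20))
assert abs(lhs2 - rhs2) < 1e-12
```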

Acknowledgments

The authors would like to thank Mike Reed and Katia Koelle for their roles in the collaboration out of which this paper’s central model grew. We would also like to thank Rick Durrett for a number of useful discussions. SL would further like to thank Don Dawson, who sent her an early version of his notes on this topic [5], and Sylvie Méléard, who took time out of a conference to explain to her the elements of the proof for weak convergence. JCM would like to thank the NSF for its support through grant DMS-08-54879. SL gratefully acknowledges support from the NSF (grants NSF-EF-08-27416 and DMS-0942760), the NIH (grant R01-GM094402), and the Simons Institute for the Theory of Computing.

Footnotes

Recommended by Professor Leonid Bunimovich

Mathematics Subject Classification numbers: 35F55, 35Q92, 37L40, 60J28, 60J68, 60J70

References

  • 1.Aldous David. Stopping times and tightness. The Annals of Probability. 1978;6(2):335–340. [Google Scholar]
  • 2.Billingsley Patrick. Convergence of Probability Measures. 2nd. Wiley-Interscience; 1999. [Google Scholar]
  • 3.Champagnat Nicolas, Ferrière Régis, Méléard Sylvie. Unifying evolutionary dynamics: from individual stochastic processes to macroscopic models. Theoretical Population Biology. 2006;69(3):297–321. doi: 10.1016/j.tpb.2005.10.004. [DOI] [PubMed] [Google Scholar]
  • 4.Dawson DA, Greven A. Hierarchically interacting Fleming-Viot processes with selection and mutation: Multiple space time scale analysis and quasi-equilibria. Electronic Journal of Probability. 1999;4(4):1–81. [Google Scholar]
  • 5.Dawson Donald A. Introductory Lecture on Stochastic Population Systems, Technical Report Series of the Laboratory for Research Statistics and Probability. Carleton University -University of Ottawa; 2010. (Technical Report 451). [Google Scholar]
  • 6.Dawson Donald A, Hochberg KJ. A multilevel branching model. Advances in Applied Probability. 1991;23:701–715. [Google Scholar]
  • 7.Donnelly Peter, Kurtz Thomas G. Genealogical processes for Fleming-Viot models with selection and recombination. Annals of Applied Probability. 1999;9(4):1091–1148. [Google Scholar]
  • 8.Durrett Richard. Probability Models for DNA Sequence Evolution. 2nd. Springer; 2008. [Google Scholar]
  • 9.Etheridge Alison. An Introduction to Superprocesses. American Mathematical Society; 2000. [Google Scholar]
  • 10.Ethier Stewart N, Kurtz Thomas G. Fleming-Viot processes in population genetics. SIAM Journal on Control and Optimization. 1993;31(2):345–386. [Google Scholar]
  • 11.Fleming Wendell H, Viot Michel. Some measure-valued Markov processes in population genetics theory. Indiana University Mathematics Journal. 1979;28(5):817–843. [Google Scholar]
  • 12.Fournier Nicolas, Méléard Sylvie. A microscopic probabilistic description of a locally regulated population and macroscopic approximations. Annals of Applied Probabability. 2004;14(4):1880–1919. [Google Scholar]
  • 13.Joffe A, Metivier M. Weak convergence of sequences of semimartingales with applications to multi-type branching processes. Advances in Applied Probability. 1986;18:20–65. [Google Scholar]
  • 14.Kallenberg Olav. Foundations of Modern Probability. 1st. Springer; 1997. [Google Scholar]
  • 15.Luo Shishi. A unifying framework reveals key properties of multilevel selection. Journal of Theoretical Biology. 2014;341:41–52. doi: 10.1016/j.jtbi.2013.09.024. [DOI] [PubMed] [Google Scholar]
  • 16.Méléard Sylvie, Roelly Sylvie. A host-parasite multilevel interacting process and continuous approximations. arXiv preprint arXiv:1101.4015. 2011 [Google Scholar]
  • 17.Pinchover Yehuda, Rubinstein Jacob. An Introduction to Partial Differential Equations. Cambridge University Press; 2005. [Google Scholar]
  • 18.Protter Philip E. Stochastic Integration and Differential Equations. 2nd. Springer; 2004. [Google Scholar]
