Author manuscript; available in PMC: 2014 Mar 14.
Published in final edited form as: Adv Appl Probab. 2012 Jun;44(2):408–428. doi: 10.1239/aap/1339878718

APPROXIMATE SAMPLING FORMULAS FOR GENERAL FINITE-ALLELES MODELS OF MUTATION

Anand Bhaskar 1,*, John A Kamm 1,**, Yun S Song 1,*,**
PMCID: PMC3953561  NIHMSID: NIHMS386649  PMID: 24634516

Abstract

Many applications in genetic analyses utilize sampling distributions, which describe the probability of observing a sample of DNA sequences randomly drawn from a population. In the one-locus case with special models of mutation such as the infinite-alleles model or the finite-alleles parent-independent mutation model, closed-form sampling distributions under the coalescent have been known for many decades. However, no exact formula is currently known for more general models of mutation that are of biological interest. In this paper, models with finitely-many alleles are considered, and an urn construction related to the coalescent is used to derive approximate closed-form sampling formulas for an arbitrary irreducible recurrent mutation model or for a reversible recurrent mutation model, depending on whether the number of distinct observed allele types is at most three or four, respectively. It is demonstrated empirically that the formulas derived here are highly accurate when the per-base mutation rate is low, which holds for many biological organisms.

Keywords: Sampling probability, coalescent theory, urn models, martingale

1. Introduction

An important problem in genetic analyses concerns computing the probability of observing a randomly drawn sample of chromosomes under a given model of evolution. Popular applications of this probability computation include maximum likelihood estimation of model parameters and ancestral inference (see [19] for a nice introduction). The coalescent [14, 15] is a useful mathematical framework for performing model-based full-likelihood analyses, but in most cases it is intractable to obtain a closed-form formula for the probability of a given dataset. A well-known exception to this complication is the celebrated Ewens sampling formula (ESF) [3], which describes the stationary probability distribution of a sample configuration under the one-locus infinite-alleles model in the coalescent or the diffusion limit. A Pólya-like urn model interpretation [9] of the formula has been known for some time, and recently a new combinatorial proof of the ESF has been provided [6]. Furthermore, the ESF also arises in several interesting contexts outside biology, including random partition structures; the ESF is a special case of the two-parameter sampling formula [17, 18] for exchangeable random partitions. See [1] for examples of other interesting combinatorial connections.

In the case of finitely-many alleles, a closed-form sampling formula is known [20] only for the parent-independent mutation (PIM) model, in which the probability of mutating from allele j to allele i depends only on the child allele i. For a general non-PIM mutation model, finding an exact, closed-form sampling formula has remained a challenging open problem.

In this paper, we make progress on this problem by deriving approximate, closed-form sampling formulas that are highly accurate when the mutation rate is low. More precisely, given a sample configuration n and the model parameters (mutation rate θ and transition matrix P), we consider the Taylor expansion of the sampling probability q(n | θ, P) about θ = 0. As discussed later, if P is irreducible when restricted to the observed alleles in the sample, then the leading order term in the expansion is proportional to θ^{|𝒪n|−1}, where |𝒪n| is the number of distinct observed alleles in the sample configuration n. Hence,

q(𝐧 | θ, P) = θ^{|𝒪_𝐧|−1} Q(𝐧 | P) + O(θ^{|𝒪_𝐧|}),  (1)

where Q(n | P) is the leading order coefficient that depends on the mutation transition matrix P but not on the mutation rate θ. In this paper, we consider the problem of obtaining exact closed-form formulas for Q(n | P). As many organisms typically have small per-base mutation rates, our results are of biological interest.

By restricting the set of events in the coalescent genealogy for a given sample, Jenkins and Song [12] provided closed-form formulas for Q(n | P) for an arbitrary transition matrix P when |𝒪n| ≤ 3. In this paper, we provide new proofs of those results, and extend them by supplying a closed-form formula for Q(n | P) when |𝒪n| = 4 and the transition matrix P is reversible restricted to the observed alleles. We prove our results using martingale arguments and use an urn construction related to the coalescent to develop a recursion for the approximate sampling probability, which can then be solved in closed-form using combinatorial techniques. As a corollary of our results, it can be seen that the simple general formula in [12, Theorem 6.3] for Q(n | P) when P is parent-independent restricted to the observed alleles also holds when P is reversible restricted to the observed alleles, provided that |𝒪n| ≤ 3. That formula fails to hold when |𝒪n| = 4 and P is not parent-independent restricted to the observed alleles.

As there are four distinct DNA bases, our extension to the |𝒪n| = 4 case seems natural. A more interesting reason is as follows: In multi-locus models with finite recombination rates, no closed-form sampling formula is known, even for the simplest case of two loci with either infinite-alleles or finite-alleles PIM models. However, recently a new framework based on asymptotic series has been developed [2, 10, 11, 13] to derive useful closed-form results when the recombination rate is moderate to large. The main idea behind that research is to perform an asymptotic expansion of the sampling probability in inverse powers of the recombination rate. We note that our one-locus sampling formula for the |𝒪n| = 4 case provides an accurate approximation of the sampling probability for a completely linked (i.e., with zero recombination rate) pair of loci with two observed alleles at each locus (as is typical in single-nucleotide polymorphism data). Hence, our work serves as a starting point for finding approximate two-locus sampling formulas when the recombination rate is small, complementary to the earlier work [2, 10, 11, 13] for large recombination rates. We leave this problem for future research.

We remark that, for a given sample configuration n and fixed parameters θ and P, the exact sampling probability q(n | θ, P) can be found numerically by solving a system of coupled linear equations in O(|n|^K) variables, where |n| denotes the total sample size and K denotes the number of allele types in the assumed model. One of the main motivations of our work is to remedy this high computational complexity. Evaluating our closed-form approximations is much more efficient, in both time and space complexity.

The rest of this paper is structured as follows. In Section 2, we lay out the model and notation used throughout the paper. In Section 3, we summarize our main closed-form sampling formulas, which we prove in Section 4 using martingale arguments and an urn construction. Numerical experiments demonstrating the usefulness of our approximate sampling formulas are provided in Section 5.

2. Model and notation

We consider Kingman’s coalescent with a K-allelic recurrent mutation model specified by the population-scaled mutation rate θ/2 and ergodic transition matrix P, where Pji denotes the probability of allele j mutating to allele i forward in time given that a mutation occurs. The stationary distribution of P is denoted by π = (π1, …, πK).

The following definitions will be used throughout:

Definition 1. (𝐧, sample configuration.) A sample of individuals is denoted by 𝐧 = (n_i)_{i∈[K]}, where n_i ∈ ℤ_{≥0} denotes the number of individuals in the sample with allele i. The size |𝐧| of the sample 𝐧 is denoted by the same letter in non-bold-face, n. For notational convenience, we use e_i to denote the sample configuration with a single individual of type i and write 𝐧 = n_1e_1 + ⋯ + n_Ke_K. For a subset S ⊆ [K], we define 𝐧_S = ∑_{i∈S} n_ie_i and n_S = |𝐧_S|.

Definition 2. (𝒪n, observed allele types.) Given a sample n, let 𝒪n ⊆ [K] denote the set of observed allele types; i.e., 𝒪n = {i ∈ [K] | ni > 0}. The number of observed allele types is denoted by |𝒪n|.

When the indices h, i, j, k and l are used in indefinite summations or products, they are assumed to range over 𝒪n, unless stated otherwise.

By exchangeability, the probability of any ordered sample with configuration n is invariant under all permutations of the sampling order. We use q(n | θ, P) to denote the stationary sampling probability of any particular ordered sample with configuration n. From the standard coalescent arguments [7, 8], it can be deduced that q(n | θ, P) is the unique solution to the recursion

n(n − 1 + θ) q(𝐧 | θ, P) = \sum_i n_i(n_i − 1) q(𝐧 − e_i | θ, P) + θ \sum_{i,j} P_{ji} n_i q(𝐧 − e_i + e_j | θ, P),  (2)

with boundary conditions

q(e_i | θ, P) = π_i,  for all i ∈ [K].  (3)

If P is irreducible when restricted to the observed alleles 𝒪n, then by unwinding recursion (2), it can be seen that |𝒪n| − 1 is the smallest power of θ with a non-vanishing coefficient in the Taylor series expansion of q(n | θ, P) about θ = 0. Intuitively, for a sample with m distinct observed alleles, the coefficient of θ^{m−1} in the Taylor expansion corresponds to the total probability of coalescent genealogies with the most parsimonious number (i.e., m − 1) of mutations. That P is irreducible when restricted to 𝒪n is a sufficient (but not necessary) condition for the existence of such a parsimonious genealogy for sample n.
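Concretely, the recursion (2) with boundary conditions (3) can be solved numerically level by level in the sample size: the coalescence terms reference smaller samples, while the mutation terms stay within a size level and carry a contraction factor θ/(m − 1 + θ) < 1, so fixed-point iteration converges within each level. The sketch below is our own illustration (the function name and iteration scheme are ours, not from the paper):

```python
import itertools

def solve_q(n_target, theta, P, pi, tol=1e-14, max_iter=10000):
    """Numerically solve recursion (2) with boundary conditions (3).

    Configurations are K-tuples of allele counts.  Each sample-size
    level depends on the previous level through the coalescence terms
    and on itself through the mutation terms, so every level is solved
    by fixed-point iteration; the mutation terms carry a factor
    theta/(m - 1 + theta) < 1, making the update a contraction.
    """
    K = len(pi)
    q = {}
    for i in range(K):  # boundary conditions (3): q(e_i) = pi_i
        q[tuple(1 if t == i else 0 for t in range(K))] = pi[i]
    for m in range(2, sum(n_target) + 1):
        level = [c for c in itertools.product(range(m + 1), repeat=K)
                 if sum(c) == m]
        cur = dict.fromkeys(level, 0.0)
        for _ in range(max_iter):
            delta = 0.0
            for c in level:
                total = 0.0
                for i in range(K):
                    if c[i] == 0:
                        continue
                    if c[i] > 1:  # coalescence of two type-i lineages
                        down = list(c); down[i] -= 1
                        total += c[i] * (c[i] - 1) * q[tuple(down)]
                    for j in range(K):  # lineage of type i was type j
                        swap = list(c); swap[i] -= 1; swap[j] += 1
                        total += theta * P[j][i] * c[i] * cur[tuple(swap)]
                new = total / (m * (m - 1 + theta))
                delta = max(delta, abs(new - cur[c]))
                cur[c] = new
            if delta < tol:
                break
        q.update(cur)
    return q
```

For example, with K = 2, P_{12} = P_{21} = 1 and π = (1/2, 1/2), the system for the sample (1, 1) can be solved by hand, giving q((1,1)) = θ/(2 + 4θ); the iteration reproduces this value.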

Letting Q(n | P) denote the coefficient of θ^{|𝒪n|−1} in the Taylor expansion, q(n | θ, P) can be written as in (1). For simplicity, in what follows we simply write q(n) and Q(n) instead of q(n | θ, P) and Q(n | P), respectively.

We now introduce some notation used throughout the paper. For a sample configuration n, we define the combinatorial quantity Λ(n) as

Λ(𝐧) = \frac{\prod_{i∈𝒪_𝐧} (n_i − 1)!}{(n − 1)!}.  (4)

For k ∈ ℤ_{≥0}, the kth falling factorial of x, denoted (x)_{k↓}, and the kth rising factorial of x, denoted (x)_{k↑}, are defined as

(x)_{k↓} = x(x − 1)⋯(x − k + 1),
(x)_{k↑} = x(x + 1)⋯(x + k − 1),

with (x)_{0↓} = (x)_{0↑} = 1. The kth harmonic number H_k is defined as

H_k = 1 + \frac{1}{2} + ⋯ + \frac{1}{k},

with H_0 = 0. Given a sample configuration 𝐧 = (n_1, …, n_K), a K-tuple 𝐦 = (m_1, …, m_K) satisfying 0 ⪯ 𝐦 ≺ 𝐧 means 0 ≤ m_i < n_i for all i ∈ 𝒪_𝐧 and m_i = 0 for all i ∉ 𝒪_𝐧, while 0 ≺ 𝐦 ⪯ 𝐧 means 0 < m_i ≤ n_i for all i ∈ 𝒪_𝐧 and m_i = 0 for all i ∉ 𝒪_𝐧. Also, 0 ⪯ 𝐦 ⪯ 𝐧 denotes 0 ≤ m_i ≤ n_i for all i ∈ [K].

3. A summary of closed-form results for Q(n)

In the case of |𝒪n| = 1, it is easy to see that Q(𝐧) = π_i for 𝐧 = ne_i. In this paper, we derive closed-form expressions for the leading order coefficient Q(𝐧) when |𝒪n| ≤ 3 and P is an arbitrary mutation transition matrix that is irreducible when restricted to the observed alleles 𝒪n; and also when |𝒪n| = 4, and P is irreducible and reversible when restricted to 𝒪n (i.e., π_iP_{ij} = π_jP_{ji} for all i, j ∈ 𝒪n). These closed-form results are summarized below.

Theorem 1. For |𝒪n| = 2 and P an arbitrary mutation transition matrix that is irreducible when restricted to 𝒪n, Q(n) is given by

Q(𝐧) = Λ(𝐧) \sum_{i,j∈𝒪_𝐧 : i≠j} \frac{n_j}{n} π_j P_{ji}.
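As a sanity check, Theorem 1 is simple to evaluate numerically. The following sketch is our own illustration (function names are ours); for the symmetric two-allele model with P_{12} = P_{21} = 1 and uniform π it returns Q((1,1)) = 1/2, matching the leading coefficient of the exact value q((1,1)) = θ/(2 + 4θ):

```python
from math import factorial

def Lambda(n):
    """Combinatorial coefficient (4): product over observed i of
    (n_i - 1)!, divided by (n - 1)!."""
    num = 1
    for ni in n:
        if ni > 0:
            num *= factorial(ni - 1)
    return num / factorial(sum(n) - 1)

def Q_theorem1(n, P, pi):
    """Leading-order coefficient Q(n) of Theorem 1 (two observed alleles)."""
    obs = [i for i, ni in enumerate(n) if ni > 0]
    assert len(obs) == 2, "Theorem 1 requires |O_n| = 2"
    ntot = sum(n)
    s = sum(n[j] / ntot * pi[j] * P[j][i]
            for i in obs for j in obs if j != i)
    return Lambda(n) * s
```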

Theorem 2. For |𝒪n| = 3 and P an arbitrary mutation transition matrix that is irreducible when restricted to 𝒪n, Q(n) is given by

Q(𝐧) = Λ(𝐧) \sum_{distinct i,j,k∈𝒪_𝐧} { π_j P_{ji} P_{jk} [ \frac{(n_j)_{2↓}}{n(n_j+n_k−1)} − \frac{n_i n_j}{n(n_i+n_k)} − \frac{2 n_i n_j n_k}{n (n_j+n_k)_{2↓}} + \frac{2 n_i n_j n_k}{(n_j+n_k+1)_{3↓}} (H_n − H_{n_i−1}) ] + π_k P_{kj} P_{ji} [ \frac{n_j n_k}{n(n_j+n_k−1)} + \frac{2 n_i n_j n_k}{n (n_j+n_k)_{2↓}} − \frac{2 n_i n_j n_k}{(n_j+n_k+1)_{3↓}} (H_n − H_{n_i−1}) ] }.

Corollary 3. Suppose |𝒪n| = 3 with sample configuration 𝐧 = n_ae_a + n_be_b + n_ce_c, where a, b, c are distinct alleles in [K]. If the mutation transition matrix P is reversible and irreducible when restricted to the observed alleles 𝒪n, Q(𝐧) is given by

Q(𝐧) = Λ(𝐧) ( \frac{n_a}{n} π_a P_{ab} P_{ac} + \frac{n_b}{n} π_b P_{ba} P_{bc} + \frac{n_c}{n} π_c P_{ca} P_{cb} ).

Theorem 4. For |𝒪n| = 4, if the mutation transition matrix P is reversible and irreducible when restricted to the observed alleles 𝒪n, then Q(n) is given by

Q(𝐧) = Λ(𝐧) \sum_{distinct i,j,k,l∈𝒪_𝐧} [ π_i P_{ij} P_{ik} P_{il} γ(𝐧,i,j,k,l) + π_i P_{ij} P_{ik} P_{jl} δ(𝐧,i,j,k,l) ],

where

γ(𝐧,i,j,k,l) = \frac{n_i}{n} { [ \frac{n_i−1}{2(n_i+n_j+n_k−1)} − \frac{2 n_j n_l}{(n_i+n_j+n_k)_{2↓}} ] + \frac{n_l}{2(n_j+n_k+n_l)} − [ \frac{n_l(n_i−1)}{(n_k+n_l)(n_i+n_j−1)} − \frac{2 n_j n_l}{(n_i+n_j)_{2↓}} ] } + \frac{2 n_i n_j n_l}{(n_i+n_j+n_k+1)_{3↓}} (H_n − H_{n_l−1}) − \frac{2 n_i n_j n_l}{(n_i+n_j+1)_{3↓}} (H_n − H_{n_k+n_l−1}),

and

δ(𝐧,i,j,k,l) = \frac{n_i}{n} { [ \frac{n_j}{n_i+n_j+n_k−1} + \frac{2 n_j n_l}{(n_i+n_j+n_k)_{2↓}} ] − [ \frac{n_j n_l}{(n_k+n_l)(n_i+n_j−1)} + \frac{2 n_j n_l}{(n_i+n_j)_{2↓}} ] } − \frac{2 n_i n_j n_l}{(n_i+n_j+n_k+1)_{3↓}} (H_n − H_{n_l−1}) + \frac{2 n_i n_j n_l}{(n_i+n_j+1)_{3↓}} (H_n − H_{n_k+n_l−1}).

4. Proofs of the main results

In this section, we construct an urn process to derive the closed-form formulas for Q(n) mentioned in the previous section. We use the urn process to decompose Q(n) into a sum-product of two vectors, one which depends only on the sample configuration n and the other which depends only on the mutation transition matrix P. Using this decomposition, we show that Q(n) corresponds to the probability of a certain event in the urn process.

Throughout, we use R(n) to denote the following rescaled version of Q(n):

R(𝐧) = \frac{Q(𝐧)}{Λ(𝐧)},  (5)

where Λ (n) is the combinatorial coefficient defined in (4).

4.1. Description of the urn process

Let n be the sample configuration of interest. We have an urn with n balls, ni of which have color i. We remove balls one at a time uniformly at random until there are no more balls in the urn. However, whenever we “kill” a color (i.e., remove the last ball of that color), we add back a ball of a different color. We do this by picking another ball from the urn, copying it, and returning both copies to the urn. Note that when we kill the last color, we do not add any balls back, since there are no more colors to choose from.
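This process is straightforward to simulate. The sketch below is our own illustration (function and variable names are ours); it runs the urn once and records, for each killed color, the color of the ball added back in its place:

```python
import random

def urn_tree(config, rng=random):
    """Run the urn process once on a {color: count} configuration.

    Returns (parent, root): parent maps each killed color to the color
    of the ball that was added back when it was killed; root is the
    last surviving color."""
    counts = dict(config)
    parent = {}
    while True:
        balls = [c for c, k in counts.items() for _ in range(k)]
        c = rng.choice(balls)              # remove a ball uniformly
        counts[c] -= 1
        if counts[c] > 0:
            continue
        del counts[c]                      # color c has been killed
        if not counts:
            return parent, c               # last color killed: the root
        rest = [d for d, k in counts.items() for _ in range(k)]
        p = rng.choice(rest)               # pick another ball, copy it,
        counts[p] += 1                     # and return both copies
        parent[c] = p
```

Over many runs, the empirical distribution of the generated trees approximates ℙn(T); for instance, with two colors the last surviving color is a with frequency close to n_a/n, in line with the martingale argument of Section 4.4.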

Suppose that when we kill color i, we add back a ball of color j. We then call j the parent of i, and call the last surviving color the root. This generates a rooted tree whose vertices consist of the |𝒪n| observed colors (alleles).

Let T be any rooted tree on 𝒪n. We denote the probability of generating T under the above process as ℙn(T). Let E(T) be the edge set of T, and let ρ(T) denote the root vertex of T. By convention, we draw edges as pointing away from the root, so the edge (j → i) indicates that j is the parent of i.

The main idea of this section is that to compute Q(n), it is enough to compute ℙn(T) for each T. In particular, we prove the following theorem in Section 4.2:

Theorem 5. Recall that for a transition matrix P that is irreducible when restricted to 𝒪n, Q(n) denotes the first nonzero coefficient in the Taylor expansion (1) of q (n) about θ = 0. Given a rooted tree T described above, define fP (T) as

f_P(T) = π_{ρ(T)} \prod_{(j→i)∈E(T)} P_{ji}.

Then, the quantity R(n) = Q(n)/Λ(n) is given by

R(𝐧) = \sum_T ℙ_𝐧(T) f_P(T) = 𝔼_𝐧[f_P(T)],  (6)

where the sum is taken over all rooted trees T with |𝒪n| vertices bijectively labeled by 𝒪n. That is, R(n) is the expectation of fP (T) under the above process.

Note that we can view fP (T) as a probability as well. In particular, suppose we relabel the vertices of T as follows: we assign a new label from [K] to ρ(T) according to the stationary distribution π, and for each edge in T, we assign a new label to the child according to the new label of its parent and the transition matrix P. Then fP (T) is the probability that we assign the original labels to all the vertices, given that we drew T. That is, if 𝒞𝒪n is the event that we assign the original labels to all vertices, then

f_P(T) = ℙ(𝒞_{𝒪_𝐧} | T) = π_{ρ(T)} \prod_{(j→i)∈E(T)} P_{ji}.

This immediately leads to the following interpretation:

R(𝐧) = \sum_T ℙ(𝒞_{𝒪_𝐧} | T) ℙ_𝐧(T) = ℙ_𝐧(𝒞_{𝒪_𝐧}).  (7)

That is, R(n) is the unconditional probability that we correctly label all the alleles, if we use the urn process to generate a tree on the alleles and then use the tree to assign labels.

4.2. An inductive proof of Theorem 5

In this subsection, we provide an inductive proof of Theorem 5. In Section 4.3, we provide an alternative proof based on a modified coalescent process which provides a more intuitive explanation for why the urn process works.

Proof of Theorem 5. Recall the recursion in (2):

n(n − 1 + θ) q(𝐧) = \sum_i n_i(n_i − 1) q(𝐧 − e_i) + θ \sum_{i,j} P_{ji} n_i q(𝐧 − e_i + e_j).

Recall also that if P is irreducible when restricted to 𝒪n, q(n) has leading order power θ|𝒪n|−1 in its Taylor series. Hence we get the following recursion for Q(n):

n(n − 1) Q(𝐧) = \sum_{i : n_i>1} n_i(n_i − 1) Q(𝐧 − e_i) + \sum_{i : n_i=1} \sum_{j : j≠i} P_{ji} n_i Q(𝐧 − e_i + e_j).

Plugging in Q(n) = Λ(n)R(n) and simplifying gives us the following recursion for R(n):

n(n − 1) R(𝐧) = \sum_{i : n_i>1} n_i(n − 1) R(𝐧 − e_i) + \sum_{i : n_i=1} \sum_{j : j≠i} P_{ji} n_j R(𝐧 − e_i + e_j).  (8)

A simple induction over |𝒪n| and n shows that this recursion has a unique solution given the boundary conditions R(ei). So if we can show (6) when |𝒪n| = n = 1, and then show that ∑T ℙ (𝒞𝒪n | T)ℙn(T) satisfies the recursion (8), then we will be done. The base case is trivial: when 𝒪n = {a}, there is only one possible tree, T = {a}, with ℙn(T) = 1 and ℙ(𝒞𝒪n | T) = πa = limθ→0 q(n) = Q(n) = Λ(n)R(n) = R(n).

To show ∑T ℙ(𝒞𝒪n | T)ℙn(T) satisfies (8), we start by giving recursions for ℙn(T) and ℙ(𝒞𝒪n | T). Let z(i) be the parent of i in T, and let L(T) be the set of leaves of T (where the root is not considered a leaf). Conditioning on the first event in the urn process gives us

ℙ_𝐧(T) = \sum_{i : n_i>1} \frac{n_i}{n} ℙ_{𝐧−e_i}(T) + \sum_{i∈L(T) : n_i=1} \frac{n_{z(i)}}{n(n−1)} ℙ_{𝐧−e_i+e_{z(i)}}(T \ {i}).  (9)

Furthermore, if iL(T), we have

ℙ(𝒞_{𝒪_𝐧} | T) = P_{z(i),i} ℙ(𝒞_{𝒪_𝐧\{i}} | T \ {i}).  (10)

Using (9) and (10), and collecting terms, we arrive at

n(n−1) \sum_T ℙ(𝒞_{𝒪_𝐧} | T) ℙ_𝐧(T) = \sum_T ℙ(𝒞_{𝒪_𝐧} | T) [ \sum_{i : n_i>1} n_i(n−1) ℙ_{𝐧−e_i}(T) + \sum_{i∈L(T) : n_i=1} n_{z(i)} ℙ_{𝐧−e_i+e_{z(i)}}(T \ {i}) ] = \sum_{i : n_i>1} n_i(n−1) \sum_T ℙ_{𝐧−e_i}(T) ℙ(𝒞_{𝒪_𝐧} | T) + \sum_{i : n_i=1} \sum_{j : j≠i} P_{ji} n_j \sum_{T′} ℙ_{𝐧−e_i+e_j}(T′) ℙ(𝒞_{𝒪_𝐧\{i}} | T′),

where the sum over T′ is taken over all rooted trees with vertex set 𝒪n \ {i}. Hence, ∑T ℙ(𝒞𝒪n | T)ℙn(T) satisfies (8).

4.3. Connection to the coalescent

In this subsection, we motivate our urn process by drawing a connection to the coalescent. We then use this connection with the coalescent to provide an alternate proof of Theorem 5.

Let ℋ be a history of mutation and coalescence events on n labeled individuals, and let q(ℋ) be the probability of ℋ. Then we have

q(𝐧) = \sum_{ℋ consistent with 𝐧} q(ℋ).  (11)

It turns out that only histories with exactly |𝒪n| − 1 mutations contribute to the leading order term of q(n); this is the observation also utilized in [12]. Furthermore, each history of choices in our urn process corresponds with a genealogical history of |𝒪n| − 1 mutations. This provides the basic intuition for why the urn sampling scheme works.

We start by providing a modified coalescent that generates a history ℋ that is consistent with n and has exactly |𝒪n| − 1 mutations. We then show that this modified coalescent is equivalent to our urn sampling process. Finally, we prove Theorem 5 by relating the modified coalescent with Kingman’s coalescent.

Consider the following modified coalescent process on our sample:

  1. Select allele i with probability m_i/m, where 𝐦 is our current configuration of alleles and m = |𝐦|.

  2. If m_i > 1, choose a random pair in allele i to coalesce (so 𝐦 is replaced with 𝐦 − e_i).

  3. If m_i = 1, have the last individual of allele i mutate to allele j with probability m_j/(m − 1) (so 𝐦 is replaced with 𝐦 − e_i + e_j).

  4. Repeat steps 1 to 3 until all individuals have coalesced.

It should be clear that the modified coalescent only generates histories with exactly |𝒪n| − 1 mutations, since each mutation kills an allele permanently.

If we take an unordered view of our sample, then the modified coalescent is equivalent to the urn process, for they have the same initial configuration and transition probabilities between configurations. In particular, when m_i > 1 we move from 𝐦 to 𝐦 − e_i with probability m_i/m, and when m_i = 1 we move from 𝐦 to 𝐦 − e_i + e_j with probability m_j/(m)_{2↓}. We generate trees on 𝒪n by drawing an edge (j → i) whenever we make a transition from 𝐦 to 𝐦 − e_i + e_j, i.e., whenever there is a mutation from i to j.

We now give a proof of Theorem 5, using the modified coalescent in place of the urn process:

Alternative proof of Theorem 5. Let ℋ be a coalescent history with exactly M mutations. Running time backwards from the present, we suppose that the ith mutation was from allele ui to allele υi, and that the most recent common ancestor has allele ρ. We further suppose that Ji is the total number of lineages at the time of the ith mutation. Then we have that

q(ℋ) = π_ρ ( \prod_{i=1}^{M} P_{υ_iu_i} ) \frac{θ^M}{\prod_{i=1}^{M} J_i(θ + J_i − 1)} \cdot \frac{2^{n−1}}{n! (θ+n−1)_{(n−1)↓}},

since the ith coalescence contributes probability \frac{n_i}{n_i+θ} \binom{n_i+1}{2}^{−1} = \frac{2}{(n_i+1)(n_i+θ)}, and the ith mutation contributes probability \frac{θ P_{υ_iu_i}}{J_i(J_i−1+θ)}.

Now, observe that

Q(ℋ) ≔ \lim_{θ→0} \frac{q(ℋ)}{θ^M} = π_ρ ( \prod_{i=1}^{M} P_{υ_iu_i} ) \frac{2^{n−1}}{n! (n−1)! \prod_{i=1}^{M} J_i(J_i−1)}.  (12)

Therefore, the Taylor series for q(ℋ) has leading power θ^M, with coefficient Q(ℋ).

Hence by (11), the Taylor series for q(n) has leading power θ^{|𝒪n|−1}, and its leading coefficient is given by the sum of all Q(ℋ) such that ℋ is consistent with n and has |𝒪n| − 1 mutations.

For such an ℋ, let ℙn(ℋ) be the probability of generating ℋ under our modified coalescent. Then we have that

ℙ_𝐧(ℋ) = \frac{2^{n−1}}{n! \prod_{k=1}^{|𝒪_𝐧|} (n_k−1)!} \prod_{i=1}^{|𝒪_𝐧|−1} \frac{1}{J_i(J_i−1)}.  (13)

To see this, note that if our current sample is m, the probability that the next event is a coalescence on allele i with mi > 1 is

\frac{m_i}{m} \cdot \frac{2}{m_i(m_i−1)} = \frac{2}{m(m_i−1)},

and if mi = 1, the probability that the next event is a mutation from allele i to allele j (where ji) is

\frac{m_j}{m(m−1)}.

Multiplying the probabilities of the mutation and coalescence events in ℋ, and noting that the numerator of each mutation term cancels with the denominator of a future coalescence term, yields the equation (13).

Combining (12) with (13) yields

Q(ℋ) = Λ(𝐧) π_ρ ( \prod_{i=1}^{|𝒪_𝐧|−1} P_{υ_iu_i} ) ℙ_𝐧(ℋ).

Now let 𝒯(ℋ) be the resulting tree on 𝒪n if we draw an edge (j → i) when allele i mutates to allele j. Then we have

Q(𝐧) = \sum_{ℋ consistent with 𝐧, ℋ has |𝒪_𝐧|−1 mutations} Q(ℋ) = Λ(𝐧) \sum_T π_{ρ(T)} ( \prod_{(j→i)∈T} P_{ji} ) ( \sum_{ℋ : 𝒯(ℋ)=T} ℙ_𝐧(ℋ) ) = Λ(𝐧) \sum_T π_{ρ(T)} ( \prod_{(j→i)∈T} P_{ji} ) ℙ_𝐧(T) = Λ(𝐧) \sum_T f_P(T) ℙ_𝐧(T),

and hence

R(𝐧) = \sum_T f_P(T) ℙ_𝐧(T),

as needed.

4.4. A martingale property

Here, we prove Theorem 1 and Corollary 3 by using martingales to compute ℙn(T) for 𝒪n = {a, b}, and for 𝒪n = {a, b, c} when P is reversible when restricted to 𝒪n. We run time as follows: whenever we remove a ball in the urn process, count this as one time step. If in doing so, we kill a color, count the adding of another ball as a separate time step.

Let ℱt be the σ-algebra generated by all sequences of choices up to time t. Let Xt be the proportion of balls that have color a at time t; so X0 = na/n. It is easy to check that {Xt} is a martingale with respect to {ℱt}: Suppose that m is the remaining sample after time t − 1, and we are deleting a ball at time t. Then,

𝔼[X_t | ℱ_{t−1}] = \frac{m_a}{m} \cdot \frac{m_a−1}{m−1} + \sum_{i≠a} \frac{m_i}{m} \cdot \frac{m_a}{m−1} = \frac{m_a}{m} = X_{t−1}.

On the other hand, if we are adding a ball at time t, then

𝔼[X_t | ℱ_{t−1}] = \frac{m_a}{m} \cdot \frac{m_a+1}{m+1} + \sum_{i≠a} \frac{m_i}{m} \cdot \frac{m_a}{m+1} = \frac{m_a}{m} = X_{t−1}.

So, {(Xt, ℱt), t ≥ 0} is a martingale.

Proof of Theorem 1. Suppose 𝒪n = {a, b}. Let T be the tree whose vertex set is 𝒪n, with a being the root. Let τ be the first time we kill a color. Noting that τ is a stopping time, we obtain

ℙ_𝐧(T) = 𝔼[ℙ_𝐧(T | ℱ_τ)] = 𝔼[𝕀(color a is the last remaining at time τ)] = 𝔼[X_τ] = 𝔼[X_0] = \frac{n_a}{n}.

Therefore, by Theorem 5,

Q(𝐧) = Λ(𝐧) ( \frac{n_a}{n} π_a P_{ab} + \frac{n_b}{n} π_b P_{ba} ).

Proof of Corollary 3. Suppose 𝒪n = {a, b, c} and P is reversible when restricted to 𝒪n. Note that ℙ (𝒞𝒪n | T) does not depend on how T is rooted, for by reversibility we can move the root around by

π_ρ P_{ρk} = π_k P_{kρ},  for all k ∈ 𝒪_𝐧, k ≠ ρ.

Therefore, we redefine ℙn(T) to be the probability of drawing the undirected tree T. We still have R(n) = ∑T ℙ (𝒞𝒪n | T)ℙn(T), but now the sum is taken over undirected T. Now let T be the tree on {a, b, c} whose interior vertex is a. We draw T if and only if a is chosen as the parent of the first color that we kill. So, letting τ be the first killing time and noting Xτ = ℙn(T | ℱτ), we have

ℙ_𝐧(T) = 𝔼[ℙ_𝐧(T | ℱ_τ)] = 𝔼[X_τ] = 𝔼[X_0] = \frac{n_a}{n}.

Therefore, by Theorem 5,

Q(𝐧) = Λ(𝐧) ( \frac{n_a}{n} π_a P_{ab} P_{ac} + \frac{n_b}{n} π_b P_{ba} P_{bc} + \frac{n_c}{n} π_c P_{ca} P_{cb} ).
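For a quick numerical check (ours, not from the paper), the corollary can be evaluated directly; with the symmetric three-allele matrix P_{ij} = 1/2 for i ≠ j, which is reversible with uniform π, it gives Q((1,1,1)) = 1/24:

```python
from math import factorial

def Q_corollary3(n, P, pi):
    """Corollary 3: Q(n) for |O_n| = 3, P reversible when restricted to O_n."""
    obs = [i for i, ni in enumerate(n) if ni > 0]
    assert len(obs) == 3, "Corollary 3 requires |O_n| = 3"
    ntot = sum(n)
    lam = 1.0
    for i in obs:                      # Lambda(n) of equation (4)
        lam *= factorial(n[i] - 1)
    lam /= factorial(ntot - 1)
    total = 0.0
    for a in obs:
        b, c = [x for x in obs if x != a]
        total += n[a] / ntot * pi[a] * P[a][b] * P[a][c]
    return lam * total
```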

4.5. A recursion for R(n)

In this section, we derive a recursion for R(𝐧) which will be useful for deriving closed-form formulas for Q(𝐧) when |𝒪_𝐧| = 3, 4. Given a sample configuration 𝐧 and a subsample 𝐦, define the expression \binom{𝐧}{𝐦} as

\binom{𝐧}{𝐦} = \prod_{i∈𝒪_𝐧} \binom{n_i}{m_i}.

The following proposition provides a recursion relating R(n) to R(m) where |𝒪m| = |𝒪n| − 1.

Proposition 6. Suppose P is irreducible when restricted to 𝒪n and let θ^{|𝒪n|−1} Q(n) = θ^{|𝒪n|−1} Λ(n)R(n) denote the leading order term in the Taylor expansion (1) of q(n) about θ = 0. Then, R(n) for |𝒪n| > 1 satisfies the recursion

R(𝐧) = \sum_{i,j∈𝒪_𝐧 : i≠j} P_{ji} \sum_{0≺𝐦⪯𝐧 : m_i=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_j R(𝐦 − e_i + e_j)}{m(m−1)},  (14)

with boundary conditions

R(𝐧) = π_a,  (15)

for all sample configurations n = naea, where a ∈ [K].

Proof of Proposition 6. We can derive this recursion from the urn process as follows. Let Dij(m) be the event where the first killing replaces a ball of color i with a ball of color j, and where m is the (unordered) configuration immediately before this killing. Then for any event A,

ℙ_𝐧(A) = \sum_{i,j : j≠i} \sum_{0≺𝐦⪯𝐧 : m_i=1} ℙ_𝐧(D_{ij}(𝐦)) ℙ_𝐧(A | D_{ij}(𝐦)),  (16)

where we use the fact that ℙn(Dij(m)) = 0 if mi ≠ 1 or mj = 0 for any j ∈ 𝒪n.

We compute ℙn(Dij(𝐦)) when 𝐦 ≻ 0 and m_i = 1. The probability that 𝐦 is the remaining configuration after n − m draws is

\frac{(n−m)!}{\prod_k (n_k−m_k)!} \cdot \frac{\prod_k (n_k)_{(n_k−m_k)↓}}{(n)_{(n−m)↓}} = \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}}.

To see this, note that the first term is the number of ways we can make nm draws that result in the configuration m, and the second term is the probability of each such sequence of draws.

When our current configuration is 𝐦 with m_i = 1, the probability that on the next draw we replace the last ball of color i with a ball of color j is m_j/(m)_{2↓}. Hence we get that

ℙ_𝐧(D_{ij}(𝐦)) = \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_j}{m(m−1)},

when 𝐦 ≻ 0 and m_i = 1.

Plugging this into (16) yields

ℙ_𝐧(A) = \sum_{i,j : j≠i} \sum_{0≺𝐦⪯𝐧 : m_i=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_j}{m(m−1)} ℙ_𝐧(A | D_{ij}(𝐦)).  (17)

Now recall from (7) that R(n) = ℙn(𝒞𝒪n). That is, R(n) is the probability that we assign the original labels to all alleles, if we use the urn process to generate a tree on 𝒪n and then use the tree to assign new labels to the alleles. Note that

ℙ(𝒞_{𝒪_𝐧} | D_{ij}(𝐦)) = P_{ji} ℙ_{𝐦−e_i+e_j}(𝒞_{𝒪_𝐧\{i}}) = P_{ji} R(𝐦 − e_i + e_j),

since we need to use the urn process with sample mei+ej to correctly relabel 𝒪n\{i}, and then assign the correct label to {i} with probability Pji. Plugging this into (17) with A = 𝒞𝒪n yields the desired recursion,

R(𝐧) = \sum_{i,j : j≠i} P_{ji} \sum_{0≺𝐦⪯𝐧 : m_i=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_j R(𝐦 − e_i + e_j)}{m(m−1)}.
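The recursion (14) can also be evaluated mechanically by enumerating subsamples 𝐦. The following unoptimized sketch is our own illustration (function names are ours); for |𝒪n| = 2 the recursive calls reduce to the boundary values (15), and the result reproduces Theorem 1:

```python
from math import comb
from itertools import product

def R_recursion(n, P, pi):
    """Evaluate R(n) through recursion (14) with boundary conditions (15)."""
    obs = [i for i, ni in enumerate(n) if ni > 0]
    if len(obs) == 1:                  # boundary (15): R(n e_a) = pi_a
        return pi[obs[0]]
    ntot, K = sum(n), len(n)
    total = 0.0
    for i in obs:
        others = [t for t in obs if t != i]
        # all m with m_i = 1 and 0 < m_t <= n_t for the other observed t
        for rest in product(*[range(1, n[t] + 1) for t in others]):
            m = [0] * K
            m[i] = 1
            for t, mt in zip(others, rest):
                m[t] = mt
            mtot = sum(m)
            w = 1.0                    # binom(n, m) / binom(ntot, mtot)
            for t in range(K):
                w *= comb(n[t], m[t])
            w /= comb(ntot, mtot)
            for j in others:
                child = list(m)
                child[i] -= 1
                child[j] += 1
                total += (P[j][i] * w * m[j]
                          * R_recursion(tuple(child), P, pi)
                          / (mtot * (mtot - 1)))
    return total
```

Multiplying by Λ(n) recovers Q(n); for example, with P_{12} = P_{21} = 1 and uniform π, R((2,1)) = 1/2 corresponds to Q((2,1)) = Λ((2,1)) · 1/2 = 1/4, matching Theorem 1.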

In the next two subsections, we use the recursion in Proposition 6 to provide proofs of Theorem 2 and Theorem 4.

4.6. Proof of Theorem 2 (|𝒪n| = 3)

For |𝒪n| = 3, the following expression for R(n) can be derived using Proposition 6:

R(𝐧) = \sum_{i,j : j≠i} P_{ji} \sum_{0≺𝐦⪯𝐧 : m_i=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_j R(𝐦 − e_i + e_j)}{m(m−1)} = \sum_{i,j : j≠i} P_{ji} \sum_{0≺𝐦⪯𝐧 : m_i=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_j}{m(m−1)} \sum_{k,l : l≠k and k,l≠i} \frac{m_k + δ_{j,k}}{m} π_k P_{kl} = \sum_{i,j : j≠i} P_{ji} \sum_{0≺𝐦⪯𝐧 : m_i=1} { \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{1}{m^2(m−1)} [ \sum_{l : l≠i,j} m_j(m_j+1) π_j P_{jl} + \sum_{k : k≠i,j} m_j m_k π_k P_{kj} ] } = \sum_{i,j : j≠i} P_{ji} \sum_{m=3}^{n} \sum_{0≺𝐦⪯𝐧 : m_i=1, |𝐦|=m} { \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{1}{m^2(m−1)} [ \sum_{k : k≠i,j} m_j(m_j+1) π_j P_{jk} + \sum_{k : k≠i,j} m_j m_k π_k P_{kj} ] } = \sum_{distinct i,j,k} \sum_{m=3}^{n} \sum_{0≺𝐦⪯𝐧 : m_i=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{π_j P_{ji} P_{jk} m_j(m_j+1) + π_k P_{kj} P_{ji} m_j m_k}{m^2(m−1)},  (18)

where in the second equality, the formula from Theorem 1 is used, noting that |𝒪_{𝐦−e_i+e_j}| = 2. If we define the quantities α(n, i, j, k) and β(n, i, j, k) as

α(𝐧,i,j,k) = \sum_{m=3}^{n} \frac{1}{m^2(m−1)} \sum_{0≺𝐦⪯𝐧 : m_i=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} m_j(m_j+1),  (19)

and

β(𝐧,i,j,k) = \sum_{m=3}^{n} \frac{1}{m^2(m−1)} \sum_{0≺𝐦⪯𝐧 : m_i=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} m_j m_k,

then (18) can be rewritten as

R(𝐧) = \sum_{distinct i,j,k} π_j P_{ji} P_{jk} α(𝐧,i,j,k) + \sum_{distinct i,j,k} π_k P_{kj} P_{ji} β(𝐧,i,j,k).  (20)

Now consider α(n, i, j, k) defined by (19). We can remove the restriction in the inner sum that m_i = 1 by defining 𝐦′ = 𝐦 − e_i, so that |𝐦′| = m − 1. Also, since j ≠ i in (20), m_j = m′_j. Making this change of variables from 𝐦 to 𝐦′ in the inner sum of (19), we get

\sum_{0≺𝐦⪯𝐧 : m_i=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} m_j(m_j+1) = \frac{\binom{n−n_i}{m−1}}{\binom{n}{m}} n_i \sum_{0≺𝐦′⪯𝐧−n_ie_i : |𝐦′|=m−1} \frac{\binom{𝐧−n_ie_i}{𝐦′}}{\binom{n−n_i}{m−1}} m′_j(m′_j+1).  (21)

Using identity (34) in Fact 5 of the Appendix, the summation over m′ in (21) can be written as

\sum_{0≺𝐦′⪯𝐧−n_ie_i : |𝐦′|=m−1} \frac{\binom{𝐧−n_ie_i}{𝐦′}}{\binom{n−n_i}{m−1}} m′_j(m′_j+1) = \sum_{T⊆[L] : i,j∉T} (−1)^{|T|} [ \frac{(n_j)_{2↓} (m−1)_{2↓}}{(n−n_i−n_T)_{2↓}} + \frac{2 n_j (m−1)}{n−n_i−n_T} ] \frac{\binom{n−n_i−n_T}{m−1}}{\binom{n−n_i}{m−1}}.  (22)

The only sets T satisfying the conditions in the summation in (22) are T = ∅ and T = {k}. Hence, substituting (21) and (22) in (19), we have

α(𝐧,i,j,k) = \sum_{m=3}^{n} \frac{1}{m^2(m−1)} \sum_{0≺𝐦⪯𝐧 : m_i=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} m_j(m_j+1) = \sum_{m=3}^{n} \frac{1}{m^2(m−1)} \frac{\binom{n−n_i}{m−1}}{\binom{n}{m}} n_i \sum_{0≺𝐦′⪯𝐧−n_ie_i : |𝐦′|=m−1} \frac{\binom{𝐧−n_ie_i}{𝐦′}}{\binom{n−n_i}{m−1}} m′_j(m′_j+1) = \sum_{m=3}^{n} \frac{n_i \binom{n−n_i}{m−1}}{m^2(m−1) \binom{n}{m}} { \frac{\binom{n_j+n_k}{m−1}}{\binom{n−n_i}{m−1}} [ \frac{(n_j)_{2↓}}{(n_j+n_k)_{2↓}} (m−1)_{2↓} + \frac{2 n_j (m−1)}{n_j+n_k} ] − \frac{\binom{n_j}{m−1}}{\binom{n−n_i}{m−1}} [ \frac{(n_j)_{2↓}}{(n_j)_{2↓}} (m−1)_{2↓} + \frac{2 n_j (m−1)}{n_j} ] } = \sum_{m=3}^{n} \frac{n_i}{m^2(m−1)} { \frac{\binom{n_j+n_k}{m−1}}{\binom{n}{m}} [ \frac{(n_j)_{2↓}}{(n_j+n_k)_{2↓}} (m−1)_{2↓} + \frac{2 n_j (m−1)}{n_j+n_k} ] − \frac{\binom{n_j}{m−1}}{\binom{n}{m}} m(m−1) } = \sum_{m=1}^{n−1} \frac{n_i}{n} { \frac{\binom{n_j+n_k}{m}}{\binom{n−1}{m}} [ \frac{(n_j)_{2↓}}{(n_j+n_k)_{2↓}} \frac{m−1}{m+1} + \frac{2 n_j}{n_j+n_k} \frac{1}{m+1} ] − \frac{\binom{n_j}{m}}{\binom{n−1}{m}} }.  (23)

Applying Facts 1 and 3 in the Appendix to (23) yields

α(𝐧,i,j,k) = \sum_{m=1}^{n−1} \frac{n_i}{n} [ \frac{(n_j)_{2↓}}{(n_j+n_k)_{2↓}} \frac{\binom{n_j+n_k}{m}}{\binom{n−1}{m}} − \frac{\binom{n_j}{m}}{\binom{n−1}{m}} + \frac{2 n_j n_k}{(n_j+n_k)_{2↓}} \frac{\binom{n_j+n_k}{m}}{\binom{n−1}{m}} \frac{1}{m+1} ] = \frac{n_i}{n} { \frac{(n_j)_{2↓}}{(n_j+n_k)_{2↓}} \frac{n_j+n_k}{n_i} − \frac{n_j}{n_i+n_k} + \frac{2 n_j n_k}{(n_j+n_k)_{2↓}} [ \frac{n}{n_j+n_k+1} (H_n − H_{n_i−1}) − 1 ] } = \frac{(n_j)_{2↓}}{n(n_j+n_k−1)} − \frac{n_i n_j}{n(n_i+n_k)} − \frac{2 n_i n_j n_k}{n (n_j+n_k)_{2↓}} + \frac{2 n_i n_j n_k}{(n_j+n_k+1)_{3↓}} (H_n − H_{n_i−1}).  (24)

Following a similar line of computation as above, we can find a closed-form expression for β(n, i, j, k) as follows:

β(𝐧,i,j,k) = \sum_{m=3}^{n} \frac{1}{m^2(m−1)} \sum_{0≺𝐦⪯𝐧 : m_i=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} m_j m_k = \sum_{m=3}^{n} \frac{1}{m^2(m−1)} \frac{\binom{n−n_i}{m−1}}{\binom{n}{m}} n_i \sum_{0≺𝐦′⪯𝐧−n_ie_i : |𝐦′|=m−1} \frac{\binom{𝐧−n_ie_i}{𝐦′}}{\binom{n−n_i}{m−1}} m′_j m′_k = \sum_{m=3}^{n} \frac{1}{m^2(m−1)} \frac{\binom{n_j+n_k}{m−1}}{\binom{n}{m}} n_i \frac{n_j n_k (m−1)_{2↓}}{(n_j+n_k)_{2↓}} = \sum_{m=1}^{n−1} \frac{n_i}{n} \frac{n_j n_k}{(n_j+n_k)_{2↓}} \frac{\binom{n_j+n_k}{m}}{\binom{n−1}{m}} ( 1 − \frac{2}{m+1} ) = \frac{n_i}{n} \frac{n_j n_k}{(n_j+n_k)_{2↓}} { \frac{n_j+n_k}{n_i} − 2 [ \frac{n}{n_j+n_k+1} (H_n − H_{n_i−1}) − 1 ] } = \frac{n_j n_k}{n(n_j+n_k−1)} + \frac{2 n_i n_j n_k}{n (n_j+n_k)_{2↓}} − \frac{2 n_i n_j n_k}{(n_j+n_k+1)_{3↓}} (H_n − H_{n_i−1}),  (25)

where the second equality above is the same change of variables from m to m′ = mei as in the α(n, i, j, k) term. The third equality follows from identity (35) in Fact 5, and the second to last equality follows from Facts 1 and 3. Substituting (24) and (25) into (20), and using (5) gives

Q(𝐧) = Λ(𝐧) \sum_{distinct i,j,k} [ π_j P_{ji} P_{jk} α(𝐧,i,j,k) + π_k P_{kj} P_{ji} β(𝐧,i,j,k) ] = Λ(𝐧) \sum_{distinct i,j,k} { π_j P_{ji} P_{jk} [ \frac{(n_j)_{2↓}}{n(n_j+n_k−1)} − \frac{n_i n_j}{n(n_i+n_k)} − \frac{2 n_i n_j n_k}{n (n_j+n_k)_{2↓}} + \frac{2 n_i n_j n_k}{(n_j+n_k+1)_{3↓}} (H_n − H_{n_i−1}) ] + π_k P_{kj} P_{ji} [ \frac{n_j n_k}{n(n_j+n_k−1)} + \frac{2 n_i n_j n_k}{n (n_j+n_k)_{2↓}} − \frac{2 n_i n_j n_k}{(n_j+n_k+1)_{3↓}} (H_n − H_{n_i−1}) ] }.  (26)

Note that if P is reversible when restricted to the observed alleles 𝒪n, then (26) simplifies to the expression given in Corollary 3.

4.7. Proof of Theorem 4 (|𝒪n| = 4)

Using Corollary 3, we first note the following alternate expression for R(n) when |𝒪n| = 3 and P is reversible restricted to the observed alleles:

R(𝐧) = \sum_{distinct i,j,k} \frac{n_i}{n} \frac{π_i P_{ij} P_{ik}}{2}.  (27)

Suppose |𝒪n| = 4 and assume that P is reversible restricted to the observed alleles 𝒪n. Then using Proposition 6, we obtain

R(𝐧) = \sum_{l,h : h≠l} P_{hl} \sum_{0≺𝐦⪯𝐧 : m_l=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_h R(𝐦 − e_l + e_h)}{m(m−1)} = \sum_{l,h : h≠l} P_{hl} \sum_{0≺𝐦⪯𝐧 : m_l=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_h}{m(m−1)} \sum_{distinct i,j,k : i,j,k≠l} \frac{m_i + δ_{i,h}}{m} \frac{π_i P_{ij} P_{ik}}{2} = \sum_{distinct i,j,k,l} \frac{1}{2} π_i P_{ij} P_{ik} P_{il} \sum_{0≺𝐦⪯𝐧 : m_l=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_i(m_i+1)}{m^2(m−1)} + \sum_{distinct i,j,k,l} π_i P_{ij} P_{ik} P_{jl} \sum_{0≺𝐦⪯𝐧 : m_l=1} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} \frac{m_i m_j}{m^2(m−1)},  (28)

where the second equality follows from using (27) since P is reversible when restricted to the alleles {i, j, k} ⊂ 𝒪n. Similar to the proof in Section 4.6, if we define quantities ζ(n, i, j, k, l) and δ(n, i, j, k, l) as

ζ(𝐧,i,j,k,l) = \sum_{m=4}^{n} \frac{1}{m^2(m−1)} \sum_{0≺𝐦⪯𝐧 : m_l=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} m_i(m_i+1),

and

δ(𝐧,i,j,k,l) = \sum_{m=4}^{n} \frac{1}{m^2(m−1)} \sum_{0≺𝐦⪯𝐧 : m_l=1, |𝐦|=m} \frac{\binom{𝐧}{𝐦}}{\binom{n}{m}} m_i m_j,

then, using (5) and (28), we obtain the following expression for Q(n) = Λ(n)R(n):

Q(𝐧) = Λ(𝐧) \sum_{distinct i,j,k,l} [ π_i P_{ij} P_{ik} P_{il} \frac{ζ(𝐧,i,j,k,l)}{2} + π_i P_{ij} P_{ik} P_{jl} δ(𝐧,i,j,k,l) ].  (29)

By a very similar calculation to that in Section 4.6, using Facts 1 and 3, and identities (34) and (35) in Fact 5 of the Appendix, we obtain the following closed-form expressions for ζ(n, i, j, k, l) and δ(n, i, j, k, l):

ζ(𝐧,i,j,k,l) = \frac{n_l}{n} { \frac{n_i+n_j+n_k}{n_l} \frac{(n_i)_{2↓}}{(n_i+n_j+n_k)_{2↓}} + \frac{n_i}{n_j+n_k+n_l} + \frac{2 n_i (n_j+n_k)}{(n_i+n_j+n_k)_{2↓}} ( \frac{n}{n_i+n_j+n_k+1} (H_n − H_{n_l−1}) − 1 ) − [ \frac{n_i+n_j}{n_k+n_l} \frac{(n_i)_{2↓}}{(n_i+n_j)_{2↓}} + \frac{2 n_i n_j}{(n_i+n_j)_{2↓}} ( \frac{n}{n_i+n_j+1} (H_n − H_{n_k+n_l−1}) − 1 ) ] − [ \frac{n_i+n_k}{n_j+n_l} \frac{(n_i)_{2↓}}{(n_i+n_k)_{2↓}} + \frac{2 n_i n_k}{(n_i+n_k)_{2↓}} ( \frac{n}{n_i+n_k+1} (H_n − H_{n_j+n_l−1}) − 1 ) ] },

and

δ(𝐧,i,j,k,l) = \frac{n_l}{n} { \frac{n_i+n_j+n_k}{n_l} \frac{n_i n_j}{(n_i+n_j+n_k)_{2↓}} − \frac{2 n_i n_j}{(n_i+n_j+n_k)_{2↓}} ( \frac{n}{n_i+n_j+n_k+1} (H_n − H_{n_l−1}) − 1 ) − [ \frac{n_i+n_j}{n_k+n_l} \frac{n_i n_j}{(n_i+n_j)_{2↓}} − \frac{2 n_i n_j}{(n_i+n_j)_{2↓}} ( \frac{n}{n_i+n_j+1} (H_n − H_{n_k+n_l−1}) − 1 ) ] }.

Simplifying the expression for δ(n, i, j, k, l), we get the expression stated in Theorem 4. Observing that ζ(n, i, j, k, l) is symmetric in j and k, we see that for all i, j, k, and l distinct in 𝒪n,

\frac{ζ(𝐧,i,j,k,l) + ζ(𝐧,i,k,j,l)}{2} = γ(𝐧,i,j,k,l) + γ(𝐧,i,k,j,l),

where γ(n, i, j, k, l) is given by:

γ(𝐧,i,j,k,l) = \frac{n_i}{n} { [ \frac{n_i−1}{2(n_i+n_j+n_k−1)} − \frac{2 n_j n_l}{(n_i+n_j+n_k)_{2↓}} ] + \frac{n_l}{2(n_j+n_k+n_l)} − [ \frac{n_l(n_i−1)}{(n_k+n_l)(n_i+n_j−1)} − \frac{2 n_j n_l}{(n_i+n_j)_{2↓}} ] } + \frac{2 n_i n_j n_l}{(n_i+n_j+n_k+1)_{3↓}} (H_n − H_{n_l−1}) − \frac{2 n_i n_j n_l}{(n_i+n_j+1)_{3↓}} (H_n − H_{n_k+n_l−1}).

Using the fact that πiPijPikPil is also symmetric in j and k, we can then rewrite (29) as

Q(𝐧) = Λ(𝐧) \sum_{distinct i,j,k,l} [ π_i P_{ij} P_{ik} P_{il} γ(𝐧,i,j,k,l) + π_i P_{ij} P_{ik} P_{jl} δ(𝐧,i,j,k,l) ].

5. Empirical study of accuracy

Here, we investigate the accuracy of approximating the sampling probability q(n) by using only the leading order term θ|𝒪n|−1 Q(n). In this study, we solve the recursion (2) numerically to obtain the true sampling probability q(n) for moderate sample sizes.

For a given sample n, define the approximate sampling probability, qapprox(n), by

\[
q_{\mathrm{approx}}(\mathbf{n}) = \theta^{|\mathcal{O}_{\mathbf{n}}|-1}\, Q(\mathbf{n}).
\]

We can then define the relative error, Err(n), of the approximation qapprox(n) from the true sampling probability q(n) as

\[
\mathrm{Err}(\mathbf{n}) = \frac{\left|q(\mathbf{n}) - q_{\mathrm{approx}}(\mathbf{n})\right|}{q(\mathbf{n})}.
\]

For a given sample size n, another natural measure of the approximation quality is the expected relative error under the distribution arising from the coalescent on samples of size n. Since q(n) is the probability of a particular ordered sample consistent with n, the probability p(n) of the unordered sample n, when sampling order is ignored, is given by

\[
p(\mathbf{n}) = \binom{n}{n_1, \ldots, n_K}\, q(\mathbf{n}).
\]

We can then define the expected relative error for a sample size n by AvgErr(n), given by

\[
\mathrm{AvgErr}(n) = \sum_{\mathbf{n} : |\mathbf{n}| = n} p(\mathbf{n})\, \mathrm{Err}(\mathbf{n}) = \sum_{\mathbf{n} : |\mathbf{n}| = n} \binom{n}{n_1, \ldots, n_K} \left|q(\mathbf{n}) - q_{\mathrm{approx}}(\mathbf{n})\right|.
\]

We also define the worst-case relative error, WorstErr(n), for a given sample size n as the worst relative error among all samples of size n. Specifically,

\[
\mathrm{WorstErr}(n) = \max_{\mathbf{n} : |\mathbf{n}| = n} \mathrm{Err}(\mathbf{n}) = \max_{\mathbf{n} : |\mathbf{n}| = n} \frac{\left|q(\mathbf{n}) - q_{\mathrm{approx}}(\mathbf{n})\right|}{q(\mathbf{n})}.
\]
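The three error measures above can be sketched in a few lines of code. The functions q and q_approx below are toy stand-ins (an i.i.d. ordered-sample probability over hypothetical allele frequencies, and a small perturbation of it), not the coalescent quantities of the paper; they merely exercise the definitions of AvgErr(n) and WorstErr(n).

```python
from itertools import product
from math import comb, prod

def multinomial(nvec):
    """Multinomial coefficient n! / (n_1! ... n_K!)."""
    res, tot = 1, 0
    for nt in nvec:
        tot += nt
        res *= comb(tot, nt)
    return res

# Toy stand-ins for q and q_approx (NOT the sampling formulas of the paper):
# q is the ordered-sample probability under i.i.d. draws from fixed allele
# frequencies, and q_approx perturbs it slightly.
freqs = (0.4, 0.3, 0.2, 0.1)

def q(nvec):
    return prod(f ** nt for f, nt in zip(freqs, nvec))

def q_approx(nvec):
    return q(nvec) * (1 + 0.01 * sum(1 for nt in nvec if nt > 0))

def error_measures(n, K):
    """Return (AvgErr(n), WorstErr(n)) over all unordered samples of size n."""
    avg, worst = 0.0, 0.0
    for nvec in product(range(n + 1), repeat=K):
        if sum(nvec) != n:
            continue
        diff = abs(q(nvec) - q_approx(nvec))
        avg += multinomial(nvec) * diff  # p(n) * Err(n) = multinomial * |q - q_approx|
        worst = max(worst, diff / q(nvec))
    return avg, worst

avg_err, worst_err = error_measures(6, 4)
assert avg_err <= worst_err  # an average can never exceed the maximum
```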

To study the accuracy of approximating q(n) by qapprox(n), we examine the behavior of AvgErr(n) and WorstErr(n) for a transition matrix estimated from real biological data. Specifically, we use the reversible phylogenetic mutation rate matrix estimated in [21, Table 1, matrix (1)] for the ψη-globin pseudogenes of six primate species. Since their estimated matrix is a matrix of nucleotide substitution rates intended for phylogenetic analysis, we rescale it by the minimum amount that makes it a valid Markov transition matrix. This rescaled matrix, denoted by P̂, is given below to three digits of precision, and is used in our numerical experiments with different values of the mutation parameter θ:

\[
\hat{P} = \begin{pmatrix}
0.433 & 0.398 & 0.074 & 0.095 \\
0.665 & 0.000 & 0.164 & 0.171 \\
0.074 & 0.098 & 0.394 & 0.434 \\
0.147 & 0.159 & 0.674 & 0.020
\end{pmatrix}, \tag{30}
\]

in the (T, C, A, G) basis. The stationary distribution corresponding to this transition matrix is π̂ = (0.308, 0.185, 0.308, 0.199) to three digits of precision.
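These two claims about P̂ are easy to check numerically: each row of a valid Markov transition matrix must sum to 1, and power iteration recovers the stated stationary distribution up to the 3-digit rounding of the published entries. This is an illustrative sketch, not part of the original analysis.

```python
from math import isclose

# P-hat from (30), rows and columns ordered (T, C, A, G).
P_hat = [[0.433, 0.398, 0.074, 0.095],
         [0.665, 0.000, 0.164, 0.171],
         [0.074, 0.098, 0.394, 0.434],
         [0.147, 0.159, 0.674, 0.020]]

# A valid transition matrix has nonnegative entries and unit row sums.
assert all(isclose(sum(row), 1.0, abs_tol=1e-9) for row in P_hat)

# Power iteration pi <- pi * P converges to the stationary distribution.
pi = [0.25] * 4
for _ in range(1000):
    pi = [sum(pi[a] * P_hat[a][b] for a in range(4)) for b in range(4)]

pi_hat = (0.308, 0.185, 0.308, 0.199)
# Agreement is limited by the 3-digit rounding of the published entries.
assert all(abs(x - y) < 2e-3 for x, y in zip(pi, pi_hat))
```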

For many neutral regions of the human genome, typical mutation rates per base are in the range 10−3 ≤ θ ≤ 10−2 [16], and we consider θ ∈ {10−3, 5 × 10−3, 10−2} in our study. For the transition matrix in (30), the expected relative error AvgErr(n) and the worst-case relative error WorstErr(n) are plotted in Figures 1(a) and 1(b), respectively, as functions of the sample size n. As can be seen from the plots, both the expected relative error and the worst-case relative error grow very slowly with the sample size n. Further, the ratio of WorstErr(n) to AvgErr(n) is a small number between 1.3 and 2.1 for all n ≤ 360, and is decreasing in n. Hence, it appears that the approximation quality of qapprox(n) is uniformly good over all samples n for any given size n.

Figure 1. Error plots as a function of the sample size n, for the transition matrix in (30) and mutation rate θ ∈ {10−3, 5 × 10−3, 10−2}. (a) The expected relative error, AvgErr(n). (b) The worst-case relative error, WorstErr(n).

Acknowledgments

We thank Paul Jenkins for useful discussion. This research is supported in part by an NIH grant R01-GM094402, an Alfred P. Sloan Research Fellowship, and a Packard Fellowship for Science and Engineering.

Appendix

Here, we provide some general combinatorial identities which are used several times for proving the main results in this paper.

Fact 1. For any positive integers x, y, a, and b with x ≤ y ≤ b ≤ a,

\[
\sum_{m=x}^{y} \frac{\binom{b}{m}}{\binom{a}{m}} = \frac{\binom{a+1-x}{a+1-b} - \binom{a-y}{a+1-b}}{\binom{a}{b}}. \tag{31}
\]

Proof. Starting from the left hand side of (31), we have:

\[
\sum_{m=x}^{y} \frac{\binom{b}{m}}{\binom{a}{m}} = \frac{b!\,(a-b)!}{a!} \sum_{m=x}^{y} \binom{a-m}{a-b} = \frac{\binom{a+1-x}{a+1-b} - \binom{a-y}{a+1-b}}{\binom{a}{b}},
\]

where the last equality follows from the standard combinatorial identity that for all positive integers a, n, and k,

\[
\sum_{i=a}^{n} \binom{n-i}{k} = \binom{n-a+1}{k+1}.
\]
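Fact 1 can be checked exhaustively over small parameter values with exact rational arithmetic; the following sketch is illustrative and not part of the original paper.

```python
from fractions import Fraction
from math import comb

def lhs(a, b, x, y):
    """Left-hand side of (31): sum of binom(b, m)/binom(a, m) for m = x..y."""
    return sum(Fraction(comb(b, m), comb(a, m)) for m in range(x, y + 1))

def rhs(a, b, x, y):
    """Right-hand side of (31)."""
    return Fraction(comb(a + 1 - x, a + 1 - b) - comb(a - y, a + 1 - b),
                    comb(a, b))

# Exhaustive check over all small parameter choices with x <= y <= b <= a.
for a in range(1, 9):
    for b in range(1, a + 1):
        for x in range(1, b + 1):
            for y in range(x, b + 1):
                assert lhs(a, b, x, y) == rhs(a, b, x, y)
```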

Fact 2. For positive integers a and b,

\[
\sum_{m=1}^{a} \frac{1}{m} \binom{a-m}{b} = \binom{a}{b}\left(H_a - H_b\right).
\]

Fact 2 can be verified by induction [4] or by the method of Wilf-Zeilberger pairs [5].
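Fact 2 likewise admits a direct exact check by enumeration; this sketch is illustrative only.

```python
from fractions import Fraction
from math import comb

def H(n):
    """Harmonic number H_n = 1 + 1/2 + ... + 1/n (with H_0 = 0)."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

# Exhaustive check of Fact 2 for small positive integers b <= a.
for a in range(1, 10):
    for b in range(1, a + 1):
        lhs = sum(Fraction(comb(a - m, b), m) for m in range(1, a + 1))
        assert lhs == comb(a, b) * (H(a) - H(b))
```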

Fact 3. For positive integers a and b where b ≤ a,

\[
\sum_{m=1}^{b} \frac{\binom{b}{m}}{\binom{a}{m}} \frac{1}{m+1} = \frac{a+1}{b+1}\left(H_{a+1} - H_{a-b}\right) - 1. \tag{32}
\]

Proof. Starting from the left hand side of (32), we have:

\[
\begin{aligned}
\sum_{m=1}^{b} \frac{\binom{b}{m}}{\binom{a}{m}} \frac{1}{m+1}
&= \frac{b!\,(a-b)!}{a!} \sum_{m=1}^{b} \binom{a-m}{a-b} \frac{1}{m+1}
= \frac{1}{\binom{a}{b}} \sum_{m=2}^{b+1} \binom{a+1-m}{a-b} \frac{1}{m} \\
&= \frac{1}{\binom{a}{b}} \left[\sum_{m=1}^{b+1} \binom{a+1-m}{a-b} \frac{1}{m} - \binom{a}{b}\right]
= \frac{1}{\binom{a}{b}} \left[\binom{a+1}{b+1}\left(H_{a+1} - H_{a-b}\right) - \binom{a}{b}\right] \\
&= \frac{a+1}{b+1}\left(H_{a+1} - H_{a-b}\right) - 1,
\end{aligned}
\]

where the fourth equality follows from using Fact 2.
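As with the previous facts, identity (32) can be verified exactly for small a and b; this check is illustrative and not from the original paper.

```python
from fractions import Fraction
from math import comb

def H(n):
    """Harmonic number H_n (with H_0 = 0)."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

# Exhaustive check of (32) for small positive integers b <= a.
for a in range(1, 10):
    for b in range(1, a + 1):
        lhs = sum(Fraction(comb(b, m), comb(a, m)) * Fraction(1, m + 1)
                  for m in range(1, b + 1))
        rhs = Fraction(a + 1, b + 1) * (H(a + 1) - H(a - b)) - 1
        assert lhs == rhs
```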

We also list some facts about the moments of a hypergeometric distribution which are appealed to several times in the paper.

Fact 4. Suppose a multivariate hypergeometric distribution is parameterized by n = (n1, n2, …, nL), with n = |n|, and a sample m = (m1, m2, …, mL) of size m is drawn from it. Then for any t = (t1, t2, …, tL) with ti ≥ 0 for all i, t = |t|, and t ≤ n,

\[
\mathbb{E}\left[\prod_{i=1}^{L} (m_i)_{t_i}\right] = \sum_{\substack{\mathbf{0} \le \mathbf{m} \le \mathbf{n}:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}}{\mathbf{m}}}{\binom{n}{m}} \prod_{i=1}^{L} (m_i)_{t_i} = \frac{\prod_{i=1}^{L} (n_i)_{t_i}}{(n)_t}\,(m)_t. \tag{33}
\]

Proof. Starting from the middle term in (33), we get:

\[
\begin{aligned}
\sum_{\substack{\mathbf{0} \le \mathbf{m} \le \mathbf{n}:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}}{\mathbf{m}}}{\binom{n}{m}} \prod_{i=1}^{L} (m_i)_{t_i}
&= \sum_{\substack{\mathbf{0} \le \mathbf{m} \le \mathbf{n}:\\ |\mathbf{m}| = m}} \frac{\prod_{i=1}^{L} (n_i)_{t_i}}{(n)_t}\,(m)_t\, \frac{\binom{\mathbf{n}-\mathbf{t}}{\mathbf{m}-\mathbf{t}}}{\binom{n-t}{m-t}} \\
&= \frac{\prod_{i=1}^{L} (n_i)_{t_i}}{(n)_t}\,(m)_t \sum_{\substack{\mathbf{0} \le \mathbf{m} \le \mathbf{n}-\mathbf{t}:\\ |\mathbf{m}| = m-t}} \frac{\binom{\mathbf{n}-\mathbf{t}}{\mathbf{m}}}{\binom{n-t}{m}}
= \frac{\prod_{i=1}^{L} (n_i)_{t_i}}{(n)_t}\,(m)_t,
\end{aligned}
\]

where the last equality follows because the summand is the probability mass function of a multivariate hypergeometric distribution parameterized by n − t, and the sum over the entire domain of that distribution equals 1.
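The factorial-moment identity (33) can be confirmed by enumerating the whole sample space of a small multivariate hypergeometric distribution; the parameter values below are arbitrary illustrations.

```python
from itertools import product
from fractions import Fraction
from math import comb

def falling(x, t):
    """Falling factorial (x)_t = x (x - 1) ... (x - t + 1)."""
    res = 1
    for s in range(t):
        res *= x - s
    return res

def moment_lhs(nvec, m, tvec):
    """E[prod_i (m_i)_{t_i}] computed by enumerating every sample mvec."""
    n = sum(nvec)
    total = Fraction(0)
    for mvec in product(*(range(nt + 1) for nt in nvec)):
        if sum(mvec) != m:
            continue
        w = Fraction(1)
        for nt, mt in zip(nvec, mvec):
            w *= comb(nt, mt)
        w /= comb(n, m)
        for mt, tt in zip(mvec, tvec):
            w *= falling(mt, tt)
        total += w
    return total

def moment_rhs(nvec, m, tvec):
    """Closed form prod_i (n_i)_{t_i} * (m)_t / (n)_t from (33)."""
    n, t = sum(nvec), sum(tvec)
    num = Fraction(1)
    for nt, tt in zip(nvec, tvec):
        num *= falling(nt, tt)
    return num * falling(m, t) / falling(n, t)

for tvec in [(0, 0, 0), (1, 1, 0), (1, 2, 0)]:
    assert moment_lhs((3, 2, 2), 4, tvec) == moment_rhs((3, 2, 2), 4, tvec)
```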

In the following fact, we compute some second moments of the hypergeometric distribution parameterized by n when restricted to those samples m which are non-zero at all types.

Fact 5. Let n = (n1, n2, …, nL), with n = |n|, and let 1 ≤ j, k ≤ L with j ≠ k. For T ⊆ [L], write n_T for the vector that agrees with n on the types in T and is 0 elsewhere, and n_T = |n_T|. Then we have the following identities:

\[
\sum_{\substack{\mathbf{0} < \mathbf{m} \le \mathbf{n}:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}}{\mathbf{m}}}{\binom{n}{m}}\, m_j(m_j+1) = \sum_{\substack{T \subseteq [L]:\\ j \notin T}} (-1)^{|T|} \left[\frac{(n_j)_2\,(m)_2}{(n-n_T)_2} + \frac{2 n_j m}{n-n_T}\right] \frac{\binom{n-n_T}{m}}{\binom{n}{m}}, \tag{34}
\]
\[
\sum_{\substack{\mathbf{0} < \mathbf{m} \le \mathbf{n}:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}}{\mathbf{m}}}{\binom{n}{m}}\, m_j m_k = \sum_{\substack{T \subseteq [L]:\\ j,k \notin T}} (-1)^{|T|}\, \frac{n_j n_k\,(m)_2}{(n-n_T)_2}\, \frac{\binom{n-n_T}{m}}{\binom{n}{m}}. \tag{35}
\]

Proof. Applying the inclusion-exclusion principle and using Fact 4, the identity in (34) can be obtained as

\[
\begin{aligned}
\sum_{\substack{\mathbf{0} < \mathbf{m} \le \mathbf{n}:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}}{\mathbf{m}}}{\binom{n}{m}}\, m_j(m_j+1)
&= \sum_{\substack{T \subseteq [L]:\\ j \notin T}} (-1)^{|T|} \left[\sum_{\substack{\mathbf{0} \le \mathbf{m} \le \mathbf{n}-\mathbf{n}_T:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}-\mathbf{n}_T}{\mathbf{m}}}{\binom{n-n_T}{m}} \left((m_j)_2 + 2 m_j\right)\right] \frac{\binom{n-n_T}{m}}{\binom{n}{m}} \\
&= \sum_{\substack{T \subseteq [L]:\\ j \notin T}} (-1)^{|T|} \left[\frac{(n_j)_2\,(m)_2}{(n-n_T)_2} + \frac{2 n_j m}{n-n_T}\right] \frac{\binom{n-n_T}{m}}{\binom{n}{m}}.
\end{aligned}
\]

Similarly for (35), we have

\[
\begin{aligned}
\sum_{\substack{\mathbf{0} < \mathbf{m} \le \mathbf{n}:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}}{\mathbf{m}}}{\binom{n}{m}}\, m_j m_k
&= \sum_{\substack{T \subseteq [L]:\\ j,k \notin T}} (-1)^{|T|} \left[\sum_{\substack{\mathbf{0} \le \mathbf{m} \le \mathbf{n}-\mathbf{n}_T:\\ |\mathbf{m}| = m}} \frac{\binom{\mathbf{n}-\mathbf{n}_T}{\mathbf{m}}}{\binom{n-n_T}{m}}\, m_j m_k\right] \frac{\binom{n-n_T}{m}}{\binom{n}{m}} \\
&= \sum_{\substack{T \subseteq [L]:\\ j,k \notin T}} (-1)^{|T|}\, \frac{n_j n_k\,(m)_2}{(n-n_T)_2}\, \frac{\binom{n-n_T}{m}}{\binom{n}{m}}.
\end{aligned}
\]
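Identities (34) and (35) can also be confirmed by enumeration. The sketch below implements the inclusion-exclusion right-hand sides as read here, with the Fact 4 moments (n_j)_2 (m)_2 / (n − n_T)_2 + 2 n_j m / (n − n_T) and n_j n_k (m)_2 / (n − n_T)_2, and compares them with the restricted sums on the left for an arbitrary small example.

```python
from itertools import product, combinations
from fractions import Fraction
from math import comb

def falling(x, t):
    """Falling factorial (x)_t = x (x - 1) ... (x - t + 1)."""
    res = 1
    for s in range(t):
        res *= x - s
    return res

def lhs(nvec, m, weight):
    """Hypergeometric weight times weight(mvec), summed over samples of
    size m that are non-zero at every type."""
    n = sum(nvec)
    total = Fraction(0)
    for mvec in product(*(range(1, nt + 1) for nt in nvec)):
        if sum(mvec) != m:
            continue
        w = Fraction(1)
        for nt, mt in zip(nvec, mvec):
            w *= comb(nt, mt)
        total += w / comb(n, m) * weight(mvec)
    return total

def rhs34(nvec, m, j):
    """Inclusion-exclusion right-hand side of (34)."""
    n, L = sum(nvec), len(nvec)
    total = Fraction(0)
    for size in range(L):
        for T in combinations([t for t in range(L) if t != j], size):
            nT = sum(nvec[t] for t in T)
            w = comb(n - nT, m)
            if w == 0:
                continue
            bracket = (Fraction(falling(nvec[j], 2) * falling(m, 2),
                                falling(n - nT, 2))
                       + Fraction(2 * nvec[j] * m, n - nT))
            total += (-1) ** size * bracket * Fraction(w, comb(n, m))
    return total

def rhs35(nvec, m, j, k):
    """Inclusion-exclusion right-hand side of (35)."""
    n, L = sum(nvec), len(nvec)
    total = Fraction(0)
    for size in range(L):
        for T in combinations([t for t in range(L) if t not in (j, k)], size):
            nT = sum(nvec[t] for t in T)
            w = comb(n - nT, m)
            if w == 0:
                continue
            total += ((-1) ** size
                      * Fraction(nvec[j] * nvec[k] * falling(m, 2),
                                 falling(n - nT, 2))
                      * Fraction(w, comb(n, m)))
    return total

nvec, m = (3, 2, 2), 4
assert lhs(nvec, m, lambda mv: mv[0] * (mv[0] + 1)) == rhs34(nvec, m, 0)
assert lhs(nvec, m, lambda mv: mv[0] * mv[1]) == rhs35(nvec, m, 0, 1)
```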

References

1. Arratia R, Barbour AD, Tavaré S. Logarithmic Combinatorial Structures: A Probabilistic Approach. Switzerland: European Mathematical Society Publishing House; 2003.
2. Bhaskar A, Song YS. Closed-form asymptotic sampling distributions under the coalescent with recombination for an arbitrary number of loci. Advances in Applied Probability. 2011; in press. doi:10.1239/aap/1339878717. (Preprint arXiv:1107.4700.)
3. Ewens WJ. The sampling theory of selectively neutral alleles. Theoretical Population Biology. 1972;3:87–112. doi:10.1016/0040-5809(72)90035-4.
4. Fu Y-X. Statistical properties of segregating sites. Theoretical Population Biology. 1995;48:172–197. doi:10.1006/tpbi.1995.1025.
5. Griffiths RC. The frequency spectrum of a mutation, and its age, in a general diffusion model. Theoretical Population Biology. 2003;64:241–251. doi:10.1016/s0040-5809(03)00075-3.
6. Griffiths RC, Lessard S. Ewens’ sampling formula and related formulae: combinatorial proofs, extensions to variable population size and applications to ages of alleles. Theoretical Population Biology. 2005;68:167–177. doi:10.1016/j.tpb.2005.02.004.
7. Griffiths RC, Tavaré S. Ancestral inference in population genetics. Statistical Science. 1994;9:307–319.
8. Griffiths RC, Tavaré S. Sampling theory for neutral alleles in a varying environment. Proc. R. Soc. London B. 1994;344:403–410. doi:10.1098/rstb.1994.0079.
9. Hoppe F. Pólya-like urns and the Ewens’ sampling formula. J. Math. Biol. 1984;20:91–94.
10. Jenkins PA, Song YS. Closed-form two-locus sampling distributions: accuracy and universality. Genetics. 2009;183:1087–1103. doi:10.1534/genetics.109.107995.
11. Jenkins PA, Song YS. An asymptotic sampling formula for the coalescent with recombination. Annals of Applied Probability. 2010;20:1005–1028. doi:10.1214/09-AAP646. (Technical Report 775, Department of Statistics, University of California, Berkeley, 2009.)
12. Jenkins PA, Song YS. The effect of recurrent mutation on the frequency spectrum of a segregating site and the age of an allele. Theoretical Population Biology. 2011;80:158–173. doi:10.1016/j.tpb.2011.04.001.
13. Jenkins PA, Song YS. Padé approximants and exact two-locus sampling distributions. Annals of Applied Probability. 2011; in press. doi:10.1214/11-AAP780. (Technical Report 793, Department of Statistics, University of California, Berkeley, 2009.)
14. Kingman JFC. The coalescent. Stochastic Processes and Their Applications. 1982;13:235–248.
15. Kingman JFC. On the genealogy of large populations. Journal of Applied Probability. 1982;19:27–43.
16. Nachman MW, Crowell SL. Estimate of the mutation rate per nucleotide in humans. Genetics. 2000;156:297–304. doi:10.1093/genetics/156.1.297.
17. Pitman J. The two-parameter generalization of Ewens’ random partition structure. Technical Report 345, Department of Statistics, U.C. Berkeley; 1992.
18. Pitman J. Exchangeable and partially exchangeable random partitions. Probab. Th. Rel. Fields. 1995;102:145–158.
19. Stephens M. Inference under the coalescent. In: Balding D, Bishop M, Cannings C, editors. Handbook of Statistical Genetics. Chichester, UK: Wiley; 2001. pp. 213–238.
20. Wright S. Adaptation and selection. In: Jepson GL, Mayr E, Simpson GG, editors. Genetics, Paleontology and Evolution. Princeton University Press; 1949. pp. 365–389.
21. Yang Z. Estimating the pattern of nucleotide substitution. Journal of Molecular Evolution. 1994;39:105–111. doi:10.1007/BF00178256.
