Systematic Biology. 2016 Sep 11;66(1):e66–e82. doi: 10.1093/sysbio/syw077

Fundamentals and Recent Developments in Approximate Bayesian Computation

Jarno Lintusaari 1,*, Michael U Gutmann 1,2, Ritabrata Dutta 1, Samuel Kaski 1, Jukka Corander 2,3

Abstract

Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.]

Introduction

Many recent models in biology describe nature to a high degree of accuracy but are not amenable to analytical treatment. The models can, however, be simulated on computers and we can thereby replicate many complex phenomena such as the evolution of genomes (Marttinen et al. 2015), the dynamics of gene regulation (Toni et al. 2009), or the demographic spread of a species (Currat and Excoffier 2004; Fagundes et al. 2007; Itan et al. 2009; Excoffier et al. 2013). Such simulator-based models are often stochastic and have multiple parameters. While it is usually relatively easy to generate data from the models for any configuration of the parameters, the real interest is often focused on the inverse problem: the identification of parameter configurations that would plausibly lead to data that are sufficiently similar to the observed data. Solving such a nonlinear inverse problem is generally a very difficult task.

Bayesian inference provides a principled framework for solving the aforementioned inverse problem. A prior probability distribution on the model parameters is used to describe the initial beliefs about what values of the parameters could be plausible. The prior beliefs are updated in light of the observed data by means of the likelihood function. Computing the likelihood function, however, is mostly impossible for simulator-based models due to the unobservable (latent) random quantities that are present in the model. In some cases, Monte Carlo methods offer a way to handle the latent variables such that an approximate likelihood is obtained, but these methods have their limitations, and for large and complex models, they are “too inefficient by far” (Green et al. 2015, p. 848). To deal with models where likelihood calculations fail, other techniques have been developed that are collectively referred to as likelihood-free inference or approximate Bayesian computation (ABC).

In a nutshell, ABC algorithms sample from the posterior distribution of the parameters by finding values that yield simulated data sufficiently resembling the observed data. ABC is widely used in systematics. For instance, Hickerson et al. (2006) used ABC to test for simultaneous divergence between members of species pairs. Fan and Kubatko (2011) estimated the topology and speciation times of a species tree under the coalescent model using ABC. Their method does not require sequence data, but only gene tree topology information, and was found to perform favorably in terms of both accuracy and computation time. Slater et al. (2012) used ABC to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. They found their ABC approach to be comparable to likelihood-based methods that use complete data sets. In addition, it can handle extremely sparsely sampled phylogenies and trees containing very large numbers of species. Ratmann et al. (2012) used ABC to fit two different mechanistic phylodynamic models for interpandemic influenza A(H3N2) using both surveillance data and sequence data simultaneously. The simultaneous consideration of these two types of data allowed them to drastically constrain the parameter space and expose model deficiencies using the ABC framework. Very recently, Baudet et al. (2015) used ABC to reconstruct the coevolutionary history of host–parasite systems. The ABC-based method was shown to handle large trees beyond the scope of other existing methods.

While widely applicable, ABC comes with its own set of difficulties, which are both computational and statistical in nature. The two main intrinsic difficulties are how to efficiently find plausible parameter values and how to define what is similar to the observed data and what is not. All ABC algorithms have to deal with these two issues in some manner, and the algorithms discussed here essentially differ in how they tackle them.

The remainder of this article is structured as follows. We next discuss important properties of simulator-based models and point out difficulties when performing statistical inference with them. The discussion leads to the basic rejection ABC algorithm which is presented in the subsequent section. This is followed by a presentation of popular ABC algorithms that have been developed to increase the computational efficiency. We then consider several recent advances that aim to improve ABC both computationally and statistically. The final section provides conclusions and a discussion about likelihood-free inference methods related to ABC.

Simulator-based Models

Definition

Simulator-based models are functions $M(\theta, V)$ that map the model parameters $\theta$ and some random variables $V$ to data $Y_\theta$. The functions $M$ are generally implemented as computer programs where the parameter values are provided as input and where the random variables are drawn sequentially by making calls to a random number generator. The parameters $\theta$ govern the properties of interest of the generated data, whereas the random variables $V$ represent the stochastic variation inherent to the simulated process.

The mapping $M$ may be as complex as needed, and this generality of simulator-based models allows researchers to implement hypotheses about how the data were generated without having to make excessive compromises motivated by mathematical simplicity, or other reasons not related to the scientific question being investigated.

Due to the presence of the random variables $V$, the outputs of the simulator fluctuate randomly even when using exactly the same values of the model parameters $\theta$. This means that we can consider the simulator to define a random variable $Y_\theta$ whose distribution is implicitly determined by the distribution of $V$ and the mapping $M$ acting on $V$ for a given $\theta$. For this reason, simulator-based models are sometimes called implicit models (Diggle and Gratton 1984). Using the properties of transformations of random variables, it is possible to formally write down the distribution of $Y_\theta$. For instance, for a fixed value of $\theta$, the probability that $Y_\theta$ takes values in an $\epsilon$ neighborhood $B_\epsilon(y_0)$ around the observed data $y_0$ is equal to the probability to draw values of $V$ that are mapped to that neighborhood (Fig. 1),

$$ \Pr(Y_\theta \in B_\epsilon(y_0)) = \Pr(M(\theta, V) \in B_\epsilon(y_0)). \qquad (1) $$

Computing the probability analytically is impossible for complex models. But it is possible to test empirically whether a particular outcome $y_\theta$ of the simulation ends up in the neighborhood of $y_0$ or not (Fig. 1). We will see that this property of simulator-based models plays a key role in performing inference about their parameters.
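As a concrete illustration, the probability in Equation (1) can be estimated by simple Monte Carlo: run the simulator many times and record the proportion of outcomes that fall in the neighborhood. The sketch below uses a hypothetical toy simulator (a Gaussian location model standing in for $M$); the simulator, the distance, and the value of $\epsilon$ are all illustrative assumptions, not part of the tuberculosis example discussed later.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, rng):
    # Stand-in for M(theta, V): the random variables V are the
    # internal draws of the random number generator.
    return theta + rng.normal(size=5)

y0 = simulator(1.0, rng)   # pretend these are the observed data
eps = 1.0                  # neighborhood size
n_sim = 10_000

# Estimate Pr(Y_theta in B_eps(y0)) for a fixed theta by the
# proportion of simulations that fall within the neighborhood.
theta = 1.0
hits = sum(
    np.linalg.norm(simulator(theta, rng) - y0) <= eps
    for _ in range(n_sim)
)
print("estimated probability:", hits / n_sim)
```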

Figure 1. Illustration of the stochastic simulator $M(\theta, V)$ run multiple times with a fixed value of $\theta$. The black dot $y_0$ is the observed data and the arrows point to different simulated data sets. Two outcomes, marked in green, are less than $\epsilon$ away from $y_0$. The proportion of such outcomes provides an approximation of the likelihood of $\theta$ for the observed data $y_0$.

Example

As an example of a simulator-based model, we here present the simple yet analytically intractable model by Tanaka et al. (2006) for the spread of tuberculosis. We will use the model throughout the article for illustrating different concepts and methods.

The model begins with one infectious host and stops when a fixed number of infectious hosts $m$ is exceeded (Fig. 2). In the simulation, it is assumed that each infectious host randomly infects other individuals from an unlimited supply of hosts with the rate $\alpha$, each time transmitting a specific strain of the communicable pathogen, characterized by its haplotype. It is thus effectively assumed that a strong transmission bottleneck occurs, such that only a single strain is passed forward in each transmission event, despite the eventual genetic variation persisting in the within-host pathogen population. Further, each infected host is considered to be infectious immediately. The model states that a host stops being infectious, that is, recovers or dies, randomly with the rate $\delta$, and the pathogen of the host mutates randomly within the host at the rate $\tau$, thereby generating a novel haplotype under a single-locus infinite alleles model. The parameters of the model are thus $\theta = (\alpha, \delta, \tau)$. The output of the simulator is a vector of cluster sizes in the simulated population of infected hosts, where clusters are the groups of hosts infected by the same haplotype of the pathogen. After the simulation, a random sample of size $n$ is taken from the population yielding the vector of cluster sizes $y$ present in the sample. For example, $y = (6, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1)$ corresponds to a sample of size $n = 20$ containing one cluster with six infected hosts, one cluster with three hosts, two clusters with two hosts each, as well as seven singleton clusters. Note that this model of pathogen spread is atypical in the sense that the observation times of the infections are all left implicit in the sampling process, in contrast to the standard likelihood formulation used for infectious disease epidemiological models (Anderson and May 1992).
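To make the description concrete, the following is a minimal Python sketch of such a simulator, written here for illustration; it is not the original implementation of Tanaka et al. (2006), and the function name and the example parameter values in the last line are our own choices. Since all hosts are statistically identical, the type of the next event can be chosen with probability proportional to the rates $(\alpha, \delta, \tau)$, and the affected cluster with probability proportional to its size.

```python
import numpy as np

def simulate_tb(alpha, delta, tau, m, rng):
    """Simulate cluster sizes for the tuberculosis spread model.

    alpha: transmission rate, delta: recovery/death rate,
    tau: mutation rate, m: population size at which to stop.
    Returns the cluster size vector of the full infectious population
    (may be empty if the epidemic dies out; a full implementation
    would typically restart in that case).
    """
    clusters = [1]  # one cluster: the initial infectious host
    while 0 < sum(clusters) <= m:
        # Event type with probability proportional to (alpha, delta, tau).
        p = np.array([alpha, delta, tau])
        event = rng.choice(3, p=p / p.sum())
        # Pick a host uniformly at random, i.e. a cluster with
        # probability proportional to its size.
        sizes = np.array(clusters)
        i = rng.choice(len(clusters), p=sizes / sizes.sum())
        if event == 0:        # transmission: the cluster grows by one
            clusters[i] += 1
        elif event == 1:      # recovery/death: the cluster shrinks
            clusters[i] -= 1
        else:                 # mutation: a new singleton haplotype
            clusters[i] -= 1
            clusters.append(1)
        clusters = [c for c in clusters if c > 0]
    return sorted(clusters, reverse=True)

rng = np.random.default_rng(1)
print(simulate_tb(alpha=0.2, delta=0.0, tau=0.198, m=20, rng=rng))
```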

Figure 2. An example of a transmission process simulated under a parameter configuration $\theta$ without subsampling of the simulated infectious population. Arrows indicate the sequence of random events taking place in the simulation, and different colors represent different haplotypes of the pathogen. The simulation starts with one infectious host who transmits the pathogen to another host. After one more transmission event, the pathogen undergoes a mutation within one of the three hosts infected so far (event three). As the sixth event in the simulation, one of the haplotypes is removed from the population due to the recovery/death of the corresponding host. The simulation stops when the infectious population size exceeds $m$ and the simulator outputs the generated cluster size vector $y$. The nodes not connected by arrows show all the other possible configurations of the infectious population that were not visited in this example run of the simulator. The bottom row lists the possible outputs of the simulator (cluster size vectors) under their corresponding population configuration.

Difficulties in Performing Statistical Inference

Values of the parameters $\theta$ that are plausible in the light of the observations $y_0$ can be determined via statistical inference either by finding values that maximize the probability in Equation (1) for some sufficiently small $\epsilon$, or by determining their posterior distribution. In more detail, in maximum likelihood estimation, the parameters are determined by maximizing the likelihood function $L(\theta)$,

$$ L(\theta) = \lim_{\epsilon \to 0} c_\epsilon \Pr(Y_\theta \in B_\epsilon(y_0)), \qquad (2) $$

where $c_\epsilon$ is a proportionality factor that may depend on $\epsilon$, which is needed when $\Pr(Y_\theta \in B_\epsilon(y_0))$ shrinks to zero as $\epsilon$ approaches zero. If the output of the simulator can only take a countable number of values, $Y_\theta$ is called a discrete random variable and the definition of the likelihood simplifies to $L(\theta) = \Pr(Y_\theta = y_0)$, which equals the probability of simulating data equal to the observed data. In Bayesian inference, the essential characterization of the uncertainty about the model parameters is defined by their conditional distribution given the data, that is, the posterior distribution $p(\theta \mid y_0)$,

$$ p(\theta \mid y_0) \propto L(\theta)\, p(\theta), \qquad (3) $$

where $p(\theta)$ is the prior distribution of the parameters.

For complex models, neither the probability in Equation (1) nor the likelihood function $L(\theta)$ is available analytically in closed form as a function of $\theta$, which is the reason why statistical inference is difficult for simulator-based models.

For the model of tuberculosis transmission presented in the previous section, computing the likelihood function becomes intractable if the infectious population size $m$ is large, or if the death rate $\delta > 0$ (Stadler 2011). This is because for large $m$, the state space of the process, that is, the number of different cluster vectors, grows very quickly. This makes exact numerical calculation of the likelihood infeasible because, in essence, every possible path to the outcome must be accounted for (Fig. 2). Moreover, if the death rate $\delta$ is nonzero, the process is allowed to return to previous states, which further complicates the computations. Finally, the assumption that not all infectious hosts are observed contributes additionally to the intractability of the likelihood. Stadler (2011) approached the problem using transmission trees (Fig. 3). The likelihood function stays, however, intractable because of the vast number of different trees that all yield the same observed data and thus need to be considered when evaluating the likelihood of a parameter value.

Figure 3. The transmission process in Figure 2 can also be described with transmission trees (Stadler 2011) paired with mutations. The trees are characterized by their structure, the length of their edges, and the mutations on the edges (marked with small circles that change the color of the edge, where colors represent the different haplotypes of the pathogen). The figure shows three examples of different trees that yield the same observed data at the observation time. Calculating the likelihood of a parameter value requires summing over all possible trees yielding the observed data, which is computationally impossible when the sample size is large.

Inference via Rejection Sampling

We present here an algorithm for exact posterior inference that is applicable when $Y_\theta$ can only take countably many values, that is, if $Y_\theta$ is a discrete random variable. As shown above, in this case $L(\theta) = \Pr(Y_\theta = y_0)$. The presented algorithm forms the basis of the algorithms for ABC discussed in the later sections.

In general, samples from the prior distribution $p(\theta)$ of the parameters can be converted into samples from the posterior $p(\theta \mid y_0)$ by retaining each sampled value with a probability proportional to its likelihood $L(\theta)$. This can be done sequentially by first sampling a parameter value from the prior, $\theta \sim p(\theta)$, and then accepting the obtained value with probability $L(\theta)$. This procedure corresponds to rejection sampling (see e.g., Robert and Casella 2004, Chapter 2). Now, with the likelihood $L(\theta)$ being equal to the probability that $Y_\theta = y_0$, the latter step can be implemented for simulator-based models even when $L(\theta)$ is not available analytically: we run the simulator and check whether the generated data equal the observed data. This gives the rejection algorithm for simulator-based models summarized as Algorithm 1. Rubin (1984) used it to provide intuition about how Bayesian inference about parameters works in general.

Algorithm 1. Rejection sampling for simulator-based models: draw $\theta \sim p(\theta)$, simulate $y_\theta$, and accept $\theta$ if $y_\theta = y_0$; repeat until the desired number of samples has been accepted.
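A minimal Python sketch of Algorithm 1 follows; the helper names `sample_prior` and `simulate` are ours and stand for drawing $\theta \sim p(\theta)$ and running the simulator, respectively.

```python
def rejection_sampler(sample_prior, simulate, y0, n_samples):
    """Algorithm 1: exact posterior sampling for discrete data.

    Repeatedly draw theta from the prior, simulate data, and keep
    theta only if the simulated data exactly equal the observed data.
    """
    accepted = []
    while len(accepted) < n_samples:
        theta = sample_prior()
        y = simulate(theta)
        if y == y0:            # exact match required
            accepted.append(theta)
    return accepted

# Toy usage: posterior of a coin's bias given one binomial draw.
import numpy as np
rng = np.random.default_rng(2)
samples = rejection_sampler(
    sample_prior=lambda: rng.uniform(0, 1),
    simulate=lambda p: rng.binomial(n=10, p=p),
    y0=7,
    n_samples=1000,
)
print(np.mean(samples))  # close to (7+1)/(10+2) under a uniform prior
```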

To obtain another interpretation of Algorithm 1, recall that for discrete random variables the posterior distribution $p(\theta \mid y_0)$ is, by definition, equal to the joint distribution of $\theta$ and $Y_\theta$ evaluated at $y_0$, normalized by the probability that the generated data equal $y_0$. That is, the posterior is obtained by conditioning on the event $Y_\theta = y_0$. We can thus understand the test for equality of $y_\theta$ and $y_0$ in the algorithm as an implementation of the conditioning operation.

To illustrate Algorithm 1, we generated a synthetic data set $y_0$ from the tuberculosis transmission model by running the simulator with fixed values of the parameters $\alpha$, $\delta$, and $\tau$, and setting the population size to $m = 20$. We further assumed that the whole population is observed, which yielded the observed data $y_0 = (6, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1)$. The assumptions about the size of the population, and that the whole population was observed, are unrealistic, but they enable a comparison with the exact posterior distribution, which in this setting can be numerically computed using Theorem 1 of Stadler (2011). In this case, the histogram of samples obtained with Algorithm 1 matches the posterior distribution very accurately (Fig. 4). To obtain this result, we assumed that both of the parameters $\delta$ and $\tau$ were known and assigned a uniform prior distribution on a bounded interval to the sole unknown parameter, the transmission rate $\alpha$. A total of 20 million data sets $y_\theta$ were simulated, out of which 40,000 matched $y_0$ (an acceptance rate of 0.2%).

Figure 4. Exact inference for a simulator-based model of tuberculosis transmission. A very simple setting was chosen where the exact posterior can be numerically computed (black line), and where Algorithm 1 is applicable (blue bars).

Fundamentals of Approximate Bayesian Computation

The Rejection ABC Algorithm

While Algorithm 1 produces independent samples from the posterior, the probability that the simulated data equal the observed data is often negligibly small, which renders the algorithm impractical as virtually no simulated realizations of $Y_\theta$ will be accepted. The same problem holds true if the generated data can take uncountably many values, that is, when $Y_\theta$ is a continuous random variable.

To make inference feasible, the acceptance criterion $y_\theta = y_0$ in Algorithm 1 can be relaxed to

$$ d(y_\theta, y_0) \le \epsilon, \qquad (4) $$

where $\epsilon \ge 0$ and $d$ is a "distance" function that measures the discrepancy between the two data sets, as considered relevant for the inference. With this modification, Algorithm 1 becomes the rejection ABC algorithm summarized as Algorithm 2. The first implementation of this algorithm appeared in the work by Pritchard et al. (1999).

Algorithm 2. Rejection ABC: draw $\theta \sim p(\theta)$, simulate $y_\theta$, and accept $\theta$ if $d(y_\theta, y_0) \le \epsilon$; repeat until the desired number of samples has been accepted.
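The only change relative to Algorithm 1 is the acceptance test, as the following minimal sketch shows (again with our own helper names; the caller supplies the distance function and the threshold):

```python
def rejection_abc(sample_prior, simulate, distance, y0, eps, n_samples):
    """Algorithm 2: rejection ABC with tolerance eps.

    Accept theta whenever the simulated data fall within distance
    eps of the observed data y0.
    """
    accepted = []
    while len(accepted) < n_samples:
        theta = sample_prior()
        y = simulate(theta)
        if distance(y, y0) <= eps:   # relaxed acceptance criterion
            accepted.append(theta)
    return accepted
```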

Algorithm 2 does not produce samples from the posterior $p(\theta \mid y_0)$ in Equation (3) but samples from an approximation $p_{d,\epsilon}(\theta \mid y_0)$,

$$ p_{d,\epsilon}(\theta \mid y_0) \propto \Pr(d(Y_\theta, y_0) \le \epsilon)\, p(\theta), \qquad (5) $$

which is the posterior distribution of $\theta$ conditional on the event $d(Y_\theta, y_0) \le \epsilon$. Equation (5) is obtained by approximating the intractable likelihood function $L(\theta)$ in Equation (2) with $L_{d,\epsilon}(\theta)$,

$$ L_{d,\epsilon}(\theta) \propto \Pr(d(Y_\theta, y_0) \le \epsilon). \qquad (6) $$

The approximation is two-fold. First, the distance function $d$ is generally not a metric in the mathematical sense, namely $d(y_\theta, y_0)$ may equal zero even if $y_\theta \neq y_0$. This may happen, for example, when $d$ is defined through summary statistics that remove information from the data (see below). Second, $\epsilon$ is chosen large enough so that enough samples will be accepted. Intuitively, the likelihood of a parameter value is approximated by the probability that running the simulator with said parameter value produces data within $\epsilon$ distance of $y_0$ (Fig. 1).

The distance $d(y_\theta, y_0)$ is typically computed by first reducing the data to suitable summary statistics $T(y)$ and then computing the distance $d_T$ between them, so that $d(y_\theta, y_0) = d_T(T(y_\theta), T(y_0))$, where $d_T$ is often the Euclidean or some other metric for the summary statistics. When combining different summary statistics, they are usually re-scaled so that they contribute equally to the distance (as, e.g., done by Pritchard et al. 1999).

In addition to the accuracy of the approximation $p_{d,\epsilon}(\theta \mid y_0)$, the distance $d$ and the threshold $\epsilon$ also influence the computing time required to obtain samples. For instance, if $\epsilon = 0$ and the distance $d$ is such that $d(y_\theta, y_0) = 0$ if and only if $y_\theta = y_0$, then Algorithm 2 becomes Algorithm 1 and $p_{d,\epsilon}(\theta \mid y_0)$ becomes $p(\theta \mid y_0)$, but the computing time to obtain samples would typically be impractically large. Hence, on a very fundamental level, there is a trade-off between statistical and computational efficiency in ABC (see e.g., Beaumont et al. 2002, p. 2027).

We next illustrate Algorithm 2 and the mentioned trade-off using the previous example about tuberculosis transmission. Two distances, $d_1$ and $d_2$, are considered,

$$ d_1(y_\theta, y_0) = |T_1(y_\theta) - T_1(y_0)|, \qquad d_2(y_\theta, y_0) = |T_2(y_\theta) - T_2(y_0)|, \qquad (7) $$

where $T_1(y)$ is the number of clusters contained in the data divided by the sample size $n$, and $T_2(y)$ is a genetic diversity measure defined as $T_2(y) = 1 - \sum_i (n_i/n)^2$, where $n_i$ is the size of the $i$-th cluster. For $y_0$, we have $T_1(y_0) = 0.55$ and $T_2(y_0) = 0.85$. For both $d_1$ and $d_2$, the absolute difference between the summary statistics is used as the metric $d_T$.
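Both statistics and distances are a few lines of code for cluster size vectors; a small sketch with our own function names:

```python
import numpy as np

def T1(y):
    """Number of clusters divided by the sample size n."""
    y = np.asarray(y)
    return len(y) / y.sum()

def T2(y):
    """Genetic diversity: 1 - sum_i (n_i / n)^2."""
    y = np.asarray(y)
    return 1.0 - np.sum((y / y.sum()) ** 2)

def d1(y, y0):
    return abs(T1(y) - T1(y0))

def d2(y, y0):
    return abs(T2(y) - T2(y0))

y0 = [6, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1]
print(T1(y0), T2(y0))  # approximately 0.55 and 0.85
```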

For this example, using the summary statistic $T_1$ instead of the full data does not lead to a visible deterioration of the inferred posterior when $\epsilon = 0$ (Fig. 5a). For summary statistic $T_2$, however, there is a clear difference, as the posterior mode and mean are shifted to larger values of $\alpha$ and the posterior variance is larger too (Fig. 5b). In both cases, increasing $\epsilon$, that is, accepting more parameters, leads to an approximate posterior distribution that is less concentrated than the true posterior.

Figure 5. Inference results for the transmission rate $\alpha$ of tuberculosis. The plots show the posterior distributions obtained with Algorithm 2 and 20 million simulated data sets (proposals). a) Cluster frequency as a summary statistic. b) Genetic diversity as a summary statistic.

Algorithm 2 with summary statistic $T_1$ produces results comparable to Algorithm 1, but from the computational efficiency point of view, the number of simulations required to obtain the approximate posterior differs between the two algorithms. For a computational budget of 100,000 simulations, the posterior obtained by Algorithm 1 differs substantially from the exact posterior, while the posterior from Algorithm 2 with $T_1$ still matches it well (Fig. 6a). The relatively poor result with Algorithm 1 is due to its low acceptance rate (here 0.2%). While the accepted samples do follow the exact posterior $p(\theta \mid y_0)$, the algorithm did not manage to produce enough accepted realizations within the available computational budget, which implies that the Monte Carlo error of the posterior approximation remains nonnegligible.

Figure 6. Comparison of the efficiency of Algorithms 1 and 2. Smaller KL divergence means more accurate inference of the posterior distribution. Note that the stopping criterion of the algorithms has here been changed to the total number of runs of the simulator instead of the number of accepted samples. a) Results after 100,000 simulations. b) Accuracy versus computational cost.

Plotting the number of data sets simulated versus the accuracy of the inferred posterior distribution allows us to further study the trade-off between statistical and computational efficiency for the different algorithms (Fig. 6b). The accuracy is measured by the Kullback–Leibler (KL) divergence (Kullback and Leibler 1951) between the exact and the inferred posterior. Algorithm 2 with summary statistic $T_1$ features the best trade-off, whereas summary statistic $T_2$ performs the worst. The curve of the latter flattens out after approximately 1 million simulations, showing the approximation error introduced by using the summary statistic $T_2$. For Algorithm 1, nonzero values of the KL divergence are due to the Monte Carlo error only, and it will approach the true posterior as the number of simulations grows. When using summary statistics, nonzero values of the KL divergence are due to both the Monte Carlo error and the use of the summary statistics. In this particular example, the error caused by the summary statistic $T_1$ is, however, negligible.

Choice of the Summary Statistics

If the distance $d$ is computed by projecting the data to summary statistics followed by their comparison using a metric in the summary statistics space (e.g., the Euclidean distance), the quality of the inference hinges on the summary statistics chosen (Figs. 5 and 6).

For consistent performance of ABC algorithms, the summary statistics should be sufficient for the parameters, but this is often not the case. Additionally, with the increase in the number of summary statistics used, more simulations tend to be rejected so that an increasing number of simulation runs is needed to obtain a satisfactory number of accepted parameter values, making the algorithm computationally extremely inefficient. This is known as the curse of dimensionality for ABC (see also the discussion in the review paper by Beaumont 2010).

One of the main remedies to the above issue is to efficiently choose informative summary statistics. Importantly, the summary statistics that are informative for the parameters in a neighborhood of the true parameter value, and the summary statistics most informative globally, are significantly different (Nunes and Balding 2010). General intuition suggests that the set of summary statistics that are locally sufficient would be a subset of the globally sufficient ones. Therefore, a good strategy seems to first find a locality containing the true parameter with high enough probability and then choose informative statistics depending on that locality. However, this can be difficult in practice because rather different parameter values can produce summary statistics that are contained in the same locality.

In line with the above, Nunes and Balding (2010), Fearnhead and Prangle (2012), and Aeschbacher et al. (2012) first defined “locality” through a pilot ABC run and then chose the statistics in that locality. Four methods for choosing the statistics are generally used: (i) a sequential scheme based on the principle of approximate sufficiency (Joyce and Marjoram 2008); (ii) selection of a subset of the summary statistics maximizing prespecified criteria such as the Akaike information criterion (used by Blum et al. 2013) or the entropy of a distribution (used by Nunes and Balding 2010); (iii) partial least square regression which finds linear combinations of the original summary statistics that are maximally decorrelated and at the same time highly correlated with the parameters (Wegmann et al. 2009); (iv) assuming a statistical model between parameters and transformed statistics of simulated data, summary statistics are chosen by minimizing a loss function (Aeschbacher et al. 2012; Fearnhead and Prangle 2012). For comparison of the above methods in simulated and practical examples, we refer readers to the work by Blum and François (2010), Aeschbacher et al. (2012), and Blum et al. (2013).

Choice of the Threshold

Having the distance function $d$ specified, possibly using summary statistics, the remaining factor in the approximation of the posterior in Equation (5) is the specification of the threshold $\epsilon$.

Larger values of $\epsilon$ result in biased approximations $p_{d,\epsilon}(\theta \mid y_0)$ (see e.g., Fig. 5). The gain is a faster algorithm, meaning a reduced Monte Carlo error, as one is able to produce more samples per unit of time. Therefore, when specifying the threshold, the goal is to find a good balance between the bias and the Monte Carlo error. We illustrate this using Algorithm 2 with the full data, without reduction to summary statistics [in other words, $T(y) = y$]. In this case, Algorithm 2 with $\epsilon = 0$ is identical to Algorithm 1. Here a small nonzero threshold results in a better posterior compared to $\epsilon = 0$ when using a maximal number of 100,000 simulations (Fig. 7a). This means that the gain from the reduced Monte Carlo error is greater than the loss incurred by the bias. But this is no longer true for a larger threshold, for which the bias dominates. Eventually, the exact method will converge to the true posterior, whereas the other two continue to suffer from the bias caused by the larger thresholds (Fig. 7b). However, with smaller computational budgets (less than 2 million simulations in our example), more accurate results are obtained with the small nonzero threshold.

Figure 7. Comparison of the trade-off between Monte Carlo error and bias. Algorithm 1 is equivalent here to Algorithm 2 with $\epsilon = 0$. Smaller KL divergences mean more accurate inference of the posterior distribution. a) Results after 100,000 simulations. b) Accuracy versus computational cost.

The choice of the threshold is typically made by experimenting with a precomputed pool of simulation–parameter pairs $(y^{(i)}, \theta^{(i)})$. Rather than setting the threshold value by hand, it is often determined by accepting some small proportion of the simulations (e.g., 1%, see Beaumont et al. 2002). The choice between different options can be made more rigorous by using some of the simulated data sets in the role of the observed data and solving the inference problem for them using the remaining data sets. As the data-generating parameters are known for the simulated observations, different criteria, such as the mean squared error (MSE) between the mean of the approximation and the generating parameters, can be used to make the choice [see e.g., Faisal et al. (2013) and the section on validation of ABC]. This also allows one to assess the reliability of the inference procedure. Prangle et al. (2014) discuss the use of the coverage property (Wegmann et al. 2009) as the criterion to choose the threshold value $\epsilon$. Intuitively, the coverage property tests whether the parameter values $\theta^{(i)}$ used to artificially generate a data set $y^{(i)}$ are covered by the credible intervals constructed from the ABC output for $y^{(i)}$ at correct rates (i.e., credible intervals of a given level should contain the true parameter at that same rate across the tests).
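Determining the threshold by accepting a small proportion of the simulations amounts to setting $\epsilon$ to a quantile of the precomputed distances, as in the following minimal sketch (array names are ours, and the stand-in distances in the usage lines are synthetic):

```python
import numpy as np

def accept_by_quantile(thetas, distances, proportion=0.01):
    """Set eps to the given quantile of the distances and return
    the accepted parameters together with the implied eps."""
    distances = np.asarray(distances)
    eps = np.quantile(distances, proportion)
    return thetas[distances <= eps], eps

# Usage with a synthetic pool of simulation-parameter pairs.
rng = np.random.default_rng(3)
thetas = rng.uniform(0, 2, size=100_000)
distances = np.abs(rng.normal(loc=thetas - 1.0))  # stand-in distances
accepted, eps = accept_by_quantile(thetas, distances)
print(len(accepted), eps)
```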

If one plans to increase the computational budget after initial experiments, theoretical convergence results can be used to adjust the threshold value. Barber et al. (2015) provide convergence results for an optimal $\epsilon$ sequence with respect to the MSE of a posterior expectation (e.g., the posterior mean). The theoretically optimal sequence for the threshold $\epsilon$ is achieved by making it proportional to $N^{-1/4}$ as $N \to \infty$, where $N$ is the number of accepted samples. If the constant in this relation is estimated in a pilot run, one can compute the new theoretically optimal threshold based on the planned increase in the computational budget. Blum (2010) derives corresponding results using an approach based on conditional density estimation, finding that $\epsilon$ should optimally be proportional to $N_{\mathrm{sim}}^{-1/(d+5)}$ as $N_{\mathrm{sim}} \to \infty$, where $d$ is the dimension of the parameter space and $N_{\mathrm{sim}}$ the total number of simulations performed [see also Fearnhead and Prangle (2012), Silk et al. (2013), and Biau et al. (2015) for similar results].

Beyond Simple Rejection Sampling

The basic rejection ABC algorithm is essentially a trial and error scheme where the trial (proposal) values are sampled from the prior. We now review three popular algorithms that seek to improve upon the basic rejection approach. The first two aim at constructing proposal distributions that are closer to the posterior, whereas the third is a correction method that aims at adjusting samples obtained by ABC algorithms so that they are closer to the posterior.

Markov Chain Monte Carlo ABC

The Markov chain Monte Carlo (MCMC) ABC algorithm is based on the Metropolis–Hastings MCMC algorithm that is often used in Bayesian statistics (Robert and Casella 2004, Chapter 7). In order to leverage this algorithm, we write $p_{d,\epsilon}(\theta \mid y_0)$ in Equation (5) as the marginal distribution of $p_{d,\epsilon}(\theta, y \mid y_0)$,

$$ p_{d,\epsilon}(\theta, y \mid y_0) \propto p(\theta)\, p(y \mid \theta)\, \mathbf{1}[d(y, y_0) \le \epsilon], \qquad (8) $$

where $p(y \mid \theta)$ denotes the probability density (mass) function of $Y_\theta$, and $\mathbf{1}[d(y, y_0) \le \epsilon]$ equals one if $d(y, y_0) \le \epsilon$ and zero otherwise. Importantly, while $p(y \mid \theta)$ is generally unknown for simulator-based models, it is still possible to use $p_{d,\epsilon}(\theta, y \mid y_0)$ as the target distribution in a Metropolis–Hastings MCMC algorithm by choosing the proposal distribution in the right way. The obtained (marginal) samples of $\theta$ then follow the approximate posterior $p_{d,\epsilon}(\theta \mid y_0)$.

Assuming that the Markov chain is at iteration $i$ in state $x^{(i)} = (\theta^{(i)}, y^{(i)})$, the Metropolis–Hastings algorithm involves sampling a candidate state $x' = (\theta', y')$ from a proposal distribution $q(x' \mid x^{(i)})$ and accepting the candidate with the probability $A(x' \mid x^{(i)})$,

$$ A(x' \mid x^{(i)}) = \min\!\left(1,\; \frac{p_{d,\epsilon}(x' \mid y_0)\, q(x^{(i)} \mid x')}{p_{d,\epsilon}(x^{(i)} \mid y_0)\, q(x' \mid x^{(i)})}\right). \qquad (9) $$

Choosing the proposal distribution such that the move from $\theta^{(i)}$ to $\theta'$ does not depend on the value of $y^{(i)}$, and such that $y'$ is sampled from the simulator-based model with parameter value $\theta'$ irrespective of $x^{(i)}$, we have

$$ q(x' \mid x^{(i)}) = q(\theta' \mid \theta^{(i)})\, p(y' \mid \theta'), \qquad (10) $$

where $q(\theta' \mid \theta^{(i)})$ is a suitable proposal distribution for $\theta$. As a result of this choice, the unknown quantities in Equation (9) cancel out,

$$ A(x' \mid x^{(i)}) = \min\!\left(1,\; \frac{p(\theta')\, q(\theta^{(i)} \mid \theta')}{p(\theta^{(i)})\, q(\theta' \mid \theta^{(i)})}\right) \mathbf{1}[d(y', y_0) \le \epsilon]. \qquad (11) $$

This means that the acceptance probability is only probabilistic in $\theta$, since a proposal $x' = (\theta', y')$ is immediately rejected if the condition $d(y', y_0) \le \epsilon$ is not met. While the Markov chain operates in the $(\theta, y)$ space, the choice of the proposal distribution decouples the acceptance criterion into an ordinary Metropolis–Hastings criterion for $\theta$ and the previously seen ABC rejection criterion for the simulated data. The resulting algorithm, shown in full in the Appendix, is known as the MCMC ABC algorithm and was introduced by Marjoram et al. (2003).
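Combining Equations (10) and (11) gives the following minimal sketch of MCMC ABC; for simplicity it assumes a scalar parameter and a symmetric Gaussian random-walk proposal, for which the $q$ ratio in Equation (11) cancels (all helper names are ours):

```python
import numpy as np

def mcmc_abc(log_prior, simulate, distance, y0, eps,
             theta_init, n_iter, step, rng):
    """MCMC ABC with a symmetric Gaussian random-walk proposal.

    Because the proposal is symmetric, the acceptance probability in
    Equation (11) reduces to the prior ratio times the ABC rejection
    test d(y', y0) <= eps.
    """
    theta = theta_init
    chain = [theta]
    for _ in range(n_iter):
        theta_prop = theta + step * rng.normal()
        y_prop = simulate(theta_prop)
        accept = (
            distance(y_prop, y0) <= eps           # ABC rejection test
            and np.log(rng.uniform()) <           # Metropolis test on
                log_prior(theta_prop) - log_prior(theta)  # the prior
        )
        if accept:
            theta = theta_prop
        chain.append(theta)
    return np.array(chain)
```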

An advantage of the MCMC ABC algorithm is that the parameter values do not need to be drawn from the prior, which most often hampers the rejection sampler by incurring a high rejection rate of the proposals. As the Markov chain converges, the proposed parameter values follow the posterior with some added noise. A potential disadvantage, however, is the continuing presence of the rejection condition $d(y', y_0) \le \epsilon$, which dominates the acceptance rate of the algorithm. Parameters in the tails of the posterior have, by definition, a small probability to generate data $y'$ satisfying the rejection condition, which can lead to a "sticky" Markov chain where the state remains constant for many iterations.

Sequential Monte Carlo ABC

The sequential Monte Carlo (SMC) ABC algorithm can be considered as an adaptation of importance sampling, which is a popular technique in statistics (see e.g., Robert and Casella 2004, Chapter 3). If one uses a general distribution $\phi(\theta)$ in place of the prior $p(\theta)$, Algorithm 2 produces samples that follow a distribution proportional to $\phi(\theta)\, L_{d,\epsilon}(\theta)$. However, by weighting the accepted parameters $\theta^{(i)}$ with weights $w^{(i)}$,

$$ w^{(i)} \propto \frac{p(\theta^{(i)})}{\phi(\theta^{(i)})}, \qquad (12) $$

the resulting weighted samples follow $p_{d,\epsilon}(\theta \mid y_0)$. This kind of trick is used in importance sampling and can be employed in ABC to iteratively morph the prior into the posterior.

The basic idea is to use a sequence of shrinking thresholds $\epsilon_1 \ge \epsilon_2 \ge \ldots$ and to define the proposal distribution $\phi_t$ at iteration $t$ based on the weighted samples $(\theta_{t-1}^{(i)}, w_{t-1}^{(i)})$, $i = 1, \ldots, N$, from the previous iteration (Fig. 8). This is typically done by defining a mixture distribution,

$$ \phi_t(\theta) \propto \sum_{i=1}^{N} q_t(\theta \mid \theta_{t-1}^{(i)})\, w_{t-1}^{(i)}, \qquad (13) $$

where $q_t(\theta \mid \theta_{t-1}^{(i)})$ is often a Gaussian distribution with mean $\theta_{t-1}^{(i)}$ and a covariance matrix estimated from the samples. Sampling from $\phi_t$ can be done by choosing a $\theta_{t-1}^{(i)}$ with probability proportional to its weight $w_{t-1}^{(i)}$ and then perturbing the chosen parameter according to $q_t$. The proposed sample is then accepted or rejected as in Algorithm 2, and the weights of the accepted samples are computed with Equation (12), with $\phi_t$ in place of $\phi$. Such iterative algorithms were proposed by Sisson et al. (2007), Beaumont et al. (2009), and Toni et al. (2009), and are called SMC ABC or population Monte Carlo ABC algorithms. The algorithm by Beaumont et al. (2009) is given in the Appendix.
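The following minimal sketch shows one possible implementation for a scalar parameter, with a Gaussian perturbation kernel of fixed standard deviation; this is a simplification of the algorithm of Beaumont et al. (2009), and all helper names are ours:

```python
import numpy as np

def smc_abc(prior_sample, prior_pdf, simulate, distance, y0,
            eps_schedule, n_particles, sigma, rng):
    """Sequential Monte Carlo ABC with a Gaussian perturbation kernel."""
    # Iteration 1: plain rejection ABC from the prior.
    thetas, weights = [], []
    while len(thetas) < n_particles:
        theta = prior_sample()
        if distance(simulate(theta), y0) <= eps_schedule[0]:
            thetas.append(theta)
            weights.append(1.0)
    thetas, weights = np.array(thetas), np.array(weights)
    weights /= weights.sum()

    for eps in eps_schedule[1:]:
        new_thetas = []
        while len(new_thetas) < n_particles:
            # Sample from the mixture proposal phi_t: pick a particle
            # by weight, then perturb it with the Gaussian kernel q_t.
            theta = rng.choice(thetas, p=weights) + sigma * rng.normal()
            if distance(simulate(theta), y0) <= eps:
                new_thetas.append(theta)
        new_thetas = np.array(new_thetas)
        # Importance weights, Equation (12): prior over proposal density
        # (normalizing constants cancel when the weights are normalized).
        kernel = np.exp(-0.5 * ((new_thetas[:, None] - thetas[None, :])
                                / sigma) ** 2)
        phi = (kernel * weights[None, :]).sum(axis=1)
        new_weights = np.array([prior_pdf(t) for t in new_thetas]) / phi
        thetas, weights = new_thetas, new_weights / new_weights.sum()
    return thetas, weights
```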

Figure 8. Illustration of sequential Monte Carlo ABC using the tuberculosis example. The first proposal distribution is the prior and the threshold value used is $\epsilon_1$. The proposal distribution in iteration $t$ is based on the sample of size $N$ from the previous iteration. The threshold value $\epsilon_t$ is decreased at every iteration as the proposal distributions become similar to the true posterior. The figure shows parameters drawn from the proposal distribution of the third iteration ($t = 3$). The red proposal is rejected because the corresponding simulation outcome is too far from the observed data $y_0$. At the first iteration, however, it would have been accepted. After iteration $t$, the accepted parameter values follow the approximate posterior $p_{d,\epsilon_t}(\theta \mid y_0)$. As long as the threshold values $\epsilon_t$ decrease, the approximation becomes more accurate at each iteration.

Similar to MCMC ABC, the samples proposed by the SMC algorithm follow the posterior $p_{d,\epsilon}(\theta \mid y_0)$ with some added noise. The proposed parameter values are drawn from the prior only at the first iteration, after which adaptive proposal distributions $\phi_t$ closer to the true posterior are used (see Fig. 8 for an illustration). This reduces the running time as the number of rejections is lower compared to the basic rejection ABC algorithm. For small values of $\epsilon_t$, however, the probability to accept a parameter value becomes very small, even if the parameter value was sampled from the true posterior. This results in long computing times in the final iterations of the algorithm without notable improvements in the approximation of the posterior.

Post-Sampling Correction Methods

We assume here that the distance $d$ is specified in terms of summary statistics, that is, $d(y_\theta, y_0) = d_T(t_\theta, t_0)$, with $t_\theta = T(y_\theta)$ and $t_0 = T(y_0)$. As $\epsilon$ decreases to zero, the approximate posterior $p_{d,\epsilon}(\theta \mid y_0)$ in Equation (5) converges to $p(\theta \mid t_0)$, where we use $p(\theta \mid t)$ to denote the conditional distribution of $\theta$ given a value $t$ of the summary statistics. While small values of $\epsilon$ are thus preferred in theory, making them too small is not feasible in practice because of the correspondingly small acceptance rate and the resulting large Monte Carlo error. We here present two schemes that aim at adjusting $p_{d,\epsilon}(\theta \mid y_0)$ without further sampling so that the adjusted distribution is closer to $p(\theta \mid t_0)$.

For the first scheme, we note that if we had a mechanism to sample from $p(\theta \mid t)$, we could sample from the limiting approximate posterior by using $t = t_0$. The post-sampling correction methods in the first scheme thus estimate $p(\theta \mid t)$ and use the estimated conditional distributions to sample from $p(\theta \mid t_0)$. In order to facilitate sampling, $p(\theta \mid t)$ is expressed in terms of a generative (regression) model,

$$ \theta = f(t, \xi), \qquad (14) $$

where $f$ is a vector-valued function and $\xi$ a vector of random variables for the residuals. By suitably defining $f$, we can assume that the random variables of the vector $\xi$ are independent, of zero mean and equal variance, and that their distribution does not depend on $t$. Importantly, the model does not need to hold for all $t$ because, ultimately, we would like to sample from it using $t = t_0$ only. Assuming that the model holds for $t$ in a neighborhood of $t_0$, and that we have (weighted) samples $(\tilde\theta^{(i)}, t^{(i)})$ available from an ABC algorithm run with a threshold $\epsilon$, the model $f$ can be estimated by regressing the $\tilde\theta^{(i)}$ on the summary statistics $t^{(i)}$.

In order to sample from $p(\theta \mid t_0)$ using the estimated model $\hat f$, we need to know the distribution of $\xi$. For that, the residuals $\xi^{(i)}$ are determined by solving the regression equation,

$$ \tilde\theta^{(i)} = \hat f(t^{(i)}, \xi^{(i)}). \qquad (15) $$

The residuals $\xi^{(i)}$ can be used to estimate the distribution of $\xi$, or, as is usually the case in ABC, be directly employed in the sampling of the adjusted parameter values $\theta^{(i)}$,

$$ \theta^{(i)} = \hat f(t_0, \xi^{(i)}). \qquad (16) $$

If the original samples $\tilde\theta^{(i)}$ are weighted, both the $\xi^{(i)}$ and the new "adjusted" samples $\theta^{(i)}$ inherit the weights. By construction, if the relation between $\theta$ and $t$ is estimated correctly, the (weighted) samples $\theta^{(i)}$ follow $p(\theta \mid t_0)$, the limiting approximate posterior for $\epsilon \to 0$.

In most models $f$ employed so far, the individual components of $\theta$ are treated separately, thus not accounting for possible correlations between them. For this paragraph, we thus let $\theta$ be a scalar. The first regression model used was linear (Beaumont et al. 2002),

$$ \theta = f_1(t, \xi), \qquad f_1(t, \xi) = \alpha + (t - t_0)^\top \beta + \xi, \qquad (17) $$

which results in the adjustment $\theta^{(i)} = \tilde\theta^{(i)} - (t^{(i)} - t_0)^\top \hat\beta$, where $\hat\beta$ is the learned regression coefficient (Fig. 9). When applied to the model of the spread of tuberculosis, with summary statistic $T_1$ [see Equation (7)], the adjustment is able to correct the bias caused by the nonzero threshold $\epsilon$, that is, the estimated model $\hat f_1$ is accurate (Fig. 10a). With summary statistic $T_2$, the threshold is too large for an accurate adjustment, although the result is still closer to the target distribution than the original (Fig. 10b). Note also that here the target distribution of the adjustment is substantially different from the true posterior due to the bias incurred by summary statistic $T_2$.
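The adjustment itself is a few lines of code once the accepted parameters and their summary statistics are available. A minimal sketch for a scalar parameter, unweighted samples, and a least-squares fit (with weighted samples one would use a weighted regression instead):

```python
import numpy as np

def linear_adjustment(thetas, ts, t0):
    """Beaumont et al. (2002) adjustment: theta - (t - t0) @ beta_hat.

    thetas: accepted parameters, shape (N,)
    ts:     their summary statistics, shape (N, k)
    t0:     observed summary statistics, shape (k,)
    """
    # Design matrix with an intercept and the centered summaries (t - t0).
    X = np.column_stack([np.ones(len(ts)), ts - t0])
    coef, *_ = np.linalg.lstsq(X, thetas, rcond=None)
    beta_hat = coef[1:]
    # Move each sample to where the fitted line predicts it at t = t0,
    # preserving the residuals.
    return thetas - (ts - t0) @ beta_hat
```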

Figure 9. Illustration of the linear regression adjustment (Beaumont et al. 2002). First, the regression model $\hat f_1$ is learned and then, based on $\hat f_1$, the simulations are adjusted as if they were sampled with $t^{(i)} = t_0$. Note that the residuals $\xi^{(i)}$ are preserved. The change in the posterior densities after the adjustment is shown on the right. Here, the black (original) and green (adjusted) curves are the same as in Figure 10(b).

Figure 10. Linear regression adjustment (Beaumont et al. 2002) applied to the example model of the spread of tuberculosis (compare to Fig. 5). The target distribution of the adjustment is the posterior $p(\theta \mid t_0)$, that is, the approximate posterior with the threshold decreased to zero. Note that when using summary statistic $T_2$ the target distribution is substantially different from the true posterior (reference) because of the bias incurred by $T_2$. a) Summary statistic $T_1$. b) Summary statistic $T_2$.

Nonlinear models $f$ have also been proposed. Blum (2010) assumed a quadratic model,

$$ \theta = f_2(t, \xi), \qquad f_2(t, \xi) = \alpha + (t - t_0)^\top \beta + \tfrac{1}{2}(t - t_0)^\top \gamma\, (t - t_0) + \xi, \qquad (18) $$

where $\gamma$ is a symmetric matrix that adds a quadratic term to the linear adjustment. A more general nonlinear model was considered by Blum and François (2010),

$$ \theta = f_3(t, \xi), \qquad f_3(t, \xi) = m(t) + \sigma(t)\, \xi, \qquad (19) $$

where $m(t)$ models the conditional mean and $\sigma(t)$ the conditional standard deviation of $\theta$. Both functions were fitted using a multi-layer neural network, and denoting the learned functions by $\hat m$ and $\hat\sigma$, the following adjustments were obtained:

$$ \theta^{(i)} = \hat m(t_0) + \hat\sigma(t_0)\, \hat\sigma(t^{(i)})^{-1} \left(\tilde\theta^{(i)} - \hat m(t^{(i)})\right). \qquad (20) $$

The term $\hat m(t_0)$ is an estimate of the posterior mean of $\theta$, whereas $\hat\sigma(t_0)$ is an estimate of the posterior standard deviation of the parameter. They can both be used to succinctly summarize the posterior distribution of $\theta$.

A more complicated model $f$ is not necessarily better than a simpler one. It depends on the amount of training data available to fit it, that is, the number of original samples $(\tilde\theta^{(i)}, t^{(i)})$ that satisfy $d_T(t^{(i)}, t_0) \le \epsilon$. The different models presented above were compared by Blum and François (2010), who also pointed out that techniques for model selection from the regression literature can be used to select among them.

While the first scheme to adjust $p_{d,\epsilon}(\theta \mid y_0)$ consists of estimating $p(\theta \mid t)$, the second scheme consists of estimating $p(t \mid \theta)$, that is, the conditional distribution of the summary statistics given a parameter value. The rationale of this approach is that knowing $p(t \mid \theta)$ implies knowing the approximate likelihood function $L_{d,\epsilon}(\theta)$, because $L_{d,\epsilon}(\theta) \propto \Pr(d_T(t_\theta, t_0) \le \epsilon)$ when the distance $d$ is specified in terms of summary statistics.

Importantly, $p(t \mid \theta)$ does not need to be known everywhere but only locally around $t_0$, where $d_T(t, t_0) \le \epsilon$.

If we use $p_\epsilon(t \mid \theta)$ to denote the distribution of the summary statistics conditional on $\theta$ and the event $d_T(t, t_0) \le \epsilon$, Leuenberger and Wegmann (2010) showed that $p_\epsilon(t_0 \mid \theta)$ takes the role of a local likelihood function and $p_{d,\epsilon}(\theta \mid y_0)$ the role of a local prior, and that the local posterior equals the true posterior $p(\theta \mid t_0)$.

The functional form of $p_\epsilon(t \mid \theta)$ is generally not known. However, as in the first scheme, running an ABC algorithm with threshold $\epsilon$ provides data $(\tilde\theta^{(i)}, t^{(i)})$ that can be used to estimate a model of $p_\epsilon(t \mid \theta)$. Since the model does not need to hold for all values of the summary statistics, but only for those in the neighborhood of $t_0$, Leuenberger and Wegmann (2010) proposed to model $p_\epsilon(t \mid \theta)$ as Gaussian with a constant covariance matrix and a mean depending linearly on $\theta$. When the samples $\tilde\theta^{(i)}$ are used to approximate the local prior as a kernel density estimate, the Gaussianity assumption on $p_\epsilon(t \mid \theta)$ facilitates the derivation of closed-form formulae to adjust the kernel density representation so that it becomes an approximation of $p(\theta \mid t_0)$ (Leuenberger and Wegmann 2010).

While Leuenberger and Wegmann (2010) modeled $p_\epsilon(t \mid \theta)$ as Gaussian, other models can be used as well. Alternatively, one may make the mean of the Gaussian depend nonlinearly on $\theta$ and allow the covariance of the summary statistics to depend on $\theta$. This was done by Wood (2010), and the model was found rich enough to represent $p(t \mid \theta)$ for all values of the summary statistics, not only for those in the neighborhood of the observed one.

Recent Developments

We here present recent advances that aim to make ABC both computationally and statistically more efficient. This presentation focuses on our own work (Gutmann et al. 2014; Gutmann and Corander 2016).

Computational Efficiency

The computational cost of ABC can be attributed to two main factors:

  • (1) Most of the parameter values result in large distances between the simulated and observed data and those parameter values for which the distances tend to be small are unknown.

  • (2) Generating simulated data sets, that is, running the simulator, may be costly.

MCMC ABC and SMC ABC were partly introduced to avoid proposing parameters in regions where the distance is large. Nonetheless, typically millions of simulations are needed to infer the posterior distribution of only a handful of parameters. A key obstacle to efficiency in these algorithms is the continued presence of the rejection mechanism $d(y_\theta, y_0) \le \epsilon$, or more generally, the online decisions about the similarity between $y_\theta$ and $y_0$. In recent work, Gutmann and Corander (2016) proposed a framework called Bayesian optimization for likelihood-free inference (BOLFI), which overcomes this obstacle by learning a probabilistic model of the stochastic relation between the parameter values and the distance (Fig. 11). After learning, the model can be used to approximate $L_{d,\epsilon}(\theta)$, and thus $p_{d,\epsilon}(\theta \mid y_0)$, for any $\theta$ without requiring further runs of the simulator (Fig. 12).

Figure 11. The basic idea of BOLFI is to model the distance, and to prioritize regions of the parameter space where the distance tends to be small. The solid curves show the modeled average behavior of the distance, and the dashed curves its variability, for the tuberculosis example. a) After initialization (30 data points). b) After active data acquisition (200 data points).

Figure 12. In BOLFI, the estimated model of the distance is used to approximate $L_{d,\epsilon}(\theta)$ by computing the probability that the distance is below a threshold $\epsilon$. This kind of likelihood approximation leads to a model-based approximation of $p_{d,\epsilon}(\theta \mid y_0)$. The KL divergence between the reference solution and the BOLFI solution with 30 data points is 0.09, and for 200 data points it is 0.01. Comparison with Figure 6 shows that BOLFI increases the computational efficiency of ABC by several orders of magnitude. a) Approximate likelihood function. b) Model-based posteriors.

Like the post-sampling correction methods presented in the previous section, BOLFI relies on a probabilistic model to make ABC more efficient. However, the quantities modeled differ, since in the post-sampling correction methods the relation between summary statistics and parameters is modeled, while BOLFI focuses on the relation between the parameters and the distance. A potential advantage of the latter approach is that the distance is a univariate quantity while the parameters and summary statistics may be multidimensional. Furthermore, BOLFI does not assume that the distance is defined via summary statistics and can be used without first running another ABC algorithm.

Learning the model of the distance requires data about the relation between $\theta$ and $d(y_\theta, y_0)$. In BOLFI, these data are actively acquired, focusing on regions of the parameter space where the distance tends to be small. This is achieved by leveraging techniques from Bayesian optimization (see e.g., Jones 2001; Brochu et al. 2010), hence the name. Ultimately, the framework of Gutmann and Corander (2016) reduces the computational cost of ABC by addressing both of the factors mentioned above. The first point is addressed by learning from data which parameter values tend to yield small distances; the second is resolved by focusing on areas where the distance tends to be small when learning the model, and by not requiring further runs of the simulator once the model is learned.

While BOLFI is not restricted to a particular model of the distance, Gutmann and Corander (2016) used Gaussian processes in the applications of their paper. Gaussian processes have also been used in other work as surrogate models for quantities that are expensive to compute. Wilkinson (2014) used them to model the logarithm of the likelihood, with training data constructed from quasi-random numbers covering the parameter space. Meeds and Welling (2014) used Gaussian processes to model the empirical mean and covariances of the summary statistics as a function of $\theta$. Instead of simulating these quantities for every $\theta$, values from the model were used in an MCMC algorithm to approximate the likelihood. These approaches have been demonstrated to speed up ABC.
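The following sketch conveys the flavor of such surrogate modeling for a scalar parameter: a Gaussian process is fitted to (parameter, distance) pairs, and the approximate likelihood of Equation (6) is then read off the model as the probability that the distance falls below $\epsilon$. This is a simplified stand-in for BOLFI, without the active acquisition of training points (here they would simply be drawn from the prior), and it assumes scikit-learn's GaussianProcessRegressor as the surrogate:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_distance_model(thetas, distances):
    """Fit a GP to the relation between parameters and distances."""
    kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel)
    gp.fit(np.asarray(thetas).reshape(-1, 1), np.asarray(distances))
    return gp

def approximate_likelihood(gp, theta_grid, eps):
    """Model-based Pr(distance <= eps) on a grid of parameter values,
    computed from the GP's Gaussian predictive distribution."""
    mean, std = gp.predict(theta_grid.reshape(-1, 1), return_std=True)
    return norm.cdf((eps - mean) / std)
```

Up to normalization, the returned probabilities play the role of $L_{d,\epsilon}(\theta)$ in Equation (6), and multiplying them by the prior yields a model-based approximation of the posterior.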

Statistical Efficiency

We have seen that the statistical efficiency of ABC algorithms depends heavily on the summary statistics chosen, the distance between them, and the locality of the inference. In recent work, Gutmann et al. (2014) formulated the problem of measuring the distance between simulated and observed data as a classification problem: two data sets are judged maximally similar if they cannot be told apart significantly above chance level (50% accuracy in the classification problem). On the other hand, two data sets are maximally dissimilar if they can be told apart with 100% classification accuracy. In essence, classification is used to assess the distance between simulated and observed data.

The classification rule used to measure the distance was learned from the data, which simplifies the inference since only a function (hypothesis) space needs to be prespecified by the user. In the process, Gutmann et al. (2014) also chose a subset or weighted (nonlinear) combination of summary statistics to achieve the best classification accuracy. This choice depended on the parameter values used to generate the simulated data. While computationally more expensive than the traditional approach, the classifier approach has the advantage of being a data-driven way to measure the distance between the simulated and observed data that respects the locality of the inference.

Validation of ABC

Due to the several levels of approximation, it is generally advisable to perform validatory analyses of the ABC inferences. We here discuss some of the possibilities suggested in the literature.

The ability to generate data from simulator-based models enables basic sanity checks of the feasibility of the inference with a given setting and algorithm. The general approach is to perform inference on synthetic data sets $y^{(i)}$ generated with known parameter values $\theta^{(i)}$, which play the role of the observed data $y_0$. To assess whether the posterior distribution is concentrated around the right parameter values, one may then compute the average error between the posterior mean (or mode) and $\theta^{(i)}$, or the expected squared distance between the posterior samples and $\theta^{(i)}$ (Wegmann et al. 2009). To assess whether the spread of the posterior distribution is neither overly large nor overly small, one may compute confidence (credibility) intervals and check their coverage. When the nominal confidence levels are accurate, 95% confidence intervals, for example, should contain $\theta^{(i)}$ in 95% of the simulation experiments (Wegmann et al. 2009; Prangle et al. 2014). Such tests can be performed a priori, by sampling $\theta^{(i)}$ from the prior before having seen the actual data to be analyzed, or a posteriori, by sampling $\theta^{(i)}$ from the inferred posterior or from the prior restricted to some area of interest (Prangle et al. 2014). Corresponding techniques have also been suggested for the purpose of specifying the threshold value $\epsilon$, as discussed earlier in this article. It can also be beneficial to store the generated data sets together with their parameter values so that the validations can be run without having to regenerate new data on every occasion.
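A basic coverage check can be scripted on top of any ABC sampler: repeatedly draw a parameter from the prior, simulate a synthetic "observed" data set, run ABC, and record whether the credible interval contains the data-generating value. A minimal sketch, reusing the hypothetical `rejection_abc` function sketched earlier:

```python
import numpy as np

def coverage_check(sample_prior, simulate, distance, eps,
                   n_tests=100, n_samples=200, level=0.95):
    """Fraction of tests in which the ABC credible interval
    contains the data-generating parameter."""
    hits = 0
    lo, hi = 50 * (1 - level), 50 * (1 + level)  # e.g. 2.5% and 97.5%
    for _ in range(n_tests):
        theta_true = sample_prior()
        y_synth = simulate(theta_true)           # plays the role of y0
        post = rejection_abc(sample_prior, simulate, distance,
                             y_synth, eps, n_samples)
        a, b = np.percentile(post, [lo, hi])
        hits += a <= theta_true <= b
    return hits / n_tests  # should be close to `level`
```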

The ABC framework provides a straightforward way to investigate the goodness-of-fit of the model. The distances $d(y_\theta, y_0)$ indicate how close the simulated data $y_\theta$ are to the observed data $y_0$. If all of the distances remain large, it may be an indication of a deficient model, as the model is unable to produce data similar to the observed data. Ratmann et al. (2009) proposed a method called ABC under model uncertainty (ABC$\mu$) where they augment the likelihood with unknown error terms for each of the different summary statistics used. The error terms are assumed to have mean zero and are sampled together with the parameters of the model. If, however, the mean of the error terms is found to deviate from zero, it may indicate a systematic error in the model.

Yet another issue is the identifiability of the model given the observed data. The likelihood function indicates the extent to which parameter values are congruent with the observed data. A strong curvature at its maximum indicates that the maximizing parameter value is clearly to be preferred, whereas a minor curvature means that several other parameter values are nearly equally supported by the data. More generally, if the likelihood surface is mostly flat over the parameter space, the data do not provide sufficient information to identify the model parameters. While the likelihood function is generally not available for simulator-based models, the same arguments hold for the approximate likelihood function $\hat{L}(\theta)$ in Equation (6). On one hand, the approximate likelihood function can be used to investigate the identifiability of the simulator-based model. On the other hand, it allows one to assess the quality of the chosen distance $d$ or threshold $\epsilon$: flat approximate likelihood surfaces, for instance, indicate that $\epsilon$ may be too large or that the distance function $d$ is unable to accurately measure differences between the data sets.

The approximate likelihood $\hat{L}(\theta)$ can be obtained either with the method of Gutmann and Corander (2016) or with any other ABC algorithm by assuming a uniform prior on a region of interest. Lintusaari et al. (2016) used such an approach to investigate the identifiability of the tuberculosis model considered as an example in the previous sections, and to compare different distance functions. Further, one may (visually) compare the (marginal) prior with the inferred (marginal) posterior (e.g., Blum 2010). Both approaches are applicable not only to the real observed data $y_0$ but also to synthetic data $y_0^{(i)}$ for which the data-generating parameter values $\theta^{(i)}$ are known. If the employed ABC algorithm is working appropriately, both $\hat{L}(\theta)$ and the posteriors should clearly change when the characteristics of the observed data change markedly. In particular, if the number of observations is increased, the approximate likelihood and posterior should in general become more concentrated around the data-generating parameter values. While failure to pass such sanity checks may indicate that the choice of $d$ and $\epsilon$ could be improved, it can also indicate that the model is not fully identifiable.
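For a scalar parameter, the approximate likelihood surface can be probed directly by estimating the acceptance probability on a grid, as in the following minimal Monte Carlo sketch (all names are placeholders, and we write Lhat(theta) for $\hat{L}(\theta)$ in the comments):

import numpy as np

def approx_likelihood_surface(y0, theta_grid, simulate, distance, eps, n_rep=200):
    # Monte Carlo estimate of Lhat(theta) = Pr(d(y_theta, y0) <= eps) over a grid.
    surface = np.empty(len(theta_grid))
    for i, theta in enumerate(theta_grid):
        d = np.array([distance(simulate(theta), y0) for _ in range(n_rep)])
        surface[i] = np.mean(d <= eps)  # acceptance probability at theta
    # A surface that stays flat across the grid suggests weak identifiability,
    # an overly large eps, or a distance that cannot discriminate the data sets.
    return surface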

Conclusion

It is possible to model complex biological phenomena realistically with the aid of simulator-based models. However, the likelihood function of such models is usually intractable, which poses serious methodological challenges for statistical inference. ABC has become synonymous with approximate Bayesian inference for simulator-based models. We have reviewed here its foundations and the most widely used inference algorithms, together with recent advances that increase its statistical and computational efficiency.

While this review is restricted to Bayesian methods, there exists a large body of literature on non-Bayesian approaches, for instance, the method of simulated moments (McFadden 1989; Pakes and Pollard 1989) or indirect inference (Gouriéroux et al. 1993; Heggland and Frigessi 2004), both of which originate in econometrics.

We focused on the central topics related to parameter inference with ABC. Nevertheless, ABC is also applicable to model selection (see, e.g., the review by Marin et al. 2012), and while we have reviewed methods that make the basic ABC algorithms more efficient, we have not discussed the important topic of how to use ABC for high-dimensional inference. We point interested readers to the work by Li et al. (2015) and to the discussion by Gutmann and Corander (2016).

For practical purposes, multiple software packages exist that implement the different ABC algorithms, summary-statistic selection, validation methods, postprocessing, and ABC model selection. Nunes and Prangle (2015) provide a recent list of available packages with information about their implementation language, platform, and targeted field of study. In summary, ABC is currently a very active field of methodological research, and this activity will likely yield several advances that improve its applicability to important biological research questions in the near future.

Appendix

For completeness, we state below the algorithms for MCMC ABC and SMC ABC by Marjoram et al. (2003) and Beaumont et al. (2009), respectively.

[Algorithm: MCMC ABC (Marjoram et al. 2003); displayed as an image in the original.]
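For readers without access to the original figure, the following is a minimal sketch of the MCMC ABC algorithm in the spirit of Marjoram et al. (2003), for a scalar parameter with a symmetric Gaussian random-walk proposal (so that the proposal ratio cancels in the acceptance probability); prior_pdf, simulate, and distance are user-supplied placeholders.

import numpy as np

def mcmc_abc(y0, theta_init, prior_pdf, simulate, distance, eps,
             n_iter=10000, step=0.1, seed=0):
    # theta_init must lie in the support of the prior.
    rng = np.random.default_rng(seed)
    theta = theta_init
    chain = np.empty(n_iter)
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal()  # symmetric random walk
        # Step 1: simulate data at the proposal and apply the ABC acceptance test.
        if distance(simulate(proposal), y0) <= eps:
            # Step 2: Metropolis-Hastings correction; the proposal ratio cancels
            # for a symmetric kernel, leaving only the prior ratio.
            if rng.uniform() < min(1.0, prior_pdf(proposal) / prior_pdf(theta)):
                theta = proposal
        chain[i] = theta
    return chain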

[Algorithm: SMC ABC (Beaumont et al. 2009); displayed as an image in the original.]
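Likewise, the following is a minimal sketch of SMC ABC in the spirit of Beaumont et al. (2009) for a scalar parameter: a population of particles is filtered through a decreasing threshold schedule, with Gaussian perturbation kernels whose variance is twice the weighted empirical variance of the current population, and with the corresponding importance weights. All helper names are placeholders.

import numpy as np

def smc_abc(y0, prior_sample, prior_pdf, simulate, distance,
            eps_schedule, n_particles=500, seed=0):
    rng = np.random.default_rng(seed)
    # Generation 0: plain rejection ABC with the largest threshold.
    particles = []
    while len(particles) < n_particles:
        theta = prior_sample()
        if distance(simulate(theta), y0) <= eps_schedule[0]:
            particles.append(theta)
    particles = np.array(particles)
    weights = np.full(n_particles, 1.0 / n_particles)

    for eps in eps_schedule[1:]:
        # Gaussian kernel with twice the weighted variance of the current particles.
        mean = np.sum(weights * particles)
        var = 2.0 * np.sum(weights * (particles - mean) ** 2)
        new_particles = np.empty(n_particles)
        new_weights = np.empty(n_particles)
        for i in range(n_particles):
            while True:
                # Resample a particle by its weight and perturb it with the kernel.
                theta_new = (rng.choice(particles, p=weights)
                             + np.sqrt(var) * rng.standard_normal())
                if prior_pdf(theta_new) > 0 and distance(simulate(theta_new), y0) <= eps:
                    break
            # Importance weight: prior density over the kernel mixture density.
            kernel = (np.exp(-(theta_new - particles) ** 2 / (2 * var))
                      / np.sqrt(2 * np.pi * var))
            new_particles[i] = theta_new
            new_weights[i] = prior_pdf(theta_new) / np.sum(weights * kernel)
        particles = new_particles
        weights = new_weights / new_weights.sum()
    return particles, weights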

Acknowledgments

We acknowledge the computational resources provided by the Aalto Science-IT project. J.L., M.U.G., R.D., and J.C. wrote the article; J.L. performed the simulations; S.K. contributed to the writing and planning of the article.

Funding

This work was supported by the Academy of Finland (Finnish Centre of Excellence in Computational Inference Research COIN).

References

1. Aeschbacher S., Beaumont M., Futschik A. 2012. A novel approach for choosing summary statistics in approximate Bayesian computation. Genetics 192:1027–1047.
2. Anderson R.M., May R.M. 1992. Infectious diseases of humans: dynamics and control. Oxford University Press.
3. Barber S., Voss J., Webster M. 2015. The rate of convergence for approximate Bayesian computation. Electron. J. Stat. 9:80–105.
4. Baudet C., Donati B., Sinaimeri B., Crescenzi P., Gautier C., Matias C., Sagot M.-F. 2015. Cophylogeny reconstruction via an approximate Bayesian computation. Syst. Biol. 64:416–431.
5. Beaumont M.A. 2010. Approximate Bayesian computation in evolution and ecology. Annu. Rev. Ecol. Evol. Syst. 41:379–406.
6. Beaumont M.A., Cornuet J.-M., Marin J.-M., Robert C.P. 2009. Adaptive approximate Bayesian computation. Biometrika 96:983–990.
7. Beaumont M.A., Zhang W., Balding D.J. 2002. Approximate Bayesian computation in population genetics. Genetics 162:2025–2035.
8. Biau G., Cérou F., Guyader A. 2015. New insights into approximate Bayesian computation. Ann. I. H. Poincaré B 51:376–403.
9. Blum M., François O. 2010. Non-linear regression models for approximate Bayesian computation. Stat. Comput. 20:63–73.
10. Blum M.G.B. 2010. Approximate Bayesian computation: a nonparametric perspective. J. Am. Stat. Assoc. 105:1178–1187.
11. Blum M.G.B., Nunes M.A., Prangle D., Sisson S.A. 2013. A comparative review of dimension reduction methods in approximate Bayesian computation. Stat. Sci. 28:189–208.
12. Brochu E., Cora V., de Freitas N. 2010. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv:1012.2599.
13. Currat M., Excoffier L. 2004. Modern humans did not admix with Neanderthals during their range expansion into Europe. PLoS Biol. 2:e421.
14. Diggle P.J., Gratton R.J. 1984. Monte Carlo methods of inference for implicit statistical models. J. Roy. Stat. Soc. B 46:193–227.
15. Excoffier L., Dupanloup I., Huerta-Sánchez E., Sousa V.C., Foll M. 2013. Robust demographic inference from genomic and SNP data. PLoS Genet. 9:e1003905.
16. Fagundes N.J.R., Ray N., Beaumont M., Neuenschwander S., Salzano F.M., Bonatto S.L., Excoffier L. 2007. Statistical evaluation of alternative models of human evolution. Proc. Natl Acad. Sci. 104:17614–17619.
17. Faisal M., Futschik A., Hussain I. 2013. A new approach to choose acceptance cutoff for approximate Bayesian computation. J. Appl. Stat. 40:862–869.
18. Fan H.H., Kubatko L.S. 2011. Estimating species trees using approximate Bayesian computation. Mol. Phylogenet. Evol. 59:354–363.
19. Fearnhead P., Prangle D. 2012. Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. J. Roy. Stat. Soc. B 74:419–474.
20. Gouriéroux C., Monfort A., Renault E. 1993. Indirect inference. J. Appl. Econ. 8:S85–S118.
21. Green P., Latuszynski K., Pereyra M., Robert C.P. 2015. Bayesian computation: a summary of the current state, and samples backwards and forwards. Stat. Comput. 25:835–862.
22. Gutmann M.U., Corander J. 2016. Bayesian optimization for likelihood-free inference of simulator-based statistical models. J. Mach. Learn. Res. 17(125):1–47.
23. Gutmann M.U., Dutta R., Kaski S., Corander J. 2014. Statistical inference of intractable generative models via classification. arXiv:1407.4981.
24. Heggland K., Frigessi A. 2004. Estimating functions in indirect inference. J. Roy. Stat. Soc. B 66:447–462.
25. Hickerson M.J., Stahl E.A., Lessios H.A. 2006. Test for simultaneous divergence using approximate Bayesian computation. Evolution 60:2435–2453.
26. Itan Y., Powell A., Beaumont M.A., Burger J., Thomas M.G. 2009. The origins of lactase persistence in Europe. PLoS Comput. Biol. 5:e1000491.
27. Jones D. 2001. A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 21:345–383.
28. Joyce P., Marjoram P. 2008. Approximately sufficient statistics and Bayesian computation. Stat. Appl. Genet. Mol. Biol. 7(1): Article 26.
29. Kullback S., Leibler R.A. 1951. On information and sufficiency. Ann. Math. Stat. 22:79–86.
30. Leuenberger C., Wegmann D. 2010. Bayesian computation and model selection without likelihoods. Genetics 184:243–252.
31. Lintusaari J., Gutmann M.U., Kaski S., Corander J. 2016. On the identifiability of transmission dynamic models for infectious diseases. Genetics 202(3):911–918.
32. Li J., Nott D.J., Sisson S.A. 2015. Extending approximate Bayesian computation methods to high dimensions via Gaussian copula. arXiv:1504.04093.
33. Marin J.-M., Pudlo P., Robert C., Ryder R. 2012. Approximate Bayesian computational methods. Stat. Comput. 22:1167–1180.
34. Marjoram P., Molitor J., Plagnol V., Tavaré S. 2003. Markov chain Monte Carlo without likelihoods. Proc. Natl Acad. Sci. 100:15324–15328.
35. Marttinen P., Croucher N.J., Gutmann M.U., Corander J., Hanage W.P. 2015. Recombination produces coherent bacterial species clusters in both core and accessory genomes. Microb. Genom. doi:10.1099/mgen.0.000038.
36. McFadden D. 1989. A method of simulated moments for estimation of discrete response models without numerical integration. Econometrica 57:995–1026.
37. Meeds E., Welling M. 2014. GPS-ABC: Gaussian process surrogate approximate Bayesian computation. Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI).
38. Nunes M.A., Balding D.J. 2010. On optimal selection of summary statistics for approximate Bayesian computation. Stat. Appl. Genet. Mol. Biol. 9.
39. Nunes M.A., Prangle D. 2015. abctools: an R package for tuning approximate Bayesian computation analyses. R Journal, to appear.
40. Pakes A., Pollard D. 1989. Simulation and the asymptotics of optimization estimators. Econometrica 57:1027–1057.
41. Prangle D., Blum M.G.B., Popovic G., Sisson S.A. 2014. Diagnostic tools for approximate Bayesian computation using the coverage property. Aust. NZ J. Stat. 56:309–329.
42. Pritchard J.K., Seielstad M.T., Perez-Lezaun A., Feldman M.W. 1999. Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Mol. Biol. Evol. 16:1791–1798.
43. Ratmann O., Andrieu C., Wiuf C., Richardson S. 2009. Model criticism based on likelihood-free inference, with an application to protein network evolution. Proc. Natl Acad. Sci. 106:10576–10581.
44. Ratmann O., Donker G., Meijer A., Fraser C., Koelle K. 2012. Phylodynamic inference and model assessment with approximate Bayesian computation: influenza as a case study. PLoS Comput. Biol. 8:e1002835.
45. Robert C., Casella G. 2004. Monte Carlo statistical methods. 2nd ed. Springer.
46. Rubin D.B. 1984. Bayesianly justifiable and relevant frequency calculations for the applied statistician. Ann. Stat. 12:1151–1172.
47. Silk D., Filippi S., Stumpf M.P. 2013. Optimizing threshold-schedules for sequential approximate Bayesian computation: applications to molecular systems. Stat. Appl. Genet. Mol. Biol. 12:603–618.
48. Sisson S.A., Fan Y., Tanaka M.M. 2007. Sequential Monte Carlo without likelihoods. Proc. Natl Acad. Sci. 104:1760–1765.
49. Slater G.J., Harmon L.J., Wegmann D., Joyce P., Revell L.J., Alfaro M.E. 2012. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation. Evolution 66:752–762.
50. Stadler T. 2011. Inferring epidemiological parameters on the basis of allele frequencies. Genetics 188:663–672.
51. Tanaka M.M., Francis A.R., Luciani F., Sisson S.A. 2006. Using approximate Bayesian computation to estimate tuberculosis transmission parameters from genotype data. Genetics 173:1511–1520.
52. Toni T., Welch D., Strelkowa N., Ipsen A., Stumpf M.P. 2009. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J. Roy. Soc. Interface 6:187–202.
53. Wegmann D., Leuenberger C., Excoffier L. 2009. Efficient approximate Bayesian computation coupled with Markov chain Monte Carlo without likelihood. Genetics 182:129–141.
54. Wilkinson R. 2014. Accelerating ABC methods using Gaussian processes. Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS).
55. Wood S. 2010. Statistical inference for noisy nonlinear ecological dynamic systems. Nature 466:1102–1104.
