Abstract
In this paper we employ a novel method to find the optimal design for problems where the likelihood is not available analytically, but simulation from the likelihood is feasible. To approximate the expected utility we make use of approximate Bayesian computation methods. We detail the approach for a model for spatial extremes, where the goal is to find the optimal design for efficiently estimating the parameters determining the dependence structure. The method is applied to determine the optimal design of weather stations for modeling maximum annual summer temperatures.
Electronic supplementary material
The online version of this article (doi:10.1007/s00477-015-1067-8) contains supplementary material, which is available to authorized users.
Keywords: Simulation-based optimal design, Approximate Bayesian computation, Importance sampling, Spatial extremes, Max-stable processes
Introduction
Collecting spatial data efficiently (see e.g. Müller 2007) is a problem that is frequently neglected in applied research, although there is a growing literature on the subject. Spatial sampling and monitoring situations as diverse as stream networks (Dobbie et al. 2008), water (Harris et al. 2014) and air quality (Bayraktar and Turalioglu 2005), soil properties (Lesch 2005; Spöck and Pilz 2010), radioactivity (Melles et al. 2011), biodiversity (Stein and Ettema 2003), or greenland coverage (Mateu and Müller 2012) are discussed therein.
Those approaches predominantly follow a (parametric) model-based viewpoint. Here, the inverse of the Fisher information matrix represents the uncertainties involved, and its minimization through a prudent choice of monitoring sites is desired. This corresponds to the selection of inputs or settings (the design) in an experiment and can thus draw from the rich literature on optimal experimental design (see e.g. Fedorov 1972 or Atkinson et al. 2007). There, a so-called design criterion, usually a scalar function of the information matrix, is optimized by employing various algebraic and algorithmic techniques. Often the design criterion can be interpreted as an expected utility of the experiment outcome (the collected data), and if this expected utility is an easy-to-evaluate function of the design settings, the optimal design can be found analytically. In Bayesian design, the design criterion is usually some measure of the expected information gain of the experiment (see e.g. Hainy et al. 2014), which is also called the expected utility. As utility function one would typically use a convex functional of the posterior distribution, such as the Kullback-Leibler divergence between the (uninformative) prior and the posterior distribution, to measure the additional information gained by conducting the experiment (Chaloner and Verdinelli 1995).
For problems where neither maximization of the design criterion nor the integration to evaluate the expected utility can be performed, simulation-based techniques for optimal design were proposed in Müller (1999) and Müller et al. (2004). For instance, the expected utility can be approximated by Monte Carlo integration over the utility values with respect to the prior predictive distribution.
In Bayesian design problems, the utility is typically a complex functional of the posterior distribution. Hence, a strategy could be to generate values for the parameters by employing simulation methods like Markov chain Monte Carlo (MCMC) and use these to approximate the utility values. However, as one has to generate a sample from a different posterior for each utility evaluation, this can be computationally very expensive.
We will further assume that the likelihood is not available analytically. In that case it is not possible to employ standard Bayesian estimation techniques. Therefore, we propose to use approximate Bayesian computation (ABC) methods for posterior inference. Our new approach is to utilize these methods for solving optimal design problems. We will also present a solution for quickly re-evaluating the utility values for different posterior distributions by using a large pre-simulated sample from the model.
We illustrate the application of the methodology to derive optimal designs for spatial extremes models. As noted in Erhardt and Smith (2012), models specifically designed for extremes are better suited than standard spatial models to model dependence for environmental extreme events such as hurricanes, floods, droughts or heat waves. A recent overview of modeling approaches for spatial extremes data is given in Davison et al. (2012). We will focus on models for spatial extremes based on max-stable processes to derive optimal designs for the parameters characterizing spatial dependence.
Max-stable processes are useful for modeling spatial extremes as they can be characterized by spectral representations, where spatial dependence can be incorporated conveniently. A drawback of max-stable processes is that closed forms for the likelihood function are typically available only for the bivariate marginal densities. Hence, inference using ABC as in Erhardt and Smith (2012) is a natural avenue. Often the so-called Schlather model (Schlather 2002) is employed, which models the spatial dependence in terms of an unobserved Gaussian process. It usually creates a more realistic pattern of spatial dependence than the deterministic shapes engendered by the so-called Smith model (Smith 1990), which is another very popular model for spatial extremes. Moreover, simulations from the Schlather model can be obtained fairly quickly compared to more complex models, which is important when using a simulation-heavy estimation technique such as ABC.
In our application we consider optimal design for the parameters characterizing the dependence structure of maximum annual summer temperatures in the Midwest region of the United States of America. The problem is inspired by the work of Erhardt and Smith (2014), who use data from 39 sites to derive a model for pricing weather derivatives. Our aim is to rank those sites with respect to the information they provide on the unknown dependence parameters. In this respect the paper is comparable to Chang et al. (2007), who employ a different, entropy-based technique in a similar context. Note, however, that our approach is not limited to this specific application, but could easily be adapted for other purposes.
Shortly before finalizing a first technical report on this topic (Hainy et al. 2013a), we learned of the then-unpublished paper by Drovandi and Pettitt (2013), wherein similar ideas were developed independently. However, while the basic concept of fusing a simulation-based method with ABC is essentially the same, our approach differs in various ways, particularly in how the posterior for the utility function is generated. Furthermore, we additionally suggest ways in which the methodology can be turned sequential so as to be made useful for adaptive design situations. A very general version of our concept is introduced in Hainy et al. (2013b), whereas in the current exposition we give a detailed explanation of how to employ it in a specific practical situation.
The paper is structured as follows. Section 2 reviews the essentials of simulation-based optimal design as well as the various improvements and modifications suggested lately. Section 3 is the core of the paper and details our approach to likelihood-free optimal design, with a brief section on essentials of approximate Bayesian computation. Section 4 provides an overview of modeling spatial extremes based on max-stable processes. These models are needed in the application in Sect. 5. Finally, Sect. 6 provides a discussion and gives some directions for future research.
The programs for the application were mainly written in R. The R-programs include calls to compiled C-code for the computer-intensive sampling and criterion calculation procedures. We used and adapted routines from the R-packages evd (Stephenson 2002) and SpatialExtremes (Ribatet and Singleton 2013) to analyze and manipulate the data and to simulate from the spatial extremes model. For simulating large samples or performing independent computations, we used the parallel computing and random number generation functionalities of the R-packages snow (Tierney et al. 2013) and rlecuyer (Sevcikova and Rossini 2012). All the computer-intensive parallelizable operations were conducted on an SGI Altix 4700 symmetric multiprocessing (SMP) system with 256 Intel Itanium cores (1.6 GHz) and 1 TB of global shared memory.
Simulation-based optimal design
We consider an experiment where output values (observations) $y$ are taken at input values constituting a design $d$. A model for these data is described by a likelihood $p(y \mid \theta, d)$, where $\theta$ denotes the model parameters.
Optimal design, see e.g. Atkinson et al. (2007), generally has the goal to determine the optimal configuration with respect to a criterion $U(d)$,
$$d^* = \arg\max_{d \in \mathcal{D}} U(d),$$
where $\mathcal{D}$ denotes the design space.
We adopt a Bayesian approach and assume that a prior distribution $p(\theta)$ is specified to account for parameter uncertainty. The prior distribution usually does not depend on the design $d$. If $u(y, \theta, d)$ denotes a utility function and $p(y, \theta \mid d)$ is the joint density of $y$ and $\theta$, the expected utility is given as
$$U(d) = \int\!\!\int u(y, \theta, d)\, p(y, \theta \mid d)\, d\theta\, dy. \qquad (2.1)$$
For reasonable choices of utility functions and a detailed introduction to Bayesian optimal design see Chaloner and Verdinelli (1995).
In many applications, neither analytic nor numerical integration is feasible, but simulation-based design can be performed by approximating the criterion by Monte Carlo integration,
$$\hat{U}(d) = \frac{1}{M} \sum_{m=1}^{M} u\big(y^{(m)}, \theta^{(m)}, d\big),$$
if samples $(y^{(m)}, \theta^{(m)})$, $m = 1, \ldots, M$, from the joint distribution $p(y, \theta \mid d)$ can be generated and the utility is easy to evaluate. Sampling from the joint distribution can typically be performed by sampling $\theta^{(m)}$ from its prior distribution $p(\theta)$ and $y^{(m)}$ from the likelihood $p(y \mid \theta^{(m)}, d)$.
Often, however, design criteria are not straightforward to evaluate as they require some integration: classical criteria based on the Fisher information matrix, such as D-optimality, are defined as expected values of some functional with respect to the likelihood $p(y \mid \theta, d)$ (Atkinson et al. 2007), whereas Bayesian utility functions, e.g. the popular Kullback-Leibler divergence/Shannon information, are expected values with respect to the posterior distribution of the parameters, $p(\theta \mid y, d)$ (Chaloner and Verdinelli 1995). Thus, we can write $u(y, \theta, d) = u(y, d)$, since the parameters are integrated out in a Bayesian utility function.
If $\tilde{u}(y, d)$ denotes an approximation of the utility $u(y, d)$, the expected utility $U(d)$ can be approximated by
$$\hat{U}(d) = \frac{1}{M} \sum_{m=1}^{M} \tilde{u}\big(y^{(m)}, d\big), \qquad (2.2)$$
where $y^{(m)}$ is sampled from the prior predictive distribution $p(y \mid d) = \int p(y \mid \theta, d)\, p(\theta)\, d\theta$. We will focus on this case in the rest of the paper.
A very general form of simulation-based design, which was proposed by Müller (1999), further fuses the approximation and the optimization of $U(d)$ and could be employed here as well. However, for simplicity in this paper we consider only cases with a finite design space $\mathcal{D}$, where $|\mathcal{D}|$ is small and thus it is feasible to compute $\hat{U}(d)$ for each design $d \in \mathcal{D}$ and rank the results.
We further assume that neither the likelihood nor the posterior is available in closed form. Hence we will use ABC methods to sample from the posterior distribution to approximate the Bayesian design criterion, see Sect. 3 for a detailed description.
We will also consider the more general case where the prior distribution of the parameters, $p(\theta)$, is replaced by the posterior distribution, $p(\theta \mid y_0, d_0)$, which depends on observations $y_0$ previously collected at design points $d_0$. Thus, information from these data about the parameter distribution can be easily incorporated into the approximation of the utility.
Likelihood-free optimal design
In this section we will elaborate on particular aspects of simulation-based optimal design without using likelihoods. The general concept was introduced in Hainy et al. (2013b) and termed "ABCD" (approximate Bayesian computation design). The first two subsections review some basic notions of ABC, whereas the last presents two variants for approximating a design criterion $U(d)$ by $\hat{U}(d)$. This can eventually be optimized to yield
$$\hat{d}^* = \arg\max_{d \in \mathcal{D}} \hat{U}(d)$$
by stochastic optimization routines (see Huan and Marzouk 2013), which are designed to deal with noisy objective functions, or—as in our example—various designs can be directly compared with respect to their approximated criterion values $\hat{U}(d)$.
Approximate Bayesian computation (ABC)
To tackle problems where the likelihood function cannot be evaluated, likelihood-free methods, also known as approximate Bayesian computation, have been developed. These methods have been successfully employed in population genetics (Beaumont et al. 2002), Markov process models (Toni et al. 2009), models for extremes (Bortot et al. 2007), and many other applications; see Sisson and Fan (2011) for further examples.
ABC methods rely on sampling parameters $\theta$ from the prior and auxiliary data $z$ from the likelihood $p(z \mid \theta, d)$ to obtain a sample from an approximation to the posterior distribution $p(\theta \mid y, d)$. This approximation is constituted from those draws of $\theta$ for which $z$ is in some sense close to the observed data $y$.
More formally, let $\rho(z, y)$ be a discrepancy function that compares the observed and the auxiliary data (cf. Drovandi and Pettitt 2013). In most cases, $\rho(z, y) = \tilde{\rho}\big(s(z), s(y)\big)$ for a discrepancy function $\tilde{\rho}$ defined on the space of a lower-dimensional summary statistic $s(\cdot)$. An ABC rejection sampler iterates the following steps:
Draw $\theta'$ from the prior $p(\theta)$.
Simulate auxiliary data $z$ from the likelihood $p(z \mid \theta', d)$.
Accept $\theta'$ if $\rho(z, y) \le \epsilon$ for a given tolerance level $\epsilon$.
This algorithm draws from the ABC posterior
$$p_\epsilon(\theta \mid y, d) \propto p(\theta) \int K_\epsilon\big(\rho(z, y)\big)\, p(z \mid \theta, d)\, dz, \qquad (3.1)$$
where $K_\epsilon(\cdot)$ is the uniform kernel with bandwidth $\epsilon$, i.e. $K_\epsilon\big(\rho(z, y)\big) \propto \mathbb{1}\{\rho(z, y) \le \epsilon\}$. Here $\mathbb{1}\{\cdot\}$ is the indicator function, which takes the value 1 if $\rho(z, y) \le \epsilon$ and 0 otherwise. The ABC posterior is equal to the targeted posterior $p(\theta \mid y, d)$ if the summary statistic is sufficient and $K_\epsilon$ is a point mass at the point $\rho(z, y) = 0$.
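To make the scheme concrete, the following is a minimal sketch in R; the helpers prior_sample(), simulate_data() and summary_stat() are hypothetical placeholders, the parameter and summary statistic are assumed scalar, and this is an illustration of the general algorithm rather than the exact implementation used in the paper.

```r
# Minimal ABC rejection sampler (cf. Algorithm 1); prior_sample(), simulate_data()
# and summary_stat() are hypothetical user-supplied helpers.
abc_rejection <- function(y_obs, d, n_draws, eps,
                          prior_sample, simulate_data, summary_stat) {
  s_obs <- summary_stat(y_obs)
  accepted <- numeric(0)
  for (i in seq_len(n_draws)) {
    theta <- prior_sample()                # draw theta' from the prior p(theta)
    z <- simulate_data(theta, d)           # simulate auxiliary data from the likelihood
    rho <- abs(summary_stat(z) - s_obs)    # discrepancy on the summary-statistic scale
    if (rho <= eps) accepted <- c(accepted, theta)
  }
  accepted                                 # draws from the ABC posterior
}
```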
If $K_\epsilon(\cdot)$ is a more general smoothing kernel, e.g. the Gaussian or the Epanechnikov kernel, the resulting ABC posterior can be sampled using importance sampling (cf. e.g. Fearnhead and Prangle 2012). Let $q(\theta)$ denote a proposal density for $\theta$ with sufficient support (at least the support of the prior $p(\theta)$); then ABC importance sampling can be performed as follows:
Draw $\theta^{(i)}$ from the proposal $q(\theta)$ and simulate auxiliary data $z^{(i)}$ from the likelihood $p(z \mid \theta^{(i)}, d)$.
Assign the non-normalized importance weight
$$w^{(i)} \propto \frac{K_\epsilon\big(\rho(z^{(i)}, y)\big)\, p(\theta^{(i)})}{q(\theta^{(i)})}. \qquad (3.2)$$
As the likelihood terms cancel out in the weights, explicit evaluation of the likelihood function is not necessary.
Algorithm 2 produces a weighted approximation of the augmented distribution
$$p_\epsilon(\theta, z \mid y, d) \propto K_\epsilon\big(\rho(z, y)\big)\, p(z \mid \theta, d)\, p(\theta), \qquad (3.3)$$
and hence the marginal sample of $\theta$ is an approximation of the marginal ABC posterior given in Eq. (3.1). Obviously, the ABC rejection sampler is a special case of the importance sampler, where the proposal distribution is the prior, i.e. $q(\theta) = p(\theta)$, and the non-normalized importance weights are either equal to zero or one.
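A compact R sketch of this weighted scheme is given below; as before, the helper functions (prop_sample(), prop_dens(), prior_dens(), simulate_data(), summary_stat()) are hypothetical placeholders, and a Gaussian kernel is used for $K_\epsilon$.

```r
# Minimal ABC importance sampler (cf. Algorithm 2) with a Gaussian kernel.
abc_importance <- function(y_obs, d, n_draws, eps,
                           prop_sample, prop_dens, prior_dens,
                           simulate_data, summary_stat) {
  s_obs <- summary_stat(y_obs)
  theta <- numeric(n_draws); w <- numeric(n_draws)
  for (i in seq_len(n_draws)) {
    theta[i] <- prop_sample()                      # theta^(i) ~ q(theta)
    z <- simulate_data(theta[i], d)                # z^(i) ~ p(z | theta^(i), d)
    rho <- abs(summary_stat(z) - s_obs)
    kern <- dnorm(rho, mean = 0, sd = eps)         # Gaussian kernel K_eps
    w[i] <- kern * prior_dens(theta[i]) / prop_dens(theta[i])   # weight, Eq. (3.2)
  }
  list(theta = theta, weights = w / sum(w))        # weighted ABC posterior sample
}
```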
Accuracy of ABC
ABC estimates suffer from different sources of approximation error: first, choosing a tolerance level $\epsilon > 0$ has the consequence that only an approximation to the targeted posterior is sampled. Second, even for $\epsilon \to 0$ the sampled distribution does not converge to the (true) posterior distribution if the summary statistic is not sufficient. Finally, sampling introduces a Monte Carlo error, which depends on sampling efficiency and sampling effort. Sampling efficiency is measured by the effective sample size (ESS), which is the equivalent number of independent draws required to obtain a parameter estimate with the same precision (see Liu 2001).
The tolerance level $\epsilon$ plays an important role as it has an impact on the quality of the ABC posterior as an approximation to the target posterior as well as on the effective sample size. For ABC rejection sampling, the effective sample size is equal to the number of accepted draws. Reducing $\epsilon$ leads to an increase of the rejection rate, and hence the sampling effort required to maintain a desired ESS will be higher.
For importance sampling the ESS is given as
$$ESS = \frac{N}{1 + cv^2(w)},$$
where $cv(w)$ denotes the coefficient of variation of the importance weights and $N$ the number of draws (see Liu 2001). It can be estimated by
$$\widehat{ESS} = \frac{\Big(\sum_{i=1}^{N} w^{(i)}\Big)^2}{\sum_{i=1}^{N} \big(w^{(i)}\big)^2}. \qquad (3.4)$$
As more imbalanced weights result in a lower effective sample size, the choice of $\epsilon$ directly affects the ESS of the importance sample. Weights become more imbalanced with decreasing tolerance level $\epsilon$, see Eq. (3.2), resulting in a lower ESS. Consider e.g. $q(\theta) = p(\theta)$, where the importance weights are $w^{(i)} \propto K_\epsilon\big(\rho(z^{(i)}, y)\big)$. For $\epsilon \to \infty$, the weights are constant and hence the ESS takes its maximal value $N$, whereas for $\epsilon \to 0$, many weights will be close to or equal to zero. Therefore, there is a trade-off between closeness of the ABC posterior to the true posterior, which is achieved by choosing $\epsilon$ as small as possible, and a close to optimal effective sample size.
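The estimated ESS in Eq. (3.4) is a one-liner in R; the toy example below (simulated discrepancies, Gaussian kernel, prior as proposal) merely illustrates how the ESS shrinks as the tolerance decreases and is not based on the data of Sect. 5.

```r
# Estimated effective sample size from non-normalized importance weights, Eq. (3.4).
ess <- function(w) sum(w)^2 / sum(w^2)

set.seed(1)
rho <- abs(rnorm(1000))                                           # toy discrepancies
sapply(c(2, 0.5, 0.1), function(eps) ess(dnorm(rho, sd = eps)))   # ESS drops with eps
```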
Utility function estimation using ABC methods
We consider Bayesian information criteria, where the utility function, $u(y, d)$, is a functional of the posterior distribution, $p(\theta \mid y, d)$. Based on information-theoretic grounds, a widely used utility function is the Kullback-Leibler (KL) divergence between the prior and the posterior distribution (see Chaloner and Verdinelli (1995) and the references given therein). Precise estimation of the KL divergence is difficult and requires large samples from the posterior distribution (for an estimation approach see Liepe et al. 2013). However, if the posterior distribution has a regular shape, i.e., if it is unimodal and does not exhibit extreme skewness and kurtosis, as in our example, then the posterior precision is also a good measure of the posterior information gain (see also Drovandi and Pettitt 2013). The posterior precision utility, defined as the inverse of (the determinant of) the posterior variance-covariance matrix,
$$u(y, d) = \frac{1}{\det\{\operatorname{Cov}(\theta \mid y, d)\}},$$
can be efficiently estimated from the sample variance-covariance matrix. We will use it in our example in Sect. 5.
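In the scalar-parameter case relevant for our application, this utility is simply the reciprocal of the (weighted) posterior sample variance. The small R helper below computes it from an ABC posterior sample and is reused in later sketches; for a vector-valued parameter one would use, e.g., 1 / det(cov.wt(theta, wt = w)$cov).

```r
# Posterior precision utility from a (possibly weighted) scalar ABC posterior sample.
precision_utility <- function(theta, w = rep(1, length(theta))) {
  w <- w / sum(w)
  m <- sum(w * theta)
  1 / sum(w * (theta - m)^2)     # inverse of the weighted posterior variance
}
```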
For an intractable likelihood, a sample obtained by ABC methods can be used to approximate the utility function by $\tilde{u}(y, d)$. The expected utility in Eq. (2.1) at design point $d$ can then be approximated by the Monte Carlo average in Eq. (2.2), where each $\tilde{u}(y^{(m)}, d)$ is computed from an ABC posterior sample for $y^{(m)}$.
The sample $y^{(1)}, \ldots, y^{(M)}$ from the prior predictive distribution can be generated by first drawing $\theta^{(m)}$ from the prior $p(\theta)$ and then $y^{(m)}$ from the likelihood $p(y \mid \theta^{(m)}, d)$.
The major difficulty with this strategy is that it requires one to obtain the ABC posteriors for all $y^{(m)}$, $m = 1, \ldots, M$, at each design point $d$, which is typically computationally prohibitive.
Utility function estimation using ABC rejection sampling
One solution to the problem of having to quickly re-compute the ABC posteriors for each $y^{(m)}$ is to simulate a large sample $\mathcal{Z}_d = \{(\theta^{(i)}, z^{(i)}): i = 1, \ldots, N\}$ from $p(z \mid \theta, d)\, p(\theta)$ for a given design $d$ and to construct the ABC posterior sample for each $y^{(m)}$ as a subset of $\mathcal{Z}_d$. Those parameter values $\theta^{(i)}$ where the corresponding $z^{(i)}$ is in an $\epsilon$-neighborhood of $y^{(m)}$, i.e. where $\rho(z^{(i)}, y^{(m)}) \le \epsilon$, constitute the ABC posterior sample. Denoting the corresponding index set by $\mathcal{I}_m$, a sample from the ABC posterior can be obtained by the following rejection sampling algorithm (cf. Algorithm 1):
Compute the discrepancies $\rho^{(i)} = \rho(z^{(i)}, y^{(m)})$ for all particles $i = 1, \ldots, N$.
Accept $\theta^{(i)}$, i.e. include $i$ in $\mathcal{I}_m$, if $\rho^{(i)} \le \epsilon$.
Fixing $\epsilon$ in advance has the drawback that the ABC sample size cannot be controlled. Hence, for practical purposes, it is more convenient to fix the ABC sample size $N_{ABC} = |\mathcal{I}_m|$, at the expense of having no direct control over the tolerance level $\epsilon$, which then results as the $N_{ABC}$-th smallest discrepancy $\rho^{(i)}$.
If computer memory permits, it can be useful to pre-simulate the summary statistics for all possible designs $d \in \mathcal{D}$, so that $\mathcal{Z}_d$ is available for every candidate design prior to the optimization step. This strategy may help to reduce the overall simulation effort if redundancies between different designs can be exploited. As a further advantage, pre-simulation of the summary statistics for all possible designs permits the application of simulation-based optimal design techniques such as the MCMC sampler of Müller (1999), which is pursued in Drovandi and Pettitt (2013). However, the necessity to store all summary statistics for all designs limits the number of possible candidate designs over which to optimize. The number of candidate designs which may be considered depends on the number of distinct summary statistics for each candidate design, the desired ABC accuracy, and the storage capacities.
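The following R sketch ties these pieces together for a single design: the reference table is pre-simulated once and then re-used to form an ABC rejection posterior of fixed size for every draw from the prior predictive distribution. The helpers draw_prior() and simulate_stat() as well as the default sample sizes are hypothetical placeholders (not the settings used in Sect. 5), and precision_utility() is the helper sketched earlier.

```r
# Expected-utility estimation with ABC rejection from a pre-simulated sample.
estimate_utility_rejection <- function(d, N = 1e5, M = 100, n_abc = 500,
                                       draw_prior, simulate_stat) {
  # pre-simulate the reference table {theta^(i), s(z^(i))} for design d
  theta <- replicate(N, draw_prior())
  s_sim <- vapply(theta, function(th) simulate_stat(th, d), numeric(1))

  u_hat <- numeric(M)
  for (m in seq_len(M)) {
    s_obs <- simulate_stat(draw_prior(), d)       # y^(m) from the prior predictive
    idx <- order(abs(s_sim - s_obs))[1:n_abc]     # keep the n_abc closest particles
    u_hat[m] <- precision_utility(theta[idx])     # utility from the ABC posterior sample
  }
  mean(u_hat)                                     # Monte Carlo estimate of U(d)
}
```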
Utility function estimation using importance weight updates
An alternative strategy to obtain a sample from the approximate posterior distribution is based on importance sampling, see Sect. 3.1. We assume that a weighted sample $\{(\theta^{(i)}, v^{(i)}): i = 1, \ldots, N\}$ from the prior distribution $p(\theta)$ is available. The goal is to update the weights such that the weighted sample approximates the ABC posterior distribution $p_\epsilon(\theta \mid y^{(m)}, d)$. If $\theta^{(1)}, \ldots, \theta^{(N)}$ is an i.i.d. sample from the prior $p(\theta)$, all weights are equal to $v^{(i)} = 1/N$. However, the weights might also differ, e.g. when information from previous observations $y_0$ is used to generate an ABC importance sample from the posterior conditioning on $y_0$.
Following Del Moral et al. (2012), we define the ABC target posterior as
$$p_\epsilon(\theta, z_{1:S} \mid y, d) \propto p(\theta) \left[\frac{1}{S} \sum_{s=1}^{S} K_\epsilon\big(\rho(z_s, y)\big)\right] \prod_{s=1}^{S} p(z_s \mid \theta, d), \qquad (3.5)$$
where $z_{1:S} = (z_1, \ldots, z_S)$ are auxiliary data, and use the importance density
$$q(\theta, z_{1:S} \mid d) = q(\theta) \prod_{s=1}^{S} p(z_s \mid \theta, d),$$
where $q(\theta)$ is the distribution of the available weighted prior sample. Simulating $z_{1:S}^{(i)}$ from $\prod_{s=1}^{S} p(z_s \mid \theta^{(i)}, d)$ for each particle $\theta^{(i)}$, the unnormalized posterior weights can be estimated by
$$w^{(i)} \propto v^{(i)}\, \frac{1}{S} \sum_{s=1}^{S} K_\epsilon\big(\rho(z_s^{(i)}, y)\big).$$
It is essential to choose the number of auxiliary data sets $S$ sufficiently large, as otherwise most of the weights would be close or even equal to zero, leading to a very small effective sample size. As noted in Del Moral et al. (2012), the ABC posterior given in (3.5) has the advantage that
$$\frac{1}{S} \sum_{s=1}^{S} K_\epsilon\big(\rho(z_s, y)\big) \to \int K_\epsilon\big(\rho(z, y)\big)\, p(z \mid \theta, d)\, dz$$
for $S \to \infty$, and hence the sampler is similar to the “marginal” sampler which samples directly from the marginal ABC posterior (3.1).
Just as for the ABC rejection strategy described above, creating the sample $\{(\theta^{(i)}, v^{(i)}, z_{1:S}^{(i)})\}$ in advance can speed up the computations considerably, because it can be re-used to compute the weights $w^{(i)}$ for each $y^{(m)}$ sampled from the prior predictive distribution. It may also be convenient to compute the summary statistics for all design points at once, see the corresponding remarks in Sect. 3.3.1.
Moreover, also similar to Sect. 3.3.1, it is preferable to fix the target ESS instead of selecting the tolerance level $\epsilon$, as the effective sample size may vary substantially between the ABC posterior samples for the different $y^{(m)}$ when the same tolerance level is used for all $m$. Therefore, we choose a target value for the ESS and adjust $\epsilon$ in each step to produce ABC posterior samples with an ESS close to the target value.
For a pre-simulated sample, a fast and flexible sampling scheme targeting a specific effective sample size in each step can be implemented using a uniform kernel, $K_\epsilon(\rho) \propto \mathbb{1}\{\rho \le \epsilon\}$. Then the weight for particle $i$ is proportional to its prior weight $v^{(i)}$ multiplied by the number of simulated data sets $z_s^{(i)}$ with a discrepancy to $y^{(m)}$ below $\epsilon$, i.e.
$$w^{(i)} \propto v^{(i)} \sum_{s=1}^{S} \mathbb{1}\big\{\rho(z_s^{(i)}, y^{(m)}) \le \epsilon\big\}. \qquad (3.6)$$
To roughly keep a defined ESS for each $y^{(m)}$ we proceed as follows. Let $R_i$ denote the set of discrepancies between $z_{1:S}^{(i)}$ and $y^{(m)}$ and let $R = \bigcup_{i=1}^{N} R_i$. For each $y^{(m)}$, the set $R$ can be searched for the tolerance level $\epsilon$ which yields the best approximation to the target ESS. The weights are computed from (3.6) and the ESS results from (3.4). The advantage of using a uniform kernel is that the weight $w^{(i)}$ only depends on the number of elements in $R_i$ which are not larger than $\epsilon$. Binary search algorithms can be applied on the sorted sets $R_i$ to determine this number in an efficient manner.
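A minimal R implementation of this weight update is sketched below. The matrix disc of pre-computed discrepancies, the prior weights v and the target ESS are the only inputs; the naive search over all candidate tolerance levels is kept for clarity and would be refined (e.g. by bisection on the pooled sorted discrepancies) in a production implementation.

```r
# Importance weight update with a uniform kernel and ESS targeting (Sect. 3.3.2).
# disc: N x S matrix of discrepancies rho(z_s^(i), y^(m)); v: prior weights.
update_weights <- function(disc, v, target_ess = 100) {
  disc_sorted <- t(apply(disc, 1, sort))           # sort each particle's discrepancies
  counts_for  <- function(eps)                     # no. of discrepancies <= eps per particle
    apply(disc_sorted, 1, function(r) findInterval(eps, r))
  ess_for <- function(eps) {                       # estimated ESS for tolerance eps
    w <- v * counts_for(eps)                       # Eq. (3.6)
    if (sum(w) == 0) 0 else sum(w)^2 / sum(w^2)    # Eq. (3.4)
  }
  candidates <- sort(unique(as.vector(disc)))      # candidate tolerance levels
  ess_vals   <- vapply(candidates, ess_for, numeric(1))
  eps <- candidates[which.min(abs(ess_vals - target_ess))]
  w <- v * counts_for(eps)
  list(eps = eps, weights = w / sum(w))
}
```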
Spatial extremes
In this section we review some basic concepts of extreme value theory which are needed in our application in Sect. 5.
Max-stable processes
The joint distribution of extreme values at given locations $x_1, \ldots, x_D$ can be modeled as the marginal distribution of a max-stable process on $\mathcal{X} \subseteq \mathbb{R}^2$ at these locations. Max-stable processes arise as the limiting distribution of the maxima of i.i.d. random processes on $\mathcal{X}$, see de Haan (1984) for a concise definition. A property of max-stable processes which allows convenient modeling is that their multivariate marginals are members of the class of multivariate extreme value distributions, and univariate marginals have a univariate generalized extreme value (GEV) distribution.
The cumulative distribution function of the univariate GEV distribution is given as
$$G(y) = \exp\left\{-\left[1 + \xi\left(\frac{y - \mu}{\sigma}\right)\right]_{+}^{-1/\xi}\right\},$$
where $\mu$, $\sigma > 0$, and $\xi$ are the location, scale, and shape parameters, respectively, and $a_+ = \max(a, 0)$. The GEV distribution with parameters $\mu = \sigma = \xi = 1$ is called the unit Fréchet distribution. Any GEV random variable $Y$ can be transformed to unit Fréchet by the transformation
$$Z = \left[1 + \xi\left(\frac{Y - \mu}{\sigma}\right)\right]_{+}^{1/\xi}. \qquad (4.1)$$
This property allows one to focus on max-stable processes with unit Fréchet margins when the dependence structure is of interest. Hence we assume that all univariate marginal distributions are unit Fréchet in what follows.
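In R, the transformation (4.1) is a one-liner (the SpatialExtremes package provides a comparable helper, gev2frech(), but the direct computation below avoids relying on a specific interface); it assumes a non-zero shape parameter.

```r
# Transformation (4.1) from GEV margins to the unit Frechet scale (shape != 0).
gev_to_frechet <- function(y, loc, scale, shape) {
  pmax(1 + shape * (y - loc) / scale, 0)^(1 / shape)
}
```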
Dependence structure of max-stable processes
The multivariate distribution of a max-stable process $Z(\cdot)$ with unit Fréchet margins at the locations $x_1, \ldots, x_D$ has the form
$$P\{Z(x_1) \le z_1, \ldots, Z(x_D) \le z_D\} = \exp\{-V(z_1, \ldots, z_D)\}. \qquad (4.2)$$
The function $V$ is a homogeneous function of order $-1$,
$$V(t z_1, \ldots, t z_D) = t^{-1} V(z_1, \ldots, z_D) \quad \text{for all } t > 0, \qquad (4.3)$$
and is called the exponent measure (Pickands 1981). The dependence structure of a stationary max-stable process can be modeled via one of its spectral representations. These representations are useful as they often allow for an interpretation of the max-stable process in terms of maxima of underlying processes (see e.g. Smith (1990), Schlather (2002), or Davison et al. (2012)) and make it possible to devise sampling schemes for many max-stable processes.
Here we will consider the model introduced by Schlather (2002). Let $\{\zeta_i\}_{i \ge 1}$ be the points of a Poisson process on $(0, \infty)$ with intensity $\zeta^{-2}\, d\zeta$ and let $W_i(\cdot)$ be independent replicates of a stationary process $W(\cdot)$ on $\mathcal{X}$ with $E[\max\{0, W(x)\}] = 1$. Then
$$Z(x) = \max_{i \ge 1}\, \zeta_i \max\{0, W_i(x)\}$$
is a stationary max-stable process with unit Fréchet margins. In the Schlather model, $W(\cdot)$ is specified as a (suitably scaled) Gaussian process. If the Gaussian random field is isotropic, it has a correlation function $r(h; \psi)$, where $h$ is the distance between two points $x_1$ and $x_2$ and $\psi$ denotes the parameters of the correlation function. The correlation function has to be chosen from one of the correlation families for Gaussian processes, e.g. Whittle–Matérn, Cauchy, or powered exponential. For the Schlather model, a closed form of the likelihood exists only for $D = 2$ points.
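Simulation from the Schlather model is available in the SpatialExtremes package; the call below reflects our reading of the rmaxstab() interface (argument names may differ between package versions) and simulates 100 replicates with unit Fréchet margins at five arbitrary sites with purely illustrative parameter values.

```r
# Simulation from the Schlather model with Whittle-Matern correlation
# (interface of SpatialExtremes::rmaxstab as we understand it).
library(SpatialExtremes)
set.seed(42)
coord <- cbind(lon = runif(5, 0, 10), lat = runif(5, 0, 10))   # 5 arbitrary sites
z <- rmaxstab(n = 100, coord = coord, cov.mod = "whitmat",
              nugget = 0, range = 3, smooth = 0.5)
dim(z)   # 100 replicates at 5 sites
```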
Extremal coefficients
A useful summary measure for extremal dependence is given by the extremal coefficients, which are defined via the marginal cdfs of a max-stable process. From (4.2) and (4.3), the joint cdf of $Z(x_1), \ldots, Z(x_k)$ at the common value $z_1 = \cdots = z_k = z$ is given as
$$P\{Z(x_1) \le z, \ldots, Z(x_k) \le z\} = \exp\left\{-\frac{\vartheta(x_1, \ldots, x_k)}{z}\right\},$$
where $\vartheta(x_1, \ldots, x_k) = V(1, \ldots, 1)$ is called the $k$-point extremal coefficient between the locations $x_1, \ldots, x_k$. Though the extremal coefficients between all the sets of $k$ points ($2 \le k \le D$) contain a substantial amount of the information on the dependence structure of the max-stable process, they are not sufficient to characterize the whole process.
Given $n$ block maxima observed at each of the $k$ points $x_1, \ldots, x_k$, Erhardt and Smith (2012) propose to estimate the $k$-point extremal coefficient by the simple estimator
$$\hat{\vartheta}(x_1, \ldots, x_k) = \frac{n}{\sum_{i=1}^{n} 1 / \max(z_{i1}, \ldots, z_{ik})}, \qquad (4.4)$$
where $z_{ij}$ denotes the $i$-th (unit Fréchet) block maximum at location $x_j$.
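The estimator (4.4) is straightforward to compute; the R function below takes a matrix of unit Fréchet block maxima whose columns are the k locations of interest (e.g. the three stations of a candidate design).

```r
# Simple estimator (4.4) of the k-point extremal coefficient; z is an
# (n x k) matrix of unit-Frechet block maxima (rows = years, columns = locations).
ext_coeff <- function(z) {
  nrow(z) / sum(1 / apply(z, 1, max))
}
# e.g. tripletwise coefficient for the stations in columns 1, 2 and 3:
# ext_coeff(z[, c(1, 2, 3)])
```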
Application
We illustrate our likelihood-free methodology on an application where the aim is to find the optimal design for estimating the parameters characterizing the dependence of spatial extreme values. As our example is meant to illustrate the basic methodology, we use a simple design setting.
The problem we consider is inspired by the paper of Erhardt and Smith (2014), who use data on maximum annual summer temperatures from 39 sites in the Midwest region of the USA for pricing weather derivatives. Figure 1 shows a map of the 39 weather stations. The dots (bottom left and top right) indicate the two stations with the largest mutual distance, which we will include in each design. Our goal is to determine which of the remaining 37 stations, indicated by the numbers 1–37, should be kept to allow optimal inference for the spatial dependence parameters. Thus we intend to find the optimum three-point design.
We specify the spatial extremes model as a Schlather model (Schlather 2002) with the Whittle–Matérn correlation function. The Schlather model requires us to select a correlation function, which is also part of the model choice; however, the Whittle–Matérn correlation function is quite flexible. It is specified as
$$r(h) = \sigma^2\, \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{h}{\lambda}\right)^{\nu} K_\nu\!\left(\frac{h}{\lambda}\right),$$
where $K_\nu$ is the modified Bessel function of the second kind of order $\nu$, and $\sigma^2 > 0$, $\lambda > 0$, $\nu > 0$ are the partial sill, range, and smooth parameters. We fix the partial sill parameter at $\sigma^2 = 1$ (which is a standard choice, see the applications of max-stable processes in Davison et al. 2012) and also keep the smooth parameter $\nu$ fixed. The smooth parameter is fixed, since widely different values for $\lambda$ and $\nu$ can result in similar values for the correlation function, making joint inference for both parameters more difficult.
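A direct R implementation of this correlation function (using base R's besselK(); the parameter values in the plotting call are purely illustrative) is:

```r
# Whittle-Matern correlation function with partial sill sigma2, range lambda
# and smoothness nu; besselK() is the modified Bessel function of the second kind.
whitmat_cor <- function(h, sigma2 = 1, lambda = 3, nu = 0.5) {
  out <- sigma2 * 2^(1 - nu) / gamma(nu) * (h / lambda)^nu * besselK(h / lambda, nu)
  out[h == 0] <- sigma2          # limit as h -> 0
  out
}
curve(whitmat_cor(x, lambda = 3, nu = 0.5), from = 0, to = 15,
      xlab = "distance h", ylab = "correlation")
```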
As utility function we choose the posterior precision of the range parameter $\lambda$, which is the only parameter to be estimated, i.e.
$$u(y, d) = \frac{1}{\operatorname{Var}(\lambda \mid y, d)}.$$
Following Erhardt and Smith (2012), we use the tripletwise extremal coefficient for each three-point design as summary statistic for ABC inference.
For a three-point design, the gain in information from the prior to the posterior distribution will be very low unless many observations are available. Therefore, we obtain the optimal design for samples of size $n$, with $n$ chosen large enough that we are able to clearly identify differences between the expected posterior precision values for different designs. For practical purposes, the three-point designs can be sequentially augmented by further design points. One can stop when the amount of data available in practice is sufficient to exceed a desired minimum expected posterior precision.
In Sect. 5.1, we compare ABC rejection and ABC importance sampling for likelihood-free optimal design for the case where a standard uniform prior distribution is specified for . In Sect. 5.2, we go one step further and additionally incorporate information from prior observations. In our case, data from 115 years collected at the 39 stations were used to estimate an ABC posterior distribution for the range parameter. This posterior distribution was then used as parameter distribution in an importance weight update algorithm to determine the optimal three-point design for future inference.
Comparison of likelihood-free design algorithms
Settings
In the case where we have no prior observations, we assumed a uniform prior for the range parameter $\lambda$, similar to Erhardt and Smith (2012). This prior is meant to cover all plausible range parameter values, given the largest and smallest inter-site distances. Its density is displayed as the dashed line in Fig. 3.
The goal is to find the design $d$ for which $\hat{U}(d)$ is maximal (see Eq. (2.2)), where $\tilde{u}(y^{(m)}, d)$ is the ABC estimate of the posterior precision and $y^{(1)}, \ldots, y^{(M)}$ are samples of size $n$ from the prior predictive distribution. We now give details for both the rejection sampling algorithm and the importance weight update algorithm.
For the ABC rejection sampling algorithm (see Sect. 3.3.1), as a first step we pre-simulated samples $\mathcal{Z}_d = \{(\lambda^{(i)}, z^{(i)}): i = 1, \ldots, N\}$ for all designs $d$ by sampling $\lambda^{(i)}$ from the prior and $z^{(i)}$ (having size $n$) from the Schlather model. As summary statistic we use the estimated tripletwise extremal coefficient computed according to formula (4.4) for the simulated observations at the design coordinates.
As the next step, for each design $d$, we simulated observations $y^{(m)}$ ($m = 1, \ldots, M$) and computed their tripletwise extremal coefficients. The ABC posterior sample for $y^{(m)}$ was formed by those 500 (0.01 %) elements of $\mathcal{Z}_d$ with the lowest absolute difference between their tripletwise extremal coefficients and that of $y^{(m)}$. This ABC posterior sample was then used to compute $\tilde{u}(y^{(m)}, d)$ for each $m$.
For the importance weight update algorithm, we generated the pre-simulated sample as follows: a sample $\lambda^{(1)}, \ldots, \lambda^{(N)}$ was obtained from the prior distribution. For each $\lambda^{(i)}$, a collection of $S$ samples $z_1^{(i)}, \ldots, z_S^{(i)}$ from the Schlather model was generated and the tripletwise extremal coefficients were computed for all designs. Each $z_s^{(i)}$ consisted of $n$ observations.
In the Monte Carlo integration step, for each design $d$, the samples $y^{(m)}$ ($m = 1, \ldots, M$) of size $n$ were generated and the normalized importance weights were computed from (3.6), where the absolute difference between the corresponding tripletwise extremal coefficients was used as discrepancy. The weighted ABC posterior sample was used to estimate $\tilde{u}(y^{(m)}, d)$. For each $y^{(m)}$, we aimed to obtain samples from the ABC posterior with target ESS = 100.
Results
All computations were performed on the SGI Altix 4700 SMP system using 20 nodes in parallel. For the ABC rejection method, it took about 28 h to generate the pre-simulated sample, which required roughly 1.35 GB of storage. The Monte Carlo integration procedure, where the utility functions for the $M$ samples from the prior predictive distribution are evaluated and the average is computed, needed about 2.6 h. For the importance weight update method, the pre-simulated sample was generated in 46 h and produced a file of size 2.06 GB. The Monte Carlo integration took about 5.5 h.
Figure 2 shows the results for both methods for one particular simulation run. Designs are indicated by circles, where the number denotes the rank of the design with respect to the expected utility criterion, and the two fixed stations are indicated by black dots. The ranking of the designs is additionally visualized by the filling intensity: the circle for the design with the highest criterion value across both methods is darkest (attained for station 23 using the importance weight update method), whereas the design with the lowest criterion value across both methods is white (attained for station 17 using the ABC rejection method). The gray levels of all the other circles are in between these two extreme levels in proportion to their criterion values.
The results of both methods correspond closely. There are only negligible differences with respect to the estimated design criterion values for the large majority of design points which lie in the middle between the two fixed stations, as indicated by similar filling intensities in Fig. 2. On the other hand, rankings can differ considerably due to Monte Carlo error. However, we observe that differences in rankings occur for designs with approximately the same expected utility values. Therefore, all these designs are almost equally well-suited for conducting experiments, so differences in rankings are of minor interest. In contrast, the design points close to the fixed design point in the upper right corner, as well as the design points in the lower right, which are far away from either fixed station, have notably lower expected utility values.
We varied the target effective sample sizes for both the ABC rejection method and the importance weight update method. The ABC rejection method was also run using increased ABC sample sizes. We could not observe any discernible effects on the general pattern of criterion orderings. The same can be said about the importance weight update method, where we computed the rankings for a range of different target effective sample sizes. The details are provided in Section 1 of Online Resource 1.
Incorporating information from prior observations
As briefly mentioned in Sect. 3.3.2, information from prior observations can easily be incorporated to estimate the design criteria using the importance weight update algorithm. Information from prior observations can be processed by any suitable ABC algorithm to obtain an ABC posterior sample for the parameters, which serves as “input prior” sample in the importance weight update algorithm.
We illustrate the incorporation of information from prior data by using the data previously analyzed in Erhardt and Smith (2014). The data set contains maximum summer (June 1–August 31) temperature records collected at the 39 stations from 1895 to 2009 (115 observations). The daily data can be downloaded from the National Climatic Data Center (http://cdiac.ornl.gov/ftp/ushcn_daily). The block maximum for year $t$ at location $x_j$ is obtained by computing $y_{tj} = \max\{y_{tj1}, \ldots, y_{tj92}\}$, where $y_{tj1}, \ldots, y_{tj92}$ denote the 92 maximum daily temperature observations in the summer of that year. Erhardt and Smith (2014) performed checks of the GEV and Schlather model assumptions for this data set and concluded that the Schlather model is appropriate.
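Computing the annual block maxima from a daily record is a standard aggregation step; the R sketch below assumes a hypothetical data frame daily with columns station, year and tmax containing only the summer days.

```r
# Annual summer block maxima per station from daily data (hypothetical layout:
# data frame `daily` with columns station, year, tmax restricted to June-August).
block_max <- aggregate(tmax ~ station + year, data = daily, FUN = max)
y <- xtabs(tmax ~ year + station, data = block_max)   # years x stations matrix
```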
Following Erhardt and Smith (2014), we transformed the original data to unit Fréchet scale at each location using Eq. (4.1), where estimates of the marginal GEV parameters $\mu_j$, $\sigma_j$, and $\xi_j$ at location $x_j$ were plugged in.
We specified a uniform prior for $\lambda$ and applied ABC rejection sampling, see Algorithm 1, to derive the ABC posterior for $\lambda$. As in Erhardt and Smith (2012), we used a discrepancy function based on tripletwise extremal coefficients. We note here that with data from 39 stations, there are $\binom{39}{3} = 9139$ tripletwise extremal coefficients, which requires a more sophisticated discrepancy function compared to that in Sect. 5.1. Dimension reduction was achieved by clustering the extremal coefficients according to the inter-site distances into 100 clusters. Only the average values within each cluster were used as summary statistics. Finally, the discrepancy between two vectors of summary statistics was computed by the Manhattan distance, for details see Erhardt and Smith (2012).
We generated a sample from the prior predictive distribution for the design including all 39 points and kept only those 0.02 % of the draws yielding the smallest values of the discrepancy to the original sample. The resulting posterior distribution is shown in Fig. 3 (solid line). This distribution is more informative about the parameter than the flat uniform prior used in Sect. 5.1 (dashed line).
The ABC posterior sample was then used as prior sample in the importance weight update algorithm from Sect. 3.3.2, with the same settings as in Sect. 5.1.1: for each $\lambda^{(i)}$ ($i = 1, \ldots, N$), we simulated $S$ samples of size $n$ taken at the 39 sites and stored the tripletwise extremal coefficients as summary statistics. To compute $\hat{U}(d)$ for each design $d$, we generated $M$ samples (also of size $n$) from the prior predictive distribution. The simulation times were very similar to those of the importance weight update method for the uniform prior in Sect. 5.1.
Figure 4 shows the ranking of the design points when the ABC posterior for is used as prior for the importance weight update algorithm. The gray levels correspond to the criterion values of the design points relative to the maximum value (rank 1 at station 27, dark grey) and the minimum value (rank 37 at station 17, white). The ranking exhibits the same general pattern as those in Fig. 2 for the uniform prior. Points very close to one of the fixed points and points very far away from either fixed point have a lower expected utility than the design points in the middle.
The distribution of the simulated utility values is displayed in Fig. 5, where the 37 designs are numbered as in Fig. 1. One can see, for example, that stations 15 and 17 in Minnesota, which are situated close to the top right station, have comparably low utility values.
In Section 2 of Online Resource 1, we investigate the effect of the Monte Carlo error on the design rankings in this example by performing several simulation runs. The rankings differ in particular for the designs in the middle. For these, however, the criterion values are very similar.
When we use another pre-simulated sample, only minor shifts in the resulting rankings occur, which indicates that our choices of $N$ and $S$ are sufficient. On the other hand, we observe larger differences between the results if we use different random samples from the prior predictive distribution. Hence, in our example it would be worthwhile to increase $M$ in order to improve the accuracy of the criterion estimates.
Conclusion
In this paper we presented an approach for Bayesian design of experiments when the likelihood of the statistical model is intractable and hence classical design, where the utility function is a functional of the likelihood, is not feasible. In such a situation ABC methods can be employed to approximate a Bayesian utility function, which is a functional of the posterior distribution. For a finite design space, the conceptually straightforward approach is to run ABC for each design $d$ and each data set $y^{(m)}$, $m = 1, \ldots, M$, but this will typically be computationally prohibitive.
As we demonstrate here, a useful strategy is to pre-simulate data for a sample of parameter values at each design. Employing ABC rejection sampling or ABC importance sampling then allows one to obtain approximations of the utility function. In our application, the importance weight update method turns out to be particularly useful for incorporating information from prior observations. Both methods are also applicable to situations where the likelihood is in principle tractable, but the posterior is difficult or time-consuming to obtain.
A notorious problem of any ABC method is the choice of the summary statistics, as in problems where one will resort to ABC methods typically no sufficient statistics are available, and the quality of the ABC posterior as an approximation to the true posterior critically depends on the summary statistics. The usefulness of the tripletwise extremal coefficient was validated by Erhardt and Smith (2012). It therefore seems appropriate as ABC summary statistic in our application, where the goal is to find the optimal design consisting of three weather stations. For higher-dimensional designs different summary statistics with lower dimension might be more advantageous.
A further drawback of the presented approach is that memory space and/or computing time restrictions will only permit optimization over a rather small number of designs. For a large design space, a stochastic search algorithm, e.g. as in Müller et al. (2004), should be employed.
Acknowledgments
We are grateful to a referee for providing numerous valuable suggestions to improve the paper.
Funding
Markus Hainy has been supported by the French Science Fund (ANR) and Austrian Science Fund (FWF) bilateral Grant I-833-N18.
Contributor Information
Markus Hainy, Phone: +43-732-2468-6828, Email: markus.hainy@jku.at.
Werner G. Müller, Phone: +43-732-2468-6802, Email: werner.mueller@jku.at
Helga Wagner, Phone: +43-732-2468-6831, Email: helga.wagner@jku.at.
References
- Atkinson AC, Donev AN, Tobias RD (2007) Optimum experimental designs, with SAS. Oxford University Press, New York
- Bayraktar H, Turalioglu FS (2005) A Kriging-based approach for locating a sampling site in the assessment of air quality. Stoch Environ Res Risk A 19(4):301–305. doi:10.1007/s00477-005-0234-8
- Beaumont MA, Zhang W, Balding DJ (2002) Approximate Bayesian computation in population genetics. Genetics 162(4):2025–2035. doi:10.1093/genetics/162.4.2025
- Bortot P, Coles SG, Sisson SA (2007) Inference for stereological extremes. J Am Stat Assoc 102(477):84–92. doi:10.1198/016214506000000988
- Chaloner K, Verdinelli I (1995) Bayesian experimental design: a review. Stat Sci 10(3):273–304. doi:10.1214/ss/1177009939
- Chang H, Fu A, Le N, Zidek J (2007) Designing environmental monitoring networks to measure extremes. Environ Ecol Stat 14(3):301–321. doi:10.1007/s10651-007-0020-5
- Davison AC, Padoan SA, Ribatet M (2012) Statistical modelling of spatial extremes. Stat Sci 27:161–186. doi:10.1214/11-STS376
- de Haan L (1984) A spectral representation for max-stable processes. Ann Probab 12:1194–1204. doi:10.1214/aop/1176993148
- Del Moral P, Doucet A, Jasra A (2012) An adaptive sequential Monte Carlo method for approximate Bayesian computation. Stat Comput 22:1009–1020. doi:10.1007/s11222-011-9271-y
- Dobbie MJ, Henderson BL, Stevens DL (2008) Sparse sampling: spatial design for monitoring stream networks. Stat Surv 2:113–153. URL http://projecteuclid.org/euclid.ssu/1219930181
- Drovandi CC, Pettitt AN (2013) Bayesian experimental design for models with intractable likelihoods. Biometrics 69(4):937–948. doi:10.1111/biom.12081
- Erhardt RJ, Smith RL (2012) Approximate Bayesian computing for spatial extremes. Comput Stat Data An 56(6):1468–1481. doi:10.1016/j.csda.2011.12.003
- Erhardt RJ, Smith RL (2014) Weather derivative risk measures for extreme events. N Am Actuar J 18(3):1–15. doi:10.1080/10920277.2014.910472
- Fearnhead P, Prangle D (2012) Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. J Roy Stat Soc B 74(3):419–474. doi:10.1111/j.1467-9868.2011.01010.x
- Fedorov VV (1972) Theory of optimal experiments. Academic Press, New York
- Hainy M, Müller WG, Wagner H (2013a) Likelihood-free simulation-based optimal design. arXiv:1305.4273
- Hainy M, Müller WG, Wynn HP (2013b) Approximate Bayesian computation design (ABCD), an introduction. In: Uciński D, Atkinson AC, Patan M (eds) mODa 10—advances in model-oriented design and analysis. Springer International Publishing, Cham, pp 135–143
- Hainy M, Müller WG, Wynn HP (2014) Learning functions and approximate Bayesian computation design: ABCD. Entropy 16(8):4353–4374. doi:10.3390/e16084353
- Harris P, Clarke A, Juggins S, Brunsdon C, Charlton M (2014) Geographically weighted methods and their use in network re-designs for environmental monitoring. Stoch Environ Res Risk A, pp 1–19
- Huan X, Marzouk YM (2013) Simulation-based optimal Bayesian experimental design for nonlinear systems. J Comput Phys 232:288–317. doi:10.1016/j.jcp.2012.08.013
- Lesch SM (2005) Sensor-directed response surface sampling designs for characterizing spatial variation in soil properties. Comput Electron Agric 46(1–3):153–179. doi:10.1016/j.compag.2004.11.004
- Liepe J, Filippi S, Komorowski M, Stumpf MPH (2013) Maximizing the information content of experiments in systems biology. PLoS Comput Biol 9(1):e1002888. doi:10.1371/journal.pcbi.1002888
- Liu JS (2001) Monte Carlo strategies in scientific computing. Springer, New York
- Mateu J, Müller WG (eds) (2012) Spatio-temporal design: advances in efficient data acquisition. Wiley, Chichester
- Melles SJ, Heuvelink GBM, Twenhöfel CJW, van Dijk A, Hiemstra PH, Baume O, Stöhlker U (2011) Optimizing the spatial pattern of networks for monitoring radioactive releases. Comput Geosci 37(3):280–288. doi:10.1016/j.cageo.2010.04.007
- Müller P (1999) Simulation based optimal design. In: Bernardo JM, Berger JO, Dawid AP, Smith AFM (eds) Bayesian statistics 6. Oxford University Press, New York, pp 459–474
- Müller P, Sansó B, De Iorio M (2004) Optimal Bayesian design by inhomogeneous Markov chain simulation. J Am Stat Assoc 99(467):788–798. doi:10.1198/016214504000001123
- Müller WG (2007) Collecting spatial data: optimum design of experiments for random fields, 3rd rev. and extended edn. Springer, Heidelberg
- Pickands J (1981) Multivariate extreme value distributions. In: Proceedings of the 43rd Session of the International Statistical Institute
- Ribatet M, Singleton R (2013) SpatialExtremes: modelling spatial extremes. URL http://spatialextremes.r-forge.r-project.org/, R package version 2.0
- Schlather M (2002) Models for stationary max-stable random fields. Extremes 5(1):33–44. doi:10.1023/A:1020977924878
- Sevcikova H, Rossini AJ (2012) rlecuyer: R interface to RNG with multiple streams. URL http://cran.r-project.org/web/packages/rlecuyer/index.html, R package version 0.3
- Sisson SA, Fan Y (2011) Likelihood-free Markov chain Monte Carlo. In: Brooks SP, Gelman A, Jones G, Meng XL (eds) Handbook of Markov chain Monte Carlo. Handbooks of modern statistical methods. Chapman and Hall/CRC Press, Boca Raton, pp 319–341
- Smith RL (1990) Max-stable processes and spatial extremes. Technical report, URL http://www.stat.unc.edu/postscript/rs/spatex, downloaded 1 July 2014
- Spöck G, Pilz J (2010) Spatial sampling design and covariance-robust minimax prediction based on convex design ideas. Stoch Environ Res Risk A 24(3):463–482. doi:10.1007/s00477-009-0334-y
- Stein A, Ettema C (2003) An overview of spatial sampling procedures and experimental design of spatial studies for ecosystem comparisons. Agric Ecosyst Environ 94(1):31–47. doi:10.1016/S0167-8809(02)00013-0
- Stephenson AG (2002) evd: Extreme value distributions. R News 2(2), URL http://CRAN.R-project.org/doc/Rnews/
- Tierney L, Rossini AJ, Li N, Sevcikova H (2013) snow: Simple network of workstations. URL http://cran.r-project.org/web/packages/snow/index.html, R package version 0.3
- Toni T, Welch D, Strelkowa N, Ipsen A, Stumpf MPH (2009) Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J Roy Soc Interface 6(31):187–202. doi:10.1098/rsif.2008.0172