Statistical Papers. 2023 Mar 30;64(4):1275–1304. doi: 10.1007/s00362-023-01436-x

Discrimination between Gaussian process models: active learning and static constructions

Elham Yousefi 1, Luc Pronzato 2, Markus Hainy 1, Werner G Müller 1, Henry P Wynn 3
PMCID: PMC10462591  PMID: 37650050

Abstract

The paper covers the design and analysis of experiments to discriminate between two Gaussian process models with different covariance kernels, such as those widely used in computer experiments, kriging, sensor location and machine learning. Two frameworks are considered. First, we study sequential constructions, where successive design (observation) points are selected, either as additional points to an existing design or from the beginning of observation. The selection relies on the maximisation of the difference between the symmetric Kullback–Leibler divergences for the two models, which depends on the observations, or on the mean squared error of both models, which does not. Then, we consider static criteria, such as the familiar log-likelihood ratios and the Fréchet distance between the covariance functions of the two models. Other distance-based criteria, simpler to compute than the previous ones, are also introduced, for which, in the framework of approximate design, a necessary condition for the optimality of a design measure is provided. The paper includes a study of the mathematical links between the different criteria, and numerical illustrations are provided.

Keywords: Model discrimination, Gaussian random field, Kriging

Introduction

The term ‘active learning’ [cf. Hino (2020) for a recent review] has replaced the traditional (sequential or adaptive) ‘design of experiments’ in the computer science literature, typically when the response is approximated by Gaussian process regression [GPR, cf. Sauer et al. (2022)]. It refers to selecting the most suitable inputs so as to extract the maximum amount of information from the outputs, usually with the aim of improving prediction accuracy. A good overview is given in Chapter 6 of Gramacy (2020).

Frequently the aim of an experiment—in the broad sense of any data acquisition exercise—may rather be the discrimination between two or more potential explanatory models. When data can be sequentially collected during the experimental process, the literature goes back to the classic procedure of Hunter and Reiner (1965) and has generated ongoing research [see e.g. Schwaab et al. (2008), Olofsson et al. (2018) and Heirung et al. (2019)]. When the design needs to be fixed before the experiment and thus no intermediate data will be available, the literature is less developed. While in the classical (non)linear regression case the criterion of T-optimality [cf. Atkinson and Fedorov (1975)] and the numerous papers extending it were a major step, a similar breakthrough for Gaussian process regression is lacking.

With this paper we would like to investigate various sequential/adaptive and non-sequential design schemes for discriminating between the covariance structures of GPRs, and to compare their relative properties. When the observations associated with the already collected points are available, one may base the criterion on the predictions and prediction errors (Sect. 3.1). On the one hand, a natural choice is to put the next design point where the symmetric Kullback–Leibler divergence between the two predictive (normal) distributions is largest. On the other hand, when the associated observations are not available, the incremental construction of the designs can be based on the mean squared error (MSE) for both models, assuming in turn that either of the two models is the true one (Sect. 3.2). We theoretically investigate the asymptotic differences of the criteria with respect to their discriminatory power.

The static construction of a set of optimal designs of given size for nominal model parameters is the last setting we consider (Sect. 4). Our first choice is to use the difference between the expected values of the log-likelihood ratios, assuming in turn that either of the two models is the true one. This is actually a function of the symmetric Kullback–Leibler divergence, which also arises from Bayesian considerations. In a similar spirit, the Fréchet distance between two covariance matrices provides another natural criterion. Some further novel but simple approaches are considered in this paper as well. In particular, we are interested in whether complex likelihood-based criteria like the Kullback–Leibler divergence can be effectively replaced by simpler ones based directly on the respective covariance kernels. The construction of optimal design measures for model discrimination (approximate design theory) is considered in Sect. 5, where we investigate the geometric properties of some of the newly introduced criteria.

Finally, to compare the discriminatory power of the designs resulting from the different criteria, one can compute the correct classification (hit) rates after selecting the model with the higher likelihood value. In Sect. 6, a numerical illustration is provided for two Matérn kernels with different smoothness. Furthermore, we confirm the theoretical considerations about optimal design measures from Sect. 5 in a numerical example.

Except for adaptive designs, where the parameter estimates are continuously updated as new data arrive, we assume that the parameters of the models between which we want to discriminate are known. Therefore, our results are relevant in situations where there is strong prior knowledge about the possible models, for example through previously collected data.

Notation

One of the most popular design criteria for discriminating between rival models is T-optimality (Atkinson and Fedorov 1975). This criterion is only applicable when the observations are independent and normally distributed with a constant variance. López-Fidalgo et al. (2007) generalised the normality assumption and developed an optimal discriminating design criterion to choose among non-normal models. The criterion is based on the log-likelihood ratio test under the assumption of independent observations. We denote by φ0(y,x,θ0) and φ1(y,x,θ1) the two rival probability density functions for one observation y at point x. The following system of hypotheses might be considered:

$$H_0:\ \varphi(y,x)=\varphi_0(y,x,\theta_0), \qquad H_1:\ \varphi(y,x)=\varphi_1(y,x,\theta_1),$$

where φ1(y,x,θ1) is assumed to be the true model. A common test statistic is the log-likelihood ratio given as

$$L = -\log\frac{\varphi_0(y,x,\theta_0)}{\varphi_1(y,x,\theta_1)} = \log\frac{\varphi_1(y,x,\theta_1)}{\varphi_0(y,x,\theta_0)},$$

where the null hypothesis is rejected when φ1(y,x,θ1)>φ0(y,x,θ0) or equivalently when L>0. The power of the test refers to the expected value of the log-likelihood ratio criterion under the alternative hypothesis H1. We have

$$\mathrm{E}_{H_1}(L) = \mathrm{E}_1(L) = \int \varphi_1(y,x,\theta_1)\,\log\frac{\varphi_1(y,x,\theta_1)}{\varphi_0(y,x,\theta_0)}\,\mathrm{d}y = D_{\mathrm{KL}}(\varphi_1\,\|\,\varphi_0), \qquad (1)$$

where $D_{\mathrm{KL}}(\varphi_1\,\|\,\varphi_0)$ is the Kullback–Leibler distance between the true and the alternative model (Kullback and Leibler 1951).

Interchanging the two models in the null and the alternative hypothesis, the power of the test would be

$$\mathrm{E}_0(-L) = D_{\mathrm{KL}}(\varphi_0\,\|\,\varphi_1). \qquad (2)$$

If it is not clear in advance which of the two models is the true model, one might consider searching for a design that optimises a convex combination of (1) and (2), most commonly using weights 1/2 for each model. This would be equivalent to maximising the symmetric Kullback–Leibler distance

$$D_{\mathrm{KL}}(\varphi_0,\varphi_1) = \tfrac{1}{2}\left[D_{\mathrm{KL}}(\varphi_0\,\|\,\varphi_1) + D_{\mathrm{KL}}(\varphi_1\,\|\,\varphi_0)\right].$$

In this paper we will consider random fields, i.e. we will allow for correlated observations. As we assume that the mean function is known and the same for all models, without loss of generality we can set the mean function equal to zero everywhere. We are solely concerned with discrimination with respect to the covariance structure of the random fields. When the random fields are Gaussian, we might still base the design strategy on the log-likelihood ratio criterion to choose among two rival models.

For a positive definite kernel $K(x,x')$ and an $n$-point design $X_n=(x_1,\dots,x_n)$, $k_n(x)$ is the $n$-dimensional vector $(K(x,x_1),\dots,K(x,x_n))^\top$ and $K_n$ is the $n\times n$ (kernel) matrix with elements $\{K_n\}_{i,j}=K(x_i,x_j)$. Although $x$ is not bold, it may correspond to a point in a (compact) set $\mathcal{X}\subset\mathbb{R}^d$. Assume that $Y(x)$ corresponds to the realisation of a random field $Z_x$, indexed by $x$ in $\mathcal{X}$, with zero mean $\mathrm{E}\{Z_x\}=0$ for all $x$ and covariance $\mathrm{E}\{Z_x Z_{x'}\}=K(x,x')$ for all $(x,x')\in\mathcal{X}^2$. Our prediction of a future observation $Y(x)$ based on observations $\mathbf{Y}_n=(Y(x_1),\dots,Y(x_n))^\top$ corresponds to the best linear unbiased predictor (BLUP) $\hat\eta_n(x)=k_n^\top(x)K_n^{-1}\mathbf{Y}_n$. The associated prediction error is $e_n(x)=Y(x)-\hat\eta_n(x)$ and we have

$$\mathrm{E}\{e_n^2(x)\} = \rho_n^2(x) = K(x,x) - k_n^\top(x)K_n^{-1}k_n(x).$$

The index $n$ will often be omitted when there is no ambiguity; in that case $k_i(x)=k_{n,i}(x)$, $K_i=K_{n,i}$, $e_i(x)=e_{n,i}(x)$ and $\rho_i^2(x)=\rho_{n,i}^2(x)$ will refer to model $i$, with $i\in\{0,1\}$. We shall need to distinguish between the cases where the truth is model 0 or model 1, and following Stein (1999, p. 58) we denote by $\mathrm{E}_i$ the expectation computed when model $i$ is assumed to be true. We reserve the notation $\rho_i^2(x)$ for the case where the expectation is computed with the true model; i.e.,

$$\rho_i^2(x) = \mathrm{E}_i\{e_i^2(x)\}.$$

Hence we have $\rho_0^2(x)=\mathrm{E}_0\{e_0^2(x)\}=K_0(x,x)-k_0^\top(x)K_0^{-1}k_0(x)$ and calculation gives

$$\mathrm{E}_0\{e_1^2(x)\} = K_0(x,x) + k_1^\top(x)K_1^{-1}K_0K_1^{-1}k_1(x) - 2\,k_1^\top(x)K_1^{-1}k_0(x),$$
$$\mathrm{E}_0\{[e_1(x)-e_0(x)]^2\} = \mathrm{E}_0\{e_1^2(x)\} - \mathrm{E}_0\{e_0^2(x)\}, \qquad (3)$$

with an obvious permutation of the indices 0 and 1 when model 1 is assumed to be true to compute $\mathrm{E}_1\{\cdot\}$.
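To make these quantities concrete, the following short Python sketch (our illustration, not part of the original study) evaluates $\rho_0^2(x)=\mathrm{E}_0\{e_0^2(x)\}$ and the cross-model MSE $\mathrm{E}_0\{e_1^2(x)\}$ of (3) on a one-dimensional design; the exponential kernels and all function names are purely illustrative assumptions.

```python
import numpy as np

def exp_kernel(alpha):
    # isotropic exponential kernel on R, used only for illustration
    return lambda A, B: np.exp(-alpha * np.abs(np.subtract.outer(A, B)))

def mse_under_model0(x, X, K0, K1):
    """Return (E0{e0^2(x)}, E0{e1^2(x)}) at a scalar site x for design X,
    with K0 the kernel assumed true and K1 the rival one, cf. Eq. (3)."""
    k0 = K0(np.array([x]), X).ravel()          # k_0(x)
    k1 = K1(np.array([x]), X).ravel()          # k_1(x)
    K0n, K1n = K0(X, X), K1(X, X)
    w0 = np.linalg.solve(K0n, k0)              # K_0^{-1} k_0(x)
    w1 = np.linalg.solve(K1n, k1)              # K_1^{-1} k_1(x)
    var0 = K0(np.array([x]), np.array([x]))[0, 0]
    e0_sq = var0 - k0 @ w0                     # rho_0^2(x)
    e1_sq = var0 + w1 @ K0n @ w1 - 2.0 * w1 @ k0
    return e0_sq, e1_sq

X = np.linspace(0, 1, 6)
print(mse_under_model0(0.55, X, exp_kernel(1.0), exp_kernel(10.0)))
```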

If model 0 is correct, the prediction error is larger when we use model 1 for prediction than if we use the BLUP (i.e., model 0). Stein (1999, p. 58) shows that the relation

$$\frac{\mathrm{E}_0\{e_1^2(x)\}}{\mathrm{E}_0\{e_0^2(x)\}} = 1 + \frac{\mathrm{E}_0\{[e_1(x)-e_0(x)]^2\}}{\mathrm{E}_0\{e_0^2(x)\}}$$

shown above is valid more generally for models with linear trends.

Also of interest is the assumed mean squared error (MSE) E1{e12(x)} when we use model 1 for assessing the prediction error (because we think it is correct) while the truth is model 0, and in particular the ratio

$$\frac{\mathrm{E}_1\{e_1^2(x)\}}{\mathrm{E}_0\{e_1^2(x)\}} = \frac{K_1(x,x) - k_1^\top(x)K_1^{-1}k_1(x)}{\mathrm{E}_0\{e_1^2(x)\}},$$

which may be larger or smaller than one.

Another important issue concerns the choice of the covariance parameters in $K_0$ and $K_1$. Denote $K_i(x,x')=\sigma_i^2\,C_{i,\theta_i}(x,x')$, $i=0,1$, $(x,x')\in\mathcal{X}^2$, where the $\sigma_i^2$ define the variance, the $\theta_i$ may correspond to correlation lengths in a translation-invariant model and are thus scalar in the isotropic case, and $C_{i,\theta_i}(x,x')$ defines a correlation.

Prediction-based discrimination

For the incremental construction of a design for model discrimination, points are added conditionally on previous design points. We can distinguish the case where the observations associated with those previous points are available and can thus be used to construct a sequence of predictions (sequential, i.e., conditional, construction) from the unconditional case where observations are not used.

Sequential (conditional) design

Consider stage $n$, where $n$ design points $X_n$ and $n$ observations $\mathbf{Y}_n$ are available. Assuming that the random field is Gaussian, when model $i$ is true we have $Y(x)\sim\mathcal{N}(\hat\eta_{n,i}(x),\rho_{n,i}^2(x))$. A rather natural choice is to select the next design point $x_{n+1}$ where the symmetric Kullback–Leibler divergence between those two normal distributions is largest; that is,

$$x_{n+1} \in \operatorname*{Arg\,max}_{x\in\mathcal{X}}\left\{\frac{\rho_{n,0}^2(x)}{\rho_{n,1}^2(x)} + \frac{\rho_{n,1}^2(x)}{\rho_{n,0}^2(x)} + [\hat\eta_{n,1}(x)-\hat\eta_{n,0}(x)]^2\left[\frac{1}{\rho_{n,0}^2(x)} + \frac{1}{\rho_{n,1}^2(x)}\right]\right\}. \qquad (4)$$

Other variants could be considered as well, such as

$$x_{n+1} \in \operatorname*{Arg\,max}_{x\in\mathcal{X}}\ [\hat\eta_{n,1}(x)-\hat\eta_{n,0}(x)]^2, \qquad x_{n+1} \in \operatorname*{Arg\,max}_{x\in\mathcal{X}}\ \frac{[\hat\eta_{n,1}(x)-\hat\eta_{n,0}(x)]^2}{\rho_{n,0}^2(x) + \rho_{n,1}^2(x)}, \qquad x_{n+1} \in \operatorname*{Arg\,max}_{x\in\mathcal{X}}\ [\hat\eta_{n,1}(x)-\hat\eta_{n,0}(x)]^2\left[\frac{1}{\rho_{n,0}^2(x)} + \frac{1}{\rho_{n,1}^2(x)}\right].$$

They will not be considered in the rest of the paper.
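For readers who prefer code, here is a minimal sketch (ours, with hypothetical kernel choices and function names) of the sequential rule (4): it forms the two predictive normal distributions from the current data and selects the candidate point with the largest symmetric Kullback–Leibler term.

```python
import numpy as np

def kern(alpha):
    # isotropic exponential kernel on R, used only for illustration
    return lambda A, B: np.exp(-alpha * np.abs(np.subtract.outer(A, B)))

def predictive(x_cand, X, Y, K):
    """BLUP mean and kriging variance at the candidate points."""
    Kn, kx = K(X, X), K(x_cand, X)                 # kx has shape (m, n)
    w = np.linalg.solve(Kn, kx.T)                  # K_n^{-1} k_n(x)
    mean = w.T @ Y
    var = K(x_cand, x_cand).diagonal() - np.einsum('nm,nm->m', kx.T, w)
    return mean, np.maximum(var, 1e-12)

def next_point_skl(X, Y, x_cand, K0, K1):
    """Rule (4): candidate maximising the symmetric KL term between the two
    predictive normal distributions."""
    m0, v0 = predictive(x_cand, X, Y, K0)
    m1, v1 = predictive(x_cand, X, Y, K1)
    crit = v0 / v1 + v1 / v0 + (m1 - m0) ** 2 * (1.0 / v0 + 1.0 / v1)
    return x_cand[int(np.argmax(crit))]

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 5)
Y = rng.multivariate_normal(np.zeros(5), kern(1.0)(X, X))      # data simulated from model 0
x_next = next_point_skl(X, Y, np.linspace(0, 1, 201), kern(1.0), kern(10.0))
```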

If necessary one can use plug-in estimates σ^n,i2 and θ^n,i of σi2 and θi, for instance maximum likelihood (ML) or leave-one-out estimates based on Xn and Yn, when we choose xn+1. Note that the value of σ2 does not affect the BLUP η^n(x)=knKn-1Yn. In the paper we do not address the issues related to the estimation of σ2 or of the correlation length or smoothness parameters of the kernel; one may refer to Karvonen et al. (2020) and the recent papers Karvonen (2022), Karvonen and Oates (2022) for a detailed investigation. The connection between the notion of microergodicity, related to the consistency of the maximum-likelihood estimator, and discrimination through a KL divergence criterion is nevertheless considered in Example 1 below.

Incremental (unconditional) design

Consider stage n, where n design points Xn are available. We base the choice of the next point on the difference between the MSEs for both models, assuming that one or the other is true. For instance, assuming that model 0 is true, the difference between the MSEs is E0{e12(x)}-E0{e02(x)}=E0{[e1(x)-e0(x)]2}=E0{[η^n,1(x)-η^n,0(x)]2}.

A first, un-normalised, version is thus

$$\phi_A(x) = \mathrm{E}_0\{[e_1(x)-e_0(x)]^2\} + \mathrm{E}_1\{[e_1(x)-e_0(x)]^2\} = \mathrm{E}_0\{e_1^2(x)\} - \mathrm{E}_0\{e_0^2(x)\} + \mathrm{E}_1\{e_0^2(x)\} - \mathrm{E}_1\{e_1^2(x)\}. \qquad (5)$$

A normalisation seems in order here too, such as

$$\phi_B(x) = \frac{\mathrm{E}_0\{[e_1(x)-e_0(x)]^2\}}{\rho_{n,0}^2(x)} + \frac{\mathrm{E}_1\{[e_1(x)-e_0(x)]^2\}}{\rho_{n,1}^2(x)} = \frac{\mathrm{E}_0\{e_1^2(x)\}}{\mathrm{E}_0\{e_0^2(x)\}} + \frac{\mathrm{E}_1\{e_0^2(x)\}}{\mathrm{E}_1\{e_1^2(x)\}} - 2. \qquad (6)$$

A third criterion is based on the variation of the symmetric Kullback-Leibler divergence (10) of Sect. 4 when adding an (n+1)-th point x to Xn. Direct calculation, using

$$K_{n+1,i} = \begin{pmatrix} K_{n,i} & k_{n,i}(x) \\ k_{n,i}^\top(x) & K_i(x,x) \end{pmatrix}, \qquad i=0,1,$$

and the expression of the inverse of a block matrix, gives

$$\Phi_{\mathrm{KL}}[K_0,K_1](X_n\cup\{x\}) = \Phi_{\mathrm{KL}}[K_0,K_1](X_n) + \frac{1}{2}\left[\frac{\mathrm{E}_1\{e_0^2(x)\}}{\mathrm{E}_0\{e_0^2(x)\}} + \frac{\mathrm{E}_0\{e_1^2(x)\}}{\mathrm{E}_1\{e_1^2(x)\}}\right] - 1.$$

We thus define

$$\phi_{\mathrm{KL}}(x) = \frac{1}{2}\left[\frac{\mathrm{E}_1\{e_0^2(x)\}}{\mathrm{E}_0\{e_0^2(x)\}} + \frac{\mathrm{E}_0\{e_1^2(x)\}}{\mathrm{E}_1\{e_1^2(x)\}}\right] - 1, \qquad (7)$$

to be maximised with respect to xX.

Although the σi2 do not affect predictions, Ei{ej2(x)} is proportional to σi2. Unless specific information is available, it seems reasonable to assume that σ02=σ12=1. Other parameters θi should be chosen to make the two kernels the most similar, which seems easier to consider in the approach presented in Sect. 4, see (11). In the rest of this section we suppose that the parameters of both kernels are fixed.
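The three incremental criteria only require the four MSE terms $\mathrm{E}_i\{e_j^2(x)\}$, $i,j\in\{0,1\}$. A possible implementation is sketched below (our illustration; kernel functions are assumed to map two arrays of points to the corresponding cross-covariance matrix, as in the earlier sketches).

```python
import numpy as np

def cross_mse(x_cand, X, Ktrue, Kwrong):
    """E_true{e_wrong^2(x)}: MSE of the BLUP built with the 'wrong' kernel,
    evaluated under the 'true' kernel, cf. Eq. (3)."""
    Kt, Kw = Ktrue(X, X), Kwrong(X, X)
    kt, kw = Ktrue(x_cand, X), Kwrong(x_cand, X)             # shapes (m, n)
    w = np.linalg.solve(Kw, kw.T)                            # K_wrong^{-1} k_wrong(x)
    return (Ktrue(x_cand, x_cand).diagonal()
            + np.einsum('nm,nk,km->m', w, Kt, w)
            - 2.0 * np.einsum('nm,mn->m', w, kt))

def phi_criteria(x_cand, X, K0, K1):
    """phi_A (5), phi_B (6) and phi_KL (7) over an array of candidate points."""
    e00 = cross_mse(x_cand, X, K0, K0)    # E0{e_0^2} = rho_{n,0}^2
    e11 = cross_mse(x_cand, X, K1, K1)    # E1{e_1^2} = rho_{n,1}^2
    e01 = cross_mse(x_cand, X, K0, K1)    # E0{e_1^2}
    e10 = cross_mse(x_cand, X, K1, K0)    # E1{e_0^2}
    phiA = (e01 - e00) + (e10 - e11)
    phiB = e01 / e00 + e10 / e11 - 2.0
    phiKL = 0.5 * (e10 / e00 + e01 / e11) - 1.0
    return phiA, phiB, phiKL
```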

The un-normalised version ϕA(x) given by (5) could be used to derive a one-step (non-incremental) criterion, in the same spirit as those of Sect. 4, through integration with respect to x for a given measure μ on X. Indeed, we have

$$\mathrm{E}_0\{[e_1(x)-e_0(x)]^2\} = k_0^\top(x)K_0^{-1}k_0(x) + k_1^\top(x)K_1^{-1}K_0K_1^{-1}k_1(x) - 2\,k_1^\top(x)K_1^{-1}k_0(x),$$

so that

$$\int_{\mathcal{X}} \mathrm{E}_0\{[e_1(x)-e_0(x)]^2\}\,\mathrm{d}\mu(x) = \operatorname{trace}\left[K_0^{-1}A_0(\mu) + K_1^{-1}K_0K_1^{-1}A_1(\mu) - 2\,K_1^{-1}A_{0,1}(\mu)\right],$$

where $A_i(\mu)=\int_{\mathcal{X}} k_i(x)k_i^\top(x)\,\mathrm{d}\mu(x)$, $i=0,1$, and $A_{0,1}(\mu)=\int_{\mathcal{X}} k_0(x)k_1^\top(x)\,\mathrm{d}\mu(x)$. Similarly,

$$\int_{\mathcal{X}} \mathrm{E}_1\{[e_1(x)-e_0(x)]^2\}\,\mathrm{d}\mu(x) = \operatorname{trace}\left[K_1^{-1}A_1(\mu) + K_0^{-1}K_1K_0^{-1}A_0(\mu) - 2\,K_0^{-1}A_{0,1}(\mu)\right].$$

The matrices Ai(μ) and A0,1(μ) can be calculated explicitly for some kernels and measures μ. This happens in particular when X=[0,1]d, the two kernels Ki are separable, i.e., products of one-dimensional kernels on [0, 1], and μ is uniform on X.
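When closed-form expressions for $A_i(\mu)$ and $A_{0,1}(\mu)$ are not available, they can be approximated by simple Monte Carlo sampling from $\mu$. The sketch below (ours, for $\mu$ uniform on $[0,1]$ and with assumed kernel-function signatures) combines such approximations with the two trace formulas above.

```python
import numpy as np

def integrated_phiA(X, K0, K1, n_mc=20_000, rng=np.random.default_rng(0)):
    """Monte Carlo sketch of the integrated un-normalised criterion: approximate
    A_0(mu), A_1(mu) and A_{0,1}(mu) for mu uniform on [0, 1], then use the trace
    formulas above. K0, K1 are assumed to return cross-covariance matrices."""
    xs = rng.uniform(size=n_mc)
    k0, k1 = K0(xs, X), K1(xs, X)                                    # rows are k_i(x)^T
    A0, A1, A01 = k0.T @ k0 / n_mc, k1.T @ k1 / n_mc, k0.T @ k1 / n_mc
    K0n, K1n = K0(X, X), K1(X, X)
    K0i, K1i = np.linalg.inv(K0n), np.linalg.inv(K1n)
    t0 = np.trace(K0i @ A0 + K1i @ K0n @ K1i @ A1 - 2 * K1i @ A01)   # E0 part
    t1 = np.trace(K1i @ A1 + K0i @ K1n @ K0i @ A0 - 2 * K0i @ A01)   # E1 part
    return t0 + t1
```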

Example 1: exponential covariance, no microergodic parameters

We consider Example 6 in Stein (1999, p. 74) and take $K_i(x,x')=e^{-\alpha_i|x-x'|}/\alpha_i$, $i=0,1$. The example focuses on two difficulties: first, the two kernels only differ by their parameter values; second, the particular relation between the variance and the correlation length makes the parameters $\alpha_i$ not microergodic, so that they cannot be estimated consistently from observations on a bounded interval; see Stein (1999, Chap. 6). It is interesting to investigate the behaviour of the criteria (5), (6) and (7) in this particular situation.

We suppose that $n$ observations are made at $x_i=(i-1)/(n-1)$, $i=1,\dots,n$, $n\geq 2$. We denote by $\delta=\delta_n=1/[2(n-1)]$ the half-distance between two consecutive design points. The particular Markovian property of random processes with kernels $K_i$ simplifies the analysis. The prediction and MSE at a given $x\in(0,1)$ only depend on the position of $x$ relative to its two closest neighbouring design points; all other points have no influence. Therefore, due to the regular spacing of the $x_i$, we only need to consider the behaviour in one (any) interval $I_i=[a_i,b_i]=[x_i,x_{i+1}]$.

We always have $\phi_A(x)\to 0$ as $x\to x_i\in X_n$. Numerical calculation shows that for $\delta_n$ small enough, $\phi_A(\cdot)$ has a unique maximum in $I_i$, at the centre $C_i=(x_i+x_{i+1})/2$. The next design point $x_{n+1}$ that maximises $\phi_A(\cdot)$ is then taken at $C_i$ for one of the $n-1$ intervals, and we get

$$\phi_A(C_i) = \frac{1}{4}\,\frac{(\alpha_1-\alpha_0)^2(\alpha_1+\alpha_0)^3}{\alpha_0\alpha_1}\,\delta_n^4 + O(\delta_n^5), \qquad n\to\infty.$$

Similar results apply to the case where the design $X_n$ contains the endpoints 0 and 1 and its covering radius $\mathrm{CR}(X_n)=\max_{x\in[0,1]}\min_{i=1,\dots,n}|x-x_i|$ tends to zero, the points $x_i$ being not necessarily equally spaced: $C_i$ is then the centre of the largest interval $[x_i,x_{i+1}]$ and $\delta_n=\mathrm{CR}(X_n)$.

When $\delta_n$ is large compared to the correlation lengths $1/\alpha_0$ and $1/\alpha_1$, there exist two maxima, symmetric with respect to $C_i$, that get closer to the extremities of $I_i$ as $\alpha_1$ increases, and $C_i$ then corresponds to a local minimum of $\phi_A(\cdot)$. This happens for instance when $\alpha_0\delta_n=1$ and $\alpha_1\delta_n\gtrsim 2.600455$.

A similar behaviour is observed for ϕB(x) and ϕKL(x): for small enough δn they both have a unique maximum in Ii at Ci, with now

$$\phi_B(C_i) = \frac{1}{4}\,\frac{(\alpha_1-\alpha_0)^2(\alpha_1+\alpha_0)^3}{\alpha_0\alpha_1}\,\delta_n^3 + O(\delta_n^4), \qquad \phi_{\mathrm{KL}}(C_i) = \frac{1}{8}\,\frac{(\alpha_1-\alpha_0)^2(\alpha_1+\alpha_0)^3}{\alpha_0\alpha_1}\,\delta_n^3 + O(\delta_n^4), \qquad n\to\infty.$$

Also, $\phi_B(x)\to 0$ and $\phi_{\mathrm{KL}}(x)\to 0$ as $x\to x_i\in X_n$. For large values of $\delta_n$ compared to the correlation lengths $1/\alpha_0$ and $1/\alpha_1$, there exist two maxima in $I_i$, symmetric with respect to $C_i$. When $\alpha_0\delta_n=1$, this happens for instance when $\alpha_1\delta_n\gtrsim 2.020178$ for $\phi_B(\cdot)$ and when $\alpha_1\delta_n\gtrsim 7.251623$ for $\phi_{\mathrm{KL}}(\cdot)$. However, in the second case the function is practically flat between the two maxima.

The left panel of Fig. 1 presents $\phi_A(x)$, $\phi_B(x)$ and $\phi_{\mathrm{KL}}(x)$ for $x\in[x_1,x_2]=[0,0.1]$ when $n=11$ ($\delta_n=0.05$) and $\alpha_0=1$, $\alpha_1=10$. The right panel is for $\alpha_0\delta_n=1$, $\alpha_1\delta_n=10$.

This behaviour of $\phi_{\mathrm{KL}}(C_i)$ for small $\delta_n$ sheds light on the fact that $\alpha$ is not estimable in this model. Indeed, consider a sequence of embedded $n_k$-point designs $X_{n_k}$, initialised with the design $X_n=X_{n_0}$ considered above and with $n_k=2^k(n_0-1)+1$, all these designs having the form $x_i=(i-1)/(n_k-1)$, $i=1,\dots,n_k$. Then $\mathrm{CR}(X_j)=\mathrm{CR}(X_{n_k})=\delta_{n_k}=1/[2(n_k-1)]$ for $j=n_k,\dots,n_{k+1}-1=2n_k-2$. For $k$ large enough, the increase in Kullback–Leibler divergence (10) from $X_{n_k}$ to $X_{n_{k+1}}$ is thus bounded by $c/(n_k-1)^2$ for some $c>0$, so that the expected log-likelihood ratio $\mathrm{E}_0\{L_{n_k}\}-\mathrm{E}_1\{L_{n_k}\}$ remains bounded as $k\to\infty$.

More generally, denote by $0\leq x_1\leq x_2\leq\cdots\leq x_n\leq 1$ the ordered points of an $n$-point design $X_n$ in $[0,1]$, $n\geq 3$. Let $i\geq 3$ be such that $|x_{i-2}-x_i|=\min_{i=3,\dots,n}|x_{i-2}-x_i|$. Then necessarily $|x_{i-2}-x_i|\leq 1/(n/2-1)$. Indeed, consider the following iterative modification of $X_n$ that cannot decrease $\min_{i=3,\dots,n}|x_{i-2}-x_i|$: first, move $x_1$ to zero, then move $x_2$ to $x_1$; leave $x_3$ unchanged, but move $x_4$ to $x_3$, etc. For $n$ even, the design $X_n'$ obtained is the duplication of an $(n/2)$-point design; for $n$ odd, only the right-most point $x_n$ remains single. In the first case, the minimum distance between points of $X_n'$ is at most $1/(n/2-1)$; in the second case it is at most $1/(n/2-1)$ as well. We then define $X_{n-1}=X_n\setminus\{x_{i-1}\}$. For $n$ large enough, the increase in Kullback–Leibler divergence (10) from $X_{n-1}$ to $X_n$ is thus bounded by $c/(n/2-1)^3$ for some $c>0$ depending on $\alpha_0$ and $\alpha_1$. Starting from some design $X_{n_0}$, we thus have, for $n_0$ large enough,

$$\Phi_{\mathrm{KL}}[K_0,K_1](X_n) - \Phi_{\mathrm{KL}}[K_0,K_1](X_{n_0}) \leq c\sum_{k=n_0+1}^{n}\frac{1}{(k/2-1)^3},$$

which implies $\lim_{n\to\infty}\Phi_{\mathrm{KL}}[K_0,K_1](X_n)\leq B$ for some $B<\infty$. Assuming, without any loss of generality, that model 0 is correct, we have $0\leq \mathrm{E}_0\{L_n\}\leq B$ (we get $0\leq \mathrm{E}_1\{-L_n\}\leq B$ when we assume that model 1 is correct), implying in particular that $L_n$ does not tend to infinity a.s. and that the ML estimator of $\alpha$ is not strongly consistent.

Example 2: exponential covariance, microergodic parameters

Consider now two exponential covariance models with identical variances (which we take equal to one without any loss of generality): $K_i(x,x')=e^{-\alpha_i|x-x'|}$, $i=0,1$.

Again, $\phi_A(x)\to 0$ as $x\to x_i\in X_n$ and $\phi_A(\cdot)$ has a unique maximum at $C_i$ for small enough $\delta_n$, with now

$$\phi_A(C_i) = \frac{1}{2}\left(\alpha_1^2-\alpha_0^2\right)^2\delta_n^4 + O(\delta_n^5), \qquad n\to\infty.$$

There are two maxima for $\phi_A(\cdot)$ in $I_i$, symmetric with respect to $C_i$, for large $\delta_n$: when $\alpha_0\delta_n=1$, this happens for instance when $\alpha_1\delta_n\gtrsim 2.558545$. Nothing is changed for $\phi_B(\cdot)$ compared to Example 1, as the variances cancel in the ratios that define $\phi_B(\cdot)$, see (3) and (6). The situation is quite different for $\phi_{\mathrm{KL}}(\cdot)$, with

$$\phi_{\mathrm{KL}}(C_i) = \frac{(\alpha_1-\alpha_0)^2}{2\,\alpha_0\alpha_1} + O(\delta_n), \qquad n\to\infty,$$

indicating that it is indeed possible to distinguish between the two models much more efficiently with this criterion than with the other two. Interestingly enough, the best choice for the next design point is not at $C_i$ but always as close as possible to one of the endpoints $a_i$ or $b_i$, with however a criterion value similar to that at the centre $C_i$ when $\delta_n$ is small enough, as $\lim_{x\to x_i}\phi_{\mathrm{KL}}(x)=(\alpha_1-\alpha_0)^2/(2\alpha_0\alpha_1)$. Here, the same sequence of embedded designs as in Example 1 ensures that $\mathrm{E}_0\{L_{n_k}\}-\mathrm{E}_1\{L_{n_k}\}\to\infty$ as $k\to\infty$. Figure 2 presents $\phi_A(x)$, $\phi_B(x)$ and $\phi_{\mathrm{KL}}(x)$ in the same configuration as in Fig. 1 but for the kernels $K_i(x,x')=e^{-\alpha_i|x-x'|}$, $i=0,1$.

Fig. 2  $\phi_A(x)$, $\phi_B(x)$ and $\phi_{\mathrm{KL}}(x)$, $x\in[x_1,x_2]$, for $n=11$ ($\delta_n=0.05$) in Example 2. Left: $\alpha_0=1$, $\alpha_1=10$; Right: $\alpha_0=20$, $\alpha_1=200$

Fig. 1  $\phi_A(x)$, $\phi_B(x)$ and $\phi_{\mathrm{KL}}(x)$, $x\in[x_1,x_2]$, for $n=11$ ($\delta_n=0.05$) in Example 1. Left: $\alpha_0=1$, $\alpha_1=10$; Right: $\alpha_0=20$, $\alpha_1=200$

Example 3: Matérn kernels

Take K0 and K1 as the 3/2 and 5/2 Matérn kernels, respectively:

$$K_{0,\theta}(x,x') = \left(1+\sqrt{3}\,\theta|x-x'|\right)\exp\left(-\sqrt{3}\,\theta|x-x'|\right) \quad \text{(Matérn 3/2)}, \qquad (8)$$
$$K_{1,\theta}(x,x') = \left[1+\sqrt{5}\,\theta|x-x'|+5\theta^2|x-x'|^2/3\right]\exp\left(-\sqrt{5}\,\theta|x-x'|\right) \quad \text{(Matérn 5/2)}. \qquad (9)$$

We take $\theta=\theta_0=1$ in $K_{0,\theta}$ and adjust $\theta=\theta_1$ in $K_{1,\theta}$ to minimise $\phi_2[K_{0,\theta_0},K_{1,\theta_1}](\mu)$ defined by Eq. (13) in Sect. 4 with $\mu$ the uniform measure on $[0,1]$, which gives $\theta_1\approx 1.1275$. The left panel of Fig. 3 shows $K_{0,1}(x,0)$ and $K_{1,\theta}(x,0)$ for $\theta=1$ and $\theta=\theta_1$ when $x\in[0,1]$. The right panel presents $\phi_B(x)$ and $\phi_{\mathrm{KL}}(x)$ for the same $n=11$-point equally spaced design $X_n$ as in Example 1 and $x\in[0,1]$, for $K_{0,1}$ and $K_{1,1.1275}$ (the value of $\phi_A(x)$ does not exceed $0.65\times 10^{-4}$ and is not shown). The behaviours of $\phi_B(x)$ and $\phi_{\mathrm{KL}}(x)$ are now different in different intervals $[x_i,x_{i+1}]$ (they remain symmetric with respect to $1/2$, however); the maximum of $\phi_{\mathrm{KL}}(x)$ is obtained at the central point $x_5$. The behaviour of $\phi_{\mathrm{KL}}(\cdot)$ could be related to the fact that discriminating between $K_0$ and $K_1$ amounts to estimating the smoothness of the realisation, which requires that some design points are close to each other.
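The adjustment of $\theta_1$ can be reproduced, at least approximately, with a few lines of code. The sketch below (ours) minimises a Monte Carlo version of $\phi_2[K_{0,1},K_{1,\theta_1}](\mu)$ for $\mu$ uniform on $[0,1]$; the optimiser, sample size and seed are arbitrary choices, and the result should be close to, but need not exactly reproduce, the value $\theta_1\approx 1.1275$ reported above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def matern32(r, theta):
    z = np.sqrt(3.0) * theta * r
    return (1.0 + z) * np.exp(-z)

def matern52(r, theta):
    z = np.sqrt(5.0) * theta * r
    return (1.0 + z + z ** 2 / 3.0) * np.exp(-z)

def phi2_uniform(theta1, n_mc=20_000, rng=np.random.default_rng(1)):
    """Monte Carlo version of phi_2[K_{0,1}, K_{1,theta1}](mu), Eq. (13) with p = 2
    and mu uniform on [0, 1]; only |x - x'| matters for these isotropic kernels."""
    x, xp = rng.uniform(size=n_mc), rng.uniform(size=n_mc)
    r = np.abs(x - xp)
    return np.mean((matern52(r, theta1) - matern32(r, 1.0)) ** 2)

res = minimize_scalar(phi2_uniform, bounds=(0.5, 2.0), method='bounded')
print(res.x)    # theta_1 making the Matern 5/2 kernel closest to the Matern 3/2 one
```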

Fig. 3  Left: $K_{0,1}(x,0)$, $K_{1,1}(x,0)$ and $K_{1,1.1275}(x,0)$, $x\in[0,1]$. Right: $\phi_B(x)$ and $\phi_{\mathrm{KL}}(x)$ for $x\in[0,1]$ and the same 11-point equally spaced design $X_n=\{0,1/10,2/10,\dots,1\}$ as in Example 1, with $K_{0,1}$ and $K_{1,1.1275}$

Distance-based discrimination

We will now consider criteria which are directly based on the discrepancies of the covariance kernels. Ideally those should be simpler to compute and still exhibit reasonable efficiencies and some similar properties. The starting point is again the use of the log-likelihood ratio criterion to choose among the two models. Assuming that the random field is Gaussian, the probability densities of observations Yn for the two models are

$$\varphi_{n,i}(\mathbf{Y}_n) = \frac{1}{(2\pi)^{n/2}\det^{1/2}K_{n,i}}\,\exp\left(-\tfrac{1}{2}\mathbf{Y}_n^\top K_{n,i}^{-1}\mathbf{Y}_n\right), \qquad i=0,1.$$

The expected value of the log-likelihood ratio Ln=logφ(Yn|0)-logφ(Yn|1) under model 0 is

$$\mathrm{E}_0\{L_n\} = \frac{1}{2}\log\det\left(K_{n,1}K_{n,0}^{-1}\right) - \frac{n}{2} + \frac{1}{2}\operatorname{trace}\left(K_{n,0}K_{n,1}^{-1}\right)$$

and similarly

$$\mathrm{E}_1\{L_n\} = \frac{1}{2}\log\det\left(K_{n,1}K_{n,0}^{-1}\right) + \frac{n}{2} - \frac{1}{2}\operatorname{trace}\left(K_{n,1}K_{n,0}^{-1}\right).$$

A good discriminating design should make the difference E0{Ln}-E1{Ln} as large as possible; that is, we should choose Xn that maximises

$$\Phi_{\mathrm{KL}}[K_0,K_1](X_n) = \mathrm{E}_0\{L_n\} - \mathrm{E}_1\{L_n\} = \frac{1}{2}\left[\operatorname{trace}\left(K_{n,0}K_{n,1}^{-1}\right) + \operatorname{trace}\left(K_{n,1}K_{n,0}^{-1}\right)\right] - n = 2\,D_{\mathrm{KL}}(\varphi_{n,0},\varphi_{n,1}), \qquad (10)$$

i.e. twice the symmetric Kullback–Leibler divergence between the normal distributions with densities φn,0 and φn,1, see, e.g., Pronzato et al. (2019).
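Criterion (10) is straightforward to evaluate once the two kernel matrices are available; a minimal sketch (ours, with the Matérn kernels of Example 3 as an illustrative choice) is given below.

```python
import numpy as np

def phi_kl(K0n, K1n):
    """Criterion (10): 0.5 * [tr(K0 K1^{-1}) + tr(K1 K0^{-1})] - n."""
    n = K0n.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(K1n, K0n))
                  + np.trace(np.linalg.solve(K0n, K1n))) - n

# illustrative evaluation on the 11-point equispaced design of Example 1
X = np.linspace(0, 1, 11)
r = np.abs(np.subtract.outer(X, X))
K0n = (1 + np.sqrt(3) * r) * np.exp(-np.sqrt(3) * r)                   # Matern 3/2, theta = 1
K1n = (1 + np.sqrt(5) * r + 5 * r ** 2 / 3) * np.exp(-np.sqrt(5) * r)  # Matern 5/2, theta = 1
print(phi_kl(K0n, K1n))
```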

We may enforce the normalisation σ02=σ12=1 and choose the θi to make the two kernels most similar in the sense of the criterion Φ considered; that is, maximise

$$\min_{\theta_0\in\Theta_0,\,\theta_1\in\Theta_1}\ \Phi_{\mathrm{KL}}[K_0,K_1](X_n). \qquad (11)$$

The choice of $\Theta_0$ and $\Theta_1$ is important; in particular, unconstrained minimisation over the $\theta_i$ could make both kernels completely flat or, on the contrary, close to Dirac distributions. It may thus be preferable to fix $\theta_0$ and minimise over $\theta_1$ without constraints. Also, the Kullback–Leibler distance is sensitive to kernel matrices being close to singular, which might happen if design points are very close to each other. Pronzato et al. (2019) suggest a family of criteria based on matrix distances derived from Bregman divergences between functions of covariance matrices from Kiefer's $\varphi_p$-class of functions (Kiefer 1974). If $p\in(0,1)$, these criteria are rather insensitive to eigenvalues close or equal to zero. Alternatively, they suggest criteria computed as Bregman divergences between squared volumes of random $k$-dimensional simplices for $k\in\{2,\dots,d-1\}$, which have similar properties.

The index n is omitted in the following and we consider fixed parameters for both kernels. The Fréchet-distance criterion

$$\Phi_F[K_0,K_1](X_n) = \operatorname{trace}\left[K_0 + K_1 - 2\left(K_0K_1\right)^{1/2}\right], \qquad (12)$$

related to the Kantorovich (Wasserstein) distance, seems of particular interest due to the absence of matrix inversion. The expression is puzzling since the two matrices do not necessarily commute, but the paper by Dowson and Landau (1982) is illuminating.

Other matrix "entry-wise" distances will be considered, in particular the one based on the (squared) Frobenius norm,

$$\Phi_2[K_0,K_1](X_n) = \operatorname{trace}\left[K_0^2 + K_1^2 - 2\,K_0K_1\right] = \operatorname{trace}\left[(K_0-K_1)^2\right],$$

which corresponds to the substitution of Ki2 for Ki in (12) for i=0,1. Denote more generally

$$\Phi_p[K_0,K_1](X_n) = \|K_1-K_0\|_p^p = \sum_{i,j=1}^{n}\left|\{K_1-K_0\}_{i,j}\right|^p = \mathbf{1}_n^\top\,|K_1-K_0|^{p}\,\mathbf{1}_n, \qquad p>0,$$

where $\mathbf{1}_n$ is the $n$-dimensional vector with all components equal to 1, and both the absolute value and the power $p$ are applied entry-wise.
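Both $\Phi_F$ and $\Phi_p$ are easy to compute; the sketch below (ours) uses scipy's matrix square root for (12), taking the real part to discard numerically negligible imaginary components, and a direct entry-wise sum for $\Phi_p$.

```python
import numpy as np
from scipy.linalg import sqrtm

def phi_frechet(K0n, K1n):
    """Frechet-distance criterion (12); no matrix inversion is required."""
    S = sqrtm(K0n @ K1n)                    # (K0 K1)^{1/2}
    return np.trace(K0n + K1n - 2.0 * np.real(S))

def phi_p(K0n, K1n, p):
    """Entry-wise criterion Phi_p = sum_{i,j} |{K1 - K0}_{ij}|^p."""
    return np.sum(np.abs(K1n - K0n) ** p)
```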

Figure 4 shows the values of the criteria $\Phi_i[K_{0,1},K_{1,\theta}]$, $i=1,2$, $\Phi_F[K_{0,1},K_{1,\theta}]$ and $\Phi_{\mathrm{KL}}[K_{0,1},K_{1,\theta}]$ as functions of $\theta$ for the two kernels $K_{0,\theta}$ and $K_{1,\theta}$ given by (8) and (9) and the same regular design as in Example 1: $x_i=(i-1)/(n-1)$, $i=1,\dots,11$. The criteria are re-scaled so that their maximum equals one on the interval considered for $\theta$. Note the similarity between $\Phi_2[K_{0,1},K_{1,\theta}](X_n)$ and $\Phi_F[K_{0,1},K_{1,\theta}](X_n)$ and the closeness of the distance-minimising $\theta$ for $\Phi_1$, $\Phi_2$ and $\Phi_F$. Also note the good agreement with the value $\theta_1\approx 1.1275$ that minimises $\phi_2[K_{0,1},K_{1,\theta_1}](\mu)$ from Eq. (13), see Example 3. The optimal $\theta$ for $\Phi_{\mathrm{KL}}[K_{0,1},K_{1,\theta}](X_n)$ is quite different, however, showing that the criteria do not necessarily agree with one another.

Fig. 4  $\Phi_i[K_{0,1},K_{1,\theta}](X_n)$, $i=1,2$, $\Phi_F[K_{0,1},K_{1,\theta}](X_n)$ and $\Phi_{\mathrm{KL}}[K_{0,1},K_{1,\theta}](X_n)$ as functions of $\theta\in[0.75,3]$ for the same 11-point equally spaced design $X_n$ as in Example 1 and $K_{0,\theta}$, $K_{1,\theta}$ given by (8) and (9), respectively

An interesting feature of the family of criteria $\Phi_p[K_0,K_1](\cdot)$, $p>0$, is that they extend straightforwardly to a design-measure version. Indeed, defining $\xi_n$ as the empirical measure on the points in $X_n$, $\xi_n=(1/n)\sum_{i=1}^{n}\delta_{x_i}$, we can write

$$\Phi_p[K_0,K_1](X_n) = n^2\,\phi_p[K_0,K_1](\xi_n),$$

where we define, for any design (probability) measure on X,

$$\phi_p(\xi) = \phi_p[K_0,K_1](\xi) = \int_{\mathcal{X}^2}\left|K_1(x,x')-K_0(x,x')\right|^p\,\mathrm{d}\xi(x)\,\mathrm{d}\xi(x'). \qquad (13)$$

Denote by Fp[K0,K1](ξ;ν) the directional derivative of ϕp[K0,K1](·) at ξ in the direction ν,

$$F_p[K_0,K_1](\xi;\nu) = \lim_{\alpha\to 0^+}\frac{\phi_p[K_0,K_1][(1-\alpha)\xi+\alpha\nu] - \phi_p[K_0,K_1](\xi)}{\alpha}.$$

Direct calculation gives

$$F_p[K_0,K_1](\xi;\nu) = 2\left[\int_{\mathcal{X}^2}\left|K_1(x,x')-K_0(x,x')\right|^p\,\mathrm{d}\nu(x)\,\mathrm{d}\xi(x') - \phi_p[K_0,K_1](\xi)\right],$$

and thus in particular

$$F_p[K_0,K_1](\xi;\delta_x) = 2\left[\int_{\mathcal{X}}\left|K_1(x,x')-K_0(x,x')\right|^p\,\mathrm{d}\xi(x') - \phi_p[K_0,K_1](\xi)\right].$$

One can easily check that the criterion is neither concave nor convex in general (as the matrix |K1-K0|p can have both positive and negative eigenvalues), but we nevertheless have a necessary condition for optimality.

Theorem 1

If the probability measure ξ on X maximises ϕp[K0,K1](ξ), then

$$\forall x\in\mathcal{X}, \qquad \int_{\mathcal{X}}\left|K_1(x,x')-K_0(x,x')\right|^p\,\mathrm{d}\xi(x') \leq \phi_p[K_0,K_1](\xi). \qquad (14)$$

Moreover, $\int_{\mathcal{X}}\left|K_1(x,x')-K_0(x,x')\right|^p\,\mathrm{d}\xi(x') = \phi_p[K_0,K_1](\xi)$ for $\xi$-almost every $x\in\mathcal{X}$.

The proof follows from the fact that $F_p[K_0,K_1](\xi;\nu)\leq 0$ for every $\nu$ when $\xi$ is optimal, which implies (14). As $\int_{\mathcal{X}}\int_{\mathcal{X}}\left|K_1(x,x')-K_0(x,x')\right|^p\,\mathrm{d}\xi(x')\,\mathrm{d}\xi(x) = \phi_p[K_0,K_1](\xi)$, the inequality necessarily becomes an equality on the support of $\xi$.

This suggests the following simple incremental construction: at iteration $n$, with $X_n$ the current design and $\xi_n$ the associated empirical measure, choose $x_{n+1}\in\operatorname{Arg\,max}_{x\in\mathcal{X}}\,F_p[K_0,K_1](\xi_n;\delta_x) = \operatorname{Arg\,max}_{x\in\mathcal{X}}\,\mathbf{1}_n^\top\left|k_{n,0}(x)-k_{n,1}(x)\right|^{p}$, where the absolute value and the power are applied entry-wise. It will be used in the numerical example of Sect. 6.2.
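A possible implementation of this greedy construction is sketched below (ours); the kernel functions, candidate grid and initial point are illustrative assumptions.

```python
import numpy as np

def greedy_phi_p(x_cand, K0, K1, p, n_points):
    """Greedy construction: repeatedly add the candidate x maximising
    sum_i |K1(x, x_i) - K0(x, x_i)|^p over the current design points."""
    design = [x_cand[0]]                                   # arbitrary starting point
    for _ in range(n_points - 1):
        X = np.array(design)
        score = np.sum(np.abs(K1(x_cand, X) - K0(x_cand, X)) ** p, axis=1)
        design.append(x_cand[int(np.argmax(score))])
    return np.array(design)
```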

Optimal design measures

In this section we explain why the determination of optimal design measures maximising ϕp(ξ) is generally difficult, even when limiting ourselves to the satisfaction of the necessary condition in Theorem 1. At the same time, we can characterise measures that are approximately optimal for large p.

We assume that the two kernels are isotropic, i.e., such that $K_i(x,x')=\Psi_i(\|x-x'\|)$, $i=0,1$, and that the functions $\Psi_i$ are differentiable except possibly at 0, where they only admit a right derivative. We define $\psi(t)=|\Psi_1(t)-\Psi_0(t)|$, $t\in\mathbb{R}^+$, and assume that the kernels have been normalised so that $K_0(x,x)=K_1(x,x)$; that is, $\psi(0)=0$. Also, we only consider the case where the function $\psi(\cdot)$ has a unique global maximum on $\mathbb{R}^+$. This assumption is not very restrictive. Consider again the two Matérn kernels (8) and (9). Figure 5 shows the evolution of $\psi^2(t)$ for $K_0=K_{0,1}$ and $K_1=K_{1,\theta_1}$ with two different values of $\theta_1$: $\theta_1=1$ and $\theta_1\approx 1.1275$; the latter minimises $\phi_2[K_{0,1},K_{1,\theta}](\mu)$ for $\mu$ being the uniform measure on $[0,1]$.

In the following, we shall consider normalised functions $\psi(\cdot)$, such that $\max_{t\in\mathbb{R}^+}\psi(t)=1$. We denote by $\Delta$ the (unique) value such that $\psi(\Delta)=1$. In Fig. 5, $\Delta\approx 0.7$ when $K_1=K_{1,1}$.

Fig. 5  $\psi^2(t)$ for $K_0=K_{0,1}$ and $K_1=K_{1,\theta_1}$ with two different values of $\theta_1$

A simplified problem with an explicit optimal solution

Consider the extreme case where $\psi=\psi_*$, defined by

$$\psi_*(t) = \begin{cases} 1 & \text{if } t=\Delta, \\ 0 & \text{otherwise.} \end{cases} \qquad (15)$$

Note that $\psi_*^p(t)=\psi_*(t)$ for any $p>0$; we can thus restrict our attention to $p=1$ for the maximisation of $\phi_p(\xi)$ defined by (13); that is, we consider

$$\phi_1(\xi) = \int_{\mathcal{X}^2}\psi_*\left(\|x-x'\|\right)\mathrm{d}\xi(x)\,\mathrm{d}\xi(x').$$

Theorem 2

When $\psi=\psi_*$ and $\mathcal{X}\subset\mathbb{R}^d$ is large enough to contain a regular $d$-simplex with edge length $\Delta$, any measure $\xi$ allocating weight $1/(d+1)$ to each vertex of such a simplex maximises $\phi_1(\xi)$, and $\phi_1(\xi)=d/(d+1)$.

Proof

Since $\phi_1(\xi)=0$ when $\xi$ is continuous with respect to the Lebesgue measure on $\mathcal{X}$, we can restrict our attention to measures without any continuous component. Assume that $\xi=\sum_{i=1}^{n}w_i\delta_{x_i}$, with $w_i\geq 0$ for all $i$ and $\sum_{i=1}^{n}w_i=1$, $n\in\mathbb{N}$. Consider the graph $G(\xi)$ having the $x_i$ as vertices, with an edge $(i,j)$ connecting $x_i$ and $x_j$ if and only if $\|x_i-x_j\|=\Delta$. We have

$$\phi_1(\xi) = \sum_{(i,j)\in G(\xi)} w_i w_j,$$

and Theorem 1 of Motzkin and Straus (1965) implies that ϕ1(ξ) is maximum when ξ is uniform on the maximal complete subgraph of G(ξ). The maximal achievable order is d+1, obtained when the xi are the vertices of a regular simplex in X with edge length Δ. Motzkin and Straus (1965) also indicate in their Theorem 1 that ϕ1(ξ)=1-1/(d+1). This is easily recovered knowing that G(ξ) is fully connected with order d+1. Indeed, we then have

$$\phi_1(\xi) = \sum_{i=1}^{d+1} w_i\sum_{j\neq i} w_j = \sum_{\substack{i,j=1\\ j\neq i}}^{d+1} w_i w_j = 1 - \sum_{i=1}^{d+1} w_i^2,$$

which is maximum when all wi equal 1/(d+1).

Optimal designs for ψ(t)=|Ψ1(t)-Ψ0(t)|

The optimal designs of Theorem 2 are natural candidates for being optimal when we return to the case of interest ψ(t)=|Ψ1(t)-Ψ0(t)|. In the light of Theorem 1, for a given probability measure ξ on X, we consider the function

$$\delta_\xi(x) = \int_{\mathcal{X}}\psi^p\left(\|x-x'\|\right)\mathrm{d}\xi(x') - \phi_p(\xi),$$

which must satisfy $\delta_\xi(x)\leq 0$ for all $x\in\mathcal{X}$ when $\xi$ is optimal. For an optimal measure $\xi$ as in Theorem 2, with support $x_1,\dots,x_{d+1}$ forming a regular $d$-simplex, we have

$$\delta_\xi(x) = \frac{1}{d+1}\left[\sum_{i=1}^{d+1}\psi^p\left(\|x-x_i\|\right) - d\right].$$

One can readily check that $\delta_\xi(x_i)=0$ for all $i$ (as $\psi(\|x_i-x_j\|)=\psi(\Delta)=1$ for $i\neq j$ and $\psi(0)=0$). Moreover, since $\psi(\cdot)$ is differentiable everywhere except possibly at zero, when $p>1$ the gradient of $\delta_\xi(x)$ equals zero at each $x_i$. However, these $d+1$ stationary points may sometimes correspond to local minima, a situation where of course $\xi$ is not optimal. The left panel of Fig. 6 shows an illustration ($d=2$) for $p=1.5$, $K_0(x,x')=\exp(-\|x-x'\|)$ and $K_1$ being the Matérn 5/2 kernel $K_{1,1}$. The measure $\xi$ is supported at the vertices of the equilateral triangle $(0,0)$, $(\Delta,0)$, $(\Delta/2,\sqrt{3}\Delta/2)$ (indicated in blue on the figure), with $\Delta\approx 0.53$ (the value where $\psi(\cdot)$ is maximum). Here the $x_i$ correspond to local minima of $\delta_\xi(x)$; $\psi(\cdot)$ is not differentiable at zero, but $p>1$ so that $\delta_\xi(\cdot)$ is differentiable.

When $p\to\infty$, $\psi^p(\cdot)$ approaches the (discontinuous) function $\psi_*(\cdot)$, suggesting that $\xi$ may become close to being optimal for $\phi_p$ when $p$ is large enough. However, when $\mathcal{X}$ is large, $\xi$ is never truly optimal, no matter how large $p$ is. Indeed, suppose that $\mathcal{X}$ contains the point $x$ obtained by reflecting a vertex $x_k$ of the simplex defining the support of $\xi$ through the opposite face of that simplex. Direct calculation gives

$$L = \|x_k - x\| = 2\Delta\left(\frac{d+1}{2d}\right)^{1/2}.$$

The right panel of Fig. 6 shows an illustration for $K_0$ and $K_1$ being the Matérn 3/2 and Matérn 5/2 kernels $K_{0,1}$ and $K_{1,1}$, respectively. The measure $\xi$ is supported at the vertices of the equilateral triangle $(0,0)$, $(\Delta,0)$, $(\Delta/2,\sqrt{3}\Delta/2)$, with now $\Delta\approx 0.7$. At the point $x$, symmetric to $x_k$ and indicated in red on the figure, we have

$$\delta_\xi(x) = \frac{1}{d+1}\left[\sum_{\substack{i=1\\ i\neq k}}^{d+1}\psi^p\left(\|x-x_i\|\right) + \psi^p\left(\|x-x_k\|\right) - d\right] = \frac{1}{d+1}\,\psi^p(L) > 0, \qquad (16)$$

where the second equality follows from $\|x-x_i\|=\Delta$ for all $i\neq k$, implying that $\xi$ is not optimal. Another, more direct, proof of the non-optimality of $\xi$ is to consider the measure $\hat\xi$ that sets weights $1/(d+1)$ at all $x_i\neq x_k$ and weights $1/[2(d+1)]$ at $x_k$ and at its symmetric point $x$. Direct calculation gives

$$\phi_p(\hat\xi) = \frac{d}{d+1}\left[1-\frac{1}{d+1}\right] + \frac{2}{2(d+1)}\left[\frac{d}{d+1} + \frac{1}{2(d+1)}\,\psi^p(L)\right].$$

The first term on the right-hand side comes from the $d$ vertices $x_i$, $i\neq k$, each one having weight $1/(d+1)$ and being at distance $\Delta$ from all other support points, which have total weight $1-1/(d+1)$. The second term comes from the two symmetric points $x_k$ and $x$, each one with weight $1/[2(d+1)]$. Each of these two points is at distance $\Delta$ from $d$ vertices with weights $1/(d+1)$ and at distance $L$ from the opposite point with weight $1/[2(d+1)]$. We get after simplification

$$\phi_p(\hat\xi) = \frac{d}{d+1} + \frac{\psi^p(L)}{2(d+1)^2} > \phi_p(\xi) = \frac{d}{d+1},$$

showing that $\xi$ is not optimal. Note that, for symmetry reasons, the design $\hat\xi$ is not optimal either for large enough $\mathcal{X}$. The determination of a truly optimal design seems very difficult. In the simplified problem of Sect. 5.1, where the criterion is based on the function $\psi_*$ defined by (15), the measures $\xi$ and $\hat\xi$, supported on $d+1$ and $d+2$ points respectively, have the same criterion value $\phi_p(\xi)=\phi_p(\hat\xi)=d/(d+1)$ for all $p>0$.

Fig. 6  Surface plot of $\delta_\xi(x)$ ($x\in\mathbb{R}^2$); the support of $\xi$ corresponds to the vertices of the equilateral triangle in blue. Left: $K_0(x,x')=\exp(-\|x-x'\|)$ and $K_1=K_{1,1}$ ($\Delta\approx 0.53$), $p=1.5$; Right: $K_0=K_{0,1}$, $K_1=K_{1,1}$ ($\Delta\approx 0.7$), $p=10$; the red point $x$ is the symmetric of the origin $(0,0)$ with respect to the opposite side of the triangle. (Color figure online)

Although $\xi$ is not optimal, since $\psi(\|x-x_k\|)<1$ (as $\psi(t)$ takes its maximum value 1 only for $t=\Delta$), (16) suggests that $\xi$ may be only marginally suboptimal when $p$ is large enough. Moreover, as the right panel of Fig. 6 illustrates, a design $\xi$ supported on a regular simplex is optimal provided that $\mathcal{X}$ is small enough and $p$ is large enough to make $\delta_\xi(x)$ concave at each $x_i$ (for symmetry reasons, we only need to check concavity at one vertex). In fact, $p>2$ is sufficient. Indeed, assuming that $p>2$ and that $\psi(\cdot)$ is twice differentiable everywhere except possibly at zero, with second-order derivative $\psi''(\cdot)$, direct calculation gives

$$\left.\frac{\mathrm{d}^2\delta_\xi(x)}{\mathrm{d}x\,\mathrm{d}x^\top}\right|_{x=x_1} = \frac{1}{d+1}\,\frac{p\,\psi^{p-1}(\Delta)\,\psi''(\Delta)}{\Delta^2}\sum_{i=2}^{d+1}(x_1-x_i)(x_1-x_i)^\top,$$

which is negative-definite (since $\psi''(\Delta)<0$, $\psi(\cdot)$ being maximal at $\Delta$). The right panel of Fig. 6 gives an illustration. Note that $p<2$ on the left panel, where the $x_i$ correspond to local minima of $\delta_\xi(\cdot)$. Figure 7 shows a plot of $\delta_\xi(x)$ for $p=2$ and $K_0$ and $K_1$ being the Matérn 3/2 and Matérn 5/2 kernels $K_{0,1}$ and $K_{1,1.07}$, respectively, suggesting that the form of optimal designs may in general be quite complicated.

Fig. 7  Surface plot of $\delta_\xi(x)$ ($x\in\mathbb{R}^2$); the support of $\xi$ corresponds to the vertices of the equilateral triangle in blue: $K_0=K_{0,1}$, $K_1=K_{1,1.07}$ ($\Delta\approx 1.92$), $p=2$. (Color figure online)

A numerical example

Exact designs

In this section, we consider numerical evaluations of designs resulting from the prediction-based and distance-based criteria. Here, the rival models are the isotropic versions of the covariance kernels used in Example 3 (Sect. 3.2) for the design space X = [0, 10]², discretised into 25 equally spaced points in each dimension. To agree on the setting of the correlation lengths in the two kernels, we applied a minimisation procedure. Specifically, we took θ = θ0 = 1 in K0,θ(x, x′) and adjusted the parameter in the second kernel by minimising each of the distance-based criteria for the design X625 corresponding to the full grid. This resulted in θ1 = 1.0047, 1.0285, 1.0955 and 1.3403, respectively, for ΦF, Φ1, Φ2 and ΦKL. We finally chose θ1 = 1.07, which seems to be compatible with the above values.

The left panel of Fig. 8 shows the two Matérn covariance functions at the assumed parameter values; it illustrates the similarity of the kernels that we aim to discriminate. The right panel shows the absolute difference between the two covariance kernels. The red line corresponds to the distance at which this absolute difference is maximal; it is denoted by Δ and equals 1.92 in this case.

The sequential approach is the only case where the observations Yn corresponding to the previous design points Xn are used in the design construction. In our example, we simulate the data according to the assumed model and use this information to estimate the parameters at each step. Boxplots of the maximum likelihood (ML) estimates θ̂0 and θ̂1 of the inverse correlation lengths θ0 and θ1 of K0,θ(x, x′) and K1,θ(x, x′), respectively, are presented in Fig. 9. This refers to the case where the first kernel, Matérn 3/2, is the data generator. The θ̂0 estimates converge to their nominal value θ0 = 1, drawn as a red dashed line in the left panel of Fig. 9, as expected due to the consistency of the ML estimator in this case. For the second kernel to be similar to the first one (i.e., less smooth), the θ̂1 estimates have increased (see the right panel): decreasing the correlation length causes the covariance kernel to drop faster as a function of distance. We refrain from presenting the opposite case (where the Matérn 5/2 is the data generator), which is similar.

Fig. 9  Maximum likelihood estimates of the correlation lengths in the Matérn kernels. (Color figure online)

Apart from the methods applied in Sect. 4, we have considered some other static approaches for discrimination. Ds-optimal design is a natural candidate that can be applied in the same static fashion as the distance-based criteria. For Ds-optimality, we require the general form of the Matérn covariance kernel, which is based on the modified Bessel function of the second kind (denoted by Cν). It is given by

$$K_\nu(r) = \frac{2^{1-\nu}}{\Gamma(\nu)}\left(\frac{\sqrt{2\nu}\,r}{\theta}\right)^{\nu} C_\nu\!\left(\frac{\sqrt{2\nu}\,r}{\theta}\right). \qquad (17)$$

The smoothness $\nu$ is considered as the parameter of interest, while the correlation length $\theta$ is treated as a nuisance parameter. The first off-diagonal element of the $2\times 2$ information matrix associated with the estimation of the parameter vector $\boldsymbol{\theta}=(\theta,\nu)^\top$ is

$$\{M(X_n,\boldsymbol{\theta})\}_{12} = \frac{1}{2}\operatorname{trace}\left(K_\nu^{-1}\frac{\partial K_\nu}{\partial\theta}\,K_\nu^{-1}\frac{\partial K_\nu}{\partial\nu}\right), \qquad (18)$$

see, e.g., Eq. (6.19) in Müller (2007). The other elements in the information matrix are calculated similarly. We have used the supplementary material of Lee et al. (2018) to compute the partial derivatives of the Matérn covariance kernel. Finally, the Ds-criterion is

$$\Phi_{D_s} = \left|M(X_n,\boldsymbol{\theta})\right| / \{M(X_n,\boldsymbol{\theta})\}_{11}, \qquad (19)$$

where $\{M(X_n,\boldsymbol{\theta})\}_{11}$ is the element of the information matrix corresponding to the nuisance parameter (i.e., in $\{M(X_n,\boldsymbol{\theta})\}_{11}$ both partial derivatives are taken with respect to $\theta$). In the examples to follow we consider local Ds-optimal design; that is, the parameters $\theta$ and $\nu$ are set to given values.
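As an illustration of how (17)–(19) can be evaluated, the sketch below (ours) builds the $2\times 2$ information matrix with central finite differences of the Matérn kernel with respect to $\theta$ and $\nu$, which is a simple stand-in for the analytical derivatives of Lee et al. (2018) used in the paper; the step size and jitter are arbitrary choices.

```python
import numpy as np
from scipy.special import gamma, kv

def matern_general(r, theta, nu):
    """General Matern kernel (17); theta plays the role of a correlation length here."""
    z = np.sqrt(2.0 * nu) * r / theta
    zs = np.where(z > 0, z, 1.0)               # placeholder avoids 0 * inf warnings at r = 0
    val = 2.0 ** (1 - nu) / gamma(nu) * zs ** nu * kv(nu, zs)
    return np.where(z > 0, val, 1.0)           # K(0) = 1

def ds_criterion(X, theta, nu, h=1e-5, jitter=1e-10):
    """Local Ds-criterion (19) for the smoothness nu, with theta as nuisance; kernel
    derivatives are approximated by central finite differences (Eq. (18) pattern)."""
    r = np.abs(np.subtract.outer(X, X))
    K = matern_general(r, theta, nu) + jitter * np.eye(len(X))
    dK = [(matern_general(r, theta + h, nu) - matern_general(r, theta - h, nu)) / (2 * h),
          (matern_general(r, theta, nu + h) - matern_general(r, theta, nu - h)) / (2 * h)]
    Ki = np.linalg.inv(K)
    M = np.array([[0.5 * np.trace(Ki @ Da @ Ki @ Db) for Db in dK] for Da in dK])
    return np.linalg.det(M) / M[0, 0]           # the nuisance theta is the first parameter
```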

From a Bayesian perspective, models can be discriminated optimally when the difference between the expected entropies of the prior and the posterior model probabilities is maximised. This criterion underlies the famous sequential procedure put forward by Box and Hill (1967) and Hill and Hunter (1969). Since such criteria typically cannot be computed analytically, several bounds have been derived. The upper bound proposed by Box and Hill (1967) is equivalent to the symmetric Kullback–Leibler divergence ΦKL. Hoffmann (2017) derives a lower bound based on a lower bound for the Kullback–Leibler divergence involving a mixture of two normals; it is given by Eq. (A3) and denoted by ΦΓ. Here, we assume equal prior probabilities. A more detailed account of Bayesian design criteria and their bounds is given in Appendix A.

Table 1 collects simulation results for this example. We have included the sequential procedure (4) as a benchmark for orientation. For all other approaches the true parameter values are used in the covariance kernels. Concerning static (distance-based) designs based on the maximisation of ΦF, Φ1, Φ2, ΦKL, ΦΓ, ΦDs, for each design size considered we first built an incremental design and then used a classical exchange-type algorithm to improve it. These designs are thus not necessarily nested, i.e., we do not necessarily have Xn ⊂ Xn′ for n < n′.

Each design of size n was then evaluated as follows: we generated N = 100 independent sets of n observations from the assumed true model, evaluated the likelihood functions of both models for each set of observations, and recorded which model had the higher likelihood value. The hit rate is the fraction of observation sets for which the assumed true model has the higher likelihood value. The procedure was repeated with the other model assumed to be the true one. The two hit rates were then averaged and are stated in Table 1, which contains the results for all the criteria and design sizes we considered. For the special case of the sequential construction (4), the design path depends on the observations generated at the previously selected design points; that is, unlike for the other criteria, for a given design size n each random run produces a different design. To compute the hit rates for a particular n we used N = 100 independent runs of the experiment.
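The hit-rate computation itself reduces to repeated simulation and likelihood comparison; a compact sketch (ours, with an arbitrary seed and number of repetitions) is shown below for two zero-mean Gaussian models with kernel matrices K0n and K1n.

```python
import numpy as np

def log_lik(Y, K):
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (len(Y) * np.log(2 * np.pi) + logdet + Y @ np.linalg.solve(K, Y))

def average_hit_rate(K0n, K1n, n_rep=100, rng=np.random.default_rng(2)):
    """Simulate n_rep data sets from each model in turn, select the model with the
    larger likelihood, and average the two resulting hit rates."""
    n = K0n.shape[0]
    rates = []
    for Ktrue, Kalt in ((K0n, K1n), (K1n, K0n)):
        Y = rng.multivariate_normal(np.zeros(n), Ktrue, size=n_rep)
        rates.append(np.mean([log_lik(y, Ktrue) > log_lik(y, Kalt) for y in Y]))
    return 0.5 * sum(rates)
```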

Table 1.

Comparison of average hit rates in different methods for the first numerical example

Average hit rate
Design size 5 6 7 8 9 10 20 30 40 50
Sequential (4) 0.500 0.535 0.540 0.595 0.570 0.640 0.695 0.715 0.740 0.770
ϕA 0.505 0.500 0.530 0.525 0.505 0.510 0.520 0.535 0.585 0.635
ϕB 0.520 0.545 0.575 0.585 0.615 0.650 0.785 0.875 0.900 0.910
ϕKL 0.520 0.545 0.575 0.585 0.615 0.650 0.785 0.870 0.915 0.925
ΦF 0.580 0.625 0.620 0.625 0.670 0.715 0.795 0.900 0.925 0.950
Φ1 0.525 0.520 0.555 0.540 0.550 0.610 0.725 0.890 0.910 0.920
Φ2 0.525 0.520 0.555 0.540 0.550 0.610 0.715 0.860 0.890 0.910
ΦKL 0.580 0.625 0.620 0.625 0.670 0.715 0.795 0.895 0.925 0.955
ΦΓ 0.595 0.625 0.610 0.645 0.675 0.700 0.795 0.895 0.935 0.940
ΦDs 0.540 0.575 0.590 0.620 0.650 0.675 0.805 0.850 0.855 0.925

Bold numbers indicate the highest average hit rate achieved for each design size

The hit rates reported in Table 1 reflect the discriminatory power of the corresponding designs. One can observe that ΦF and, as expected, ΦKL perform best in terms of hit rates. The Bayesian lower-bound criterion ΦΓ performs similarly to the symmetric ΦKL. The sequential design strategy (4) does not perform as well as the best criteria. It is, however, the realistic scenario that one might consider in applications, as it does not assume knowledge of the kernel parameters. The effect of this knowledge can thus be partially gauged by comparing the first line with the other criteria.

Optimal design measure for ϕp

Theorem 1 also allows the use of approximate designs, as it provides a necessary condition for optimality for the family of criteria ϕp, p > 0; this was discussed more extensively in the previous section. Here we present numerical results for the two specific cases p = 2 and p = 10. To reach a design which might be numerically optimal (or at least nearly optimal), we applied the Fedorov–Wynn algorithm (Fedorov 1971; Wynn 1970) on a dense regular grid of candidate points.

Numerical results show that for very small p (e.g., p = 1) explicit optimal measures are hard to derive. The left panel of Fig. 10 presents the measure ξ2 obtained for ϕ2. To construct ξ2, we first calculated an optimal design on a dense grid by applying 1000 iterations of the Fedorov–Wynn algorithm (see the comment following Theorem 1); the design measure obtained is supported on 9 grid points. We then applied a continuous optimisation algorithm (the library NLopt (Johnson 2021) through its R interface nloptr) initialised at this 9-point design. The 9 support points of the resulting design measure ξ2 are independent of the grid size; they receive unequal weights, proportional to the disk areas in the left panel of Fig. 10. Any translation or rotation of ξ2 yields the same value of ϕ2.
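For completeness, here is a bare-bones sketch (ours) of the Fedorov–Wynn vertex-direction iterations for ϕp on a finite candidate grid; the step-size rule and number of iterations are standard but arbitrary choices, and the kernel-function interface is an assumption.

```python
import numpy as np

def fedorov_wynn_phi_p(grid, K0, K1, p, n_iter=1000):
    """Vertex-direction (Fedorov-Wynn) iterations for maximising phi_p, Eq. (13),
    over probability measures supported on a finite candidate grid."""
    D = np.abs(K1(grid, grid) - K0(grid, grid)) ** p     # |K1 - K0|^p on the grid
    w = np.full(len(grid), 1.0 / len(grid))              # start from the uniform measure
    for k in range(n_iter):
        g = D @ w                                        # int |K1 - K0|^p d xi_k, per candidate
        j = int(np.argmax(g))                            # steepest-ascent vertex
        alpha = 1.0 / (k + 2)                            # w_{k+1} = (1 - alpha) w_k + alpha e_j
        w *= (1.0 - alpha)
        w[j] += alpha
    return w                                             # support = points with non-negligible weight
```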

Fig. 10  Left: the optimal measure for ϕ2. Right: the optimal measure for ϕ10. (Color figure online)

As the order p increases, we eventually reach an optimal measure with only three support points and equal weights. The right panel of Fig. 10 shows the optimal design measure computed for ϕ10. As before, it resulted from a continuous optimisation initialised at an optimal 3-point design calculated with the Fedorov–Wynn algorithm on a grid. This optimal design measure ξ10 has three support points, drawn as blue dots, with equal weights 1/3 represented by the areas of the red disks. The blue line segments between every two locations have length Δ ≈ 1.92, the ideal interpoint distance (see the right panel of Fig. 8), in agreement with the corresponding discussion in Sect. 5. Here too the criterion value is invariant under rotations and translations of the design, so any design of this type is optimal as long as the design region is large enough to contain it.

Fig. 8  Left: plot of the Matérn covariance functions at the assumed parameter setting. Right: ψ(t) = |K0,θ0(t, 0) − K1,θ1(t, 0)|, (θ0 = 1, θ1 = 1.07). (Color figure online)

Conclusions

In this paper we have considered the design problem for the discrimination of Gaussian process regression models. This problem differs considerably from the well-studied one for standard regression models and thus offers a multitude of challenges. While the KL-divergence is a straightforward criterion, it comes at the price of being computationally demanding and of lacking convenient simplifications such as design measures. We have therefore introduced a family of criteria that allow such a simplification, at least in special cases, and have investigated its properties. We have also compared the performance of these and other potential criteria on several examples and found that the KL-divergence can be effectively replaced by simpler criteria without much loss in efficiency. In particular, designs based on the Fréchet distance between covariance kernels seem to be competitive. Results from the approximate-design computations indicate that, for classical isotropic kernels, designs with d+1 support points placed at the vertices of a simplex of suitable size are optimal for the distance-based criteria ϕp with large enough p when the design region is small enough, and are only marginally suboptimal otherwise.

As a next step, it would be interesting to investigate the properties of the discrimination designs under parameter uncertainty, for example by considering minimax or Bayesian designs.

A referee has indicated that our techniques could be used for discriminating the intricately convoluted covariances stemming from deep Gaussian processes (as defined in Damianou and Lawrence (2013)) from more conventional ones. This is an interesting issue of high relevance for computer simulation experiments that certainly needs to be explored in the future.

Acknowledgements

This work was partly supported by project INDEX (INcremental Design of EXperiments) ANR-18-CE91-0007 of the French National Research Agency (ANR) and I3903-N32 of the Austrian Science Fund (FWF). Note that refereeing for this article was organized outside of the editorial manager system by the S.I. editor such that the chief editor (one of the coauthors) was not able to identify the referees. We are grateful to the two referees for their careful reading and their suggestions, which led to an improvement of the paper.

Appendix A: Notes on Box–Hill–Hunter Bayesian criteria for model discrimination between Gaussian random fields

Chapter 5 of Hoffmann (2017) contains an overview of Bayesian design criteria for model discrimination and some useful bounds on them. We assume there are M models m0,,mM-1. The most common Bayesian design criterion for model discrimination has the following form:

$$\Phi_\Lambda(X_k) = -\sum_{i=0}^{M-1} p(m_i)\log p(m_i) + \int_{\mathbf{Y}_k\in\mathcal{Y}} p(\mathbf{Y}_k)\sum_{i=0}^{M-1} p(m_i\,|\,\mathbf{Y}_k)\log p(m_i\,|\,\mathbf{Y}_k)\,\mathrm{d}\mathbf{Y}_k, \qquad (A1)$$

where the data $\mathbf{Y}_k=(Y(x_1),\dots,Y(x_k))^\top$ are observed at the design $X_k=(x_1,\dots,x_k)$, $p(m_i)$ denotes the prior and $p(m_i\,|\,\mathbf{Y}_k)$ the posterior model probability of model $m_i$, and $p(\mathbf{Y}_k)$ is the marginal distribution of $\mathbf{Y}_k$ with respect to the models. Hence, this criterion is the (expected) difference between the model entropy and the conditional model entropy (conditional on the observations). The posterior model probability $p(m_i\,|\,\mathbf{Y}_k)$ is defined by

$$p(m_i\,|\,\mathbf{Y}_k) \propto p(\mathbf{Y}_k\,|\,m_i)\,p(m_i),$$

where p(Yk|mi) is the likelihood of model mi (marginalised over the parameters), and p(Yk) is given by

$$p(\mathbf{Y}_k) = \sum_{i=0}^{M-1} p(\mathbf{Y}_k\,|\,m_i)\,p(m_i).$$

The first term in (A1) does not depend on the design and can therefore be ignored.

A common alternative formulation of criterion (A1) is the one adopted by Box and Hill (1967) and Hill and Hunter (1969), which will henceforth be called the Box–Hill–Hunter (BHH) criterion:

$$\Phi_\Lambda(X_k) = \sum_{i=0}^{M-1} p(m_i)\int_{\mathbf{Y}_k\in\mathcal{Y}} p(\mathbf{Y}_k\,|\,m_i)\log\frac{p(\mathbf{Y}_k\,|\,m_i)}{p(\mathbf{Y}_k)}\,\mathrm{d}\mathbf{Y}_k. \qquad (A2)$$

In our case, if we assume point priors for the kernel parameters, we have

$$p(\mathbf{Y}_k\,|\,m_i) = \varphi(\mathbf{Y}_k\,|\,\boldsymbol{\eta}_{k,i},K_{k,i}),$$

where $\boldsymbol{\eta}_{k,i}=(\eta_{1,i}(x_1),\dots,\eta_{k,i}(x_k))^\top$ is the mean vector of model $i$ at design $X_k$, $K_{k,i}$ is the $k\times k$ kernel matrix of model $i$ with elements given by $\{K_{k,i}\}_{j,l}=K_i(x_j,x_l)$, and $\varphi(\cdot\,|\,\boldsymbol{\eta},K)$ is the normal pdf with mean vector $\boldsymbol{\eta}$ and variance-covariance matrix $K$.

For example, for a static design involving n design points, we set k=n and assume that ηn,i=0 for each design Xn. The model probabilities p(mi) would just be the prior model probabilities before having collected any observations.

In a sequential design setting, where $n$ observations $\mathbf{Y}_n$ have already been observed at locations $X_n$ and we want to find the optimal design point $x$ at which to collect our next observation, we have $k=1$ and set $\boldsymbol{\eta}_{k,i}$ to the conditional mean $\hat\eta_{n,i}(x)=k_{n,i}^\top(x)K_{n,i}^{-1}\mathbf{Y}_n$ and $K_{k,i}$ to the conditional variance $\rho_{n,i}^2(x)=K_i(x,x)-k_{n,i}^\top(x)K_{n,i}^{-1}k_{n,i}(x)$, where $k_{n,i}(x)=(K_i(x,x_1),\dots,K_i(x,x_n))^\top$, see Sect. 3.1. The prior model probabilities would have to be set to the posterior model probabilities given the already observed data:

$$p(m_i) = p(m_i\,|\,\mathbf{Y}_n) \propto \varphi(\mathbf{Y}_n\,|\,\mathbf{0},K_{n,i})\,p(m_i).$$

It follows that p(Yk) is a mixture of normal distributions. The criterion representations (A1) and (A2) cannot be computed directly. However, several bounds have been developed for the criterion, the most famous being the classic upper bound derived by Box and Hill (1967).

Appendix A.1: Upper bound

The upper bound has the following form (see also Hoffmann (2017, Thm. 5.2, p. 168)):

$$\Phi_U(X_k) = \frac{1}{2}\sum_{i=0}^{M-1}\sum_{j=0}^{M-1} p(m_i)\,p(m_j)\left[\left\|\boldsymbol{\eta}_{k,i}-\boldsymbol{\eta}_{k,j}\right\|^2_{K_{k,j}^{-1}} + \operatorname{trace}\left(K_{k,i}K_{k,j}^{-1}\right) - n\right].$$

For M=2, the formula simplifies to

$$\Phi_U(X_k) = \frac{1}{2}\,p(m_0)\,p(m_1)\left\{\left\|\boldsymbol{\eta}_{k,0}-\boldsymbol{\eta}_{k,1}\right\|^2_{K_{k,0}^{-1}} + \left\|\boldsymbol{\eta}_{k,0}-\boldsymbol{\eta}_{k,1}\right\|^2_{K_{k,1}^{-1}} + \operatorname{trace}\left(K_{k,0}K_{k,1}^{-1}\right) + \operatorname{trace}\left(K_{k,1}K_{k,0}^{-1}\right) - 2n\right\}.$$

This is equivalent to the symmetric Kullback–Leibler divergence that we use as the criterion ΦKL (with p(m0)=p(m1)=1/2 and ηk,0=ηk,1=0).

Appendix A.2: Lower bound

Hershey and Olsen (2007, Sect. 7) derive a lower bound for the Kullback–Leibler divergence between a mixture of two normals, see also Hoffmann (2017, Thm. 5.4 and Cor. 5.5, pp. 173–174). This result is then used by Hoffmann (2017) to find a lower bound for the BHH criterion ΦΛ(Xk) (Hoffmann 2017, Thm. 5.9, p. 178). This lower bound is given by

$$\Phi_\Gamma(X_k) = -\sum_{i=0}^{M-1} p(m_i)\log\left[\sum_{j=0}^{M-1} p(m_j)\exp\left(-\tfrac{1}{2}\,\Gamma(X_k)_{ij}\right)\right],$$

where

$$\Gamma(X_k)_{ij} = \left\|\boldsymbol{\eta}_{k,i}-\boldsymbol{\eta}_{k,j}\right\|^2_{K_{k,j}^{-1}} + \operatorname{trace}\left(K_{k,i}K_{k,j}^{-1}\right) - \log\det\left(K_{k,i}K_{k,j}^{-1}\right) - n.$$

For M=2, which is the relevant case for our setup, we get

$$\begin{aligned}\Phi_\Gamma(X_k) ={}& -p(m_0)\log\left\{p(m_0) + p(m_1)\exp\left(-\tfrac{1}{2}\left[\left\|\boldsymbol{\eta}_{k,0}-\boldsymbol{\eta}_{k,1}\right\|^2_{K_{k,1}^{-1}} + \operatorname{trace}\left(K_{k,0}K_{k,1}^{-1}\right) - \log\det\left(K_{k,0}K_{k,1}^{-1}\right) - n\right]\right)\right\}\\ &-p(m_1)\log\left\{p(m_1) + p(m_0)\exp\left(-\tfrac{1}{2}\left[\left\|\boldsymbol{\eta}_{k,0}-\boldsymbol{\eta}_{k,1}\right\|^2_{K_{k,0}^{-1}} + \operatorname{trace}\left(K_{k,1}K_{k,0}^{-1}\right) - \log\det\left(K_{k,1}K_{k,0}^{-1}\right) - n\right]\right)\right\},\end{aligned} \qquad (A3)$$

where $\varphi_i(\cdot)=\varphi(\cdot\,|\,\boldsymbol{\eta}_{k,i},K_{k,i})$. We also use this lower bound to compute designs in Sect. 6.1 (again with $p(m_0)=p(m_1)=1/2$ and $\boldsymbol{\eta}_{k,0}=\boldsymbol{\eta}_{k,1}=\mathbf{0}$).
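For the zero-mean, two-model case used in Sect. 6.1, the lower bound (A3) only involves the two kernel matrices; a short sketch (ours) is given below.

```python
import numpy as np

def phi_gamma_two_models(K0n, K1n, p0=0.5, p1=0.5):
    """Lower bound (A3) for two zero-mean Gaussian models on the same design,
    so that the mean-difference terms vanish."""
    n = K0n.shape[0]
    def gam(Ki, Kj):
        A = np.linalg.solve(Kj, Ki)                     # K_j^{-1} K_i
        _, logdet = np.linalg.slogdet(A)
        return np.trace(A) - logdet - n                 # Gamma(X_k)_{ij} with eta_i = eta_j = 0
    return (-p0 * np.log(p0 + p1 * np.exp(-0.5 * gam(K0n, K1n)))
            - p1 * np.log(p1 + p0 * np.exp(-0.5 * gam(K1n, K0n))))
```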

Funding Information

Open access funding provided by Austrian Science Fund (FWF).

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Elham Yousefi, Email: elham.yousefi@jku.at.

Luc Pronzato, Email: pronzato@i3s.unice.fr.

Markus Hainy, Email: markus.hainy@jku.at.

Werner G. Müller, Email: werner.mueller@jku.at

Henry P. Wynn, Email: h.wynn@lse.ac.uk

References

  1. Atkinson AC, Fedorov VV. The design of experiments for discriminating between two rival models. Biometrika. 1975;62(1):57–70. doi: 10.1093/biomet/62.1.57.
  2. Box GEP, Hill WJ. Discrimination among mechanistic models. Technometrics. 1967;9(1):57–71. doi: 10.2307/1266318.
  3. Damianou A, Lawrence ND (2013) Deep Gaussian processes. In: Proceedings of the sixteenth international conference on artificial intelligence and statistics. PMLR, pp 207–215. https://proceedings.mlr.press/v31/damianou13a.html
  4. Dowson DC, Landau BV. The Fréchet distance between multivariate normal distributions. J Multivar Anal. 1982;12(3):450–455. doi: 10.1016/0047-259X(82)90077-X.
  5. Fedorov VV. The design of experiments in the multiresponse case. Theory Probab Appl. 1971;16(2):323–332. doi: 10.1137/1116029.
  6. Gramacy RB. Surrogates: Gaussian process modeling, design, and optimization for the applied sciences. Boca Raton: Chapman and Hall/CRC; 2020.
  7. Heirung TAN, Santos TLM, Mesbah A. Model predictive control with active learning for stochastic systems with structural model uncertainty: online model discrimination. Comput Chem Eng. 2019;128:128–140. doi: 10.1016/j.compchemeng.2019.05.012.
  8. Hershey JR, Olsen PA (2007) Approximating the Kullback Leibler divergence between Gaussian mixture models. In: 2007 IEEE international conference on acoustics, speech and signal processing—ICASSP '07, pp IV-317–IV-320. doi: 10.1109/ICASSP.2007.366913
  9. Hill WJ, Hunter WG. A note on designs for model discrimination: variance unknown case. Technometrics. 1969;11(2):396–400. doi: 10.1080/00401706.1969.10490695.
  10. Hino H (2020) Active learning: problem settings and recent developments. arXiv:2012.04225
  11. Hoffmann C (2017) Numerical aspects of uncertainty in the design of optimal experiments for model discrimination. PhD thesis, Ruprecht-Karls-Universität Heidelberg. doi: 10.11588/heidok.00022612
  12. Hunter W, Reiner A. Designs for discriminating between two rival models. Technometrics. 1965;7(3):307–323. doi: 10.1080/00401706.1965.10490265.
  13. Johnson SG (2021) The NLopt nonlinear-optimization package. http://github.com/stevengj/nlopt
  14. Karvonen T (2022) Asymptotic bounds for smoothness parameter estimates in Gaussian process interpolation. arXiv:2203.05400
  15. Karvonen T, Oates C (2022) Maximum likelihood estimation in Gaussian process regression is ill-posed. arXiv:2203.09179
  16. Karvonen T, Wynne G, Tronarp F, et al. Maximum likelihood estimation and uncertainty quantification for Gaussian process approximation of deterministic functions. SIAM/ASA J Uncertain Quantif. 2020;8(3):926–958. doi: 10.1137/20M1315968.
  17. Kiefer J. General equivalence theory for optimum designs (approximate theory). Ann Stat. 1974;2(5):849–879. doi: 10.1214/aos/1176342810.
  18. Kullback S, Leibler RA. On information and sufficiency. Ann Math Stat. 1951;22(1):79–86. doi: 10.1214/aoms/1177729694.
  19. Lee XJ, Hainy M, McKeone JP, et al. ABC model selection for spatial extremes models applied to South Australian maximum temperature data. Comput Stat Data Anal. 2018;128:128–144. doi: 10.1016/j.csda.2018.06.019.
  20. López-Fidalgo J, Tommasi C, Trandafir PC. An optimal experimental design criterion for discriminating between non-normal models. J R Stat Soc. 2007;69(2):231–242. doi: 10.1111/j.1467-9868.2007.00586.x.
  21. Motzkin TS, Straus EG. Maxima for graphs and a new proof of a theorem of Turán. Can J Math. 1965;17:533–540. doi: 10.4153/CJM-1965-053-6.
  22. Müller WG. Collecting spatial data: optimum design of experiments for random fields. 3rd edn. Berlin: Springer; 2007.
  23. Olofsson S, Deisenroth MP, Misener R (2018) Design of experiments for model discrimination using Gaussian process surrogate models. In: Eden MR, Ierapetritou MG, Towler GP (eds) 13th International symposium on process systems engineering (PSE 2018), computer aided chemical engineering, vol 44. Elsevier, pp 847–852. doi: 10.1016/B978-0-444-64241-7.50136-1
  24. Pronzato L, Wynn HP, Zhigljavsky A. Bregman divergences based on optimal design criteria and simplicial measures of dispersion. Stat Pap. 2019;60(2):545–564. doi: 10.1007/s00362-018-01082-8.
  25. Sauer A, Gramacy RB, Higdon D. Active learning for deep Gaussian process surrogates. Technometrics. 2022. doi: 10.1080/00401706.2021.2008505.
  26. Schwaab M, Luiz Monteiro J, Carlos Pinto J. Sequential experimental design for model discrimination: taking into account the posterior covariance matrix of differences between model predictions. Chem Eng Sci. 2008;63(9):2408–2419. doi: 10.1016/j.ces.2008.01.032.
  27. Stein M. Interpolation of spatial data: some theory for kriging. Heidelberg: Springer; 1999.
  28. Wynn HP. The sequential generation of D-optimum experimental designs. Ann Math Stat. 1970;41(5):1655–1664. doi: 10.1214/aoms/1177696809.
