Author manuscript; available in PMC: 2015 Jun 1.
Published in final edited form as: Pattern Recognit. 2013 Dec 3;47(6):2178–2192. doi: 10.1016/j.patcog.2013.11.022

Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model

Amin Zollanvari a,b,*, Edward R Dougherty a,c
PMCID: PMC3979595  NIHMSID: NIHMS555121  PMID: 24729636

Abstract

The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first and second moments of the Bayesian MMSE error estimator and its cross moment with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

Keywords: Linear discriminant analysis, Bayesian Minimum Mean-Square Error Estimator, Double asymptotics, Kolmogorov asymptotics, Performance metrics, RMS

1. Introduction

The most important aspect of any classifier is its error, ε, defined as the probability of misclassification, since ε quantifies the predictive capacity of the classifier. Relative to a classification rule and a given feature-label distribution, the error is a function of the sampling distribution and as such possesses its own distribution, which characterizes the true performance of the classification rule. In practice, the error must be estimated from data by some error estimation rule yielding an estimate, ε̂. If the sample is large, then part of the data can be held out for error estimation; otherwise, the classification and error estimation rules are applied to the same set of training data, which is the situation that concerns us here. Like the true error, the estimated error is a function of the sampling distribution. The performance of the error estimation rule is completely described by the joint distribution of (ε, ε̂).

Three widely-used metrics for performance of an error estimator are the bias, deviation variance, and root-mean-square (RMS), given by

\mathrm{Bias}[\hat{\varepsilon}] = E[\hat{\varepsilon}] - E[\varepsilon], \quad \mathrm{Var}_d[\hat{\varepsilon}] = \mathrm{Var}(\hat{\varepsilon} - \varepsilon) = \mathrm{Var}(\varepsilon) + \mathrm{Var}(\hat{\varepsilon}) - 2\,\mathrm{Cov}(\varepsilon, \hat{\varepsilon}), \quad \mathrm{RMS}[\hat{\varepsilon}] = \sqrt{E[(\varepsilon - \hat{\varepsilon})^2]} = \sqrt{E[\varepsilon^2] + E[\hat{\varepsilon}^2] - 2E[\varepsilon\hat{\varepsilon}]} = \sqrt{\mathrm{Bias}[\hat{\varepsilon}]^2 + \mathrm{Var}_d[\hat{\varepsilon}]}, (1)

respectively. The RMS (square root of mean square error, MSE) is the most important because it quantifies estimation accuracy. Bias requires only the first-order moments, whereas the deviation variance and RMS require also the second-order moments.
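As a concrete numerical illustration of (1), the three metrics can be estimated from paired Monte Carlo draws of (ε, ε̂); the following minimal Python sketch (function names are illustrative, not from the paper) uses the sample analogues of the moments:

```python
import math

def error_metrics(true_errs, est_errs):
    """Sample analogues of the metrics in (1), computed from paired
    draws of the true error eps and the estimated error eps_hat."""
    n = len(true_errs)
    d = [e_hat - e for e, e_hat in zip(true_errs, est_errs)]  # eps_hat - eps
    bias = sum(d) / n                                  # Bias[eps_hat]
    var_d = sum((x - bias) ** 2 for x in d) / n        # Var(eps_hat - eps)
    rms = math.sqrt(sum(x * x for x in d) / n)         # sqrt(E[(eps - eps_hat)^2])
    return bias, var_d, rms
```

With these sample versions, RMS² = Bias² + Var_d holds exactly, mirroring the identity in (1).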

Historically, analytic study has mainly focused on the first marginal moment of the estimated error for linear discriminant analysis (LDA) in the Gaussian model or for multinomial discrimination [1]–[12]; however, marginal knowledge does not provide the joint probabilistic knowledge required for assessing estimation accuracy, in particular, the mixed second moment. Recent work has aimed at characterizing joint behavior. For multinomial discrimination, exact representations of the second-order moments, both marginal and mixed, for the true error and the resubstitution and leave-one-out estimators have been obtained [13]. For LDA, the exact joint distributions for both resubstitution and leave-one-out have been found in the univariate Gaussian model, and approximations have been found in the multivariate model with a common known covariance matrix [14, 15]. Whereas one could utilize the approximate representations to find approximate moments via integration in the multivariate model with a common known covariance matrix, more accurate approximations, including the second-order mixed moment and the RMS, can be achieved via asymptotically exact analytic expressions using a double-asymptotic approach, where both sample size (n) and dimensionality (p) approach infinity at a fixed rate between the two [16]. Finite-sample approximations from the double-asymptotic method have been shown to be quite accurate [16, 17, 18]. There is quite a body of work on the use of double asymptotics for the analysis of LDA and its related statistics [16, 19, 20, 21, 22, 23]. Raudys and Young provide a good review of the literature on the subject [24].

Although the theoretical underpinning of both [16] and the present paper relies on double asymptotic expansions, in which n, p → ∞ at a proportional rate, our practical interest is in the finite-sample approximations corresponding to the asymptotic expansions. In [17], the accuracy of such finite-sample approximations was investigated relative to asymptotic expansions for the expected error of LDA in a Gaussian model. Several single-asymptotic expansions (n → ∞) were considered, along with double-asymptotic expansions (n, p → ∞) [19, 20]. The results of [17] show that the double-asymptotic approximations are significantly more accurate than the single-asymptotic approximations. In particular, even with n/p < 3, the double-asymptotic expansions yield “excellent approximations” while the others “falter.”

The aforementioned work is based on the assumption that a sample is drawn from a fixed feature-label distribution F, a classifier and error estimate are derived from the sample without using any knowledge concerning F, and performance is relative to F. Research dating to 1978 shows that small-sample error estimation under this paradigm tends to be inaccurate. Re-sampling methods such as cross-validation possess large deviation variance and, therefore, large RMS [9, 25]. Scientific content in the context of small-sample classification can be facilitated by prior knowledge [26, 27, 28]. There are three possibilities regarding the feature-label distribution: (1) F is known, in which case no data are needed and there is no error estimation issue; (2) nothing is known about F, in which case there are no known RMS bounds, or those that are known are useless for small samples; and (3) F is known to belong to an uncertainty class of distributions, and this knowledge can be used either to bound the RMS [16] or in conjunction with the training data to estimate the error of the designed classifier. If there exists a prior distribution governing the uncertainty class, then in essence we have a distributional model. Since virtually nothing can be said about the error estimate in the first two cases, for a classifier to possess scientific content we must begin with a distributional model.

Given the need for a distributional model, a natural approach is to find an optimal minimum mean-square-error (MMSE) error estimator relative to an uncertainty class Θ [27]. This results in a Bayesian approach with Θ being given a prior distribution, π(θ), θ ∈ Θ, and the sample Sn being used to construct a posterior distribution, π*(θ), from which an optimal MMSE error estimator, ε̂B, can be derived. π(θ) provides a mathematical framework for both the analysis of any error estimator and the design of estimators with desirable properties or optimal performance. π*(θ) provides a sample-conditioned distribution on the true classifier error, where randomness in the true error comes from uncertainty in the underlying feature-label distribution (given Sn). Finding the sample-conditioned MSE, MSEθ[ε̂B|Sn], of an MMSE error estimator amounts to evaluating the variance of the true error conditioned on the observed sample [29]. MSEθ[ε̂B|Sn] → 0 as n → ∞ almost surely in both the discrete and Gaussian models provided in [29, 30], where closed form expressions for the sample-conditioned MSE are available.

The sample-conditioned MSE provides a measure of performance across the uncertainty class Θ for a given sample Sn. As such, it involves various sample-conditioned moments of the error estimator: Eθ[ε̂B|Sn], Eθ[(ε̂B)2|Sn], and Eθ[εε̂B|Sn]. One could, on the other hand, consider the MSE relative to a fixed feature-label distribution in the uncertainty class, with randomness relative to the sampling distribution. This would yield the feature-label-distribution-conditioned MSE, MSESn[ε̂B], and the corresponding moments: ESn[ε̂B], ESn[(ε̂B)2], and ESn[εε̂B]. From a classical point of view, the moments given θ are of interest because they shed light on the performance of an estimator relative to fixed parameters of the class-conditional densities. Using this set of moments (i.e., given θ), we are able to compare the performance of the Bayesian MMSE error estimator to classical estimators of the true error, such as resubstitution and leave-one-out.

From a global perspective, evaluating performance across both the uncertainty class and the sampling distribution requires the unconditioned MSE, MSEθSn[ε̂B], and the corresponding moments EθSn[ε̂B], EθSn[(ε̂B)2], and EθSn[εε̂B]. While both MSESn[ε̂B] and MSEθSn[ε̂B] have been examined via simulation studies in [27, 28, 30] for discrete and Gaussian models, our intention in the present paper is to derive double-asymptotic representations of the feature-label-distribution-conditioned (given θ) and unconditioned MSE, along with the corresponding moments of the Bayesian MMSE error estimator, for linear discriminant analysis (LDA) in the Gaussian model.

We make three modeling assumptions. As in many analytic error analysis studies, we employ stratified sampling: n = n0 + n1 sample points are selected to constitute the sample Sn in Rp, where, given n, n0 and n1 are determined, and where x1, x2, …, xn0 and xn0+1, xn0+2, …, xn0+n1 are randomly selected from distributions Π0 and Π1, respectively. Πi possesses a multivariate Gaussian distribution N(μi, Σ), for i = 0, 1. This means that the prior probabilities α0 and α1 = 1 − α0 for classes 0 and 1, respectively, cannot be estimated from the sample (see [31] for a discussion of issues surrounding lack of an estimator for α0). However, our second assumption is that α0 and α1 are known. This is a natural assumption for many medical classification problems. If we desire early or mid-term detection, then we are typically constrained to a small sample for which n0 and n1 are not random but for which α0 and α1 are known (estimated with extreme accuracy) on account of a large population of post-mortem examinations. The third assumption is that there is a known common covariance matrix for the classes, a common assumption in error analysis [32, 3, 5, 16]. The common covariance assumption is typical for small samples because it is well known that LDA commonly performs better than quadratic discriminant analysis (QDA) for small samples, even if the actual covariances are different, owing to the estimation advantage of using the pooled sample covariance matrix. As for the assumption of known covariance, this assumption is typical simply owing to the mathematical difficulties of obtaining error representations for unknown covariance (we know of no unknown-covariance result for second-order representations). Indeed, the natural next step following this paper and [16] is to address the unknown-covariance problem (although, with it being outstanding for almost half a century, it may prove difficult).

Under our assumptions, the Anderson W statistic is defined by

W(\bar{x}_0, \bar{x}_1, x) = \left(x - \frac{\bar{x}_0 + \bar{x}_1}{2}\right)^T \Sigma^{-1} (\bar{x}_0 - \bar{x}_1), (2)

where \bar{x}_0 = \frac{1}{n_0}\sum_{i=1}^{n_0} x_i and \bar{x}_1 = \frac{1}{n_1}\sum_{i=n_0+1}^{n_0+n_1} x_i. The corresponding linear discriminant is defined by ψn(x) = 1 if W(x̄0, x̄1, x) ≤ c and ψn(x) = 0 if W(x̄0, x̄1, x) > c, where c = \log\frac{1-\alpha_0}{\alpha_0}. Given the sample Sn (and thus x̄0 and x̄1), the error for ψn is given by ε = α0ε0 + α1ε1, where, for i = 0, 1,

\varepsilon_i = \Phi\left(\frac{(-1)^{i+1}\left(\mu_i - \frac{\bar{x}_0 + \bar{x}_1}{2}\right)^T \Sigma^{-1} (\bar{x}_0 - \bar{x}_1) + (-1)^i c}{\sqrt{(\bar{x}_0 - \bar{x}_1)^T \Sigma^{-1} (\bar{x}_0 - \bar{x}_1)}}\right), (3)

and Φ(.) denotes the cumulative distribution function of a standard normal random variable.
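To make (2)-(3) concrete, the following sketch evaluates the class-conditional errors for the special case of a known identity covariance matrix, so that Σ⁻¹ drops out (function names are illustrative):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def true_errors(mu0, mu1, xbar0, xbar1, c=0.0):
    """Class-conditional LDA errors (3), assuming a known identity
    covariance so that Sigma^{-1} drops out of (2)-(3)."""
    a = [u - v for u, v in zip(xbar0, xbar1)]          # xbar0 - xbar1
    denom = math.sqrt(sum(x * x for x in a))
    errs = []
    for i, mu in enumerate((mu0, mu1)):
        mid = [m - (u + v) / 2.0 for m, u, v in zip(mu, xbar0, xbar1)]
        num = sum(mm * aa for mm, aa in zip(mid, a))
        errs.append(phi(((-1) ** (i + 1) * num + (-1) ** i * c) / denom))
    return errs  # [eps_0, eps_1]
```

For x̄i = μi and c = 0 this reduces to Φ(−δμ/2) for both classes, i.e., the Bayes error of the model.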

Raudys proposed the following approximation to the expected LDA classification error [19, 24]:

E_{S_n}[\varepsilon_0] = P(W(\bar{x}_0, \bar{x}_1, x) \le c \mid x \in \Pi_0) \approx \Phi\left(\frac{-E_{S_n}[W(\bar{x}_0, \bar{x}_1, x) \mid x \in \Pi_0] + c}{\sqrt{\mathrm{Var}_{S_n}[W(\bar{x}_0, \bar{x}_1, x) \mid x \in \Pi_0]}}\right). (4)

We provide similar approximations for error-estimation moments and prove asymptotic exactness.
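Numerically, (4) only requires the two moments of W; the sketch below hardwires the known-covariance expressions for E[W] and Var[W] that are quoted later in (61) (illustrative names; `delta2` denotes δμ²):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def raudys_expected_error(delta2, p, n0, n1, c=0.0):
    """Raudys-type approximation (4) to E[eps_0] for LDA with known
    covariance, using the moments of W quoted in (61)."""
    g = 0.5 * (delta2 + p / n1 - p / n0)               # E[W | x in Pi_0]
    d = (delta2 + delta2 / n1                          # Var[W | x in Pi_0]
         + p * (1.0 / n0 + 1.0 / n1
                + 1.0 / (2 * n0 ** 2) + 1.0 / (2 * n1 ** 2)))
    return phi((-g + c) / math.sqrt(d))
```

As n0, n1 grow with p fixed, the value tends to Φ(−δμ/2), the Bayes error, as expected.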

2. Bayesian MMSE Error Estimator

In the Bayesian classification framework [27, 28], it is assumed that the class-0 and class-1 conditional distributions are parameterized by θ0 and θ1, respectively. Therefore, assuming known αi, the actual feature-label distribution belongs to an uncertainty class parameterized by θ = (θ0, θ1) according to a prior distribution, π(θ). Given a sample Sn, the Bayesian MMSE error estimator minimizes the MSE between the true error of a designed classifier, ψn, and an error estimate (a function of Sn and ψn). The expectation in the MSE is taken over the uncertainty class conditioned on Sn. Specifically, the MMSE error estimator is the expected true error, ε̂B(Sn) = Eθ[ε(θ)|Sn]. The expectation given the sample is over the posterior density, π*(θ). Thus, we write the Bayesian MMSE error estimator as ε̂B = Eπ*[ε]. To facilitate analytic representations, θ0 and θ1 are assumed to be independent prior to observing the data. Denote the marginal priors of θ0 and θ1 by π(θ0) and π(θ1), respectively, and the corresponding posteriors by π*(θ0) and π*(θ1), respectively. Independence is preserved by the posteriors, i.e., π*(θ0, θ1) = π*(θ0)π*(θ1) [27].

Owing to the posterior independence between θ0 and θ1 and the fact that αi is known, the Bayesian MMSE error estimator can be expressed by [27]

\hat{\varepsilon}^B = \alpha_0 E_{\pi^*}[\varepsilon_0] + \alpha_1 E_{\pi^*}[\varepsilon_1] = \alpha_0 \hat{\varepsilon}_0^B + \alpha_1 \hat{\varepsilon}_1^B, (5)

where, letting Θi be the parameter space of θi,

\hat{\varepsilon}_i^B = E_{\pi^*}[\varepsilon_i] = \int_{\Theta_i} \varepsilon_i(\theta_i)\, \pi^*(\theta_i)\, d\theta_i. (6)

For known Σ and a Gaussian prior on μi with mean mi and covariance matrix Σ/νi, ε̂_i^B is given by equation (10) in [28]:

\hat{\varepsilon}_i^B = \Phi\left((-1)^i \frac{-\left(m_i^* - \frac{\bar{x}_0 + \bar{x}_1}{2}\right)^T \Sigma^{-1} (\bar{x}_0 - \bar{x}_1) + c}{\sqrt{(\bar{x}_0 - \bar{x}_1)^T \Sigma^{-1} (\bar{x}_0 - \bar{x}_1)}} \sqrt{\frac{\nu_i^*}{\nu_i^* + 1}}\right), (7)

where

m_i^* = \frac{n_i \bar{x}_i + \nu_i m_i}{n_i + \nu_i}, \quad \nu_i^* = n_i + \nu_i, (8)

and νi > 0 is a measure of our certainty concerning the prior knowledge: the larger νi, the more localized the prior distribution is about mi. Letting μ = [μ0T, μ1T]T, the moments that interest us are of the form ESn[ε̂B|μ], ESn[(ε̂B)2|μ], and ESn[εε̂B|μ], which are used to obtain MSESn[ε̂B|μ], and Eμ,Sn[ε̂B], Eμ,Sn[(ε̂B)2], and Eμ,Sn[εε̂B], which are needed to characterize MSEμ,Sn[ε̂B].
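A minimal sketch of (7)-(8), again assuming a known identity covariance (so Σ⁻¹ is omitted; names are illustrative):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bayes_mmse_component(i, xbar0, xbar1, m_prior, n_i, nu_i, c=0.0):
    """Component eps_hat_i^B of (7), with m_i^* and nu_i^* from (8),
    for a known identity covariance (Sigma^{-1} omitted)."""
    xbar_i = (xbar0, xbar1)[i]
    m_star = [(n_i * xb + nu_i * m) / (n_i + nu_i)     # eq. (8)
              for xb, m in zip(xbar_i, m_prior)]
    nu_star = n_i + nu_i
    a = [u - v for u, v in zip(xbar0, xbar1)]          # xbar0 - xbar1
    denom = math.sqrt(sum(x * x for x in a))
    mid = [ms - (u + v) / 2.0 for ms, u, v in zip(m_star, xbar0, xbar1)]
    num = sum(mm * aa for mm, aa in zip(mid, a))
    return phi((-1) ** i * (-num + c) / denom
               * math.sqrt(nu_star / (nu_star + 1)))
```

As νi* → ∞ the shrinkage factor √(νi*/(νi*+1)) tends to 1 and the estimate approaches the plug-in error of (3) with μi replaced by mi*.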

3. Bayesian-Kolmogorov Asymptotic Conditions

The Raudys-Kolmogorov asymptotic conditions [16] are defined on a sequence of Gaussian discrimination problems with a sequence of parameters and sample sizes: (μp,0, μp,1, Σp, np,0, np,1), p = 1, 2, …, where the means and the covariance matrix are arbitrary. The common assumptions for Raudys-Kolmogorov asymptotics are n0 → ∞, n1 → ∞, p → ∞, p/n0 → J0 < ∞, and p/n1 → J1 < ∞. For notational simplicity, we denote the limit under these conditions by lim_p. In the analysis of classical statistics related to LDA it is commonly assumed that the Mahalanobis distance, \delta_{\mu,p} = \sqrt{(\mu_{p,0} - \mu_{p,1})^T \Sigma_p^{-1} (\mu_{p,0} - \mu_{p,1})}, is finite and lim_p δμ,p = δ̄μ (see [22], p. 4). This condition assures existence of limits of performance metrics of the relevant statistics [16, 22].

To analyze the Bayesian MMSE error estimator, ε̂_i^B, we modify the sequence of Gaussian discrimination problems to:

(\mu_{p,0}, \mu_{p,1}, \Sigma_p, n_{p,0}, n_{p,1}, m_{p,0}, m_{p,1}, \nu_{p,0}, \nu_{p,1}), \quad p = 1, 2, \ldots (9)

In addition to the previous conditions, we assume that the following limits exist for i, j = 0, 1: \lim_p m_{p,i}^T \Sigma_p^{-1} \mu_{p,j} = \overline{m_i^T \Sigma^{-1} \mu_j}, \lim_p m_{p,i}^T \Sigma_p^{-1} m_{p,j} = \overline{m_i^T \Sigma^{-1} m_j}, and \lim_p \mu_{p,i}^T \Sigma_p^{-1} \mu_{p,j} = \overline{\mu_i^T \Sigma^{-1} \mu_j}, where the barred quantities are the constants to which the limits converge. In [22], fairly mild sufficient conditions are given for the existence of these limits.

We refer to all of the aforementioned conditions, along with νi → ∞ and νi/ni → γi < ∞, as the Bayesian-Kolmogorov asymptotic conditions (b.k.a.c.). We denote the limit under these conditions by lim_{b.k.a.c.}, which means that, for i, j = 0, 1,

\lim_{\mathrm{b.k.a.c.}}(\cdot) = \lim_{\substack{p,\, n_i,\, \nu_i \to \infty \\ p/n_0 \to J_0 < \infty,\ p/n_1 \to J_1 < \infty \\ \nu_0/n_0 \to \gamma_0 < \infty,\ \nu_1/n_1 \to \gamma_1 < \infty \\ m_{p,i}^T \Sigma_p^{-1} \mu_{p,j} = O(1),\ m_{p,i}^T \Sigma_p^{-1} \mu_{p,j} \to \overline{m_i^T \Sigma^{-1} \mu_j} \\ m_{p,i}^T \Sigma_p^{-1} m_{p,j} = O(1),\ m_{p,i}^T \Sigma_p^{-1} m_{p,j} \to \overline{m_i^T \Sigma^{-1} m_j} \\ \mu_{p,i}^T \Sigma_p^{-1} \mu_{p,j} = O(1),\ \mu_{p,i}^T \Sigma_p^{-1} \mu_{p,j} \to \overline{\mu_i^T \Sigma^{-1} \mu_j}}} (\cdot) (10)

This limit is defined for the case where there is conditioning on a specific value of μp,i. Therefore, in this case μp,i is not a random variable, and for each p, it is a vector of constants. Absent such conditioning, the sequence of discrimination problems and the above limit reduce to

(\Sigma_p, n_{p,0}, n_{p,1}, m_{p,0}, m_{p,1}, \nu_{p,0}, \nu_{p,1}), \quad p = 1, 2, \ldots (11)

and

\lim_{\mathrm{b.k.a.c.}}(\cdot) = \lim_{\substack{p,\, n_i,\, \nu_i \to \infty \\ p/n_0 \to J_0 < \infty,\ p/n_1 \to J_1 < \infty \\ \nu_0/n_0 \to \gamma_0 < \infty,\ \nu_1/n_1 \to \gamma_1 < \infty \\ m_{p,i}^T \Sigma_p^{-1} m_{p,j} = O(1),\ m_{p,i}^T \Sigma_p^{-1} m_{p,j} \to \overline{m_i^T \Sigma^{-1} m_j}}} (\cdot) (12)

respectively. For notational simplicity we assume clarity from the context and do not explicitly differentiate between these conditions. We denote convergence in probability under Bayesian-Kolmogorov asymptotic conditions by "plim_{b.k.a.c.}"; "lim_{b.k.a.c.}" and "\xrightarrow{K}" denote ordinary convergence under Bayesian-Kolmogorov asymptotic conditions. At no risk of ambiguity, we henceforth omit the subscript "p" from the parameters and sample sizes in (9) or (11).

We define η_{a_1,a_2,a_3,a_4} = (a_1 − a_2)^T Σ^{-1} (a_3 − a_4) and, for ease of notation, write η_{a_1,a_2,a_1,a_2} as η_{a_1,a_2}. There are two special cases: (1) the square of the Mahalanobis distance in the space of the parameters of the unknown class-conditional densities, δ_μ² = η_{μ_0,μ_1} > 0; and (2) the square of the Mahalanobis distance in the space of prior knowledge, Δ_m² = η_{m_0,m_1} > 0, where m = [m_0^T, m_1^T]^T. The conditions in (10) assure existence of lim_{b.k.a.c.} η_{a_1,a_2,a_3,a_4}, where the a_j's can be any combination of m_i and μ_i, i = 0, 1. Consistent with our notation, we use η̄_{a_1,a_2,a_3,a_4}, δ̄_μ², and Δ̄_m² to denote the lim_{b.k.a.c.} of η_{a_1,a_2,a_3,a_4}, δ_μ², and Δ_m², respectively. Thus,

\bar{\eta}_{a_1,a_2,a_3,a_4} = \overline{(a_1 - a_2)^T \Sigma^{-1} (a_3 - a_4)} = \overline{a_1^T \Sigma^{-1} a_3} - \overline{a_1^T \Sigma^{-1} a_4} - \overline{a_2^T \Sigma^{-1} a_3} + \overline{a_2^T \Sigma^{-1} a_4}. (13)

The ratio p/ni is an indicator of complexity for LDA (in fact, for any linear classification rule): the VC dimension in this case is p + 1 [33]. Therefore, the conditions (10) assure the existence of the asymptotic complexity of the problem. The ratio νi/ni is an indicator of the certainty of prior knowledge relative to the data: the smaller νi/ni, the more we rely on the data and the less on our prior knowledge. Therefore, the conditions (10) state asymptotic existence of relative certainty. In the following, we let β_i = ν_i/n_i, so that β_i → γ_i under the b.k.a.c.

4. First Moment of ε̂_i^B

In this section we use the Bayesian-Kolmogorov asymptotic conditions to characterize the conditional and unconditional first moments of the Bayesian MMSE error estimator.

4.1. Conditional Expectation of ε̂_i^B: E_{S_n}[ε̂_i^B | μ]

The asymptotic (in a Bayesian-Kolmogorov sense) conditional expectation of the Bayesian MMSE error estimator is characterized in the following theorem, with the proof presented in the Appendix. Note that G_0^B, G_1^B, and D depend on μ, but to ease the notation we leave this implicit.

Theorem 1

Consider the sequence of Gaussian discrimination problems defined by (9). Then

\lim_{\mathrm{b.k.a.c.}} E_{S_n}[\hat{\varepsilon}_i^B \mid \mu] = \Phi\left((-1)^i \frac{-G_i^B + c}{\sqrt{D}}\right), (14)

so that

\lim_{\mathrm{b.k.a.c.}} E_{S_n}[\hat{\varepsilon}^B \mid \mu] = \alpha_0 \Phi\left(\frac{-G_0^B + c}{\sqrt{D}}\right) + \alpha_1 \Phi\left(\frac{G_1^B - c}{\sqrt{D}}\right), (15)

where

G_0^B = \frac{1}{2(1+\gamma_0)}\left(\gamma_0(\bar{\eta}_{m_0,\mu_1} - \bar{\eta}_{m_0,\mu_0}) + \bar{\delta}_\mu^2 + (1-\gamma_0)J_0 + (1+\gamma_0)J_1\right), \quad G_1^B = -\frac{1}{2(1+\gamma_1)}\left(\gamma_1(\bar{\eta}_{m_1,\mu_0} - \bar{\eta}_{m_1,\mu_1}) + \bar{\delta}_\mu^2 + (1-\gamma_1)J_1 + (1+\gamma_1)J_0\right), \quad D = \bar{\delta}_\mu^2 + J_0 + J_1. (16)

Theorem 1 suggests a finite-sample approximation:

E_{S_n}[\hat{\varepsilon}_0^B \mid \mu] \approx \Phi\left(\frac{-G_0^{B,f} + c}{\sqrt{\delta_\mu^2 + \frac{p}{n_0} + \frac{p}{n_1}}}\right), (17)

where G_0^{B,f} is obtained by using the finite-sample parameters of the problem in (16), namely,

G_0^{B,f} = \frac{1}{2(1+\beta_0)}\left(\beta_0(\eta_{m_0,\mu_1} - \eta_{m_0,\mu_0}) + \delta_\mu^2 + (1-\beta_0)\frac{p}{n_0} + (1+\beta_0)\frac{p}{n_1}\right). (18)

To obtain the corresponding approximation for E_{S_n}[ε̂_1^B | μ], it suffices to use (17) after exchanging n_0 and n_1, ν_0 and ν_1, m_0 and m_1, and μ_0 and μ_1 in −G_0^{B,f}.
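The approximation (17)-(18) is straightforward to evaluate; a sketch (illustrative names; the η arguments are the finite-sample quadratic forms appearing in (18)):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cond_mean_approx(delta2, eta_m0_mu1, eta_m0_mu0, p, n0, n1, nu0, c=0.0):
    """Finite-sample approximation (17) to E_Sn[eps_hat_0^B | mu],
    with G_0^{B,f} from (18); delta2 is delta_mu^2 and the eta
    arguments are (m_0 - mu_j)' Sigma^{-1} (m_0 - mu_j)."""
    b0 = nu0 / n0                                      # beta_0 = nu_0 / n_0
    g = (b0 * (eta_m0_mu1 - eta_m0_mu0) + delta2
         + (1 - b0) * p / n0 + (1 + b0) * p / n1) / (2 * (1 + b0))
    return phi((-g + c) / math.sqrt(delta2 + p / n0 + p / n1))
```

With β0 ≈ 0 (weak prior) and large n0, n1, the value approaches Φ(−δμ/2), matching the Raudys-type approximation of the true error.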

To obtain a Raudys-type finite-sample approximation for the expectation of ε̂_0^B, first note that the Gaussian distribution in (7) allows (7) to be rewritten as

\hat{\varepsilon}_0^B = P(U_0(\bar{x}_0, \bar{x}_1, z) \le c \mid \bar{x}_0, \bar{x}_1, z \in \Psi_0, \mu), (19)

where z is independent of S_n, \Psi_i is a multivariate Gaussian N\left(m_i, \frac{(n_i + \nu_i + 1)(n_i + \nu_i)}{\nu_i^2}\Sigma\right), and

U_i(\bar{x}_0, \bar{x}_1, z) = \left(\frac{\nu_i z + n_i \bar{x}_i}{n_i + \nu_i} - \frac{\bar{x}_0 + \bar{x}_1}{2}\right)^T \Sigma^{-1} (\bar{x}_0 - \bar{x}_1). (20)

Taking the expectation of ε^0B relative to the sampling distribution and then applying the standard normal approximation yields the Raudys-type of approximation:

E_{S_n}[\hat{\varepsilon}_0^B \mid \mu] = P(U_0(\bar{x}_0, \bar{x}_1, z) \le c \mid z \in \Psi_0, \mu) \approx \Phi\left(\frac{-E_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu] + c}{\sqrt{\mathrm{Var}_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu]}}\right). (21)

Algebraic manipulation yields (Suppl. Section A)

E_{S_n}[\hat{\varepsilon}_0^B \mid \mu] \approx \Phi\left(\frac{-G_0^{B,R} + c}{\sqrt{D_0^{B,R}}}\right), (22)

where

G_0^{B,R} = G_0^{B,f}, (23)

with G_0^{B,f} presented in (18), and

D_0^{B,R} = \delta_\mu^2 + \frac{\delta_\mu^2}{n_0(1+\beta_0)} + \frac{\delta_\mu^2}{n_1(1+\beta_0)} + \frac{\delta_\mu^2}{n_0(1+\beta_0)^2} + \frac{\beta_0}{(1+\beta_0)^2}\left[\frac{\eta_{m_0,\mu_1} - (1-\beta_0)\eta_{m_0,\mu_0} - \delta_\mu^2}{n_0} + (1+\beta_0)\frac{\eta_{m_0,\mu_1} - \eta_{m_0,\mu_0}}{n_1}\right] + \frac{p}{n_0} + \frac{p}{n_1} + \frac{p}{n_0^2(1+\beta_0)} + \frac{p}{n_0 n_1(1+\beta_0)} + \frac{(1-\beta_0)^2 p}{2n_0^2(1+\beta_0)^2} + \frac{p}{n_0 n_1(1+\beta_0)^2} + \frac{p}{2n_1^2}. (24)

The corresponding approximation for ESn[ε^1Bμ] is

E_{S_n}[\hat{\varepsilon}_1^B \mid \mu] \approx \Phi\left(\frac{G_1^{B,R} - c}{\sqrt{D_1^{B,R}}}\right), (25)

where D_1^{B,R} and G_1^{B,R} are obtained by exchanging n_0 and n_1, ν_0 and ν_1, m_0 and m_1, and μ_0 and μ_1 in D_0^{B,R} and −G_0^{B,R}, respectively. It is straightforward to see that

G_0^{B,R} \xrightarrow{K} G_0^B, \quad D_0^{B,R} \xrightarrow{K} \bar{\delta}_\mu^2 + J_0 + J_1, (26)

with G0B being defined in Theorem 1. Therefore, the approximation obtained in (22) is asymptotically exact and (17) and (22) are asymptotically equivalent.

4.2. Unconditional Expectation of ε̂_i^B: E_{μ,S_n}[ε̂_i^B]

We consider the unconditional expectation of ε̂_i^B under Bayesian-Kolmogorov asymptotics. The proof of the following theorem is presented in the Appendix.

Theorem 2

Consider the sequence of Gaussian discrimination problems defined by (11). Then

\lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[\hat{\varepsilon}_i^B] = \Phi\left((-1)^i \frac{-H_i + c}{\sqrt{F}}\right), (27)

so that

\lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[\hat{\varepsilon}^B] = \alpha_0 \Phi\left(\frac{-H_0 + c}{\sqrt{F}}\right) + \alpha_1 \Phi\left(\frac{H_1 - c}{\sqrt{F}}\right), (28)

where

H_0 = \frac{1}{2}\left(\bar{\Delta}_m^2 + J_1 - J_0 + \frac{J_0}{\gamma_0} + \frac{J_1}{\gamma_1}\right), \quad H_1 = -\frac{1}{2}\left(\bar{\Delta}_m^2 + J_0 - J_1 + \frac{J_0}{\gamma_0} + \frac{J_1}{\gamma_1}\right), \quad F = \bar{\Delta}_m^2 + J_0 + J_1 + \frac{J_0}{\gamma_0} + \frac{J_1}{\gamma_1}. (29)

Theorem 2 suggests the finite-sample approximation

E_{\mu,S_n}[\hat{\varepsilon}_0^B] \approx \Phi\left(\frac{-H_0^R + c}{\sqrt{\Delta_m^2 + \frac{p}{n_0} + \frac{p}{n_1} + \frac{p}{\nu_0} + \frac{p}{\nu_1}}}\right), (30)

where

H_0^R = \frac{1}{2}\left(\Delta_m^2 + \frac{p}{n_1} - \frac{p}{n_0} + \frac{p}{\nu_0} + \frac{p}{\nu_1}\right). (31)

From (19) we can get the Raudys-type approximation:

E_{\mu,S_n}[\hat{\varepsilon}_0^B] = E_\mu[P(U_0(\bar{x}_0, \bar{x}_1, z) \le c \mid z \in \Psi_0, \mu)] \approx \Phi\left(\frac{-E_{\mu,S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0] + c}{\sqrt{\mathrm{Var}_{\mu,S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0]}}\right). (32)

Some algebraic manipulations yield (Suppl. Section B)

E_{\mu,S_n}[\hat{\varepsilon}_0^B] \approx \Phi\left(\frac{-H_0^R + c}{\sqrt{F_0^R}}\right), (33)

where

F_0^R = \left(1 + \frac{1}{\nu_0} + \frac{1}{\nu_1} + \frac{1}{n_1}\right)\Delta_m^2 + p\left(\frac{1}{n_0} + \frac{1}{n_1} + \frac{1}{\nu_0} + \frac{1}{\nu_1}\right) + p\left(\frac{1}{2n_0^2} + \frac{1}{2n_1^2} + \frac{1}{2\nu_0^2} + \frac{1}{2\nu_1^2}\right) + p\left(\frac{1}{n_1\nu_0} + \frac{1}{n_1\nu_1} + \frac{1}{\nu_0\nu_1}\right). (34)

It is straightforward to see that

H_0^R \xrightarrow{K} H_0, \quad F_0^R \xrightarrow{K} \bar{\Delta}_m^2 + J_0 + J_1 + \frac{J_0}{\gamma_0} + \frac{J_1}{\gamma_1}, (35)

with H0 defined in Theorem 2. Hence, the approximation obtained in (33) is asymptotically exact and both (30) and (33) are asymptotically equivalent.
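A sketch of the finite-sample approximation (30)-(31) (illustrative names; `Dm2` denotes the squared prior Mahalanobis distance Δm²):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def uncond_mean_approx(Dm2, p, n0, n1, nu0, nu1, c=0.0):
    """Finite-sample approximation (30) to E_{mu,Sn}[eps_hat_0^B],
    with H_0^R from (31)."""
    h = 0.5 * (Dm2 + p / n1 - p / n0 + p / nu0 + p / nu1)
    f = Dm2 + p / n0 + p / n1 + p / nu0 + p / nu1
    return phi((-h + c) / math.sqrt(f))
```

As n0, n1, ν0, ν1 all grow, the value tends to Φ(−Δm/2), i.e., the Bayes error of the problem defined by the prior means.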

5. Second Moments of ε̂_i^B

Here we employ the Bayesian-Kolmogorov asymptotic analysis to characterize the second moments and the cross moments with the actual error, and therefore the MSE of error estimation.

5.1. Conditional Second and Cross Moments of ε̂_i^B

Defining two i.i.d. random vectors, z and z′, yields the second moment representation

E_{S_n}[(\hat{\varepsilon}_0^B)^2 \mid \mu] = E_{S_n}[P(U_0(\bar{x}_0, \bar{x}_1, z) \le c \mid \bar{x}_0, \bar{x}_1, z \in \Psi_0, \mu)^2] = E_{S_n}[P(U_0(\bar{x}_0, \bar{x}_1, z) \le c \mid \bar{x}_0, \bar{x}_1, z \in \Psi_0, \mu)\, P(U_0(\bar{x}_0, \bar{x}_1, z') \le c \mid \bar{x}_0, \bar{x}_1, z' \in \Psi_0, \mu)] = E_{S_n}[P(U_0(\bar{x}_0, \bar{x}_1, z) \le c,\ U_0(\bar{x}_0, \bar{x}_1, z') \le c \mid \bar{x}_0, \bar{x}_1, z \in \Psi_0, z' \in \Psi_0, \mu)] = P(U_0(\bar{x}_0, \bar{x}_1, z) \le c,\ U_0(\bar{x}_0, \bar{x}_1, z') \le c \mid z \in \Psi_0, z' \in \Psi_0, \mu), (36)

where z and z′ are independent of S_n, Ψ_i is the multivariate Gaussian N\left(m_i, \frac{(n_i + \nu_i + 1)(n_i + \nu_i)}{\nu_i^2}\Sigma\right), and U_i(\bar{x}_0, \bar{x}_1, z) is defined in (20).

We have the following theorem, with the proof presented in the Appendix.

Theorem 3

For the sequence of Gaussian discrimination problems in (9) and for i, j = 0, 1,

\lim_{\mathrm{b.k.a.c.}} E_{S_n}[\hat{\varepsilon}_i^B \hat{\varepsilon}_j^B \mid \mu] = \Phi\left((-1)^i \frac{-G_i^B + c}{\sqrt{D}}\right) \Phi\left((-1)^j \frac{-G_j^B + c}{\sqrt{D}}\right), (37)

so that

\lim_{\mathrm{b.k.a.c.}} E_{S_n}[(\hat{\varepsilon}^B)^2 \mid \mu] = \left[\alpha_0 \Phi\left(\frac{-G_0^B + c}{\sqrt{D}}\right) + \alpha_1 \Phi\left(\frac{G_1^B - c}{\sqrt{D}}\right)\right]^2, (38)

where G_0^B, G_1^B, and D are defined in (16).

This theorem suggests the finite-sample approximation

E_{S_n}[(\hat{\varepsilon}_0^B)^2 \mid \mu] \approx \left[\Phi\left(\frac{-G_0^{B,f} + c}{\sqrt{\delta_\mu^2 + \frac{p}{n_0} + \frac{p}{n_1}}}\right)\right]^2, (39)

which is the square of the approximation (17). Corresponding approximations for E_{S_n}[ε̂_0^B ε̂_1^B | μ] and E_{S_n}[(ε̂_1^B)^2 | μ] are obtained similarly.

Similar to the proof of Theorem 3, we obtain the conditional cross moments of ε̂^B with the true error.

Theorem 4

Consider the sequence of Gaussian discrimination problems in (9). Then for i, j = 0, 1,

\lim_{\mathrm{b.k.a.c.}} E_{S_n}[\hat{\varepsilon}_i^B \varepsilon_j \mid \mu] = \Phi\left((-1)^i \frac{-G_i^B + c}{\sqrt{D}}\right) \Phi\left((-1)^j \frac{-G_j + c}{\sqrt{D}}\right), (40)

so that

\lim_{\mathrm{b.k.a.c.}} E_{S_n}[\hat{\varepsilon}^B \varepsilon \mid \mu] = \sum_{i=0}^{1} \sum_{j=0}^{1} \alpha_i \alpha_j \Phi\left((-1)^i \frac{-G_i^B + c}{\sqrt{D}}\right) \Phi\left((-1)^j \frac{-G_j + c}{\sqrt{D}}\right), (41)

where G_i^B and D are defined in (16) and G_i is defined in (47).

This theorem suggests the finite-sample approximation

E_{S_n}[\hat{\varepsilon}_0^B \varepsilon_0 \mid \mu] \approx \Phi\left(\frac{-G_0^{B,f} + c}{\sqrt{\delta_\mu^2 + \frac{p}{n_0} + \frac{p}{n_1}}}\right) \Phi\left(\frac{-\frac{1}{2}\left(\delta_\mu^2 + \frac{p}{n_1} - \frac{p}{n_0}\right) + c}{\sqrt{\delta_\mu^2 + \frac{p}{n_0} + \frac{p}{n_1}}}\right). (42)

This is a product of (17) and the finite-sample approximation for ESn[ε0|μ] in [16].

A consequence of Theorems 1, 3, and 4 is that all the conditional variances and covariances are asymptotically zero:

\lim_{\mathrm{b.k.a.c.}} \mathrm{Var}_{S_n}(\hat{\varepsilon}^B \mid \mu) = \lim_{\mathrm{b.k.a.c.}} \mathrm{Var}_{S_n}(\varepsilon \mid \mu) = \lim_{\mathrm{b.k.a.c.}} \mathrm{Cov}_{S_n}(\varepsilon, \hat{\varepsilon}^B \mid \mu) = 0. (43)

Hence, the deviation variance is also asymptotically zero, \lim_{\mathrm{b.k.a.c.}} \mathrm{Var}^d_{S_n}[\hat{\varepsilon}^B \mid \mu] = 0. Thus, defining the conditional bias as

\mathrm{Bias}_{C,n}[\hat{\varepsilon}^B] = E_{S_n}[\hat{\varepsilon}^B - \varepsilon \mid \mu], (44)

the asymptotic RMS reduces to

\lim_{\mathrm{b.k.a.c.}} \mathrm{RMS}_{S_n}[\hat{\varepsilon}^B \mid \mu] = \lim_{\mathrm{b.k.a.c.}} \left|\mathrm{Bias}_{C,n}[\hat{\varepsilon}^B]\right|. (45)

To express the conditional bias, as proven in [16],

\lim_{\mathrm{b.k.a.c.}} E_{S_n}[\varepsilon \mid \mu] = \alpha_0 \Phi\left(\frac{-G_0 + c}{\sqrt{D}}\right) + \alpha_1 \Phi\left(\frac{G_1 - c}{\sqrt{D}}\right), (46)

where

G_0 = \frac{1}{2}(\bar{\delta}_\mu^2 + J_1 - J_0), \quad G_1 = -\frac{1}{2}(\bar{\delta}_\mu^2 + J_0 - J_1), \quad D = \bar{\delta}_\mu^2 + J_0 + J_1. (47)

It follows from Theorem 1 and (46) that

\lim_{\mathrm{b.k.a.c.}} \mathrm{Bias}_{C,n}[\hat{\varepsilon}^B] = \alpha_0\left[\Phi\left(\frac{-G_0^B + c}{\sqrt{D}}\right) - \Phi\left(\frac{-G_0 + c}{\sqrt{D}}\right)\right] + \alpha_1\left[\Phi\left(\frac{G_1^B - c}{\sqrt{D}}\right) - \Phi\left(\frac{G_1 - c}{\sqrt{D}}\right)\right]. (48)

Recall that the MMSE error estimator is unconditionally unbiased: BiasU,n[ε̂B] = Eμ,Sn[ε̂Bε] = 0.

We next obtain Raudys-type approximations corresponding to Theorems 3 and 4 by utilizing the joint distribution of U_0(x̄_0, x̄_1, z) and U_0(x̄_0, x̄_1, z′), defined in (20), with z and z′ being independently selected from populations Ψ_0 or Ψ_1. We employ the function

\Phi(a, b; \rho) = \int_{-\infty}^{a} \int_{-\infty}^{b} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{-\frac{x^2 + y^2 - 2\rho xy}{2(1-\rho^2)}\right\} dx\, dy, (49)

which is the distribution function of a joint bivariate Gaussian vector with zero means, unit variances, and correlation coefficient ρ. Note that Φ(a, ∞; ρ) = Φ(a) and Φ(a, b; 0) = Φ(a) Φ(b). For simplicity of notation, we write Φ(a, a; ρ) as Φ(a; ρ). The rectangular-area probabilities involving any jointly Gaussian pair of variables (x, y) can be expressed as

P(x \le c, y \le d) = \Phi\left(\frac{c - \mu_x}{\sigma_x}, \frac{d - \mu_y}{\sigma_y}; \rho_{xy}\right), (50)

with μ_x = E[x], μ_y = E[y], σ_x = \sqrt{\mathrm{Var}(x)}, σ_y = \sqrt{\mathrm{Var}(y)}, and correlation coefficient ρ_{xy}.
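Φ(a, b; ρ) is not elementary, but it satisfies Plackett's identity ∂Φ(a, b; ρ)/∂ρ = φ₂(a, b; ρ), the bivariate density evaluated at (a, b); integrating this identity from 0 to ρ gives a simple numerical scheme (a sketch with illustrative names; a dedicated library routine should be preferred in practice):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi2(a, b, rho, steps=2000):
    """Bivariate standard normal CDF Phi(a, b; rho) of (49)-(50),
    |rho| < 1, via the trapezoidal rule applied to Plackett's
    identity dPhi/drho = bivariate density at (a, b)."""
    def dens(r):
        q = 1.0 - r * r
        return (math.exp(-(a * a - 2.0 * r * a * b + b * b) / (2.0 * q))
                / (2.0 * math.pi * math.sqrt(q)))
    h = rho / steps
    total = 0.0
    for k in range(steps):
        total += 0.5 * (dens(k * h) + dens((k + 1) * h)) * h
    return phi(a) * phi(b) + total
```

For a = b = 0 this reproduces the closed form Φ(0, 0; ρ) = 1/4 + arcsin(ρ)/(2π), and ρ = 0 recovers Φ(a)Φ(b), consistent with the properties noted above.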

Using (36), we obtain the second-order extension of (21) by

E_{S_n}[(\hat{\varepsilon}_0^B)^2 \mid \mu] = P(U_0(\bar{x}_0, \bar{x}_1, z) \le c,\ U_0(\bar{x}_0, \bar{x}_1, z') \le c \mid z \in \Psi_0, z' \in \Psi_0, \mu) \approx \Phi\left(\frac{-E_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu] + c}{\sqrt{\mathrm{Var}_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu]}};\ \frac{\mathrm{Cov}_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z), U_0(\bar{x}_0, \bar{x}_1, z') \mid z \in \Psi_0, z' \in \Psi_0, \mu]}{\mathrm{Var}_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu]}\right). (51)

Using (51), some algebraic manipulations yield

E_{S_n}[(\hat{\varepsilon}_0^B)^2 \mid \mu] \approx \Phi\left(\frac{-G_0^{B,R} + c}{\sqrt{D_0^{B,R}}};\ \frac{C_0^{B,R}}{D_0^{B,R}}\right), (52)

with G_0^{B,R} and D_0^{B,R} presented in (23) and (24), respectively, and

C_0^{B,R} = \mathrm{Cov}_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z), U_0(\bar{x}_0, \bar{x}_1, z') \mid z \in \Psi_0, z' \in \Psi_0, \mu] = \frac{\beta_0}{(1+\beta_0)^2}\left[\frac{\eta_{m_0,\mu_1} - (1-\beta_0)\eta_{m_0,\mu_0} - \delta_\mu^2}{n_0} + (1+\beta_0)\frac{\eta_{m_0,\mu_1} - \eta_{m_0,\mu_0}}{n_1}\right] + \frac{(1-\beta_0)^2 p}{2n_0^2(1+\beta_0)^2} + \frac{p}{n_0 n_1(1+\beta_0)^2} + \frac{p}{2n_1^2} + \frac{\delta_\mu^2}{n_1(1+\beta_0)} + \frac{\delta_\mu^2}{n_0(1+\beta_0)^2}. (53)

The proof of (53) follows by expanding U_0(\bar{x}_0, \bar{x}_1, z) and U_0(\bar{x}_0, \bar{x}_1, z') from (20) and then using the set of identities in the proof of (33), i.e., equation (S.1) from Suppl. Section B. Similarly,

E_{S_n}[(\hat{\varepsilon}_1^B)^2 \mid \mu] = P(U_1(\bar{x}_0, \bar{x}_1, z) > c,\ U_1(\bar{x}_0, \bar{x}_1, z') > c \mid z \in \Psi_1, z' \in \Psi_1, \mu) \approx \Phi\left(\frac{G_1^{B,R} - c}{\sqrt{D_1^{B,R}}};\ \frac{C_1^{B,R}}{D_1^{B,R}}\right), (54)

where D_1^{B,R}, G_1^{B,R}, and C_1^{B,R} are obtained by exchanging n_0 and n_1, ν_0 and ν_1, m_0 and m_1, and μ_0 and μ_1 in (24), in −G_0^{B,f} from (18), and in (53), respectively.

Since C_0^{B,R} \xrightarrow{K} 0, together with (26) this shows that (52) is asymptotically exact, that is, asymptotically equivalent to the limit of E_{S_n}[(\hat{\varepsilon}_0^B)^2 \mid \mu] obtained in Theorem 3. Similarly, it can be shown that

E_{S_n}[\hat{\varepsilon}_0^B \hat{\varepsilon}_1^B \mid \mu] = P(U_0(\bar{x}_0, \bar{x}_1, z) \le c,\ -U_1(\bar{x}_0, \bar{x}_1, z') < -c \mid z \in \Psi_0, z' \in \Psi_1, \mu) \approx \Phi\left(\frac{-G_0^{B,R} + c}{\sqrt{D_0^{B,R}}}, \frac{G_1^{B,R} - c}{\sqrt{D_1^{B,R}}};\ \frac{C_{01}^{B,R}}{\sqrt{D_0^{B,R} D_1^{B,R}}}\right), (55)

where, after some algebraic manipulations, we obtain

C_{01}^{B,R} = \frac{1}{n_0(1+\beta_0)(1+\beta_1)}\left[\beta_0 \eta_{m_0,\mu_0,\mu_0,\mu_1} - \beta_0\beta_1 \eta_{m_0,\mu_0,m_1,\mu_0} + \beta_1 \eta_{m_1,\mu_1,\mu_1,\mu_0} + \beta_1 \delta_\mu^2 + \delta_\mu^2\right] + \frac{1}{n_1(1+\beta_0)(1+\beta_1)}\left[\beta_1 \eta_{m_1,\mu_1,\mu_1,\mu_0} - \beta_0\beta_1 \eta_{m_1,\mu_1,m_0,\mu_1} + \beta_0 \eta_{m_0,\mu_0,\mu_0,\mu_1} + \beta_0 \delta_\mu^2 + \delta_\mu^2\right] + \frac{p}{n_0 n_1(1+\beta_0)(1+\beta_1)} + \frac{(1-\beta_0)p}{2n_0^2(1+\beta_0)} + \frac{(1-\beta_1)p}{2n_1^2(1+\beta_1)}. (56)

Suppl. Section C gives the proof of (56). Since C_{01}^{B,R} \xrightarrow{K} 0, (55) is asymptotically exact, i.e., (55) becomes equivalent to the result of Theorem 3. We obtain the conditional cross moment similarly:

E_{S_n}[\hat{\varepsilon}_0^B \varepsilon_0 \mid \mu] = P(U_0(\bar{x}_0, \bar{x}_1, z) \le c,\ W(\bar{x}_0, \bar{x}_1, x) \le c \mid z \in \Psi_0, x \in \Pi_0, \mu) \approx \Phi\left(\frac{-E_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu] + c}{\sqrt{\mathrm{Var}_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu]}}, \frac{-E_{S_n,x}[W(\bar{x}_0, \bar{x}_1, x) \mid x \in \Pi_0, \mu] + c}{\sqrt{\mathrm{Var}_{S_n,x}[W(\bar{x}_0, \bar{x}_1, x) \mid x \in \Pi_0, \mu]}};\ \frac{\mathrm{Cov}_{S_n,z,x}[U_0(\bar{x}_0, \bar{x}_1, z), W(\bar{x}_0, \bar{x}_1, x) \mid z \in \Psi_0, x \in \Pi_0, \mu]}{\sqrt{V_{U_0}^C V_W^C}}\right), (57)

where

V_{U_0}^C = \mathrm{Var}_{S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0, \mu], \quad V_W^C = \mathrm{Var}_{S_n,x}[W(\bar{x}_0, \bar{x}_1, x) \mid x \in \Pi_0, \mu], (58)

the superscript "C" denoting conditional variance. Algebraic manipulations like those leading to (53) yield

E_{S_n}[\hat{\varepsilon}_0^B \varepsilon_0 \mid \mu] \approx \Phi\left(\frac{-G_0^{B,R} + c}{\sqrt{D_0^{B,R}}}, \frac{-G_0^R + c}{\sqrt{D_0^R}};\ \frac{C_0^{BT,R}}{\sqrt{D_0^{B,R} D_0^R}}\right), (59)

where

C_0^{BT,R} = \frac{1}{n_1(1+\beta_0)}\left[\delta_\mu^2 + \beta_0 \delta_\mu^2 + \beta_0 \eta_{m_0,\mu_0,\mu_0,\mu_1}\right] - \frac{(1-\beta_0)p}{2n_0^2(1+\beta_0)} + \frac{p}{2n_1^2}, (60)

and G_0^R and D_0^R have been obtained previously in equations (49) and (50) of [16], namely,

G_0^R = E_{S_n,x}[W(\bar{x}_0, \bar{x}_1, x) \mid x \in \Pi_0, \mu] = \frac{1}{2}\left(\delta_\mu^2 + \frac{p}{n_1} - \frac{p}{n_0}\right), \quad D_0^R = \mathrm{Var}_{S_n,x}[W(\bar{x}_0, \bar{x}_1, x) \mid x \in \Pi_0, \mu] = \delta_\mu^2 + \frac{\delta_\mu^2}{n_1} + p\left(\frac{1}{n_0} + \frac{1}{n_1} + \frac{1}{2n_0^2} + \frac{1}{2n_1^2}\right). (61)

Similarly, we can show that

E_{S_n}[\hat{\varepsilon}_1^B \varepsilon_1 \mid \mu] \approx \Phi\left(\frac{G_1^{B,R} - c}{\sqrt{D_1^{B,R}}}, \frac{G_1^R - c}{\sqrt{D_1^R}};\ \frac{C_1^{BT,R}}{\sqrt{D_1^{B,R} D_1^R}}\right), (62)

where D_1^{B,R} and G_1^{B,R} are obtained as in (54), and D_1^R, G_1^R, and C_1^{BT,R} are obtained by exchanging n_0 and n_1 in D_0^R, −G_0^R, and C_0^{BT,R}, respectively. Similarly,

E_{S_n}[\hat{\varepsilon}_0^B \varepsilon_1 \mid \mu] \approx \Phi\left(\frac{-G_0^{B,R} + c}{\sqrt{D_0^{B,R}}}, \frac{G_1^R - c}{\sqrt{D_1^R}};\ \frac{C_{01}^{BT,R}}{\sqrt{D_0^{B,R} D_1^R}}\right), (63)

where

C_{01}^{BT,R} = \frac{1}{n_0(1+\beta_0)}\left[\delta_\mu^2 + \beta_0 \eta_{m_0,\mu_0,\mu_0,\mu_1}\right] + \frac{(1-\beta_0)p}{2n_0^2(1+\beta_0)} - \frac{p}{2n_1^2}, (64)

and

E_{S_n}[\hat{\varepsilon}_1^B \varepsilon_0 \mid \mu] \approx \Phi\left(\frac{G_1^{B,R} - c}{\sqrt{D_1^{B,R}}}, \frac{-G_0^R + c}{\sqrt{D_0^R}};\ \frac{C_{10}^{BT,R}}{\sqrt{D_1^{B,R} D_0^R}}\right), (65)

where C_{10}^{BT,R} is obtained by exchanging n_0 and n_1, ν_0 and ν_1, m_0 and m_1, and μ_0 and μ_1 in C_{01}^{BT,R}.

We see that C_0^{BT,R} \xrightarrow{K} 0, C_1^{BT,R} \xrightarrow{K} 0, and C_{01}^{BT,R} \xrightarrow{K} 0. Therefore, from (26) and the fact that G_0^R \xrightarrow{K} G_0 and D_0^R \xrightarrow{K} D, we see that the expressions (59), (62), and (63) are all asymptotically exact (compare to Theorem 4).

5.2. Unconditional Second and Cross Moments of ε̂_i^B

Similarly to the way (36) was obtained, we can show that

E_{\mu,S_n}[(\hat{\varepsilon}_0^B)^2] = E_{\mu,S_n}[P(U_0(\bar{x}_0, \bar{x}_1, z) \le c \mid \bar{x}_0, \bar{x}_1, z \in \Psi_0, \mu)^2] = E_{\mu,S_n}[P(U_0(\bar{x}_0, \bar{x}_1, z) \le c,\ U_0(\bar{x}_0, \bar{x}_1, z') \le c \mid \bar{x}_0, \bar{x}_1, z \in \Psi_0, z' \in \Psi_0, \mu)] = P(U_0(\bar{x}_0, \bar{x}_1, z) \le c,\ U_0(\bar{x}_0, \bar{x}_1, z') \le c \mid z \in \Psi_0, z' \in \Psi_0). (66)

Similarly to the proofs of Theorems 3 and 4, we obtain the following theorems.

Theorem 5

Consider the sequence of Gaussian discrimination problems in (11). For i, j = 0, 1,

\lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[\hat{\varepsilon}_i^B \hat{\varepsilon}_j^B] = \Phi\left((-1)^i \frac{-H_i + c}{\sqrt{F}}\right) \Phi\left((-1)^j \frac{-H_j + c}{\sqrt{F}}\right), (67)

so that

\lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[(\hat{\varepsilon}^B)^2] = \left[\alpha_0 \Phi\left(\frac{-H_0 + c}{\sqrt{F}}\right) + \alpha_1 \Phi\left(\frac{H_1 - c}{\sqrt{F}}\right)\right]^2, (68)

where H0, H1, and F are defined in (29).

Theorem 6

Consider the sequence of Gaussian discrimination problems in (11). For i, j = 0, 1,

\lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[\hat{\varepsilon}_i^B \varepsilon_j] = \lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[\hat{\varepsilon}_i^B \hat{\varepsilon}_j^B] = \lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[\varepsilon_i \varepsilon_j], (69)

so that

\lim_{\mathrm{b.k.a.c.}} E_{\mu,S_n}[\hat{\varepsilon}^B \varepsilon] = \sum_{i=0}^{1} \sum_{j=0}^{1} \alpha_i \alpha_j \Phi\left((-1)^i \frac{-H_i + c}{\sqrt{F}}\right) \Phi\left((-1)^j \frac{-H_j + c}{\sqrt{F}}\right), (70)

where H0, H1, and F are defined in (29).

Theorems 5 and 6 suggest the finite-sample approximation:

E_{\mu,S_n}[\hat{\varepsilon}_0^B \hat{\varepsilon}_0^B] \approx E_{\mu,S_n}[\hat{\varepsilon}_0^B \varepsilon_0] \approx E_{\mu,S_n}[\varepsilon_0 \varepsilon_0] \approx \left[\Phi\left(\frac{-\frac{1}{2}\left(\Delta_m^2 + \frac{p}{n_1} - \frac{p}{n_0} + \frac{p}{\nu_0} + \frac{p}{\nu_1}\right) + c}{\sqrt{\Delta_m^2 + \frac{p}{n_0} + \frac{p}{n_1} + \frac{p}{\nu_0} + \frac{p}{\nu_1}}}\right)\right]^2. (71)
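Since (71) gives one finite-sample approximation for all three unconditional second moments, a single routine suffices; a sketch (illustrative names; it is simply the square of the first-moment approximation (30)):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def uncond_second_moment_approx(Dm2, p, n0, n1, nu0, nu1, c=0.0):
    """Finite-sample approximation (71), shared by the unconditional
    second moments of eps_hat_0^B, its cross moment with eps_0,
    and the second moment of eps_0."""
    h = 0.5 * (Dm2 + p / n1 - p / n0 + p / nu0 + p / nu1)  # H_0^R of (31)
    f = Dm2 + p / n0 + p / n1 + p / nu0 + p / nu1
    return phi((-h + c) / math.sqrt(f)) ** 2
```

That the three moments share one approximation reflects Theorems 5 and 6: asymptotically, the variances and covariance deviations vanish.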

A consequence of Theorems 2, 5, and 6 is that

\lim_{\mathrm{b.k.a.c.}} \mathrm{Var}^d_{\mu,S_n}[\hat{\varepsilon}^B] = \lim_{\mathrm{b.k.a.c.}} \mathrm{Bias}_{U,n}[\hat{\varepsilon}^B] = \lim_{\mathrm{b.k.a.c.}} \mathrm{Var}_{\mu,S_n}(\hat{\varepsilon}^B) = \lim_{\mathrm{b.k.a.c.}} \mathrm{Var}_{\mu,S_n}(\varepsilon) = \lim_{\mathrm{b.k.a.c.}} \mathrm{Cov}_{\mu,S_n}(\varepsilon, \hat{\varepsilon}^B) = \lim_{\mathrm{b.k.a.c.}} \mathrm{RMS}_{\mu,S_n}[\hat{\varepsilon}^B] = 0. (72)

In [30], it was shown that ε̂B is strongly consistent, meaning that ε̂B(Sn) − ε(Sn) → 0 almost surely as n → ∞ under rather general conditions, in particular, for the Gaussian and discrete models considered in that paper. It was also shown that MSEμ[ε̂B|Sn] → 0 almost surely as n → ∞ under similar conditions. Here, we have shown that MSE_{μ,S_n}[ε̂^B] \xrightarrow{K} 0 under the conditions stated in (12). Some researchers refer to conditions of double asymptoticity as "comparable" dimensionality and sample size [20, 22]. Therefore, one may interpret MSE_{μ,S_n}[ε̂^B] \xrightarrow{K} 0 as meaning that MSE_{μ,S_n}[ε̂^B] is close to zero for large and comparable dimensionality, sample size, and certainty parameter.

We now consider Raudys-type approximations. Analogous to the approximation used in (51), we obtain the unconditional second moment of ε̂_0^B:

E_{\mu,S_n}[(\hat{\varepsilon}_0^B)^2] \approx \Phi\left(\frac{-E_{\mu,S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0] + c}{\sqrt{\mathrm{Var}_{\mu,S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0]}};\ \frac{\mathrm{Cov}_{\mu,S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z), U_0(\bar{x}_0, \bar{x}_1, z') \mid z \in \Psi_0, z' \in \Psi_0]}{\mathrm{Var}_{\mu,S_n,z}[U_0(\bar{x}_0, \bar{x}_1, z) \mid z \in \Psi_0]}\right). (73)

Using (73) we get

\[
E_{\mu,S_n}\!\left[(\hat{\varepsilon}_0^B)^2\right] \approx \Phi\!\left(\frac{-H_0^R+c}{\sqrt{F_0^R}};\; \frac{K_0^{B,R}}{F_0^R}\right), \qquad (74)
\]

with H0R and F0R given in (31) and (34), respectively, and

\[
\begin{aligned}
K_0^{B,R} &= \operatorname{Cov}_{\mu,S_n,z,z'}\!\left[U_0(\bar{x}_0,\bar{x}_1,z),\,U_0(\bar{x}_0,\bar{x}_1,z')\mid z\in\Psi_0,\,z'\in\Psi_0\right]\\
&= \left(\frac{1}{n_0(1+\beta_0)^2}+\frac{1}{n_1}+\frac{1}{\nu_0(1+\beta_0)^2}+\frac{1}{\nu_1}\right)\Delta_m^2 + \frac{p}{2n_0^2} + \frac{p}{2\nu_0^2} - \frac{p}{n_0\nu_0} + \frac{p}{n_1\nu_1} + \frac{p}{2n_1^2} + \frac{p}{2\nu_1^2}\\
&\quad + \frac{p}{n_0 n_1 (1+\beta_0)^2} + \frac{p}{n_0\nu_1(1+\beta_0)^2} + \frac{p}{n_1\nu_0(1+\beta_0)^2}. \qquad (75)
\end{aligned}
\]

Suppl. Section D presents the proof of (75). In a similar way,

\[
E_{\mu,S_n}\!\left[(\hat{\varepsilon}_1^B)^2\right] \approx \Phi\!\left(\frac{H_1^R-c}{\sqrt{F_1^R}};\; \frac{K_1^{B,R}}{F_1^R}\right), \qquad (76)
\]

where $F_1^R$, $H_1^R$, and $K_1^{B,R}$ are obtained by exchanging $n_0$ and $n_1$, $\nu_0$ and $\nu_1$, $m_0$ and $m_1$, and $\mu_0$ and $\mu_1$ in (34), in $-H_0^R$ from (31), and in (75), respectively.

Since $K_0^{B,R} \xrightarrow{K} 0$, together with (35), (74) is asymptotically exact. We similarly obtain

\[
E_{\mu,S_n}\!\left[\hat{\varepsilon}_0^B \hat{\varepsilon}_1^B\right] \approx \Phi\!\left(\frac{-H_0^R+c}{\sqrt{F_0^R}},\; \frac{H_1^R-c}{\sqrt{F_1^R}};\; \frac{K_{01}^{B,R}}{\sqrt{F_0^R F_1^R}}\right), \qquad (77)
\]

where

\[
\begin{aligned}
K_{01}^{B,R} &= \frac{p}{(n_0+\nu_0)(n_1+\nu_1)} + \frac{(n_0-\nu_0)\,p}{2n_0^2(n_0+\nu_0)} + \frac{(n_1-\nu_1)\,p}{2n_1^2(n_1+\nu_1)} + \frac{n_0 n_1\,p}{\nu_0\nu_1(n_0+\nu_0)(n_1+\nu_1)}\\
&\quad + \frac{(n_0-\nu_0)\,p}{2\nu_0^2(n_0+\nu_0)} + \frac{(n_1-\nu_1)\,p}{2\nu_1^2(n_1+\nu_1)} + \frac{1}{n_0+\nu_0}\left(1+\frac{n_0}{n_1+\nu_1}-\frac{\nu_0}{n_0}\right)\frac{p}{\nu_0}\\
&\quad + \frac{1}{n_1+\nu_1}\left(1+\frac{n_1}{n_0+\nu_0}-\frac{\nu_1}{n_1}\right)\frac{p}{\nu_1} + \left(\frac{1}{\nu_0}+\frac{1}{\nu_1}\right)\Delta_m^2. \qquad (78)
\end{aligned}
\]

Suppl. Section E presents the proof of (78). Since $K_{01}^{B,R} \xrightarrow{K} 0$, (77) is asymptotically exact (compare to Theorem 5). Similar to (57) and (59), where we characterized the conditional cross moments, we can obtain the unconditional cross moments as follows:

\[
\begin{aligned}
E_{\mu,S_n}\!\left[\hat{\varepsilon}_0^B \varepsilon_0\right] &= P\!\left(U_0(\bar{x}_0,\bar{x}_1,z)\le c,\; W(\bar{x}_0,\bar{x}_1,x)\le c \mid z\in\Psi_0,\, x\in\Pi_0\right)\\
&\approx \Phi\!\left(\frac{-E_{\mu,S_n,z}\!\left[U_0(\bar{x}_0,\bar{x}_1,z)\mid z\in\Psi_0\right]+c}{\sqrt{\operatorname{Var}_{\mu,S_n,z}\!\left[U_0(\bar{x}_0,\bar{x}_1,z)\mid z\in\Psi_0\right]}},\; \frac{-E_{\mu,S_n,x}\!\left[W(\bar{x}_0,\bar{x}_1,x)\mid x\in\Pi_0\right]+c}{\sqrt{\operatorname{Var}_{\mu,S_n,x}\!\left[W(\bar{x}_0,\bar{x}_1,x)\mid x\in\Pi_0\right]}};\; \frac{\operatorname{Cov}_{\mu,S_n,z,x}\!\left[U_0(\bar{x}_0,\bar{x}_1,z),\,W(\bar{x}_0,\bar{x}_1,x)\mid z\in\Psi_0,\, x\in\Pi_0\right]}{\sqrt{V_{U_0}^U V_{W}^U}}\right)\\
&= \Phi\!\left(\frac{-H_0^R+c}{\sqrt{F_0^R}};\; \frac{K_0^{BT,R}}{F_0^R}\right), \qquad (79)
\end{aligned}
\]

where

\[
V_{U_0}^U = \operatorname{Var}_{\mu,S_n,z}\!\left[U_0(\bar{x}_0,\bar{x}_1,z)\mid z\in\Psi_0\right], \qquad V_{W}^U = \operatorname{Var}_{\mu,S_n,x}\!\left[W(\bar{x}_0,\bar{x}_1,x)\mid x\in\Pi_0\right], \qquad (80)
\]

with the superscript “U” denoting unconditional variance; $H_0^R$ and $F_0^R$ are given in (31) and (34), respectively, and

\[
\begin{aligned}
K_0^{BT,R} &= \left(\frac{n_0}{\nu_0(n_0+\nu_0)}+\frac{1}{n_1}+\frac{1}{\nu_1}\right)\Delta_m^2 + \frac{p}{2n_1^2} + \frac{p}{2\nu_1^2} + \frac{p}{n_1\nu_1} + \frac{n_0\,p}{n_1\nu_0(n_0+\nu_0)}\\
&\quad - \frac{(n_0-\nu_0)\,p}{2n_0^2(n_0+\nu_0)} + \frac{(n_0-\nu_0)\,p}{2\nu_0^2(n_0+\nu_0)} + \frac{n_0\,p}{\nu_0\nu_1(n_0+\nu_0)}. \qquad (81)
\end{aligned}
\]

The proof of (81) is presented in Suppl. Section F. Similarly,

\[
E_{\mu,S_n}\!\left[\hat{\varepsilon}_0^B \varepsilon_1\right] \approx \Phi\!\left(\frac{-H_0^R+c}{\sqrt{F_0^R}},\; \frac{H_1^R-c}{\sqrt{F_1^R}};\; \frac{K_{01}^{BT,R}}{\sqrt{F_0^R F_1^R}}\right), \qquad (82)
\]

where

\[
K_{01}^{BT,R} = \left(\frac{1}{\nu_0}+\frac{1}{\nu_1}\right)\Delta_m^2 + \frac{p}{2\nu_0^2} + \frac{p}{2\nu_1^2} + \frac{p}{\nu_0\nu_1} - \frac{p}{2n_0^2} - \frac{p}{2n_1^2}. \qquad (83)
\]

See Suppl. Section G for the proof of (83). Since $K_0^{BT,R} \xrightarrow{K} 0$ and $K_{01}^{BT,R} \xrightarrow{K} 0$, together with (35), (79) and (82) are asymptotically exact (compare to Theorem 6).

5.3. Conditional and Unconditional Second Moment of εi

To complete the derivations and obtain the unconditional RMS of estimation, we need the conditional and unconditional second moments of the true error. The conditional second moment of the true error follows from results in [16], which for completeness are reproduced here:

\[
E_{S_n}\!\left[\varepsilon_0^2 \mid \mu\right] \approx \Phi\!\left(\frac{-G_0^R+c}{\sqrt{D_0^R}};\; \frac{C_0^{T,R}}{D_0^R}\right), \qquad (84)
\]

with G0R and D0R defined in (61),

\[
E_{S_n}\!\left[\varepsilon_1^2 \mid \mu\right] \approx \Phi\!\left(\frac{G_1^R-c}{\sqrt{D_1^R}};\; \frac{C_1^{T,R}}{D_1^R}\right), \qquad (85)
\]

and

\[
E_{S_n}\!\left[\varepsilon_0\varepsilon_1 \mid \mu\right] \approx \Phi\!\left(\frac{-G_0^R+c}{\sqrt{D_0^R}},\; \frac{G_1^R-c}{\sqrt{D_1^R}};\; \frac{C_{01}^{T,R}}{\sqrt{D_0^R D_1^R}}\right), \qquad (86)
\]

where

\[
C_{01}^{T,R} = -\frac{p}{2n_0^2} - \frac{p}{2n_1^2}. \qquad (87)
\]

Similar to obtaining (79), we can show that

\[
E_{\mu,S_n}\!\left[\varepsilon_0^2\right] \approx \Phi\!\left(\frac{-H_0^R+c}{\sqrt{F_0^R}};\; \frac{K_0^{T,R}}{F_0^R}\right), \qquad (88)
\]

with H0R and F0R given in (31) and (34), respectively, and

\[
K_0^{T,R} = \left(\frac{1}{\nu_0}+\frac{1}{\nu_1}+\frac{1}{n_1}\right)\Delta_m^2 + \frac{p}{2\nu_0^2} + \frac{p}{2\nu_1^2} + \frac{p}{\nu_0\nu_1} + \frac{p}{2n_0^2} + \frac{p}{2n_1^2} + \frac{p}{n_1\nu_0} + \frac{p}{n_1\nu_1}. \qquad (89)
\]

Similarly,

\[
E_{\mu,S_n}\!\left[\varepsilon_1^2\right] \approx \Phi\!\left(\frac{H_1^R-c}{\sqrt{F_1^R}};\; \frac{K_1^{T,R}}{F_1^R}\right), \qquad (90)
\]

with K1T,R obtained from K0T,R by exchanging n0 and n1, and ν0 and ν1. Similarly,

\[
E_{\mu,S_n}\!\left[\varepsilon_0\varepsilon_1\right] \approx \Phi\!\left(\frac{-H_0^R+c}{\sqrt{F_0^R}},\; \frac{H_1^R-c}{\sqrt{F_1^R}};\; \frac{K_{01}^{T,R}}{\sqrt{F_0^R F_1^R}}\right), \qquad (91)
\]

with H0R and F0R given in (31) and (34), respectively, and

\[
K_{01}^{T,R} = \left(\frac{1}{\nu_0}+\frac{1}{\nu_1}\right)\Delta_m^2 + \frac{p}{2\nu_0^2} + \frac{p}{2\nu_1^2} + \frac{p}{\nu_0\nu_1} - \frac{p}{2n_0^2} - \frac{p}{2n_1^2}. \qquad (92)
\]

6. Monte Carlo Comparisons

In this section we compare the asymptotically exact finite-sample approximations of the first, second, and mixed moments to Monte Carlo estimates in the conditional and unconditional scenarios. The following steps are used to compute the Monte Carlo estimates:

  1. Define the hyper-parameters of the Gaussian model: m0, m1, ν0, ν1, and Σ. We let Σ have diagonal elements 1 and off-diagonal elements 0.1. The means μ0 and μ1 of the class-conditional densities are fixed by choosing δμ2 ( δμ2=4, which corresponds to Bayes error 0.1586) for the given Σ. The priors, π0 and π1, are then defined by a small deviation from μ0 and μ1, that is, by setting mi = μi + aμi, where a = 0.01.

  2. (unconditional case): Using π0 and π1, generate random realizations of μ0 and μ1.

  3. (conditional case): Use the values of μ0 and μ1 obtained from Step 1.

  4. For fixed Π0 and Π1, generate a set of training data of size ni for class i = 0, 1.

  5. Using the training sample, design the LDA classifier, ψn, using (2).

  6. Compute the Bayesian MMSE error estimator, ε̂B, using (5) and (7).

  7. Knowing μ0 and μ1, find the true error of ψn using (3).

  8. Repeat Steps 4 through 7, T1 times.

  9. Repeat Steps 2 through 8, T2 times.

In the unconditional case, we set T1 = T2 = 300, generating 90,000 sample sets in total. For the conditional case, we set T1 = 10,000 and T2 = 1, the latter because μ0 and μ1 are held fixed (Step 3).
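
In outline, the unconditional procedure can be sketched as follows. The hyper-parameter values, the threshold c = 0, the equal weighting of the two class errors, and the closed form used for ε̂B (the posterior expectation of the true LDA error for known Σ, in the spirit of (93)–(103), with the predictive variance inflated by the posterior certainty ni + νi) are illustrative assumptions rather than the paper's exact settings; the sample means are drawn directly, since they are sufficient for LDA with known Σ.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # standard normal CDF

# Illustrative settings (not the paper's exact ones): p, n_i, nu_i, prior
# means m_i, threshold c = 0, and equal weights for the two class errors.
p, n0, n1, nu0, nu1 = 5, 50, 50, 50, 50
Sigma = 0.9 * np.eye(p) + 0.1 * np.ones((p, p))   # diagonal 1, off-diagonal 0.1
Si = np.linalg.inv(Sigma)
m0 = np.zeros(p); m0[0] = 1.0
m1 = -m0
c = 0.0

def one_rep(mu0, mu1):
    """Steps 4-7 for one training set; for LDA with known Sigma the sample
    means are sufficient, so they are drawn directly."""
    x0 = rng.multivariate_normal(mu0, Sigma / n0)
    x1 = rng.multivariate_normal(mu1, Sigma / n1)
    d = Si @ (x0 - x1)
    mid = 0.5 * (x0 + x1)
    sd = math.sqrt((x0 - x1) @ d)
    # True errors of the discriminant W(x) = (x - mid)' Si (x0 - x1):
    eps0 = Phi((c - (mu0 - mid) @ d) / sd)
    eps1 = 1.0 - Phi((c - (mu1 - mid) @ d) / sd)
    # Bayesian MMSE estimate: posterior expectation of the true error, with
    # posterior means (nu_i m_i + n_i xbar_i)/(nu_i + n_i) as in (94).
    g0 = ((nu0 * m0 + n0 * x0) / (nu0 + n0) - mid) @ d
    g1 = ((nu1 * m1 + n1 * x1) / (nu1 + n1) - mid) @ d
    eb0 = Phi((c - g0) / (sd * math.sqrt((n0 + nu0 + 1) / (n0 + nu0))))
    eb1 = 1.0 - Phi((c - g1) / (sd * math.sqrt((n1 + nu1 + 1) / (n1 + nu1))))
    return 0.5 * (eb0 + eb1), 0.5 * (eps0 + eps1)

ests, trues = [], []
for _ in range(300):                          # Step 2: draws of (mu0, mu1)
    mu0 = rng.multivariate_normal(m0, Sigma / nu0)
    mu1 = rng.multivariate_normal(m1, Sigma / nu1)
    for _ in range(10):                       # Steps 4-7, repeated
        eb, e = one_rep(mu0, mu1)
        ests.append(eb); trues.append(e)

ests, trues = np.array(ests), np.array(trues)
rms = math.sqrt(float(np.mean((ests - trues) ** 2)))
print(ests.mean(), trues.mean(), rms)
```

With a correctly specified prior, the averaged estimated and true errors should nearly coincide and the RMS of the deviation should be small, mirroring the unconditional unbiasedness visible in Figure 1(a).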

Figure 1 treats Raudys-type finite-sample approximations, including the RMS. Figure 1(a) compares the first moments obtained from equations (22) and (33). It presents ESn[ε̂B|μ] and Eμ,Sn[ε̂B] computed by Monte Carlo estimation and the analytical expressions. The label “FSA BE Uncond” identifies the curve of Eμ,Sn[ε̂B], the unconditional expected estimated error obtained from the finite-sample approximation, which according to the basic theory is equal to Eμ,Sn[ε]. The labels “FSA BE Cond” and “FSA TE Cond” show the curves of ESn[ε̂B|μ], the conditional expected estimated error, and ESn[ε|μ], the conditional expected true error, respectively, both obtained using the analytic approximations. The curves obtained from Monte Carlo estimation are identified by “MC” labels. The analytic curves in Figure 1(a) show substantial agreement with the Monte Carlo approximation.

Figure 1.


Comparison of conditional and unconditional performance metrics of ε̂B, computed using the asymptotically exact finite-sample approximations, with Monte Carlo estimates, as a function of sample size. (a) Expectations; the asymptotic unconditional expectation of ε is not plotted separately because ε̂B is unconditionally unbiased. (b) Second and mixed moments. (c) Conditional variance of deviation from the true error, i.e., Var^d_Sn[ε̂B|μ], and unconditional variance of deviation, i.e., Var^d_μ,Sn[ε̂B]. (d) Conditional RMS of estimation, i.e., RMSSn[ε̂B|μ], and unconditional RMS of estimation, i.e., RMSμ,Sn[ε̂B]. Panels (a)–(d) correspond to the same scenario, in which the dimension p is 15 and 100, ν0 = ν1 = 50, mi = μi + 0.01μi with μ0 = −μ1, and Bayes error = 0.1586.

To obtain the second moments, Var^d[ε̂B], and RMS[ε̂B], as defined in (1), we use equations (52), (54), (55), (59), (63), (84), (85), and (86) for the conditional case and (74), (76), (77), (79), (82), (88), (90), and (91) for the unconditional case. Figures 1(b), 1(c), and 1(d) compare the Monte Carlo estimates to the finite-sample approximations of the second/mixed moments, Var^d[ε̂B], and RMS[ε̂B], respectively. The labels are interpreted as in Figure 1(a), but for the second/mixed moments; for example, “MC BE×TE Uncond” identifies the MC curve of Eμ,Sn[ε̂Bε]. Figures 1(b), 1(c), and 1(d) show that the finite-sample approximations of the conditional and unconditional second/mixed moments, variance of deviation, and RMS are quite accurate (close to the MC values).
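
The bookkeeping that turns these moments into Var^d[ε̂B] and RMS[ε̂B] is elementary; a minimal sketch, with hypothetical moment values as inputs, is:

```python
import math

def deviation_variance_and_rms(m_eb, m_e, m_eb2, m_ebe, m_e2):
    """Combine moments into Var^d and RMS:
    m_eb  = E[eps_hat_B],   m_e   = E[eps],
    m_eb2 = E[eps_hat_B^2], m_ebe = E[eps_hat_B * eps], m_e2 = E[eps^2]."""
    mse = m_eb2 - 2.0 * m_ebe + m_e2            # E[(eps_hat_B - eps)^2] = RMS^2
    var_d = mse - (m_eb - m_e) ** 2             # variance of the deviation
    return var_d, math.sqrt(max(mse, 0.0))

# Hypothetical moment values, as might be read off the curves in Figure 1(b):
var_d, rms = deviation_variance_and_rms(0.2, 0.2, 0.05, 0.04, 0.04)
print(var_d, rms)
```
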

While Figure 1 shows the accuracy of the Raudys-type finite-sample approximations, figures in the Supplementary Material compare the finite-sample approximations obtained directly from Theorems 1–6, i.e., equations (29), (57), (70), (73), (76), (102), and (103), to Monte Carlo estimates.

7. Examination of the Raudys-type RMS Approximation

Equations (18), (24), (53), (56), and (63) show that RMSSn[ε̂B|μ] is a function of 14 variables: p, n0, n1, β0, β1, δμ2, ηm0,μ1, ηm0,μ0, ηm1,μ0, ηm1,μ1, ηm0,μ0,μ0,μ1, ηm0,μ0,m1,μ0, ηm1,μ1,m0,μ1, and ηm1,μ1,μ1,μ0. Studying a function of this many variables is complicated, especially because restricting some variables can constrain others, so we make several simplifying assumptions. We let n0 = n1 = n/2 and β0 = β1 = β, and assume very informative priors in which m0 = μ0 and m1 = μ1. Under these assumptions, RMSSn[ε̂B|μ] is a function only of p, n, β, and δμ2. We let p ∈ [4, 200], n ∈ [40, 200], β ∈ {0.5, 1, 2}, and δμ2 ∈ {4, 16}, for which the Bayes error is 0.158 or 0.022, respectively. Figure 2(a) plots RMSSn[ε̂B|μ] as a function of these variables. For a smaller distance between the classes, that is, smaller δμ2 (larger Bayes error), the RMS is larger; as the distance increases, the RMS decreases. Furthermore, when very informative priors are available, i.e., m0 = μ0 and m1 = μ1, relying more on the data can be detrimental to the RMS: the plots in the top row (β = 0.5) exhibit larger RMS than those in the bottom row (β = 2).

Figure 2.


(a) The conditional RMS of estimation, RMSSn[ε̂B|μ], as a function of p < 200 and n < 200. From top to bottom, the rows correspond to β = 0.5, 1, 2, respectively; from left to right, the columns correspond to δμ2 = 4, 16, respectively. (b) The unconditional RMS of estimation, RMSμ,Sn[ε̂B], as a function of p < 1000 and n < 2000. From top to bottom, the rows correspond to β = 0.5, 1, 2, respectively; from left to right, the columns correspond to Δm2 = 4, 16, respectively.

Using the RMS expressions enables finding the sample size necessary to ensure a given RMSSn[ε̂B|μ], by the same methodology as developed for the resubstitution and leave-one-out error estimators in [16, 26]. The plots in Figure 2(a) (as well as other unshown plots) show that, with m0 = μ0 and m1 = μ1, the RMS is a decreasing function of δμ2. Therefore, a sample size guaranteeing that $\max_{\delta_\mu^2>0}\mathrm{RMS}_{S_n}[\hat{\varepsilon}^B\mid\mu]=\lim_{\delta_\mu^2\to 0}\mathrm{RMS}_{S_n}[\hat{\varepsilon}^B\mid\mu]$ is less than a predetermined value τ ensures that RMSSn[ε̂B|μ] < τ for any δμ2. Let $\kappa_{\hat{\varepsilon}}(n,p,\beta)=\lim_{\delta_\mu^2\to 0}\mathrm{RMS}_{S_n}[\hat{\varepsilon}^B\mid\mu]$. From equations (52), (54), (55), (59), (63), (84), (85), and (86), we can compute κε̂(n, p, β) and increase n until κε̂(n, p, β) < τ. Table 1 (β = 1: Conditional) shows the minimum number of sample points needed to guarantee a predetermined conditional RMS over the whole range of δμ2 (other β are shown in the Supplementary Material). A larger dimensionality, a smaller τ, and a smaller β result in a larger necessary sample size for having κε̂(n, p, β) < τ.
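
The search itself is straightforward. In the sketch below, toy_kappa is a placeholder bound standing in for the paper's κε̂(n, p, β); only the increase-n-until-below-τ logic is illustrated.

```python
import math

def min_sample_size(kappa, tau, n_start=4, n_max=10_000):
    """Smallest even n with kappa(n) < tau; n0 = n1 = n/2, hence even steps."""
    n = n_start
    while n <= n_max:
        if kappa(n) < tau:
            return n
        n += 2
    raise ValueError("no n <= n_max achieves the requested RMS bound")

toy_kappa = lambda n: 1.0 / math.sqrt(n)   # placeholder decay, not the paper's bound
print(min_sample_size(toy_kappa, 0.1))
```
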

Table 1.

Minimum sample size, n (n0 = n1 = n/2), to satisfy κε̂(n, p, β) < τ.

τ p = 2 p = 4 p = 8 p = 16 p = 32 p = 64 p = 128
β = 1: Conditional
0.1 14 22 38 70 132 256 506
0.09 18 28 48 86 164 318 626
0.08 24 36 60 110 208 404 796
0.07 32 48 80 144 272 530 1044
0.06 44 64 108 196 372 722 1424
0.05 62 94 158 284 538 1044 2056

β = 1: Unconditional
0.025 108 108 106 102 92 72 2
0.02 172 170 168 164 156 138 78
0.015 308 306 304 300 292 274 236
0.01 694 694 690 686 678 662 628
0.005 2790 2786 2782 2776 2768 2752 2720

Turning to the unconditional RMS, equations (34), (75), (78), (83), (89), and (92) show that RMSμ,Sn[ε̂B] is a function of 6 variables: p, n0, n1, ν0, ν1, and Δm2. Figure 2(b) plots RMSμ,Sn[ε̂B] as a function of p, n, β, and Δm2, assuming n0 = n1 = n/2 and β0 = β1 = β. Note that setting n and β fixes ν0 = ν1 = ν in the corresponding expressions for RMSμ,Sn[ε̂B]. Due to the complex shape of RMSμ,Sn[ε̂B], we consider a large range of n and p. The plots show that a smaller distance between the prior distributions (smaller Δm2) corresponds to a larger unconditional RMS of estimation, and that the RMS decreases as Δm2 increases. Furthermore, Figure 2(b) (and other unshown plots) demonstrate an interesting phenomenon in the shape of the RMS: for each p, the RMS first increases as a function of sample size and then decreases. With p fixed, this “peaking phenomenon” occurs at larger n for smaller β; with β fixed, it occurs at larger n for larger p. These observations are presented in Figure 3, which shows curves obtained by cutting the 3D plots in the left column of Fig. 2(b) at a few dimensions. For p = 900 and β = 2, adding sample points at first increases the RMS abruptly, up to a maximum at n = 140, after which the RMS starts to decrease.

Figure 3.


The peaking phenomenon of RMSμ,Sn[ε̂B] as a function of sample size. These plots are obtained by cutting the 3D plots in the left column of Fig. 2(b) at a few dimensions (with Δm2 = 4). From top to bottom, the rows correspond to β = 0.5, 1, 2, respectively. The solid black curves show RMSμ,Sn[ε̂B] computed from the analytical results; the red dashed curves show the same quantity computed by Monte Carlo simulation. Due to the computational burden of the Monte Carlo studies, the simulations are limited to n < 500 and p = 10, 70.

One may use the unconditional scenario to determine the minimum sample size necessary for a desired RMSμ,Sn[ε̂B]. This is the more practical approach because, in practice, one does not know μ. Since the unconditional RMS is decreasing in Δm2, we use the previous technique to find the minimum sample size that guarantees a desired unconditional RMS. Table 1 (β = 1: Unconditional) shows the minimum sample size that guarantees $\max_{\Delta_m^2>0}\mathrm{RMS}_{\mu,S_n}[\hat{\varepsilon}^B]=\lim_{\Delta_m^2\to 0}\mathrm{RMS}_{\mu,S_n}[\hat{\varepsilon}^B]$ is less than a predetermined value τ, i.e., ensures that RMSμ,Sn[ε̂B] < τ for any Δm2 (other β are shown in the Supplementary Material).

To examine the accuracy of the required sample size satisfying κε̂(n, p, β) < τ in both the conditional and unconditional settings, we have performed a set of experiments (see Supplementary Material). The results confirm the efficacy of Table 1 in determining the minimum sample size required to ensure that the RMS is less than a predetermined value τ.

8. Conclusion

Using realistic assumptions about sample size and dimensionality, standard statistical techniques are generally incapable of estimating the error of a classifier in small-sample classification. Bayesian MMSE error estimation facilitates more accurate estimation by incorporating prior knowledge. In this paper, we have characterized two sets of performance metrics for Bayesian MMSE error estimation in the case of LDA in a Gaussian model: (1) the first, second, and cross moments of the estimated and actual errors conditioned on a fixed feature-label distribution, which in turn give us the conditional RMSSn[ε̂B|θ]; and (2) the unconditional moments and, therefore, the unconditional RMS, RMSθ,Sn[ε̂B]. We set up a series of conditions, called the Bayesian-Kolmogorov asymptotic conditions, that allow us to characterize the performance metrics of Bayesian MMSE error estimation in an asymptotic sense. These conditions are based on increasing n, p, and certainty parameter ν, with arbitrary constant limiting ratios between n and p and between n and ν. To our knowledge, they permit, for the first time, the application of Kolmogorov-type asymptotics in a Bayesian setting. The asymptotic expressions derived in this paper lead directly to finite-sample approximations of the performance metrics. Improved finite-sample accuracy is achieved via the newly proposed Raudys-type approximations, and the asymptotic theory is used to prove that these approximations are asymptotically exact under the Bayesian-Kolmogorov asymptotic conditions. Using the derived analytical expressions, we have examined the performance of the Bayesian MMSE error estimator in relation to feature-label distributions, prior knowledge, sample size, and dimensionality, and we have used the results to determine the minimum sample size guaranteeing a desired level of error estimation accuracy.

As noted in the Introduction, a natural next step in error estimation theory is to remove the known-covariance condition, but as also noted, this may prove to be difficult.

Supplementary Material


Acknowledgments

This work was partially supported by NIH grant 2R25CA090301 (Nutrition, Biostatistics, and Bioinformatics) from the National Cancer Institute.

Appendix

Proof of Theorem 1

We explain this proof in detail as some steps will be used in later proofs. Let

\[
\hat{G}_0^B = \left(m_0 - \frac{\bar{x}_0+\bar{x}_1}{2}\right)^T \Sigma^{-1}(\bar{x}_0-\bar{x}_1), \qquad (93)
\]

where m0 is defined in (8). Then

\[
\hat{G}_0^B = \frac{\nu_0 m_0^T}{n_0+\nu_0}\,\Sigma^{-1}(\bar{x}_0-\bar{x}_1) + \frac{n_0-\nu_0}{2(n_0+\nu_0)}\left(\bar{x}_0^T\Sigma^{-1}\bar{x}_0 - \bar{x}_0^T\Sigma^{-1}\bar{x}_1\right) + \frac{1}{2}\left(\bar{x}_1^T\Sigma^{-1}\bar{x}_1 - \bar{x}_0^T\Sigma^{-1}\bar{x}_1\right). \qquad (94)
\]

For i, j = 0, 1 and i ≠ j, define the following random variables:

\[
y_i = m_i^T\Sigma^{-1}(\bar{x}_0-\bar{x}_1), \qquad z_i = \bar{x}_i^T\Sigma^{-1}\bar{x}_i, \qquad z_{ij} = \bar{x}_i^T\Sigma^{-1}\bar{x}_j. \qquad (95)
\]

The variance of $y_i$ given μ does not depend on μ. Therefore, under the Bayesian-Kolmogorov conditions stated in (10), the limits $\overline{m_i^T\Sigma^{-1}\mu_j}$ and $\overline{\mu_i^T\Sigma^{-1}\mu_j}$ do not appear; only $\overline{m_i^T\Sigma^{-1}m_i}$ matters, and the variance vanishes in the limit as follows:

\[
\operatorname{Var}_{S_n}[y_i\mid\mu] = m_i^T\!\left(\frac{\Sigma^{-1}}{n_0}+\frac{\Sigma^{-1}}{n_1}\right)\! m_i \;\xrightarrow{K}\; \lim_{n_0\to\infty}\frac{\overline{m_i^T\Sigma^{-1}m_i}}{n_0} + \lim_{n_1\to\infty}\frac{\overline{m_i^T\Sigma^{-1}m_i}}{n_1} = 0. \qquad (96)
\]

To find the variances of $z_i$ and $z_{ij}$, we first write them as quadratic forms and then use the results of [34] on the variance of quadratic functions of Gaussian random variables. Specifically, from [34], for y ~ N(μ′, Σ) and A a symmetric positive definite matrix, Var[yᵀAy] = 2 tr((AΣ)²) + 4 μ′ᵀAΣAμ′, with tr being the trace operator. Therefore, after some algebraic manipulations, we obtain

\[
\begin{aligned}
\operatorname{Var}_{S_n}[z_i\mid\mu] &= \frac{2p}{n_i^2} + \frac{4\,\mu_i^T\Sigma^{-1}\mu_i}{n_i} \;\xrightarrow{K}\; 2\lim_{n_i\to\infty}\frac{J_i}{n_i} + 4\lim_{n_i\to\infty}\frac{\overline{\mu_i^T\Sigma^{-1}\mu_i}}{n_i} = 0,\\
\operatorname{Var}_{S_n}[z_{ij}\mid\mu] &= \frac{p}{n_i n_j} + \frac{\mu_i^T\Sigma^{-1}\mu_i}{n_j} + \frac{\mu_j^T\Sigma^{-1}\mu_j}{n_i} \;\xrightarrow{K}\; \lim_{n_j\to\infty}\frac{J_i}{n_j} + \lim_{n_j\to\infty}\frac{\overline{\mu_i^T\Sigma^{-1}\mu_i}}{n_j} + \lim_{n_i\to\infty}\frac{\overline{\mu_j^T\Sigma^{-1}\mu_j}}{n_i} = 0. \qquad (97)
\end{aligned}
\]
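
The quadratic-form variance formula quoted from [34] is easy to verify numerically. In the sketch below, the covariance matrix and mean are arbitrary illustrative choices, with A = Σ⁻¹ as in the definition of $z_i$ in (95).

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
# Arbitrary illustrative covariance, mean, and A = Sigma^{-1}.
B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)
A = np.linalg.inv(Sigma)
mu = rng.standard_normal(p)

# Var[y' A y] = 2 tr((A Sigma)^2) + 4 mu' A Sigma A mu, for y ~ N(mu, Sigma)
analytic = 2.0 * np.trace((A @ Sigma) @ (A @ Sigma)) + 4.0 * mu @ A @ Sigma @ A @ mu
y = rng.multivariate_normal(mu, Sigma, size=200_000)
empirical = np.einsum('ij,jk,ik->i', y, A, y).var()
print(analytic, empirical)
```

Here A = Σ⁻¹ makes yᵀAy a noncentral chi-square variable, so the check reduces to Var = 2p + 4μᵀΣ⁻¹μ, but the formula (and the code) holds for any symmetric A.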

From the Cauchy-Schwarz inequality ($\operatorname{Cov}[x,y] \le \sqrt{\operatorname{Var}[x]\operatorname{Var}[y]}$), we have $\operatorname{Cov}_{S_n}[y_i,z_k\mid\mu] \xrightarrow{K} 0$, $\operatorname{Cov}_{S_n}[y_i,z_{ij}\mid\mu] \xrightarrow{K} 0$, and $\operatorname{Cov}_{S_n}[z_i,z_{ij}\mid\mu] \xrightarrow{K} 0$ for i, j, k = 0, 1 and i ≠ j. Furthermore, $\frac{n_i-\nu_i}{2(n_i+\nu_i)} \xrightarrow{K} \frac{1-\gamma_i}{2(1+\gamma_i)}$ and $\frac{\nu_i}{n_i+\nu_i} \xrightarrow{K} \frac{\gamma_i}{1+\gamma_i}$. Putting this together, and following the same approach for $\hat{G}_1^B$, yields $\operatorname{Var}_{S_n}[\hat{G}_i^B\mid\mu] \xrightarrow{K} 0$ for i = 0, 1. In general (via Chebyshev’s inequality), lim_{n→∞} Var[Xn] = 0 implies convergence in probability of Xn to lim_{n→∞} E[Xn]. Hence, for i, j = 0, 1 and i ≠ j,

\[
\operatorname*{plim}_{\mathrm{b.k.a.c.}}\hat{G}_i^B\mid\mu = \lim_{\mathrm{b.k.a.c.}}E_{S_n}[\hat{G}_i^B\mid\mu] = (-1)^i\!\left[\frac{1}{2}\!\left(\overline{\mu_j^T\Sigma^{-1}\mu_j}+J_j\right) + \frac{\gamma_i\!\left(\overline{m_i^T\Sigma^{-1}\mu_i}-\overline{m_i^T\Sigma^{-1}\mu_j}\right)}{1+\gamma_i} + \frac{1-\gamma_i}{2(1+\gamma_i)}\!\left(\overline{\mu_i^T\Sigma^{-1}\mu_i}+J_i\right) - \overline{\mu_i^T\Sigma^{-1}\mu_j}\!\left(\frac{1-\gamma_i}{2(1+\gamma_i)}+\frac{1}{2}\right)\right] = G_i^B. \qquad (98)
\]

Now let

\[
\hat{D}_i = \frac{\nu_i+1}{\nu_i}\,(\bar{x}_0-\bar{x}_1)^T\Sigma^{-1}(\bar{x}_0-\bar{x}_1) = \frac{\nu_i+1}{\nu_i}\,\hat{\delta}^2, \qquad (99)
\]

where $\hat{\delta}^2 = (\bar{x}_0-\bar{x}_1)^T\Sigma^{-1}(\bar{x}_0-\bar{x}_1)$. Similar to deriving (97) via the variance of quadratic forms of Gaussian variables, we can show that

\[
\operatorname{Var}_{S_n}[\hat{\delta}^2\mid\mu] = 4\,\delta_\mu^2\left(\frac{1}{n_0}+\frac{1}{n_1}\right) + 2p\left(\frac{1}{n_0}+\frac{1}{n_1}\right)^2. \qquad (100)
\]

Thus,

\[
\operatorname{Var}_{S_n}[\hat{D}_i\mid\mu] = \left(\frac{\nu_i+1}{\nu_i}\right)^2 \operatorname{Var}_{S_n}[\hat{\delta}^2\mid\mu] \;\xrightarrow{K}\; 0. \qquad (101)
\]
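
The expression (100) for the variance of δ̂² can likewise be checked by simulation; the means and covariance below are arbitrary illustrative choices, and the sample means are drawn directly from their exact distributions N(μi, Σ/ni).

```python
import numpy as np

rng = np.random.default_rng(3)
p, n0, n1 = 5, 20, 30
# Arbitrary illustrative means and covariance.
mu0, mu1 = rng.standard_normal(p), rng.standard_normal(p)
B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)
Si = np.linalg.inv(Sigma)
delta_mu2 = (mu0 - mu1) @ Si @ (mu0 - mu1)

# xbar_0 - xbar_1 ~ N(mu0 - mu1, Sigma (1/n0 + 1/n1))
reps = 200_000
d = (rng.multivariate_normal(mu0, Sigma / n0, size=reps)
     - rng.multivariate_normal(mu1, Sigma / n1, size=reps))
empirical = np.einsum('ij,jk,ik->i', d, Si, d).var()   # Var of delta-hat^2
analytic = 4.0 * delta_mu2 * (1/n0 + 1/n1) + 2.0 * p * (1/n0 + 1/n1) ** 2
print(empirical, analytic)
```
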

As before, from Chebyshev’s inequality it follows that

\[
\operatorname*{plim}_{\mathrm{b.k.a.c.}}\hat{D}_i\mid\mu = \lim_{\mathrm{b.k.a.c.}}E_{S_n}[\hat{D}_i\mid\mu] = D. \qquad (102)
\]

By the Continuous Mapping Theorem (continuous functions preserve convergence in probability),

\[
\operatorname*{plim}_{\mathrm{b.k.a.c.}}\hat{\varepsilon}_i^B\mid\mu = \operatorname*{plim}_{\mathrm{b.k.a.c.}}\Phi\!\left((-1)^i\,\frac{-\hat{G}_i^B+c}{\sqrt{\hat{D}_i}}\right)\!\Bigm|\mu = \Phi\!\left(\operatorname*{plim}_{\mathrm{b.k.a.c.}}(-1)^i\,\frac{-\hat{G}_i^B+c}{\sqrt{\hat{D}_i}}\Bigm|\mu\right) = \Phi\!\left((-1)^i\,\frac{-G_i^B+c}{\sqrt{D}}\right). \qquad (103)
\]

The Dominated Convergence Theorem (|Xn| ≤ B for some B > 0 and Xn → X in probability imply E[Xn] → E[X]), via the boundedness of Φ(·), completes the proof:

\[
\lim_{\mathrm{b.k.a.c.}}E_{S_n}[\hat{\varepsilon}_i^B\mid\mu] = \lim_{\mathrm{b.k.a.c.}}E_{S_n}\!\left[\Phi\!\left((-1)^i\,\frac{-\hat{G}_i^B+c}{\sqrt{\hat{D}_i}}\right)\Bigm|\mu\right] = E_{S_n}\!\left[\Phi\!\left(\operatorname*{plim}_{\mathrm{b.k.a.c.}}(-1)^i\,\frac{-\hat{G}_i^B+c}{\sqrt{\hat{D}_i}}\Bigm|\mu\right)\right] = \operatorname*{plim}_{\mathrm{b.k.a.c.}}\hat{\varepsilon}_i^B\mid\mu. \qquad (104)
\]

Proof of Theorem 2

We first prove that $\operatorname{Var}_{\mu,S_n}[\hat{G}_0^B] \xrightarrow{K} 0$, with $\hat{G}_0^B$ defined in (94). To do so we use

\[
\operatorname{Var}_{\mu,S_n}[\hat{G}_0^B] = \operatorname{Var}_{\mu}\!\left[E_{S_n}[\hat{G}_0^B\mid\mu]\right] + E_{\mu}\!\left[\operatorname{Var}_{S_n}[\hat{G}_0^B\mid\mu]\right]. \qquad (105)
\]

To compute the first term on the right hand side, we have

\[
E_{S_n}[\hat{G}_0^B\mid\mu] = \frac{\nu_0 m_0^T}{n_0+\nu_0}\,\Sigma^{-1}(\mu_0-\mu_1) + \frac{n_0-\nu_0}{2(n_0+\nu_0)}\left(\mu_0^T\Sigma^{-1}\mu_0+\frac{p}{n_0}\right) + \frac{1}{2}\left(\mu_1^T\Sigma^{-1}\mu_1+\frac{p}{n_1}\right) - \mu_0^T\Sigma^{-1}\mu_1\left(\frac{n_0}{n_0+\nu_0}\right). \qquad (106)
\]

For i, j = 0, 1 and i ≠ j, define the following random variables:

\[
y_i = m_i^T\Sigma^{-1}(\mu_0-\mu_1), \qquad z_i = \mu_i^T\Sigma^{-1}\mu_i, \qquad z_{ij} = \mu_i^T\Sigma^{-1}\mu_j. \qquad (107)
\]

The variables defined in (107) are obtained by replacing the x̄i’s with μi’s in (95); note that x̄i ~ N(μi, Σ/ni) while μi ~ N(mi, Σ/νi). Hence, replacing μi with mi and ni with νi in (96) and (97) yields

\[
\begin{aligned}
\operatorname{Var}_{\mu}(y_i) &= m_i^T\!\left(\frac{\Sigma^{-1}}{\nu_0}+\frac{\Sigma^{-1}}{\nu_1}\right)\! m_i \;\xrightarrow{K}\; 0,\\
\operatorname{Var}_{\mu}(z_i) &= \frac{2p}{\nu_i^2} + \frac{4\,m_i^T\Sigma^{-1}m_i}{\nu_i} \;\xrightarrow{K}\; 0,\\
\operatorname{Var}_{\mu}(z_{ij}) &= \frac{p}{\nu_i\nu_j} + \frac{m_i^T\Sigma^{-1}m_i}{\nu_j} + \frac{m_j^T\Sigma^{-1}m_j}{\nu_i} \;\xrightarrow{K}\; 0. \qquad (108)
\end{aligned}
\]

By Cauchy-Schwarz, $\operatorname{Cov}_{\mu}(y_i,z_k) \xrightarrow{K} 0$, $\operatorname{Cov}_{\mu}(y_i,z_{ij}) \xrightarrow{K} 0$, and $\operatorname{Cov}_{\mu}(z_i,z_{ij}) \xrightarrow{K} 0$. Hence, $\operatorname{Var}_{\mu}[E_{S_n}[\hat{G}_0^B\mid\mu]] \xrightarrow{K} 0$.

Now consider the second term on the right-hand side of (105). The covariance of functions of Gaussian random variables can be computed from the results of [35]. For instance,

\[
\operatorname{Cov}_{S_n}\!\left[a^T\bar{x}_i,\; \bar{x}_i^T\Sigma^{-1}\bar{x}_i \mid \mu\right] = \frac{2}{n_i}\,a^T\mu_i. \qquad (109)
\]

From (109) and the independence of x̄0 and x̄1,

\[
\operatorname{Cov}_{S_n}\!\left[\bar{x}_i^T\Sigma^{-1}\bar{x}_i,\; \bar{x}_i^T\Sigma^{-1}\bar{x}_j \mid \mu\right] = \frac{2}{n_i}\,\mu_j^T\Sigma^{-1}\mu_i, \qquad i\ne j. \qquad (110)
\]

Via (108), (109), and (110), the inner variance in the second term on the right hand side of (105) is

\[
\begin{aligned}
\operatorname{Var}_{S_n}[\hat{G}_0^B\mid\mu] &= \frac{(n_0-\nu_0)^2\,p}{2n_0^2(n_0+\nu_0)^2} + \frac{n_0\,p}{n_1(n_0+\nu_0)^2} + \frac{p}{2n_1^2} + \left(\frac{(n_0-\nu_0)^2}{n_0(n_0+\nu_0)^2}+\frac{n_0^2}{n_1(n_0+\nu_0)^2}\right)\mu_0^T\Sigma^{-1}\mu_0\\
&\quad - 2\left(\frac{n_0-\nu_0}{(n_0+\nu_0)^2}+\frac{n_0}{n_1(n_0+\nu_0)}\right)\mu_0^T\Sigma^{-1}\mu_1 + 2\left(\frac{\nu_0(n_0-\nu_0)}{n_0(n_0+\nu_0)^2}+\frac{\nu_0 n_0}{n_1(n_0+\nu_0)^2}\right)m_0^T\Sigma^{-1}\mu_0\\
&\quad - 2\left(\frac{\nu_0}{(n_0+\nu_0)^2}+\frac{\nu_0}{n_1(n_0+\nu_0)}\right)m_0^T\Sigma^{-1}\mu_1 + \left(\frac{n_0}{(n_0+\nu_0)^2}+\frac{1}{n_1}\right)\mu_1^T\Sigma^{-1}\mu_1\\
&\quad + \frac{\nu_0^2}{(n_0+\nu_0)^2}\left(\frac{1}{n_0}+\frac{1}{n_1}\right)m_0^T\Sigma^{-1}m_0. \qquad (111)
\end{aligned}
\]

Now, again from the results of [35],

\[
E_{\mu}\!\left[\mu_i^T\Sigma^{-1}\mu_i\right] = m_i^T\Sigma^{-1}m_i + \frac{p}{\nu_i}, \qquad E_{\mu}\!\left[\mu_i^T\Sigma^{-1}\mu_j\right] = m_i^T\Sigma^{-1}m_j, \qquad i\ne j. \qquad (112)
\]
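
The identities in (112) follow from E[μᵀAμ] = mᵀAm + tr(AΣ)/ν for μ ~ N(m, Σ/ν), and are simple to confirm by simulation; the prior mean, covariance, and ν below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
p, nu = 6, 8.0
# Arbitrary illustrative prior mean and covariance.
B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)
Si = np.linalg.inv(Sigma)
m = rng.standard_normal(p)

# mu ~ N(m, Sigma/nu):  E[mu' Si mu] = m' Si m + tr(Si Sigma)/nu = m' Si m + p/nu
mu = rng.multivariate_normal(m, Sigma / nu, size=400_000)
empirical = np.einsum('ij,jk,ik->i', mu, Si, mu).mean()
analytic = m @ Si @ m + p / nu
print(empirical, analytic)
```
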

From (111) and (112), some algebraic manipulations yield

\[
\begin{aligned}
E_{\mu}\!\left[\operatorname{Var}_{S_n}[\hat{G}_0^B\mid\mu]\right] &= \frac{(n_0-\nu_0)^2\,p}{2n_0^2(n_0+\nu_0)^2} + \frac{n_0\,p}{n_1(n_0+\nu_0)^2} + \frac{p}{2n_1^2} + \left(\frac{(n_0-\nu_0)^2}{n_0(n_0+\nu_0)^2}+\frac{n_0^2}{n_1(n_0+\nu_0)^2}\right)\frac{p}{\nu_0}\\
&\quad + \left(\frac{n_0}{(n_0+\nu_0)^2}+\frac{1}{n_1}\right)\frac{p}{\nu_1} + \left(\frac{n_0}{(n_0+\nu_0)^2}+\frac{1}{n_1}\right)\Delta_m^2. \qquad (113)
\end{aligned}
\]

From (113) we see that $E_{\mu}[\operatorname{Var}_{S_n}[\hat{G}_0^B\mid\mu]] \xrightarrow{K} 0$. In sum, $\operatorname{Var}_{\mu,S_n}[\hat{G}_0^B] \xrightarrow{K} 0$ and, similar to the use of Chebyshev’s inequality in the proof of Theorem 1, we get

\[
\operatorname*{plim}_{\mathrm{b.k.a.c.}}\hat{G}_i^B = \lim_{\mathrm{b.k.a.c.}}E_{\mu,S_n}[\hat{G}_i^B] = H_i, \qquad (114)
\]

with Hi defined in (29).

On the other hand, for $\hat{D}_i$ defined in (99) we can write

\[
\operatorname{Var}_{\mu,S_n}[\hat{D}_i] = \operatorname{Var}_{\mu}\!\left[E_{S_n}[\hat{D}_i\mid\mu]\right] + E_{\mu}\!\left[\operatorname{Var}_{S_n}[\hat{D}_i\mid\mu]\right]. \qquad (115)
\]

From expressions similar to (112) for $\bar{x}_i^T\Sigma^{-1}\bar{x}_j$, we get $E_{S_n}[\hat{\delta}^2\mid\mu] = \delta_\mu^2 + \frac{p}{n_0} + \frac{p}{n_1}$. Moreover, $\operatorname{Var}_{\mu}[\delta_\mu^2]$ is obtained from (100) by replacing $n_i$ with $\nu_i$ and $\delta_\mu^2$ with $\Delta_m^2$. Thus, from (99),

\[
\operatorname{Var}_{\mu}\!\left[E_{S_n}[\hat{D}_i\mid\mu]\right] = \left(\frac{\nu_i+1}{\nu_i}\right)^2\left[4\,\Delta_m^2\left(\frac{1}{\nu_0}+\frac{1}{\nu_1}\right) + 2p\left(\frac{1}{\nu_0}+\frac{1}{\nu_1}\right)^2\right] \;\xrightarrow{K}\; 0. \qquad (116)
\]

Furthermore, since $E_{\mu}[\delta_\mu^2] = \Delta_m^2 + \frac{p}{\nu_0} + \frac{p}{\nu_1}$, from (101),

\[
E_{\mu}\!\left[\operatorname{Var}_{S_n}[\hat{D}_i\mid\mu]\right] = \left(\frac{\nu_i+1}{\nu_i}\right)^2\left[4\left(\Delta_m^2+\frac{p}{\nu_0}+\frac{p}{\nu_1}\right)\left(\frac{1}{n_0}+\frac{1}{n_1}\right) + 2p\left(\frac{1}{n_0}+\frac{1}{n_1}\right)^2\right] \;\xrightarrow{K}\; 0. \qquad (117)
\]

Hence, $\operatorname{Var}_{\mu,S_n}[\hat{D}_i] \xrightarrow{K} 0$ and, similar to (114), we obtain

\[
\operatorname*{plim}_{\mathrm{b.k.a.c.}}\hat{D}_i = \lim_{\mathrm{b.k.a.c.}}E_{\mu,S_n}[\hat{D}_0] = \lim_{\mathrm{b.k.a.c.}}E_{\mu,S_n}[\hat{D}_1] = F, \qquad (118)
\]

with F defined in (29). Similar to the proof of Theorem 1, by using the Continuous Mapping Theorem and the Dominated Convergence Theorem, we can show that

\[
\lim_{\mathrm{b.k.a.c.}}E_{\mu,S_n}[\hat{\varepsilon}_i^B] = \lim_{\mathrm{b.k.a.c.}}E_{\mu,S_n}\!\left[\Phi\!\left((-1)^i\,\frac{-\hat{G}_i^B+c}{\sqrt{\hat{D}_i}}\right)\right] = E\!\left[\Phi\!\left(\operatorname*{plim}_{\mathrm{b.k.a.c.}}(-1)^i\,\frac{-\hat{G}_i^B+c}{\sqrt{\hat{D}_i}}\right)\right] = \Phi\!\left((-1)^i\,\frac{-H_i+c}{\sqrt{F}}\right), \qquad (119)
\]

and the result follows.

Proof of Theorem 3

We start from

\[
E_{S_n}\!\left[(\hat{\varepsilon}_0^B)^2\mid\mu\right] = E_{S_n}\!\left[P\!\left(U_0(\bar{x}_0,\bar{x}_1,z)\le c,\; U_0(\bar{x}_0,\bar{x}_1,z')\le c \mid \bar{x}_0,\bar{x}_1,\, z\in\Psi_0,\, z'\in\Psi_0,\, \mu\right)\right], \qquad (120)
\]

which was shown in (36). Here we characterize the conditional probability inside $E_{S_n}[\cdot]$. From the independence of z, z′, x̄0, and x̄1,

\[
\begin{bmatrix} U_0(\bar{x}_0,\bar{x}_1,z)\\ U_0(\bar{x}_0,\bar{x}_1,z') \end{bmatrix} \Biggm|\; \bar{x}_0,\bar{x}_1,\, z\in\Psi_0,\, z'\in\Psi_0,\, \mu \;\sim\; N\!\left(\begin{bmatrix} \hat{G}_0^B\\ \hat{G}_0^B \end{bmatrix},\; \begin{bmatrix} \hat{D}_0 & 0\\ 0 & \hat{D}_0 \end{bmatrix}\right), \qquad (121)
\]

where here N(·, ·) denotes the bivariate Gaussian distribution and $\hat{G}_0^B$ and $\hat{D}_0$ are defined in (94) and (99), respectively. Thus,

\[
E_{S_n}\!\left[(\hat{\varepsilon}_0^B)^2\mid\mu\right] = E_{S_n}\!\left[\left[\Phi\!\left(\frac{-\hat{G}_0^B+c}{\sqrt{\hat{D}_0}}\right)\right]^2 \Bigm| \mu\right]. \qquad (122)
\]

Similar to the proof of Theorem 1, we get

\[
\lim_{\mathrm{b.k.a.c.}}E_{S_n}\!\left[(\hat{\varepsilon}_i^B)^2\mid\mu\right] = \operatorname*{plim}_{\mathrm{b.k.a.c.}}(\hat{\varepsilon}_i^B)^2\mid\mu = \left(\lim_{\mathrm{b.k.a.c.}}E_{S_n}[\hat{\varepsilon}_i^B\mid\mu]\right)^2 = \left[\Phi\!\left((-1)^i\,\frac{-G_i^B+c}{\sqrt{D}}\right)\right]^2. \qquad (123)
\]

Similarly, we obtain $\lim_{\mathrm{b.k.a.c.}}E_{S_n}[\hat{\varepsilon}_0^B\hat{\varepsilon}_1^B\mid\mu] = \operatorname*{plim}_{\mathrm{b.k.a.c.}}\hat{\varepsilon}_0^B\hat{\varepsilon}_1^B\mid\mu$, and the results follow.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  • 1. Hills M. Allocation rules and their error rates. J Royal Statist Soc Ser B (Methodological). 1966;28:1–31.
  • 2. Foley D. Considerations of sample and feature size. IEEE Trans Inf Theory. 1972;IT-18:618–626.
  • 3. Sorum MJ. Estimating the conditional probability of misclassification. Technometrics. 1971;13:333–343.
  • 4. McLachlan GJ. An asymptotic expansion of the expectation of the estimated error rate in discriminant analysis. Australian Journal of Statistics. 1973;15:210–214.
  • 5. Moran M. On the expectation of errors of allocation associated with a linear discriminant function. Biometrika. 1975;62:141–148.
  • 6. Berikov VB. A priori estimates of recognition quality for discrete features. Pattern Recogn and Image Analysis. 2002;12:235–242.
  • 7. Berikov VB, Litvinenko AG. The influence of prior knowledge on the expected performance of a classifier. Pattern Recogn Lett. 2003;24:2537–2548.
  • 8. Braga-Neto U, Dougherty ER. Exact performance measures and distributions of error estimators for discrete classifiers. Pattern Recogn. 2005;38:1799–1814.
  • 9. Glick N. Additive estimators for probabilities of correct classification. Pattern Recogn. 1978;10:211–222.
  • 10. Fukunaga K, Hayes RR. Estimation of classifier performance. IEEE Trans Pattern Anal Mach Intell. 1989;11:1087–1101.
  • 11. Raudys S. Statistical and Neural Classifiers: An Integrated Approach to Design. Springer-Verlag; London: 2001.
  • 12. Zollanvari A, Braga-Neto UM, Dougherty ER. On the sampling distribution of resubstitution and leave-one-out error estimators for linear classifiers. Pattern Recogn. 2009;42:2705–2723.
  • 13. Braga-Neto UM, Dougherty ER. Exact correlation between actual and estimated errors in discrete classification. Pattern Recogn Lett. 2010;31:407–413.
  • 14. Zollanvari A, Braga-Neto UM, Dougherty ER. Exact representation of the second-order moments for resubstitution and leave-one-out error estimation for linear discriminant analysis in the univariate heteroskedastic Gaussian model. Pattern Recogn. 2012;45:908–917.
  • 15. Zollanvari A, Braga-Neto UM, Dougherty ER. Joint sampling distribution between actual and estimated classification errors for linear discriminant analysis. IEEE Trans Inf Theory. 2010;56:784–804.
  • 16. Zollanvari A, Braga-Neto UM, Dougherty ER. Analytic study of performance of error estimators for linear discriminant analysis. IEEE Trans Sig Proc. 2011;59:4238–4255.
  • 17. Wyman F, Young D, Turner D. A comparison of asymptotic error rate expansions for the sample linear discriminant function. Pattern Recogn. 1990;23:775–783.
  • 18. Pikelis V. Comparison of methods of computing the expected classification errors. Automat Remote Control. 1976;5:59–63.
  • 19. Raudys S. On the amount of a priori information in designing the classification algorithm. Technical Cybernetics. 1972;4:168–174. In Russian.
  • 20. Deev A. Representation of statistics of discriminant analysis and asymptotic expansion when space dimensions are comparable with sample size. Dokl Akad Nauk SSSR. 1970;195:759–762. In Russian.
  • 21. Fujikoshi Y. Error bounds for asymptotic approximations of the linear discriminant function when the sample size and dimensionality are large. J Multivariate Anal. 2000;73:1–17.
  • 22. Serdobolskii V. Multivariate Statistical Analysis: A High-Dimensional Approach. Springer; 2000.
  • 23. Bickel PJ, Levina E. Some theory for Fisher’s linear discriminant function, ‘naive Bayes’, and some alternatives when there are many more variables than observations. Bernoulli. 2004;10:989–1010.
  • 24. Raudys S, Young DM. Results in statistical discriminant analysis: A review of the former Soviet Union literature. J Multivariate Anal. 2004;89:1–35.
  • 25. Dougherty E, Sima C, Hua J, Hanczar B, Braga-Neto U. Performance of error estimators for classification. Current Bioinformatics. 2010;5:53–67.
  • 26. Dougherty ER, Zollanvari A, Braga-Neto UM. The illusion of distribution-free small-sample classification in genomics. Current Genomics. 2011;12:333–341. doi: 10.2174/138920211796429763.
  • 27. Dalton L, Dougherty ER. Bayesian minimum mean-square error estimation for classification error – Part I: Definition and the Bayesian MMSE error estimator for discrete classification. IEEE Trans Sig Proc. 2011;59:115–129.
  • 28. Dalton L, Dougherty ER. Bayesian minimum mean-square error estimation for classification error – Part II: Linear classification of Gaussian models. IEEE Trans Sig Proc. 2011;59:130–144.
  • 29. Dalton L, Dougherty ER. Exact sample conditioned MSE performance of the Bayesian MMSE estimator for classification error – Part I: Representation. IEEE Trans Sig Proc. 2012;60:2575–2587.
  • 30. Dalton L, Dougherty ER. Exact sample conditioned MSE performance of the Bayesian MMSE estimator for classification error – Part II: Consistency and performance analysis. IEEE Trans Sig Proc. 2012;60:2588–2603.
  • 31. Anderson T. Classification by multivariate analysis. Psychometrika. 1951;16:31–50.
  • 32. John S. Errors in discrimination. Ann Math Statist. 1961;32:1125–1144.
  • 33. Devroye L, Gyorfi L, Lugosi G. A Probabilistic Theory of Pattern Recognition. Springer; New York: 1996.
  • 34. Kan R. From moments of sum to moments of product. J Multivariate Anal. 2008;99:542–554.
  • 35. Bao Y, Ullah A. Expectation of quadratic forms in normal and nonnormal variables with econometric applications. J Statist Plann Inference. 2010;140:1193–1205.
