Abstract
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first and second moments of the Bayesian MMSE error estimator, its cross moment with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and additional figures.
Keywords: Linear discriminant analysis, Bayesian Minimum Mean-Square Error Estimator, Double asymptotics, Kolmogorov asymptotics, Performance metrics, RMS
1. Introduction
The most important aspect of any classifier is its error, ε, defined as the probability of misclassification, since ε quantifies the predictive capacity of the classifier. Relative to a classification rule and a given feature-label distribution, the error is a function of the sampling distribution and as such possesses its own distribution, which characterizes the true performance of the classification rule. In practice, the error must be estimated from data by some error estimation rule yielding an estimate, ε̂. If the sample is large, then part of the data can be held out for error estimation; otherwise, the classification and error estimation rules are applied to the same set of training data, which is the situation that concerns us here. Like the true error, the estimated error is also a function of the sampling distribution. The performance of the error estimation rule is completely described by the joint distribution of (ε, ε̂).
Three widely-used metrics for performance of an error estimator are the bias, deviation variance, and root-mean-square (RMS), given by
| Biasn[ε̂] = E[ε̂ − ε],   Vard,n[ε̂] = Var[ε̂ − ε],   RMSn[ε̂] = (E[(ε̂ − ε)2])1/2 | (1) |
respectively. The RMS (square root of mean square error, MSE) is the most important because it quantifies estimation accuracy. Bias requires only the first-order moments, whereas the deviation variance and RMS require also the second-order moments.
Historically, analytic study has mainly focused on the first marginal moment of the estimated error for linear discriminant analysis (LDA) in the Gaussian model or for multinomial discrimination [1]–[12]; however, marginal knowledge does not provide the joint probabilistic knowledge required for assessing estimation accuracy, in particular, the mixed second moment. Recent work has aimed at characterizing joint behavior. For multinomial discrimination, exact representations of the second-order moments, both marginal and mixed, for the true error and the resubstitution and leave-one-out estimators have been obtained [13]. For LDA, the exact joint distributions for both resubstitution and leave-one-out have been found in the univariate Gaussian model and approximations have been found in the multivariate model with a common known covariance matrix [14, 15]. Whereas one could utilize the approximate representations to find approximate moments via integration in the multivariate model with a common known covariance matrix, more accurate approximations, including the second-order mixed moment and the RMS, can be achieved via asymptotically exact analytic expressions using a double-asymptotic approach, where both sample size (n) and dimensionality (p) approach infinity at a fixed rate between the two [16]. Finite-sample approximations from the double-asymptotic method have been shown to be quite accurate [16, 17, 18]. There is a considerable body of work on the use of double asymptotics for the analysis of LDA and its related statistics [16, 19, 20, 21, 22, 23]. Raudys and Young provide a good review of the literature on the subject [24].
Although the theoretical underpinning of both [16] and the present paper relies on double asymptotic expansions, in which n, p → ∞ at a proportional rate, our practical interest is in the finite-sample approximations corresponding to the asymptotic expansions. In [17], the accuracy of such finite-sample approximations was investigated relative to asymptotic expansions for the expected error of LDA in a Gaussian model. Several single-asymptotic expansions (n → ∞) were considered, along with double-asymptotic expansions (n, p → ∞) [19, 20]. The results of [17] show that the double-asymptotic approximations are significantly more accurate than the single-asymptotic approximations. In particular, even with n/p < 3, the double-asymptotic expansions yield “excellent approximations” while the others “falter.”
The aforementioned work is based on the assumption that a sample is drawn from a fixed feature-label distribution F, a classifier and error estimate are derived from the sample without using any knowledge concerning F, and performance is relative to F. Research dating to 1978 shows that small-sample error estimation under this paradigm tends to be inaccurate. Re-sampling methods such as cross-validation possess large deviation variance and, therefore, large RMS [9, 25]. Scientific content in the context of small-sample classification can be facilitated by prior knowledge [26, 27, 28]. There are three possibilities regarding the feature-label distribution: (1) F is known, in which case no data are needed and there is no error estimation issue; (2) nothing is known about F, in which case either there are no known RMS bounds or those that are known are useless for small samples; and (3) F is known to belong to an uncertainty class of distributions, and this knowledge can be used either to bound the RMS [16] or, in conjunction with the training data, to estimate the error of the designed classifier. If there exists a prior distribution governing the uncertainty class, then in essence we have a distributional model. Since virtually nothing can be said about the error estimate in the first two cases, for a classifier to possess scientific content we must begin with a distributional model.
Given the need for a distributional model, a natural approach is to find an optimal minimum mean-square-error (MMSE) error estimator relative to an uncertainty class Θ [27]. This results in a Bayesian approach with Θ being given a prior distribution, π(θ), θ ∈ Θ, and the sample Sn being used to construct a posterior distribution, π*(θ), from which an optimal MMSE error estimator, ε̂B, can be derived. π(θ) provides a mathematical framework for both the analysis of any error estimator and the design of estimators with desirable properties or optimal performance. π*(θ) provides a sample-conditioned distribution on the true classifier error, where randomness in the true error comes from uncertainty in the underlying feature-label distribution (given Sn). Finding the sample-conditioned MSE, MSEθ[ε̂B|Sn], of an MMSE error estimator amounts to evaluating the variance of the true error conditioned on the observed sample [29]. MSEθ[ε̂B|Sn] → 0 as n → ∞ almost surely in both the discrete and Gaussian models provided in [29, 30], where closed form expressions for the sample-conditioned MSE are available.
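Since ε̂B(Sn) = Eθ[ε|Sn] is the posterior mean of the true error, the sample-conditioned MSE referred to above is exactly a posterior variance:

MSEθ[ε̂B|Sn] = Eθ[(ε − ε̂B)2|Sn] = Eθ[(ε − Eθ[ε|Sn])2|Sn] = Varθ[ε|Sn].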
The sample-conditioned MSE provides a measure of performance across the uncertainty class Θ for a given sample Sn. As such, it involves various sample-conditioned moments for the error estimator: Eθ[ε̂B|Sn], Eθ[(ε̂B)2|Sn], and Eθ[εε̂B|Sn]. One could, on the other hand, consider the MSE relative to a fixed feature-label distribution in the uncertainty class and randomness relative to the sampling distribution. This would yield the feature-label-distribution-conditioned MSE, MSESn[ε̂B|θ], and the corresponding moments: ESn[ε̂B|θ], ESn[(ε̂B)2|θ], and ESn[εε̂B|θ]. From a classical point of view, the moments given θ are of interest as they help shed light on the performance of an estimator relative to fixed parameters of the class-conditional densities. Using this set of moments (i.e., given θ) we are able to compare the performance of the Bayesian MMSE error estimator with classical estimators of the true error such as resubstitution and leave-one-out.
From a global perspective, to evaluate performance across both the uncertainty class and the sampling distribution requires the unconditioned MSE, MSEθSn[ε̂B], and corresponding moments EθSn[ε̂B], EθSn[(ε̂B)2], and EθSn[εε̂B]. While both MSESn[ε̂B|θ] and MSEθSn[ε̂B] have been examined via simulation studies in [27, 28, 30] for discrete and Gaussian models, our intention in the present paper is to derive double-asymptotic representations of the feature-label-distribution-conditioned (given θ) and unconditioned MSE, along with the corresponding moments of the Bayesian MMSE error estimator for linear discriminant analysis (LDA) in the Gaussian model.
We make three modeling assumptions. As in many analytic error analysis studies, we employ stratified sampling: n = n0 + n1 sample points are selected to constitute the sample Sn in Rp, where given n, n0 and n1 are determined, and where x1, x2, …, xn0 and xn0+1, xn0+2, …, xn0+n1 are randomly selected from distributions Π0 and Π1, respectively. Πi possesses a multivariate Gaussian distribution N(μi, Σ), for i = 0, 1. This means that the prior probabilities α0 and α1 = 1 − α0 for classes 0 and 1, respectively, cannot be estimated from the sample (see [31] for a discussion of issues surrounding lack of an estimator for α0). However, our second assumption is that α0 and α1 are known. This is a natural assumption for many medical classification problems. If we desire early or mid-term detection, then we are typically constrained to a small sample for which n0 and n1 are not random but for which α0 and α1 are known (estimated with extreme accuracy) on account of a large population of post-mortem examinations. The third assumption is that there is a known common covariance matrix for the classes, a common assumption in error analysis [32, 3, 5, 16]. The common covariance assumption is typical for small samples because it is well known that LDA commonly performs better than quadratic discriminant analysis (QDA) for small samples, even if the actual covariances are different, owing to the estimation advantage of using the pooled sample covariance matrix. As for the assumption of known covariance, this assumption is typical simply owing to the mathematical difficulties of obtaining error representations for unknown covariance (we know of no unknown-covariance result for second-order representations). Indeed, the natural next step following this paper and [16] is to address the unknown covariance problem (although with it being outstanding for almost half a century, it may prove difficult).
Under our assumptions, the Anderson W statistic is defined by
| W(x̄0, x̄1, x) = (x − (x̄0 + x̄1)/2)T Σ−1(x̄0 − x̄1) | (2) |
where x̄0 = (x1 + ⋯ + xn0)/n0 and x̄1 = (xn0+1 + ⋯ + xn0+n1)/n1 are the class sample means. The corresponding linear discriminant is defined by ψn(x) = 1 if W(x̄0, x̄1, x) ≤ c and ψn(x) = 0 if W(x̄0, x̄1, x) > c, where c is a fixed threshold determined by the known prior probabilities α0 and α1. Given sample Sn (and thus x̄0 and x̄1), for i = 0, 1, the error for ψn is given by ε = α0ε0 + α1ε1, where
| ε0 = Φ( [c − (μ0 − (x̄0 + x̄1)/2)T Σ−1(x̄0 − x̄1)] / [(x̄0 − x̄1)T Σ−1(x̄0 − x̄1)]1/2 ),   ε1 = Φ( [(μ1 − (x̄0 + x̄1)/2)T Σ−1(x̄0 − x̄1) − c] / [(x̄0 − x̄1)T Σ−1(x̄0 − x̄1)]1/2 ) | (3) |
and Φ(.) denotes the cumulative distribution function of a standard normal random variable.
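As an illustration, ε0 and ε1 can be evaluated numerically for given sample means and true parameters. The sketch below is a minimal Python example assuming the forms of (2)–(3) as written above, with scipy supplying Φ; the default threshold c = 0 and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def lda_true_errors(xbar0, xbar1, mu0, mu1, Sigma, c=0.0):
    """Class-conditional true errors of the discriminant psi_n(x) = 1 if
    W(xbar0, xbar1, x) <= c, else 0, for known common covariance Sigma."""
    Si = np.linalg.inv(Sigma)
    a = Si @ (xbar0 - xbar1)                    # discriminant direction Sigma^{-1}(xbar0 - xbar1)
    mid = 0.5 * (xbar0 + xbar1)
    s = np.sqrt((xbar0 - xbar1) @ a)            # std. dev. of W(X) under either class
    eps0 = norm.cdf((c - (mu0 - mid) @ a) / s)  # P(W <= c | X ~ N(mu0, Sigma))
    eps1 = norm.cdf(((mu1 - mid) @ a - c) / s)  # P(W >  c | X ~ N(mu1, Sigma))
    return eps0, eps1

# Toy example: p = 2, sample means slightly perturbed from the true means.
mu0, mu1 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
Sigma = np.array([[1.0, 0.1], [0.1, 1.0]])
eps0, eps1 = lda_true_errors(mu0 + 0.05, mu1 - 0.05, mu0, mu1, Sigma)
```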
Raudys proposed the following approximation to the expected LDA classification error [19, 24]:
| (4) |
We provide similar approximations for error-estimation moments and prove asymptotic exactness.
2. Bayesian MMSE Error Estimator
In the Bayesian classification framework [27, 28], it is assumed that the class-0 and class-1 conditional distributions are parameterized by θ0 and θ1, respectively. Therefore, assuming known αi, the actual feature-label distribution belongs to an uncertainty class parameterized by θ = (θ0, θ1) according to a prior distribution, π(θ). Given a sample Sn, the Bayesian MMSE error estimator minimizes the MSE between the true error of a designed classifier, ψn, and an error estimate (a function of Sn and ψn). The expectation in the MSE is taken over the uncertainty class conditioned on Sn. Specifically, the MMSE error estimator is the expected true error, ε̂B(Sn) = Eθ[ε(θ)|Sn]. The expectation given the sample is over the posterior density, π*(θ). Thus, we write the Bayesian MMSE error estimator as ε̂B = Eπ*[ε]. To facilitate analytic representations, θ0 and θ1 are assumed to be independent prior to observing the data. Denote the marginal priors of θ0 and θ1 by π(θ0) and π(θ1), respectively, and the corresponding posteriors by π*(θ0) and π*(θ1), respectively. Independence is preserved in the posterior, i.e., π*(θ0, θ1) = π*(θ0)π*(θ1) [27].
Owing to the posterior independence between θ0 and θ1 and the fact that αi is known, the Bayesian MMSE error estimator can be expressed by [27]
| ε̂B = α0 Eπ*[ε0] + α1 Eπ*[ε1] | (5) |
where, letting Θi be the parameter space of θi,
| Eπ*[εi] = ∫Θi εi(θi) π*(θi) dθi | (6) |
For known Σ, with the prior distribution on μi assumed to be Gaussian with mean mi and covariance matrix Σ/νi, Eπ*[εi] is given by equation (10) in [28]:
| (7) |
where
| (8) |
and νi > 0 is a measure of our certainty concerning the prior knowledge: the larger νi, the more localized the prior distribution about mi. Letting μ = (μ0, μ1), the moments that interest us are of the form ESn[ε̂B|μ], ESn[(ε̂B)2|μ], and ESn[εε̂B|μ], which are used to obtain MSESn[ε̂B|μ], and Eμ,Sn[ε̂B], Eμ,Sn[(ε̂B)2], and Eμ,Sn[εε̂B], which are needed to characterize MSEμ,Sn[ε̂B].
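For reference, the prior N(mi, Σ/νi) with known Σ is conjugate, so the posterior of μi given the ni class-i points is N((νimi + nix̄i)/(νi + ni), Σ/(νi + ni)). A minimal sketch of this standard conjugate update follows; it deliberately stops short of the exact estimator expression (7) of [28], and the names are illustrative.

```python
import numpy as np

def posterior_update(m_i, nu_i, xbar_i, n_i):
    """Conjugate update for mu_i ~ N(m_i, Sigma/nu_i) given n_i observations
    from N(mu_i, Sigma) with Sigma known: returns the posterior mean and the
    updated certainty parameter nu_i + n_i (posterior covariance is Sigma/(nu_i + n_i))."""
    nu_star = nu_i + n_i
    m_star = (nu_i * np.asarray(m_i) + n_i * np.asarray(xbar_i)) / nu_star
    # As nu_i/n_i -> 0 the posterior mean approaches xbar_i (the data dominate);
    # as nu_i/n_i grows, it approaches the prior mean m_i.
    return m_star, nu_star
```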
3. Bayesian-Kolmogorov Asymptotic Conditions
The Raudys-Kolmogorov asymptotic conditions [16] are defined on a sequence of Gaussian discrimination problems with a sequence of parameters and sample sizes, (μp,0, μp,1, Σp, np,0, np,1), p = 1, 2, …, where the means and the covariance matrix are arbitrary. The common assumptions for Raudys-Kolmogorov asymptotics are n0 → ∞, n1 → ∞, p → ∞, with p/n0 and p/n1 converging to finite constants. For notational simplicity, we use a single limit symbol for the limit under these conditions. In the analysis of classical statistics related to LDA it is commonly assumed that the Mahalanobis distance between the classes, δp = ((μp,0 − μp,1)T Σp−1(μp,0 − μp,1))1/2, is finite for each p and converges to a finite limit as p → ∞ (see [22], p. 4). This condition assures existence of limits of performance metrics of the relevant statistics [16, 22].
To analyze the Bayesian MMSE error estimator, ε̂B, we modify the sequence of Gaussian discrimination problems to:
| (9) |
In addition to the previous conditions, we assume that the limits of the relevant Mahalanobis-type quantities exist for i, j = 0, 1, and we denote the constants to which they converge by the corresponding barred symbols. In [22], fairly mild sufficient conditions are given for the existence of these limits.
We refer to all of the aforementioned conditions, along with νi → ∞ with νi/ni converging to a finite constant, as the Bayesian-Kolmogorov asymptotic conditions (b.k.a.c.). We denote the limit under these conditions by limb.k.a.c., which means that, for i, j = 0, 1,
| (10) |
This limit is defined for the case where there is conditioning on a specific value of μp,i. Therefore, in this case μp,i is not a random variable, and for each p, it is a vector of constants. Absent such conditioning, the sequence of discrimination problems and the above limit reduce to
| (11) |
and
| (12) |
respectively. For notational simplicity we assume clarity from the context and do not explicitly differentiate between these conditions. We denote convergence in probability under Bayesian-Kolmogorov asymptotic conditions by “ ”.“ ” and “ ” denote ordinary convergence under Bayesian-Kolmogorov asymptotic conditions. At no risk of ambiguity, we henceforth omit the subscript “p” from the parameters and sample sizes in (9) or (11).
We define ηa1,a2,a3,a4 = (a1 − a2)T Σ−1(a3 − a4) and, for ease of notation, write ηa1,a2,a1,a2 as ηa1,a2. There are two special cases: (1) the square of the Mahalanobis distance in the space of the parameters of the unknown class-conditional densities, δ2 = ημ0,μ1; and (2) the square of the Mahalanobis distance in the space of prior knowledge, δm2 = ηm0,m1, where ηm0,m1 = (m0 − m1)T Σ−1(m0 − m1). The conditions in (10) assure existence of limb.k.a.c ηa1,a2,a3,a4, where the aj’s can be any combination of mi and μi, i = 0, 1. Consistent with our notation, we use η̄a1,a2,a3,a4, δ̄2, and δ̄m2 to denote the limb.k.a.c of ηa1,a2,a3,a4, δ2, and δm2, respectively. Thus,
| (13) |
The ratio p/ni is an indicator of complexity for LDA (in fact, of any linear classification rule): the VC dimension in this case is p + 1 [33]. Therefore, the conditions (10) assure the existence of the asymptotic complexity of the problem. The ratio νi/ni is an indicator of the relative certainty of prior knowledge to the data: the smaller νi/ni, the more we rely on the data and the less on our prior knowledge. Therefore, the conditions (10) guarantee that this relative certainty has an asymptotic limit. In the following, we let βi = νi/ni, so that βi converges under the b.k.a.c.
4. First Moment of
In this section we use the Bayesian-Kolmogorov asymptotic conditions to characterize the conditional and unconditional first moment of the Bayesian MMSE error estimator.
4.1. Conditional Expectation of
The asymptotic (in the Bayesian-Kolmogorov sense) conditional expectation of the Bayesian MMSE error estimator is characterized in the following theorem, with the proof presented in the Appendix. Note that the quantities in (16), including D, depend on μ, but to ease the notation we leave this implicit.
Theorem 1
Consider the sequence of Gaussian discrimination problems defined by (9). Then
| (14) |
so that
| (15) |
where
| (16) |
Theorem 1 suggests a finite-sample approximation:
| (17) |
where is obtained by using the finite-sample parameters of the problem in (16), namely,
| (18) |
To obtain the corresponding approximation for , it suffices to use (17) by exchanging n0 and n1, ν0 and ν1, m0 and m1, and μ0 and μ1 in .
To obtain a Raudys-type of finite-sample approximation for the expectation of , first note that the Gaussian distribution in (7) can be rewritten as
| (19) |
where z is independent of Sn, Ψi is a multivariate Gaussian , and
| (20) |
Taking the expectation relative to the sampling distribution and then applying the standard normal approximation yields the Raudys-type approximation:
| (21) |
Algebraic manipulation yields (Suppl. Section A)
| (22) |
where
| (23) |
with being presented in (18) and
| (24) |
The corresponding approximation for is
| (25) |
where and are obtained by exchanging n0 and n1, ν0 and ν1, m0 and m1, and μ0 and μ1 in and , respectively. It is straightforward to see that
| (26) |
with being defined in Theorem 1. Therefore, the approximation obtained in (22) is asymptotically exact and (17) and (22) are asymptotically equivalent.
4.2. Unconditional Expectation of
We consider the unconditional expectation of under Bayesian-Kolmogorov asymptotics. The proof of the following theorem is presented in the Appendix.
Theorem 2
Consider the sequence of Gaussian discrimination problems defined by (11). Then
| (27) |
so that
| (28) |
where
| (29) |
Theorem 2 suggests the finite-sample approximation
| (30) |
where
| (31) |
From (19) we can get the Raudys-type approximation:
| (32) |
Some algebraic manipulations yield (Suppl. Section B)
| (33) |
where
| (34) |
It is straightforward to see that
| (35) |
with H0 defined in Theorem 2. Hence, the approximation obtained in (33) is asymptotically exact and both (30) and (33) are asymptotically equivalent.
5. Second Moments of
Here we employ the Bayesian-Kolmogorov asymptotic analysis to characterize the second and cross moments with the actual error, and therefore the MSE of error estimation.
5.1. Conditional Second and Cross Moments of
Defining two i.i.d. random vectors, z and z′, yields the second moment representation
| (36) |
where z and z′ are independent of Sn, Ψi is a multivariate Gaussian, and Ui(x̄0, x̄1, z) is defined in (20).
We have the following theorem, with the proof presented in the Appendix.
Theorem 3
For the sequence of Gaussian discrimination problems in (9) and for i, j = 0, 1,
| (37) |
so that
| (38) |
where , and D are defined in (16).
This theorem suggests the finite-sample approximation
| (39) |
which is the square of the approximation (17). Corresponding approximations for and are obtained similarly.
Similar to the proof of Theorem 3, we obtain the conditional cross moment of ε̂B with the true error.
Theorem 4
Consider the sequence of Gaussian discrimination problems in (9). Then for i, j = 0, 1,
| (40) |
so that
| (41) |
where and D are defined in (16) and Gi is defined in (47).
This theorem suggests the finite-sample approximation
| (42) |
This is a product of (17) and the finite-sample approximation for ESn[ε0|μ] in [16].
A consequence of Theorems 1, 3, and 4 is that all the conditional variances and covariances are asymptotically zero:
| (43) |
Hence, the deviation variance is also asymptotically zero under the b.k.a.c. Defining the conditional bias as
| (44) |
the asymptotic RMS reduces to
| (45) |
To express the conditional bias, as proven in [16],
| (46) |
where
| (47) |
It follows from Theorem 1 and (46) that
| (48) |
Recall that the MMSE error estimator is unconditionally unbiased: BiasU,n[ε̂B] = Eμ,Sn[ε̂B − ε] = 0.
We next obtain Raudys-type approximations corresponding to Theorems 3 and 4 by utilizing the joint distribution of U0(x̄0, x̄1, z) and U0(x̄0, x̄1, z′), defined in (20), with z and z′ being independently selected from populations Ψ0 or Ψ1. We employ the function
| (49) |
which is the distribution function of a joint bivariate Gaussian vector with zero means, unit variances, and correlation coefficient ρ. Note that Φ(a, ∞; ρ) = Φ(a) and Φ(a, b; 0) = Φ(a) Φ(b). For simplicity of notation, we write Φ(a, a; ρ) as Φ(a; ρ). The rectangular-area probabilities involving any jointly Gaussian pair of variables (x, y) can be expressed as
| (50) |
with μx = E[x], μy = E[y], standard deviations σx and σy, and correlation coefficient ρxy.
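A quick numerical sketch of Φ(a, b; ρ) and the two properties just noted, using scipy's bivariate normal CDF (the function name Phi2 is illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def Phi2(a, b, rho):
    """Phi(a, b; rho): CDF of a zero-mean, unit-variance bivariate Gaussian
    with correlation coefficient rho, evaluated at (a, b)."""
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([a, b])

a, b, rho = 0.3, -0.7, 0.4
assert np.isclose(Phi2(a, 30.0, rho), norm.cdf(a), atol=1e-4)             # Phi(a, inf; rho) = Phi(a)
assert np.isclose(Phi2(a, b, 0.0), norm.cdf(a) * norm.cdf(b), atol=1e-4)  # Phi(a, b; 0) = Phi(a)Phi(b)

# Rectangular-area probability for a general Gaussian pair (x, y), as in (50):
# P(x <= u, y <= v) = Phi2((u - mu_x)/sigma_x, (v - mu_y)/sigma_y, rho_xy)
```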
Using (36), we obtain the second-order extension of (21) by
| (51) |
Using (51), some algebraic manipulations yield
| (52) |
with and being presented in (23) and (24), respectively, and
| (53) |
The proof of (53) follows by expanding U0(x̄0, x̄1, z) and U0(x̄0, x̄1, z′) from (20) and then using the set of identities in the proof of (33), i.e. equation (S.1) from Suppl. Section B. Similarly,
| (54) |
where , and are obtained by exchanging n0 and n1, ν0 and ν1, m0 and m1, and μ0 and μ1, in (24), in obtained from (18), and in (53), respectively.
This limit, together with (26), shows that (52) is asymptotically exact, that is, asymptotically equivalent to the expression obtained in Theorem 3. Similarly, it can be shown that
| (55) |
where, after some algebraic manipulations we obtain
| (56) |
Suppl. Section C gives the proof of (56). Since , (55) is asymptotically exact, i.e. (55) becomes equivalent to the result of Theorem 3. We obtain the conditional cross moment similarly:
| (57) |
where
| (58) |
where superscript “C” denotes conditional variance. Algebraic manipulations like those leading to (53) yield
| (59) |
where
| (60) |
with the remaining quantities having been obtained previously in equations (49) and (50) of [16], namely,
| (61) |
Similarly, we can show that
| (62) |
where and are obtained as in (54), and , and are obtained by exchanging n0 and n1 in , and , respectively. Similarly,
| (63) |
where
| (64) |
and
| (65) |
where is obtained by exchanging n0 and n1, ν0 and ν1, m0 and m1, and μ0 and μ1 in .
Therefore, from (26) and the limits of the quantities defined above, expressions (59), (62), and (63) are all asymptotically exact (compare to Theorem 4).
5.2. Unconditional Second and Cross Moments of
Similarly to the way (36) was obtained, we can show that
| (66) |
Similarly to the proofs of Theorem 3 and 4, we get the following theorems.
Theorem 5
Consider the sequence of Gaussian discrimination problems in (11). For i, j = 0, 1,
| (67) |
so that
| (68) |
where H0, H1, and F are defined in (29).
Theorem 6
Consider the sequence of Gaussian discrimination problems in (11). For i, j = 0, 1,
| (69) |
so that
| (70) |
where H0, H1, and F are defined in (29).
Theorems 5 and 6 suggest the finite-sample approximation:
| (71) |
A consequence of Theorems 2, 5, and 6 is that
| (72) |
In [30], it was shown that ε̂B is strongly consistent, meaning that ε̂B(Sn) − ε(Sn) → 0 almost surely as n → ∞ under rather general conditions, in particular, for the Gaussian and discrete models considered in that paper. It was also shown that MSEμ[ε̂B|Sn] → 0 almost surely as n → ∞ under similar conditions. Here, we have shown that MSEμ,Sn[ε̂B] → 0 under the conditions stated in (12). Some researchers refer to conditions of double asymptoticity as “comparable” dimensionality and sample size [20, 22]. Therefore, one may interpret this as meaning that MSEμ,Sn[ε̂B] is close to zero for large, comparable dimensionality, sample size, and certainty parameter.
We now consider Raudys-type approximations. Analogous to the approximation used in (51), we obtain the unconditional second moment of :
| (73) |
Using (73) we get
| (74) |
with and given in (31) and (34), respectively, and
| (75) |
Suppl. Section D presents the proof of (75). In a similar way,
| (76) |
where , and are obtained by exchanging n0 and n1, ν0 and ν1, m0 and m1, and μ0 and μ1, in (34), in obtained from (31), and (75), respectively.
This limit, together with (35), makes (74) asymptotically exact. We similarly obtain
| (77) |
where
| (78) |
Suppl. Section E presents the proof of (78). Since , (77) is asymptotically exact (compare to Theorem 5). Similar to (57) and (59), where we characterized conditional cross moments, we can get the unconditional cross moments as follows:
| (79) |
where
| (80) |
where the superscript “U” represents the unconditional variance, the remaining quantities are presented in (31) and (34), respectively, and
| (81) |
The proof of (81) is presented in Suppl. Section F. Similarly,
| (82) |
where,
| (83) |
See Suppl. Section G for the proof of (83). These limits, along with (35), make (79) and (82) asymptotically exact (compare to Theorem 6).
5.3. Conditional and Unconditional Second Moment of εi
To complete the derivations and obtain the unconditional RMS of estimation, we need the conditional and unconditional second moments of the true error. The conditional second moment of the true error can be found from results in [16], which for completeness are reproduced here:
| (84) |
with and defined in (61),
| (85) |
and
| (86) |
where
| (87) |
Similar to obtaining (79), we can show that
| (88) |
with and given in (31) and (34), respectively, and
| (89) |
Similarly,
| (90) |
with obtained from by exchanging n0 and n1, and ν0 and ν1. Similarly,
| (91) |
with and given in (31) and (34), respectively, and
| (92) |
6. Monte Carlo Comparisons
In this section we compare the asymptotically exact finite-sample approximations of the first, second, and mixed moments with Monte Carlo estimates in the conditional and unconditional scenarios. The following steps are used to compute the Monte Carlo estimates:
1. Define a set of hyper-parameters for the Gaussian model: m0, m1, ν0, ν1, and Σ. We let Σ have diagonal elements 1 and off-diagonal elements 0.1. The means μ0 and μ1 of the class-conditional densities are chosen by fixing the Mahalanobis distance between the classes (corresponding to Bayes error 0.1586); setting this distance and Σ fixes μ0 and μ1. The priors, π0 and π1, are defined by choosing a small deviation from μ0 and μ1, that is, by setting mi = μi + aμi, where a = 0.01.
2. (Unconditional case) Using π0 and π1, generate random realizations of μ0 and μ1. (Conditional case) Use the values of μ0 and μ1 obtained from Step 1.
3. For fixed Π0 and Π1, generate a set of training data of size ni for class i = 0, 1.
4. Using the training sample, design the LDA classifier, ψn, using (2).
5. Compute the Bayesian MMSE error estimator, ε̂B, using (5) and (7).
6. Knowing μ0 and μ1, find the true error of ψn using (3).
7. Repeat Steps 3 through 6, T1 times.
8. Repeat Steps 2 through 7, T2 times.
In the unconditional case, we set T1 = T2 = 300 and generate 90,000 samples. For the conditional case, we set T1 = 10,000 and T2 = 1, the latter because μ0 and μ1 are set in Step 2.
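A compact sketch of the unconditional Monte Carlo loop (Steps 1–8) is given below under the assumptions α0 = α1 = 1/2 and threshold c = 0. Because the exact expression (7) of [28] is not reproduced in this section, the Bayesian MMSE estimate is approximated here by evaluating the true-error formula (3) at the posterior means of μ0 and μ1; this stand-in, and all function and variable names, are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def mc_unconditional(m0, m1, nu0, nu1, Sigma, n0, n1, T1, T2, c=0.0, seed=1):
    """Monte Carlo loop of Steps 1-8 (unconditional case).  NOTE: eps_hat_B is
    approximated by plugging the posterior means of mu_0, mu_1 into the
    true-error formula (3); this is only a stand-in for (7) of [28]."""
    rng = np.random.default_rng(seed)
    L, Si = np.linalg.cholesky(Sigma), np.linalg.inv(Sigma)
    p = len(m0)

    def err(xb0, xb1, a0, a1):  # (eps^0 + eps^1)/2 for a classifier built on (xb0, xb1)
        d, mid = Si @ (xb0 - xb1), 0.5 * (xb0 + xb1)
        s = np.sqrt((xb0 - xb1) @ d)
        return 0.5 * (norm.cdf((c - (a0 - mid) @ d) / s) + norm.cdf(((a1 - mid) @ d - c) / s))

    true_err, est_err = [], []
    for _ in range(T2):
        # Step 2 (unconditional): mu_i ~ N(m_i, Sigma/nu_i)
        mu0 = m0 + L @ rng.standard_normal(p) / np.sqrt(nu0)
        mu1 = m1 + L @ rng.standard_normal(p) / np.sqrt(nu1)
        for _ in range(T1):
            # Step 3: the classifier (2) depends on the data only through the
            # sample means, so it suffices to draw xbar_i ~ N(mu_i, Sigma/n_i).
            xb0 = mu0 + L @ rng.standard_normal(p) / np.sqrt(n0)
            xb1 = mu1 + L @ rng.standard_normal(p) / np.sqrt(n1)
            true_err.append(err(xb0, xb1, mu0, mu1))            # Steps 4 and 6
            pm0 = (nu0 * m0 + n0 * xb0) / (nu0 + n0)            # Step 5: posterior means
            pm1 = (nu1 * m1 + n1 * xb1) / (nu1 + n1)
            est_err.append(err(xb0, xb1, pm0, pm1))
    return np.array(true_err), np.array(est_err)
```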
Figure 1 treats Raudys-type finite-sample approximations, including the RMS. Figure 1(a) compares the first moments obtained from equations (22) and (33). It presents ESn[ε̂B|μ] and Eμ,Sn[ε̂B] computed by Monte Carlo estimation and the analytical expressions. The label “FSA BE Uncond” identifies the curve of Eμ,Sn[ε̂B], the unconditional expected estimated error obtained from the finite-sample approximation, which according to the basic theory is equal to Eμ,Sn[ε]. The labels “FSA BE Cond” and “FSA TE Cond” show the curves of ESn[ε̂B|μ], the conditional expected estimated error, and ESn[ε|μ], the conditional expected true error, respectively, both obtained using the analytic approximations. The curves obtained from Monte Carlo estimation are identified by “MC” labels. The analytic curves in Figure 1(a) show substantial agreement with the Monte Carlo approximation.
Figure 1.
Comparison of conditional and unconditional performance metrics of ε̂B, using asymptotically exact finite-sample approximations, with Monte Carlo estimates as a function of sample size. (a) Expectations; the asymptotic unconditional expectation of ε is not plotted because ε̂B is unconditionally unbiased. (b) Second and mixed moments. (c) Conditional and unconditional variance of the deviation from the true error. (d) Conditional RMS of estimation, RMSSn[ε̂B|μ], and unconditional RMS of estimation, RMSμ,Sn[ε̂B]. Panels (a)–(d) correspond to the same scenario, in which the dimension, p, is 15 and 100, ν0 = ν1 = 50, mi = μi + 0.01μi with μ0 = −μ1, and Bayes error = 0.1586.
To obtain the second moments, Vard[ε̂], and RMS[ε̂B] as defined in (1), we use equations (52), (54), (55), (59), (63), (84), (85), (86) for the conditional case and (74), (76), (77), (79), (82), (88), (90), (91) for the unconditional case. Figures 1(b), 1(c), and 1(d) compare the Monte Carlo estimation to the finite-sample approximations obtained for the second/mixed moments, Vard[ε̂], and RMS[ε̂B], respectively. The labels are interpreted similarly to those in Figure 1(a), but for the second/mixed moments instead. For example, “MC BE×TE Uncond” identifies the MC curve of Eμ,Sn[ε̂Bε]. Figures 1(b), 1(c), and 1(d) show that the finite-sample approximations for the conditional and unconditional second/mixed moments, variance of deviation, and RMS are quite accurate (close to the MC values).
While Figure 1 shows the accuracy of the Raudys-type finite-sample approximations, figures in the Supplementary Material show the comparison between the finite-sample approximations obtained directly from Theorems 1–6, i.e. equations (29), (57), (70), (73), (76), (102), and (103), and Monte Carlo estimation.
7. Examination of the Raudys-type RMS Approximation
Equations (18), (24), (53), (56), and (63) show that RMSSn[ε̂B|μ] is a function of 14 variables: p, n0, n1, β0, β1, δ2, ηm0,μ1, ηm0,μ0, ηm1,μ0, ηm1,μ1, ηm0,μ0,μ0,μ1, ηm0,μ0,m1,μ0, ηm1,μ1,m0,μ1, ηm1,μ1,μ1,μ0. Studying a function of this many variables is complicated, especially because restricting some variables can constrain others. We make several simplifying assumptions to reduce the complexity. We let n0 = n1 and β0 = β1 = β, and assume very informative priors in which m0 = μ0 and m1 = μ1. Using these assumptions, RMSSn[ε̂B|μ] is only a function of p, n, β, and δ2. We let p ∈ [4, 200], n ∈ [40, 200], β ∈ {0.5, 1, 2}, and two values of δ2, corresponding to Bayes errors of 0.158 and 0.022. Figure 2(a) shows plots of RMSSn[ε̂B|μ] as a function of p, n, β, and δ2. These show that for smaller distance between the classes, that is, for smaller δ2 (larger Bayes error), the RMS is larger, and as the distance between the classes increases, the RMS decreases. Furthermore, we see that in situations where very informative priors are available, i.e. m0 = μ0 and m1 = μ1, relying more on the data can have a detrimental effect on the RMS. Indeed, the plots in the top row (for β = 0.5) have larger RMS than the plots in the bottom row of the figure (for β = 2).
Figure 2.
(a) The conditional RMS of estimation, i.e. RMSSn[ε̂B|μ], as a function of p < 200 and n < 200. From top to bottom, the rows correspond to β = 0.5, 1, 2, respectively. From left to right, the columns correspond to , respectively. (b) The unconditional RMS of estimation, i.e. RMSμ,Sn[ε̂B], as a function of p < 1000 and n < 2000. From top to bottom, the rows correspond to β = 0.5, 1, 2, respectively. From left to right, the columns correspond to , respectively.
Using the RMS expressions enables finding the necessary sample size to insure a given RMSSn[ε̂B|μ] by using the same methodology as developed for the resubstitution and leave-one-out error estimators in [16, 26]. The plots in Figure 2(a) (as well as other unshown plots) show that, with m0 = μ0 and m1 = μ1, the RMS is a decreasing function of δ2. Therefore, a sample size that guarantees the RMS being less than a predetermined value τ at the smallest δ2 of interest insures that RMSSn[ε̂B|μ] < τ for any larger δ2. Let the desired bound be τ. From equations (52), (54), (55), (59), (63), (84), (85), and (86), we can compute the RMS, denoted κε̂(n, p, β), and increase n until κε̂(n, p, β) < τ (a sketch of this search follows Table 1). Table 1 (β = 1: Conditional) shows the minimum number of sample points needed to guarantee a predetermined conditional RMS for the whole range of δ2 considered (other β shown in the Supplementary Material). A larger dimensionality, a smaller τ, and a smaller β result in a larger necessary sample size for having κε̂(n, p, β) < τ.
Table 1.
Minimum sample size, n, to satisfy κε̂(n, p, β) < τ.
| τ | p = 2 | p = 4 | p = 8 | p = 16 | p = 32 | p = 64 | p = 128 |
|---|---|---|---|---|---|---|---|
| β = 1: Conditional | |||||||
| 0.1 | 14 | 22 | 38 | 70 | 132 | 256 | 506 |
| 0.09 | 18 | 28 | 48 | 86 | 164 | 318 | 626 |
| 0.08 | 24 | 36 | 60 | 110 | 208 | 404 | 796 |
| 0.07 | 32 | 48 | 80 | 144 | 272 | 530 | 1044 |
| 0.06 | 44 | 64 | 108 | 196 | 372 | 722 | 1424 |
| 0.05 | 62 | 94 | 158 | 284 | 538 | 1044 | 2056 |
| β = 1: Unconditional | |||||||
| 0.025 | 108 | 108 | 106 | 102 | 92 | 72 | 2 |
| 0.02 | 172 | 170 | 168 | 164 | 156 | 138 | 78 |
| 0.015 | 308 | 306 | 304 | 300 | 292 | 274 | 236 |
| 0.01 | 694 | 694 | 690 | 686 | 678 | 662 | 628 |
| 0.005 | 2790 | 2786 | 2782 | 2776 | 2768 | 2752 | 2720 |
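The incremental search behind Table 1 can be sketched as follows; rms_fn stands for the finite-sample RMS approximation κε̂(n, p, β) assembled from the equations cited above and must be supplied by the user (the toy callable below is only a placeholder, not the paper's κε̂).

```python
import numpy as np

def min_sample_size(rms_fn, p, beta, tau, n_max=5000):
    """Smallest even n with rms_fn(n, p, beta) < tau, scanning n = 4, 6, 8, ..."""
    for n in range(4, n_max + 1, 2):
        if rms_fn(n, p, beta) < tau:
            return n
    return None  # the bound is not attainable within n_max

# Placeholder RMS surrogate, decreasing in n -- NOT the paper's kappa:
toy_rms = lambda n, p, beta: np.sqrt(p / (beta * n))
print(min_sample_size(toy_rms, p=16, beta=1.0, tau=0.07))
```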
Turning to the unconditional RMS, equations (34), (75), (78), (83), (89), and (92) show that RMSμ,Sn[ε̂B] is a function of 6 variables: p, n0, n1, ν0, ν1, and δm2. Figure 2(b) shows plots of RMSμ,Sn[ε̂B] as a function of p, n, β, and δm2, assuming n0 = n1 and β0 = β1 = β. Note that setting the values of n and β fixes the value of ν0 = ν1 = ν in the corresponding expressions for RMSμ,Sn[ε̂B]. Due to the complex shape of RMSμ,Sn[ε̂B], we consider a large range of n and p. The plots show that a smaller distance between the prior distributions (smaller δm2) corresponds to a larger unconditional RMS of estimation; as this distance increases, the RMS decreases. Furthermore, Figure 2(b) (and other unshown plots) demonstrates an interesting phenomenon in the shape of the RMS: for each p, the RMS first increases as a function of sample size and then decreases. We further observe that, with fixed p, for smaller β this “peaking phenomenon” happens at larger n; on the other hand, with fixed β, for larger p peaking happens at larger n. These observations are presented in Figure 3, which shows curves obtained by cutting the 3D plots in the left column of Fig. 2(b) at a few dimensions. This figure shows that, for p = 900 and β = 2, adding more sample points at first increases the RMS abruptly, up to a maximum at n = 140, after which the RMS starts to decrease.
Figure 3.
RMSμ,Sn[ε̂B] peaking phenomenon as a function of sample size. These plots are obtained by cutting the 3D plots in the left column of Fig. 2(b) at a few dimensionalities. From top to bottom, the rows correspond to β = 0.5, 1, 2, respectively. The solid black curves indicate RMSμ,Sn[ε̂B] computed from the analytical results and the red dashed curves show the same results computed by means of Monte Carlo simulations. Due to the computational burden of estimating the curves by means of Monte Carlo studies, the simulations are limited to n < 500 and p = 10, 70.
One may use the unconditional scenario to determine the minimum necessary sample size for a desired RMSμ,Sn[ε̂B]. In fact, this is the more practical approach because in practice one does not know μ. Since the unconditional RMS shows a decreasing trend in terms of δm2, we use the previous technique to find the minimum necessary sample size to guarantee a desired unconditional RMS. Table 1 (β = 1: Unconditional) shows the minimum sample size that guarantees the unconditional RMS being less than a predetermined value τ, i.e., insures that RMSμ,Sn[ε̂B] < τ over the range of δm2 considered (other β shown in the Supplementary Material).
To examine the accuracy of the required sample size that satisfies κε̂(n, p, β) < τ for both conditional and unconditional settings, we have performed a set of experiments (see Supplementary Material). The results of these experiments confirm the efficacy of Table 1 in determining the minimum sample size required to insure the RMS is less than a predetermined value τ.
8. Conclusion
Using realistic assumptions about sample size and dimensionality, standard statistical techniques are generally incapable of estimating the error of a classifier in small-sample classification. Bayesian MMSE error estimation facilitates more accurate estimation by incorporating prior knowledge. In this paper, we have characterized two sets of performance metrics for Bayesian MMSE error estimation in the case of LDA in a Gaussian model: (1) the first, second, and cross moments of the estimated and actual errors conditioned on a fixed feature-label distribution, which in turn give us knowledge of the conditional RMS, RMSSn[ε̂B|θ]; and (2) the unconditional moments and, therefore, the unconditional RMS, RMSθ,Sn[ε̂B]. We set up a series of conditions, called the Bayesian-Kolmogorov asymptotic conditions, that allow us to characterize the performance metrics of Bayesian MMSE error estimation in an asymptotic sense. The Bayesian-Kolmogorov asymptotic conditions are based on the assumption of increasing n, p, and certainty parameter ν, with an arbitrary constant limiting ratio between n and p, and between n and ν. To our knowledge, these conditions permit, for the first time, application of Kolmogorov-type asymptotics in a Bayesian setting. The asymptotic expressions proposed in this paper result directly in finite-sample approximations of the performance metrics. Improved finite-sample accuracy is achieved via newly proposed Raudys-type approximations. The asymptotic theory is used to prove that these approximations are, in fact, asymptotically exact under the Bayesian-Kolmogorov asymptotic conditions. Using the derived analytical expressions, we have examined the performance of the Bayesian MMSE error estimator in relation to the feature-label distribution, prior knowledge, sample size, and dimensionality. We have used the results to determine the minimum sample size guaranteeing a desired level of error estimation accuracy.
As noted in the Introduction, a natural next step in error estimation theory is to remove the known-covariance condition, but as also noted, this may prove to be difficult.
Supplementary Material
Acknowledgments
This work was partially supported by the NIH grants 2R25CA090301 (Nutrition, Biostatistics, and Bioinformatics) from the National Cancer Institute.
Appendix
Proof of Theorem 1
We explain this proof in detail as some steps will be used in later proofs. Let
| (93) |
where is defined in (8). Then
| (94) |
For i, j = 0, 1 and i ≠ j, define the following random variables:
| (95) |
The variance of yi given μ does not depend on μ. Therefore, under the Bayesian-Kolmogorov conditions stated in (10), and do not appear in the limit. Only matters, which vanishes in the limit as follows:
| (96) |
To find the variance of zi and zij we can first transform zi and zij into quadratic forms and then use the results of [34] to find the variance of quadratic functions of Gaussian random variables. Specifically, from [34], for y ~ N(μ, Σ) and A a symmetric positive definite matrix, Var[yT Ay] = 2tr((AΣ)2) + 4μT AΣAμ, with tr being the trace operator. Therefore, after some algebraic manipulations, we obtain
| (97) |
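The quadratic-form variance identity from [34] quoted above can be verified numerically; a small sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3, 0.0], [0.3, 1.5, 0.2], [0.0, 0.2, 1.0]])      # symmetric
Sigma = np.array([[1.0, 0.1, 0.1], [0.1, 1.0, 0.1], [0.1, 0.1, 1.0]])
mu = np.array([0.5, -1.0, 0.25])

# Analytic variance of y'Ay for y ~ N(mu, Sigma): 2 tr((A Sigma)^2) + 4 mu'A Sigma A mu
AS = A @ Sigma
var_analytic = 2.0 * np.trace(AS @ AS) + 4.0 * mu @ A @ Sigma @ A @ mu

# Monte Carlo check (should agree to within sampling error of about a percent)
y = rng.multivariate_normal(mu, Sigma, size=200_000)
q = np.einsum('ij,jk,ik->i', y, A, y)
var_mc = q.var()
```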
From the Cauchy-Schwarz inequality , and for i, j, k = 0, 1, i ≠ j, Furthermore, and . Putting this together and following the same approach for yields . In general (via Chebyshev’s inequality), limn→∞Var[Xn] = 0 implies convergence in probability of Xn to limn→∞ E[Xn]. Hence, since , for i, j = 0, 1 and i ≠ j,
| (98) |
Now let
| (99) |
where δ̂2 = (x̄0 − x̄1)T Σ−1(x̄0 − x̄1). Similar to deriving (97) via the variance of quadratic forms of Gaussian variables, we can show
| (100) |
Thus,
| (101) |
As before, from Chebyshev’s inequality it follows that
| (102) |
By the Continuous Mapping Theorem (continuous functions preserve convergence in probability),
| (103) |
The Dominated Convergence Theorem (|Xn| ≤ B for some B > 0 and Xn → X in probability implies E[Xn] → E[X]), via the boundedness of Φ(.), leads to completion of the proof:
| (104) |
Proof of Theorem 2
We first prove the convergence of the quantity defined in (94). To do so we use
| (105) |
To compute the first term on the right hand side, we have
| (106) |
For i, j = 0, 1 and i ≠ j define the following random variables:
| (107) |
The variables defined in (107) can be obtained by replacing the x̄i’s with the μi’s in (95), where x̄i ~ N(μi, Σ/ni) and μi ~ N(mi, Σ/νi). Replacing μi with mi and ni with νi in (96) and (97) yields
| (108) |
By Cauchy-Schwarz, , and . Hence,
Now consider the second term on the right hand side of (105). The covariance of a function of Gaussian random variables can be computed from results of [35]. For instance,
| (109) |
From (109) and the independence of x̄0 and x̄1,
| (110) |
Via (108), (109), and (110), the inner variance in the second term on the right hand side of (105) is
| (111) |
Now, again from the results of [35],
| (112) |
From (111) and (112), some algebraic manipulations yield
| (113) |
From (113) we see that the variance vanishes in the limit. In sum, and similar to the use of Chebyshev’s inequality in the proof of Theorem 1, we get
| (114) |
with Hi defined in (29).
On the other hand, for D̂i defined in (99) we can write
| (115) |
From similar expressions as in (112) for , we get . Moreover, is obtained from (100) by replacing ni with νi, and with . Thus, from (99),
| (116) |
Furthermore, since , from (101),
| (117) |
Hence, similar to (114), we obtain
| (118) |
with F defined in (29). Similar to the proof of Theorem 1, by using the Continuous Mapping Theorem and the Dominated Convergence Theorem, we can show that
| (119) |
and the result follows.
Proof of Theorem 3
We start from
| (120) |
which was shown in (36). Here we characterize the conditional probability inside ESn [.]. From the independence of z, z′, x̄0, and x̄1,
| (121) |
where N(·, ·) denotes the bivariate Gaussian density function and the quantities involved, including D̂, are defined in (94) and (99). Thus,
| (122) |
Similar to the proof of Theorem 1, we get
| (123) |
Similarly, we obtain , and the results follow.
References
1. Hills M. Allocation rules and their error rates. J Royal Statist Soc Ser B (Methodological). 1966;28:1–31.
2. Foley D. Considerations of sample and feature size. IEEE Trans Inf Theory. 1972;IT-18:618–626.
3. Sorum MJ. Estimating the conditional probability of misclassification. Technometrics. 1971;13:333–343.
4. McLachlan GJ. An asymptotic expansion of the expectation of the estimated error rate in discriminant analysis. Australian Journal of Statistics. 1973;15:210–214.
5. Moran M. On the expectation of errors of allocation associated with a linear discriminant function. Biometrika. 1975;62:141–148.
6. Berikov VB. A priori estimates of recognition quality for discrete features. Pattern Recogn and Image Analysis. 2002;12:235–242.
7. Berikov VB, Litvinenko AG. The influence of prior knowledge on the expected performance of a classifier. Pattern Recogn Lett. 2003;24:2537–2548.
8. Braga-Neto U, Dougherty ER. Exact performance measures and distributions of error estimators for discrete classifiers. Pattern Recogn. 2005;38:1799–1814.
9. Glick N. Additive estimators for probabilities of correct classification. Pattern Recogn. 1978;10:211–222.
10. Fukunaga K, Hayes RR. Estimation of classifier performance. IEEE Trans Pattern Anal Mach Intell. 1989;11:1087–1101.
11. Raudys S. Statistical and Neural Classifiers: An Integrated Approach to Design. Springer-Verlag; London: 2001.
12. Zollanvari A, Braga-Neto UM, Dougherty ER. On the sampling distribution of resubstitution and leave-one-out error estimators for linear classifiers. Pattern Recogn. 2009;42:2705–2723.
13. Braga-Neto UM, Dougherty ER. Exact correlation between actual and estimated errors in discrete classification. Pattern Recogn Lett. 2010;31:407–413.
14. Zollanvari A, Braga-Neto UM, Dougherty ER. Exact representation of the second-order moments for resubstitution and leave-one-out error estimation for linear discriminant analysis in the univariate heteroskedastic Gaussian model. Pattern Recogn. 2012;45:908–917.
15. Zollanvari A, Braga-Neto UM, Dougherty ER. Joint sampling distribution between actual and estimated classification errors for linear discriminant analysis. IEEE Trans Inf Theory. 2010;56:784–804.
16. Zollanvari A, Braga-Neto UM, Dougherty ER. Analytic study of performance of error estimators for linear discriminant analysis. IEEE Trans Sig Proc. 2011;59:4238–4255.
17. Wyman F, Young D, Turner D. A comparison of asymptotic error rate expansions for the sample linear discriminant function. Pattern Recogn. 1990;23:775–783.
18. Pikelis V. Comparison of methods of computing the expected classification errors. Automat Remote Control. 1976;5:59–63.
19. Raudys S. On the amount of a priori information in designing the classification algorithm. Technical Cybernetics. 1972;4:168–174. In Russian.
20. Deev A. Representation of statistics of discriminant analysis and asymptotic expansion when space dimensions are comparable with sample size. Dokl Akad Nauk SSSR. 1970;195:759–762. In Russian.
21. Fujikoshi Y. Error bounds for asymptotic approximations of the linear discriminant function when the sample size and dimensionality are large. J Multivariate Anal. 2000;73:1–17.
22. Serdobolskii V. Multivariate Statistical Analysis: A High-Dimensional Approach. Springer; 2000.
23. Bickel PJ, Levina E. Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations. Bernoulli. 2004;10:989–1010.
24. Raudys S, Young DM. Results in statistical discriminant analysis: A review of the former Soviet Union literature. J Multivariate Anal. 2004;89:1–35.
25. Dougherty E, Sima C, Hua J, Hanczar B, Braga-Neto U. Performance of error estimators for classification. Current Bioinformatics. 2010;5:53–67.
26. Dougherty ER, Zollanvari A, Braga-Neto UM. The illusion of distribution-free small-sample classification in genomics. Current Genomics. 2011;12:333–341. doi: 10.2174/138920211796429763.
27. Dalton L, Dougherty ER. Bayesian minimum mean-square error estimation for classification error–Part I: Definition and the Bayesian MMSE error estimator for discrete classification. IEEE Trans Sig Proc. 2011;59:115–129.
28. Dalton L, Dougherty ER. Bayesian minimum mean-square error estimation for classification error–Part II: Linear classification of Gaussian models. IEEE Trans Sig Proc. 2011;59:130–144.
29. Dalton L, Dougherty ER. Exact sample conditioned MSE performance of the Bayesian MMSE estimator for classification error–Part I: Representation. IEEE Trans Sig Proc. 2012;60:2575–2587.
30. Dalton L, Dougherty ER. Exact sample conditioned MSE performance of the Bayesian MMSE estimator for classification error–Part II: Consistency and performance analysis. IEEE Trans Sig Proc. 2012;60:2588–2603.
31. Anderson T. Classification by multivariate analysis. Psychometrika. 1951;16:31–50.
32. John S. Errors in discrimination. Ann Math Statist. 1961;32:1125–1144.
33. Devroye L, Gyorfi L, Lugosi G. A Probabilistic Theory of Pattern Recognition. Springer; New York: 1996.
34. Kan R. From moments of sum to moments of product. J Multivariate Anal. 2008;99:542–554.
35. Bao Y, Ullah A. Expectation of quadratic forms in normal and nonnormal variables with econometric applications. J Statist Plann Inference. 2010;140:1193–1205.