Summary
For high-dimensional classification, it is well known that naively performing the Fisher discriminant rule leads to poor results due to diverging spectra and noise accumulation. Therefore, researchers proposed independence rules to circumvent the diverging spectra, and sparse independence rules to mitigate the issue of noise accumulation. However, in biological applications, there are often a group of correlated genes responsible for clinical outcomes, and the use of the covariance information can significantly reduce misclassification rates. In theory the extent of such error rate reductions is unveiled by comparing the misclassification rates of the Fisher discriminant rule and the independence rule. To materialize the gain based on finite samples, a Regularized Optimal Affine Discriminant (ROAD) is proposed. ROAD selects an increasing number of features as the regularization relaxes. Further benefits can be achieved when a screening method is employed to narrow the feature pool before hitting the ROAD. An efficient Constrained Coordinate Descent algorithm (CCD) is also developed to solve the associated optimization problems. Sampling properties of oracle type are established. Simulation studies and real data analysis support our theoretical results and demonstrate the advantages of the new classification procedure under a variety of correlation structures. A delicate result on continuous piecewise linear solution path for the ROAD optimization problem at the population level justifies the linear interpolation of the CCD algorithm.
Keywords: High Dimensional Classification, LDA, Regularized Optimal Affine Discriminant, Fisher Discriminant, Independence Rule
1. Introduction
Technological innovations have had deep impact on society and on various areas of scientific research. High-throughput data from microarray and proteomics technologies are frequently used in many contemporary statistical studies. In the case of microarray data, the dimensionality is frequently in thousands or beyond, while the sample size is typically in the order of tens. The large-p-small-n scenario poses challenges for the classification problems. We refer to Fan and Lv (2010) for an overview of statistical challenges associated with high dimensionality.
When the feature space dimension p is very high compared to the sample size n, the Fisher discriminant rule performs poorly due to diverging spectra as demonstrated by Bickel and Levina (2004). These authors showed that the independence rule in which the covariance structure is ignored performs better than the naive Fisher rule (NFR) in the high dimensional setting. Fan and Fan (2008) demonstrated further that even for the independence rules, a procedure using all the features can be as poor as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. As a result, Fan and Fan (2008) proposed the Features Annealed Independence Rule (FAIR) that selects a subset of important features for classification. Dudoit et al. (2002) reported that for microarray data, ignoring correlations between genes leads to better classification results. Tibshirani et al. (2002) proposed the Nearest Shrunken Centroid (NSC) which likewise employs the working independence structure. Similar problems have also been studied in the machine learning community; see, for example, Domingos and Pazzani (1997) and Lewis (1998).
In microarray studies, correlation among different genes is an essential characteristic of the data and usually not negligible. Other examples include proteomics and metabolomics data, where correlation among biomarkers is commonplace. More details can be found in Ackermann and Strimmer (2009). Intuitively, the independence assumption among genes leads to loss of critical information and hence is suboptimal. We believe that in many cases, the crucial point is not whether to consider correlations, but how to incorporate the covariance structure into the analysis while guarding against diverging spectra and the significant noise accumulation effect.
The setup of the objective classification problem is now introduced. We assume in the following that the variability of data under consideration can be described reasonably well by the means and variances. To be more precise, suppose that random variables representing two classes 𝒞1 and 𝒞2 follow p-variate normal distributions: X|Y = 1 ~ 𝒩p(μ1,Σ) and X|Y = 2 ~ 𝒩p(μ2,Σ) respectively. Moreover, assume ℙ(Y = 1) = 1/2. This Gaussian discriminant analysis setup is known for its good performance despite its rigid model structure. For any linear discriminant rule
(1) δw(X) = 𝕀{wT(X − μa) > 0},
where μa = (μ2 + μ1)/2, and 𝕀 denotes the indicator function, with value 1 corresponding to assigning X to class 𝒞2 and value 0 to class 𝒞1, the misclassification rate of the (pseudo) classifier δw is
(2) W(δw) = ½P1(δw(X) = 1) + ½P2(δw(X) = 0) = 1 − Φ(wTμd/(wTΣw)1/2),
where μd = (μ2 − μ1)/2, and Pi is the conditional distribution of X given its class label i. We will focus on such linear classifier δw(·), and the mission is to find a good data projection direction w. Note that the Fisher discriminant
(3) δF(X) = 𝕀{(X − μa)TΣ−1μd > 0}
is the Bayes rule. There is an equivalent derivation of the Fisher discriminant that does not involve Gaussian assumptions. We skip it for now and come back to this point when we extend our method to multi-class learning scenarios. There are two fundamental difficulties in applying the Fisher discriminant, whose misclassification rate is
(4) W(δF) = 1 − Φ((μdTΣ−1μd)1/2).
The first difficulty arises from the noise accumulation effect in estimating the population centroids (Fan and Fan, 2008) when p is large. The second challenge is more severe: estimating the inverse of the covariance matrix Σ when p > n (Bickel and Levina, 2004). As a result, much previous research has focused on the independence rules, which act as if Σ were diagonal. However, correlation matters!
To illustrate this point, consider a case when p = 2. These two features can be selected from the original thousands of features, and we can estimate the correlation between the two variables with reasonable accuracy. Let Σ be the 2 × 2 correlation matrix with off-diagonal element ρ, where ρ ∈ [0, 1), and μd = (μ1, μ2)T. Without loss of generality, assume |μ1| ≥ |μ2| > 0. The misclassification rate of the Fisher discriminant depends on
(5) Δp(ρ) = μdTΣ−1μd = (μ12 − 2ρμ1μ2 + μ22)/(1 − ρ2).
Note that
dΔp(ρ)/dρ = 2[ρ(μ12 + μ22) − (1 + ρ2)μ1μ2]/(1 − ρ2)2.
Therefore, when μ1μ2 < 0, Δp(ρ) is increasing for all ρ ∈ [0, 1). On the other hand, when μ1μ2 > 0, Δp(ρ) decreases on [0, μ2/μ1) and increases on (μ2/μ1, 1). Notice that when ρ → 1, Δp → ∞ regardless of the sign of μ1μ2, which in turn leads to a vanishing classification error. On the other hand, if we use the independence rule (also called the naive Bayes rule), the optimal misclassification rate
(6) W(δNB) = 1 − Φ(μdTD−1μd/(μdTD−1ΣD−1μd)1/2), where D = diag(Σ),
depends on Γ2(ρ) = (μ12 + μ22)2/(μ12 + μ22 + 2ρμ1μ2) (here D = I2 since both variables have unit variance), which is monotonically decreasing for ρ ∈ [0, 1) when μ1 and μ2 have the same sign, with the limiting ratio Γ2(1)/Γ2(0) = (μ12 + μ22)/(μ1 + μ2)2 smaller than unity. Hence, the optimal classification error using the independence rule actually increases as the correlation among features increases.
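To make this two-feature comparison concrete, the short sketch below evaluates both error rates numerically using formula (2); the particular values μd = (2, 0.5)T and the grid of ρ values are illustrative choices rather than numbers taken from the text.

```python
import numpy as np
from scipy.stats import norm

def error_rate(w, mu_d, Sigma):
    """Misclassification rate of delta_w, as in (2): 1 - Phi(w'mu_d / sqrt(w'Sigma w))."""
    return 1.0 - norm.cdf(w @ mu_d / np.sqrt(w @ Sigma @ w))

mu_d = np.array([2.0, 0.5])                  # illustrative (mu_1, mu_2); not from the paper
for rho in [0.0, 0.5, 0.9]:
    Sigma = np.array([[1.0, rho], [rho, 1.0]])
    w_fisher = np.linalg.solve(Sigma, mu_d)  # Fisher direction Sigma^{-1} mu_d
    w_nb = mu_d                              # naive Bayes direction (unit marginal variances)
    print(f"rho={rho:.1f}  Fisher={error_rate(w_fisher, mu_d, Sigma):.4f}  "
          f"naive Bayes={error_rate(w_nb, mu_d, Sigma):.4f}")
```

As ρ grows, the Fisher error shrinks toward zero while the naive Bayes error increases, in line with the discussion above.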
The above simple example shows that by incorporating correlation information, the gain in terms of classification error can be substantial. Elaboration on this point in more realistic scenarios is provided in Section 2. Now it seems wise to use at least part of the covariance structure to improve the performance of a classifier. So there is a need to estimate the covariance matrix Σ. Without structural assumptions on Σ, the pooled sample covariance Σ̂ is one natural estimate. But for p > n, it is not considered a good estimate of Σ in general. We are lucky here because our mission is not constructing a good estimate of the covariance matrix, but finding a good direction w that leads to a good classifier. To mimic the optimal data projection direction Σ−1μd, we do not adopt a direct plug-in approach, simply because it is unlikely that a product is a good estimate when at least one of its components is not. Instead, we find the data projection direction w by directly minimizing the classification error subject to a capacity constraint on w. From a broad spectrum of simulated and real data analyses, we are convinced that this approach leads to a robust and efficient sparse linear classifier.
Admittedly, our work is far from the first to use covariance for classification; support vector machines (Vapnik, 1995), for example, implicitly utilize covariance between covariates. Another notable work is “shrunken centroids regularized discriminant analysis” (SCRDA) (Guo et al., 2005), which calls for a version of the regularized sample covariance matrix Σ̂reg and soft-thresholds the resulting regularized discriminant scores. Shao et al. (2011) consider a sparse linear discriminant analysis, assuming sparsity on both the covariance matrix and the mean difference vector so that they can be regularized. They show that such a regularized estimator is asymptotically optimal under some conditions. However, to the best of our knowledge, this work is the first to select features by directly optimizing the misclassification rates, to explicitly use un-regularized sample covariance information, and to establish the oracle inequality and risk approximation theory.
There is a huge literature on high dimensional classification. Examples include principal component analysis in Bair et al. (2006) and Zou et al. (2006), partial least squares in Nguyen and Rocke (2002), Huang (2003) and Boulesteix (2004), and sliced inverse regression in Li (1991) and Antoniadis et al. (2003).
The rest of our paper is organized as follows. Section 2 provides some insights on the performances of naive Bayes, Fisher discriminant and restricted Fisher discriminants. In Section 3, we propose the Regularized Optimal Affine Discriminant (ROAD) and variants of ROAD. An efficient algorithm Constrained Coordinate Descent (CCD) is constructed in Section 4. Main risk approximation results and continuous piecewise linear property of the solution path are established in Section 5. We conduct simulation and empirical studies in Section 6. A short discussion is given in Section 7, and all proofs are relegated to the appendix.
2. Naive Bayes and Fisher Discriminant
To compare the naive Bayes and Fisher discriminant at the population level, we assume without loss of generality that variables have been marginally standardized so that Σ is a correlation matrix. Recall that the naive Bayes discriminant has error rate (6) and the Fisher discriminant has error rate (4). Let Γp = (μdTμd)2/(μdTΣμd). Denote by λ1, ⋯, λp and ξ1, ⋯, ξp the eigenvalues and corresponding eigenvectors of the matrix Σ. Decompose
(7) μd = ∑j=1p ajξj,
where aj = ξjTμd, j = 1, ⋯, p, are the coefficients of μd in this new orthonormal basis {ξ1, ⋯, ξp}. Using the decomposition (7), we have
(8) Δp = ∑j=1p aj2/λj and Γp = (∑j=1p aj2)2/(∑j=1p aj2λj).
The relative efficiency of the Fisher discriminant over the naive Bayes is characterized by Δp/Γp. By the Cauchy-Schwarz inequality,
Δp/Γp ≥ 1.
The naive Bayes method performs as well as the Fisher discriminant only when μd is an eigenvector of Σ.
In general, Δp/Γp can be much larger than unity. Since Σ is the correlation matrix, ∑j=1p λj = tr(Σ) = p. If μd is equally loaded on the ξj’s, namely aj2 is the same for all j, then the ratio
(9) Δp/Γp = p−1∑j=1p λj−1.
More generally, if a1, ⋯, ap are realizations from a distribution with second moment σ2, then by the law of large numbers,
p−1∑j=1p aj2/λj ≈ σ2(p−1∑j=1p λj−1) and p−1∑j=1p aj2λj ≈ σ2(p−1∑j=1p λj).
Hence, (9) holds approximately in this case. In other words, the right hand side of (9) is approximately the relative efficiency of the Fisher discriminant over the naive Bayes. Now suppose further that half of the eigenvalues of Σ are c and the other half are 2 − c. Then, the right hand side of (9) is (c−1 + (2 − c)−1)/2. For example, when the condition number is 10, this ratio is about 3. A high ratio translates into a large difference in error rates: 1 − Φ(Γp1/2) for the independence rule is much larger than 1 − Φ(Δp1/2) for the Fisher discriminant. For example, when Γp1/2 = 0.5 and Δp1/2 = 1.5, we have 30.9% and 6.7% error rates respectively for the naive Bayes and Fisher discriminant.
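As a quick check of the arithmetic behind the “about 3” claim (added here for verification; the intermediate steps are not displayed in the text):

```latex
% Half of the eigenvalues equal c and the other half equal 2 - c, so the condition
% number is (2 - c)/c. Setting (2 - c)/c = 10 gives c = 2/11, and the right-hand
% side of (9) becomes
\[
\frac{1}{2}\left(\frac{1}{c} + \frac{1}{2 - c}\right)
  = \frac{1}{2}\left(\frac{11}{2} + \frac{11}{20}\right)
  = \frac{121}{40} \approx 3.03 .
\]
```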
To put the above arguments under visual inspection, consider a case in which p = 1000, the subvector μs of the first s = 10 coordinates of μd equals (1, 1, 1, 1, 1, 2, 2, 2, 2, 2)T with all remaining coordinates of μd being zero, and Σ equals the equi-correlation matrix with pairwise correlation ρ. The vector μd simulates the case in which 10 genes out of 1000 express mean differences. Figure 1 depicts the theoretical error rates of the Fisher discriminant and the naive Bayes rule as functions of ρ.
It is not surprising that the Fisher discriminant rule performs significantly better than the naive Bayes as ρ deviates away from zero. The error rate of the naive Bayes actually increases with ρ, whereas the error rate of the Fisher discriminant tends to zero as ρ approaches 1. This phenomenon is the same as what was shown analytically through the toy example in Section 1. To mimic the Fisher discriminant by a plug-in estimator, we need to estimate Σ−1μd with reasonable accuracy. This mission is difficult if not impossible. On the other hand, imitating a weaker oracle is more manageable. For example, when the samples are of reasonable size, we can select the 10 variables with differences in means by applying a two-sample t-test. Restricting to the best linear classifiers based on these s = 10 variables, we have the optimal error rate
1 − Φ((μsTΣs−1μs)1/2),
where the classification rule is δwR with wR = ((Σs−1μs)T, 0p−sT)T, Σs denoting the upper left s × s block of Σ and μs the first s coordinates of μd. The performance of this oracle classifier is depicted by the sub-Fisher (10 features) curve in Figure 1. It performs much better than the naive Bayes method. One can also apply the naive Bayes rule to the restricted feature space, but this method has exactly the same performance as the naive Bayes method in the whole space. Thus, the restricted Fisher discriminant outperforms both the naive Bayes method with restricted features and the naive Bayes method using all features.
Mimicking the performance of the restricted Fisher discriminant is feasible. Instead of estimating a 1000 × 1000 covariance matrix, we only need to gauge a 10 × 10 submatrix. However, this restricted Fisher rule is not powerful enough, as shown in Figure 1. We can improve its performance by including the 10 variables most correlated with the selected features, to further account for the correlation effect, giving rise to a 20-dimensional feature space. Since the variables are equally correlated in this example, we are free to choose any 10 variables among the other 990. The performance of such an enlarged restricted Fisher discriminant is represented by sub-Fisher (20 features) in Figure 1. It performs close to the Fisher discriminant, which uses the whole feature space, and it is feasible to implement with finite samples.
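The theoretical curves behind Figure 1 follow directly from the error-rate formulas; a minimal sketch is given below. The closed-form inverse of the equi-correlation matrix via the Sherman-Morrison formula is an implementation convenience assumed here, not something used in the paper.

```python
import numpy as np
from scipy.stats import norm

def fisher_error(mu, rho):
    """1 - Phi(sqrt(mu' Sigma^{-1} mu)) for the equi-correlation matrix
    Sigma = (1 - rho) I + rho 1 1', inverted with the Sherman-Morrison formula."""
    p = len(mu)
    shift = rho * mu.sum() / ((1.0 - rho) * (1.0 - rho + rho * p))
    inv_mu = mu / (1.0 - rho) - shift              # Sigma^{-1} mu, componentwise
    return 1.0 - norm.cdf(np.sqrt(mu @ inv_mu))

def naive_bayes_error(mu, rho):
    """1 - Phi(mu'mu / sqrt(mu' Sigma mu)) for the same equi-correlation Sigma."""
    quad = (1.0 - rho) * (mu @ mu) + rho * mu.sum() ** 2
    return 1.0 - norm.cdf((mu @ mu) / np.sqrt(quad))

p = 1000
mu_d = np.zeros(p)
mu_d[:10] = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]         # the 10 genes with mean differences

for rho in [0.1, 0.5, 0.9]:
    print(rho,
          fisher_error(mu_d, rho),                                       # Fisher, all 1000 features
          fisher_error(mu_d[:10], rho),                                  # sub-Fisher, 10 features
          fisher_error(np.concatenate([mu_d[:10], np.zeros(10)]), rho),  # sub-Fisher, 20 features
          naive_bayes_error(mu_d, rho))                                  # naive Bayes, all features
```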
3. Regularized Optimal Affine Discriminant
The misclassification rate of the Fisher discriminant is 1 − Φ(Δp1/2), where Δp = μdTΣ−1μd. However, for high dimensional data, it is impossible to achieve such a performance empirically. Among other reasons, the estimated covariance matrix Σ̂ is ill-conditioned or not invertible. One solution is to focus only on the s (≪ p) most important features for classification. Ideally, the best s features should be the ones with the largest Δs among all possibilities, where Δs is the counterpart of Δp when only s variables are considered. Naive search for the best subset of size s is NP-hard. Thus, we develop a regularized method to circumvent these two problems.
3.1. ROAD
Recall that by (2), minimizing the classification error W(δw) is the same as maximizing wTμd/(wTΣw)1/2, which is equivalent to minimizing wTΣw subject to wTμd = 1. We would like to add a penalty function for capacity control. There are many ways to do regularization; for the literature on penalized methods, refer to LASSO (Tibshirani, 1996), SCAD (Fan and Li, 2001), Elastic net (Zou and Hastie, 2005), MCP (Zhang, 2010) and related methods (Zou, 2006; Zou and Li, 2008). As our primary interest is classification error (the risk of the procedure), an L1 constraint ‖w‖1 ≤ c is added for regularization, so the problem can be recast as
(10) wc = argmin{w: wTμd = 1, ‖w‖1 ≤ c} wTΣw.
We name the classifier δwc (·) the Regularized Optimal Affine Discriminant(ROAD). The existence of a feasible solution in (10) dictates
(11) c ≥ 1/‖μd‖∞.
When c is small, we obtain a sparse solution and achieve feature selection using covariance information. When c ≥ ‖Σ−1μd‖1/(μdTΣ−1μd), the L1 constraint is no longer binding and δwc reduces to the Fisher discriminant, which can be denoted by δw∞ (= δF). Therefore we have provided a family of linear discriminants, indexed by c, using from only one feature to all features. In some applications such as portfolio selection, the choice of c reflects the investor’s tolerance upper bound on gross exposure. In other applications, when the user does not have such a preference, the choice of c can be data-driven. To accommodate both application scenarios, we propose a coordinate descent algorithm (Section 4) to implement our ROAD proposal.
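Before the dedicated algorithm of Section 4, it may help to note that (10) is an ordinary convex program. The sketch below hands it to a generic solver; the use of cvxpy and the function name road_direction are assumptions for illustration only, not part of the paper's implementation.

```python
import numpy as np
import cvxpy as cp

def road_direction(Sigma, mu_d, c):
    """Sketch of (10): minimize w'Sigma w subject to w'mu_d = 1 and ||w||_1 <= c."""
    # Use a symmetric square root R (Sigma = R R) so the objective is a sum of squares,
    # which remains well posed when Sigma is only positive semi-definite.
    vals, vecs = np.linalg.eigh(Sigma)
    R = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    w = cp.Variable(len(mu_d))
    problem = cp.Problem(cp.Minimize(cp.sum_squares(R @ w)),
                         [mu_d @ w == 1, cp.norm1(w) <= c])
    problem.solve()
    return w.value
```

Sweeping c from 1/‖μd‖∞ upward traces out the family of discriminants described above, from a single feature to the full Fisher rule.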
3.2. Variants of ROAD
At the sample level, NSC (Tibshirani et al., 2002) and FAIR (Fan and Fan, 2008) both use shrunken versions of the standardized mean difference to find the s features. In the same spirit, we consider the following Diagonal Regularized Optimal Affine Discriminant (D-ROAD) δwcD, where
(12) wcD = argmin{w: wTμd = 1, ‖w‖1 ≤ c} wTDw, with D = diag(Σ).
The D-ROAD will be compared with NSC (Tibshirani et al., 2002) and FAIR (Fan and Fan, 2008) in the simulation studies, and all these independence based rules will be compared with ROAD and its two variants defined below.
A screening-based variant (to be proposed) of ROAD aims at mimicking the performance of sub-Fisher (10 features) in Figure 1. A fast way to select features is the independence screening, which uses the marginal information such as the two-sample t-test. We can also enlarge the selected feature subspace by incorporating the features which are most correlated to what have been chosen. This additional variant of ROAD tracks the performance of sub-Fisher (20 features) in Figure 1. We will refer to the two variants of ROAD as S-ROAD1 and S-ROAD2. More description of these procedures, along with their theoretical properties and numerical investigations, will be detailed in Sections 5 and 6.
A hint of the rationale behind including correlated features that do not show a difference in means between the two classes is revealed through the two-feature example in the introduction. Suppose μ2 = 0. Then, by (5), the misclassification rate of the discriminant using both features is 1 − Φ(|μ1|/(1 − ρ2)1/2), whereas with the first feature alone the misclassification rate is 1 − Φ(|μ1|). Therefore, when the correlation |ρ| is large, using the two correlated features is far more powerful than employing only one feature, even though the second feature has no marginal discrimination power. More intuition is granted by this observation: at the population level, the best s features are not necessarily those with the largest standardized mean differences. In other words, with the two-class Gaussian model in mind, when Σ is the correlation matrix, the most powerful s features for classification are not necessarily the coordinates of μd with the largest absolute values. This is illustrated by the next stylized example.
Let X|Y = 0 ~ 𝒩(μ1, Σ) and X|Y = 1 ~ 𝒩(μ2, Σ), where μ1 = (0, 0, 0)T, μ2 = (4, 0.5, 1)T, and Σ is a given 3 × 3 correlation matrix for the three features.
Suppose the objective is to choose 2 out of the 3 variables for classification. If we rank features by marginal information, for example by the absolute value of the standardized mean differences, then we would choose the 1st and 3rd features. On the other hand, denote by μd,ij the mean difference vector for features i and j and by Σij the covariance matrix of features i and j; then the classification power using features i and j depends on μd,ijTΣij−1μd,ij. A simple calculation of this quantity for each pair shows that the most powerful two features for classification are not the 1st and 3rd.
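A small numerical check of this kind of phenomenon is sketched below. Since the example's covariance matrix is not reproduced above, the matrix Sigma used here is a hypothetical stand-in with features 1 and 2 strongly correlated; it only illustrates how the pairwise quantity μd,ijTΣij−1μd,ij can overturn a marginal ranking.

```python
import numpy as np

mu_d = np.array([4.0, 0.5, 1.0]) / 2        # half mean differences from the example
Sigma = np.array([[1.0, 0.9, 0.0],          # hypothetical correlation matrix, not the
                  [0.9, 1.0, 0.0],          # one used in the paper's example
                  [0.0, 0.0, 1.0]])

def pair_power(i, j):
    """mu_{d,ij}' Sigma_ij^{-1} mu_{d,ij} for features i and j (0-based indices)."""
    idx = [i, j]
    m = mu_d[idx]
    S = Sigma[np.ix_(idx, idx)]
    return m @ np.linalg.solve(S, m)

for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(f"features ({i + 1}, {j + 1}): {pair_power(i, j):.2f}")
# With this Sigma, the pair (1, 2) dominates (1, 3) even though feature 2 has the
# smallest marginal mean difference.
```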
3.3. Extension to Multi-Class
In this section, we outline an extension of ROAD to multi-class classification problems. Suppose that there are K classes, and for j = 1, ⋯, K, the jth class has mean μj and common covariance Σ. Denote the overall mean of the features by μa = K−1(μ1 + ⋯ + μK). Fisher’s reduced rank approach to multi-class classification is a minimum distance classifier in some lower dimensional projection space. The first step is to find s ≤ K − 1 discriminant coordinates that separate the population centroids the most in the projected space 𝒮 spanned by these coordinates. Then the population centroids μj’s and a new observation X are both projected onto 𝒮. The observation X will be assigned to the class whose projected centroid is closest to the projection of X onto 𝒮. Note that it is usually not necessary to compute all K − 1 discriminant coordinates, whose span is that of all K population centroids; the process can stop as long as the projected population centroids are well spread out in 𝒮.
We adopt the above procedure for multi-class classification. However, the large-p-small-n scenario demands regularization in selecting discriminant coordinates. Indeed, in Fisher’s proposal the first discriminant coordinate is the solution of
(13) w1 = argmaxw wTBw/(wTΣw),
where B = ΨTΨ, and the jth column of ΨT is (μj − μa). Note that a multiple of B is the between-class variance matrix. The second discriminant coordinate is the maximizer of wTBw/(wTΣw) subject to the constraint wTΣw1 = 0, and the subsequent discriminant coordinates are determined analogously.
Since solving (13) is the same as looking for the eigenvector of Σ−1/2BΣ−1/2 corresponding to the largest eigenvalue, diverging spectrum and noise accumulation have to be considered when we work on the sample. To address these issues, we regularize w as in the binary case,
(14) maxw wTBw/(wTΣw) subject to ‖w‖1 ≤ c,
whose solution is the first regularized discriminant coordinate. The second regularized discriminant coordinate is obtained by solving (14) with the additional constraint of Σ-orthogonality to the first coordinate. Other regularized discriminant coordinates can be found similarly. With these s (≤ K − 1) regularized discriminant coordinates, the classifier is now based on the minimum distance to the projected centroids in the s-dimensional space that they span.
4. Constrained Coordinate Descent
With a Lagrangian argument, we reformulate problem (10) as
(15) wλ = argmin{w: wTμd = 1} ½wTΣw + λ‖w‖1.
In this section, we propose a Constrained Coordinate Descent (CCD) algorithm that is tailored for solving our minimization problem with linear constraints. Optimization problem (15) is a constrained quadratic programming problem and can be solved by existing software packages such as MOSEK. Although these packages are well regarded in practice, they are slow for our application. The structure of (15) can be exploited to obtain a more efficient algorithm. In line with the LARS algorithm, we will exploit the fact that the solution path has a piecewise-linear property.
In the compressed sensing literature, it is common to replace an affine constraint by a quadratic penalty. We borrow this idea and consider the following approximation to (15):
(16) w̃λ,γ = argminw ½wTΣw + λ‖w‖1 + ½γ(wTμd − 1)2.
In practice, we replace Σ by the pooled sample covariance Σ̂ and μd by the sample mean difference vector μ̂d. By Theorem 6.7 in Ruszczynski (2006), we have
limγ→∞ w̃λ,γ = wλ.
Note that we do not have to enforce the affine constraint strictly, because it only serves to normalize our problem. In the optimization problem (16), when λ = 0, the solution w̃0,γ is always in the direction of Σ−1μd, the Fisher discriminant, regardless of the value of γ. In addition, this observation is confirmed in the data analysis (Section 6.2) by the insensitivity of the results to the choice of γ. Therefore we hold γ fixed in practice.
We solve (16) by coordinate descent. Non-gradient algorithms seem to be less popular for convex optimization. For instance, the popular textbook Convex Optimization by Boyd and Vandenberghe (2004) does not even have a section on these methods. The coordinate descent method is an algorithm in which the p search directions are just the unit vectors e1, ⋯, ep, where ei denotes the ith element in the standard basis of ℝp. These unit vectors are used as search directions in each search cycle until some convergence criterion is met. If the objective function is convex but non-differentiable, a coordinate descent algorithm might get trapped at a non-stationary point. However, this is not a problem in our case. Although the objective function is not strictly convex, it is strictly convex in each of the coordinates. Combined with the fact that the non-differentiable part of the objective function is separable, either Theorem 4.1 or Theorem 5.1 of Tseng (2001) guarantees that coordinate descent algorithms converge to local minima. Moreover, since all directional derivatives exist, every coordinate-wise minimum is also a local minimum. A similar study on the convergence of the coordinate descent algorithm can be found in Breheny and Huang (2011).
What makes the coordinate-descent algorithm particularly attractive for (16) is that there is an explicit formula for each coordinate update. For a given γ, fix τ and K, then do the optimization on a grid (on the log scale) of λ values: τλmax = λK < λK−1 < ⋯ < λ1 = λmax. Here λmax is the smallest λ value such that no variables enter the model; this is analogous to the minimum requirement on c in (11). In our implementation, we take τ = 0.001 and K = 100. The problem is solved backwards from λmax. When λ = λi+1, we use the solution from λ = λi as the initial value. This kind of “warm start” is very effective in improving computational efficiency.
Consider a coordinate descent step to solve (16). Without loss of generality, suppose that w̃j for all j ≥ 2 are given, and we need to optimize with respect to w1. Up to terms not involving w1, the objective function now becomes
R(w1) = ½Σ11w12 + w1∑j≥2Σ1jw̃j + λ|w1| + ½γ(μd,1w1 + ∑j≥2μd,jw̃j − 1)2.
When w1 ≠ 0, we have
∂R(w1)/∂w1 = (Σ11 + γμd,12)w1 + ∑j≥2Σ1jw̃j − γμd,1(1 − ∑j≥2μd,jw̃j) + λ·sign(w1).
By simple calculation (Donoho and Johnstone, 1994), the coordinate-wise update has the form
w̃1 ← S(γμd,1(1 − ∑j≥2μd,jw̃j) − ∑j≥2Σ1jw̃j, λ)/(Σ11 + γμd,12),
where S(z, λ) = sign(z)(|z| − λ)+ is the soft-thresholding operator.
In each coordinate update, the computational complexity is 𝒪(p). A complete cycle through all p variables costs 𝒪(p2) operations. From our experience, CCD converges quickly after a few cycles if “warm start” is used for the initial solution. Let C denote the average number of cycles until convergence for each λ. Then our algorithm CCD enjoys computational complexity 𝒪(CKp2). This is compared with the Fisher discriminant, where matrix inversion alone costs at least 𝒪(p2.376) operations (the Coppersmith-Winograd algorithm), though we should emphasize here that our algorithm has no ambition to fully recover the Fisher discriminant (this task is infeasible anyway). The D-ROAD can be similarly implemented by replacing the covariance matrix with its diagonal.
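A minimal sketch of the CCD iteration is given below, assuming the coordinate-wise update derived above; Σ and μd would be replaced by Σ̂ and μ̂d in practice. The expression used for λmax (the smallest λ at which the all-zero solution is coordinate-wise optimal, namely γ maxj |μd,j|) is inferred from the soft-thresholding condition rather than displayed in the text, while the defaults γ = 10, τ = 0.001 and K = 100 follow the text.

```python
import numpy as np

def soft_threshold(z, lam):
    """S(z, lambda) = sign(z) (|z| - lambda)_+."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ccd_path(Sigma, mu_d, gamma=10.0, tau=0.001, K=100, max_cycles=200, tol=1e-7):
    """Sketch of CCD for (16) over a log-scale grid of lambda values with warm starts."""
    p = len(mu_d)
    lam_max = gamma * np.max(np.abs(mu_d))        # smallest lambda with no active variables
    lambdas = np.exp(np.linspace(np.log(lam_max), np.log(tau * lam_max), K))
    denom = np.diag(Sigma) + gamma * mu_d ** 2    # Sigma_jj + gamma * mu_dj^2
    w = np.zeros(p)
    path = []
    for lam in lambdas:                           # warm start: reuse w from the previous lambda
        for _ in range(max_cycles):
            w_old = w.copy()
            for j in range(p):
                r = Sigma[j] @ w - Sigma[j, j] * w[j]     # sum_{k != j} Sigma_jk w_k
                a = mu_d @ w - mu_d[j] * w[j]             # sum_{k != j} mu_dk w_k
                z = gamma * mu_d[j] * (1.0 - a) - r
                w[j] = soft_threshold(z, lam) / denom[j]
            if np.max(np.abs(w - w_old)) < tol:
                break
        path.append(w.copy())
    return lambdas, np.array(path)
```

Each inner update is 𝒪(p), so one full cycle costs 𝒪(p2), matching the complexity count given above.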
5. Asymptotic Property
5.1. Risk Approximation
Let ŵc be a sample version of wc in (10),
(17) ŵc = argmin{w: wTμ̂d = 1, ‖w‖1 ≤ c} wTΣ̂w.
The fact that Σ̂ is only positive semi-definite leads to potential non-uniqueness of ŵc. Now, we have three different classifiers: δw∞, δwc and δ̂wc. The first two are oracle classifiers, requiring the knowledge of unknown parameters μ1, μ2 and Σ, while the third one is the feasible classifier, ROAD, based on the sample. Their classification errors are given by (2). Explicitly, the error rates are respectively W(δw∞) [see (4)], W(δwc), and W(δ̂wc). By (2), an obvious estimator of the misclassification rate of δ̂wc is
(18) Wn(δ̂wc) = 1 − Φ(ŵcTμ̂d/(ŵcTΣ̂ŵc)1/2).
Two questions arise naturally:
how close is W(δ̂wc), the misclassification error of δ̂wc, to that of its oracle W(δwc)?
does Wn(δ̂wc) estimate W(δ̂wc) well?
Theorem 1 addresses these two questions. We introduce an intermediate optimization problem for convenience:
Theorem 1. Let sc = ‖wc‖0, , and ŝc = ‖ŵc‖0. Assume that , ‖Σ̂ − Σ‖∞ = Op(an) and ‖μ̂d − μd‖∞ = Op(an) for a given sequence an → 0. Then, we have
and
where and dn = bn ∨ (ŝcan).
Remark 1. In Theorem 1, ‖·‖∞ is the element-wise sup-norm. When Σ̂ is the sample covariance, by Bickel and Levina (2004), ‖Σ̂ − Σ‖∞ = Op((log p/n)1/2); hence we can take an = (log p/n)1/2. The first result in Theorem 1 bounds the difference between the misclassification rate of δ̂wc and that of its oracle version δwc; the second result quantifies the error in estimating the true misclassification rate of ROAD.
Remark 2. In view of (2), one intends to choose a w that makes wTΣw small and wTμd large. A compromise of these dual objectives leads to a utility function
U(w) = ½wTΣw − ξwTμd
as a proxy of the objective function (2) for a fixed ξ. For any ξ > 0, the optimal choice w* ∈ argmin U(w) leads to the Fisher discriminant rule. Consider also the regularized versions
where Û (w) is the utility function with Σ and μd estimated by Σ̂ and μ̂d. Then, it is easy to see the following utility approximation: for any ‖w‖1 ≤ c
and
Remark 3. The most prominent technical challenge of our original problem (10) is due to the different dualities of the penalization problems. The population version (10) can be reduced, by the Lagrange multiplier method, to the utility U(w) optimization problem in Remark 2 with a given ξ > 0, while the sample version (17) can be reduced to the utility Û(w) optimization problem with a different multiplier ξ̂. Therefore, the problem is not the same as the utility optimization problem in Remark 2, because ξ̂ is hard to bound; in fact, it is much harder and yields more complicated results.
We now show how different the data projection direction in the regularized oracle can be from that in the Fisher discriminant. To gain better insight, we reformulate the L1 constraint problem as the following penalized version:
(19) wλ = argmin{w: wTμd = 1} ½wTΣw + λ‖w‖1.
The following characterizes its convergence to the Fisher discriminant weight w∞ as λ → 0.
Theorem 2. Let s be the size of the set {k : (Σ−1μd)k ≠ 0}. Then, we have
where w∞ = Σ−1μd/(μdTΣ−1μd) is the normalized Fisher discriminant, which optimizes (19) with λ = 0.
5.2. Screening-based ROAD (S-ROAD)
Following the idea of Sure Independence Screening in Fan and Lv (2008), we pre-screen all the features before hitting the ROAD. The advantage of this two-step procedure is that we have a control on the total number of features used in the final classification rule. A popular method for independent feature selection is the two-sample t-test (Tibshirani et al., 2002; Fan and Fan, 2008), which is a specific case of marginal screening in Fan and Lv (2008). The sure screening property of such a method was demonstrated in Fan and Fan (2008), which selects consistently the features with different means in the same settings as ours.
Once the features are selected, we can hit the ROAD, producing the vanilla Screening-based Regularized Optimal Affine Discriminant (S-ROAD1):
Employ a screening method to get k features.
Apply ROAD to the k selected features.
In the first step, we use the t-statistics as the screening criterion and determine a data-driven threshold. This idea is motivated by an FDR criterion for choosing the marginal screening threshold in Zhao and Li (2010). A random permutation π of {1, ⋯, n} is used to decouple Xi and Yi so that the resulting data (Xπ(i), Yi) follow a null model, by which we mean that the features have no prediction power for the class label. More specifically, the screening step is carried out as follows:
Calculate the t-statistic tj for each feature j, where j = 1, ⋯, p.
For the permuted data pairs (Xπ(i), Yi), recalculate the t-statistic tj*, for j = 1, ⋯, p. (Intuitively, if j is the index of an important feature, |tj| should be larger than most of the |tj*|’s, because the random permutation is meant to eliminate the prediction power of the features.)
For q ∈ [0, 1], let ω(q) be the qth quantile of {|tj*|, j = 1, ⋯, p}. Then, the selected set 𝒜 is defined as 𝒜 = {j : |tj| ≥ ω(q)}.
The choice of threshold is made to retain the features whose t-statistics are significant in the two sample t-test. Alternatively, if the user knows his k, (due to budget constraints, etc.), then he can just rank |tj|’s and choose the threshold accordingly.
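A sketch of this screening step is given below; scipy's two-sample t-test is used for convenience, the class labels are assumed to be coded 1 and 2, and q = 1 corresponds to the maximum-of-|t*| threshold used for S-ROAD1 in Section 6.1.

```python
import numpy as np
from scipy.stats import ttest_ind

def permutation_screen(X, y, q=1.0, rng=None):
    """Keep features whose |t_j| exceeds the q-th quantile omega(q) of the
    permuted-data statistics |t_j*|.  X is n x p; y holds class labels 1 or 2."""
    rng = np.random.default_rng(rng)
    t = ttest_ind(X[y == 1], X[y == 2], axis=0).statistic        # t_j, j = 1, ..., p
    y_perm = y[rng.permutation(len(y))]                          # decouple X_i and Y_i
    t_perm = ttest_ind(X[y_perm == 1], X[y_perm == 2], axis=0).statistic
    omega_q = np.quantile(np.abs(t_perm), q)                     # omega(q)
    return np.flatnonzero(np.abs(t) >= omega_q)                  # the selected set A
```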
The S-ROAD1 tracks the performance of oracle procedures like the sub-Fisher (10 features) in Figure 1. The feature space obtained in step (1) can be expanded by including those features that are most correlated with the ones already selected. This additional variant, S-ROAD2, aims at achieving the performance of the sub-Fisher (20 features) type of procedure in Figure 1.
To elaborate on the theoretical properties of S-ROAD1, assume with no loss of generality that the first k variables are selected in the screening step. Denote by Σk the upper left k × k block of Σ and μk the first k coordinates of μd. Let
βc = argmin{β: βTμk = 1, ‖β‖1 ≤ c} βTΣkβ.
The quantities β̂c and are defined similarly to ŵc and (defined right before Theorem 1). Then denote by . The next two theorems can be verified along lines similar to Theorems 1 and 2. Hence, the proofs are omitted.
Theorem 3. If , and λmin(Σk) ≥ δ0 > 0, then we have
and
where .
This result is cleaner than Theorem 1, as the rate does not involve sc and ŝc: they are simply replaced by the upper bound k. Accurate bounds for sc and ŝc are of interest for future exploration, but they are beyond the scope of this paper.
Theorem 4. Let where Mk is the subspace in Rp with the last p − k components being zero, and . Then we have
5.3. Continuous Piecewise Linear Solution Path
We use the word “linear” when referring to “affine”, in line with the status quo in the statistical community. Continuous piecewise linear paths are of much interest to statisticians, as the property reduces the computational complexity of solutions and justifies the linear interpolation of solutions at discrete points. Previous well-known investigations include Efron et al. (2004) and Rosset and Zhu (2007). Our setup differs from others mainly in that, in addition to a complexity penalty, there is also an affine constraint. Our proof calls on point-set topology and is purely geometrical, in a spirit very different from the existing ones. In particular, we stress that the continuity property is intuitively correct, but it is far from a trivial consequence of the assumptions. The authors also believe that the claim holds true even if the p − 1 dimensional affine subspace constraint is replaced by more generic ones, though the technicalities of the proof would be more involved.
Theorem 5. Let μd ∈ ℝp be a constant vector, and let Σ be a positive definite matrix of dimension p × p. Let
wc = argmin{w: wTμd = 1, ‖w‖1 ≤ c} wTΣw;
then wc is a continuous piecewise linear function in c.
Proposition 1. W(δwc) is a Lipschitz function in c.
Proof. Recall that
W(δwc) = 1 − Φ((wcTΣwc)−1/2),
which follows from (2) and the constraint wcTμd = 1.
By Theorem 5 and the fact that composition of Lipschitz functions is again Lipschitz, the conclusion holds.
6. Numerical Investigation
In this section, several simulation and real data studies are conducted. We compare ROAD and its variants S-ROAD1 (Screening-based ROAD version 1), S-ROAD2 (Screening-based ROAD version 2) and D-ROAD (Diagonal ROAD) with NSC (Nearest Shrunken Centroid), SCRDA (Shrunken Centroids Regularized Discriminant Analysis), FAIR (Feature Annealed Independence Rule), NB (Naive Bayes), NFR (Naive Fisher Rule, which uses the generalized inverse of the sample covariance matrix), as well as the Oracle.
In all simulation studies, the number of variables is p = 1000, and the sample size of the training and testing data is n = 300 for each class. Each simulation is repeated 100 times to test the stability of the method. Without loss of generality, the mean vector of the first class μ1 is set to be 0. We use five-fold cross-validation to choose the penalty parameter λ.
6.1. Equal Correlation Setting, Sparse Fixed Signal
In this subsection, we consider the setting where Σi,i = 1 for all i = 1, ⋯, p and Σi,j = ρ for all i, j = 1, ⋯, p with i ≠ j, and take μ2 to be a sparse vector whose first s0 entries equal a common nonzero constant and whose remaining p − s0 entries are zero, where the sparsity size is s0 = 10. Also, we fix γ = 10 in (16) for this simulation. Sensitivity of the performance to the choice of γ will be investigated in the next subsection.
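For reference, a sketch of how data from this equi-correlation setting can be generated is given below. The one-factor representation of Σ avoids forming the p × p matrix, and the value 1.0 used for the nonzero entries of μ2 is an illustrative placeholder, since the exact magnitude is not legible in this copy of the text.

```python
import numpy as np

def simulate_equicorr(n, p, rho, mu2, rng=None):
    """Draw n observations per class with Sigma = (1 - rho) I + rho 1 1';
    class 1 has mean 0 and class 2 has mean mu2."""
    rng = np.random.default_rng(rng)
    def draw(mean):
        z = rng.standard_normal((n, p))          # idiosyncratic part
        f = rng.standard_normal((n, 1))          # shared equi-correlation factor
        return mean + np.sqrt(1.0 - rho) * z + np.sqrt(rho) * f
    X = np.vstack([draw(np.zeros(p)), draw(mu2)])
    y = np.repeat([1, 2], n)
    return X, y

mu2 = np.zeros(1000)
mu2[:10] = 1.0                                   # sparse signal, s0 = 10 (placeholder value)
X, y = simulate_equicorr(n=300, p=1000, rho=0.5, mu2=mu2, rng=0)
```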
The solution paths for ROAD and D-ROAD of one realization are rendered in Figure 2. It is clear from the figure that, as the penalty parameter decreases (index increases), both ROAD and D-ROAD use more features. Also, the cutoff point for D-ROAD, where the number of features starts to increase dramatically, tends to come later than that for ROAD.
The simulation results for the pairwise correlations ranging from 0 to 0.9 are shown in Tables 1 and 2. We would like to mention that the results for NFR (Naive Fisher Rule) are not included in these (and the subsequent) tables because the test classification error is always around 50%, i.e., it is about the same as random guessing. Also in the tables are the screening-based versions of the ROAD. S-ROAD1 refers to the vanilla version where we first apply the two-sample t-test to select the features whose t-statistics are larger in absolute value than the maximum absolute t-statistic calculated on the permuted data. S-ROAD2 does the same except that, for each variable in S-ROAD1’s pre-screened set, it adds an additional variable that is most correlated with that variable. Figure 3, a graphical summary of Table 1, presents the median test errors for different methods. We can see from Table 1 and Figure 3 that the oracle classification error decreases as ρ increases. This phenomenon arises for a reason similar to the two-dimensional showcase in the introduction. When ρ goes to 1, all the variables contribute in the same way to boost the classification power. ROAD performs reasonably close to the Oracle, while working-independence-based methods such as D-ROAD, NSC, FAIR and NB fail when ρ is large. The huge discrepancy shows the advantage of employing the correlation structure. Since SCRDA also employs the correlation structure, it does not fail when ρ is large. However, ROAD still outperforms SCRDA in all the correlation settings. S-ROAD1 and S-ROAD2 both have misclassification rates similar to that of ROAD. It is worth emphasizing that the merits of the screening-based ROADs mainly lie in the computational cost, which is reduced significantly by the pre-screening step.
Table 1.
ρ | ROAD | S-ROAD1 | S-ROAD2 | D-ROAD | SCRDA | NSC | FAIR | NB | Oracle |
---|---|---|---|---|---|---|---|---|---|
0 | 6.0(1.2) | 6.0(1.1) | 6.0(1.2) | 5.7(1.1) | 6.3(1.0) | 5.9(1.0) | 5.7(1.0) | 11.2(1.4) | 5.5(1.1) |
0.1 | 6.3(2.5) | 12.2(5.0) | 8.8(2.4) | 11.6(5.1) | 10.3(1.4) | 11.1(3.0) | 12.4(1.4) | 26.8(10.1) | 5.0(0.9) |
0.2 | 5.3(1.0) | 16.0(6.3) | 8.7(2.5) | 16.1(7.5) | 8.5(1.2) | 14.5(4.3) | 17.3(1.7) | 34.8(11.6) | 4.0(0.8) |
0.3 | 4.2(0.9) | 19.1(7.9) | 7.8(2.6) | 19.1(9.4) | 6.6(1.1) | 17.1(5.5) | 20.8(1.7) | 39.3(12.3) | 3.2(0.7) |
0.4 | 3.2(0.8) | 22.8(9.4) | 6.5(2.6) | 22.2(9.9) | 4.8(1.0) | 20.5(6.1) | 23.2(1.8) | 41.6(11.3) | 2.0(0.6) |
0.5 | 2.0(0.6) | 25.8(11.0) | 4.8(1.4) | 25.2(10.2) | 2.9(0.7) | 23.2(6.0) | 25.3(1.6) | 43.5(11.1) | 1.3(0.5) |
0.6 | 1.0(0.4) | 18.3(12.4) | 3.3(1.3) | 28.1(10.3) | 1.5(0.5) | 25.8(5.7) | 26.8(1.8) | 44.4(12.1) | 0.7(0.3) |
0.7 | 0.3(0.2) | 15.5(13.6) | 1.7(1.0) | 29.1(10.1) | 0.5(0.3) | 27.0(8.2) | 28.2(2.0) | 45.2(12.3) | 0.2(0.2) |
0.8 | 0.0(0.1) | 5.0(14.0) | 0.3(0.4) | 29.5(9.9) | 0.0(0.1) | 28.3(8.7) | 29.2(2.0) | 46.2(10.3) | 0.0(0.1) |
0.9 | 0.0(0.0) | 0.6(14.8) | 0.0(0.1) | 30.3(7.6) | 0.0(0.2) | 29.9(8.0) | 30.2(1.9) | 46.8(8.8) | 0.0(0.0) |
Table 2.
ρ | ROAD | S-ROAD1 | S-ROAD2 | D-ROAD | SCRDA | NSC | FAIR |
---|---|---|---|---|---|---|---|
0 | 16.00(24.16) | 10.00(1.31) | 17.00(4.31) | 29.50(58.54) | 10.00(13.25) | 10.00(44.86) | 11.00(1.62) |
0.1 | 117.50(30.50) | 11.00(3.32) | 21.00(4.15) | 14.00(122.02) | 1000.00(345.48) | 35.50(117.32) | 10.00(0.27) |
0.2 | 130.50(33.33) | 11.00(6.99) | 22.00(8.98) | 15.50(111.42) | 1000.00(0.00) | 95.00(120.17) | 10.00(0.69) |
0.3 | 136.50(36.16) | 11.00(11.56) | 22.00(10.38) | 17.50(106.16) | 1000.00(0.00) | 103.50(117.52) | 9.00(1.19) |
0.4 | 135.00(34.43) | 10.00(14.21) | 22.00(17.07) | 10.00(98.10) | 1000.00(0.00) | 70.00(131.65) | 8.00(1.33) |
0.5 | 138.50(38.17) | 9.00(21.71) | 22.00(21.56) | 10.00(105.33) | 1000.00(0.00) | 65.00(137.97) | 7.00(1.30) |
0.6 | 148.00(49.74) | 10.50(27.92) | 22.00(31.88) | 10.00(110.23) | 1000.00(0.00) | 38.00(141.91) | 6.00(1.30) |
0.7 | 170.50(52.29) | 11.00(37.37) | 22.00(41.76) | 1.00(118.43) | 1000.00(0.00) | 27.50(140.10) | 5.00(1.20) |
0.8 | 203.00(27.72) | 12.00(50.36) | 24.00(59.23) | 1.00(143.83) | 1000.00(10.92) | 15.00(157.98) | 5.00(1.29) |
0.9 | 151.50(8.02) | 14.00(55.32) | 28.00(50.45) | 1.00(153.27) | 1000.00(56.30) | 14.00(225.38) | 3.00(1.08) |
The ROAD is a very robust estimator. It performs well even when all the variables are independent, in which case there could be a lot of noise for fitting the covariance matrix. Table 1 indicates that ROAD has almost the same performance as D-ROAD, NSC and FAIR under the independence assumption, i.e. ρ = 0. As ρ increases, the edge of ROAD becomes more substantial. In general, the ROAD is recommended on the grounds that even with pairwise correlation of about 0.1 (which is quite common in microarray data as well as financial data), the gain is substantial.
Another interesting observation is that the D-ROAD performs similarly to NSC and FAIR in terms of classification error. An intuitive explanation is that they are all “sparse” independence rules. NSC uses soft-thresholding on the standardized sample mean difference, and its equivalent LASSO derivation can be found in Wang and Zhu (2007). FAIR selects features with large marginal t-statistics in absolute values, while D-ROAD is another L1 penalized independence rule, whose implementation is different from NSC.
Table 2 summarizes the number of features selected by different classifiers. Note that ROAD mimics Fisher discriminant coordinate Σ−1μd, which has p = 1000 nonzero entries under our simulated model. Therefore, the large number of features selected by ROAD is not out of expectation.
6.2. The Effect of γ
Under the settings of the previous subsection, we look into the variation of the ROAD performance as γ changes. In Table 3, the number of active variables varies; however, the median classification error remains about the same for a broad range of γ values. The reason is that the cross validation step chooses the “best” λ according to a specific γ. Therefore, the final performance remains almost unchanged. Since our primary concern is the classification error, we fix γ = 10 for simplicity in the subsequent simulations and in the real data analysis.
Table 3.
 | | ρ = 0 | ρ = 0.5 | ρ = 0.9
---|---|---|---|---
Median classification error (in percentage) | ROADγ=0.01 | 5.8(1.2) | 2.7(0.6) | 0.2(0.2)
 | ROADγ=0.1 | 6.0(1.2) | 2.0(0.6) | 0.2(0.1)
 | ROADγ=1 | 6.0(1.3) | 2.0(0.6) | 0.0(0.1)
 | ROADγ=10 | 6.0(1.2) | 2.0(0.6) | 0.0(0.0)
 | ROADγ=100 | 6.2(1.2) | 2.3(0.6) | 0.0(0.1)
Median number of nonzeros | ROADγ=0.01 | 14.0(19.2) | 129.5(42.5) | 657.0(179.6)
 | ROADγ=0.1 | 14.0(19.6) | 137.0(37.6) | 773.5(103.2)
 | ROADγ=1 | 16.5(22.9) | 139.0(37.9) | 514.0(39.7)
 | ROADγ=10 | 16.0(24.2) | 138.5(38.2) | 151.5(8.0)
 | ROADγ=100 | 22.0(16.1) | 114.5(9.4) | 94.0(9.6)
6.3. Block Diagonal Correlation Setting, Sparse Fixed Signal
In this subsection, we follow the same setup as in Section 6.1 except that the covariance matrix Σ is taken to be block diagonal. The first block is a 20 × 20 equi-correlated matrix and the second block is a (p − 20) × (p − 20) equi-correlated matrix, both with pairwise correlation ρ. In other words, Σi,i = 1 for all i = 1, ⋯, p, Σi,j = ρ for all i, j = 1, ⋯, 20 and i ≠ j, Σi,j = ρ for all i, j = 21, ⋯, p and i ≠ j, and the remaining elements are zero. As before, we examine the performances of the various estimators as ρ varies. The test error percentages and the numbers of selected features are shown in Tables 4 and 5, respectively.
Table 4.
ρ | ROAD | S-ROAD1 | S-ROAD2 | D-ROAD | SCRDA | NSC | FAIR | NB | Oracle |
---|---|---|---|---|---|---|---|---|---|
0 | 6.0(1.2) | 6.0(1.1) | 6.0(1.2) | 5.7(1.1) | 6.0(0.1) | 5.5(0.3) | 5.7(1.0) | 11.2(1.4) | 5.5(1.1) |
0.1 | 10.8(3.6) | 13.0(4.8) | 10.3(3.0) | 12.8(4.4) | 13.0(0.3) | 12.5(0.8) | 12.7(1.5) | 25.7(7.6) | 8.8(1.2) |
0.2 | 10.7(4.1) | 18.0(5.7) | 9.7(3.6) | 17.7(5.9) | 14.2(1.1) | 17.2(0.4) | 17.7(1.6) | 34.4(7.9) | 8.8(1.2) |
0.3 | 9.5(3.8) | 23.2(5.5) | 8.8(4.0) | 23.2(5.6) | 12.7(0.9) | 20.0(0.8) | 20.4(1.6) | 38.3(7.5) | 7.7(1.0) |
0.4 | 8.0(3.3) | 29.7(4.2) | 7.5(4.2) | 29.3(4.1) | 11.0(1.2) | 23.8(1.3) | 23.2(1.8) | 41.0(6.9) | 6.6(1.1) |
0.5 | 6.2(2.6) | 30.1(3.9) | 5.7(0.9) | 30.0(3.1) | 8.7(0.4) | 26.2(1.7) | 25.1(1.7) | 42.2(6.6) | 5.0(1.0) |
0.6 | 4.2(0.9) | 30.3(4.2) | 4.0(0.8) | 30.3(2.2) | 6.4(0.1) | 26.5(1.2) | 26.8(1.8) | 43.6(7.0) | 3.5(0.7) |
0.7 | 2.3(0.7) | 30.0(6.4) | 2.2(0.7) | 30.6(2.1) | 2.5(0.7) | 28.1(3.2) | 28.2(2.0) | 44.2(6.5) | 1.8(0.6) |
0.8 | 0.8(0.4) | 29.8(9.8) | 0.7(0.4) | 30.6(2.1) | 0.6(0.4) | 29.2(1.6) | 29.2(2.0) | 44.8(5.7) | 0.7(0.3) |
0.9 | 0.0(0.1) | 29.8(12.8) | 0.0(0.1) | 30.6(1.9) | 0.2(0.2) | 29.2(1.2) | 30.2(1.9) | 45.2(4.9) | 0.0(0.1) |
Table 5.
ρ | ROAD | S-ROAD1 | S-ROAD2 | D-ROAD | SCRDA | NSC | FAIR |
---|---|---|---|---|---|---|---|
0 | 16.00(24.16) | 10.00(1.31) | 17.00(4.31) | 29.50(58.54) | 10.00(1.15) | 10.00(1.73) | 11.00(1.62) |
0.1 | 48.50(35.99) | 10.00(2.73) | 20.00(3.77) | 14.00(26.73) | 33.00(17.79) | 65.00(38.84) | 18.00(2.67) |
0.2 | 48.00(31.48) | 10.00(4.59) | 20.00(5.84) | 10.00(18.23) | 38.00(117.54) | 10.00(16.17) | 18.00(2.77) |
0.3 | 47.50(42.75) | 9.00(5.28) | 20.00(6.03) | 10.00(11.80) | 208.00(103.94) | 10.00(13.58) | 18.00(3.91) |
0.4 | 40.50(32.42) | 1.00(4.82) | 20.00(10.08) | 1.00(9.25) | 27.00(90.95) | 33.00(14.22) | 17.00(5.43) |
0.5 | 40.50(33.23) | 1.00(4.88) | 20.00(10.10) | 1.00(8.51) | 24.00(76.79) | 10.00(1.15) | 7.00(5.98) |
0.6 | 39.50(30.03) | 1.00(3.74) | 20.00(14.53) | 1.00(5.92) | 127.50(6.36) | 6.50(2.12) | 6.00(5.98) |
0.7 | 40.00(41.35) | 1.00(4.71) | 20.00(8.07) | 1.00(2.49) | 94.50(2.12) | 9.50(0.71) | 5.00(5.52) |
0.8 | 55.00(58.67) | 1.00(6.20) | 20.00(18.32) | 1.00(0.93) | 58.00(2.83) | 6.00(5.66) | 5.00(4.84) |
0.9 | 120.00(30.66) | 1.00(21.29) | 20.00(30.46) | 1.00(0.35) | 20.00(0.00) | 8.00(2.83) | 3.00(3.81) |
In this block-diagonal setting, we have observed similar results to those in Section 6.1: ROAD and S-ROAD2 perform significantly better than the other methods. One interesting phenomenon is that S-ROAD1 does not perform well when ρ is large. The reason is that the current true model has 20 important features, and by looking only at marginal contribution, S-ROAD1 misses some important variables, as shown in Table 4. Indeed, because those missed features have no expressed mean differences, S-ROAD1 does not take full advantage of the highly correlated features. In contrast, S-ROAD2 is able to pick up all the important variables, takes advantage of the correlation structure, and leads to a sparser model than the vanilla ROAD. In view of the results from this simulation setting and the previous one, we recommend S-ROAD2 over S-ROAD1.
6.4. Block-Diagonal Negative Correlation Setting, Sparse Fixed Signal
In this subsection, we again follow a setup similar to that of Section 6.1. Here, the covariance matrix Σ is taken to be block diagonal with each block of size 10. Each block is an equi-correlated matrix with pairwise correlation ρ = −0.1. In other words, Σ = diag(Σ0, ⋯, Σ0), where Σ0 is a 10 × 10 equi-correlated matrix with correlation −0.1. The signal μ2 is again sparse and the sparsity size is s0 = 10. We examine the performances of the various estimators in this setting. The test error percentages and the numbers of selected features are shown in Table 6.
Table 6.
ROAD | S-ROAD1 | S-ROAD2 | D-ROAD | SCRDA | NSC | FAIR | NB | Oracle | |
---|---|---|---|---|---|---|---|---|---|
error | 7.3(3.4) | 16.0(5.2) | 12.7(3.4) | 17.8(8.0) | 18.5(1.1) | 20.8(0.6) | 24.8(2.1) | 33.5(2.1) | 3.2(0.7) |
nonzero | 168.00(47.59) | 10.00(2.40) | 20.00(3.58) | 15.50(15.32) | 24.00(0.58) | 41.00(17.90) | 59.00(4.27) | – | – |
6.5. Random Correlation Setting, Double Exponential Signal
To evaluate the stability of the ROAD, we take a random matrix Σ as the correlation structure, and use a signal μ whose nonzero entries come from a double exponential distribution. A random covariance matrix Σ is generated as follows:
For a given integer m (here we take m = 10), generate a p × m matrix Ω where Ωi,j ~ Unif(−1, 1). Then the matrix ΩΩT is positive semi-definite.
Denote cΩ = mini(ΩΩT)ii. Let Ξ = ΩΩT + cΩI, where I is the identity matrix. It is clear that Ξ is positive definite.
Normalize the matrix Ξ to get Σ, whose diagonal elements are unity (a code sketch of this construction follows).
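A sketch of this three-step construction is given below; the function name random_correlation is an arbitrary label, and m = 10 follows the text.

```python
import numpy as np

def random_correlation(p, m=10, rng=None):
    """Random correlation matrix built as in Section 6.5."""
    rng = np.random.default_rng(rng)
    Omega = rng.uniform(-1.0, 1.0, size=(p, m))    # p x m matrix with Unif(-1, 1) entries
    G = Omega @ Omega.T                            # positive semi-definite
    Xi = G + np.min(np.diag(G)) * np.eye(p)        # add c_Omega * I to make it positive definite
    d = 1.0 / np.sqrt(np.diag(Xi))
    return Xi * np.outer(d, d)                     # rescale so the diagonal elements are unity
```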
For the signal, we take μ to be a sparse vector with sparsity size s = 10, and the nonzero elements are generated from a double exponential (Laplace) distribution.
Table 7 summarizes the results. It shows that even under the random correlation setting with random signals, our procedure ROAD still outperforms the other competing classification rules such as SCRDA, NSC and FAIR in terms of the classification error.
Table 7.
ROAD | S-ROAD1 | S-ROAD2 | D-ROAD | SCRDA | NSC | FAIR | NB | Oracle | |
---|---|---|---|---|---|---|---|---|---|
error | 2.0(0.6) | 11.0(5.2) | 5.8(3.9) | 17.0(2.2) | 5.2(1.1) | 16.2(1.3) | 17.0(1.6) | 46.2(2.4) | 1.3(0.5) |
nonzero | 83.00(39.54) | 4.00(8.13) | 9.00(10.69) | 1.00(3.89) | 1000.00(0.00) | 4.00(0.58) | 1.00(0.17) | – | – |
6.6. Real Data
Though the ROAD seems to perform best in a broad spectrum of idealized experiments, it has to be tested against reality. We now evaluate the performance of our newly proposed estimator on three popular gene expression data sets: “Leukemia” (Golub et al., 1999), “Lung Cancer” (Gordon et al., 2002), and “Neuroblastoma data set” (Oberthuer et al., 2006). The first two data sets come with predetermined, separate training and test sets of data vectors. The Leukemia data set contains p = 7, 129 genes for n1 = 27 acute lymphoblastic leukemia (ALL) and n2 = 11 acute myeloid leukemia (AML) vectors in the training set. The test set includes 20 ALL and 14 AML vectors. The Lung Cancer data set contains p = 12, 533 genes for n1 = 16 adenocarcinoma (ADCA) and n2 = 16 mesothelioma training vectors, along with 134 ADCA and 15 mesothelioma test vectors. The Neuroblastoma data set, obtained via the MicroArray Quality Control phase-II (MAQC-II) project, consists of gene expression profiles for p = 10, 707 genes from 251 patients of the German Neuroblastoma Trials NB90–NB2004, diagnosed between 1989 and 2004. We analyzed the gene expression data with the 3-year event-free survival (3-year EFS), which indicates whether a patient survived 3 years after the diagnosis of neuroblastoma. There are 239 subjects with the 3-year EFS information available (49 positives and 190 negatives). We randomly select 83 subjects (19 positives and 64 negatives, which are about one third of the total subjects) as the training set and the rest as the test set. The readers can find more details about the data sets in the original papers.
Following Dudoit et al. (2002) and Fan and Fan (2008), we standardized each sample to zero mean and unit variance. The classification results for ROAD, S-ROAD1, S-ROAD2, SCRDA, FAIR, NSC and NB are shown in Tables 8, 9 and 10. For the leukemia and lung cancer data, ROAD performs the best in terms of classification error. For the neuroblastoma data, NB performs best; however, it makes use of all 10,707 genes, which is not very desirable. In contrast, ROAD has a competitive performance in terms of classification error and it only selects 33 genes. Although SCRDA has a close performance, the number of selected variables varies a lot across the three data sets (264, 2410 and 1). Overall, ROAD is a robust classification tool for high-dimensional data.
Table 8.
ROAD | S-ROAD1 | S-ROAD2 | SCRDA | FAIR | NSC | NB | |
---|---|---|---|---|---|---|---|
Training Error | 0 | 0 | 0 | 1 | 1 | 1 | 0 |
Testing Error | 1 | 3 | 1 | 2 | 1 | 3 | 5 |
No. of selected genes | 40 | 49 | 66 | 264 | 11 | 24 | 7129 |
Table 9.
ROAD | S-ROAD1 | S-ROAD2 | SCRDA | FAIR | NSC | NB | |
---|---|---|---|---|---|---|---|
Training Error | 1 | 1 | 1 | 0 | 0 | 0 | 6 |
Testing Error | 1 | 4 | 1 | 3 | 7 | 10 | 36 |
No. of selected genes | 52 | 56 | 54 | 2410 | 31 | 38 | 12533 |
Table 10.
ROAD | S-ROAD1 | S-ROAD2 | SCRDA | FAIR | NSC | NB | |
---|---|---|---|---|---|---|---|
Training Error | 3 | 22 | 14 | 16 | 15 | 16 | 14 |
Testing Error | 33 | 47 | 37 | 37 | 44 | 35 | 32 |
No. of selected genes | 33 | 1 | 9 | 1 | 18 | 41 | 10707 |
7. Discussion
With a simple two-class Gaussian model, we explored the bright side of using the correlation structure for high dimensional classification. Targeting the classification error directly, ROAD employs the un-regularized pooled sample covariance matrix and sample mean difference vector without suffering from the curse of dimensionality and noise accumulation. The sparsity of the chosen features is evident in the simulations and real data analysis; however, we have not discovered intuitively good conditions on Σ and μd under which a certain desirable sparsity pattern of ŵc follows. We resolve a part of the problem by introducing the screening-based variants of ROAD, but precise control of the sparsity size is worthy of further investigation. Furthermore, one can explore conditions for model selection consistency.
In this paper, we have restricted ourselves to the linear rules. They can be easily extended to nonlinear discriminants via transformations such as low order polynomials or spline basis functions. One may also use the popular “kernel tricks” in the machine learning community. See, for example, Hastie et al. (2009) for more details. After the features are transformed, we can hit the ROAD. One essential technical challenge of the current paper is rooted in a stochastic linear constraint. The precise role of this constraint has not been completely pinned down. Extension of the theoretical properties from binary case to multi-class is also interesting for future research.
Acknowledgements
The authors thank the Editor, the Associate Editor and two referees, whose comments have greatly improved the scope and presentation of the paper. The financial support from NSF grant DMS-0704337 and NIH Grant R01-GM072611 is gratefully acknowledged.
A. Proofs
A.1. Proof of Theorem 1
We now show the first part of the theorem. Let f0(w) = wTμd/(wTΣw)1/2, f1(w) = wTμ̂d/(wTΣw)1/2, and f2(w) = wTμ̂d/(wTΣ̂w)1/2. Then, it follows easily that
where . We now bound both terms separately in the following two steps.
Step 1(bound Λ1): For any w, we have
(20) |
Since maximizes f1(·), it follows that
(21) |
and similarly, noticing that wc maximizes f0(·), we have
(22) |
Combining the results of (21) and (22) and using (20), we conclude that
By the Lipschitz property of Φ,
Step 2(bound Λ2): Note that and ŵc both are in the set {w : wTμd = 1, ‖w‖1 ≤ 1}. Therefore, by definition of minimizers, we have
Consequently,
(23) |
By the same argument, we also have
(24) |
Combination of (23) and (24) leads to
Let g(x) = Φ(x−1/2). The function g is Lipschitz on (0,∞), as g′(x) is bounded on (0,∞). Hence, . Thus,
We now prove the second result of the Theorem. Since , we have
(25) |
By (20), (25), and the first part of the Theorem, we have
This completes the proof of Theorem.
A.2. Proof of Theorem 2
Let wλ = w∞ + γλ. Then, from the definition of wλ, we have
(26) |
where . In the last statement, we used the fact that
We write γ for γλ for short in what follows.
By (26), we have f(γ) ≤ f(0) = 0. This implies that
On the other hand, . Bringing the upper and lower bound of R(γ) together, we conclude that
The proof is now complete.
A.3. Proof of Theorem 5
By the positive definiteness of Σ, Σ−1 and are also positive definite. Let υ = Σ1/2w, then the transformation v ↦ w is linear. Define
where μ̄d = Σ−1/2μd. It is enough to show that vc is piecewise linear in c.
Let Ωc = {v : ‖Σ−1/2v‖1 ≤ c} and S = {v : vTμ̄d = 1}. When c is small, the feasible set Ωc ⋂ S is empty; when c is large, the constraint Ωc is inactive. Denote by “a” the smallest “c” such that Ωc ⋂ S ≠ ∅, and by “b” the smallest c such that vc is the same for all c ≥ b. Hence we are interested in c ∈ [a, b], where changes in c actually affect the solution.
Let P be the projection of the origin O onto the hyperplane S in the p dimensional space. Let
where denotes an i-dimensional face of Ωc, i.e., represents a vertex, an edge, and a facet. It is clear that ℱc is a finite set.
Define a mapping φ : [a, b] → ℤ × ℤ, where φ(c) = (i, j) such that i) and ii) i is minimal. By definition, this mapping is single valued.
For any c0 ∈ (a, b], denote Dc0 = {(i, j)|∀ε > 0, ∃c ∈ [c0 − ε, c0) s.t. φ(c) = (i, j)}. The set Dc0 is non-empty because the collection is finite. Then the theorem follows from compactness of [a, b] and Lemma 2, Remark 4 and Lemma 3.
Lemma 1. ∀c0 ∈ (a, b], ∃ε > 0 such that ∀(i, j) ∈ Dc0 and ∀c ∈ (c0 − ε, c0), , where is the projection of P onto , and denotes the i-dimensional affine space in which embeds, and is the interior of , where the topology is the natural subspace topology restricted to .
Proof. Fix c0 ∈ (a, b]. For any (i, j) ∈ Dc0 and ε̄ > 0, by the definition of Dc0, there exists c′ ∈ [c0 − ε̄, c0) such that φ(c′) = (i, j). The minimality of i in the definition for φ implies that , which in the interior of . Therefore, . By arbitrariness of ε̄, ∃(cn) ↗ c0 such that for all n.
It can also be shown that is connected: let . For any is on the line segment with endpoints because are parallel affine subspace in ℝp. Let , then it is a cone. Since , we have . Then, . Hence, ∃εij > 0 such that for all c ∈ [c0 − εij, c0), . Take ε = min(i,j)∈Dc0 εij, the claim follows.
Lemma 2. ∀c0 ∈ (a, b], Dc0 is a singleton, and ∃ε′ > 0 such that vc is linear in c on (c0 − ε′, c0).
Proof. Fix c0 ∈ (a, b]. We claim that for some (i, j) ∈ Dc0, there exists positive ε′(≤ ε that validates Lemma 1) such that for any c ∈ (c0 − ε′, c0), . Assume that the claim is not correct, then pick any (i, j) ∈ Dc0, there exists a sequence {ck} (ck ≠ ck′ if k ≠ k′) converging to c0 from the left s.t. . Without loss of generality, take {ck} ⊂ (c0 − ε, c0). Lemma 1 implies that . If , we would have . Hence . By finiteness of the index pairs in ℱc, there exists (i′, j′) ≠ (i, j) such that φ(c) = (i′, j′) for c ∈ {ckl}, where {ckl} is some subsequence of {ck}. This implies (i′, j′) ∈ Dc0, which together with Lemma 1 implies for c ∈ {ckl}. Therefore
for c ∈ {ckl}.
On the other hand, because (i, j) ∈ Dc0, there exist infinitely many c′ ∈ (c0 − ε, c0) such that . Therefore,
changes sign infinitely many times on (c0 − ε, c0). This leads to a contradiction because are both linear functions of c. Hence, the conclusion holds.
To show that Dc0 is a singleton, suppose it has two distinct elements (i, j) and (i′, j′). We have shown that for all c in a left neighborhood of c0 (not including c0). Also, we have by Lemma 1. This can be true only when (or vice versa), but then i < i′, contradicting the minimality in the definition of Dc0.
Remark 4. Similarly, ∀c0 ∈ [a, b), ∃ε′ > 0 such that vc is linear in c on (c0, c0 + ε′).
Lemma 3. vc is a continuous function of c on [a, b].
Proof. The continuity follows from the two parts i) and ii) below.
-
∀c0 ∈ [a, b), ∃ε > 0 such that vc is continuous on [c0, c0 + ε). Indeed, let
We know that the mapping is linear and hence continuous on (c0, c0 + ε) for some small ε > 0. It only remains to show that the mapping is right-continuous at c0. Notice here for c ∈ (c0, c0 + ε). Let . It is clear that . Because L ∈ Ωc0 ∩ S, . This inequality must hold with equality because h(·) is monotone decreasing and as c approaches c0 from the right. Because vc0 is unique, .
-
∀c0 ∈ (a, b], ∃ε > 0 such that vc is continuous on (c0 − ε, c0]. Again, it remains to show that there is no jump at c0. Let (ic0, jc0) = φ(c0). Clearly . We introduce a notion of parallelism of affine subspaces in ℝp: we write if , by translation alone, becomes a subset of S (or vice versa); we use the notation otherwise.
If , then for c in some left neighborhood of c0, exists and . Note that , and as c approaches c0 from the left. Since h(·) is monotone decreasing, obviously . This shows the left continuity of h at c0. Suppose Dc0 = {(i, j)}; then on a left neighborhood of c0 (not including c0), . Let ; then E ∈ Ωc0 ∩ S. Note that for all c in a left neighborhood of c0, so we have . On the other hand, by the definition of . Also, by the uniqueness of the distance-minimizing point in Ωc0 ∩ S to the origin , and hence vc is left-continuous at c0.
If , then ∃Q ∈ Ωc0−ε/2 ∩ S such that . When c goes from c0 − ε/2 to c0, there exists a point Qc ∈ Ωc ∩ S moving on the line segment from Q to . Therefore, h(·) is left-continuous at c0. Replacing by Qc in the previous paragraph, the left continuity of vc at c0 follows from the same argument.
References
- Ackermann M, Strimmer K. A general modular framework for gene set enrichment analysis. BMC Bioinformatics. 2009;10:47. doi: 10.1186/1471-2105-10-47.
- Antoniadis A, Lambert-Lacroix S, Leblanc F. Effective dimension reduction methods for tumor classification using gene expression data. Bioinformatics. 2003;19:563–570. doi: 10.1093/bioinformatics/btg062.
- Bair E, Hastie T, Paul D, Tibshirani R. Prediction by supervised principal components. J. Amer. Statist. Assoc. 2006;101:119–137.
- Bickel P, Levina E. Some theory for Fisher's linear discriminant function, "naive Bayes", and some alternatives when there are many more variables than observations. Bernoulli. 2004;10:989–1010.
- Boulesteix A-L. PLS dimension reduction for classification with microarray data. Stat. Appl. Genet. Mol. Biol. 2004;3: Art. 33 (electronic). doi: 10.2202/1544-6115.1075.
- Boyd S, Vandenberghe L. Convex Optimization. Cambridge University Press; 2004.
- Breheny P, Huang J. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. Annals of Applied Statistics. 2011;5:232–253. doi: 10.1214/10-AOAS388.
- Domingos P, Pazzani M. On the optimality of the simple Bayesian classifier under zero-one loss. Mach. Learn. 1997;29:103–130.
- Donoho DL, Johnstone IM. Ideal spatial adaptation by wavelet shrinkage. Biometrika. 1994;81:425–455.
- Dudoit S, Fridlyand J, Speed TP. Comparison of discrimination methods for the classification of tumors using gene expression data. J. Amer. Statist. Assoc. 2002;97:77–87.
- Efron B, Hastie T, Johnstone I, Tibshirani R. Least angle regression. Ann. Statist. 2004;32:407–499.
- Fan J, Fan Y. High dimensional classification using features annealed independence rules. Ann. Statist. 2008;36:2605–2637. doi: 10.1214/07-AOS504.
- Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001;96:1348–1360.
- Fan J, Lv J. Sure independence screening for ultra-high dimensional feature space (with discussion). J. R. Statist. Soc. B. 2008;70:849–911. doi: 10.1111/j.1467-9868.2008.00674.x.
- Fan J, Lv J. A selective overview of variable selection in high dimensional feature space. Statistica Sinica. 2010;20:101–148.
- Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, Bloomfield CD, Lander ES. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999;286:531–537. doi: 10.1126/science.286.5439.531.
- Gordon GJ, Jensen RV, Hsiao L-L, Gullans SR, Blumenstock JE, Ramaswamy S, Richards WG, Sugarbaker DJ, Bueno R. Translation of microarray data into clinically relevant cancer diagnostic tests using gene expression ratios in lung cancer and mesothelioma. Cancer Research. 2002;62:4963–4967.
- Guo Y, Hastie T, Tibshirani R. Regularized discriminant analysis and its application in microarrays. Biostatistics. 2005;1:1–18. doi: 10.1093/biostatistics/kxj035.
- Hastie T, Tibshirani R, Friedman JH. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd edition. Springer-Verlag Inc.; 2009.
- Huang X, Pan W. Linear regression and two-class classification with gene expression data. Bioinformatics. 2003;19:2072–2078. doi: 10.1093/bioinformatics/btg283.
- Lewis DD. Naive (Bayes) at forty: the independence assumption in information retrieval. Springer Verlag; 1998. pp. 4–15.
- Li K-C. Sliced inverse regression for dimension reduction. J. Amer. Statist. Assoc. 1991;86:316–342. With discussion and a rejoinder by the author.
- Nguyen DV, Rocke DM. Tumor classification by partial least squares using microarray gene expression data. Bioinformatics. 2002;18:39–50. doi: 10.1093/bioinformatics/18.1.39.
- Oberthuer A, Berthold F, Warnat P, Hero B, Kahlert Y, Spitz R, Ernestus K, König R, Haas S, Eils R, Schwab M, Brors B, Westermann F, Fischer M. Customized oligonucleotide microarray gene expression based classification of neuroblastoma patients outperforms current clinical risk stratification. Journal of Clinical Oncology. 2006;24:5070–5078. doi: 10.1200/JCO.2006.06.1879.
- Rosset S, Zhu J. Piecewise linear regularized solution paths. Ann. Statist. 2007;35:1012–1030.
- Ruszczynski A. Nonlinear Optimization. Princeton University Press; 2006.
- Shao J, Wang Y, Deng X, Wang S. Sparse linear discriminant analysis by thresholding for high dimensional data. Ann. Statist. 2011;39, to appear.
- Tibshirani R. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B. 1996;58:267–288.
- Tibshirani R, Hastie T, Narasimhan B, Chu G. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc. Natl. Acad. Sci. 2002;99:6567–6572. doi: 10.1073/pnas.082099299.
- Tseng P. Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. 2001;109:475–494.
- Vapnik VN. The Nature of Statistical Learning Theory. New York: Springer-Verlag; 1995.
- Wang S, Zhu J. Improved centroids estimation for the nearest shrunken centroid classifier. Bioinformatics. 2007;23:972–979. doi: 10.1093/bioinformatics/btm046.
- Zhang C-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010;38:894–942.
- Zhao DS, Li Y. Principled sure independence screening for Cox models with ultra-high-dimensional covariates. 2010. Manuscript. doi: 10.1016/j.jmva.2011.08.002.
- Zou H. The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc. 2006;101:1418–1429.
- Zou H, Hastie T. Regularization and variable selection via the elastic net. J. R. Statist. Soc. B. 2005;67:301–320.
- Zou H, Hastie T, Tibshirani R. Sparse principal component analysis. J. Comput. Graph. Statist. 2006;15:265–286.
- Zou H, Li R. One-step sparse estimates in nonconcave penalized likelihood models. Ann. Statist. 2008;36:1509–1533. doi: 10.1214/009053607000000802.