Abstract
We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of the predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating non-polynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.
Key words and phrases: Classification, dimension reduction, quadratic discriminant analysis, Rayleigh quotient, oracle inequality
1. Introduction
Rapid developments of imaging technology, microarray data studies and many other applications call for the analysis of high-dimensional binary-labeled data. We consider the problem of finding a “nice” projection f : ℝd → ℝ that embeds all data into the real line. A projection such as f has applications in many statistical problems for analyzing high-dimensional binary-labeled data, including:
Dimension reduction: f provides a data reduction tool for people to visualize the high-dimensional data in a one-dimensional space.
Classification: f can be used to construct classification rules. With a carefully chosen set A ⊂ ℝ, we can classify a new data point x ∈ ℝd by checking whether or not f(x) ∈ A.
Feature selection: when f(x) only depends on a small number of coordinates of x, this projection selects just a few features from numerous observed ones.
A natural question is what kind of f is a “nice” projection? It depends on the goal of statistical analysis. For classification, a good f should yield a small classification error. In feature selection, different criteria select distinct features, and they may suit different real problems. In this paper, we propose using the following criterion for finding f:
Under the mapping f, the data are as “separable” as possible between two classes, and as “coherent” as possible within each class.
This can be formulated as maximizing the Rayleigh quotient of f. Suppose all data are drawn independently from a joint distribution of (X, Y), where X ∈ ℝd, and Y ∈ {0, 1} is the label. The Rayleigh quotient of f is defined as
(1)  Rq(f) = var{𝔼[f(X)|Y]} / 𝔼{var[f(X)|Y]}.
Here, the numerator is the variance of f(X) explained by the class label, and the denominator is the remaining variance of f(X). Simple calculation shows that Rq(f) = π (1 − π)R(f), where π ≡ ℙ(Y = 0) and
(2)  R(f) = {𝔼[f(X)|Y = 0] − 𝔼[f(X)|Y = 1]}² / {π var[f(X)|Y = 0] + (1 − π) var[f(X)|Y = 1]}.
Our goal is to develop a data-driven procedure to find f̂ such that Rq(f̂) is large, and f̂ is sparse in the sense that it depends on few coordinates of X.
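To make the criterion concrete, the following small sketch (added here for illustration and not part of the original paper; the data, the projection and all names are arbitrary) evaluates the empirical analogue of (1) for a given projection f on binary-labeled data.

```python
import numpy as np

def rayleigh_quotient(f, X, y):
    """Empirical version of (1): variance of the class means of f(X),
    divided by the label-weighted within-class variance of f(X)."""
    z = np.apply_along_axis(f, 1, X)             # f applied to each row of X
    pi0 = np.mean(y == 0)
    m0, m1 = z[y == 0].mean(), z[y == 1].mean()
    v0, v1 = z[y == 0].var(), z[y == 1].var()
    between = pi0 * (1 - pi0) * (m0 - m1) ** 2    # var{E[f(X)|Y]}
    within = pi0 * v0 + (1 - pi0) * v1            # E{var[f(X)|Y]}
    return between / within

# toy check: a quadratic projection on synthetic two-class Gaussian data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(0.5, 1.5, (200, 5))])
y = np.repeat([0, 1], 200)
f = lambda x: x @ x - 2 * np.sum(x)   # f(x) = x'Ωx - 2δ'x with Ω = I, δ = 1
print(rayleigh_quotient(f, X, y))
```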
The Rayleigh quotient, as a criterion for finding a projection f, serves different purposes. First, for dimension reduction, it takes care of both variance explanation and label explanation. In contrast, methods such as principal component analysis (PCA) only consider variance explanation. Second, when the data are normally distributed, a monotone transform of the Rayleigh quotient approximates the classification error; see Section 6. Therefore, an f with a large Rayleigh quotient enables us to construct nice classification rules. In addition, it is a convex optimization to maximize the Rayleigh quotient among linear and quadratic f (see Section 3), while minimizing the classification error is not. Third, with appropriate regularization, this criterion provides a new feature selection tool for data analysis.
The criterion (1), initially introduced by Fisher (1936) for classification, is known as Fisher’s linear discriminant analysis (LDA). In the literature of sufficient dimension reduction, the sliced inverse regression (SIR) proposed by Li (1991) can also be formulated as maximizing (1), where Y can be any variable not necessarily binary. In both LDA and SIR, f is restricted to be a linear function, and the dimension d cannot be larger than n. In this sense, our work compares directly to various versions of LDA and SIR generalized to nonlinear, high-dimensional settings. We provide a more detailed comparison to the literature in Section 8, but preview here the uniqueness of our work. First, we consider a setting where X|Y has an elliptical distribution and f is a quadratic function, which allows us to derive a simplified version of (1) and gain extra statistical efficiency; see Section 2 for details. This simplified version of (1) was never considered before. Furthermore, the assumption of conditional elliptical distribution does not satisfy the requirement of SIR and many other dimension reduction methods [Cook and Weisberg (1991), Li (1991)]. In Section 1.2, we explain the motivation of the current setting. Second, we utilize robust estimators of mean and covariance matrix, while many generalizations of LDA and SIR are based on sample mean and sample covariance matrix. As shown in Section 4, the robust estimators adapt better to heavy tails on the data. It is worth noting that QUADRO only considers the projection to a one-dimensional subspace. In contrast, more sophisticated dimension reduction methods (e.g., the kernel SIR) are able to find multiple projections f1, …, fm for m > 1. This reflects a tradeoff between modeling tractability and flexibility. More specifically, QUADRO achieves better computational and theoretical properties at the cost of sacrificing some flexibility.
1.1. Rayleigh quotient and classification error
Many popular statistical methods for analyzing high-dimensional binary-labeled data are based on classification error minimization, which is closely related to the Rayleigh quotient maximization. We summarize their connections and differences as follows:
(a) In an “ideal” setting where the two classes follow multivariate normal distributions with a common covariance matrix and the class of linear functions f is considered, the two criteria are exactly the same, with one being a monotone transform of the other.
(b) In a “relaxed” setting where the two classes follow multivariate normal distributions with unequal covariance matrices and the class of quadratic functions f (including linear functions as special cases) is considered, the two criteria are closely related in the sense that a monotone transform of the Rayleigh quotient approximates the classification error.
(c) In other settings, the two criteria can be very different.
We now show (a) and (c), and will discuss (b) in Section 6.
For each f, we define a family of classifiers hc(x) = I{f(x) < c} indexed by c, where I(·) is the indicator function. For each given c, we define the classification error of hc to be err(hc) ≡ ℙ(hc(X) ≠ Y). The classification error of f is then defined by

Err(f) ≡ inf_{c ∈ ℝ} err(hc).
Most existing classification procedures aim at finding a data-driven projection f̂ such that Err(f̂) is small (the threshold c is usually easy to choose). Examples include linear discriminant analysis (LDA) and its variations in high dimensions [e.g., Cai and Liu (2011), Fan and Fan (2008), Fan, Feng and Tong (2012), Guo, Hastie and Tibshirani (2005), Han, Zhao and Liu (2013), Shao et al. (2011), Witten and Tibshirani (2011)], quadratic discriminant analysis (QDA), support vector machine (SVM), logistic regression, boosting, etc.
We now compare Rq(f) and Err(f). Let π = ℙ(Y = 0), μ1 = 𝔼(X|Y = 0), Σ1 = cov(X|Y = 0), μ2 = 𝔼(X|Y = 1) and Σ2 = cov(X|Y = 1). We consider linear functions {f(x) = a⊤x + b : a ∈ ℝd, b ∈ ℝ}, and write Rq(a) = Rq(a⊤x), Err(a) = Err(a⊤x) for short. By direct calculation, when the two classes have a common covariance matrix Σ,

Rq(a) = π(1 − π)[a⊤(μ1 − μ2)]² / (a⊤Σa).
Hence, the optimal aR = Σ⁻¹(μ1 − μ2). On the other hand, when data follow multivariate normal distributions, the optimal classifier is h(x) = I{aE⊤x < cE}, where aE = Σ⁻¹(μ1 − μ2) and cE = aE⊤(μ1 + μ2)/2 + log[(1 − π)/π]. It is observed that aR = aE and the two criteria are the same. In fact, for all vectors a such that a⊤(μ1 − μ2) > 0,

err(hc) = 1 − Φ( a⊤(μ1 − μ2) / [2√(a⊤Σa)] ) = 1 − Φ( √{Rq(a)/[4π(1 − π)]} ),
where Φ is the distribution function of a standard normal random variable, and we fix c = a⊤(μ1 + μ2)/2. Therefore, the classification error is a monotone transform of the Rayleigh quotient.
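The displayed relation can be checked numerically. The sketch below (added here; all parameter values are arbitrary illustrations) compares a Monte Carlo estimate of the error of the Fisher rule at the midpoint threshold with the 1 − Φ transform of its Rayleigh quotient.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
d, n, pi = 5, 200000, 0.55
mu1, mu2 = np.zeros(d), 0.4 * np.ones(d)
A = rng.normal(size=(d, d)); Sigma = A @ A.T + d * np.eye(d)  # common covariance
a = np.linalg.solve(Sigma, mu1 - mu2)                         # Fisher direction
c = a @ (mu1 + mu2) / 2                                       # midpoint threshold

# Monte Carlo classification error of h_c(x) = 1{a'x < c}
n1 = rng.binomial(n, pi)
X1 = rng.multivariate_normal(mu1, Sigma, n1)        # class Y = 0
X2 = rng.multivariate_normal(mu2, Sigma, n - n1)    # class Y = 1
err_mc = (np.sum(X1 @ a < c) + np.sum(X2 @ a >= c)) / n

# Rayleigh quotient of a'x and the analytic error formula displayed above
Rq = pi * (1 - pi) * (a @ (mu1 - mu2)) ** 2 / (a @ Sigma @ a)
err_formula = norm.sf(np.sqrt(Rq / (4 * pi * (1 - pi))))
print(err_mc, err_formula)    # the two numbers should agree closely
```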
When we move away from these ideal assumptions, the above two criteria can be very different. We illustrate this point using a bivariate distribution, that is, d = 2, with different covariance matrices. Specifically, π = 0.55, μ1 = (0, 0)⊤, μ2 = (1.28, 0.8)⊤, Σ1 = diag(1, 1) and Σ2 = diag(3, 1/3). We still consider linear functions f(x) = a⊤x but select only one out of the two features, X1 or X2. Then the maximum Rayleigh quotients, by using each of the two features alone, are 0.853 and 0.923, respectively, whereas the minimum classification errors are 0.284 and 0.295, respectively. As a result, under the criterion of maximizing Rayleigh quotient, Feature 2 is selected, whereas under the criterion of minimizing classification error, Feature 1 is selected. Figure 1 displays the distributions of data after being projected to each of the two features. It shows that since data from the second class has a much larger variability at Feature 1 than at Feature 2, the Rayleigh quotient maximization favors Feature 2, although Feature 1 yields a smaller classification error.
Fig. 1.
An example in ℝ2. The green and purple represent class 1 and class 2, respectively. The ellipses are contours of distributions. Probability densities after being projected to X1 and X2 are also displayed. The dotted lines correspond to optimal thresholds for classification using each feature.
1.2. Objective of the paper
In this paper, we consider the Rayleigh quotient maximization problem in the following setting:
We consider sparse quadratic functions, that is, f(x) = x⊤Ωx − 2δ⊤x, where Ω is a sparse d × d symmetric matrix, and δ is a sparse d-dimensional vector.
The two classes can have different covariance matrices.
Data from these two classes follow elliptical distributions.
The dimension is large (it is possible that d ≫ n).
Compared to Fisher’s LDA, our setting has several new ingredients. First, we go beyond linear classifiers to enhance flexibility. It is well known that the linear classifiers are inefficient. For example, when two classes have the same mean, linear classifiers perform no better than random guesses. Instead of exploring arbitrary nonlinear functions, we consider the class of quadratic functions so that the Rayleigh quotient still has a nice parametric formulation, and at the same time it helps identify interaction effects between features. Second, we drop the requirement that the two classes share a common covariance matrix, which is a critical condition for Fisher’s rule and many other high-dimensional classification methods [e.g., Cai and Liu (2011), Fan and Fan (2008), Fan, Feng and Tong (2012)]. In fact, by using quadratic discriminant functions, we take advantage of the difference of covariance matrices between the two classes to enhance classification power. Third, we generalize multivariate normal distributions to the elliptical family, which includes many heavy-tailed distributions, such as multivariate t-distributions, Laplace distributions, and Cauchy distributions. This family of distributions allows us to avoid estimating all O(d4) fourth cross-moments of d predictors in computing the variance of quadratic statistics and hence overcomes the computation and noise accumulation issues.
In our setting, Fisher’s rule, that is, aR = Σ⁻¹(μ1 − μ2), no longer maximizes the Rayleigh quotient. We propose a new method, called quadratic dimension reduction via Rayleigh optimization (QUADRO). It is a Rayleigh-quotient-oriented procedure and is a statistical tool for simultaneous dimension reduction and feature selection. QUADRO has several properties. First, it is a statistically efficient generalization of Fisher’s linear discriminant analysis to the quadratic setting. A naive generalization involves estimation of all fourth cross-moments of the two underlying distributions. In contrast, QUADRO only requires estimating a one-dimensional kurtosis parameter. Second, QUADRO adopts rank-based estimators and robust M-estimators of the covariance matrices and the means. Therefore, it is robust to possibly heavy-tailed distributions. Third, QUADRO can be formulated as a convex program and is computationally efficient.
Theoretically, we prove that under elliptical models, the Rayleigh quotient of the estimated quadratic function f̂ converges to the population maximum Rayleigh quotient at rate OP(s√(log d/n)), where s is the number of important features (counting both single terms and interaction terms). In addition, we establish a connection between our method and quadratic discriminant analysis (QDA) under elliptical models.
The rest of this paper is organized as follows. Section 2 formulates Rayleigh quotient maximization as a convex optimization problem. Section 3 describes QUADRO. Section 4 discusses rank-based estimators and robust M-estimators used in QUADRO. Section 5 presents theoretical analysis. Section 6 discusses the application of QUADRO in elliptically distributed classification problems. Section 7 contains numerical studies. Section 8 concludes the paper. All proofs are collected in Section 9.
Notation
For 0 ≤ q ≤ ∞, |v|q denotes the Lq-norm of a vector v, |A|q denotes the elementwise Lq-norm of a matrix A and ||A||q denotes the matrix Lq-norm of A. When q = 2, we omit the subscript q. λmin(A) and λmax(A) denote the minimum and maximum eigenvalues of A. det(A) denotes the determinant of A. Let I (·) be the indicator function: for any event B, I (B) = 1 if B happens and I (B) = 0 otherwise. Let sign(·) be the sign function, where sign(u) = 1 when u ≥ 0 and sign(u) = −1 when u < 0.
2. Rayleigh quotient for quadratic functions
We first study the population form of Rayleigh quotient for an arbitrary quadratic function. We show that it has a simplified form under the elliptical family.
For a quadratic function Q(x) = Q(x; Ω, δ) = x⊤Ωx − 2δ⊤x,
using (2), its Rayleigh quotient is
(3)  R(Ω, δ) = {𝔼[Q(X)|Y = 0] − 𝔼[Q(X)|Y = 1]}² / {var[Q(X)|Y = 0] + κ var[Q(X)|Y = 1]},   κ = (1 − π)/π,
up to a constant multiplier. The Rayleigh quotient maximization can be expressed as

max_{Ω = Ω⊤, δ} R(Ω, δ).
2.1. General setting
Suppose 𝔼(Z) = μ and cov(Z) = Σ. By direct calculation,

𝔼[Q(Z)] = tr(ΩΣ) + μ⊤Ωμ − 2δ⊤μ.
So 𝔼[Q(Z)] is a linear combination of the elements in {Ω(i, j), 1 ≤ i ≤ j ≤ d; δ(i), 1 ≤ i ≤ d}, and var[Q(Z)] is a quadratic form of these elements. The coefficients in 𝔼[Q(Z)] are functions of μ and Σ only. However, the coefficients in var[Q(Z)] also depend on all the fourth cross-moments of Z, and there are O(d4) of them.
Let us define M1(Ω, δ) = 𝔼[Q(X)|Y = 0], L1(Ω, δ) = var[Q(X)|Y = 0] and M2(Ω, δ), L2(Ω, δ) similarly. Also, let κ = (1 − π)/π. We have

R(Ω, δ) = [M1(Ω, δ) − M2(Ω, δ)]² / [L1(Ω, δ) + κL2(Ω, δ)].
Therefore, both the numerator and denominator are quadratic combinations of the elements in Ω and δ. We can stack the d(d + 1)/2 elements in Ω (assuming it is symmetric) and the d elements in δ into a long vector v. Then R(Ω, δ) can be written as

R(v) = (a⊤v)² / (v⊤Av),

where a is a d′ × 1 vector, A is a d′ × d′ positive semi-definite matrix and d′ = d(d + 1)/2 + d. A and a are determined by the coefficients in the denominator and numerator of R(Ω, δ), respectively. Now, max_{(Ω, δ)} R(Ω, δ) is equivalent to maxv R(v). It has explicit solutions. For example, when A is positive definite, the function R(v) is maximized at v* = A⁻¹a. We can then reshape v* to get the desired (Ω*, δ*).
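As a quick numerical illustration of this stacking argument (added here; A and a are randomly generated stand-ins for the true coefficient matrix and vector), the maximizer v* = A⁻¹a indeed attains the largest value of R(v):

```python
import numpy as np

rng = np.random.default_rng(2)
d_prime = 20                          # stands in for d(d+1)/2 + d
B = rng.normal(size=(d_prime, d_prime))
A = B @ B.T + np.eye(d_prime)         # positive definite
a = rng.normal(size=d_prime)

R = lambda v: (a @ v) ** 2 / (v @ A @ v)
v_star = np.linalg.solve(A, a)        # claimed maximizer v* = A^{-1} a

# R at v* versus R at random directions: v* should never be beaten
vals = [R(rng.normal(size=d_prime)) for _ in range(10000)]
print(R(v_star), max(vals))           # R(v*) = a' A^{-1} a is the larger one
```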
Practical implementation of the above idea is infeasible in high dimensions as it involves O(d4) cross moments of Z. This not only poses computational challenges, but also accumulates noise in the estimation. Furthermore, good estimates of fourth moments usually require the existence of eighth moments, which is not realistic for many heavy-tailed distributions. These problems can be avoided under the elliptical family, as we now illustrate in the next subsection.
2.2. Elliptical distributions
The elliptical family contains multivariate distributions whose densities have elliptical contours. It generalizes multivariate normal distributions and inherits many of their nice properties.
Given a d × 1 vector μ and a d × d positive definite matrix Σ, a random vector Z that follows an elliptical distribution admits
(4)  Z  =d  μ + ξΣ^{1/2}U,
where U is a random vector which follows the uniform distribution on the unit sphere Sd−1, and ξ is a nonnegative random variable independent of U. Denote the elliptical distribution by ℰ(μ, Σ, g), where g is the density of ξ. In this paper, we always assume that 𝔼(ξ⁴) < ∞ and require that 𝔼(ξ²) = d for model identifiability. Then Σ is the covariance matrix of Z.
Proposition 2.1
Suppose Z follows an elliptical distribution as in (4). Then

var[Q(Z)] = 2(1 + γ) tr(ΩΣΩΣ) + γ[tr(ΩΣ)]² + 4(Ωμ − δ)⊤Σ(Ωμ − δ),

where γ = 𝔼(ξ⁴)/[d(d + 2)] − 1 is the kurtosis parameter.
The proof is given in the online supplementary material [Fan et al. (2014)]. The variance of Q(Z) does not involve any fourth cross-moments, but only the kurtosis parameter γ. For multivariate normal distributions, ξ² follows a χ²-distribution with d degrees of freedom, and γ = 0. For a multivariate t-distribution with ν > 4 degrees of freedom, we have γ = 2/(ν − 4).
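The variance formula in Proposition 2.1, in the form given above, can be verified by simulation. The sketch below (an added illustration; all specific parameter values are arbitrary) compares a Monte Carlo estimate of var[Q(Z)] with the closed form for a multivariate t-distribution with ν = 10, so that γ = 1/3.

```python
import numpy as np

rng = np.random.default_rng(3)
d, nu, n = 4, 10, 1_000_000            # multivariate t: gamma = 2/(nu - 4) = 1/3
gamma = 2 / (nu - 4)

mu = np.array([1.0, -0.5, 0.2, 0.0])
A = rng.normal(size=(d, d)); Sigma = A @ A.T + d * np.eye(d)
Omega = np.diag([1.0, 0.5, -0.3, 0.0]); Omega[0, 1] = Omega[1, 0] = 0.2
delta = np.array([0.3, -0.2, 0.1, 0.4])

# simulate Z = mu + G / sqrt(W/nu), scaled so that cov(Z) = Sigma exactly
G = rng.multivariate_normal(np.zeros(d), (nu - 2) / nu * Sigma, size=n)
W = rng.chisquare(nu, size=n)
Z = mu + G / np.sqrt(W / nu)[:, None]

Q = np.einsum('ij,jk,ik->i', Z, Omega, Z) - 2 * Z @ delta   # Q(Z) for each sample
var_mc = Q.var()

OS = Omega @ Sigma
var_formula = (2 * (1 + gamma) * np.trace(OS @ OS)
               + gamma * np.trace(OS) ** 2
               + 4 * (Omega @ mu - delta) @ Sigma @ (Omega @ mu - delta))
print(var_mc, var_formula)             # should agree up to Monte Carlo error
```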
2.3. Rayleigh optimization
We assume that the two classes both follow elliptical distributions: X|(Y = 0) ~ ℰ(μ1, Σ1, g1) and X|(Y = 1) ~ ℰ (μ2, Σ2, g2). To facilitate the presentation, we assume the quantity γ is the same for both classes of conditional distributions. Let
(5)  M(Ω, δ) = [tr(ΩΣ1) + μ1⊤Ωμ1 − 2δ⊤μ1] − [tr(ΩΣ2) + μ2⊤Ωμ2 − 2δ⊤μ2],
     Lk(Ω, δ) = 2(1 + γ) tr(ΩΣkΩΣk) + γ[tr(ΩΣk)]² + 4(Ωμk − δ)⊤Σk(Ωμk − δ),
for k = 1 and 2. Combining (3) with Proposition 2.1, we have
(6)  R(Ω, δ) = [M(Ω, δ)]² / [L1(Ω, δ) + κL2(Ω, δ)],
where κ = (1 − π)/π.
Note that if we multiply both Ω and δ by a common constant, R(Ω, δ) remains unchanged. Therefore, maximizing R(Ω, δ) is equivalent to solving the following constrained minimization problem:
(7)  min_{Ω, δ} {L1(Ω, δ) + κL2(Ω, δ)}   subject to   M(Ω, δ) = 1.
We call problem (7) the Rayleigh optimization. It is a convex problem whenever Σ1 and Σ2 are both positive semi-definite.
The formulation of the Rayleigh optimization only involves the means and covariance matrices, and the kurtosis parameter γ. Therefore, if we know γ (e.g., when we know which subfamily the distributions belong to) and have good estimates (μ̂1, μ̂2, Σ̂1, Σ̂2), we can solve the empirical version of (7) to obtain (Ω̂, δ̂), which is the main idea of QUADRO. In addition, (7) is a convex problem, with a quadratic objective and equality constraints. Hence it can be solved efficiently by many optimization algorithms.
3. Quadratic dimension reduction via Rayleigh optimization
Now, we formally introduce the QUADRO procedure. We fix a model parameter γ ≥ 0. Let M̂, L̂1 and L̂2 be the sample versions of M, L1, L2 in (5) by replacing (μ1, μ2, Σ1, Σ2) with their estimates. Details of these estimates will be given in Section 4. Let π̂ = n1/(n1 + n2) and κ = (1 − π̂)/π̂. Given tuning parameters λ1 > 0 and λ2 > 0, we solve
(8)  (Ω̂, δ̂) = argmin_{Ω = Ω⊤, δ} {L̂1(Ω, δ) + κL̂2(Ω, δ) + λ1|Ω|1 + λ2|δ|1}   subject to   M̂(Ω, δ) = 1.
We propose a linearized augmented Lagrangian method to solve (8). To simplify the notation, we write L̂ = L̂1 + κL̂2, and omit the hat symbol on M and L when there is no confusion. The optimization problem is then

min_{Ω = Ω⊤, δ} {L(Ω, δ) + λ1|Ω|1 + λ2|δ|1}   subject to   M(Ω, δ) = 1.
For an algorithm parameter ρ > 0, and a dual variable ν, we define the augmented Lagrangian as

Fρ(Ω, δ, ν) = L(Ω, δ) + ν[M(Ω, δ) − 1] + (ρ/2)[M(Ω, δ) − 1]².
Using zero as the initial value, we iteratively update:
δ(k) = argminδ{Fρ(Ω(k−1), δ, ν(k−1)) + λ2|δ|1},
Ω(k) = argminΩ: Ω= Ω⊤{Fρ (Ω, δ(k), ν(k−1)) + λ1|Ω|1},
ν(k) = ν(k−1) + ρ[M(Ω(k), δ(k)) − 1].
Here, the first two steps are primal updates, and the third step is a dual update.
First, we consider the update of δ. When Ω and ν are fixed, we can write

Fρ(Ω, δ, ν) = δ⊤Aδ − 2δ⊤b + cρ(Ω, ν),

where

(9)  A = 4(Σ1 + κΣ2) + 2ρ(μ1 − μ2)(μ1 − μ2)⊤,
     b = 4(Σ1Ωμ1 + κΣ2Ωμ2) + [ν + ρ(M(Ω, 0) − 1)](μ1 − μ2),

and cρ(Ω, ν) does not depend on δ. Note that A is a positive semi-definite matrix. The update of δ is indeed a Lasso problem.
Next, we consider the update of Ω. When δ and ν are fixed, Fρ(Ω, δ, ν) is a convex function of Ω. We propose an approximate update step: we first “linearize” Fρ at Ω = Ω(k−1) to construct an upper envelope F̄ρ, and then minimize this upper envelope. In detail, at any Ω = Ω0, we consider the following upper bound of Fρ(Ω, δ, ν):

F̄ρ(Ω, δ, ν) = Fρ(Ω0, δ, ν) + tr[(Ω − Ω0)⊤∇ΩFρ(Ω0, δ, ν)] + (τ/2)|Ω − Ω0|²,

where τ is a large enough constant (any τ no smaller than the largest eigenvalue of the Hessian of Fρ with respect to vec(Ω) suffices). We then minimize F̄ρ(Ω, δ, ν) + λ1|Ω|1 to update Ω. This modified update step has an explicit solution: writing G = ∇ΩFρ(Ω0, δ, ν) and sym(B) = (B + B⊤)/2 for any square matrix B,

Ω*(i, j) = 𝒮(Ω0(i, j) − sym(G)(i, j)/τ, λ1/τ),   1 ≤ i, j ≤ d,

where 𝒮(x, a) ≡ (|x| − a)+ sign(x) is the soft-thresholding function. We can write Ω* in a matrix form. Let

(10)  D = D(Ω0, δ, ν) = Ω0 − τ⁻¹ sym[∇ΩFρ(Ω0, δ, ν)].

By direct calculation, Ω* = 𝒮(D, λ1/τ), where the soft-thresholding is applied elementwise.
We now describe our algorithm. Let us initialize Ω(0) = 0d×d, δ(0) = 0 and ν(0) = 0. At iteration k, the algorithm updates as follows:
Compute A = A(Ω(k−1), δ(k−1), ν(k−1)) and b = b(Ω(k−1), δ(k−1), ν(k−1)) using (9). Update δ(k) = argminδ{δ⊤Aδ − 2δ⊤b + λ2|δ|1}.
Compute D = D(Ω(k−1), δ(k−1), ν(k−1)) using (10). Update Ω(k) = 𝒮(D, λ1/τ).
Update ν(k) = ν(k−1) + ρ[M(Ω(k), δ(k)) − 1].
Iterate until max{ρ|Ω(k) − Ω(k−1)|, ρ|δ(k) − δ(k−1)|, |ν(k) − ν(k−1)|/ρ} ≤ ε for some pre-specified precision ε.
This is a modified version of the augmented Lagrangian method, where in the step of updating Ω, we minimize an upper envelope, which is obtained by locally linearizing the augmented Lagrangian.
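The following NumPy sketch (added for illustration; the function names, the conservative step-size bounds and the proximal-gradient inner loop for the δ-step are simplifications made here, not the authors' implementation) mirrors the iteration above, using the elliptical-model expressions for M and Lk given in (5).

```python
import numpy as np

def soft(X, a):
    """Elementwise soft-thresholding S(x, a) = (|x| - a)_+ sign(x)."""
    return np.sign(X) * np.maximum(np.abs(X) - a, 0.0)

def quadro_alm(mu1, mu2, S1, S2, gamma, kappa, lam1, lam2,
               rho=1.0, n_iter=200, ista_steps=50):
    """Schematic linearized augmented-Lagrangian iteration for the penalized
    Rayleigh optimization.  The delta-step is solved approximately by ISTA
    passes instead of an exact Lasso solver; step sizes are conservative
    curvature bounds.  Illustrative sketch only."""
    d = len(mu1)
    Om, de, nu = np.zeros((d, d)), np.zeros(d), 0.0
    gM_O = S1 - S2 + np.outer(mu1, mu1) - np.outer(mu2, mu2)   # grad_Omega of M
    gM_d = -2.0 * (mu1 - mu2)                                  # grad_delta of M

    def M(Om, de):
        return (np.trace(Om @ (S1 - S2)) + mu1 @ Om @ mu1 - mu2 @ Om @ mu2
                - 2.0 * de @ (mu1 - mu2))

    def gradL(Om, de, S, mu):          # gradients of one L_k (Omega symmetric)
        r = Om @ mu - de
        gO = (4.0 * (1 + gamma) * S @ Om @ S
              + 2.0 * gamma * np.trace(Om @ S) * S + 8.0 * np.outer(S @ r, mu))
        return gO, -8.0 * S @ r

    def gradF(Om, de):                 # gradient of the augmented Lagrangian
        g1O, g1d = gradL(Om, de, S1, mu1)
        g2O, g2d = gradL(Om, de, S2, mu2)
        mult = nu + rho * (M(Om, de) - 1.0)
        return g1O + kappa * g2O + mult * gM_O, g1d + kappa * g2d + mult * gM_d

    def curv(S, mu):                   # curvature bound of one L_k, Omega block
        s2 = np.linalg.norm(S, 2)
        return (4 * (1 + gamma) * s2 ** 2
                + 2 * gamma * np.linalg.norm(S) ** 2 + 8 * (mu @ mu) * s2)

    tau = curv(S1, mu1) + kappa * curv(S2, mu2) + rho * np.sum(gM_O ** 2)
    eta = (8 * (np.linalg.norm(S1, 2) + kappa * np.linalg.norm(S2, 2))
           + 2 * rho * np.sum(gM_d ** 2))

    for _ in range(n_iter):
        for _ in range(ista_steps):            # (i) delta-update (approximate Lasso)
            _, gd = gradF(Om, de)
            de = soft(de - gd / eta, lam2 / eta)
        gO, _ = gradF(Om, de)                  # (ii) linearized Omega-update
        D = Om - (gO + gO.T) / (2.0 * tau)     #      D = Omega - sym(grad)/tau
        Om = soft(D, lam1 / tau)
        nu += rho * (M(Om, de) - 1.0)          # (iii) dual update
    return Om, de

# small illustrative run with arbitrary inputs (d = 10, gamma = 1/3)
rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(10, 10)), rng.normal(size=(10, 10))
S1, S2 = A1 @ A1.T / 10 + np.eye(10), A2 @ A2.T / 10 + np.eye(10)
Om_hat, de_hat = quadro_alm(rng.normal(size=10), rng.normal(size=10),
                            S1, S2, gamma=1/3, kappa=1.0, lam1=0.5, lam2=0.5)
print(np.count_nonzero(Om_hat), np.count_nonzero(de_hat))
```

The δ-step could equally be handed to any off-the-shelf Lasso solver; plain proximal-gradient passes are used here only to keep the sketch self-contained.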
Remark
QUADRO can be extended to folded concave penalties, for example, to SCAD [Fan and Li (2001)] or to adaptive Lasso [Zou (2006)]. Using the Local Linear Approximation algorithm [Fan, Xue and Zou (2014), Zou and Li (2008)], we can solve the SCAD-penalized QUADRO and the adaptive-Lasso-penalized QUADRO by solving L1-penalized QUADRO with multiple-step and one-step iterations, respectively.
4. Estimation of mean and covariance matrix
QUADRO requires estimates of the mean vector and covariance matrix for each class as inputs. We will show in Section 5 that the performance of QUADRO is closely related to the max-norm estimation error of the mean vectors and covariance matrices. The sample mean and sample covariance matrix work well for Gaussian data. However, when data are from elliptical distributions, they may have inferior performance, as we simultaneously estimate nonpolynomially many means and variances. In Sections 4.1–4.2, we suggest a robust M-estimator to estimate the mean and a rank-based estimator to estimate the covariance matrix, which are more appropriate for non-Gaussian data. Moreover, in Section 4.3 we discuss how to estimate the model parameter γ when it is unknown.
4.1. Estimation of the mean
Suppose x1,…, xn are i.i.d. samples of a random vector X = (X1,…, Xd)⊤ from an elliptical distribution ℰ(μ,Σ, g). Let us denote μ = (μ1,…, μd)⊤ and xi = (xi1,…, xid) ⊤ for i = 1,…, n. We estimate each μj marginally using the data {x1j,…, xnj}.
One possible estimator is the sample median

μ̂j = median{x1j, …, xnj}.
It can be shown that, even under heavy-tailed distributions, |μ̂j − μj| ≤ A√(log(1/δ)/n) with probability at least 1 − δ for small δ ∈ (0, 1), where A is a constant determined by the probability density at μj, for each fixed j. This combined with the union bound gives that max1≤j≤d |μ̂j − μj| = OP(√(log d/n)).
Catoni (2012) proposed another M-estimator for the mean of heavy-tailed distributions. It works for distributions where the mean is not necessarily equal to the median, which is essential for estimating covariances of random variables. We denote the diagonal elements of the covariance matrix Σ by σ²j, and the off-diagonal elements by σkj for k ≠ j. The estimator μ̂C = (μ̂C,1, …, μ̂C,d)⊤ is obtained as follows. For a strictly increasing function h: ℝ → ℝ such that −log(1 − y + y²/2) ≤ h(y) ≤ log(1 + y + y²/2), and a value δ ∈ (0, 1) such that n > 2 log(1/δ), we let

αδ = √( 2 log(1/δ) / {n[v + 2v log(1/δ)/(n − 2 log(1/δ))]} ),

where v is an upper bound of σ²j. For each j, we define μ̂C,j as the unique value μ that satisfies Σ_{i=1}^n h(αδ(xij − μ)) = 0. It was shown in Catoni (2012) that |μ̂C,j − μj| ≤ √(2v log(1/δ)/(n − 2 log(1/δ))) with probability at least 1 − 2δ when the variance of Xj exists. Therefore, by taking δ = 1/(n ∨ d)², max1≤j≤d |μ̂C,j − μj| = O(√(log(n ∨ d)/n)) with probability at least 1 − (n ∨ d)⁻¹, which gives the desired convergence rate.
To implement this estimator, we take h(y) = sign(y) log(1 + |y| + y²/2). For the choice of v, any value larger than σ²j would work in theory. Catoni (2012) introduced a Lepski-type adaptation method to choose v. For simplicity, we take v to be the sample variance of Xj.
The two estimators, the median and the M-estimator, both have a convergence rate of √(log d/n) in terms of the max-norm error. In our numerical experiments, the M-estimator has better numerical performance, so we stick to this estimator.
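For concreteness, here is a minimal sketch of the marginal M-estimator (added for illustration; the scaling α is a simplified choice of the order described above rather than the exact constant, and the root-finding bracket is a heuristic).

```python
import numpy as np
from scipy.optimize import brentq

def catoni_mean(x, delta=None, v=None):
    """Marginal M-estimator of a mean with the influence function
    h(y) = sign(y) log(1 + |y| + y^2/2).  alpha is of order
    sqrt(2 log(1/delta) / (n v)), a simplified version of the text's choice."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if delta is None:
        delta = 1.0 / max(n, 2) ** 2          # in the spirit of delta = 1/(n v d)^2
    if v is None:
        v = x.var()                           # plug-in bound on the variance
    alpha = np.sqrt(2 * np.log(1 / delta) / (n * v))

    def h(y):
        return np.sign(y) * np.log1p(np.abs(y) + 0.5 * y ** 2)

    def psi(m):                               # decreasing in m; its root is the estimate
        return np.sum(h(alpha * (x - m)))

    spread = x.max() - x.min() + 1.0
    return brentq(psi, x.min() - spread, x.max() + spread)

# heavy-tailed example: t with 3 degrees of freedom, true mean 2
rng = np.random.default_rng(4)
sample = rng.standard_t(3, size=500) + 2.0
print(catoni_mean(sample), sample.mean())
```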
4.2. Estimation of the covariance matrix
To estimate the covariance matrix Σ, we estimate the marginal variances {σ²j, 1 ≤ j ≤ d} and the correlation matrix C separately. Again, we need robust estimates even though the data have fourth moments, as we simultaneously estimate nonpolynomially many covariance parameters.
First, we consider estimating σ²j. Note that σ²j = 𝔼(X²j) − [𝔼(Xj)]². We estimate 𝔼(X²j) and 𝔼(Xj) separately. To estimate 𝔼(X²j), we use the M-estimator described above on the squared data {x²1j, …, x²nj} and denote the estimator by η̂C,j. This works because 𝔼(X⁴j) is finite for each j in our setting; in addition, the M-estimator applies to asymmetric distributions. We then define
σ̂²j = max{η̂C,j − μ̂²C,j, δ0},

where μ̂C,j is the M-estimator of 𝔼(Xj) and δ0 > 0 is a small constant. It is easy to see that, when the fourth moments of Xj are uniformly upper bounded by a constant and n ≥ 4 log(d²), max1≤j≤d |σ̂²j − σ²j| = OP(√(log d/n)).
Next, we consider estimating the correlation matrix C. For this, we use the Kendall's tau correlation matrix proposed by Han and Liu (2012). Kendall's tau correlation coefficients [Kendall (1938)] are defined as

τjk = ℙ[(Xj − X̃j)(Xk − X̃k) > 0] − ℙ[(Xj − X̃j)(Xk − X̃k) < 0],

where X̃ is an independent copy of X. They have the following relationship to the true coefficients: Cjk = sin(πτjk/2) for the elliptical family. Based on this equality, we first estimate Kendall's tau correlation coefficients using the rank-based estimators

τ̂jk = [2/(n(n − 1))] Σ_{1≤i<i′≤n} sign[(xij − xi′j)(xik − xi′k)],

and then estimate the correlation matrix by Ĉ = (Ĉjk) with

Ĉjk = sin(πτ̂jk/2).

It is shown in Han and Liu (2012) that |Ĉ − C|∞ = OP(√(log d/n)).
Finally, we combine {σ̂²j, 1 ≤ j ≤ d} and Ĉ to get Σ̂. Let

Σ̃ = diag(σ̂1, …, σ̂d) Ĉ diag(σ̂1, …, σ̂d).

It follows immediately that |Σ̃ − Σ|∞ = OP(√(log d/n)). However, this estimator is not necessarily positive semi-definite. To implement QUADRO, we need Σ̂ to be positive semi-definite so that the optimization in (8) is a convex problem. We obtain Σ̂ by projecting Σ̃ onto the cone of positive semi-definite matrices through the convex optimization

(11)  Σ̂ = argmin_{A⪰0} |A − Σ̃|∞.

Note that |Σ̂ − Σ̃|∞ ≤ |Σ − Σ̃|∞ by definition. Therefore, |Σ̂ − Σ|∞ ≤ |Σ̂ − Σ̃|∞ + |Σ̃ − Σ|∞ = OP(√(log d/n)). To compute Σ̂, we note that the optimization problem in (11) can be formulated as the dual of a graphical lasso problem corresponding to the smallest possible tuning parameter that still guarantees a feasible solution [Liu et al. (2012)]. Zhao, Roeder and Liu (2013) provide more algorithmic details.
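A compact sketch of the rank-based construction is given below (added for illustration). Two substitutions are made for simplicity and are not from the text: the marginal scales use the median absolute deviation instead of the M-estimator-based variances, and positive semi-definiteness is enforced by eigenvalue clipping rather than by the max-norm projection (11).

```python
import numpy as np
from scipy.stats import kendalltau

def elliptical_covariance(X, delta0=1e-4):
    """Rank-based covariance estimate: Kendall's tau plus the sin transform for
    the correlation matrix, robust marginal scales on the diagonal, and
    eigenvalue clipping to enforce positive semi-definiteness."""
    n, d = X.shape
    C = np.eye(d)
    for j in range(d):                          # C_jk = sin(pi * tau_jk / 2)
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            C[j, k] = C[k, j] = np.sin(np.pi * tau / 2)
    # robust marginal scales: MAD as a stand-in for the M-estimator-based variances
    mad = np.median(np.abs(X - np.median(X, axis=0)), axis=0)
    scale = np.maximum(1.4826 * mad, np.sqrt(delta0))
    Sigma = np.outer(scale, scale) * C
    # clip negative eigenvalues (simple substitute for the projection in (11))
    w, V = np.linalg.eigh((Sigma + Sigma.T) / 2)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

rng = np.random.default_rng(5)
X = rng.standard_t(5, size=(300, 6)) @ np.diag([1.0, 2.0, 1.0, 3.0, 1.0, 2.0])
print(np.round(elliptical_covariance(X), 2))
```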
4.3. Estimation of kurtosis parameter
When the kurtosis parameter γ is unknown, we can estimate it from data. Recall that γ = 𝔼(ξ⁴)/[d(d + 2)] − 1. Using decomposition (4) and the properties of U, we have

𝔼{[(Z − μ)⊤Σ⁻¹(Z − μ)]²} = 𝔼(ξ⁴) = d(d + 2)(1 + γ).

Motivated by this equality, we propose the estimator

γ̂ = [nd(d + 2)]⁻¹ Σ_{i=1}^n [(xi − μ̃)⊤Ω̃(xi − μ̃)]² − 1,
where μ̃ and Ω̃ are estimators of μ and Σ⁻¹, respectively. Maruyama and Seo (2003) considered a similar estimator in low-dimensional settings, where they used the sample mean and sample covariance matrix. In high dimensions, we use a robust estimate to guarantee uniform convergence. In particular, we take μ̃ = μ̂C and Ω̃ = Ω̂clime, where Ω̂clime is the CLIME estimator proposed in Cai, Liu and Luo (2011). We could also take the covariance estimator in Section 4.2, but we would then need to establish its sampling property as a precision matrix estimator. We decide to use the CLIME estimator since such a property has already been established by Cai, Liu and Luo (2011). Denote Σ⁻¹ = (Ωjk)d×d. From simple algebra, the estimation error of γ̂ is controlled by the max-norm error of μ̃ and the matrix L1-norm error of Ω̃.
In Section 4.1, we have seen that max1≤j≤d |μ̂C,j − μj| = OP(√(log d/n)). Moreover, Cai, Liu and Luo (2011) showed that, under mild conditions, ‖Ω̂clime − Σ⁻¹‖1 converges to zero at rate OP(s√(log d/n)), where ‖·‖1 is the matrix L1-norm and s is the maximum number of nonzero entries in a row of Σ⁻¹. Therefore, provided that ‖Σ⁻¹‖1 ≤ C, we immediately have that γ̂ is a consistent estimator of γ.
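A minimal sketch of the plug-in kurtosis estimator follows (added here; the exact normalization is our reading of the display above). The small check uses spherical multivariate t data with ν = 10, for which γ = 1/3.

```python
import numpy as np

def estimate_gamma(X, mu_tilde, Omega_tilde):
    """Plug-in kurtosis estimator: average the squared Mahalanobis-type forms
    (x_i - mu)' Omega (x_i - mu), which have mean d(d+2)(1+gamma) under the
    elliptical model, and rescale.  mu_tilde and Omega_tilde stand for robust
    estimates of the mean and of Sigma^{-1} (e.g., M-estimator and CLIME)."""
    n, d = X.shape
    m = np.einsum('ij,jk,ik->i', X - mu_tilde, Omega_tilde, X - mu_tilde)
    return np.mean(m ** 2) / (d * (d + 2)) - 1.0

# sanity check on a spherical multivariate t with nu = 10 (true gamma = 1/3)
rng = np.random.default_rng(6)
d, nu, n = 5, 10, 200_000
G = rng.multivariate_normal(np.zeros(d), np.eye(d) * (nu - 2) / nu, size=n)
Z = G / np.sqrt(rng.chisquare(nu, size=n) / nu)[:, None]
print(estimate_gamma(Z, np.zeros(d), np.eye(d)))   # approximately 1/3
```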
5. Theoretical properties
In this section, we establish an oracle inequality for the Rayleigh quotient of the QUADRO estimates (Ω̂,δ̂). We assume that π and γ are known. For notational simplicity, we set λ1 = λ2 = λ. The results can be easily generalized to the case λ1 ≠ λ2. Moreover, we drop the symmetry constraint Ω = Ω⊤ in all optimization problems involved. This simplifies the expression of the regularity conditions. The analysis with the symmetry constraint is a trivial extension of current analysis.
Recall the definitions of M, L1 and L2 in (5), and let κ = (1 − π)/π and L = L1 + κL2. The Rayleigh quotient of (Ω, δ) is then equal to (up to a multiplicative constant)

R(Ω, δ) = [M(Ω, δ)]² / L(Ω, δ).
The QUADRO estimates are

(Ω̂, δ̂) = argmin_{Ω, δ: M̂(Ω, δ) = 1} {L̂(Ω, δ) + λ|Ω|1 + λ|δ|1},   where L̂ = L̂1 + κL̂2.
We shall compare the Rayleigh quotient of (Ω̂, δ̂) with the Rayleigh quotients of a class of “oracle solutions.” This class includes the one that maximizes the true Rayleigh quotient. Here we adopt a class of solutions as the “oracle,” instead of only the global maximizer, because we want the results not to be tied to a sparsity assumption on the global maximizer but to a weaker assumption: at least one solution in this class is sparse.
Our theoretical development is technically nontrivial. Conventional oracle inequalities are derived in a setting of minimizing a data-dependent loss without constraint, and the risk function is the expectation of the loss. Here we minimize a data-dependent loss with a data-dependent equality constraint, and the risk function—the Rayleigh quotient—is not equal to the expectation of the loss. A similar setting was considered in Fan, Feng and Tong (2012), where they introduced a data-dependent intermediate solution to deal with such equality constraint. However, the rate they obtained depends on this intermediate solution, which is very hard to quantify. In contrast, the rate in our results purely depends on the oracle solution. To get rid of the intermediate solution in the rate, we need to carefully quantify its difference from both the QUADRO solution and the oracle solution. The technique is new, and potentially useful for other problems.
5.1. Oracle solutions and the restricted eigenvalue condition
For any λ0 ≥ 0, we define the oracle solution associated with λ0 to be
(12)  (Ω*_{λ0}, δ*_{λ0}) ∈ argmin_{Ω, δ: M(Ω, δ) = 1} {L(Ω, δ) + λ0|Ω|1 + λ0|δ|1}.
We shall compare the Rayleigh quotient of (Ω̂, δ̂) to that of (Ω*_{λ0}, δ*_{λ0}), for an arbitrary λ0. In particular, when λ0 = 0, the associated oracle solution (which may not be unique) becomes

(Ω*_0, δ*_0) ∈ argmin_{Ω, δ: M(Ω, δ) = 1} L(Ω, δ).
It maximizes the true Rayleigh quotient.
Next, we introduce a restricted eigenvalue (RE) condition jointly on Σ1, Σ2, μ1 and μ2. For any matrices A and B, let vec(A) be the vectorization of A by stacking all the elements of A column by column, and A ⊗ B be the Kronecker product of A and B. We define the matrices
for k = 1, 2. We note that there are (d² + d) coefficients to decide when maximizing R(Ω, δ): the d² elements of Ω and the d elements of δ. We can stack all these coefficients into a long vector x = x(Ω, δ) in ℝ^{d²+d} defined as

(13)  x(Ω, δ) = [vec(Ω)⊤, δ⊤]⊤.
It can be shown that Lk(Ω, δ) = x⊤Qkx, for k = 1, 2; see Lemma 9.1. Therefore, L(Ω, δ) = x⊤Qx, where Q = Q1 + κQ2. Our RE condition is then imposed on the (d² + d) × (d² + d) matrix Q, and hence implicitly on (Σ1, Σ2, μ1, μ2).
We now formally introduce the RE condition. For a set S ⊂ {1, 2, …, d² + d} and a nonnegative value c̄, we define the restricted eigenvalue in the following way:

Θ(S; c̄) = min{ x⊤Qx / |xS|² : x ∈ ℝ^{d²+d}, x ≠ 0, |xSᶜ|1 ≤ c̄|xS|1 }.
Generally speaking, Θ(S; c̄) depends on (Σ1, Σ2, μ1, μ2) in a complicated way. For c̄ = 0, the following proposition builds a connection between Θ(S; 0) and (Σ1, Σ2, μ1, μ2). For each S ⊂ {1, 2, …, d² + d}, there exist sets U ⊂ {1, …, d} × {1, …, d} and V ⊂ {1, …, d} such that the support of x(Ω, δ) is S if and only if the support of Ω is U and the support of δ is V. Let

U′ = {1 ≤ i ≤ d : (i, j) ∈ U or (j, i) ∈ U for some j}.
Then U ⊂ U′ ×U′. The following result is proved in Fan et al. (2014).
Proposition 5.1
For any set S ⊂ {1, …, d² + d}, suppose U′ and V are defined as above. Let Σ̃k be the submatrix of Σk obtained by restricting rows and columns to U′ ∪ V, and μ̃k be the subvector of μk obtained by restricting elements to U′ ∪ V, for k = 1, 2. If there exist constants v1, v2 > 0 such that λmin(Σ̃k) ≥ v1 and |μ̃k| ≤ v2 for k = 1, 2, then Θ(S; 0) is bounded below by a positive constant depending only on v1, v2, γ and κ.
5.2. Oracle inequality on Rayleigh’s quotient
Suppose max{|Σk|∞, |μk|∞, k = 1, 2} ≤ 1 and |Σ̂k − Σk|∞ ≤ |Σk|∞, |μ̂k − μk|∞ ≤ |μk|∞ for k = 1, 2, without loss of generality. For any λ0 ≥ 0, let (Ω*_{λ0}, δ*_{λ0}) be the associated oracle solution and S be the support of x(Ω*_{λ0}, δ*_{λ0}). Let Δn = max{|Σ̂k − Σk|∞, |μ̂k − μk|∞, k = 1, 2}. We have the following result for any given estimators, the proof of which we postpone to Section 9.
Theorem 5.1
Given λ0 ≥ 0, let S be the support of , s0 = |S| and . Suppose that Θ(S, 0) ≥ c0, Θ(S, 3) ≥ a0 and , for some positive constants a0, c0 and u0. We assume and without loss of generality. Then there exist positive constants C = C(a0, c0,u0) and A = A(a0, c0,u0) such that for any η >1,
by taking .
In Theorem 5.1, the rate of convergence has two parts. The term s0Δn reflects how the stochastic errors of estimating (Σ1, Σ2, μ1, μ2) affect the Rayleigh quotient. The second term, which involves λ0, is an extra term that depends on the oracle solution we aim to use for comparison. In particular, if we compare R(Ω̂, δ̂) with Rmax, the population maximum Rayleigh quotient (i.e., λ0 = 0), this extra term disappears. If we further use the estimators in Section 4, then Δn = OP(√(log d/n)). We summarize the result as follows.
Corollary 5.1
Suppose that the condition of Theorem 5.1 holds with λ0 = 0. Then for some positive constants A and C, when , we have
Furthermore, if the mean vectors and covariance matrices are estimated by using the robust methods in Section 4, then when ,
with probability at least 1− (n ∨ d)−1.
From Corollary 5.1, when (Ω*_0, δ*_0) is truly sparse, R(Ω̂, δ̂) is close to the population maximum Rayleigh quotient Rmax. However, we note that Theorem 5.1 covers more general situations, including cases where (Ω*_0, δ*_0) is not sparse. As long as there exists an “approximately optimal” and sparse solution, that is, for a small λ0 the associated oracle solution (Ω*_{λ0}, δ*_{λ0}) is sparse, Theorem 5.1 guarantees that R(Ω̂, δ̂) is close to R(Ω*_{λ0}, δ*_{λ0}) and hence close to Rmax.
Remark
Our results are analogous to oracle inequalities for prediction error in linear regressions; therefore, the condition on Θ(S, c̄) is similar to the RE condition in linear regressions [Bickel, Ritov and Tsybakov (2009)]. To recover the support of the oracle solution, conditions similar to the “irrepresentable condition” for the Lasso [Zhao and Yu (2006)] are needed.
6. Application to classification
One important application of QUADRO is high-dimensional classification for elliptically distributed data. Suppose (Ω̂, δ̂) are the QUADRO estimates. This yields the classification rule

ĥ(x) = I{x⊤Ω̂x − 2δ̂⊤x < c},

for a suitably chosen threshold c.
In this section, we first show that, for normally distributed data, the Rayleigh quotient is a proxy for the classification error, and we then derive an analytic choice of c. Compared with many other high-dimensional classification methods, QUADRO produces quadratic boundaries and can handle both non-Gaussian distributions and unequal covariance matrices.
6.1. Approximation of classification errors
Given (Ω, δ) and a threshold c, a general quadratic rule h(x) = h(x; Ω, δ, c) is defined as
(14)  h(x; Ω, δ, c) = I{x⊤Ωx − 2δ⊤x < c}.
We reparametrize c as
(15)  c = tM1(Ω, δ) + (1 − t)M2(Ω, δ).
Here Mk(Ω, δ) is the mean of Q(X) in class k, for k = 1, 2. After the reparametrization, t is scale-free. As we will see below, in most cases, given Ω and δ, the optimal t that minimizes the classification error takes values in (0, 1).
From now on, we write h(x; Ω, δ, c) = h(x; Ω, δ, t). Let Err(Ω, δ, t) be the classification error of h(·; Ω, δ, t). Due to technical difficulties, we only give results for Gaussian distributions. Suppose X|(Y = 0) ~ 𝒩(μ1, Σ1) and X|(Y = 1) ~ 𝒩(μ2, Σ2). For k = 1, 2, we write

Σk^{1/2}ΩΣk^{1/2} = KkSkKk⊤,

where Sk is a diagonal matrix containing the nonzero eigenvalues, and the columns of Kk are the corresponding eigenvectors. Let βk = Kk⊤Σk^{1/2}(Ωμk − δ). When max{|Sk|∞, |βk|∞, k = 1, 2} is bounded, the following proposition shows that an approximation of Err(Ω, δ, t) is

Err̄(Ω, δ, t) = πΦ̄( (1 − t)M(Ω, δ)/√(L1(Ω, δ)) ) + (1 − π)Φ̄( tM(Ω, δ)/√(L2(Ω, δ)) ),

where M, L1 and L2 are defined in (5), Φ is the distribution function of a standard normal variable and Φ̄ = 1 − Φ. Its proof is contained in Section 9.
Proposition 6.1
Suppose that max{|Sk|∞, |βk|∞, k = 1, 2} ≤ C0 for some constant C0 > 0, and let q be the rank of Ω. Then as d goes to infinity,
In particular, if we consider all (Ω, δ) such that the variances of Q(X; Ω, δ) under both classes are lower bounded by c0d^θ for some constants θ > 2/3 and c0 > 0, then we have |Err(Ω, δ, t) − Err̄(Ω, δ, t)| = o(1) uniformly over such (Ω, δ).
We now take a closer look at Err̄(Ω, δ, t). Let H(x) = Φ̄(1/√x), which is monotone increasing on (0, ∞). Writing for short M = M1 − M2, Mk = Mk(Ω, δ) and Lk = Lk(Ω, δ) for k = 1, 2, we have

Err̄(Ω, δ, t) = πH( L1/[(1 − t)²M²] ) + (1 − π)H( L2/[t²M²] ).
Figure 2 shows that H(·) is nearly linear on an important range. This suggests the following approximation:

(16)  Err̄(Ω, δ, t) ≈ H( πL1/[(1 − t)²M²] + (1 − π)L2/[t²M²] ) = H( π/[(1 − t)²R(t)(Ω, δ)] ),

where R(t) = R(t)(Ω, δ) is the R(Ω, δ) in (6) corresponding to the κ value

κ(t) = (1 − π)(1 − t)²/(πt²).
Fig. 2.
Function H(x) = Φ̄(1/√x).
The approximation in (16) is quantified in the following proposition, which is proved in Fan et al. (2014).
Proposition 6.2
Given (Ω, δ, t), we write for short Rk = Rk(Ω, δ) = [M(Ω, δ)]²/Lk(Ω, δ), for k = 1, 2, and define
Then there exists a constant C > 0 such that
In particular, when t = 1/2,
where R0 = max{min{R1, 1/R1}, min{R2, 1/R2}} and ΔR = |R1 − R2|.
Note that L1 and L2 are the variances of Q(X) = X⊤ΩX − 2X⊤δ for two classes, respectively. In cases where |L1 − L2| ≪ min{L1, L2}, ΔR ≪ R0. Also, R0 is always bounded by 1, and it tends to 0 in many situations, for example, when R1, R2 → ∞, or R1, R2 → 0, or R1 → 0, R2 → ∞. Proposition 6.2 then implies that the approximation in (16) when t = 1/2 is good.
Combining Propositions 6.1 and 6.2, the classification error of a general quadratic rule h(·; Ω, δ, t) is approximately a monotone decreasing transform of the Rayleigh quotient R(t)(Ω, δ), corresponding to κ = κ(t). In particular, when t = 1/2 [i.e., c = (M1 + M2)/2], R(1/2)(Ω, δ) is exactly the one used in QUADRO. Consequently, if we fix the threshold to be c = (M1 + M2)/2, then the Rayleigh quotient (up to a monotone transform) is a good proxy for the classification error. This explains why Rayleigh-quotient based procedures can be used for classification.
Remark
Even in the region where H(·) is far from linear, so that the upper bound in Proposition 6.2 is not o(1), we can still find a monotone transform of the Rayleigh quotient that upper bounds the classification error. To see this, note that for x ∈ [1/3, ∞), H(x) is a concave function. Therefore, the approximation in (16) becomes an inequality, that is, Err̄(Ω, δ, t) ≤ H(π/[(1 − t)²R(t)(Ω, δ)]). For x ∈ (0, 1/3), H(x) ≤ 0.1248x. It follows that Err̄(Ω, δ, t) is still bounded above by a monotone decreasing transform of R(t)(Ω, δ).
Remark
In the current setting, the Bayes classifier is a quadratic rule h(x; ΩB, δB, cB) with ΩB = Σ2⁻¹ − Σ1⁻¹ and δB = Σ2⁻¹μ2 − Σ1⁻¹μ1. Let (Ω*_0, δ*_0) be the population solution of QUADRO when λ = 0. We note that (ΩB, δB) and (Ω*_0, δ*_0) are different: the former minimizes inf_t Err(Ω, δ, t), while the latter minimizes L1(Ω, δ) + κL2(Ω, δ) subject to M(Ω, δ) = 1.
6.2. QUADRO as a classification method
Results in Section 6.1 suggest an analytic method to choose the threshold c, or equivalently t, for given (Ω, δ). Let

(17)  t̂ = argmin_{0<t<1} { π̂Φ̄( (1 − t)M̂(Ω̂, δ̂)/√(L̂1(Ω̂, δ̂)) ) + (1 − π̂)Φ̄( tM̂(Ω̂, δ̂)/√(L̂2(Ω̂, δ̂)) ) },

and set

(18)  ĉ = t̂M̂1(Ω̂, δ̂) + (1 − t̂)M̂2(Ω̂, δ̂).

Here (17) is a one-dimensional optimization problem and can be solved easily. The resulting QUADRO classification rule is

ĥ(x) = I{x⊤Ω̂x − 2δ̂⊤x < ĉ}.
As a by-product, the method to decide c, described in (17) and (18), can be used in other classification procedures on Gaussian data, such as logistic regression, quadratic discriminant analysis (QDA) and kernel support vector machine, once (Ω̂, δ̂) are given. It provides a fast and purely data-driven way to decide the threshold value in quadratic classification rules. In our numerical experiments, it performs well.
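Following the reconstruction of (15)–(18) given above, the threshold search reduces to a one-dimensional minimization; a sketch (added here, with illustrative inputs and our own function names) is:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def choose_threshold(M1, M2, L1, L2, pi):
    """Pick t by minimizing the normal approximation of the classification
    error over (0, 1), then set c = t*M1 + (1 - t)*M2."""
    M = M1 - M2
    def approx_err(t):
        return (pi * norm.sf((1 - t) * M / np.sqrt(L1))
                + (1 - pi) * norm.sf(t * M / np.sqrt(L2)))
    t_hat = minimize_scalar(approx_err, bounds=(1e-3, 1 - 1e-3),
                            method='bounded').x
    return t_hat, t_hat * M1 + (1 - t_hat) * M2

# illustrative class summaries of Q(X): unequal variances shift the
# optimal threshold away from the midpoint of the two class means
print(choose_threshold(M1=3.0, M2=0.5, L1=4.0, L2=1.0, pi=0.55))
```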
7. Numerical studies
In this section, we investigate the performance of QUADRO in several simulation examples and a real data example. The simulation studies contain both Gaussian models and general elliptical models. We compare QUADRO with several classification-oriented procedures. Performances are evaluated in terms of classification errors.
7.1. Simulations under Gaussian models
Let n1 = n2 = 50 and d = 40. For each given μ1, μ2, Σ1 and Σ2, we generate 100 training datasets independently, each with n1 data from 𝒩(μ1, Σ1) and n2 data from 𝒩(μ2, Σ2). In QUADRO, we input the sample means and sample covariance matrices. We set λ2 = rλ1 and work with λ1 and r from now on. The two tuning parameters λ1 ≥ 0 and r > 0 are selected in the following way. For various pairs of (λ1, r), we apply QUADRO for each pair and evaluate the classification error via 4000 newly generated testing data; we then choose the (λ1, r) that minimize the classification error.
We compare QUADRO with five classification-oriented procedures:
Sparse logistic regression (SLR): We apply the sparse logistic regression to the augmented feature space {Xi, 1 ≤ i ≤ d; XiXj, 1 ≤ i ≤ j ≤ d}. The resulting estimator then gives a quadratic projection with (Ω, δ, c) decided from the fitted regression coefficients. We implement the sparse logistic regression using the R package glmnet.
Linear sparse logistic regression (L-SLR): We apply the sparse logistic regression directly to the original feature space {Xi, 1 ≤ i ≤ d}.
ROAD [Fan, Feng and Tong (2012)]: This is a linear classification method, which can be formulated equivalently as a modified version of QUADRO by enforcing Ω̂ as the zero matrix and plugging in the pooled sample covariance matrix.
Penalized-LDA (P-LDA) [Witten and Tibshirani (2011)]: This is a variant of LDA, which solves an optimization problem with a nonconvex objective and L1 penalties. Also, P-LDA only uses diagonals of the sample covariance matrices.
FAIR [Fan and Fan (2008)]: This is a variant of LDA for high-dimensional settings, where screening is adopted to pre-select features and only the diagonals of the sample covariance matrices are used.
To make a fair comparison, the tuning parameters in SLR and L-SLR are selected in the same way as in QUADRO based on 4000 testing data. ROAD and P-LDA are self-tuned by their packages. The number of features chosen in FAIR is calculated in the way suggested by Fan and Fan (2008).
We consider four models:
Model 1: Σ1 is the identity matrix. Σ2 is a diagonal matrix in which the first 10 elements are equal to 1.3 and the rest are equal to 1. μ1 = 0, and μ2 = (0.7, …, 0.7, 0, …, 0)⊤ with the first 10 elements of μ2 being nonzero.
Model 1L: μ1, μ2 are the same as in model 1, and both Σ1 and Σ2 are the identity matrix.
Model 2: Σ1 is a block-diagonal matrix. Its upper left 20 × 20 block is an equal correlation matrix with ρ = 0.4, and its lower right 20 × 20 block is an identity matrix. We also set μ1 = μ2 = 0. In this model, neither Σ1 nor Σ2 is sparse, but Σ1 − Σ2 is.
Model 3: Σ1, Σ2 and μ1 are the same as in model 2, and μ2 is taken from model 1.
Figure 3 contains the boxplots for the classification errors of all methods. In all four models, QUADRO outperforms other methods in terms of classification error. In model 1L, Σ1 = Σ2, so the Bayes classifier is linear. In this case which favors linear methods, QUADRO is still competitive with the best of all linear classifiers. In model 2, μ1 = μ2, so linear methods can do no better than random guessing. Therefore, ROAD, L-SLR, P-LDA and FAIR all have very poor performances. For the two quadratic methods, QUADRO is significantly better than SLR. In models 1 and 3, μ1 ≠ μ2 and Σ1 ≠ Σ2, so in the Bayes classifier, both “linear” parts and “quadratic” parts play important roles. In model 1, both Σ1 and Σ2 are diagonal, and the setting favors methods using only diagonals of sample covariance matrices. As a result, P-LDA and FAIR perform quite well. In model 3, Σ1 and Σ2 are both nondiagonal and nonsparse (but Σ1 − Σ2 is sparse). We see that the performances of P-LDA and FAIR are unsatisfactory. QUADRO outperforms other methods in both models 1 and 3.
Fig. 3.
Distributions of minimum classification error based on 100 replications for four different normal models. The tuning parameters for QUADRO, SLR and L-SLR are chosen to minimize the classification errors of 4000 testing samples. See Fan et al. (2014) for detailed numerical tables.
Comparing SLR and L-SLR, we see that the former considers a broader class of classifiers, while the latter is more robust, but neither of them performs uniformly better. However, QUADRO performs well in all cases. In terms of Rayleigh quotients, QUADRO also outperforms the other methods in most cases.
7.2. Simulations under elliptical models
Let n1 = n2 = 50 and d = 40. For each given μ1, μ2, Σ1 and Σ2, data are generated from a multivariate t-distribution with 5 degrees of freedom. In QUADRO, we input the robust M-estimators for the means and the rank-based estimators for the covariance matrices as described in Section 4. We compare the performance of QUADRO with the five methods compared under the Gaussian settings. We also implement QUADRO with inputs of sample means and sample covariance matrices. We name this method QUADRO-0 to differentiate it from QUADRO.
We consider three models:
Model 4: Here we use same parameters as those in model 1.
Model 5: Σ1, μ1 and μ2 are the same as in model 1. Σ2 is the covariance matrix of a fractional white noise process with difference parameter l = 0.2. In other words, Σ2 has polynomial off-diagonal decay, |Σ2(i, j)| = O(|i − j|^(2l−1)).
Model 6: Σ1, μ1 and μ2 are the same as in model 1. Σ2 is a matrix such that Σ2(i, j) = 0.6^|i−j|; that is, Σ2 has an exponential off-diagonal decay.
Figure 4 contains the boxplots of the classification errors over 100 replications. QUADRO outperforms the other methods in all settings. Also, QUADRO is better than QUADRO-0 (e.g., an average classification error of 0.161 versus 0.173 in model 5), which illustrates the advantage of using the robust estimators for the means and covariance matrices.
Fig. 4.
Distributions of minimum classification error based on 100 replications across different elliptical distribution models. The tuning parameters for QUADRO, SLR and L-SLR are chosen to minimize the classification errors. See Fan et al. (2014) for detailed numerical tables.
7.3. Real data analysis
We apply QUADRO to a large-scale genomic dataset, GPL96, and compare the performance of QUADRO with SLR, L-SLR, ROAD, P-LDA and FAIR. The GPL96 data set contains 20,263 probes and 8124 samples from 309 tissues. Among the tissues, breast tumor has 1142 samples, which is the largest set. We merge the probes from the same gene by averaging them, and finally get 12,679 genes and 8124 samples. We divide all samples into two groups: breast tumor or nonbreast tumor.
First, we look at the classification errors. We replicate our experiment 100 times. Each time, we proceed with the following steps:
Randomly choose a training set of 400 samples, 200 from breast tumor and 200 from nonbreast tumor.
For each training set, we use half of the samples to compute (Ω̂, δ̂) and the other half to select the tuning parameters by minimizing the classification error.
Use the remaining 942 samples from breast tumor and another randomly chosen 942 samples from nonbreast tumor as testing set, and calculate the testing error.
FAIR does not have any tuning parameters, so we use the whole training set to compute the classification rule and the testing set to calculate the testing error. The results are summarized in Table 1. We see that QUADRO outperforms all other methods.
Table 1.
Classification errors on the GPL96 dataset for the six methods compared. Means and standard deviations (in parentheses) over 100 replications are reported
QUADRO | SLR | L-SLR | ROAD | Penalized-LDA | FAIR |
---|---|---|---|---|---|
0.014 (0.007) | 0.025 (0.007) | 0.025 (0.009) | 0.016 (0.007) | 0.060 (0.011) | 0.046 (0.009) |
Next, we look at gene selection and focus on the two quadratic methods, QUADRO and SLR. We apply two-fold cross-validation to both QUADRO and SLR. In the results, QUADRO selects 139 genes and SLR selects 128 genes. According to the KEGG database, the genes selected by QUADRO belong to 5 pathways that each contain more than two selected genes; correspondingly, the genes selected by SLR belong to 7 such pathways. Using the ClueGo tool [Bindea et al. (2009)], we display the overall KEGG enrichment chart in Figure 5. We see from Figure 5 that both QUADRO and SLR have focal adhesion as their most important functional group. Nevertheless, QUADRO finds ECM-receptor interaction as another important functional group. ECM-receptor interaction is a class consisting of a mixture of structural and functional macromolecules, and it plays an important role in maintaining cell and tissue structures and functions. Numerous studies [Luparello (2013), Wei and Li (2007)] have found evidence that this class is closely related to breast cancer.
Fig. 5.
Overall KEGG enrichment chart, using (a) QUADRO; (b) SLR.
Besides the pathway analysis, we also perform a Gene Ontology (GO) enrichment analysis on the genes selected by QUADRO. This analysis was carried out using DAVID Bioinformatics Resources, and the results are shown in Table 2. We present the biological processes with p-values smaller than 10⁻³. According to the table, many biological processes are significantly enriched, and they are related to the previously selected pathways. For instance, the biological process cell adhesion is known to be highly related to cell communication pathways, including focal adhesion and ECM-receptor interaction.
Table 2.
Enrichment analysis results according to Gene Ontology for genes selected by QUADRO. The four columns represent GO ID, GO attribute, number of selected genes having the attribute and their corresponding p-values. We rank them according to p-values in increasing order
GO ID | GO attribute | No. of genes | p-value |
---|---|---|---|
0048856 | Anatomical structure development | 58 | 3.7E–12 |
0032502 | Developmental process | 62 | 2.9E–10 |
0048731 | System development | 52 | 3.1E–10 |
0007275 | Multicellular organismal development | 55 | 1.8E–8 |
0001501 | Skeletal system development | 15 | 1.3E–6 |
0032501 | Multicellular organismal process | 66 | 1.4E–6 |
0048513 | Organ development | 37 | 1.4E–6 |
0009653 | Anatomical structure morphogenesis | 28 | 8.7E–6 |
0048869 | Cellular developmental process | 34 | 1.9E–5 |
0030154 | Cell differentiation | 33 | 2.1E–5 |
0007155 | Cell adhesion | 18 | 2.4E–4 |
0022610 | Biological adhesion | 18 | 2.2E–4 |
0042127 | Regulation of cell proliferation | 19 | 2.9E–4 |
0009888 | Tissue development | 17 | 3.7E–4 |
0007398 | Ectoderm development | 9 | 4.8E–4 |
0048518 | Positive regulation of biological process | 34 | 5.6E–4 |
0009605 | Response to external stimulus | 20 | 6.3E–4 |
0043062 | Extracellular structure organization | 8 | 7.4E–4 |
0007399 | Nervous system development | 22 | 8.4E–4 |
8. Conclusions and extensions
QUADRO is a robust sparse high-dimensional classifier that allows us to use differences in covariance matrices to enhance discriminability. It is based on Rayleigh quotient optimization. The variance of a quadratic statistic involves all fourth cross moments, which creates both computational and statistical problems. These problems are avoided by restricting attention to the elliptical class of distributions. Robust M-estimation of the means and rank-based estimation of the correlations allow us to obtain uniform convergence for nonpolynomially many parameters, even when the underlying distributions have only finite fourth moments. This allows us to establish oracle inequalities under relatively weak conditions.
Existing methods in the literature for constructing high-dimensional quadratic classifiers can be divided into two types. One is regularized QDA, where regularized estimates of Σ1⁻¹ and Σ2⁻¹ are plugged into the Bayes classifier; see, for example, Friedman (1989). QUADRO avoids directly estimating inverse covariance matrices, which requires strong assumptions in high dimensions. The other is to combine linear classifiers with the inner-product kernel. The main difference between QUADRO and this approach is the simplification in Proposition 2.1. Thanks to this simplification, QUADRO avoids incorporating all fourth cross moments of the data and gains extra statistical efficiency.
QUADRO also has deep connections with the literature of sufficient dimension reduction. Dimension reduction methods, such as SIR [Li (1991)], SAVE [Cook and Weisberg (1991)] and Directional Regression [Li and Wang (2007)], can be equivalently formulated as maximizing some “quotients.” The population objective of SIR is to maximize var{𝔼[f(X|Y)]} subject to var[f(X)] = 1. Using the same constraint, SAVE and directional regression combine var{𝔼[f(X|Y)]} and 𝔼[var(f(X|Y))] in the objective. An interesting observation is that the Rayleigh quotient maximization is equivalent to the population objective of SIR, by noting that the denominator of (1) is equal to 𝔼[var(f(X|Y))] and var[f(X)] = 𝔼[var(f(X|Y))] + var{𝔼[f(X|Y)]}. This is not a coincidence, but due instead to the known equivalence between SIR and LDA in classification [Kent (1991), Li (2000)].
Despite similar population objectives, QUADRO and the aforementioned dimension reduction methods are different in important ways. First, we clarify that even when λ1, λ2 are 0, QUADRO is not the same procedure as SIR combined with the inner-product kernel [Wu (2008)], although they share the same population objective. The difference is that QUADRO utilizes a simplification of the Rayleigh quotient for quadratic f, relying on the assumption that X|Y is always elliptically distributed; moreover, it adopts robust estimators of the mean vectors and covariance matrices. Second, QUADRO is designed for high-dimensional settings, in which neither SIR, SAVE nor Directional Regression can be directly implemented. These methods need to either standardize the original data X ↦ Σ̂^{−1/2}(X − X̄) or solve a generalized eigen-decomposition problem Av = λΣ̂v for some matrix A. Both operations require that the sample covariance matrix is well conditioned, which is often not the case in high dimensions. Possible solutions include Regularized SIR [Li and Yin (2008), Zhong et al. (2005)], solving the generalized eigen-decomposition for an undetermined system [Coudret, Liquet and Saracco (2014)] and variable selection approaches [Chen, Zou and Cook (2010), Jiang and Liu (2013)]. However, these methods are not designed for Rayleigh quotient maximization. Third, our assumption on the model is different from that in dimension reduction. We require X|Y to be elliptically distributed, while many dimension reduction methods “implicitly” require X to be marginally elliptically distributed. Neither assumption is stronger than the other. Assuming a conditional elliptical distribution is more natural in classification. In addition, our assumption is used only to simplify the variances of quadratic statistics, whereas the elliptical assumption is critical to SIR.
The Rayleigh optimization framework developed in this paper can be extended to the multi-class case. Suppose the data are drawn independently from a joint distribution of (X, Y), where X ∈ ℝd and Y takes values in {0, 1, …, K − 1}. Definition (1) for the Rayleigh quotient of a projection f : ℝd → ℝ is still well defined. Let πk = ℙ(Y = k), for k = 0, 1, …, K − 1. In this K-class situation,
(19)  Rq(f) = var{𝔼[f(X)|Y]} / 𝔼{var[f(X)|Y]}.
Let Mk(f) = 𝔼[f(X)|Y = k] and Lk(f) = var[f(X)|Y = k]. Similar to the two-class case, maximizing Rq(f) is equivalent to solving the following optimization problem:

min_f Σ_{k=0}^{K−1} πkLk(f)   subject to   Σ_{0≤k<l≤K−1} πkπl[Mk(f) − Ml(f)]² = 1.
However, this is not a convex problem. We consider an approximate Rayleigh-quotient-maximization problem as follows:

min_f Σ_{k=0}^{K−1} πkLk(f)   subject to   Σ_{0≤k<l≤K−1} πkπl|Mk(f) − Ml(f)| = 1.
To solve this problem, we first pick an order of M0(f), …, MK−1(f) to remove the absolute values in the constraint. Then it becomes a convex problem. Therefore, the whole optimization can be carried out by solving K! convex problems, one for each ordering. When K is small, the computational cost is reasonable. In practice, more efficient algorithms can be applied to speed up the computation.
9. Proofs
9.1. Proof of Theorem 5.1
We prove the claim by first rewriting optimization problem (8) in a vector form. For any (Ω, δ), write x = [vec(Ω)⊤, δ⊤]⊤. Let Q be as defined in Section 5, and

q = [vec(Σ1 + μ1μ1⊤ − Σ2 − μ2μ2⊤)⊤, −2(μ1 − μ2)⊤]⊤.
We introduce the following lemma which is proved in the supplementary material [Fan et al. (2014)].
Lemma 9.1
M(Ω, δ) = q⊤x and L(Ω, δ) = x⊤Qx.
Let x* = [vec(Ω*_{λ0})⊤, (δ*_{λ0})⊤]⊤ and x̂ = [vec(Ω̂)⊤, δ̂⊤]⊤. Using Lemma 9.1, problem (8) (with the symmetry constraint dropped) can be written as

x̂ = argmin_{x: q̂⊤x = 1} {x⊤Q̂x + λ|x|1},

where Q̂ and q̂ are the counterparts of Q and q, respectively, obtained by replacing μ1, μ2, Σ1 and Σ2 with their estimates. Moreover, we have the Rayleigh quotient

R(x) = (q⊤x)² / (x⊤Qx).
In addition, we have the following lemma, which is proved in the supplementary material [Fan et al. (2014)].
Lemma 9.2
max{|Q̂ − Q|∞, |q̂ − q|∞} ≤ C0 max{|Σ̂k − Σk|∞, |μ̂k − μk|∞, k = 1, 2} for some constant C0 > 0.
Combining the above results, the claim follows immediately from the following theorem:
Theorem 9.1
For any λ0 ≥ 0, let S be the support of . Suppose Θ(S, 0) ≥ c0, Θ (S, 3) ≥ a0 and , for positive constants a0, c0 and u0. Let Δn = max{|Q̂ − Q|∞, |q̂ − q|∞}, s0 = |S| and . Suppose and . Then there exist positive constants C = C(a0, c0, u0) and A = A(a0, c0, u0), such that for any η > 1, by taking ,
The main part of the proof is to show Theorem 9.1. Write for short , R* = R(x*), V* = (R*)−1= (x*)⊤Qx*, V̄ * = (V*)1/2. Let , βn = Δn|x*|0 and . We define the quantity
Step 1
We introduce , a multiple of x, and use it to bound |x̂|1.
Let QSS be the submatrix of Q formed by the rows and columns corresponding to S. Since λmin(QSS) = Θ(S, 0) ≥ c0, we have (x*)⊤Qx* ≥ c0|x*|². Using this fact and the Cauchy–Schwarz inequality,
(20) |
It follows that
(21) |
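As a side remark, the bound (x*)⊤Qx* ≥ c0|x*|² used above follows from restricting the quadratic form to S, under the assumption (as in the statement of Theorem 9.1) that x* is supported on S:

(x^*)^\top Q\, x^* = (x^*_S)^\top Q_{SS}\, x^*_S \;\ge\; \lambda_{\min}(Q_{SS})\,|x^*_S|^2 = \Theta(S, 0)\,|x^*|^2 \;\ge\; c_0\,|x^*|^2.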
Let tn = q̂⊤x*. Then (21) says that . Noting that , we have by assumption. In particular, tn > 0. Let
Then . From the definition of x̂,
(22) |
By direct calculation,
(23) |
where the second equality is due to . We aim to bound . The following lemma is proved in the supplementary material [Fan et al. (2014)].
Lemma 9.3
When Θ(S, 0) ≥ c0, there exists a positive constant C1 = C1(c0) such that for any λ0 ≥ 0.
Since and ,
Here the third inequality follows from (20)–(21) and . The last inequality is obtained as follows: from Lemma 9.2, we know that |q̂|∞ ≤ |q|∞ + |q̂ − q|∞ ≤ 2C0 (see also the assumptions in the beginning of Section 5.2); we also use Lemma 9.3 and . By letting C = 8C2, the choice of for η > 1 ensures that
Plugging this result into (23) gives
(24) |
(25) |
First, since and , we immediately see from (25) that
(26) |
Second, note that . Plugging this into (25) gives
(27) |
Step 2
We use (26)–(27) to derive an upper bound for .
Note that
(28) |
where the last two inequalities are direct results of (27). Combining (22) and (28),
(29) |
Similar to (23), we have
(30) |
where
It follows that
Plugging this into (29), we obtain
(31) |
We can rewrite the second and third terms on the left-hand side of (31) as
Plugging this into (31) and using the triangle inequality , we find that
We drop the term on the left-hand side and apply the Cauchy–Schwarz inequality to the term . This gives
(32) |
Since (26) holds, by the definition of Θ(S, 3),
We write temporarily and b = C3βnV*. Combining these with (32),
Note that when u ≤ au + b with a < 1, we have (1 − a)u ≤ b, and hence u ≤ b/(1 − a). As a result, the above inequality implies
(33) |
where we have used . Furthermore, (30) yields that
(34) |
where the second inequality is due to , and the last inequality is from (27). Recall that . As a result,
(35) |
Combining (33), (34) and (35) gives
(36) |
Step 3
We use (36) to give a lower bound of R(x̂).
Note that R(x̂) = (q⊤x̂)²/(x̂⊤Qx̂). First, we look at the denominator x̂⊤Qx̂. From (21) and the fact that tn > 1/2,
Combining with (36) and noting that , we have
(37) |
Second, we look at the numerator q⊤x̂. Since q̂⊤x̂ = 1, by (27),
(38) |
(39) |
where A = A(a0, c0, u0) is a positive constant.
9.2. Proof of Proposition 6.1
Denote by ℙ(i|j) the probability that a new sample from class j is misclassified to class i, for i, j ∈ {1, 2} and i ≠ j. The classification error of h is
Write Mk = Mk(Ω, δ) and Lk = Lk(Ω, δ) for short. It suffices to show that
We only consider ℙ(2|1). The analysis of ℙ(1|2) is similar. Suppose . Define
so that Y ~ 𝒩(0, Id) and . Note that
(40) |
Recall that is the eigen-decomposition obtained by excluding the zero eigenvalues. Since Σ1 has full rank and the rank of Ω is q, the rank of is q. Therefore, S1 is a q × q diagonal matrix, and K1 is a d × q matrix satisfying . Let K̃1 be any d × (d − q) matrix such that K = [K1, K̃1] is a d × d orthogonal matrix. Since , we have
We recall that . Let and . It follows from (40) that
where Q̄1(w) = w⊤S1w + 2w⊤β1 and F̄1(w) = 2w⊤β̃1. Therefore,
We write for convenience W = (W1, …, Wq)⊤, W̃ = (Wq+1, …, Wd)⊤, β1 = (β11, …, β1q)⊤ and β̃1 = (β1(q+1), …, β1d)⊤, and notice that for 1 ≤ i ≤ d. Moreover,
(41) |
where , for 1 ≤ i ≤ d. The right-hand side of (41) is a sum of independent variables, so we can apply the Edgeworth expansion to its distribution function, as described in detail below.
Note that and for nonnegative integers j. By direct calculation,
Notice that 𝔼(|ξi − 𝔼(ξi)|³) < ∞, as max{|si|, |β1i|, 1 ≤ i ≤ d} ≤ C0 by assumption. Using results from Chapter XVI of Feller (1966), we know
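In generic form (reproduced here for reference only; the paper's display applies the expansion at a specific centering point, and the remainder there may be stated differently), the one-term Edgeworth expansion for a sum of independent variables ξ1, …, ξd with finite third absolute moments reads, up to a remainder of smaller order,

\mathbb{P}\!\left(\frac{\sum_{i=1}^{d}(\xi_i - \mathbb{E}\xi_i)}{\sqrt{\eta_2}} \le u\right)
  \;\approx\; \Phi(u) + \frac{\eta_3}{6\,\eta_2^{3/2}}\,(1 - u^2)\,\varphi(u),
  \qquad
  \eta_2 = \sum_{i=1}^{d}\operatorname{var}(\xi_i),\quad
  \eta_3 = \sum_{i=1}^{d}\mathbb{E}\bigl[(\xi_i - \mathbb{E}\xi_i)^3\bigr],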
where φ is the probability density function of the standard normal distribution. It is observed that η2 = L1(Ω, δ) and c1 + η1 = M1(Ω, δ). Also, c = tM1(Ω, δ) + (1 − t)M2(Ω, δ). As a result,
Plugging this into the expression of ℙ(2|1), the first term is . Moreover, since the function (1 − u²)φ(u) is uniformly bounded, the second term is . Here η2 = L1, and η3 = O(q) as the si’s and β1i’s are bounded in magnitude. Combining the above gives
The proof is now complete.
SUPPLEMENTARY MATERIAL
Supplement to “QUADRO: A supervised dimension reduction method via Rayleigh quotient optimization” (DOI: 10.1214/14-AOS1307SUPP; .pdf). Owing to space constraints, numerical tables for the simulations and some of the technical proofs are relegated to a supplementary document. It contains proofs of Propositions 2.1, 5.1 and 6.2.
References
- Bickel PJ, Ritov Y, Tsybakov AB. Simultaneous analysis of lasso and Dantzig selector. Ann Statist. 2009;37:1705–1732. MR2533469.
- Bindea G, Mlecnik B, Hackl H, Charoentong P, Tosolini M, Kirilovsky A, Fridman WH, Pagès F, Trajanoski Z, Galon J. ClueGO: A Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks. Bioinformatics. 2009;25:1091–1093. doi: 10.1093/bioinformatics/btp101.
- Cai T, Liu W. A direct estimation approach to sparse linear discriminant analysis. J Amer Statist Assoc. 2011;106:1566–1577. MR2896857.
- Cai T, Liu W, Luo X. A constrained ℓ1 minimization approach to sparse precision matrix estimation. J Amer Statist Assoc. 2011;106:594–607. MR2847973.
- Catoni O. Challenging the empirical mean and empirical variance: A deviation study. Ann Inst Henri Poincaré Probab Stat. 2012;48:1148–1185. MR3052407.
- Chen X, Zou C, Cook RD. Coordinate-independent sparse sufficient dimension reduction and variable selection. Ann Statist. 2010;38:3696–3723. MR2766865.
- Cook RD, Weisberg S. Comment on “Sliced inverse regression for dimension reduction.” J Amer Statist Assoc. 1991;86:328–332.
- Coudret R, Liquet B, Saracco J. Comparison of sliced inverse regression approaches for underdetermined cases. J SFdS. 2014;155:72–96. MR3211755.
- Fan J, Fan Y. High-dimensional classification using features annealed independence rules. Ann Statist. 2008;36:2605–2637. doi: 10.1214/07-AOS504. MR2485009.
- Fan J, Feng Y, Tong X. A road to classification in high dimensional space. J Roy Statist Soc B. 2012;74:745–771. doi: 10.1111/j.1467-9868.2012.01029.x. MR2965958.
- Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Amer Statist Assoc. 2001;96:1348–1360. MR1946581.
- Fan J, Xue L, Zou H. Strong oracle optimality of folded concave penalized estimation. Ann Statist. 2014;42:819–849. doi: 10.1214/13-aos1198. MR3210988.
- Fan J, Ke ZT, Liu H, Xia L. Supplement to “QUADRO: A supervised dimension reduction method via Rayleigh quotient optimization.” 2015. doi: 10.1214/14-AOS1307SUPP.
- Feller W. An Introduction to Probability Theory and Its Applications, Vol. II. Wiley; New York: 1966.
- Fisher RA. The use of multiple measurements in taxonomic problems. Annals of Eugenics. 1936;7:179–188.
- Friedman JH. Regularized discriminant analysis. J Amer Statist Assoc. 1989;84:165–175. MR0999675.
- Guo Y, Hastie T, Tibshirani R. Regularized discriminant analysis and its application in microarrays. Biostatistics. 2005;1:1–18. doi: 10.1093/biostatistics/kxj035.
- Han F, Liu H. Transelliptical component analysis. Adv Neural Inf Process Syst. 2012;25:368–376.
- Han F, Zhao T, Liu H. CODA: High dimensional copula discriminant analysis. J Mach Learn Res. 2013;14:629–671. MR3033343.
- Jiang B, Liu JS. Sliced inverse regression with variable selection and interaction detection. 2013. Preprint. Available at arXiv:1304.4056.
- Kendall MG. A new measure of rank correlation. Biometrika. 1938;30:81–93.
- Kent JT. Discussion of Li (1991). J Amer Statist Assoc. 1991;86:336–337.
- Li KC. Sliced inverse regression for dimension reduction. J Amer Statist Assoc. 1991;86:316–342. MR1137117.
- Li KC. High dimensional data analysis via the SIR/PHD approach. Lecture notes, Dept. Statistics, UCLA, Los Angeles, CA; 2000. Available at http://www.stat.ucla.edu/~kcli/sir-PHD.pdf.
- Li B, Wang S. On directional regression for dimension reduction. J Amer Statist Assoc. 2007;102:997–1008. MR2354409.
- Li L, Yin X. Sliced inverse regression with regularizations. Biometrics. 2008;64:124–131. doi: 10.1111/j.1541-0420.2007.00836.x. MR2422826.
- Liu H, Han F, Yuan M, Lafferty J, Wasserman L. High-dimensional semiparametric Gaussian copula graphical models. Ann Statist. 2012;40:2293–2326. MR3059084.
- Luparello C. Aspects of collagen changes in breast cancer. J Carcinogene Mutagene. 2013;S13:007. doi: 10.4172/2157-2518.S13-007.
- Maruyama Y, Seo T. Estimation of moment parameter in elliptical distributions. J Japan Statist Soc. 2003;33:215–229. MR2039896.
- Shao J, Wang Y, Deng X, Wang S. Sparse linear discriminant analysis by thresholding for high dimensional data. Ann Statist. 2011;39:1241–1265. MR2816353.
- Wei Z, Li H. A Markov random field model for network-based analysis of genomic data. Bioinformatics. 2007;23:1537–1544. doi: 10.1093/bioinformatics/btm129.
- Witten DM, Tibshirani R. Penalized classification using Fisher’s linear discriminant. J R Stat Soc Ser B Stat Methodol. 2011;73:753–772. doi: 10.1111/j.1467-9868.2011.00783.x. MR2867457.
- Wu HM. Kernel sliced inverse regression with applications to classification. J Comput Graph Statist. 2008;17:590–610. MR2528238.
- Zhao T, Roeder K, Liu H. Positive semidefinite rank-based correlation matrix estimation with application to semiparametric graph estimation. 2013. Unpublished manuscript. doi: 10.1080/10618600.2013.858633.
- Zhao P, Yu B. On model selection consistency of Lasso. J Mach Learn Res. 2006;7:2541–2563. MR2274449.
- Zhong W, Zeng P, Ma P, Liu JS, Zhu Y. RSIR: Regularized sliced inverse regression for motif discovery. Bioinformatics. 2005;21:4169–4175. doi: 10.1093/bioinformatics/bti680.
- Zou H. The adaptive lasso and its oracle properties. J Amer Statist Assoc. 2006;101:1418–1429. MR2279469.
- Zou H, Li R. One-step sparse estimates in nonconcave penalized likelihood models. Ann Statist. 2008;36:1509–1533. doi: 10.1214/009053607000000802. MR2435443.