Published in final edited form as: Ann Stat. 2015;43(4):1498–1534. doi: 10.1214/14-AOS1307

QUADRO: A SUPERVISED DIMENSION REDUCTION METHOD VIA RAYLEIGH QUOTIENT OPTIMIZATION

Jianqing Fan *,1, Zheng Tracy Ke †,1, Han Liu *,2, Lucy Xia *,1

Abstract

We propose a novel Rayleigh quotient based sparse quadratic dimension reduction method—named QUADRO (Quadratic Dimension Reduction via Rayleigh Optimization)—for analyzing high-dimensional data. Unlike in the linear setting, where Rayleigh quotient optimization coincides with classification, these two problems are very different under nonlinear settings. In this paper, we clarify this difference and show that Rayleigh quotient optimization may be of independent scientific interest. One major challenge of Rayleigh quotient optimization is that the variance of quadratic statistics involves all fourth cross-moments of the predictors, which are infeasible to compute for high-dimensional applications and may accumulate too many stochastic errors. This issue is resolved by considering a family of elliptical models. Moreover, for heavy-tailed distributions, robust estimates of mean vectors and covariance matrices are employed to guarantee uniform convergence in estimating nonpolynomially many parameters, even though only the fourth moments are assumed. Methodologically, QUADRO is based on elliptical models, which allow us to formulate the Rayleigh quotient maximization as a convex optimization problem. Computationally, we propose an efficient linearized augmented Lagrangian method to solve the constrained optimization problem. Theoretically, we provide explicit rates of convergence in terms of the Rayleigh quotient under both Gaussian and general elliptical models. Thorough numerical results on both synthetic and real datasets are also provided to back up our theoretical results.

Key words and phrases: Classification, dimension reduction, quadratic discriminant analysis, Rayleigh quotient, oracle inequality

1. Introduction

Rapid developments of imaging technology, microarray data studies and many other applications call for the analysis of high-dimensional binary-labeled data. We consider the problem of finding a “nice” projection f : ℝd → ℝ that embeds all data into the real line. A projection such as f has applications in many statistical problems for analyzing high-dimensional binary-labeled data, including:

  • Dimension reduction: f provides a data reduction tool for people to visualize the high-dimensional data in a one-dimensional space.

  • Classification: f can be used to construct classification rules. With a carefully chosen set A ⊂ ℝ, we can classify a new data point x ∈ ℝd by checking whether or not f(x) ∈ A.

  • Feature selection: when f(x) only depends on a small number of coordinates of x, this projection selects just a few features from numerous observed ones.

A natural question is: what kind of f is a “nice” projection? It depends on the goal of statistical analysis. For classification, a good f should yield a small classification error. In feature selection, different criteria select distinct features, and they may suit different real problems. In this paper, we propose using the following criterion for finding f:

Under the mapping f, the data are as “separable” as possible between two classes, and as “coherent” as possible within each class.

This can be formulated as maximizing the Rayleigh quotient of f. Suppose all data are drawn independently from a joint distribution of (X, Y), where X ∈ ℝd, and Y ∈ {0, 1} is the label. The Rayleigh quotient of f is defined as

$$\mathrm{Rq}(f)\;\equiv\;\frac{\operatorname{var}\{E[f(X)\mid Y]\}}{\operatorname{var}\{f(X)-E[f(X)\mid Y]\}}. \tag{1}$$

Here, the numerator is the variance of f(X) explained by the class label, and the denominator is the remaining variance of f(X). Simple calculation shows that Rq(f) = π(1 − π)R(f), where π ≡ ℙ(Y = 0) and

$$R(f)\;\equiv\;\frac{\{E[f(X)\mid Y=0]-E[f(X)\mid Y=1]\}^2}{\pi\operatorname{var}[f(X)\mid Y=0]+(1-\pi)\operatorname{var}[f(X)\mid Y=1]}. \tag{2}$$

Our goal is to develop a data-driven procedure to find f̂ such that Rq(f̂) is large, and f̂ is sparse in the sense that it depends on few coordinates of X.
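To make the criterion concrete, here is a minimal sketch (not from the paper) of how the empirical version of R(f) in (2) can be computed for a given projection on labeled data; all function and variable names are illustrative.

```python
import numpy as np

def rayleigh_quotient(fX, y):
    """Empirical version of R(f) in (2): fX are projected values f(x_i),
    y are binary labels (0/1). Larger values mean better between-class
    separation relative to within-class spread."""
    f0, f1 = fX[y == 0], fX[y == 1]
    pi = np.mean(y == 0)
    num = (f0.mean() - f1.mean()) ** 2
    den = pi * f0.var() + (1 - pi) * f1.var()
    return num / den

# toy usage: a linear projection f(x) = a^T x on simulated data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(0.5, 1, (50, 5))])
y = np.r_[np.zeros(50), np.ones(50)]
a = np.ones(5)
print(rayleigh_quotient(X @ a, y))
```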

The Rayleigh quotient, as a criterion for finding a projection f, serves different purposes. First, for dimension reduction, it takes care of both variance explanation and label explanation. In contrast, methods such as principal component analysis (PCA) only consider variance explanation. Second, when the data are normally distributed, a monotone transform of the Rayleigh quotient approximates the classification error; see Section 6. Therefore, an f with a large Rayleigh quotient enables us to construct nice classification rules. In addition, it is a convex optimization to maximize the Rayleigh quotient among linear and quadratic f (see Section 3), while minimizing the classification error is not. Third, with appropriate regularization, this criterion provides a new feature selection tool for data analysis.

The criterion (1), initially introduced by Fisher (1936) for classification, is known as Fisher’s linear discriminant analysis (LDA). In the literature of sufficient dimension reduction, the sliced inverse regression (SIR) proposed by Li (1991) can also be formulated as maximizing (1), where Y can be any variable not necessarily binary. In both LDA and SIR, f is restricted to be a linear function, and the dimension d cannot be larger than n. In this sense, our work compares directly to various versions of LDA and SIR generalized to nonlinear, high-dimensional settings. We provide a more detailed comparison to the literature in Section 8, but preview here the uniqueness of our work. First, we consider a setting where X|Y has an elliptical distribution and f is a quadratic function, which allows us to derive a simplified version of (1) and gain extra statistical efficiency; see Section 2 for details. This simplified version of (1) was never considered before. Furthermore, the assumption of conditional elliptical distribution does not satisfy the requirement of SIR and many other dimension reduction methods [Cook and Weisberg (1991), Li (1991)]. In Section 1.2, we explain the motivation of the current setting. Second, we utilize robust estimators of mean and covariance matrix, while many generalizations of LDA and SIR are based on sample mean and sample covariance matrix. As shown in Section 4, the robust estimators adapt better to heavy tails on the data. It is worth noting that QUADRO only considers the projection to a one-dimensional subspace. In contrast, more sophisticated dimension reduction methods (e.g., the kernel SIR) are able to find multiple projections f1, …, fm for m > 1. This reflects a tradeoff between modeling tractability and flexibility. More specifically, QUADRO achieves better computational and theoretical properties at the cost of sacrificing some flexibility.

1.1. Rayleigh quotient and classification error

Many popular statistical methods for analyzing high-dimensional binary-labeled data are based on classification error minimization, which is closely related to the Rayleigh quotient maximization. We summarize their connections and differences as follows:

  1. In an “ideal” setting where two classes follow multivariate normal distributions with a common covariance matrix and the class of linear functions f is considered, the two criteria are exactly the same, with one being a monotone transform of the other.

  2. In a “relaxed” setting where two classes follow multivariate normal distributions but with nonequal covariance matrices and the class of quadratic functions f (including linear functions as special cases) is considered, the two criteria are closely related in the sense that a monotone transform of the Rayleigh quotient is an approximation of the classification error.

  3. In other settings, the two criteria can be very different.

We now show (1) and (3), and will discuss (2) in Section 6.

For each f, we define a family of classifiers hc(x) = I {f (x) < c} indexed by c, where I (·) is the indicator function. For each given c, we define the classification error of hc to be err(hc) ≡ ℙ(hc(X) ≠ Y). The classification error of f is then defined by

$$\mathrm{Err}(f)\;\equiv\;\min_c\{\mathrm{err}(h_c)\}.$$

Most existing classification procedures aim at finding a data-driven projection f̂ such that Err(f̂) is small (the threshold c is usually easy to choose). Examples include linear discriminant analysis (LDA) and its variations in high dimensions [e.g., Cai and Liu (2011), Fan and Fan (2008), Fan, Feng and Tong (2012), Guo, Hastie and Tibshirani (2005), Han, Zhao and Liu (2013), Shao et al. (2011), Witten and Tibshirani (2011)], quadratic discriminant analysis (QDA), support vector machine (SVM), logistic regression, boosting, etc.

We now compare Rq(f) and Err(f). Let π = ℙ(Y = 0), μ1 = 𝔼(X|Y = 0), Σ1 = cov(X|Y = 0), μ2 = 𝔼(X|Y = 1) and Σ2 = cov(X|Y = 1). We consider linear functions {f(x) = aᵀx + b : a ∈ ℝd, b ∈ ℝ}, and write Rq(a) = Rq(aᵀx), Err(a) = Err(aᵀx) for short. By direct calculation, when the two classes have a common covariance matrix Σ,

$$\mathrm{Rq}(a)=\pi(1-\pi)\,\frac{[a^\top(\mu_1-\mu_2)]^2}{a^\top\Sigma a}.$$

Hence, the optimal a_R = Σ⁻¹(μ1 − μ2). On the other hand, when data follow multivariate normal distributions, the optimal classifier is h(x) = I{a_Eᵀx < c}, where a_E = Σ⁻¹(μ1 − μ2) and c = ½μ1ᵀΣ⁻¹μ1 − ½μ2ᵀΣ⁻¹μ2 + log((1 − π)/π). It is observed that a_R = a_E and the two criteria are the same. In fact, for all vectors a such that aᵀ(μ1 − μ2) > 0,

$$\mathrm{Err}(a)=1-\Phi\Bigl(\tfrac12\Bigl[\frac{\mathrm{Rq}(a)}{\pi(1-\pi)}\Bigr]^{1/2}\Bigr),$$

where Φ is the distribution function of a standard normal random variable, and we fix c = aᵀ(μ1 + μ2)/2. Therefore, the classification error is a monotone transform of the Rayleigh quotient.

When we move away from these ideal assumptions, the above two criteria can be very different. We illustrate this point using a bivariate distribution, that is, d = 2, with different covariance matrices. Specifically, π = 0.55, μ1 = (0, 0)ᵀ, μ2 = (1.28, 0.8)ᵀ, Σ1 = diag(1, 1) and Σ2 = diag(3, 1/3). We still consider linear functions f(x) = aᵀx but select only one of the two features, X1 or X2. Then the maximum Rayleigh quotients, using each of the two features alone, are 0.853 and 0.923, respectively, whereas the minimum classification errors are 0.284 and 0.295, respectively. As a result, under the criterion of maximizing the Rayleigh quotient, Feature 2 is selected, whereas under the criterion of minimizing the classification error, Feature 1 is selected. Figure 1 displays the distributions of the data after being projected onto each of the two features. It shows that, since data from the second class have much larger variability at Feature 1 than at Feature 2, the Rayleigh quotient maximization favors Feature 2, although Feature 1 yields a smaller classification error.
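The following sketch recomputes the two criteria for this bivariate example under the stated parameters. It evaluates R(f) in (2) for each single-feature projection and approximates the minimum classification error over thresholds on a grid; the values it prints are close to, but need not exactly match, the numbers reported above, which come from the authors' own computation.

```python
import numpy as np
from scipy.stats import norm

pi, mu1, mu2 = 0.55, np.array([0.0, 0.0]), np.array([1.28, 0.8])
s1, s2 = np.array([1.0, 1.0]), np.array([3.0, 1 / 3])   # diagonal variances

for j in range(2):
    # R(f) in (2) for the single-feature projection f(x) = x_j
    R = (mu1[j] - mu2[j]) ** 2 / (pi * s1[j] + (1 - pi) * s2[j])
    # minimum classification error over thresholds c (both orientations)
    cs = np.linspace(-6, 6, 4001)
    err0 = norm.cdf((cs - mu1[j]) / np.sqrt(s1[j]))        # P(X_j < c | Y=0)
    err1 = 1 - norm.cdf((cs - mu2[j]) / np.sqrt(s2[j]))    # P(X_j >= c | Y=1)
    err = np.minimum(pi * err0 + (1 - pi) * err1,
                     pi * (1 - err0) + (1 - pi) * (1 - err1))
    print(f"feature {j + 1}: R = {R:.3f}, min error = {err.min():.3f}")
```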

Fig. 1. An example in ℝ2. The green and purple represent class 1 and class 2, respectively. The ellipses are contours of the distributions. Probability densities after projection onto X1 and X2 are also displayed. The dotted lines correspond to the optimal classification thresholds for each feature.

1.2. Objective of the paper

In this paper, we consider the Rayleigh quotient maximization problem in the following setting:

  • We consider sparse quadratic functions, that is, f(x) = xᵀΩx − 2δᵀx, where Ω is a sparse d × d symmetric matrix, and δ is a sparse d-dimensional vector.

  • The two classes can have different covariance matrices.

  • Data from these two classes follow elliptical distributions.

  • The dimension is large (it is possible that d ≫ n).

Compared to Fisher’s LDA, our setting has several new ingredients. First, we go beyond linear classifiers to enhance flexibility. It is well known that linear classifiers can be inefficient; for example, when two classes have the same mean, linear classifiers perform no better than random guesses. Instead of exploring arbitrary nonlinear functions, we consider the class of quadratic functions so that the Rayleigh quotient still has a nice parametric formulation, and at the same time it helps identify interaction effects between features. Second, we drop the requirement that the two classes share a common covariance matrix, which is a critical condition for Fisher’s rule and many other high-dimensional classification methods [e.g., Cai and Liu (2011), Fan and Fan (2008), Fan, Feng and Tong (2012)]. In fact, by using quadratic discriminant functions, we take advantage of the difference of covariance matrices between the two classes to enhance classification power. Third, we generalize multivariate normal distributions to the elliptical family, which includes many heavy-tailed distributions, such as multivariate t-distributions, Laplace distributions and Cauchy distributions. This family of distributions allows us to avoid estimating all O(d⁴) fourth cross-moments of the d predictors when computing the variance of quadratic statistics, and hence overcomes the computation and noise-accumulation issues.

In our setting, Fisher’s rule, that is, a_R = Σ⁻¹(μ1 − μ2), no longer maximizes the Rayleigh quotient. We propose a new method, called quadratic dimension reduction via Rayleigh optimization (QUADRO). It is a Rayleigh-quotient-oriented procedure and is a statistical tool for simultaneous dimension reduction and feature selection. QUADRO has several properties. First, it is a statistically efficient generalization of Fisher’s linear discriminant analysis to the quadratic setting. A naive generalization involves estimating all fourth cross-moments of the two underlying distributions. In contrast, QUADRO only requires estimating a one-dimensional kurtosis parameter. Second, QUADRO adopts rank-based estimators and robust M-estimators of the covariance matrices and the means. Therefore, it is robust to possibly heavy-tailed distributions. Third, QUADRO can be formulated as a convex program and is computationally efficient.

Theoretically, we prove that under elliptical models, the Rayleigh quotient of the estimated quadratic function converges to the population maximum Rayleigh quotient at rate O_P(s√(log(d)/n)), where s is the number of important features (counting both single terms and interaction terms). In addition, we establish a connection between our method and quadratic discriminant analysis (QDA) under elliptical models.

The rest of this paper is organized as follows. Section 2 formulates Rayleigh quotient maximization as a convex optimization problem. Section 3 describes QUADRO. Section 4 discusses rank-based estimators and robust M-estimators used in QUADRO. Section 5 presents theoretical analysis. Section 6 discusses the application of QUADRO in elliptically distributed classification problems. Section 7 contains numerical studies. Section 8 concludes the paper. All proofs are collected in Section 9.

Notation

For 0 ≤ q ≤ ∞, |v|q denotes the Lq-norm of a vector v, |A|q denotes the elementwise Lq-norm of a matrix A and ||A||q denotes the matrix Lq-norm of A. When q = 2, we omit the subscript q. λmin(A) and λmax(A) denote the minimum and maximum eigenvalues of A. det(A) denotes the determinant of A. Let I (·) be the indicator function: for any event B, I (B) = 1 if B happens and I (B) = 0 otherwise. Let sign(·) be the sign function, where sign(u) = 1 when u ≥ 0 and sign(u) = −1 when u < 0.

2. Rayleigh quotient for quadratic functions

We first study the population form of Rayleigh quotient for an arbitrary quadratic function. We show that it has a simplified form under the elliptical family.

For a quadratic function

$$Q(X)=X^\top\Omega X-2\delta^\top X,$$

using (2), its Rayleigh quotient is

$$R(\Omega,\delta)=\frac{\{E[Q(X)\mid Y=0]-E[Q(X)\mid Y=1]\}^2}{\pi\operatorname{var}[Q(X)\mid Y=0]+(1-\pi)\operatorname{var}[Q(X)\mid Y=1]} \tag{3}$$

up to a constant multiplier. The Rayleigh quotient maximization can be expressed as

$$\max_{(\Omega,\delta):\,\Omega=\Omega^\top}\;R(\Omega,\delta).$$

2.1. General setting

Suppose 𝔼(Z) = μ and cov(Z) = Σ. By direct calculation,

$$\begin{aligned} E[Q(Z)]&=\operatorname{tr}(\Omega\Sigma)+\mu^\top\Omega\mu-2\delta^\top\mu,\\ \operatorname{var}[Q(Z)]&=E[\operatorname{tr}(\Omega ZZ^\top\Omega ZZ^\top)]-4E[\delta^\top ZZ^\top\Omega Z]+4\delta^\top\Sigma\delta+4(\delta^\top\mu)^2-\{E[Q(Z)]\}^2. \end{aligned}$$

So 𝔼[Q(Z)] is a linear combination of the elements in {Ω(i, j), 1 ≤ i ≤ j ≤ d; δ(i), 1 ≤ i ≤ d}, and var[Q(Z)] is a quadratic form of these elements. The coefficients in 𝔼[Q(Z)] are functions of μ and Σ only. However, the coefficients in var[Q(Z)] also depend on all the fourth cross-moments of Z, and there are O(d⁴) of them.

Let us define M1(Ω, δ) = 𝔼[Q(X)|Y = 0], L1(Ω, δ) = var[Q(X)|Y = 0] and M2(Ω, δ), L2(Ω, δ) similarly. Also, let κ = (1 − π)/π. We have

$$R(\Omega,\delta)=\frac{[M_1(\Omega,\delta)-M_2(\Omega,\delta)]^2}{L_1(\Omega,\delta)+\kappa L_2(\Omega,\delta)}.$$

Therefore, both the numerator and denominator are quadratic combinations of the elements in Ω and δ. We can stack the d(d + 1)/2 elements in Ω (assuming it is symmetric) and the d elements in δ into a long vector v. Then R(Ω, δ) can be written as

$$R(v)=\frac{(a^\top v)^2}{v^\top Av},$$

where a is a d′ × 1 vector, A is a d′ × d′ positive semi-definite matrix and d′ = d(d + 1)/2 + d. A and a are determined by the coefficients in the denominator and numerator of R(Ω, δ), respectively. Now, max(Ω, δ) R(Ω, δ) is equivalent to maxv R(v). It has explicit solutions. For example, when A is positive definite, the function R(v) is maximized at v* = A−1a. We can then reshape v* to get the desired (Ω*, δ*).
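For low-dimensional problems, this generic reduction can be carried out directly. The sketch below (assuming A is positive definite; all names are illustrative, and it is only meant for small d, in light of the infeasibility discussed next) maximizes R(v) = (aᵀv)²/(vᵀAv) via v* = A⁻¹a.

```python
import numpy as np

def max_rayleigh(a, A):
    """Maximize R(v) = (a^T v)^2 / (v^T A v) for positive definite A.
    The maximizer is v* = A^{-1} a (up to scale), with value a^T A^{-1} a."""
    v_star = np.linalg.solve(A, a)
    value = float(a @ v_star)               # = a^T A^{-1} a
    return v_star, value

# toy check in dimension 3
rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))
A = B @ B.T + np.eye(3)                     # positive definite
a = rng.normal(size=3)
v, val = max_rayleigh(a, A)
print(val, (a @ v) ** 2 / (v @ A @ v))      # the two numbers agree
```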

Practical implementation of the above idea is infeasible in high dimensions, as it involves O(d⁴) cross-moments of Z. This not only poses computational challenges, but also accumulates noise in the estimation. Furthermore, good estimates of fourth moments usually require the existence of eighth moments, which is not realistic for many heavy-tailed distributions. These problems can be avoided under the elliptical family, as we illustrate in the next subsection.

2.2. Elliptical distributions

The elliptical family contains multivariate distributions whose densities have elliptical contours. It generalizes multivariate normal distributions and inherits many of their nice properties.

Given a d × 1 vector μ and a d × d positive definite matrix Σ, a random vector Z that follows an elliptical distribution admits

$$Z=\mu+\xi\,\Sigma^{1/2}U, \tag{4}$$

where U is a random vector that follows the uniform distribution on the unit sphere S^{d−1}, and ξ is a nonnegative random variable independent of U. Denote the elliptical distribution by ℰ(μ, Σ, g), where g is the density of ξ. In this paper, we always assume that E(ξ⁴) < ∞ and require that E(ξ²) = d for model identifiability. Then Σ is the covariance matrix of Z.

Proposition 2.1

Suppose Z follows an elliptical distribution as in (4). Then

$$\begin{aligned} E[Q(Z)]&=\operatorname{tr}(\Omega\Sigma)+\mu^\top\Omega\mu-2\mu^\top\delta,\\ \operatorname{var}[Q(Z)]&=2(1+\gamma)\operatorname{tr}(\Omega\Sigma\Omega\Sigma)+\gamma[\operatorname{tr}(\Omega\Sigma)]^2+4(\Omega\mu-\delta)^\top\Sigma(\Omega\mu-\delta), \end{aligned}$$

where γ = E(ξ⁴)/[d(d + 2)] − 1 is the kurtosis parameter.

The proof is given in the online supplementary material [Fan et al. (2014)]. The variance of Q(Z) does not involve any fourth cross-moments, but only the kurtosis parameter γ. For multivariate normal distributions, ξ2 follows a χ2-distribution with d degrees of freedom, and γ = 0. For multivariate t-distribution with degrees of freedom ν > 4, we have γ = 2/(ν − 4).
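As an illustration, the following Monte Carlo sketch checks the variance formula of Proposition 2.1 on a multivariate t-distribution with ν = 10 degrees of freedom (so γ = 1/3); the sampling construction and all names are our own choices, not the paper's, and the agreement is only up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(2)
d, nu = 4, 10.0                       # multivariate t with nu > 4
gamma = 2.0 / (nu - 4.0)              # kurtosis parameter of the t family
mu = rng.normal(size=d)
B = rng.normal(size=(d, d)); Sigma = B @ B.T + d * np.eye(d)   # covariance of Z
Omega = rng.normal(size=(d, d)); Omega = (Omega + Omega.T) / 2
delta = rng.normal(size=d)

# draw Z ~ multivariate t with mean mu and covariance Sigma
n = 400000
scale = Sigma * (nu - 2.0) / nu       # scatter matrix so that cov(Z) = Sigma
g = rng.multivariate_normal(np.zeros(d), scale, size=n)
w = rng.chisquare(nu, size=n) / nu
Z = mu + g / np.sqrt(w)[:, None]

Q = np.einsum('ij,jk,ik->i', Z, Omega, Z) - 2 * Z @ delta
var_formula = (2 * (1 + gamma) * np.trace(Omega @ Sigma @ Omega @ Sigma)
               + gamma * np.trace(Omega @ Sigma) ** 2
               + 4 * (Omega @ mu - delta) @ Sigma @ (Omega @ mu - delta))
print(Q.var(), var_formula)           # the two values should be close
```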

2.3. Rayleigh optimization

We assume that the two classes both follow elliptical distributions: X|(Y = 0) ~ ℰ(μ1, Σ1, g1) and X|(Y = 1) ~ ℰ (μ2, Σ2, g2). To facilitate the presentation, we assume the quantity γ is the same for both classes of conditional distributions. Let

$$\begin{aligned} M(\Omega,\delta)&=-\mu_1^\top\Omega\mu_1+\mu_2^\top\Omega\mu_2+2(\mu_1-\mu_2)^\top\delta-\operatorname{tr}\bigl(\Omega(\Sigma_1-\Sigma_2)\bigr),\\ L_k(\Omega,\delta)&=2(1+\gamma)\operatorname{tr}(\Omega\Sigma_k\Omega\Sigma_k)+\gamma[\operatorname{tr}(\Omega\Sigma_k)]^2+4(\Omega\mu_k-\delta)^\top\Sigma_k(\Omega\mu_k-\delta), \end{aligned} \tag{5}$$

for k = 1 and 2. Combining (3) with Proposition 2.1, we have

$$R(\Omega,\delta)=\frac{[M(\Omega,\delta)]^2}{L_1(\Omega,\delta)+\kappa L_2(\Omega,\delta)}, \tag{6}$$

where κ = (1 − π)/π.

Note that if we multiply both Ω and δ by a common constant, R(Ω, δ) remains unchanged. Therefore, maximizing R(Ω, δ) is equivalent to solving the following constrained minimization problem:

$$\min_{(\Omega,\delta):\,M(\Omega,\delta)=1,\;\Omega=\Omega^\top}\;\{L_1(\Omega,\delta)+\kappa L_2(\Omega,\delta)\}. \tag{7}$$

We call problem (7) the Rayleigh optimization. It is a convex problem whenever Σ1 and Σ2 are both positive semi-definite.

The formulation of the Rayleigh optimization only involves the means and covariance matrices, and the kurtosis parameter γ. Therefore, if we know γ (e.g., when we know which subfamily the distributions belong to) and have good estimates (μ̂1, μ̂2, Σ̂1, Σ̂2), we can solve the empirical version of (7) to obtain (Ω̂, δ̂), which is the main idea of QUADRO. In addition, (7) is a convex problem, with a quadratic objective and equality constraints. Hence it can be solved efficiently by many optimization algorithms.

3. Quadratic dimension reduction via Rayleigh optimization

Now, we formally introduce the QUADRO procedure. We fix a model parameter γ ≥ 0. Let M̂, L̂1 and L̂2 be the sample versions of M, L1, L2 in (5), obtained by replacing (μ1, μ2, Σ1, Σ2) with their estimates. Details of these estimates will be given in Section 4. Let π̂ = n1/(n1 + n2) and κ = (1 − π̂)/π̂. Given tuning parameters λ1 > 0 and λ2 > 0, we solve

$$\min_{(\Omega,\delta):\,\hat M(\Omega,\delta)=1,\;\Omega=\Omega^\top}\;\{\hat L_1(\Omega,\delta)+\kappa\hat L_2(\Omega,\delta)+\lambda_1|\Omega|_1+\lambda_2|\delta|_1\}. \tag{8}$$

We propose a linearized augmented Lagrangian method to solve (8). To simplify the notation, we write L̂ = L̂1 + κL̂2, and omit the hat symbol on M and L when there is no confusion. The optimization problem is then

$$\min_{(\Omega,\delta):\,M(\Omega,\delta)=1,\;\Omega=\Omega^\top}\;\{L(\Omega,\delta)+\lambda_1|\Omega|_1+\lambda_2|\delta|_1\}.$$

For an algorithm parameter ρ > 0, and a dual variable ν, we define the augmented Lagrangian as

$$F_\rho(\Omega,\delta,\nu)=L(\Omega,\delta)+\nu[M(\Omega,\delta)-1]+(\rho/2)[M(\Omega,\delta)-1]^2.$$

Using zero as the initial value, we iteratively update:

  • δ(k) = argminδ{Fρ(Ω(k−1), δ, ν(k−1)) + λ2|δ|1},

  • Ω(k) = argmin_{Ω: Ω = Ωᵀ}{Fρ(Ω, δ(k), ν(k−1)) + λ1|Ω|1},

  • ν(k) = ν(k−1) + ρ[M(Ω(k), δ(k)) − 1].

Here, the first two steps are primal updates, and the third step is a dual update.

First, we consider the update of δ. When Ω and ν are fixed, we can write

$$F_\rho(\Omega,\delta,\nu)=\delta^\top A\delta-2\delta^\top b+c_\rho(\Omega,\nu),$$

where

$$\begin{aligned} A&=4(\Sigma_1+\kappa\Sigma_2)+2\rho(\mu_1-\mu_2)(\mu_1-\mu_2)^\top,\\ b&=4(\Sigma_1\Omega\mu_1+\kappa\Sigma_2\Omega\mu_2)+\bigl[\rho\operatorname{tr}\bigl(\Omega(\Sigma_1-\Sigma_2)\bigr)+\rho\mu_1^\top\Omega\mu_1-\rho\mu_2^\top\Omega\mu_2+(\rho-\nu)\bigr](\mu_1-\mu_2), \end{aligned} \tag{9}$$

and cρ (Ω, ν) does not depend on δ. Note that A is a positive semi-definite matrix. The update of δ is indeed a Lasso problem.
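For concreteness, a generic coordinate-descent sketch for this Lasso-type subproblem, min_δ {δᵀAδ − 2δᵀb + λ₂|δ|₁}, is given below; it is not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def lasso_quadratic(A, b, lam, n_iter=200):
    """Coordinate descent for min_delta delta^T A delta - 2 delta^T b + lam*||delta||_1,
    with A positive semi-definite (the delta-update of the algorithm in Section 3)."""
    d = len(b)
    delta = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            # partial residual: all coordinates except j held fixed
            r = b[j] - A[j] @ delta + A[j, j] * delta[j]
            if A[j, j] > 0:
                # soft-threshold the one-dimensional minimizer
                delta[j] = np.sign(r) * max(abs(r) - lam / 2, 0.0) / A[j, j]
    return delta
```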

Next, we consider the update of Ω. When δ and ν are fixed, Fρ(Ω, δ, ν) is a convex function of Ω. We propose an approximate update step: we first “linearize” Fρ at Ω = Ω(k−1) to construct an upper envelope F̄ρ, and then minimize this upper envelope. In detail, at any Ω = Ω0, we consider the following upper bound of Fρ(Ω, δ, ν):

$$\bar F_\rho(\Omega,\delta,\nu)\equiv F_\rho(\Omega_0,\delta,\nu)+\sum_{1\le i\le j\le d}[\Omega(i,j)-\Omega_0(i,j)]\,\frac{\partial F_\rho(\Omega_0,\delta,\nu)}{\partial\Omega(i,j)}+\frac{\tau}{2}\sum_{1\le i\le j\le d}[\Omega(i,j)-\Omega_0(i,j)]^2,$$

where τ is a large enough constant [e.g., we can take τ = Σ_{1≤i≤j≤d} ∂²Fρ(Ω0, δ, ν)/∂Ω(i, j)²]. We then minimize F̄ρ(Ω, δ, ν) + λ1|Ω|1 to update Ω. This modified update step has an explicit solution,

$$\Omega^*(i,j)=\mathcal{S}\Bigl(\Omega_0(i,j)-\frac{1}{\tau}\,\frac{\partial F_\rho(\Omega_0,\delta,\nu)}{\partial\Omega(i,j)},\;\frac{\lambda_1}{\tau}\Bigr),$$

where 𝒮 (x, a) ≡ (|x| − a)+ sign(x) is the soft-thresholding function. We can write Ω* in a matrix form. Let

$$D=4(1+\gamma)(\Sigma_1\Omega\Sigma_1+\kappa\Sigma_2\Omega\Sigma_2)+2\gamma\bigl[\operatorname{tr}(\Omega\Sigma_1)\Sigma_1+\kappa\operatorname{tr}(\Omega\Sigma_2)\Sigma_2\bigr]+4\,\mathrm{sym}\bigl(\Sigma_1(\Omega\mu_1-\delta)\mu_1^\top+\kappa\Sigma_2(\Omega\mu_2-\delta)\mu_2^\top\bigr), \tag{10}$$

where sym(B) = (B + Bᵀ)/2 for any square matrix B. By direct calculation,

$$\Omega^*=\mathcal{S}\Bigl(\Omega_0-\frac{1}{\tau}D,\;\frac{\lambda_1}{\tau}\Bigr).$$

We now describe our algorithm. Let us initialize Ω(0) = 0d×d, δ(0) = 0 and ν(0) = 0. At iteration k, the algorithm updates as follows:

  • Compute A = A(Ω(k−1), δ(k−1), ν(k−1)) and b = b(Ω(k−1), δ(k−1), ν(k−1)) using (9). Update δ(k) = argminδ{δAδ − 2δb + λ2|δ|1}.

  • Compute D = D(Ω(k−1), δ(k−1), ν(k−1)) using (10). Update Ω(k) = 𝒮(Ω(k−1) − (1/τ)D, λ1/τ).

  • Update ν(k) = ν(k−1) + ρ[M(Ω(k), δ(k)) − 1].

The iteration stops when max{ρ|Ω(k) − Ω(k−1)|, ρ|δ(k) − δ(k−1)|, |ν(k) − ν(k−1)|/ρ} ≤ ε for some pre-specified precision ε.

This is a modified version of the augmented Lagrangian method, where in the step of updating Ω, we minimize an upper envelope, which is obtained by locally linearizing the augmented Lagrangian.
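The sketch below implements a simplified variant of this scheme: a proximal-gradient (soft-thresholding) step on the penalized augmented Lagrangian followed by a dual update. The gradient formulas are re-derived from (5) rather than copied from (9)–(10), a fixed step size replaces the linearization constant τ, and all names are illustrative.

```python
import numpy as np

def soft(x, a):
    # elementwise soft-thresholding S(x, a) = sign(x) * (|x| - a)_+
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)

def quadro_sketch(mu1, mu2, S1, S2, gamma, kappa, lam1, lam2,
                  rho=1.0, step=1e-3, n_iter=2000):
    """Simplified proximal-gradient sketch for minimizing
    F_rho(Omega, delta, nu) + lam1*|Omega|_1 + lam2*|delta|_1, with dual
    ascent on nu. 'step' must be small enough for the iteration to be stable."""
    d = len(mu1)
    Om, de, nu = np.zeros((d, d)), np.zeros(d), 0.0

    def M(Om, de):
        return (-mu1 @ Om @ mu1 + mu2 @ Om @ mu2
                + 2 * (mu1 - mu2) @ de - np.trace(Om @ (S1 - S2)))

    gM_Om = -np.outer(mu1, mu1) + np.outer(mu2, mu2) - (S1 - S2)  # dM/dOmega
    gM_de = 2 * (mu1 - mu2)                                       # dM/ddelta

    for _ in range(n_iter):
        r1, r2 = Om @ mu1 - de, Om @ mu2 - de
        # gradient of L = L1 + kappa*L2, re-derived from (5)
        gL_Om = (4 * (1 + gamma) * (S1 @ Om @ S1 + kappa * S2 @ Om @ S2)
                 + 2 * gamma * (np.trace(Om @ S1) * S1
                                + kappa * np.trace(Om @ S2) * S2)
                 + 4 * (np.outer(S1 @ r1, mu1) + np.outer(mu1, S1 @ r1))
                 + 4 * kappa * (np.outer(S2 @ r2, mu2) + np.outer(mu2, S2 @ r2)))
        gL_de = -8 * (S1 @ r1 + kappa * S2 @ r2)
        mult = nu + rho * (M(Om, de) - 1.0)        # multiplier of the constraint part
        Om = soft(Om - step * (gL_Om + mult * gM_Om), step * lam1)  # prox = soft-threshold
        de = soft(de - step * (gL_de + mult * gM_de), step * lam2)
        nu = nu + rho * (M(Om, de) - 1.0)          # dual update
    return Om, de
```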

Remark

QUADRO can be extended to folded concave penalties, for example, to SCAD [Fan and Li (2001)] or to adaptive Lasso [Zou (2006)]. Using the Local Linear Approximation algorithm [Fan, Xue and Zou (2014), Zou and Li (2008)], we can solve the SCAD-penalized QUADRO and the adaptive-Lasso-penalized QUADRO by solving L1-penalized QUADRO with multiple-step and one-step iterations, respectively.

4. Estimation of mean and covariance matrix

QUADRO requires estimates of the mean vector and covariance matrix for each class as inputs. We will show in Section 5 that the performance of QUADRO is closely related to the max-norm estimation errors of the mean vectors and covariance matrices. The sample mean and sample covariance matrix work well for Gaussian data. However, when data are from elliptical distributions, they may have inferior performance, as we estimate nonpolynomially many means and variances. In Sections 4.1–4.2, we suggest a robust M-estimator to estimate the mean and a rank-based estimator to estimate the covariance matrix, which are more appropriate for non-Gaussian data. Moreover, in Section 4.3 we discuss how to estimate the model parameter γ when it is unknown.

4.1. Estimation of the mean

Suppose x1, …, xn are i.i.d. samples of a random vector X = (X1, …, Xd)ᵀ from an elliptical distribution ℰ(μ, Σ, g). Let us denote μ = (μ1, …, μd)ᵀ and xi = (xi1, …, xid)ᵀ for i = 1, …, n. We estimate each μj marginally using the data {x1j, …, xnj}.

One possible estimator is the sample median

$$\hat\mu_{M,j}=\operatorname{median}(\{x_{1j},\ldots,x_{nj}\}).$$

It can be shown that, even under heavy-tailed distributions, P(|μ̂_{M,j} − μj| > A√(log(δ⁻¹)/n)) ≤ δ for small δ ∈ (0, 1), where A is a constant determined by the probability density at μj, for each fixed j. This, combined with the union bound, gives |μ̂_M − μ|∞ = O_P(√(log(d)/n)).

Catoni (2012) proposed another M-estimator for the mean of heavy-tailed distributions. It works for distributions whose mean is not necessarily equal to the median, which is essential for estimating covariances of random variables. We denote the diagonal elements of the covariance matrix Σ by σ1², σ2², …, σd², and the off-diagonal elements by σkj for k ≠ j. The estimator μ̂_C = (μ̂_{C,1}, …, μ̂_{C,d})ᵀ is obtained as follows. For a strictly increasing function h: ℝ → ℝ such that −log(1 − y + y²/2) ≤ h(y) ≤ log(1 + y + y²/2), and a value δ ∈ (0, 1) such that n > 2 log(δ⁻¹), we let

$$\alpha_\delta=\Bigl\{\frac{2\log(\delta^{-1})}{n\bigl[v+\bigl(2v\log(\delta^{-1})\bigr)/\bigl(n-2\log(\delta^{-1})\bigr)\bigr]}\Bigr\}^{1/2},$$

where v is an upper bound of max{σ1², …, σd²}. For each j, we define μ̂_{C,j} as the unique value that satisfies Σ_{i=1}^n h(α_δ(x_{ij} − μ̂_{C,j})) = 0. It was shown in Catoni (2012) that P(|μ̂_{C,j} − μj| > √(2v log(δ⁻¹)/[n(1 − 2 log(δ⁻¹)/n)])) ≤ δ when the variance of Xj exists. Therefore, by taking δ = 1/(nd)², |μ̂_C − μ|∞ ≲ √(log(d)/n) with probability at least 1 − (nd)⁻¹, which gives the desired convergence rate.

To implement this estimator, we take h(y) = sgn(y) log(1 + |y| + y²/2). For the choice of v, any value larger than max{σ1², …, σd²} would work in theory. Catoni (2012) introduced a Lepski-type adaptation method to choose v. For simplicity, we take v = 3 max{σ̃1², …, σ̃d²}, where σ̃j² is the sample variance of Xj.
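A minimal sketch of this Catoni-type M-estimator, with h and v chosen as above and the root found numerically, is given below; the confidence parameter δ and all names are illustrative choices.

```python
import numpy as np
from scipy.optimize import brentq

def catoni_mean(x, v=None, delta=None):
    """Catoni-type M-estimator of a mean (a sketch of Section 4.1).
    x: 1-d sample; v: upper bound on the variance (3 * sample variance by
    default, as in the text); delta: confidence parameter (1/n^2 by default)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if v is None:
        v = 3.0 * x.var()
    if delta is None:
        delta = 1.0 / n ** 2
    log_inv = np.log(1.0 / delta)
    alpha = np.sqrt(2.0 * log_inv / (n * (v + 2.0 * v * log_inv / (n - 2.0 * log_inv))))

    def h(y):
        return np.sign(y) * np.log(1.0 + np.abs(y) + y ** 2 / 2.0)

    def psi(m):
        return np.sum(h(alpha * (x - m)))

    lo, hi = x.min() - 1.0, x.max() + 1.0
    return brentq(psi, lo, hi)        # psi is decreasing in m, so the root is unique

# usage on heavy-tailed data
rng = np.random.default_rng(3)
sample = rng.standard_t(df=5, size=200) + 1.0
print(catoni_mean(sample), sample.mean())
```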

The two estimators, the median and the M-estimator, both have a convergence rate of O_P(√(log(d)/n)) in terms of the max-norm error. In our numerical experiments, the M-estimator has better numerical performance, and we stick to this estimator.

4.2. Estimation of the covariance matrix

To estimate the covariance matrix Σ, we estimate the marginal variances {σj², 1 ≤ j ≤ d} and the correlation matrix C separately. Again, we need robust estimates even though the data have fourth moments, as we simultaneously estimate nonpolynomially many covariance parameters.

First, we consider estimating σj². Note that σj² = E(Xj²) − [E(Xj)]². We estimate E(Xj²) and E(Xj) separately. To estimate E(Xj²), we apply the M-estimator described above to the squared data {x_{1j}², …, x_{nj}²} and denote the estimator by η̂_{C,j}. This works because E(Xj⁴) is finite for each j in our setting; in addition, the M-estimator applies to asymmetric distributions. We then define

$$\hat\sigma_{C,j}^2=\max\{\hat\eta_{C,j}-\hat\mu_{C,j}^2,\;\delta_0\},$$

where μ̂_{C,j} is the M-estimator of E(Xj) and δ0 > 0 is a small constant (δ0 < min{σ1², …, σd²}). It is easy to see that when the fourth moments of the Xj are uniformly upper bounded by a constant and n ≥ 4 log(d²), max_{1≤j≤d}|σ̂_{C,j} − σj| = O_P(√(log(d)/n)).

Next, we consider estimating the correlation matrix C. For this, we use Kendall’s tau correlation matrix proposed by Han and Liu (2012). Kendall’s tau correlation coefficients [Kendall (1938)] are defined as

$$\tau_{jk}=\mathbb{P}\bigl((X_j-\tilde X_j)(X_k-\tilde X_k)>0\bigr)-\mathbb{P}\bigl((X_j-\tilde X_j)(X_k-\tilde X_k)<0\bigr),$$

where X̃ is an independent copy of X. For the elliptical family, they are related to the true correlation coefficients through C_jk = sin((π/2)τ_jk). Based on this equality, we first estimate Kendall’s tau correlation coefficients using the rank-based estimators

$$\hat\tau_{jk}=\begin{cases}\dfrac{2}{n(n-1)}\displaystyle\sum_{1\le i<i'\le n}\operatorname{sign}\bigl((x_{ij}-x_{i'j})(x_{ik}-x_{i'k})\bigr), & j\ne k,\\ 1, & j=k,\end{cases}$$

and then estimate the correlation matrix by Ĉ = (Ĉjk) with

$$\hat C_{jk}=\sin\Bigl(\frac{\pi}{2}\hat\tau_{jk}\Bigr).$$

It is shown in Han and Liu (2012) that |Ĉ − C|∞ = O_P(√(log(d)/n)).
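A short sketch of this rank-based correlation estimate, using scipy's Kendall's tau and the sine transform above, is given below; all names and the toy example are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def elliptical_correlation(X):
    """Rank-based correlation estimate: C_hat[j,k] = sin(pi/2 * tau_hat[j,k]),
    with tau_hat the Kendall's tau coefficient (a sketch of Section 4.2)."""
    n, d = X.shape
    C = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau = kendalltau(X[:, j], X[:, k])[0]
            C[j, k] = C[k, j] = np.sin(0.5 * np.pi * tau)
    return C

# usage: recover the correlation of a heavy-tailed elliptical sample
rng = np.random.default_rng(4)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
g = rng.multivariate_normal(np.zeros(2), Sigma, size=1000)
w = rng.chisquare(5, size=1000) / 5.0
X = g / np.sqrt(w)[:, None]          # bivariate t_5 with correlation 0.5
print(elliptical_correlation(X))
```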

Finally, we combine {σ̂_j², 1 ≤ j ≤ d} and Ĉ to get Σ̂. Let

$$\tilde\Sigma_{jk}=\hat\sigma_j\hat\sigma_k\hat C_{jk},\qquad 1\le j,k\le d.$$

It follows immediately that |Σ̃ − Σ|∞ = O_P(√(log(d)/n)). However, this estimator is not necessarily positive semi-definite. To implement QUADRO, we need Σ̂ to be positive semi-definite so that the optimization in (8) is a convex problem. We obtain Σ̂ by projecting Σ̃ onto the cone of positive semi-definite matrices through the convex optimization

$$\hat\Sigma=\operatorname*{argmin}_{A:\,A\ \text{is positive semidefinite}}\;|A-\tilde\Sigma|_\infty. \tag{11}$$

Note that |Σ̂Σ̃| ≤ |ΣΣ̃| by definition. Therefore, ^-^-+-2-=Op(log(d)/n). To compute Σ̂, we note that the optimization problem in (11) can be formulated as the dual of a graphical lasso problem corresponding to the smallest possible tuning parameter that still guarantees a feasible solution [Liu et al. (2012)]. Zhao, Roeder and Liu (2013) provide more algorithmic details.

4.3. Estimation of kurtosis parameter

When the kurtosis parameter γ is unknown, we can estimate it from data. Recall that γ = E(ξ⁴)/[d(d + 2)] − 1. Using decomposition (4) and the properties of U, we have

$$E(\xi^4)=E\bigl\{\bigl[(X-\mu)^\top\Sigma^{-1}(X-\mu)\bigr]^2\bigr\}.$$

Motivated by this equality, we propose the estimator

$$\hat\gamma=\max\Bigl\{\frac{1}{d(d+2)}\cdot\frac{1}{n}\sum_{i=1}^n\bigl[(x_i-\tilde\mu)^\top\tilde\Omega(x_i-\tilde\mu)\bigr]^2-1,\;0\Bigr\},$$

where μ̃ and Ω̃ are estimators of μ and Σ⁻¹, respectively. Maruyama and Seo (2003) considered a similar estimator in low-dimensional settings, where they used the sample mean and sample covariance matrix. In high dimensions, we use robust estimates to guarantee uniform convergence. In particular, we take μ̃ = μ̂_C and Ω̃ = Ω̂_clime, where Ω̂_clime is the CLIME estimator proposed in Cai, Liu and Luo (2011). We could also take the covariance estimator in Section 4.2, but we would then need to establish its sampling property as a precision matrix estimator. We use the CLIME estimator because such a property has already been established by Cai, Liu and Luo (2011). Denote Σ⁻¹ = (Ω_jk)_{d×d}. From simple algebra,

$$|\hat\gamma-\gamma|\lesssim\max_{1\le j,k\le d}\bigl|\tilde\mu_j\tilde\Omega_{jk}\tilde\mu_k-\mu_j\Omega_{jk}\mu_k\bigr|\le C\max\bigl\{|\tilde\mu-\mu|_\infty,\;|\tilde\Omega-\Sigma^{-1}|_\infty\bigr\}.$$

In Section 4.1, we have seen that |μ̂_C − μ|∞ = O_P(√(log(d)/n)). Moreover, Cai, Liu and Luo (2011) showed that |Ω̃ − Σ⁻¹|∞ = ‖Σ⁻¹‖₁ · O_P(√(log(d)/n)) under mild conditions, where ‖·‖₁ is the matrix L1-norm. Therefore, provided that ‖Σ⁻¹‖₁ ≤ C, we immediately have |γ̂ − γ| = O_P(√(log(d)/n)).
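A direct plug-in sketch of the estimator γ̂, assuming robust estimates of the mean and of the precision matrix Σ⁻¹ are available (the paper uses μ̂_C and the CLIME estimator), is given below; all names are illustrative.

```python
import numpy as np

def estimate_gamma(X, mu_hat, Omega_hat):
    """Plug-in estimate of the kurtosis parameter gamma (Section 4.3).
    mu_hat and Omega_hat are estimates of the mean and of the precision
    matrix Sigma^{-1}; any consistent robust estimates can be used here."""
    n, d = X.shape
    centered = X - mu_hat
    quad = np.einsum('ij,jk,ik->i', centered, Omega_hat, centered)
    return max(np.mean(quad ** 2) / (d * (d + 2)) - 1.0, 0.0)
```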

5. Theoretical properties

In this section, we establish an oracle inequality for the Rayleigh quotient of the QUADRO estimates (Ω̂,δ̂). We assume that π and γ are known. For notational simplicity, we set λ1 = λ2 = λ. The results can be easily generalized to the case λ1λ2. Moreover, we drop the symmetry constraint Ω = Ω in all optimization problems involved. This simplifies the expression of the regularity conditions. The analysis with the symmetry constraint is a trivial extension of current analysis.

Recall the definitions of M, L1 and L2 in (5). Let κ = (1 − π)/π and L = L1 + κL2. The Rayleigh quotient of (Ω, δ) is equal to (up to a multiplicative constant)

$$R(\Omega,\delta)=\frac{[M(\Omega,\delta)]^2}{L(\Omega,\delta)}.$$

The QUADRO estimates are

$$(\hat\Omega,\hat\delta)=\operatorname*{argmin}_{(\Omega,\delta):\,\hat M(\Omega,\delta)=1}\;\{\hat L(\Omega,\delta)+\lambda|\Omega|_1+\lambda|\delta|_1\}.$$

We shall compare the Rayleigh quotient of (Ω̂, δ̂) with the Rayleigh quotients of a class of “oracle solutions.” This class includes the one that maximizes the true Rayleigh quotient, which we denote by (Ω*_0, δ*_0). We adopt a class of solutions as the “oracle,” instead of only (Ω*_0, δ*_0), because we want the results to rely not on a sparsity assumption on (Ω*_0, δ*_0) but on a weaker assumption: at least one solution in this class is sparse.

Our theoretical development is technically nontrivial. Conventional oracle inequalities are derived in a setting of minimizing a data-dependent loss without constraint, and the risk function is the expectation of the loss. Here we minimize a data-dependent loss with a data-dependent equality constraint, and the risk function—the Rayleigh quotient—is not equal to the expectation of the loss. A similar setting was considered in Fan, Feng and Tong (2012), where they introduced a data-dependent intermediate solution to deal with such an equality constraint. However, the rate they obtained depends on this intermediate solution, which is very hard to quantify. In contrast, the rate in our results depends purely on the oracle solution. To get rid of the intermediate solution in the rate, we need to carefully quantify its difference from both the QUADRO solution and the oracle solution. The technique is new, and potentially useful for other problems.

5.1. Oracle solutions and the restricted eigenvalue condition

For any λ0 ≥ 0, we define the oracle solution associated with λ0 to be

$$(\Omega^*_{\lambda_0},\delta^*_{\lambda_0})=\operatorname*{argmin}_{(\Omega,\delta):\,M(\Omega,\delta)=1}\;\{L(\Omega,\delta)+\lambda_0|\Omega|_1+\lambda_0|\delta|_1\}. \tag{12}$$

We shall compare the Rayleigh quotient of (Ω̂, δ̂) to that of (Ω*_{λ0}, δ*_{λ0}), for an arbitrary λ0. In particular, when λ0 = 0, the associated oracle solution (which may not be unique) becomes

$$(\Omega^*_0,\delta^*_0)=\operatorname*{argmin}_{(\Omega,\delta):\,M(\Omega,\delta)=1}\;L(\Omega,\delta).$$

It maximizes the true Rayleigh quotient.

Next, we introduce a restricted eigenvalue (RE) condition jointly on Σ1, Σ2, μ1 and μ2. For any matrices A and B, let vec(A) be the vectorization of A obtained by stacking the elements of A column by column, and let A ⊗ B be the Kronecker product of A and B. We define the matrices

$$Q_k=\begin{bmatrix}\bigl(2(1+\gamma)\Sigma_k+4\mu_k\mu_k^\top\bigr)\otimes\Sigma_k+\gamma\operatorname{vec}(\Sigma_k)\operatorname{vec}(\Sigma_k)^\top & -4\,\mu_k\otimes\Sigma_k\\ -4\,\mu_k^\top\otimes\Sigma_k & 4\Sigma_k\end{bmatrix},$$

for k = 1, 2. We note that there are (d² + d) coefficients to decide when maximizing R(Ω, δ): the d² elements of Ω and the d elements of δ. We can stack all these coefficients into a long vector x = x(Ω, δ) in ℝ^{d²+d} defined as

$$x(\Omega,\delta)\equiv[\operatorname{vec}(\Omega)^\top,\delta^\top]^\top. \tag{13}$$

It can be shown that L_k(Ω, δ) = xᵀQ_k x, for k = 1, 2; see Lemma 9.1. Therefore, L(Ω, δ) = xᵀQx, where Q = Q1 + κQ2. Our RE condition is then imposed on the (d² + d) × (d² + d) matrix Q, and hence implicitly on (Σ1, Σ2, μ1, μ2).

We now formally introduce the RE condition. For a set S ⊂ {1, 2, …, d² + d} and a nonnegative value c̄, we define the restricted eigenvalue in the following way:

$$\Theta(S;\bar c)=\min_{v:\,|v_{S^c}|_1\le\bar c|v_S|_1}\;\frac{v^\top Qv}{|v_S|^2}.$$

Generally speaking, Θ(S; c̄) depends on (Σ1, Σ2, μ1, μ2) in a complicated way. For c̄ = 0, the following proposition builds a connection between Θ(S; 0) and (Σ1, Σ2, μ1, μ2). For each S ⊂ {1, 2, …, d² + d}, there exist sets U ⊂ {1, …, d} × {1, …, d} and V ⊂ {1, …, d} such that the support of x(Ω, δ) is S if and only if the support of Ω is U and the support of δ is V. Let

$$U'=\bigcup_{(i,j)\in U}\{i,j\}.$$

Then U ⊂ U′ × U′. The following result is proved in Fan et al. (2014).

Proposition 5.1

For any set S ⊂ {1, …, d² + d}, suppose U′ and V are defined as above. Let Σ̃_k be the submatrix of Σ_k obtained by restricting rows and columns to U′ ∪ V, and μ̃_k be the subvector of μ_k obtained by restricting elements to U′ ∪ V, for k = 1, 2. If there exist constants v1, v2 > 0 such that λ_min(Σ̃_k − v1μ̃_kμ̃_kᵀ) ≥ ½λ_min(Σ̃_k) ≥ v2/2 for k = 1, 2, then

$$\Theta(S,0)\ge(1+\gamma)(1+\kappa)\,v_2\min\Bigl\{v_2,\;\frac{4v_1}{2+v_1(1+\gamma)}\Bigr\}>0.$$

5.2. Oracle inequality on Rayleigh’s quotient

Without loss of generality, suppose max{|Σ_k|∞, |μ_k|∞, k = 1, 2} ≤ 1 and |Σ̂_k − Σ_k|∞ ≤ |Σ_k|∞, |μ̂_k − μ_k|∞ ≤ |μ_k|∞ for k = 1, 2. For any λ0 ≥ 0, let (Ω*_{λ0}, δ*_{λ0}) be the associated oracle solution and S be the support of x*_{λ0} = [vec(Ω*_{λ0})ᵀ, (δ*_{λ0})ᵀ]ᵀ. Let Δ_n = max{|Σ̂_k − Σ_k|∞, |μ̂_k − μ_k|∞, k = 1, 2}. We have the following result for any given estimators, the proof of which we postpone to Section 9.

Theorem 5.1

Given λ0 ≥ 0, let S be the support of x*_{λ0}, s0 = |S| and k0 = max{s0, R(Ω*_{λ0}, δ*_{λ0})}. Suppose that Θ(S, 0) ≥ c0, Θ(S, 3) ≥ a0 and R(Ω*_{λ0}, δ*_{λ0}) ≥ u0 for some positive constants a0, c0 and u0. We assume 4s0Δ_n² ≤ a0c0 and max{s0Δ_n, s0^{1/2}k0^{1/2}λ0} < 1 without loss of generality. Then there exist positive constants C = C(a0, c0, u0) and A = A(a0, c0, u0) such that for any η > 1,

$$\frac{R(\hat\Omega,\hat\delta)}{R(\Omega^*_{\lambda_0},\delta^*_{\lambda_0})}\ge1-A\eta^2\max\{s_0\Delta_n,\;s_0^{1/2}k_0^{1/2}\lambda_0\},$$

by taking λ = Cη max{s0^{1/2}Δ_n, k0^{1/2}λ0}[R(Ω*_{λ0}, δ*_{λ0})]^{−1/2}.

In Theorem 5.1, the rate of convergence has two parts. The term s0Δ_n reflects how the stochastic errors in estimating (Σ1, Σ2, μ1, μ2) affect the Rayleigh quotient. The term s0^{1/2}k0^{1/2}λ0 is an extra term that depends on the oracle solution we aim to use for comparison. In particular, if we compare R(Ω̂, δ̂) with R_max ≡ R(Ω*_0, δ*_0), the population maximum Rayleigh quotient with λ0 = 0, this extra term disappears. If we further use the estimators in Section 4, Δ_n = O_P(√(log(d)/n)). We summarize the result as follows.

Corollary 5.1

Suppose that the condition of Theorem 5.1 holds with λ0 = 0. Then for some positive constants A and C, when λ > Cs0^{1/2}R_max^{−1/2}Δ_n, we have

$$R(\hat\Omega,\hat\delta)\ge(1-As_0\Delta_n)R_{\max}.$$

Furthermore, if the mean vectors and covariance matrices are estimated by the robust methods in Section 4, then when λ > Cs0^{1/2}R_max^{−1/2}√(log(d)/n),

$$R(\hat\Omega,\hat\delta)\ge\bigl(1-As_0\sqrt{\log(d)/n}\bigr)R_{\max},$$

with probability at least 1− (nd)−1.

From Corollary 5.1, when (Ω*_0, δ*_0) is truly sparse, R(Ω̂, δ̂) is close to the population maximum Rayleigh quotient R_max. However, we note that Theorem 5.1 covers more general situations, including cases where (Ω*_0, δ*_0) is not sparse. As long as there exists an “approximately optimal” and sparse solution, that is, for a small λ0 the associated oracle solution (Ω*_{λ0}, δ*_{λ0}) is sparse, Theorem 5.1 guarantees that R(Ω̂, δ̂) is close to R(Ω*_{λ0}, δ*_{λ0}) and hence close to R_max.

Remark

Our results are analogous to oracle inequalities for prediction error in linear regressions; therefore, the condition Θ (S, c̄) is similar to the RE condition in linear regressions [Bickel, Ritov and Tsybakov (2009)]. To recover the support of ( Ω0,δ0), conditions similar to the “irrepresentable condition” for Lasso [Zhao and Yu (2006)] are needed.

6. Application to classification

One important application of QUADRO is high-dimensional classification for elliptically-distributed data. Suppose (Ω̂,δ̂) are the QUADRO estimates. This yields the classification rule

$$\hat h(x)=I\{x^\top\hat\Omega x-2\hat\delta^\top x<c\}.$$

In this section, we first show that for normally distributed data the Rayleigh quotient is a proxy for the classification error, and then derive an analytic choice of c. Compared with many other high-dimensional classification methods, QUADRO produces quadratic boundaries and can handle both non-Gaussian distributions and nonequal covariance matrices.

6.1. Approximation of classification errors

Given (Ω, δ) and a threshold c, a general quadratic rule h(x) = h(x; Ω, δ, c) is defined as

$$h(x;\Omega,\delta,c)=I\{x^\top\Omega x-2x^\top\delta<c\}. \tag{14}$$

We reparametrize c as

c=tM1(Ω,δ)+(1-t)M2(Ω,δ). (15)

Here M_k(Ω, δ) = μ_kᵀΩμ_k − 2μ_kᵀδ + tr(ΩΣ_k) is the mean of Q(X) in class k, for k = 1, 2. After the reparametrization, t is scale-free. As we will see below, in most cases, given Ω and δ, the optimal t that minimizes the classification error takes values in (0, 1).

From now on, we write h(x; Ω, δ, c) = h(x; Ω, δ, t). Let Err(Ω, δ, t) be the classification error of h(·;Ω, δ, t). Due to technical difficulties, we only give results for Gaussian distributions. Suppose X|(Y = 0) ~ 𝒩(μ1, Σ1) and X|(Y = 1) ~ 𝒩(μ2, Σ2). For k = 1, 2, we write

$$\Sigma_k^{1/2}\Omega\Sigma_k^{1/2}=K_kS_kK_k^\top,$$

where S_k is a diagonal matrix containing the nonzero eigenvalues, and the columns of K_k are the corresponding eigenvectors. Let β_k = K_kᵀΣ_k^{1/2}(Ωμ_k − δ). When max{|S_k|, |β_k|, k = 1, 2} is bounded, the following proposition shows that an approximation of Err(Ω, δ, t) is

$$\overline{\mathrm{Err}}(\Omega,\delta,t)\equiv\pi\bar\Phi\Bigl(\frac{(1-t)M(\Omega,\delta)}{\sqrt{L_1(\Omega,\delta)}}\Bigr)+(1-\pi)\bar\Phi\Bigl(\frac{tM(\Omega,\delta)}{\sqrt{L_2(\Omega,\delta)}}\Bigr),$$

where M, L1 and L2 are defined in (5), Φ is the distribution function of a standard normal variable and Φ̄ = 1 − Φ. Its proof is contained in Section 9.

Proposition 6.1

Suppose that max{|Sk|, |βk|, k = 1, 2} ≤ C0 for some constant C0 > 0, and let q be the rank of Ω. Then as d goes to infinity,

$$\mathrm{Err}(\Omega,\delta,t)-\overline{\mathrm{Err}}(\Omega,\delta,t)=\frac{O(q)+o(d)}{[\min\{L_1(\Omega,\delta),L_2(\Omega,\delta)\}]^{3/2}}.$$

In particular, if we consider all (Ω, δ) such that the variance of Q(X; Ω, δ) under each class is lower bounded by c0d^θ for some constants θ > 2/3 and c0 > 0, then we have |Err − Err̄| = o(1).

We now take a closer look at Err̄. Let H(x) = Φ̄(1/√x), which is monotone increasing on (0, ∞). Writing for short M = M1 − M2, M_k = M_k(Ω, δ) and L_k = L_k(Ω, δ) for k = 1, 2, we have

$$\overline{\mathrm{Err}}(\Omega,\delta,t)=\pi H\Bigl(\frac{L_1}{(1-t)^2M^2}\Bigr)+(1-\pi)H\Bigl(\frac{L_2}{t^2M^2}\Bigr).$$

Figure 2 shows that H(·) is nearly linear on an important range. This suggests the following approximation:

$$\overline{\mathrm{Err}}(\Omega,\delta,t)\approx H\Bigl(\frac{\pi L_1}{(1-t)^2M^2}+\frac{(1-\pi)L_2}{t^2M^2}\Bigr)=H\Bigl(\frac{\pi}{(1-t)^2}\cdot\frac{1}{R^{(t)}}\Bigr), \tag{16}$$

where R^{(t)} = R^{(t)}(Ω, δ) is the R(Ω, δ) in (6) corresponding to the κ value

$$\kappa^{(t)}\equiv\frac{1-\pi}{\pi}\cdot\frac{(1-t)^2}{t^2}.$$

Fig. 2. The function H(x) = Φ̄(1/√x).

The approximation in (16) is quantified in the following proposition, which is proved in Fan et al. (2014).

Proposition 6.2

Given (Ω, δ, t), we write for short R_k = R_k(Ω, δ) = [M(Ω, δ)]²/L_k(Ω, δ), for k = 1, 2, and define

$$V_1=V_1(\Omega,\delta,t)=\min\Bigl\{(1-t)^2R_1,\;\frac{1}{(1-t)^2R_1}\Bigr\},\quad V_2=V_2(\Omega,\delta,t)=\min\Bigl\{t^2R_2,\;\frac{1}{t^2R_2}\Bigr\},\quad V=V(\Omega,\delta,t)=\max\{V_1/V_2,\;V_2/V_1\}.$$

Then there exists a constant C > 0 such that

$$\Bigl|\overline{\mathrm{Err}}(\Omega,\delta,t)-H\Bigl(\frac{\pi}{(1-t)^2R^{(t)}(\Omega,\delta)}\Bigr)\Bigr|\le C[\max\{V_1,V_2\}]^{1/2}(V-1)^2.$$

In particular, when t = 1/2,

$$\Bigl|\overline{\mathrm{Err}}(\Omega,\delta,t)-H\Bigl(\frac{\pi}{(1-t)^2R^{(t)}(\Omega,\delta)}\Bigr)\Bigr|\le CR_0^{1/2}\cdot\Bigl(\frac{\Delta_R}{R_0}\Bigr)^2,$$

where R0 = max{min{R1, 1/R1}, min{R2, 1/R2}} and Δ_R = |R1 − R2|.

Note that L1 and L2 are the variances of Q(X) = XᵀΩX − 2Xᵀδ for the two classes, respectively. In cases where |L1 − L2| ≪ min{L1, L2}, Δ_R ≪ R0. Also, R0 is always bounded by 1, and it tends to 0 in many situations, for example, when R1, R2 → ∞, or R1, R2 → 0, or R1 → 0, R2 → ∞. Proposition 6.2 then implies that the approximation in (16) with t = 1/2 is good.

Combining Propositions 6.1 and 6.2, the classification error of a general quadratic rule h(·; Ω, δ, t) is approximately a monotone decreasing transform of the Rayleigh quotient R^{(t)}(Ω, δ), corresponding to κ = κ^{(t)}. In particular, when t = 1/2 [i.e., c = (M1 + M2)/2], R^{(1/2)}(Ω, δ) is exactly the one used in QUADRO. Consequently, if we fix the threshold to be c = (M1 + M2)/2, then the Rayleigh quotient (up to a monotone transform) is a good proxy for the classification error. This explains why Rayleigh-quotient based procedures can be used for classification.

Remark

Even in the region where H(·) is far from linear, so that the upper bound in Proposition 6.2 is not o(1), we can still find a monotone transform of the Rayleigh quotient that upper bounds the classification error. To see this, note that H(x) is concave for x ∈ [1/3, ∞). Therefore, the approximation in (16) becomes an inequality, that is, Err̄(Ω, δ, t) ≤ H(π/[(1 − t)²R^{(t)}]). For x ∈ (0, 1/3), H(x) ≤ 0.1248x. It follows that Err̄(Ω, δ, t) ≤ 0.1248π/[(1 − t)²R^{(t)}].

Remark

In the current setting, the Bayes classifier is a quadratic rule h(x; Ω_B, δ_B, c_B) with Ω_B = Σ1⁻¹ − Σ2⁻¹, δ_B = Σ1⁻¹μ1 − Σ2⁻¹μ2 and c_B = μ2ᵀΣ2⁻¹μ2 − μ1ᵀΣ1⁻¹μ1. Let (Ω*_0, δ*_0) be the population solution of QUADRO when λ = 0. We note that (Ω_B, δ_B) and (Ω*_0, δ*_0) are different: the former minimizes inf_t Err(Ω, δ, t), while the latter minimizes Err̄(Ω, δ, 1/2).

6.2. QUADRO as a classification method

Results in Section 6.1 suggest an analytic method to choose the threshold c, or equivalently t, with given (Ω, δ). Let

$$\hat t\equiv\operatorname*{argmin}_t\Bigl\{\pi\bar\Phi\Bigl(\frac{(1-t)\hat M(\Omega,\delta)}{\sqrt{\hat L_1(\Omega,\delta)}}\Bigr)+(1-\pi)\bar\Phi\Bigl(\frac{t\hat M(\Omega,\delta)}{\sqrt{\hat L_2(\Omega,\delta)}}\Bigr)\Bigr\}, \tag{17}$$

and set

c^=(1-t^)M^1(Ω,δ)+t^M^2(Ω,δ). (18)

Here (17) is a one-dimensional optimization problem and can be solved easily. The resulting QUADRO classification rule is

$$\hat h_{\mathrm{Quad}}(x)=I\{x^\top\hat\Omega x-2x^\top\hat\delta-\hat c<0\}.$$

As a by-product, the method to decide c, described in (17) and (18), can be used in other classification procedures on Gaussian data, such as logistic regression, quadratic discriminant analysis (QDA) and kernel support vector machine, once (Ω̂, δ̂) are given. It provides a fast and purely data-driven way to decide the threshold value in quadratic classification rules. In our numerical experiments, it performs well.
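A minimal sketch of this threshold selection, solving the one-dimensional problem (17) numerically and forming ĉ as in (18), is given below; the inputs are the estimated quantities for a fixed (Ω̂, δ̂), and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def choose_threshold(M_hat, M1_hat, M2_hat, L1_hat, L2_hat, pi):
    """Data-driven threshold of Section 6.2: minimize the plug-in error
    approximation (17) over t, then form c as in (18). All inputs are the
    estimated quantities M, M1, M2, L1, L2 for a given (Omega, delta)."""
    def err_bar(t):
        return (pi * norm.sf((1 - t) * M_hat / np.sqrt(L1_hat))
                + (1 - pi) * norm.sf(t * M_hat / np.sqrt(L2_hat)))
    t_hat = minimize_scalar(err_bar, bounds=(0.0, 1.0), method='bounded').x
    c_hat = (1 - t_hat) * M1_hat + t_hat * M2_hat   # as in (18)
    return t_hat, c_hat
```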

7. Numerical studies

In this section, we investigate the performance of QUADRO in several simulation examples and a real data example. The simulation studies contain both Gaussian models and general elliptical models. We compare QUADRO with several classification-oriented procedures. Performances are evaluated in terms of classification errors.

7.1. Simulations under Gaussian models

Let n1 = n2 = 50 and d = 40. For each given μ1, μ2, Σ1 and Σ2, we generate 100 training datasets independently, each with n1 data from 𝒩(μ1, Σ1) and n2 data from 𝒩(μ2, Σ2). In QUADRO, we input the sample means and sample covariance matrices. We set λ2 = 1 and work with λ1 and r from now on. The two tuning parameters λ1 ≥ 0 and r > 0 are selected in the following way. For various pairs of (λ1, r), we apply QUADRO for each pair and evaluate the classification error via 4000 newly generated testing data; we then choose the (λ1, r) that minimize the classification error.

We compare QUADRO with five classification-oriented procedures:

  • Sparse logistic regression (SLR): We apply sparse logistic regression to the augmented feature space {X_i, 1 ≤ i ≤ d; X_iX_j, 1 ≤ i ≤ j ≤ d}. The resulting estimator then gives a quadratic projection with (Ω, δ, c) determined from the fitted regression coefficients. We implement the sparse logistic regression using the R package glmnet.

  • Linear sparse logistic regression (L-SLR): We apply sparse logistic regression directly to the original feature space {X_i, 1 ≤ i ≤ d}.

  • ROAD [Fan, Feng and Tong (2012)]: This is a linear classification method, which can be formulated equivalently as a modified version of QUADRO by enforcing Ω̂ as the zero matrix and plugging in the pooled sample covariance matrix.

  • Penalized-LDA (P-LDA) [Witten and Tibshirani (2011)]: This is a variant of LDA, which solves an optimization problem with a nonconvex objective and L1 penalties. Also, P-LDA only uses diagonals of the sample covariance matrices.

  • FAIR [Fan and Fan (2008)]: This is a variant of LDA for high-dimensional settings, where screening is adopted to pre-select features and only the diagonals of the sample covariance matrices are used.

To make a fair comparison, the tuning parameters in SLR and L-SLR are selected in the same way as in QUADRO, based on 4000 testing data. ROAD and P-LDA are self-tuned by their packages. The number of features chosen in FAIR is calculated in the way suggested in Fan and Fan (2008).

We consider four models:

  • Model 1: Σ1 is the identity matrix. Σ2 is a diagonal matrix in which the first 10 elements are equal to 1.3 and the rest are equal to 1. μ1 = 0, and μ2 = (0.7, …, 0.7, 0, …, 0) with the first 10 elements of μ2 being nonzero.

  • Model 1L: μ1, μ2 are the same as in model 1, and both Σ1 and Σ2 are the identity matrix.

  • Model 2: Σ1 is a block-diagonal matrix. Its upper left 20 × 20 block is an equal-correlation matrix with ρ = 0.4, and its lower right 20 × 20 block is an identity matrix. Σ2 = (Σ1⁻¹ + I)⁻¹. We also set μ1 = μ2 = 0. In this model, neither Σ1⁻¹ nor Σ2⁻¹ is sparse, but Σ1⁻¹ − Σ2⁻¹ is.

  • Model 3: Σ1, Σ2 and μ1 are the same as in model 2, and μ2 is taken from model 1.

Figure 3 contains the boxplots of the classification errors of all methods. In all four models, QUADRO outperforms the other methods in terms of classification error. In model 1L, Σ1 = Σ2, so the Bayes classifier is linear. In this case, which favors linear methods, QUADRO is still competitive with the best of the linear classifiers. In model 2, μ1 = μ2, so linear methods can do no better than random guessing. Therefore, ROAD, L-SLR, P-LDA and FAIR all perform very poorly. Between the two quadratic methods, QUADRO is significantly better than SLR. In models 1 and 3, μ1 ≠ μ2 and Σ1 ≠ Σ2, so in the Bayes classifier both the “linear” part and the “quadratic” part play important roles. In model 1, both Σ1 and Σ2 are diagonal, and the setting favors methods using only diagonals of sample covariance matrices. As a result, P-LDA and FAIR perform quite well. In model 3, Σ1 and Σ2 are both nondiagonal and nonsparse (but Σ1⁻¹ − Σ2⁻¹ is sparse). We see that the performances of P-LDA and FAIR are unsatisfactory. QUADRO outperforms the other methods in both models 1 and 3.

Fig. 3. Distributions of the minimum classification error based on 100 replications for four normal models. The tuning parameters for QUADRO, SLR and L-SLR are chosen to minimize the classification errors of 4000 testing samples. See Fan et al. (2014) for detailed numerical tables.

Comparing SLR and L-SLR, the former considers a broader class of projections, while the latter is more robust, but neither performs uniformly better than the other. QUADRO, however, performs well in all cases. In terms of Rayleigh quotients, QUADRO also outperforms the other methods in most cases.

7.2. Simulations under elliptical models

Let n1 = n2 = 50 and d = 40. For each given μ1, μ2, Σ1 and Σ2, data are generated from the multivariate t-distribution with 5 degrees of freedom. In QUADRO, we input the robust M-estimators of the means and the rank-based estimators of the covariance matrices described in Section 4. We compare the performance of QUADRO with the five methods considered under the Gaussian settings. We also implement QUADRO with the sample means and sample covariance matrices as inputs; we call this method QUADRO-0 to differentiate it from QUADRO.

We consider three models:

  • Model 4: Here we use same parameters as those in model 1.

  • Model 5: Σ1, μ1 and μ2 are the same as in model 1. Σ2 is the covariance matrix of a fractional white noise process with difference parameter l = 0.2. In other words, Σ2 has the polynomial off-diagonal decay |Σ2(i, j)| = O(|i − j|^{−(1−2l)}).

  • Model 6: Σ1, μ1 and μ2 are the same as in model 1. Σ2 is a matrix such that Σ2(i, j) = 0.6^{|i−j|}; that is, Σ2 has an exponential off-diagonal decay.

Figure 4 contains the boxplots of the classification errors over 100 replications. QUADRO outperforms the other methods in all settings. Also, QUADRO is better than QUADRO-0 (e.g., an average classification error of 0.161 versus 0.173 in model 5), which illustrates the advantage of using the robust estimators of the means and covariance matrices.

Fig. 4. Distributions of the minimum classification error based on 100 replications across different elliptical models. The tuning parameters for QUADRO, SLR and L-SLR are chosen to minimize the classification errors. See Fan et al. (2014) for detailed numerical tables.

7.3. Real data analysis

We apply QUADRO to a large-scale genomic dataset, GPL96, and compare the performance of QUADRO with SLR, L-SLR, ROAD, P-LDA and FAIR. The GPL96 data set contains 20,263 probes and 8124 samples from 309 tissues. Among the tissues, breast tumor has 1142 samples, which is the largest set. We merge the probes from the same gene by averaging them, and finally get 12,679 genes and 8124 samples. We divide all samples into two groups: breast tumor or nonbreast tumor.

First, we look at the classification errors. We replicate our experiment 100 times. Each time, we proceed with the following steps:

  • Randomly choose a training set of 400 samples, 200 from breast tumor and 200 from nonbreast tumor.

  • For each training set, we use half of the samples to compute (Ω̂, δ̂) and the other half to select the tuning parameters by minimizing the classification error.

  • Use the remaining 942 samples from breast tumor and another randomly chosen 942 samples from nonbreast tumor as testing set, and calculate the testing error.

FAIR does not have any tuning parameters, so we use the whole training set to fit the classification frontier and the testing set to calculate the testing error. The results are summarized in Table 1. We see that QUADRO outperforms all the other methods.

Table 1.

Classification errors on the GPL96 dataset for QUADRO, SLR, L-SLR, ROAD, Penalized-LDA and FAIR. Means and standard deviations (in parentheses) over 100 replications are reported

QUADRO SLR L-SLR ROAD Penalized-LDA FAIR
0.014 (0.007) 0.025 (0.007) 0.025 (0.009) 0.016 (0.007) 0.060 (0.011) 0.046 (0.009)

Next, we look at gene selection and focus on the two quadratic methods, QUADRO and SLR. We apply two-fold cross-validation to both QUADRO and SLR. In the results, QUADRO selects 139 genes and SLR selects 128 genes. According to the KEGG database, the genes selected by QUADRO belong to 5 pathways that contain more than two selected genes; correspondingly, the genes selected by SLR belong to 7 such pathways. Using the ClueGo tool [Bindea et al. (2009)], we display the overall KEGG enrichment chart in Figure 5. We see from Figure 5 that both QUADRO and SLR have focal adhesion as their most important functional group. Nevertheless, QUADRO also identifies ECM-receptor interaction as an important functional group. ECM-receptor interaction is a class consisting of a mixture of structural and functional macromolecules, and it plays an important role in maintaining cell and tissue structures and functions. Many studies [Luparello (2013), Wei and Li (2007)] have found evidence that this class is closely related to breast cancer.

Fig. 5. Overall KEGG enrichment chart, using (a) QUADRO; (b) SLR.

Besides the pathway analysis, we also perform a Gene Ontology (GO) enrichment analysis on the genes selected by QUADRO. This analysis was carried out with DAVID Bioinformatics Resources, and the results are shown in Table 2. We present the biological processes with p-values smaller than 10⁻³. According to the table, many biological processes are significantly enriched, and they are related to the previously selected pathways. For instance, the biological process cell adhesion is known to be highly related to cell communication pathways, including focal adhesion and ECM-receptor interaction.

Table 2.

Enrichment analysis results according to Gene Ontology for genes selected by QUADRO. The four columns represent GO ID, GO attribute, number of selected genes having the attribute and their corresponding p-values. We rank them according to p-values in increasing order

GO ID GO attribute No. of genes p-value
0048856 Anatomical structure development 58 3.7E–12
0032502 Developmental process 62 2.9E–10
0048731 System development 52 3.1E–10
0007275 Multicellular organismal development 55 1.8E–8
0001501 Skeletal system development 15 1.3E–6
0032501 Multicellular organismal process 66 1.4E–6
0048513 Organ development 37 1.4E–6
0009653 Anatomical structure morphogenesis 28 8.7E–6
0048869 Cellular developmental process 34 1.9E–5
0030154 Cell differentiation 33 2.1E–5
0007155 Cell adhesion 18 2.4E–4
0022610 Biological adhesion 18 2.2E–4
0042127 Regulation of cell proliferation 19 2.9E–4
0009888 Tissue development 17 3.7E–4
0007398 Ectoderm development 9 4.8E–4
0048518 Positive regulation of biological process 34 5.6E–4
0009605 Response to external stimulus 20 6.3E–4
0043062 Extracellular structure organization 8 7.4E–4
0007399 Nervous system development 22 8.4E–4

8. Conclusions and extensions

QUADRO is a robust sparse high-dimensional classifier that allows us to use differences in covariance matrices to enhance discriminability. It is based on Rayleigh quotient optimization. The variance of quadratic statistics involves all fourth cross-moments, which can create both computational and statistical problems. These problems are avoided by limiting our applications to the elliptical class of distributions. Robust M-estimation and rank-based estimation of correlations allow us to obtain uniform convergence for nonpolynomially many parameters, even when the underlying distributions have only finite fourth moments. This allows us to establish oracle inequalities under relatively weak conditions.

Existing methods in the literature for constructing high-dimensional quadratic classifiers can be divided into two types. One is regularized QDA, where regularized estimates of Σ1⁻¹ and Σ2⁻¹ are plugged into the Bayes classifier; see, for example, Friedman (1989). QUADRO avoids directly estimating inverse covariance matrices, which requires strong assumptions in high dimensions. The other is to combine linear classifiers with the inner-product kernel. The main difference between QUADRO and this approach is the simplification in Proposition 2.1. Due to this simplification, QUADRO avoids incorporating all fourth cross-moments from the data and gains extra statistical efficiency.

QUADRO also has deep connections with the literature of sufficient dimension reduction. Dimension reduction methods, such as SIR [Li (1991)], SAVE [Cook and Weisberg (1991)] and Directional Regression [Li and Wang (2007)], can be equivalently formulated as maximizing some “quotients.” The population objective of SIR is to maximize var{𝔼[f(X|Y)]} subject to var[f(X)] = 1. Using the same constraint, SAVE and directional regression combine var{𝔼[f(X|Y)]} and 𝔼[var(f(X|Y))] in the objective. An interesting observation is that the Rayleigh quotient maximization is equivalent to the population objective of SIR, by noting that the denominator of (1) is equal to 𝔼[var(f(X|Y))] and var[f(X)] = 𝔼[var(f(X|Y))] + var{𝔼[f(X|Y)]}. This is not a coincidence, but due instead to the known equivalence between SIR and LDA in classification [Kent (1991), Li (2000)].

Despite similar population objectives, QUADRO and the aforementioned dimension reduction methods differ in important ways. First, we clarify that even when λ1, λ2 are 0, QUADRO is not the same procedure as SIR combined with the inner-product kernel [Wu (2008)], although they share the same population objective. The difference is that QUADRO utilizes a simplification of the Rayleigh quotient for quadratic f, relying on the assumption that X|Y is always elliptically distributed; moreover, it adopts robust estimators of the mean vectors and covariance matrices. Second, QUADRO is designed for high-dimensional settings, in which neither SIR, SAVE nor Directional Regression can be directly implemented. These methods need either to standardize the original data, X ↦ Σ̂^{−1/2}(X − X̄), or to solve a generalized eigen-decomposition problem Av = λΣ̂v for some matrix A. Both approaches require that the sample covariance matrix be well conditioned, which is often not the case in high dimensions. Possible solutions include regularized SIR [Li and Yin (2008), Zhong et al. (2005)], solving the generalized eigen-decomposition for an undetermined system [Coudret, Liquet and Saracco (2014)] and variable selection approaches [Chen, Zou and Cook (2010), Jiang and Liu (2013)]. However, these methods are not designed for Rayleigh quotient maximization. Third, our assumption on the model is different from that in dimension reduction. We require X|Y to be elliptically distributed, while many dimension reduction methods “implicitly” require X to be marginally elliptically distributed. Neither assumption is stronger than the other. Assuming a conditional elliptical distribution is more natural in classification. In addition, our assumption is used only to simplify the variances of quadratic statistics, whereas the elliptical assumption is critical to SIR.

The Rayleigh optimization framework developed in this paper can be extended to the multi-class case. Suppose the data are drawn independently from a joint distribution of (X, Y), where X ∈ ℝd and Y takes values in {0, 1, …, K − 1}. Definition (1) for the Rayleigh quotient of a projection f : ℝd → ℝ is still well defined. Let πk = ℙ(Y = k), for k = 0, 1, …, K − 1. In this K-class situation,

\[
Rq(f) = \frac{\sum_{0\le k<l\le K-1}\pi_k\pi_l\big\{\mathbb{E}[f(X)\mid Y=k]-\mathbb{E}[f(X)\mid Y=l]\big\}^2}{\sum_{0\le k\le K-1}\pi_k\,\mathrm{var}[f(X)\mid Y=k]}. \tag{19}
\]

Let Mk(f) = 𝔼[f(X)|Y = k] and Lk(f) = var[f(X)|Y = k]. Similar to the two-class case, maximizing Rq(f) is equivalent to solving the following optimization problem:

\[
\min_f\ \sum_{k=0}^{K-1}\pi_kL_k(f) \quad\text{s.t.}\quad \sum_{0\le k<l\le K-1}\pi_k\pi_l\big[M_k(f)-M_l(f)\big]^2 = 1.
\]

However, this is not a convex problem. We consider an approximate Rayleigh-quotient-maximization problem as follows:

\[
\min_f\ \sum_{k=0}^{K-1}\pi_kL_k(f) \quad\text{s.t.}\quad \pi_k\pi_l\,\big|M_k(f)-M_l(f)\big| \ge 1,\qquad 0\le k<l\le K-1.
\]

To solve this problem, we first fix an ordering of M_0(f), …, M_{K−1}(f) so that the absolute values in the constraints can be removed; the problem then becomes convex. Therefore, the whole optimization can be carried out by solving K! convex problems, one for each ordering, and retaining the best solution (see the sketch below). When K is small, the computational cost is reasonable. In practice, more efficient algorithms can be applied to speed up the computation.
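The enumeration step above can be organized as follows. This is only a structural sketch: `solve_with_order` is a hypothetical placeholder for whichever convex solver is used once an ordering is fixed (it is not part of the paper's algorithm); with the ordering fixed, the absolute values drop out and the subproblem is convex.

```python
from itertools import permutations

def solve_multiclass_rayleigh(K, solve_with_order):
    """Enumerate the K! orderings of M_0(f), ..., M_{K-1}(f) and keep the best.

    `solve_with_order(order)` is a hypothetical callback: for the given
    ordering it is assumed to solve the resulting convex program and to
    return a pair (solution, objective_value).
    """
    best_solution, best_value = None, float("inf")
    for order in permutations(range(K)):
        solution, value = solve_with_order(order)
        if value < best_value:
            best_solution, best_value = solution, value
    return best_solution, best_value
```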

9. Proofs

9.1. Proof of Theorem 5.1

We prove the claim by first rewriting optimization problem (8) in vector form. For any (Ω, δ), write $x = [\mathrm{vec}(\Omega)^\top, \delta^\top]^\top$. Let Q be as defined in Section 5, and

\[
q = \big[\mathrm{vec}(\Sigma_2 + \mu_2\mu_2^\top - \Sigma_1 - \mu_1\mu_1^\top)^\top,\; 2(\mu_1-\mu_2)^\top\big]^\top.
\]

We introduce the following lemma which is proved in the supplementary material [Fan et al. (2014)].

Lemma 9.1

M(Ω, δ) = $q^\top x$ and L(Ω, δ) = $x^\top Q x$.
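As a quick sanity check of the mean identity in Lemma 9.1 (a sketch only; the full proof is in the supplement), the vectorization identity $\mathrm{vec}(A)^\top\mathrm{vec}(\Omega)=\mathrm{tr}(A^\top\Omega)$ applied to $q$ as written above gives
\[
q^\top x = \mathrm{tr}\big[(\Sigma_2 + \mu_2\mu_2^\top - \Sigma_1 - \mu_1\mu_1^\top)\Omega\big] + 2(\mu_1-\mu_2)^\top\delta
= \big[\mathrm{tr}(\Omega\Sigma_2) + \mu_2^\top\Omega\mu_2 - 2\delta^\top\mu_2\big] - \big[\mathrm{tr}(\Omega\Sigma_1) + \mu_1^\top\Omega\mu_1 - 2\delta^\top\mu_1\big],
\]
which is the difference between the two class means of the quadratic statistic $Q(X) = X^\top\Omega X - 2\delta^\top X$, since $\mathbb{E}[Q(X)\mid \text{class } k] = \mathrm{tr}(\Omega\Sigma_k) + \mu_k^\top\Omega\mu_k - 2\delta^\top\mu_k$; this matches M(Ω, δ) up to the sign convention fixed earlier in the paper.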

Let $x^*_{\lambda_0} = [\mathrm{vec}(\Omega^*_{\lambda_0})^\top, (\delta^*_{\lambda_0})^\top]^\top$ and $\hat x = [\mathrm{vec}(\hat\Omega)^\top, \hat\delta^\top]^\top$. Using Lemma 9.1,

\[
x^*_{\lambda_0} = \arg\min_{x:\,q^\top x = 1}\big\{x^\top Qx + \lambda_0|x|_1\big\},\qquad
\hat x = \arg\min_{x:\,\hat q^\top x = 1}\big\{x^\top\hat Qx + \lambda|x|_1\big\},
\]

where $\hat Q$ and $\hat q$ are the counterparts of Q and q, respectively, obtained by replacing μ1, μ2, Σ1 and Σ2 with their estimates. Moreover, we have the Rayleigh quotient

\[
R(\Omega,\delta) = R(x) \equiv \frac{(q^\top x)^2}{x^\top Qx}.
\]
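Aside from the formal argument, the empirical problem defining $\hat x$ above is an ℓ1-penalized quadratic program with one linear equality constraint and is easy to prototype. The sketch below uses a generic convex-optimization package (cvxpy), not the linearized augmented Lagrangian method proposed in the paper; the PSD projection of $\hat Q$ is an extra numerical safeguard for the prototype, not part of the paper's procedure.

```python
import numpy as np
import cvxpy as cp

def quadro_vector_form(Q_hat, q_hat, lam):
    """Prototype for  min_x  x' Q_hat x + lam * ||x||_1   s.t.  q_hat' x = 1.

    Illustration only: a generic convex solver is used instead of the
    paper's linearized augmented Lagrangian algorithm.
    """
    # Symmetrize and project Q_hat onto the PSD cone so the objective is convex.
    M = (Q_hat + Q_hat.T) / 2
    w, V = np.linalg.eigh(M)
    Q_half = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T   # symmetric square root of PSD part

    x = cp.Variable(Q_hat.shape[0])
    objective = cp.Minimize(cp.sum_squares(Q_half @ x) + lam * cp.norm1(x))
    cp.Problem(objective, [q_hat @ x == 1]).solve()
    return x.value
```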

In addition, we have the following lemma, which is proved in the supplementary material [Fan et al. (2014)].

Lemma 9.2

$\max\{|\hat Q - Q|_\infty,\, |\hat q - q|_\infty\} \le C_0\max\{|\hat\Sigma_k - \Sigma_k|_\infty,\, |\hat\mu_k - \mu_k|_\infty,\ k = 1, 2\}$ for some constant $C_0 > 0$.

Combining the above results, the claim follows immediately from the following theorem:

Theorem 9.1

For any λ₀ ≥ 0, let S be the support of $x^*_{\lambda_0}$. Suppose Θ(S, 0) ≥ c₀, Θ(S, 3) ≥ a₀ and $R(x^*_{\lambda_0}) \ge u_0$, for positive constants a₀, c₀ and u₀. Let $\Delta_n = \max\{|\hat Q - Q|_\infty, |\hat q - q|_\infty\}$, $s_0 = |S|$ and $k_0 = \max\{s_0, R(x^*_{\lambda_0})\}$. Suppose $4s_0\Delta_n^2 < c_0u_0$ and $\max\{s_0\Delta_n, s_0^{1/2}k_0^{1/2}\lambda_0\} < 1$. Then there exist positive constants C = C(a₀, c₀, u₀) and A = A(a₀, c₀, u₀), such that for any η > 1, by taking $\lambda = C\eta\max\{s_0^{1/2}\Delta_n, k_0^{1/2}\lambda_0\}[R(x^*_{\lambda_0})]^{-1/2}$,

\[
R(\hat x) \ge R(x^*_{\lambda_0})\big[1 - A\eta^2\max\{s_0\Delta_n,\, s_0^{1/2}k_0^{1/2}\lambda_0\}\big].
\]

The main part of the proof is to show Theorem 9.1. Write for short $x^* = x^*_{\lambda_0}$, $R^* = R(x^*)$, $V^* = (R^*)^{-1} = (x^*)^\top Qx^*$ and $\bar V^* = (V^*)^{1/2}$. Let $\alpha_n = \Delta_n|x^*|_0^{1/2}$, $\beta_n = \Delta_n|x^*|_0$ and $T_n(x^*) = \max\{s_0\Delta_n, s_0^{1/2}k_0^{1/2}\lambda_0\}$. We define the quantity

\[
\Gamma(x) = \frac{Qx - (x^\top Qx)\,q}{(x^\top Qx)^{1/2}} \quad\text{for any } x.
\]
In particular, $Qx^* - V^*q = \Gamma(x^*)\bar V^*$, a form used repeatedly below.
Step 1

We introduce $x^1$, a multiple of $x^*$, and use it to bound $|\hat x|_1$.

Let $Q_{SS}$ be the submatrix of Q formed by the rows and columns corresponding to S. Since $\lambda_{\min}(Q_{SS}) = \Theta(S, 0) \ge c_0$, we have $(x^*)^\top Qx^* \ge c_0|x^*|^2$. Using this fact and the Cauchy–Schwarz inequality,

\[
|x^*|_1 \le |x^*|_0^{1/2}\,|x^*| \le c_0^{-1/2}|x^*|_0^{1/2}\,\bar V^*. \tag{20}
\]

It follows that

\[
|\hat q^\top x^* - q^\top x^*| \le |\hat q - q|_\infty\,|x^*|_1 \le c_0^{-1/2}\Delta_n|x^*|_0^{1/2}\bar V^* = c_0^{-1/2}\alpha_n\bar V^*. \tag{21}
\]

Let $t_n = \hat q^\top x^*$. Then (21) says that $|t_n - 1| \le c_0^{-1/2}\alpha_n\bar V^*$. Noting that $\bar V^* = (R^*)^{-1/2} \le u_0^{-1/2}$, we have $|t_n - 1| \le (c_0u_0)^{-1/2}s_0^{1/2}\Delta_n < 1/2$ by assumption. In particular, $t_n > 1/2 > 0$. Let

\[
x^1 = t_n^{-1}x^*.
\]

Then $\hat q^\top x^1 = 1$. From the definition of $\hat x$,

\[
\hat x^\top\hat Q\hat x + \lambda|\hat x|_1 \le (x^1)^\top\hat Qx^1 + \lambda|x^1|_1. \tag{22}
\]

By direct calculation,

\[
\begin{aligned}
\hat x^\top\hat Q\hat x - (x^1)^\top\hat Qx^1
&= (\hat x - x^1)^\top\hat Q(\hat x - x^1) + 2(\hat x - x^1)^\top\hat Qx^1\\
&= (\hat x - x^1)^\top\hat Q(\hat x - x^1) + 2(\hat x - x^1)^\top(\hat Qx^1 - V^*\hat q)\\
&\ge 2(\hat x - x^1)^\top(\hat Qx^1 - V^*\hat q),
\end{aligned} \tag{23}
\]

where the second equality is due to $\hat q^\top\hat x = \hat q^\top x^1 = 1$. We aim to bound $|\hat Qx^1 - V^*\hat q|_\infty$. The following lemma is proved in the supplementary material [Fan et al. (2014)].

Lemma 9.3

When Θ(S, 0) ≥ c₀, there exists a positive constant C₁ = C₁(c₀) such that $|\Gamma(x^*_{\lambda_0})|_\infty \le C_1\lambda_0\big[\max\{s_0, R(x^*_{\lambda_0})\}\big]^{1/2}$ for any λ₀ ≥ 0.

Since $x^1 = t_n^{-1}x^*$ and $t_n^{-1} < 2$,

\[
\begin{aligned}
|\hat Qx^1 - V^*\hat q|_\infty
&\le t_n^{-1}|\hat Qx^* - V^*\hat q|_\infty + V^*|t_n^{-1} - 1|\,|\hat q|_\infty\\
&\le 2\big(|Qx^* - V^*q|_\infty + |\hat Q - Q|_\infty|x^*|_1 + V^*|\hat q - q|_\infty + V^*|t_n - 1|\,|\hat q|_\infty\big)\\
&\le 2\big[|\Gamma(x^*)|_\infty\bar V^* + c_0^{-1/2}\alpha_n\bar V^* + u_0^{-1/2}\Delta_n\bar V^* + |\hat q|_\infty c_0^{-1/2}u_0^{-1}\alpha_n\bar V^*\big]\\
&\le C_2\big(\lambda_0k_0^{1/2} + s_0^{1/2}\Delta_n\big)\bar V^*.
\end{aligned}
\]

Here the third inequality follows from (20)–(21) and $V^* = \bar V^*(R^*)^{-1/2} \le u_0^{-1/2}\bar V^*$. The last inequality is obtained as follows: from Lemma 9.2, we know that $|\hat q|_\infty \le |q|_\infty + |\hat q - q|_\infty \le 2C_0$ (see also the assumptions at the beginning of Section 5.2); we also use Lemma 9.3 and $\alpha_n\bar V^* \le u_0^{-1/2}s_0^{1/2}\Delta_n$. By letting $C = 8C_2$, the choice of $\lambda = C\eta\max\{s_0^{1/2}\Delta_n, k_0^{1/2}\lambda_0\}\bar V^*$ for η > 1 ensures that

\[
|\hat Qx^1 - V^*\hat q|_\infty \le \lambda/4.
\]

Plugging this result into (23) gives

\[
\hat x^\top\hat Q\hat x - (x^1)^\top\hat Qx^1 \ge -\frac{\lambda}{2}|\hat x - x^1|_1. \tag{24}
\]

Combining (22) and (24) gives

\[
\lambda|\hat x|_1 - \frac{\lambda}{2}|\hat x - x^1|_1 \le \lambda|x^1|_1. \tag{25}
\]

First, since $|\hat x|_1 = |\hat x_S|_1 + |\hat x_{S^c}|_1 \ge |x^1_S|_1 - |\hat x_S - x^1_S|_1 + |\hat x_{S^c}|_1$ and $|\hat x - x^1|_1 = |\hat x_S - x^1_S|_1 + |\hat x_{S^c}|_1$, we immediately see from (25) that

\[
|(\hat x - x^1)_{S^c}|_1 \le 3|(\hat x - x^1)_S|_1. \tag{26}
\]

Second, note that $|\hat x - x^1|_1 \le |\hat x|_1 + |x^1|_1$. Plugging this into (25) gives

\[
|\hat x|_1 \le 3|x^1|_1 = 3t_n^{-1}|x^*|_1 \le 6c_0^{-1/2}|x^*|_0^{1/2}\bar V^*. \tag{27}
\]
Step 2

We use (26)–(27) to derive an upper bound for $\hat x^\top Q\hat x - (x^1)^\top Qx^1$.

Note that

\[
\begin{aligned}
\hat x^\top\hat Q\hat x - (x^1)^\top\hat Qx^1
&\ge \hat x^\top Q\hat x - (x^1)^\top Qx^1 - \big(|\hat x^\top\hat Q\hat x - \hat x^\top Q\hat x| + |(x^1)^\top\hat Qx^1 - (x^1)^\top Qx^1|\big)\\
&\ge \hat x^\top Q\hat x - (x^1)^\top Qx^1 - \big(|\hat Q - Q|_\infty|\hat x|_1^2 + |\hat Q - Q|_\infty|x^1|_1^2\big)\\
&\ge \hat x^\top Q\hat x - (x^1)^\top Qx^1 - 10t_n^{-2}|\hat Q - Q|_\infty|x^*|_1^2\\
&\ge \hat x^\top Q\hat x - (x^1)^\top Qx^1 - C_3\beta_nV^*,
\end{aligned} \tag{28}
\]

where the last two inequalities are direct results of (27). Combining (22) and (28),

\[
\hat x^\top Q\hat x + \lambda|\hat x|_1 \le (x^1)^\top Qx^1 + \lambda|x^1|_1 + C_3\beta_nV^*. \tag{29}
\]

Similar to (23), we have

\[
\hat x^\top Q\hat x - (x^1)^\top Qx^1 = (\hat x - x^1)^\top Q(\hat x - x^1) + 2(\hat x - x^1)^\top(Qx^1 - V^*\hat q), \tag{30}
\]

where

\[
\begin{aligned}
|Qx^1 - V^*\hat q|_\infty
&\le t_n^{-1}\big(|Qx^* - V^*q|_\infty + V^*|\hat q - q|_\infty\big) + V^*|t_n^{-1} - 1|\,|\hat q|_\infty\\
&\le 2\big[|\Gamma(x^*)|_\infty\bar V^* + u_0^{-1/2}\Delta_n\bar V^* + |\hat q|_\infty c_0^{-1/2}u_0^{-1}\alpha_n\bar V^*\big] \le \lambda/4.
\end{aligned}
\]

It follows that

\[
\hat x^\top Q\hat x - (x^1)^\top Qx^1 \ge (\hat x - x^1)^\top Q(\hat x - x^1) - \frac{\lambda}{2}|\hat x - x^1|_1.
\]

Plugging this into (29), we obtain

\[
(\hat x - x^1)^\top Q(\hat x - x^1) + \lambda|\hat x|_1 - \frac{\lambda}{2}|\hat x - x^1|_1 \le \lambda|x^1|_1 + C_3\beta_nV^*. \tag{31}
\]

We can rewrite the second and third terms on the left-hand side of (31) as

\[
\lambda|\hat x_S|_1 - \frac{\lambda}{2}|\hat x_S - x^1_S|_1 + \frac{\lambda}{2}|\hat x_{S^c}|_1.
\]

Plugging this into (31) and using the triangle inequality $|x^1_S|_1 - |\hat x_S|_1 \le |\hat x_S - x^1_S|_1$, we find that

\[
(\hat x - x^1)^\top Q(\hat x - x^1) + \frac{\lambda}{2}|\hat x_{S^c}|_1 \le \frac{3\lambda}{2}|\hat x_S - x^1_S|_1 + C_3\beta_nV^*.
\]

We drop the term $\frac{\lambda}{2}|\hat x_{S^c}|_1$ on the left-hand side and apply the Cauchy–Schwarz inequality to the term $|\hat x_S - x^1_S|_1$. This gives

\[
(\hat x - x^1)^\top Q(\hat x - x^1) \le \frac{3\lambda}{2}|x^1|_0^{1/2}\,|\hat x_S - x^1_S| + C_3\beta_nV^*. \tag{32}
\]

Since (26) holds, by the definition of Θ(S, 3),

\[
(\hat x - x^1)^\top Q(\hat x - x^1) \ge a_0|\hat x_S - x^1_S|^2.
\]

We write temporarily $Y = (\hat x - x^1)^\top Q(\hat x - x^1)$ and $b = C_3\beta_nV^*$. Combining these with (32),

\[
Y \le \frac{3\lambda}{2\sqrt{a_0}}\,|x^1|_0^{1/2}\sqrt{Y} + b.
\]

Note that when $u \le a\sqrt{u} + b$, we have $(\sqrt{u} - a/2)^2 \le b + a^2/4$, and hence $u \le 2[a^2/4 + (\sqrt{u} - a/2)^2] \le a^2 + 2b$. As a result, the above inequality implies

\[
(\hat x - x^1)^\top Q(\hat x - x^1) \le \frac{9\lambda^2}{4a_0}|x^*|_0 + 2C_3\beta_nV^*, \tag{33}
\]

where we have used $|x^1|_0 = |x^*|_0$. Furthermore, (30) yields that

\[
\begin{aligned}
\hat x^\top Q\hat x - (x^1)^\top Qx^1
&\le (\hat x - x^1)^\top Q(\hat x - x^1) + \frac{\lambda}{2}|\hat x - x^1|_1\\
&\le (\hat x - x^1)^\top Q(\hat x - x^1) + 2\lambda|x^1|_1\\
&\le (\hat x - x^1)^\top Q(\hat x - x^1) + 4c_0^{-1/2}\bar V^*\lambda|x^*|_0^{1/2},
\end{aligned} \tag{34}
\]

where the second inequality is due to $|\hat x - x^1|_1 \le |\hat x|_1 + |x^1|_1 \le 4|x^1|_1$, and the last inequality is from (27). Recall that $\lambda = C\eta\max\{k_0^{1/2}\lambda_0, s_0^{1/2}\Delta_n\}\bar V^*$. As a result,

\[
\lambda|x^*|_0^{1/2} = C\eta\max\{k_0^{1/2}s_0^{1/2}\lambda_0,\, s_0\Delta_n\}\bar V^* = C\eta T_n(x^*)\bar V^*. \tag{35}
\]

Combining (33), (34) and (35) gives

\[
\hat x^\top Q\hat x - (x^1)^\top Qx^1 \le \frac{9C^2}{4a_0}\eta^2[T_n(x^*)]^2V^* + 4Cc_0^{-1/2}\eta T_n(x^*)V^* + 2C_3\beta_nV^* \le C_4\eta^2T_n(x^*)V^*. \tag{36}
\]
Step 3

We use (36) to give a lower bound for $R(\hat x)$.

Note that $R(\hat x) = (q^\top\hat x)^2/(\hat x^\top Q\hat x)$. First, we look at the denominator $\hat x^\top Q\hat x$. From (21) and the fact that $t_n > 1/2$,

\[
|t_n^{-2} - 1| = |t_n - 1|(1 + t_n^{-1})\,t_n^{-1} \le 6c_0^{-1/2}\alpha_n\bar V^*.
\]

Combining with (36) and noting that (x1)Qx1=tn-2(x)Qx=tn-2V, we have

\[
\begin{aligned}
\hat x^\top Q\hat x &\le \big[t_n^{-2} + C_4\eta^2T_n(x^*)\big](x^*)^\top Qx^*
\le \big[1 + 6c_0^{-1/2}\alpha_n\bar V^* + C_4\eta^2T_n(x^*)\big](x^*)^\top Qx^*\\
&\le \big[1 + C_5\eta^2T_n(x^*)\big](x^*)^\top Qx^*.
\end{aligned} \tag{37}
\]

Second, we look at the numerator $q^\top\hat x$. Since $\hat q^\top\hat x = 1$, by (27),

\[
|q^\top\hat x - 1| \le |\hat q - q|_\infty|\hat x|_1 \le 6c_0^{-1/2}\alpha_n\bar V^* \le C_6T_n(x^*). \tag{38}
\]

Combining (37) and (38) gives

\[
R(\hat x) = \frac{(q^\top\hat x)^2}{\hat x^\top Q\hat x}
\ge \frac{[1 - C_6T_n(x^*)]^2}{1 + C_5\eta^2T_n(x^*)}\cdot\frac{1}{(x^*)^\top Qx^*}
\ge \big[1 - A\eta^2T_n(x^*)\big]\frac{(q^\top x^*)^2}{(x^*)^\top Qx^*}
= \big[1 - A\eta^2T_n(x^*)\big]R(x^*), \tag{39}
\]

where A = A(a0, c0, u0) is a positive constant.

9.2. Proof of Proposition 6.1

Denote by ℙ(i|j) the probability that a new sample from class j is misclassified to class i, for i, j ∈ {1, 2} and i ≠ j. The classification error of h is

\[
\mathrm{err}(h) = \pi\,\mathbb{P}(2|1) + (1 - \pi)\,\mathbb{P}(1|2).
\]

Write Mk = Mk(Ω, δ) and Lk = Lk(Ω, δ) for short. It suffices to show that

\[
\mathbb{P}(2|1) = \bar\Phi\Big(\frac{(1-t)M}{\sqrt{L_1}}\Big) + \frac{O(q) + o(d)}{L_1^{3/2}},\qquad
\mathbb{P}(1|2) = \bar\Phi\Big(\frac{tM}{\sqrt{L_2}}\Big) + \frac{O(q) + o(d)}{L_2^{3/2}}.
\]

We only consider ℙ(2|1). The analysis of ℙ(1|2) is similar. Suppose $X \mid \{\text{class } 1\} \overset{d}{=} Z \sim \mathcal{N}(\mu_1, \Sigma_1)$. Define

\[
Y = \Sigma_1^{-1/2}(Z - \mu_1),
\]

so that $Y \sim \mathcal{N}(0, I_d)$ and $Z = \Sigma_1^{1/2}Y + \mu_1$. Note that

\[
\begin{aligned}
Q(Z) &= (\Sigma_1^{1/2}Y + \mu_1)^\top\Omega(\Sigma_1^{1/2}Y + \mu_1) - 2(\Sigma_1^{1/2}Y + \mu_1)^\top\delta\\
&= Y^\top\Sigma_1^{1/2}\Omega\Sigma_1^{1/2}Y + 2Y^\top\Sigma_1^{1/2}(\Omega\mu_1 - \delta) + \mu_1^\top\Omega\mu_1 - 2\mu_1^\top\delta.
\end{aligned} \tag{40}
\]

Recall that $\Sigma_1^{1/2}\Omega\Sigma_1^{1/2} = K_1S_1K_1^\top$ is the eigen-decomposition excluding the zero eigenvalues. Since Σ₁ has full rank and the rank of Ω is q, the rank of $\Sigma_1^{1/2}\Omega\Sigma_1^{1/2}$ is q. Therefore, $S_1$ is a q × q diagonal matrix, and $K_1$ is a d × q matrix satisfying $K_1^\top K_1 = I_q$. Let $\tilde K_1$ be any d × (d − q) matrix such that $K = [K_1, \tilde K_1]$ is a d × d orthogonal matrix. Since $I_d = KK^\top = K_1K_1^\top + \tilde K_1\tilde K_1^\top$, we have

\[
Y^\top\Sigma_1^{1/2}(\Omega\mu_1 - \delta) = Y^\top K_1K_1^\top\Sigma_1^{1/2}(\Omega\mu_1 - \delta) + Y^\top\tilde K_1\tilde K_1^\top\Sigma_1^{1/2}(\Omega\mu_1 - \delta).
\]

We recall that $\beta_1 = K_1^\top\Sigma_1^{1/2}(\Omega\mu_1 - \delta)$. Let $\tilde\beta_1 = \tilde K_1^\top\Sigma_1^{1/2}(\Omega\mu_1 - \delta)$, $W = K_1^\top Y$, $\tilde W = \tilde K_1^\top Y$ and $c_1 = \mu_1^\top\Omega\mu_1 - 2\mu_1^\top\delta$. It follows from (40) that

\[
\begin{aligned}
Q(Z) &= Y^\top K_1S_1K_1^\top Y + 2Y^\top K_1\beta_1 + 2Y^\top\tilde K_1\tilde\beta_1 + c_1\\
&= W^\top S_1W + 2W^\top\beta_1 + 2\tilde W^\top\tilde\beta_1 + c_1 \equiv \bar Q_1(W) + \bar F_1(\tilde W) + c_1,
\end{aligned}
\]

where 1(w) = wS1w +2wβ1 and 1(w) = 2wβ̃1. Therefore,

\[
\mathbb{P}(2|1) = \mathbb{P}\big(Q(Z) > c\big) = \mathbb{P}\big(\bar Q_1(W) + \bar F_1(\tilde W) > c - c_1\big).
\]

We write for convenience $W = (W_1, \ldots, W_q)^\top$, $\tilde W = (W_{q+1}, \ldots, W_d)^\top$, $\beta_1 = (\beta_{11}, \ldots, \beta_{1q})^\top$ and $\tilde\beta_1 = (\beta_{1(q+1)}, \ldots, \beta_{1d})^\top$, and notice that $W_i \overset{\text{i.i.d.}}{\sim} N(0,1)$ for 1 ≤ i ≤ d. Moreover,

\[
\bar Q_1(W) + \bar F_1(\tilde W) = \sum_{i=1}^q\big(s_iW_i^2 + 2W_i\beta_{1i}\big) + \sum_{i=q+1}^d 2W_i\beta_{1i} \equiv \sum_{i=1}^d\xi_i, \tag{41}
\]

where $\xi_i = s_iW_i^2\,\mathbb{1}\{1 \le i \le q\} + 2W_i\beta_{1i}$, for 1 ≤ i ≤ d. The right-hand side of (41) is a sum of independent variables, so we can apply the Edgeworth expansion to its distribution function, as described in detail below.

Note that $\mathbb{E}(W_i^2) = 1$, $\mathbb{E}(W_i^4) = 3$, $\mathbb{E}(W_i^6) = 15$ and $\mathbb{E}(W_i^{2j+1}) = 0$ for nonnegative integers j. By direct calculation,

\[
\begin{aligned}
\eta_1 &\equiv \sum_{i=1}^d\mathbb{E}(\xi_i) = \sum_{i=1}^q s_i = \mathrm{tr}(S_1) = \mathrm{tr}(\Omega\Sigma_1),\\
\eta_2 &\equiv \sum_{i=1}^d\mathrm{var}(\xi_i) = \sum_{i=1}^q\big(2s_i^2 + 4\beta_{1i}^2\big) + \sum_{i=q+1}^d 4\beta_{1i}^2 = 2\,\mathrm{tr}(S_1^2) + 4|\beta_1|^2 + 4|\tilde\beta_1|^2\\
&= 2\,\mathrm{tr}(\Omega\Sigma_1\Omega\Sigma_1) + 4(\Omega\mu_1 - \delta)^\top\Sigma_1(\Omega\mu_1 - \delta),\\
\eta_3 &\equiv \sum_{i=1}^d\mathbb{E}\big[\xi_i - \mathbb{E}(\xi_i)\big]^3 = \sum_{i=1}^d\big(8s_i^3 + 24\beta_{1i}^2s_i\big) = 8\,\mathrm{tr}(S_1^3) + 24\beta_1^\top S_1\beta_1\\
&= 8\,\mathrm{tr}\big[(\Omega\Sigma_1)^3\big] + 24(\Omega\mu_1 - \delta)^\top\Sigma_1\Omega\Sigma_1(\Omega\mu_1 - \delta).
\end{aligned}
\]
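For completeness, here is the direct calculation behind the third-moment formula, using only the standard normal moments listed above. Writing $\xi_i - \mathbb{E}\xi_i = s_i(W_i^2 - 1) + 2\beta_{1i}W_i$ (with $s_i = 0$ for $i > q$) and expanding the cube,
\[
\mathbb{E}\big[s(W^2-1) + 2\beta W\big]^3 = s^3\,\mathbb{E}(W^2-1)^3 + 12s\beta^2\,\mathbb{E}\big[(W^2-1)W^2\big] = 8s^3 + 24s\beta^2,
\]
because the cross terms involving odd powers of $W$ vanish, $\mathbb{E}(W^2-1)^3 = 15 - 9 + 3 - 1 = 8$ and $\mathbb{E}[(W^2-1)W^2] = 3 - 1 = 2$. The variance formula $\mathrm{var}(\xi_i) = 2s_i^2 + 4\beta_{1i}^2$ follows in the same way.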

Notice that $\mathbb{E}\big(|\xi_i - \mathbb{E}(\xi_i)|^3\big) < \infty$, as $\max\{|s_i|, |\beta_{1i}|, 1 \le i \le d\} \le C_0$ by assumption. Using results from Chapter XVI of Feller (1966), we know

\[
\begin{aligned}
\mathbb{P}(2|1) &= \mathbb{P}\Big(\sum_{i=1}^d\xi_i > c - c_1\Big)
= \mathbb{P}\Bigg(\frac{\sum_{i=1}^d\xi_i - \mathbb{E}\big(\sum_{i=1}^d\xi_i\big)}{\sqrt{\sum_{i=1}^d\mathrm{var}(\xi_i)}} > \frac{c - c_1 - \mathbb{E}\big(\sum_{i=1}^d\xi_i\big)}{\sqrt{\sum_{i=1}^d\mathrm{var}(\xi_i)}}\Bigg)\\
&= \bar\Phi\Big(\frac{c - c_1 - \eta_1}{\sqrt{\eta_2}}\Big) + \frac{\eta_3\big(1 - (c_1 - c + \eta_1)^2/\eta_2\big)}{6\eta_2^{3/2}}\,\varphi\Big(\frac{c_1 - c + \eta_1}{\sqrt{\eta_2}}\Big) + o\Big(\frac{d}{\eta_2^{3/2}}\Big),
\end{aligned}
\]

where φ is the probability density function of the standard normal distribution. It is observed that η2 = L1(Ω, δ) and c1 + η1 = M1(Ω, δ). Also, c = tM1(Ω, δ) + (1 − t)M2(Ω, δ). As a result,

\[
\frac{c - c_1 - \eta_1}{\sqrt{\eta_2}} = \frac{[tM_1 + (1-t)M_2] - M_1}{\sqrt{L_1}} = \frac{(1-t)(M_2 - M_1)}{\sqrt{L_1}} = \frac{(1-t)M}{\sqrt{L_1}}.
\]

Plugging this into the expression for ℙ(2|1), the first term is $\bar\Phi\big((1-t)M/\sqrt{L_1}\big)$. Moreover, since the function $(1 - u^2)\varphi(u)$ is uniformly bounded, the second term is $O\big(\eta_3\eta_2^{-3/2}\big)$. Here η₂ = L₁, and η₃ = O(q) since the $s_i$'s and $\beta_{1i}$'s are bounded in magnitude. Combining the above gives

\[
\mathbb{P}(2|1) = \bar\Phi\Big(\frac{(1-t)M}{\sqrt{L_1}}\Big) + \frac{O(q) + o(d)}{L_1^{3/2}}.
\]

The proof is now complete.
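As a side note outside the formal argument, the leading normal term of the expansion above is easy to check by simulation. The following minimal sketch uses made-up values of $s_i$ and $\beta_{1i}$ (purely illustrative assumptions, not taken from the paper) and compares the Monte Carlo tail probability of $\sum_i\xi_i$ with $\bar\Phi\big((c - c_1 - \eta_1)/\sqrt{\eta_2}\big)$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical small example: d coordinates, the first q carrying a quadratic term.
d, q = 50, 5
s = np.zeros(d)
s[:q] = rng.uniform(0.5, 1.5, size=q)      # s_i, zero for i > q
beta = rng.uniform(-0.3, 0.3, size=d)      # beta_{1i}

eta1 = s.sum()                                  # sum of E(xi_i)
eta2 = 2 * (s**2).sum() + 4 * (beta**2).sum()   # sum of var(xi_i)

threshold = eta1 + 1.5 * np.sqrt(eta2)          # plays the role of c - c_1

# Monte Carlo estimate of P(sum_i xi_i > threshold).
W = rng.standard_normal((200_000, d))
xi_sum = (s * W**2 + 2 * beta * W).sum(axis=1)
mc_tail = (xi_sum > threshold).mean()

# Leading normal term of the Edgeworth expansion.
normal_tail = norm.sf((threshold - eta1) / np.sqrt(eta2))

print(f"Monte Carlo tail:    {mc_tail:.4f}")
print(f"Leading normal term: {normal_tail:.4f}")
```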

Footnotes

SUPPLEMENTARY MATERIAL

Supplement to “QUADRO: A supervised dimension reduction method via Rayleigh quotient optimization” (DOI: 10.1214/14-AOS1307SUPP;.pdf). Owing to space constraints, numerical tables for simulation and some of the technical proofs are relegated to a supplementary document. It contains proofs of Propositions 2.1, 5.1 and 6.2.

References

1. Bickel PJ, Ritov Y, Tsybakov AB. Simultaneous analysis of lasso and Dantzig selector. Ann Statist. 2009;37:1705–1732. MR2533469.
2. Bindea G, Mlecnik B, Hackl H, Charoentong P, Tosolini M, Kirilovsky A, Fridman WH, Pagès F, Trajanoski Z, Galon J. ClueGO: A Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks. Bioinformatics. 2009;25:1091–1093. doi: 10.1093/bioinformatics/btp101.
3. Cai T, Liu W. A direct estimation approach to sparse linear discriminant analysis. J Amer Statist Assoc. 2011;106:1566–1577. MR2896857.
4. Cai T, Liu W, Luo X. A constrained ℓ1 minimization approach to sparse precision matrix estimation. J Amer Statist Assoc. 2011;106:594–607. MR2847973.
5. Catoni O. Challenging the empirical mean and empirical variance: A deviation study. Ann Inst Henri Poincaré Probab Stat. 2012;48:1148–1185. MR3052407.
6. Chen X, Zou C, Cook RD. Coordinate-independent sparse sufficient dimension reduction and variable selection. Ann Statist. 2010;38:3696–3723. MR2766865.
7. Cook RD, Weisberg S. Comment on “Sliced inverse regression for dimension reduction.” J Amer Statist Assoc. 1991;86:328–332.
8. Coudret R, Liquet B, Saracco J. Comparison of sliced inverse regression approaches for underdetermined cases. J SFdS. 2014;155:72–96. MR3211755.
9. Fan J, Fan Y. High-dimensional classification using features annealed independence rules. Ann Statist. 2008;36:2605–2637. doi: 10.1214/07-AOS504. MR2485009.
10. Fan J, Feng Y, Tong X. A road to classification in high dimensional space. J Roy Statist Soc B. 2012;74:745–771. doi: 10.1111/j.1467-9868.2012.01029.x. MR2965958.
11. Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Amer Statist Assoc. 2001;96:1348–1360. MR1946581.
12. Fan J, Xue L, Zou H. Strong oracle optimality of folded concave penalized estimation. Ann Statist. 2014;42:819–849. doi: 10.1214/13-AOS1198. MR3210988.
13. Fan J, Ke ZT, Liu H, Xia L. Supplement to “QUADRO: A supervised dimension reduction method via Rayleigh quotient optimization.” 2015. doi: 10.1214/14-AOS1307SUPP.
14. Feller W. An Introduction to Probability Theory and Its Applications. II. Wiley; New York: 1966.
15. Fisher RA. The use of multiple measurements in taxonomic problems. Annals of Eugenics. 1936;7:179–188.
16. Friedman JH. Regularized discriminant analysis. J Amer Statist Assoc. 1989;84:165–175. MR0999675.
17. Guo Y, Hastie T, Tibshirani R. Regularized discriminant analysis and its application in microarrays. Biostatistics. 2005;1:1–18. doi: 10.1093/biostatistics/kxj035.
18. Han F, Liu H. Transelliptical component analysis. Adv Neural Inf Process Syst. 2012;25:368–376.
19. Han F, Zhao T, Liu H. CODA: High dimensional copula discriminant analysis. J Mach Learn Res. 2013;14:629–671. MR3033343.
20. Jiang B, Liu JS. Sliced inverse regression with variable selection and interaction detection. 2013. Preprint. Available at arXiv:1304.4056.
21. Kendall MG. A new measure of rank correlation. Biometrika. 1938;30:81–93.
22. Kent JT. Discussion of Li (1991). J Amer Statist Assoc. 1991;86:336–337.
23. Li KC. Sliced inverse regression for dimension reduction. J Amer Statist Assoc. 1991;86:316–342. MR1137117.
24. Li K-C. High dimensional data analysis via the SIR/PHD approach. Lecture notes, Dept. Statistics, UCLA, Los Angeles, CA; 2000. Available at http://www.stat.ucla.edu/~kcli/sir-PHD.pdf.
25. Li B, Wang S. On directional regression for dimension reduction. J Amer Statist Assoc. 2007;102:997–1008. MR2354409.
26. Li L, Yin X. Sliced inverse regression with regularizations. Biometrics. 2008;64:124–131. doi: 10.1111/j.1541-0420.2007.00836.x. MR2422826.
27. Liu H, Han F, Yuan M, Lafferty J, Wasserman L. High-dimensional semiparametric Gaussian copula graphical models. Ann Statist. 2012;40:2293–2326. MR3059084.
28. Luparello C. Aspects of collagen changes in breast cancer. J Carcinogene Mutagene S. 2013;13:007. doi: 10.4172/2157-2518.S13-007.
29. Maruyama Y, Seo T. Estimation of moment parameter in elliptical distributions. J Japan Statist Soc. 2003;33:215–229. MR2039896.
30. Shao J, Wang Y, Deng X, Wang S. Sparse linear discriminant analysis by thresholding for high dimensional data. Ann Statist. 2011;39:1241–1265. MR2816353.
31. Wei Z, Li H. A Markov random field model for network-based analysis of genomic data. Bioinformatics. 2007;23:1537–1544. doi: 10.1093/bioinformatics/btm129.
32. Witten DM, Tibshirani R. Penalized classification using Fisher’s linear discriminant. J R Stat Soc Ser B Stat Methodol. 2011;73:753–772. doi: 10.1111/j.1467-9868.2011.00783.x. MR2867457.
33. Wu HM. Kernel sliced inverse regression with applications to classification. J Comput Graph Statist. 2008;17:590–610. MR2528238.
34. Zhao T, Roeder K, Liu H. Positive semidefinite rank-based correlation matrix estimation with application to semiparametric graph estimation. 2013. Unpublished manuscript. doi: 10.1080/10618600.2013.858633.
35. Zhao P, Yu B. On model selection consistency of Lasso. J Mach Learn Res. 2006;7:2541–2563. MR2274449.
36. Zhong W, Zeng P, Ma P, Liu JS, Zhu Y. RSIR: Regularized sliced inverse regression for motif discovery. Bioinformatics. 2005;21:4169–4175. doi: 10.1093/bioinformatics/bti680.
37. Zou H. The adaptive lasso and its oracle properties. J Amer Statist Assoc. 2006;101:1418–1429. MR2279469.
38. Zou H, Li R. One-step sparse estimates in nonconcave penalized likelihood models. Ann Statist. 2008;36:1509–1533. doi: 10.1214/009053607000000802. MR2435443.
