Abstract
Simultaneously performing variable selection and inference in high-dimensional regression models is an open challenge in statistics and machine learning. The increasing availability of vast numbers of variables requires the adoption of specific statistical procedures to accurately select the most important predictors in a high-dimensional space, while controlling the false discovery rate (FDR) associated with the variable selection procedure. In this paper, we propose the joint adoption of the Mirror Statistic approach to FDR control, coupled with outcome randomisation to maximise the statistical power of the variable selection procedure, measured through the true positive rate. Through extensive simulations, we show how our proposed strategy allows us to combine the benefits of the two techniques. The Mirror Statistic is a flexible method to control FDR, which only requires mild model assumptions, but requires two sets of independent regression coefficient estimates, usually obtained after splitting the original dataset. Outcome randomisation is an alternative to data splitting that generates two independent outcomes, which can then be used to estimate the coefficients that go into the construction of the Mirror Statistic. The combination of these two approaches provides increased testing power in a number of scenarios, such as highly correlated covariates and high percentages of active variables. Moreover, it is scalable to very high-dimensional problems, since the algorithm has a low memory footprint and only requires a single run on the full dataset, as opposed to iterative alternatives such as multiple data splitting.
Keywords: False discovery rate, gene expression, high-dimensional regression, post-selection inference, variable selection
1. Introduction
Advances in data collection capabilities have allowed researchers to get access to thousands of features on multiple subjects in a relatively short time. Examples are next-generation sequencing technologies that allow fast DNA and RNA sequencing, and nuclear magnetic resonance spectroscopy used to identify metabolites. 1 Other examples are wearable personal devices that can continuously measure several anthropometric variables and clinical outcomes of interest, such as blood pressure and blood glucose levels, thus producing a vast collection of measurements. 2 Moreover, it is now common to integrate these multiple input sources into a single study, and to repeat the measurements over multiple time points, in a longitudinal fashion, in order to capture potentially relevant trends in the outcomes of interest, given some specific interventions or treatments.
The combination of all these factors generates an incredibly vast and complex set of features that need to be jointly analysed.3,4 In many such studies, feature selection becomes an important task, a typical example being biomarker discovery. This can be a daunting task, for at least two reasons: (a) the available sample size, for example, the number of patients recruited in a cohort study, is very often much lower than the number of available features (high-dimensional problem); therefore, standard statistical or machine learning models struggle to extract the underlying fundamental associations, due to model limitations. (b) Even when a model can deal with a sample size lower than the number of features, the high probability of selecting false positive effects poses a serious threat to the validity of the analysis.
We address the first problem of variable selection by using all available features during the training process and building a statistical model that can automatically select the most relevant variables. This allows us to retain the full interpretability of the effects of the covariates on the outcome of interest, in contrast with dimensionality reduction techniques such as principal component analysis. 5 In high-dimensional regression (or classification) problems, popular choices are the least absolute shrinkage and selection operator (LASSO), ElasticNet, least-angle regression (LARS), and smoothly clipped absolute deviation (SCAD), which provide efficient algorithms that scale well to a large number of features.6–9 Alternatives also exist in the Bayesian framework, where variable selection is performed through an appropriate choice of the prior distributions. 10 Popular choices are the two-group Spike and Slab prior distribution, which explicitly provides posterior probabilities of inclusion for each variable, and one-group priors, such as the Laplace prior distribution, which shrink coefficients toward zero, acting as a regularisation in a similar fashion to the penalty in the LASSO.
In addition to selecting only a subset of variables, in many real-data applications it is also essential to be able to make proper inferences on the selected subset of features. This is essential to have a reliable estimate of the confidence intervals of the regression coefficients and to be able to control some form of error rate, such as the false discovery rate (FDR). 11 However, simply proceeding with inference after a data-dependent variable selection step does not yield valid post-selection inference, as explained in Berk et al. 12 This is because data-driven variable selection procedures, such as the ones mentioned above, generate a model that is not deterministic, and the straightforward application of classic approaches to inference, such as ordinary least squares (OLS), does not account for this additional randomness. Therefore, the estimated coefficients and the corresponding confidence intervals will be biased, leading to a potential increase in erroneous selections.
A number of methods exist to control error rates in multiple testing. Meinshausen and Buhlmann 13 propose stability selection as a method to control the number of false discoveries in a LASSO regression setting. Lee et al. 14 take a different approach, providing an analytical solution for the linear regression problem when using a LASSO penalty, formally accounting for the variable selection process at the inferential step, thus obtaining correct confidence intervals for the selected coefficients. Rügamer et al. 15 provide a partial extension to the case of additive and linear mixed models, using Monte Carlo approximations; however, the proposed solution is computationally intensive. Overall, this formal treatment is limited to simple cases, such as linear regression, and extensions are difficult.
Barber and Candes, 16 in their pioneering work, propose the knockoff filter procedure as a direct way to control the FDR. The knockoff method works by augmenting the space of covariates, adding a perturbed version of each feature to the design matrix and performing variable selection on the new feature space. The authors provide an upper-bound estimate of the FDR and a new knockoff test statistic through which it is possible to control the FDR at any specific level. This approach, however, has some limitations: the knowledge of the joint distribution of the covariates is required to construct the knockoff filters, and the procedure is limited to the case $n \ge p$. Candes et al. 17 provide an extension to high-dimensional scenarios, but their approach still requires knowledge of the joint distribution of the covariates, which is unknown in most real data applications.
Despite the limitations of the knockoff, the idea of feature perturbation has been successfully adopted in other works as a way to control FDR. Xing et al. 18 develop the Gaussian Mirrors procedure, which controls FDR without requiring any distributional assumption on the covariates; however, this approach is inefficient because it evaluates only one variable at a time. Using a similar approach, Dai et al. 19 develop the Mirror Statistic, substituting the feature perturbation step with a two-step procedure based on data splitting (DS). The first step is variable selection on a subset of the data and the second step is statistical inference on a second subset of data, independent of the first. Since the two sets are independent by construction, the conditional distribution of the inference set given the output of the selection step is the same as the unconditional one, so classical procedures can be used to provide valid inference for the selected parameters. 20 One drawback of this approach is the loss of power caused by the reduced sample size available for the inference step (and similarly for the variable selection step). To mitigate this problem, the authors propose a variation called multiple DS (MDS), more akin to stability selection, showing increased power in simulations. 19 The authors show that this approach outperforms the knockoff filter in many situations. Nevertheless, MDS is computationally much more costly than DS, since the same procedure has to be repeated multiple times (at least 50 according to the authors).
In this paper, we propose to use the Mirror Statistic, but, instead of creating two independent sub-samples using DS, we borrow the idea of outcome randomisation from Rasines and Young, 20 where the authors propose a simple mechanism to create two independent sets of data by adding some random noise to the outcome, splitting more efficiently the original information available into two independent new pseudo-outcomes. The result is an increase in statistical power and a more computationally efficient algorithm. We provide a performance comparison via numerical experiments, replicating the results of Dai et al. 19 and extending the simulation scenarios to more challenging settings.
Throughout the article, we use the following notation: RandMS indicates our proposed model with outcome Randomisation plus Mirror Statistic, DS indicates single Data Splitting with Mirror Statistic, and MDS indicates Multiple Data Splitting with Mirror Statistic. $S_0$ indicates the set of features with a true null coefficient and $S_1$ is the set of features with a true non-null coefficient (active variables). $X$ denotes the whole matrix of covariates, $X_j$ denotes column $j$ and $y$ denotes the outcome vector. We use the terms variables, covariates and features interchangeably.
The remainder of the article is organised as follows. In Section 2, we review the methodology underlying FDR control via the Mirror Statistic and DS. We then introduce Randomisation and provide the algorithm for our proposed strategy. In Section 3, we show in detail the results of our simulations and the computational performance of the algorithm. In Section 4, we apply the proposed method to the selection of genes in a high-dimensional real-world study. Finally, in Section 5, we summarise our contributions, limitations, and potential extensions of the method.
2. Methods
2.1. FDR control
The FDR has been introduced as a less conservative approach to false positive error control compared to the family-wise error rate, which can be too restrictive when testing a large number of hypotheses. 11 FDR is defined as the expectation of the false discovery proportion (FDP):

$$\mathrm{FDP} = \frac{|\hat{S} \cap S_0|}{\max(|\hat{S}|, 1)}, \qquad \mathrm{FDR} = \mathbb{E}[\mathrm{FDP}] \qquad (1)$$

where the expectation is taken with respect to the stochastic model selection procedure and the randomness in the data, and $\hat{S}$ represents the set of all selected features.
Benjamini and Hochberg 11 provide a correction method for the $p$-values so that a specific FDR level can be achieved, under the assumption of independence of the $p$-values. Although some extensions exist that allow for some form of dependence, 21 for many of the algorithms mentioned in Section 1, $p$-values are not available at all (e.g. LASSO). Hence the need for methods that can achieve FDR control without explicitly calculating $p$-values, such as the Mirror Statistic (equation (2)).
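For contrast with the $p$-value-free approach pursued in this paper, the BH step-up rule is easy to state in code. The following is a generic, illustrative Python sketch (the function name is ours; the paper's own analyses were implemented in Julia):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    """BH step-up rule: with p-values sorted ascending, find the largest
    rank i such that p_(i) <= (i/m) * q and reject hypotheses 1, ..., i."""
    pvals = np.asarray(pvals, dtype=float)
    m = pvals.size
    order = np.argsort(pvals)                   # ascending order of p-values
    thresholds = (np.arange(1, m + 1) / m) * q  # (i/m) * q for i = 1..m
    below = pvals[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()          # largest i with p_(i) below
        rejected[order[: k + 1]] = True         # reject all up to rank k
    return rejected
```

Note that the rejection set is determined by the largest rank satisfying the bound, even if some smaller ranks violate it; this is what makes the procedure a step-up rule.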
For completeness, we also give the formula for the true positive rate (TPR), defined as

$$\mathrm{TPR} = \frac{|\hat{S} \cap S_1|}{|S_1|}$$
The TPR is used to measure the power of the method, that is, the ability to select the true active variables.
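Both metrics can be computed directly from the selected set and the true active set. A minimal Python sketch (function and argument names are ours):

```python
def fdp_tpr(selected, active):
    """Empirical false discovery proportion and true positive rate.

    selected: indices chosen by the procedure (S-hat)
    active:   indices of the truly non-null features (S1)
    """
    selected, active = set(selected), set(active)
    fdp = len(selected - active) / max(len(selected), 1)
    tpr = len(selected & active) / max(len(active), 1)
    return fdp, tpr
```

The `max(..., 1)` in the denominator mirrors the convention in equation (1), so that selecting nothing yields an FDP of zero rather than a division by zero.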
2.2. DS and randomisation
The practice of splitting a given dataset into multiple smaller independent slices is common and the standard in most machine learning applications, 22 where, generally, a training, validation and testing set are generated from the original sample. While in machine learning splitting is done to validate the prediction accuracy of a model, in classic statistical inference the same idea can be used to construct valid inferential procedures. Dai et al. 19 use this strategy in order to control FDR via the Mirror Statistic, defined for variable $j$ as:

$$M_j = \operatorname{sign}\left(\hat{\beta}_j^{(1)} \hat{\beta}_j^{(2)}\right) f\left(|\hat{\beta}_j^{(1)}|, |\hat{\beta}_j^{(2)}|\right) \qquad (2)$$

where $f(\cdot, \cdot)$ is a non-negative, exchangeable and monotonically increasing function, and $\hat{\beta}_j^{(1)}$ and $\hat{\beta}_j^{(2)}$ are two distinct estimates of the regression coefficient obtained on two independent subsets of the sample. The logic behind equation (2) is that for relevant features the corresponding $M_j$ will take a relatively large positive value, because the two independent estimates, $\hat{\beta}_j^{(1)}$ and $\hat{\beta}_j^{(2)}$, will have concordant signs. Conversely, if the estimated coefficients have discordant signs, $M_j$ is always negative, and if both estimates are relatively small, suggesting that the feature is probably not relevant, then $M_j$ will be small as well.
Under the assumption that, for each null feature $j \in S_0$, the sampling distribution of at least one of the two estimates is symmetric around zero, $M_j$ will also be symmetric around zero. This property, together with the construction of the Mirror Statistic, provides an upper bound on the number of false positives: for any threshold $t > 0$,

$$\#\{j \in S_0 : M_j > t\} \approx \#\{j \in S_0 : M_j < -t\} \le \#\{j : M_j < -t\} \qquad (3)$$

so that $\#\{j : M_j < -t\} / \max(\#\{j : M_j > t\}, 1)$ can be directly used to approximate the false discovery proportion in equation (1).
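Equations (2) and (3) translate into a few lines of code. The sketch below is illustrative Python (not the paper's Julia implementation); $f(u, v) = u + v$ is one admissible choice of $f$, assumed here for concreteness:

```python
import numpy as np

def mirror_stats(b1, b2, f=lambda u, v: u + v):
    """M_j = sign(b1_j * b2_j) * f(|b1_j|, |b2_j|), equation (2).
    f(u, v) = u + v is one non-negative, exchangeable, increasing choice."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    return np.sign(b1 * b2) * f(np.abs(b1), np.abs(b2))

def mirror_threshold(M, q=0.1):
    """Smallest t > 0 such that the estimated FDP,
    #{j : M_j < -t} / max(#{j : M_j > t}, 1), is below the target q."""
    for t in np.sort(np.abs(M[M != 0])):
        if np.sum(M < -t) / max(np.sum(M > t), 1) <= q:
            return t
    return np.inf  # no feasible threshold: select nothing
```

The selected set is then `np.nonzero(M > mirror_threshold(M, q))[0]`: features whose mirror statistic exceeds the data-driven threshold.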
As we can see from the definition, to use the Mirror Statistic we need two independent sets of coefficient estimates. Although DS is universally valid and straightforward to use, it comes at the cost of a much-reduced sample size, which can have detrimental effects on the power of the statistical test and on the stability of the variable selection. To counteract this downside, Dai et al. 19 propose MDS as a way to increase the power of the test statistic. MDS amounts to repeating the whole procedure of variable selection with simple DS multiple times and then aggregating the results. In simulations, MDS seems to provide higher power; however, this improvement comes at the cost of a much higher computational burden and additional uncertainty due to the choice of the aggregation strategy.
This is where randomisation offers an alternative way of distributing the available sample information and helps avoid the randomness of a single arbitrary split, creating two pseudo-independent sets by perturbing the outcome with some random noise $w$. 20 The general idea is to use a randomisation scheme through which we only allow ourselves to observe a perturbed outcome $u$ at the variable selection step, while at the inference step we only observe a second perturbed outcome $v$, constructed to be independent of $u$.
Here we consider the case where the outcome of interest $y$ has a normal distribution whose mean is a function of the features $X$:

$$y = X\beta + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n) \qquad (4)$$

where $I_n$ is the $n$-dimensional identity matrix.
Given $\sigma^2$, or an estimate $\hat{\sigma}^2$, we can generate an $n$-dimensional vector of random normal noise as $w \sim N(0, \gamma \sigma^2 I_n)$, where the scalar $\gamma > 0$ controls how the available information is distributed between variable selection and inference. Rasines and Young 20 show how using $\gamma = 1$ is equivalent to splitting the data into two halves of equal size, which is the most natural choice in the absence of any additional information. Any other choice would place more information on either the LASSO (selection) or the OLS (inference) step.
Then, building $u = y + w$ for selection and $v = y - w/\gamma$ for inference, we have that $u$ and $v$ are independent, with $u \sim N(X\beta, (1+\gamma)\sigma^2 I_n)$ and $v \sim N(X\beta, (1 + 1/\gamma)\sigma^2 I_n)$.
Randomisation can be interpreted as averaging information over all possible data splits of the same size. Rasines and Young 20 show that for a normally distributed outcome , randomisation guarantees a power that is always at least as high as DS.
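The independence of the two pseudo-outcomes can be checked empirically. Below is an illustrative Python sketch of the randomisation step as we read it, with $w \sim N(0, \gamma\sigma^2 I_n)$, $u = y + w$ and $v = y - w/\gamma$ (this parametrisation is an assumption on our part; equivalent forms exist):

```python
import numpy as np

def randomise(y, sigma2, gamma=1.0, rng=None):
    """Split y into two independent pseudo-outcomes:
    u = y + w (selection) and v = y - w/gamma (inference),
    with w ~ N(0, gamma * sigma2 * I_n), so cov(u, v) = sigma2 - sigma2 = 0."""
    rng = np.random.default_rng(rng)
    w = rng.normal(0.0, np.sqrt(gamma * sigma2), size=len(y))
    return y + w, y - w / gamma

# Empirical check: u and v are uncorrelated (hence independent, being
# jointly normal) when sigma2 matches the true outcome variance.
rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=200_000)
u, v = randomise(y, sigma2=1.0, gamma=1.0, rng=2)
print(abs(np.cov(u, v)[0, 1]))  # close to zero
```

With $\gamma = 1$, both $u$ and $v$ carry noise variance $2\sigma^2$, matching the effective noise level of a 50/50 data split.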
The complete inferential procedure is detailed in the following algorithm:
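The steps of Algorithm 1, as described in the text, can be sketched as follows. This is an illustrative Python version (the authors' implementation is in Julia); the choice $f(u, v) = u + v$, the cross-validation setting, and the function names are our assumptions:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def rand_ms(X, y, sigma2, q=0.1, gamma=1.0, seed=0):
    """Outcome randomisation + Mirror Statistic: returns selected indices."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # 1. randomise the outcome into two independent pseudo-outcomes
    w = rng.normal(0.0, np.sqrt(gamma * sigma2), size=n)
    u, v = y + w, y - w / gamma
    # 2. variable selection: cross-validated LASSO on (X, u)
    lasso = LassoCV(cv=10).fit(X, u)
    S = np.flatnonzero(lasso.coef_)
    if S.size == 0:
        return S
    # 3. inference: OLS of v on the selected columns (plus an intercept)
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(n), X[:, S]], v, rcond=None)
    b1, b2 = lasso.coef_[S], coef[1:]   # two independent estimates
    # 4. mirror statistics and data-driven threshold at target FDR q
    M = np.sign(b1 * b2) * (np.abs(b1) + np.abs(b2))
    for t in np.sort(np.abs(M[M != 0])):
        if np.sum(M < -t) / max(np.sum(M > t), 1) <= q:
            return S[M > t]
    return np.array([], dtype=int)
```

A single LASSO fit and one least-squares solve per run is what gives the method its low computational footprint compared to MDS.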
If the residual variance $\sigma^2$ is unknown, it can be estimated using the residual sum of squares obtained from the LASSO model fitted with the penalisation parameter tuned by $K$-fold cross-validation, 20 that is:

$$\hat{\sigma}^2 = \frac{\lVert y - \hat{y} \rVert_2^2}{n - \hat{s}}$$

where $n$ is the sample size, $\hat{s}$ is the number of non-zero coefficients and $\hat{y}$ is the predicted outcome.
This is the strategy that we adopt in the numerical experiments in Section 3, as well as in the real-world data application in Section 4, where in both cases we use $K$-fold cross-validation.
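This variance estimate can be computed with a cross-validated LASSO fit. An illustrative Python sketch using scikit-learn (not the paper's Julia code; the function name is ours):

```python
import numpy as np
from sklearn.linear_model import LassoCV

def estimate_sigma2(X, y, cv=10):
    """Residual-variance estimate from a cross-validated LASSO fit:
    sigma2_hat = ||y - y_hat||^2 / (n - s_hat), with s_hat the number of
    non-zero coefficients at the selected penalty."""
    fit = LassoCV(cv=cv).fit(X, y)
    s_hat = np.count_nonzero(fit.coef_)
    rss = np.sum((y - fit.predict(X)) ** 2)
    return rss / (len(y) - s_hat)
```

Dividing by $n - \hat{s}$ rather than $n$ compensates for the degrees of freedom absorbed by the selected coefficients, in analogy with the usual OLS variance estimator.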
We further explore the implications of an incorrectly estimated variance through additional simulations where we control the value of used in the model. In the Appendix, we report the results of this experiment. When the variance is underestimated the FDR is not properly controlled, while when the variance is overestimated the algorithm becomes more conservative in terms of FDR control.
2.3. Assumptions
The theory underlying the development of the Mirror Statistic is based on the independence of the two sets of coefficient estimates and the following symmetry and weak dependence assumptions: 19
- Symmetry: for each feature $j$, the sampling distribution of at least one of the two estimates, $\hat{\beta}_j^{(1)}$ and $\hat{\beta}_j^{(2)}$, is symmetric around zero.
- Weak dependence: the mirror statistics $\{M_j : j \in S_0\}$ are continuous random variables, and there exist a constant $c > 0$ and $\alpha \in [0, 2)$ such that $\operatorname{Var}\left(\sum_{j \in S_0} \mathbb{1}(M_j > t)\right) \le c\, p_0^{\alpha}$ for any $t$, where $p_0 = |S_0|$ is the number of null features. This assumption translates into a restriction on the correlation among the null features.
A requirement for the symmetry condition is that all variables with a non-null coefficient are selected in the variable selection step; this is generally referred to as the sure screening property. In our simulations and real-world application, we use LASSO and OLS to perform, respectively, variable selection and inference in linear models. For the LASSO, the sure screening property depends on a signal strength condition of the form $\min_{j \in S_1} |\beta_j| \gg \sqrt{\log p / n}$.
In order to better understand the impact of violating the assumption of symmetry we run a controlled simulation where we monitor the individual steps of the randomisation plus Mirror Statistic algorithm. Results are detailed in the Appendix.
3. Simulations
To prove the effectiveness of our proposed approach we perform several simulations in multiple scenarios, comparing our method against DS and MDS.
We first repeat the simulations done in Dai et al., 19 to check whether we can achieve a similar performance to what was reported in the original paper. We then perform additional simulations under different scenarios, with the purpose of finding the limit of the approach, that is, when the performance deteriorates too much, and to check in which situations our method can perform better than the alternatives.
The common strategy for all simulations is to perform variable selection with LASSO and inference with a standard linear model. For all simulations, we use the same choice of $f$ in equation (2) as Dai et al. 19
The metrics that we track are the FDR and the TPR, or power. An ideal selection procedure will have a TPR close to 1 for every level of FDR we wish to control for.
3.1. Replication of the simulations from the original paper with DS and MDS
The data for this scenario are generated using the following parameters:
- sample size $n$
- number of covariates $p$
- number of non-zero coefficients $p_1$, a fixed percentage of $p$
- covariates correlation $\rho$
- regression coefficients signal strength
- error variance $\sigma^2$
- randomisation variance factor $\gamma$
- linear predictor in equation (4) set to $X\beta$
Throughout the paper, the correlation coefficient $\rho$ represents the highest value from which the correlation matrix is built, following the structure defined in equation (5), unless otherwise specified.
The covariates are generated as independent random draws from a multivariate normal distribution, $x_i \sim N(0, \Sigma)$, where the covariance $\Sigma$ is constructed as a block-diagonal Toeplitz matrix, with each block defined as:

$$\Sigma^{(b)}_{kl} = \rho^{|k - l|}, \qquad k, l = 1, \ldots, d \qquad (5)$$

where $d$ is the dimension of the block.
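For illustration, such a covariance matrix can be built as follows (a Python sketch of our reading of equation (5); function and argument names are ours):

```python
import numpy as np
from scipy.linalg import toeplitz, block_diag

def block_toeplitz_cov(p, block_dim, rho):
    """Block-diagonal covariance with Toeplitz blocks whose entries decay
    as rho^|k-l|; p is assumed to be a multiple of block_dim."""
    block = toeplitz(rho ** np.arange(block_dim))
    return block_diag(*([block] * (p // block_dim)))

# covariates can then be drawn as, e.g.:
# X = rng.multivariate_normal(np.zeros(p), block_toeplitz_cov(p, d, rho), size=n)
```

The block structure keeps correlation local: features in different blocks are independent, while within a block the correlation decays geometrically with the lag.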
The regression coefficients are randomly generated from a normal distribution with mean zero and a fixed standard deviation.
Replicating the strategy used by Dai et al., 19 we first compare the performance of the three algorithms fixing the signal strength and varying the degree of correlation $\rho$, and then fixing the correlation and varying the signal strength. MDS is run using 50 replications of DS, as recommended by the authors.
Fifty independent replications are run for each combination. Here we show boxplots of the results.
In Figures 1 and 2, top plots, we can observe some common patterns: all three algorithms achieve, on average, FDR control at the nominal level; MDS is always more conservative, achieving on average a lower FDR.
Figure 1.
Paper replication study – . Top: False discovery rate. Bottom: True positive rate.
Figure 2.
Paper replication study – signal strength . Top: False discovery rate. Bottom: True positive rate.
In the bottom plots, we show the TPR estimates: MDS does not seem to be significantly better than the simple DS; RandMS achieves a performance comparable to DS and MDS, with often higher (better) median values.
3.2. New simulations
We now concentrate on testing our method on new scenarios that were not covered in the original paper. We explore the performance in contexts with a higher correlation, near ill-conditioned covariance matrices, higher proportions of non-zero regression coefficients, non-block diagonal covariance matrices and regression coefficients sampled from a fixed known pool of values.
3.2.1. β from fixed pool
The first additional scenario that we test is the situation where the regression coefficients are not drawn from a distribution, as done in Section 3.1, but rather randomly sampled from a fixed, known pool of values. For this simulation, we sample the coefficients from a fixed set of values.
Selecting the coefficients from a known set of values allows for better control and understanding of the variable selection capabilities of the algorithms, since the random draws from a normal distribution will naturally be concentrated around the zero mean, making it more difficult to really understand which coefficients can actually be considered non-zero. This set of regression coefficients is used in all of the following simulations.
Keeping all the other simulation settings as before, we run the same simulations. From Figure 3 (top), we see that the median values and variability of FDR are very similar to the ones obtained before, validating the ability of the Mirror Statistic to control the FDR. On the other hand, the results for the TPR in Figure 3 (bottom) suggest that variable selection with fixed coefficients is somewhat easier, as expected, which is reflected in higher median TPR values, concentrated near 1.
Figure 3.
β from fixed pool. Top: False discovery rate. Bottom: True positive rate.
All three methods have a comparable performance, both in terms of FDR control and TPR, with the only exception of MDS, which has, on average, a lower TPR than DS and RandMS. This behaviour could be explained by the fact that MDS has a very conservative control over FDR, thus also ending up with a lower TPR.
3.2.2. Higher percentages of non-zero coefficients
A natural question arises regarding whether DS, MDS and RandMS, can cope with a higher percentage of active variables, potentially highly correlated. To this end, we increase the complexity of the simulated data by increasing the percentage of active variables and allowing very high degrees of correlations across the covariates.
We start by increasing the percentage of active variables. From Figure 4 (top), we see that the FDR is still under control and MDS is conservative, as before. In Figure 4 (bottom), we can observe that RandMS achieves a TPR always at least comparable to DS and MDS. Common to all three methods is the sharp decrease in TPR for stronger correlation structures.
Figure 4.
Active coefficients . Top: False discovery rate. Bottom: True positive rate.
By increasing the percentage of active variables further, we can appreciate a sharper difference, to the advantage of RandMS, both in terms of FDR and TPR. In Figure 5 (top), we see that DS and MDS start to lose control over the FDR; in particular, MDS is no longer as conservative as before. RandMS is still able to achieve the required FDR control at the pre-specified level.
Figure 5.
Active coefficients . Top: False discovery rate. Bottom: True positive rate.
In Figure 5 (bottom), we see that RandMS outperforms the competitors in terms of TPR, in particular for correlations up to, and including, 0.5. DS, on the other hand, is unable to retain enough power.
Finally, setting the percentage of active variables to 30%, the difference is even more striking, again in favour of RandMS. Figure 6 (top) and (bottom) shows the results for FDR and TPR, respectively.
Figure 6.
Active coefficients 30%. Top: False discovery rate. Bottom: True positive rate.
3.2.3. Different covariance matrix structure
The next simulation is performed with covariates generated from a normal distribution whose inverse covariance matrix is near ill-conditioned, meaning that the smallest eigenvalues of the corresponding covariance matrix are close to zero. This results in a more unstable data generation.
The covariance matrix is constructed starting from the identity matrix and changing only the first off-diagonal entries to be equal to some specified value, here denoted by $\rho$. This covariance structure implies that each covariate is correlated only with its immediate neighbours.
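A Python sketch of this construction (illustrative; `banded_cov` is our name), which also makes the near-singularity explicit:

```python
import numpy as np

def banded_cov(p, rho):
    """Identity matrix with the first off-diagonals set to rho, so each
    covariate is correlated only with its immediate neighbours. The smallest
    eigenvalue is roughly 1 - 2*rho, so the matrix (and hence its inverse)
    becomes near ill-conditioned as rho approaches 0.5."""
    sigma = np.eye(p)
    idx = np.arange(p - 1)
    sigma[idx, idx + 1] = rho
    sigma[idx + 1, idx] = rho
    return sigma
```

The eigenvalues of this tridiagonal Toeplitz matrix are $1 + 2\rho\cos(k\pi/(p+1))$, $k = 1, \ldots, p$, which is why values of $\rho$ near 0.5 drive the smallest eigenvalue toward zero.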
In Figure 7 (top), we see that MDS is still conservative in terms of FDR, while DS and RandMS correctly control FDR at the target level. However, as we increase the percentage of active variables to 30% (Figure 8, top), we see that MDS is no longer conservative, while RandMS still works well.
Figure 7.
Ill-conditioned covariance – active coefficients 10%. Top: False discovery rate. Bottom: True positive rate.
Figure 8.
Ill-conditioned covariance – active coefficients 30%. Top: False discovery rate. Bottom: True positive rate.
In terms of TPR, the advantage of RandMS is more clear. Already with a percentage of active coefficients of 10%, Figure 7 (bottom), RandMS does a better job than DS and MDS, and, increasing that percentage to 30%, Figure 8 (bottom), RandMS totally outperforms the competitors.
3.2.4. Increasing $p$
Here we test the performance of RandMS with a higher number of covariates $p$, while keeping the sample size fixed. We repeat the simulations for different proportions of non-zero coefficients.
In Figure 9, we report the performance. RandMS is able to control the FDR at the target level as required, and is comparatively better than the other methods, which show a higher variability. The TPR is much higher for RandMS up to, and including, a correlation factor of 0.7, with TPR values close to 1. As expected, the performance decreases for more extreme correlation levels.
Figure 9.
Block covariance matrix – with . Top: False discovery rate. Bottom: True positive rate.
3.3. Computational performance
We run a benchmark of the computational requirements of the proposed Randomisation plus Mirror Statistic versus MDS (run with 50 splits). The benchmark, as well as all other analyses, has been done in Julia, 23 on a Lenovo ThinkPad machine running Linux, equipped with a 13th Gen Intel® Core™ i5-1345U CPU with 12 logical cores.
In Figure 10, we show the average time (in seconds) and the memory requirements (in gigabytes) to run the algorithms on a linear regression with an increasing number of variables $p$, with a fixed number of active variables, a fixed sample size $n$ and a block-diagonal covariance matrix. For the randomisation method, both time and memory requirements scale linearly with the number of variables; at the largest $p$ considered, the time required to estimate the full model is about 4 s. In comparison, MDS requires an order of magnitude more time and memory: for the same number of variables, MDS takes on average 45 s and requires over 7 GB of memory.
Figure 10.
Computational benchmark of randomisation with mirror statistic versus multiple data splitting. Left: CPU time in seconds. Right: Memory requirement in Gigabytes.
4. Real data application to the identification of genes regulating fasting triglyceride levels
Elevated serum triglyceride (TG) levels in the blood are strongly associated with an increased risk of cardiovascular diseases (CVDs). Serum TG levels can be reduced by a healthy diet, but there are also large inter-individual variations in fasting TG levels. An improved understanding of this variation would be beneficial for CVD prevention. In this example, we are interested in relating fasting TG levels to gene expression through a linear regression model. We have data from the screening visit of a randomised controlled dietary intervention trial, presented in detail in Ulven et al. 24 In this trial, gene expression was measured in peripheral blood mononuclear cells (PBMC). These are immune system cells and, because they are circulating cells, they are exposed to nutrients, metabolites and peripheral tissues and may therefore reflect whole-body health. We include all individuals from whom we have both PBMC gene expression and fasting TG levels, in total 251 individuals. The outcome is the log measurement of blood TGs, while we use the expression measurements of 13,967 genes as covariates (Expression BeadChips; preprocessed probe-level intensity values).
The log-transformed TG outcome is well approximated by a normal distribution, while the gene expression data have been already preprocessed and are also well approximated by a normal distribution.
We proceed to analyse the data with our proposed method, that is, outcome Randomisation plus Mirror Statistic, in order to identify which genes could potentially contribute to the differences in TG levels. We use LASSO for variable selection on one randomised outcome and a standard linear regression model for the coefficient estimation on the other randomised outcome (Algorithm 1); we set the randomisation parameter $\gamma$ and the function $f$ in equation (2) as in the simulation studies. The FDR target level is set to 0.1.
If we look only at the variable selection part of Algorithm 1 (i.e. LASSO), the number of selected genes is 30, while only two of those (plus the intercept) have been selected using the full RandMS algorithm. Both our selected genes can be linked to atherosclerosis and thus seem biologically plausible. MYLIP ( ) is related to TG levels through low-density lipoprotein transport, while an overexpression of ABCG1 ( ) results in increased efflux of cellular cholesterol to high-density lipoprotein, which has an inverse association with TG. The numbers in parentheses are the estimated multiplicative effects, with the respective confidence intervals in square brackets; thus, an increased expression of both genes seems to have a positive effect on TG levels.
5. Discussion
We propose the adoption of outcome randomisation instead of DS, in combination with the Mirror Statistic, in order to effectively control the FDR in high-dimensional linear regressions. Intuitively, randomisation acts as information averaging and helps avoid the pitfalls of DS. When combined with the Mirror Statistic, it correctly controls the FDR at the target level, while providing higher power and a more computationally efficient algorithm.
Our extensive simulations show superior performance compared to DS strategies, in various scenarios of increasing complexity. Even in very high-dimensional cases, we can retain the good scalability of the proposed method.
Finally, we perform a real data analysis, where the outcome of interest is blood TG levels, and the covariates are gene expression data. The dimension of the covariates space, compared to the sample size, makes this problem a perfect example of high-dimensional linear regression. We use our method to perform variable selection and inference, with a target FDR of 10%. We are able to identify two genes, potentially responsible for the variation of TG levels.
The method is currently limited to normally distributed outcomes, for which randomisation has a closed-form analytical solution and the symmetry requirement on the regression coefficients needed to apply the Mirror Statistic is satisfied. It can therefore be used for inference on linear regression models and Gaussian graphical models, for example. It would be interesting to explore possible extensions to outcomes following arbitrary distributions and to high-dimensional mixed models. Leiner et al. 25 provide an extension of randomisation to distributions belonging to the exponential family, relying on the concept of conjugate distributions from Bayesian statistics. Their result could potentially be used, for example, for a high-dimensional logistic regression model. Dai et al. 26 extend the use of the Mirror Statistic to high-dimensional logistic regression; however, their method relies on the computationally expensive procedure of de-biasing the LASSO estimate, which is needed to satisfy the symmetry requirement of the Mirror Statistic. Future efforts to combine these two extensions could be of practical interest. Another area of further improvement could be the adoption of different variable selection techniques in Algorithm 1, for example, substituting LASSO with a model that can better handle highly correlated covariates, such as ElasticNet. 7
Acknowledgments
The authors wish to thank Professor Stine Ulven, Department of Nutrition, University of Oslo, for providing the TG dataset.
Appendix A.
We explore the implications of an incorrectly estimated variance through additional simulations. We generate data from a linear regression model and evaluate the performance of the algorithm for a range of values of the noise variance σ², which are directly provided as input to the model, with a target FDR of 0.1. Figure A1 shows how different values of the variance affect our proposed strategy in terms of FDR and power, as well as the number of variables selected by the variable selection step (LASSO). For each value of σ², we average the metrics over 20 generated datasets. When the variance is underestimated, the FDR is not properly controlled. The estimated variance is needed for the randomisation of the outcome in Algorithm 1; if it is too small relative to the ground-truth value, the two randomised outcomes are not independent. As a consequence, the variable selection becomes biased and overconfident.
Figure A1.
Average false discovery rate (FDR), power and proportion of selected variables against different values of the noise variance σ².
When the variance is overestimated, the algorithm becomes more conservative and the FDR is controlled. The number of selected variables stays roughly constant across the range of values considered.
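The residual dependence induced by a misspecified variance can be checked directly: with γ = 1, Cov(y⁽¹⁾, y⁽²⁾) = (σ²_true − σ²_est)I, so underestimation leaves positive covariance between the two copies, while overestimation gives negative covariance and hence conservative behaviour. A toy numerical check under a pure-noise outcome (the function name and parameter values are illustrative only, not part of Algorithm 1):

```python
import numpy as np

def randomised_covariance(sigma2_true, sigma2_est, n=200000, seed=0):
    """Empirical Cov(y1, y2) when the randomisation noise uses an
    estimated variance sigma2_est instead of the true sigma2_true.
    Theory (gamma = 1): Cov(y1, y2) = sigma2_true - sigma2_est."""
    rng = np.random.default_rng(seed)
    y = rng.normal(0.0, np.sqrt(sigma2_true), n)  # pure-noise outcome
    w = rng.normal(0.0, np.sqrt(sigma2_est), n)   # randomisation noise
    y1, y2 = y + w, y - w                         # the two randomised copies
    return np.cov(y1, y2)[0, 1]
```

Underestimation (sigma2_est < sigma2_true) yields a clearly positive covariance, the exact value yields approximately zero, and overestimation yields a negative covariance.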
In order to better understand the impact of violating the assumption of symmetry, we run a controlled simulation in which we monitor the individual steps of the randomisation plus Mirror Statistic algorithm. We generate data from a linear regression model with a small number of non-null regression coefficients. The covariates are random draws from a multivariate normal distribution with a covariance matrix built according to equation (5), using a high correlation factor. The experiment is repeated on 50 independently drawn datasets. By making the experiment deliberately hard, we obtain a LASSO step that does not select all active variables, thereby nullifying the sure screening property, which in turn causes the Mirror Statistic coefficients to not be distributed symmetrically around 0. In Figure A2, we compare the proportion of true non-null variables selected by the LASSO (x-axis) against the FDR achieved with RandMS. It is clear how the violation of the sure screening property affects the final FDR. Figure A3 shows that this is caused by a violation of the symmetry requirement, which is fundamental for the approximation in equation (3) to work. If the symmetry assumption were fulfilled, the number of Mirror Statistic coefficients (associated with the truly null regression coefficients) above and below the optimal threshold calculated through Algorithm 1 should be roughly the same, while this is clearly not the case in this experiment.
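For reference, the selection rule discussed here can be sketched in a few lines: the Mirror Statistic in one of its common forms, M_j = sign(β̂⁽¹⁾_j β̂⁽²⁾_j)(|β̂⁽¹⁾_j| + |β̂⁽²⁾_j|), together with the data-driven threshold that keeps the estimated FDP below the target level q (the approximation in equation (3)). The coefficient estimates below are synthetic stand-ins for the two independent fits, not output of Algorithm 1:

```python
import numpy as np

def mirror_statistics(beta1, beta2):
    """One common form: M_j = sign(b1_j * b2_j) * (|b1_j| + |b2_j|).
    A truly null variable gives M_j roughly symmetric around 0, while
    an active variable tends to give a large positive M_j."""
    return np.sign(beta1 * beta2) * (np.abs(beta1) + np.abs(beta2))

def fdr_threshold(M, q):
    """Smallest t with estimated FDP = #{M <= -t} / max(#{M >= t}, 1) <= q."""
    for t in np.sort(np.abs(M[M != 0])):
        fdp = (M <= -t).sum() / max((M >= t).sum(), 1)
        if fdp <= q:
            return t
    return np.inf

# Synthetic coefficient estimates: 10 active variables, 90 nulls.
rng = np.random.default_rng(0)
b1 = np.concatenate([rng.normal(3.0, 0.3, 10), rng.normal(0.0, 0.3, 90)])
b2 = np.concatenate([rng.normal(3.0, 0.3, 10), rng.normal(0.0, 0.3, 90)])
M = mirror_statistics(b1, b2)
tau = fdr_threshold(M, q=0.1)
selected = np.where(M >= tau)[0]
```

When the null mirror coefficients are symmetric around 0, the negative tail #{M ≤ −t} estimates the number of false positives in the positive tail, which is exactly the approximation that breaks down in the experiment above.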
Figure A2.
Proportion of true non-null variables selected by the LASSO (x-axis) against the FDR achieved with RandMS. LASSO: least absolute shrinkage and selection operator; FDR: false discovery rate.
Figure A3.
Distribution of the number of Mirror Statistic coefficients (associated with the truly null coefficients) lying above and below the optimal threshold calculated for each repeated sampling.
Footnotes
Funding: The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interest: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Code availability: The code used to perform all the simulations, the real data analysis and the computational performance benchmark is available at https://github.com/marcoelba/SelectiveInference.
ORCID iDs: Marco Molinari https://orcid.org/0000-0002-3374-9099
Magne Thoresen https://orcid.org/0000-0003-1511-5938
References
- 1.Borah K, Das HS, Seth S, et al. A review on advancements in feature selection and feature extraction for high-dimensional NGS data analysis. Funct Integr Genomics 2024; 24: 139. [DOI] [PubMed] [Google Scholar]
- 2.Berry SE, Valdes AM, Drew DA, et al. Human postprandial responses to food and potential for precision nutrition. Nat Med 2020; 26: 964–973. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Konietschke F, Schwab K, Pauly M. Small sample sizes: a big data problem in high-dimensional data analysis. Stat Methods Med Res 2020; 30: 687–701. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Rahnenführer J, De Bin R, Benner A, et al. Statistical analysis of high-dimensional biomedical data: a gentle introduction to analytical goals, common approaches and challenges. BMC Med 2023; 21: 182. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Ayesha S, Hanif MK, Talib R. Overview and comparative study of dimensionality reduction techniques for high dimensional data. Inf Fusion 2020; 59: 44–58. [Google Scholar]
- 6.Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc: Ser B (Methodol) 1996; 58: 267–288. [Google Scholar]
- 7.Zou H, Hastie T. Regularization and variable selection via the elastic net. J R Stat Soc Ser B: Stat Methodol 2005; 67: 301–320. [Google Scholar]
- 8.Efron B, Hastie T, Johnstone I, et al. Least angle regression. Ann Stat 2004; 32: 407–499. [Google Scholar]
- 9.Fan J, Lv J. Nonconcave penalized likelihood with NP-dimensionality. IEEE Trans Inf Theory 2011; 57: 5467–5484. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.O’Hara RB, Sillanpää MJ. A review of Bayesian variable selection methods: what, how and which. Bayesian Anal 2009; 4: 85–117. [Google Scholar]
- 11.Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J R Stat Soc: Ser B (Methodol) 1995; 57: 289–300. [Google Scholar]
- 12.Berk R, Brown L, Buja A, et al. Valid post-selection inference. Ann Stat 2013; 41: 802–837. [Google Scholar]
- 13.Meinshausen N, Bühlmann P. Stability selection. J R Stat Soc Ser B: Stat Methodol 2010; 72: 417–473. [Google Scholar]
- 14.Lee JD, Sun DL, Sun Y, et al. Exact post-selection inference, with application to the lasso. Ann Stat 2016; 44: 907–927. [Google Scholar]
- 15.Rügamer D, Baumann PF, Greven S. Selective inference for additive and linear mixed models. Comput Stat Data Anal 2022; 167: 107350. [Google Scholar]
- 16.Barber RF, Candès EJ. Controlling the false discovery rate via knockoffs. Ann Stat 2015; 43: 2055–2085. [Google Scholar]
- 17.Candès E, Fan Y, Janson L, et al. Panning for gold: ‘model-x’ knockoffs for high dimensional controlled variable selection. J R Stat Soc Ser B: Stat Methodol 2018; 80: 551–577. [Google Scholar]
- 18.Xing X, Zhao Z, Liu JS. Controlling false discovery rate using Gaussian mirrors. J Am Stat Assoc 2021; 118: 222–241. [Google Scholar]
- 19.Dai C, Lin B, Xing X, et al. False discovery rate control via data splitting. J Am Stat Assoc 2022; 118: 2503–2520. [Google Scholar]
- 20.Rasines DG, Young GA. Splitting strategies for post-selection inference. Biometrika 2022; 110: 597–614. [Google Scholar]
- 21.Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Ann Stat 2001; 29: 1165–1188. [Google Scholar]
- 22.Bishop CM. Pattern recognition and machine learning. New York, NY: Springer, 2006. [Google Scholar]
- 23.Bezanson J, Edelman A, Karpinski S, et al. Julia: a fresh approach to numerical computing. SIAM Rev 2017; 59: 65–98. [Google Scholar]
- 24.Ulven SM, Leder L, Elind E, et al. Exchanging a few commercial, regularly consumed food items with improved fat quality reduces total cholesterol and LDL-cholesterol: a double-blind, randomised controlled trial. Br J Nutr 2016; 116: 1383–1393. [DOI] [PubMed] [Google Scholar]
- 25.Leiner J, Duan B, Wasserman L, et al. Data fission: splitting a single data point. J Am Stat Assoc 2023; 1–22. DOI: 10.1080/01621459.2023.2270748. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Dai C, Lin B, Xing X, et al. A scale-free approach for false discovery rate control in generalized linear models. J Am Stat Assoc 2023; 118: 1551–1565. [DOI] [PMC free article] [PubMed] [Google Scholar]